Because I'm all about the "good enough."

Wednesday, December 7, 2011

Baby, it's Veracode outside ...

Just read Veracode's chilling new State of Software Security Report, Volume 4 (I'm just waiting for the Greatest Hits to come out), and it's pretty depressing.  Among those organizations that use Veracode in any capacity -- testing their own applications or someone else's -- things haven't gotten all that much better.

As I've said once or twice in my talks, I like to learn about how security goes wrong; how the best-laid plans of CISOs gang aft agley.  One of my favorite German words is Sollbruchstelle:  the place where something is supposed to break or tear, such as a perforation.  And as my lovely and talented spouse points out, "Sollbruchstelle heißt nicht unbedingt Wollbruchstelle" -- just because something is supposed to break at a particular place doesn't mean it will.  For this reason, I'm interested in other data from reports like Veracode's and those of other vendors.  Why are we not seeing more progress in securing software?  Is it really just a matter of awareness and education, or is it something more?

Reading between the lines of the Veracode report, we see the statistics on the number of applications that never get resubmitted for testing, but not much explanation as to why they didn't.  The authors seem a little puzzled by this, but it makes a lot of sense to me: the applications probably never got resubmitted because their flaws haven't been fixed yet.  I'd love to see longer-term data for those enterprises: which fixes got made quickly, which ones took longer, and which ones were pretty much abandoned.
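If that data existed, the analysis itself would be the easy part.  Here's a minimal sketch in Python of the kind of triage I have in mind, bucketing each flaw by how long it took to fix, or whether it apparently never got fixed at all.  The apps, dates, and field names are all invented for illustration and have nothing to do with Veracode's actual data.

```python
from collections import Counter
from datetime import date

# Hypothetical longitudinal records, one per flaw: the date it was first
# reported and the date it was verified fixed (None = never fixed).
# Everything here is made up for illustration.
findings = [
    {"app": "billing", "reported": date(2011, 1, 10), "fixed": date(2011, 1, 24)},
    {"app": "billing", "reported": date(2011, 1, 10), "fixed": date(2011, 6, 2)},
    {"app": "portal",  "reported": date(2011, 3, 5),  "fixed": None},
]

def bucket(finding, as_of=date(2011, 12, 7)):
    """Crude triage: fixed quickly, fixed eventually, still open, or apparently abandoned."""
    if finding["fixed"] is None:
        return "abandoned" if (as_of - finding["reported"]).days > 180 else "open"
    days = (finding["fixed"] - finding["reported"]).days
    return "quick" if days <= 30 else "slow"

print(Counter(bucket(f) for f in findings))
# Counter({'quick': 1, 'slow': 1, 'abandoned': 1})
```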

I tried doing a meta-analysis at one point across the various "state of the state" reports for application security to see whether there was a large difference in findings between dynamic and static testing.  The effort sort of fell apart for a couple of reasons: one, not all reports described the data in the same way, and two, vendors are increasingly melding static with dynamic, both in their tools and in their revenue streams.  In the few places where I was able to do a one-to-one comparison (as best as I understood the language in each report), the statistics from static analysis tools were all very similar to one another, and the dynamic ones were likewise similar to each other for a common set of vulnerabilities; but there were marked gaps between the two types of testing results.
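For what it's worth, the arithmetic of such a comparison is trivial once the categories line up; getting the reports to line up was the hard part.  A toy sketch of the side-by-side I was after, with placeholder percentages and categories I made up rather than took from any actual report:

```python
# Toy side-by-side of the kind I was attempting: percentage of applications
# affected per vulnerability class, static vs. dynamic.  These numbers are
# placeholders, not figures from Veracode's or anyone else's report.
static_prevalence  = {"XSS": 0.50, "SQL injection": 0.30, "CRLF injection": 0.40}
dynamic_prevalence = {"XSS": 0.65, "SQL injection": 0.20, "Info leakage": 0.55}

# Only categories both testing types report on can be compared one-to-one.
shared = static_prevalence.keys() & dynamic_prevalence.keys()
for category in sorted(shared):
    gap = dynamic_prevalence[category] - static_prevalence[category]
    print(f"{category}: static {static_prevalence[category]:.0%}, "
          f"dynamic {dynamic_prevalence[category]:.0%}, gap {gap:+.0%}")
```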

Of course we know that static analysis and dynamic testing are two different beasts; that's no surprise.  But I'll be very interested to see how the two are bridged going forward.  I think there needs to be some sort of translation between the two before a significant amount of correlation can be done; Dinis Cruz thinks it ought to be an abstraction layer.  That's a good idea, but before the data can be abstracted, it needs to be normalized.  We can have all the visibility we like into how the application is functioning in real time, but unless we can describe all the testing with the same words, I don't think we're going to achieve enough understanding of the problem space.  (Yeah, I'm getting my Wittgenstein on.)
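To make that concrete, here's a minimal sketch of what I mean by normalization, assuming (purely for illustration) that each tool's own category names get mapped onto CWE identifiers before anyone tries to correlate or abstract anything.  The tool categories, field names, and mapping table below are all hypothetical.

```python
# One way to "describe all the testing with the same words": normalize each
# tool's finding into a common record keyed on CWE.  The category names,
# field names, and mapping table here are hypothetical.
from dataclasses import dataclass

@dataclass
class NormalizedFinding:
    cwe_id: int          # common vocabulary: CWE identifier
    location: str        # file/line for static findings, URL/parameter for dynamic
    source: str          # "static" or "dynamic"

# Hypothetical mapping from each tool's own category names to CWE IDs.
CATEGORY_TO_CWE = {
    ("static",  "Cross-Site Scripting"): 79,
    ("dynamic", "Reflected XSS"):        79,
    ("static",  "SQL Injection"):        89,
}

def normalize(source, raw):
    return NormalizedFinding(
        cwe_id=CATEGORY_TO_CWE[(source, raw["category"])],
        location=raw["where"],
        source=source,
    )

static_hit  = normalize("static",  {"category": "Cross-Site Scripting", "where": "search.jsp:42"})
dynamic_hit = normalize("dynamic", {"category": "Reflected XSS", "where": "/search?q="})
print(static_hit.cwe_id == dynamic_hit.cwe_id)  # True: same weakness, two vantage points
```

The point isn't the mapping table; it's that once both kinds of findings speak CWE, a static hit and a dynamic hit on the same weakness become comparable at all, and any abstraction layer built on top has something consistent to abstract.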

So kudos to Veracode for adding some more to the shared knowledge out there.  It's clear that we have more work to do, and only outcome data will really point us in the right direction.  We need to understand why and where things break before we can make sustainable progress.