@H1tM0nk3y - when availability makes it possible to stick around at a client for the full development life-cycle of a product, I'm all for it. However, I'll quote: "Would you rather push out the next release or spend time patching the current one?" (Rev. Bill Blunden - The Rootkit Arsenal).
Companies don't care to spend on Fortify, Klocwork, beSTORM, etc., and even if they did, most of the time the developers won't get it, and even if they *do* get it, they're often under tight deadlines to push out the "next release."
There are plenty of instances I could quote to prove the point, but I'll choose one, the talk of the town right now: Tavis Ormandy's Help Center disclosure (http://seclists.org/fulldisclosure/2010/Jun/205). To be outright blunt, many people have failed to look at the reality of it all. They don't care to; it doesn't mean anything to them, and they'd rather point the finger for their own issues than fix them:
How can a company keep making the same repetitive mistakes? It's pure negligence, and it shows the lack of investment in security in the SDLC. So it's one thing (wishful thinking) to have the luxury of implementing security controls at the development phase (phase 2 of the SDLC), and it's completely another to implement them in the initiation phase (phase 1 of the SDLC) (ref: http://csrc.nist.gov/groups/SMA/sdlc/index.html). As it stands right now, bringing in a pentester from the ground up (phase 1) would be a waste of time. At phase 2 it would be a waste of time as well; in fact, until it's a product, it's a waste of time. This does not mean a company should have its workers tapping away at the keyboards, releasing whatever it is they're producing "right here, right now," then coming back after it's deployed to find holes.
At the initiation phase, programmers, project managers, etc., need to think outside of the frameworks and step into reality:

Current reality:
PM: "We're making a program that will allow people to chat with each other"
Developer: "We can make it transfer files and send icons!"
PM: "We have two months to get this done"

Ideal reality:
PM: "We're making a program that will allow people to chat with each other"
Developer: "We can make it transfer files and send icons; however, we need to be careful to avoid having people spoof or inject code"
Other Developer with Security Experience: "Definitely... We need to test the code along the way with protocol fuzzers, application fault injection programs, etc., to make sure no one steals or subverts the application"
PM: "You're right, the last thing we need is a corporation being compromised because we didn't check. We could look like fools and lose $N amount of money"
PM: "Other_Developer, work with the developers to make sure we put out a rock-solid program. We have two months to get it done, and I want further testing even after it's released"
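To make the "protocol fuzzers" suggestion in that dialogue concrete, here's a minimal sketch of dumb mutation fuzzing a developer could run against their own parser during development. Everything here is assumed for illustration: `parse_message` is a hypothetical stand-in for the team's chat-protocol parser, and the wire format (1-byte type, 2-byte length, payload) is invented.

```python
import random

def parse_message(data: bytes) -> dict:
    # Hypothetical stand-in for the chat protocol parser under test.
    # Assumed format: 1-byte type, 2-byte big-endian length, payload.
    if len(data) < 3:
        raise ValueError("message too short")
    msg_type = data[0]
    length = int.from_bytes(data[1:3], "big")
    payload = data[3:3 + length]
    if len(payload) != length:
        raise ValueError("truncated payload")
    return {"type": msg_type, "payload": payload}

def mutate(seed: bytes, rng: random.Random) -> bytes:
    # Flip, insert, or delete a few random bytes of a known-good message.
    data = bytearray(seed)
    for _ in range(rng.randint(1, 4)):
        op = rng.choice(("flip", "insert", "delete"))
        if op == "flip" and data:
            data[rng.randrange(len(data))] ^= rng.randrange(1, 256)
        elif op == "insert":
            data.insert(rng.randrange(len(data) + 1), rng.randrange(256))
        elif op == "delete" and data:
            del data[rng.randrange(len(data))]
    return bytes(data)

def fuzz(seed: bytes, iterations: int = 10_000) -> int:
    # Feed mutated inputs to the parser. A ValueError is a graceful
    # rejection; any other exception is a bug worth fixing before the
    # "next release," not after deployment.
    rng = random.Random(1337)
    crashes = 0
    for _ in range(iterations):
        case = mutate(seed, rng)
        try:
            parse_message(case)
        except ValueError:
            pass          # expected rejection of malformed input
        except Exception:
            crashes += 1  # unexpected failure: a real finding
    return crashes

if __name__ == "__main__":
    seed = bytes([0x01]) + (5).to_bytes(2, "big") + b"hello"
    print("unexpected failures:", fuzz(seed))
```

The point isn't this toy loop itself; it's that a few dozen lines wired into the build at phase 2 costs almost nothing compared to patching the shipped product.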
If you're the "corporate pentester," then it will work for you. For contract work (hired-gun pentesting), however, a company is better off paying to train its developers to understand security, getting it before the product even goes mainstream. It's more cost-effective to fork out, say, $10,000 for your programmers to take courses at places like Immunity, Dino Dai Zovi's courses, or Alex Sotirov's courses, where your developers will come out understanding "security risk" from the programmer's point of view, than it is to fork out millions in "patching the current one."
But alas, reality is what reality is, and companies would rather spend money on deflection, marketing away security holes (http://www.sophos.com/blogs/gc/g/2010/0 ... t-zeroday/), than they would on training. Companies have it backwards and don't care to change this stance (marketing versus training versus implementing security). It's much easier and more cost-effective to spend a couple of thousand on damage control than it is to put out clean code. At the end of the day, though, blame the consumer for continuously buying buggy software and thinking that "they're seeing history" is good news.