20070223

Too Much the Perfectionist

I think I'm too much the perfectionist. I get too worked up about doing things "the right way". In a perfect world:

  • We wouldn't focus only on making the problem go away, but also on removing our own sensitivity to the problem.
  • We wouldn't need application firewalls because there would exist no semantic flaws.
  • We wouldn't need ethical hackers. They're smart and creative, and I like them a lot - I was one. But if the world were perfect we wouldn't need them.
  • Perfect code would come out of user acceptance testing, which would go much quicker without a security review.
  • There'd be no security review (!) during user acceptance testing, because we'd know perfect code came out of Integration Testing.
  • During Integration Testing, we'd actually get to test the integration of the system and its operation under stress.
  • Component Integration would be much faster because the components wouldn't have to be tested for security.
  • All of this because our developers wouldn't need to be taught about security; they'd be well-versed in the fundamentals of writing academically correct code, without flaws, security-related or otherwise.
  • And also because our engineers would engineer secure designs. Flow would be checked between phases of the application, just as geological survey software makes certain that particular requirements have been met before measurements begin. Assertions could be made about the state of the application, and the consistency of those states could be mathematically and logically proven (there's a small sketch of what I mean right after this list).
  • Developers would know to write academically correct code, and engineers would design secure transitions and transactions, because really sharp security practitioners would be involved in engineering from day one - during the imagination phase.
  • The American League wouldn't have a designated hitter.
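By "assertions about state" I mean something as simple as the toy Java sketch below: the only way to change phases is through one choke point that checks the transition is legal. The state names and transition table are invented for illustration; the point is that you can reason about - and prove things about - what state the application can possibly be in at any moment.

    import java.util.EnumMap;
    import java.util.EnumSet;
    import java.util.Map;

    // Toy sketch: an explicit state machine for an order-processing flow.
    // The state names and transition table are invented for illustration.
    public class OrderFlow {

        enum State { CREATED, VALIDATED, AUTHORIZED, SHIPPED }

        // The only legal transitions; anything else is a flaw, full stop.
        private static final Map<State, EnumSet<State>> LEGAL =
            new EnumMap<State, EnumSet<State>>(State.class);
        static {
            LEGAL.put(State.CREATED,    EnumSet.of(State.VALIDATED));
            LEGAL.put(State.VALIDATED,  EnumSet.of(State.AUTHORIZED));
            LEGAL.put(State.AUTHORIZED, EnumSet.of(State.SHIPPED));
            LEGAL.put(State.SHIPPED,    EnumSet.noneOf(State.class));
        }

        private State current = State.CREATED;

        // Every phase change goes through one choke point that asserts legality.
        void transitionTo(State next) {
            if (!LEGAL.get(current).contains(next)) {
                throw new IllegalStateException(current + " -> " + next + " is not allowed");
            }
            current = next;
        }

        public static void main(String[] args) {
            OrderFlow flow = new OrderFlow();
            flow.transitionTo(State.VALIDATED);
            flow.transitionTo(State.AUTHORIZED);
            // new OrderFlow().transitionTo(State.SHIPPED);  // would throw: you can't skip authorization
            System.out.println("Reached state: " + flow.current);
        }
    }

Scale that idea up and you get the kind of provable consistency I'm going on about.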
Now, am I proposing that you don't do all these things? Heck no! But the thing is: the earlier security is involved in the engineering of the application, the better trained your coders are at writing fundamentally flawless code, and the better your engineers are at guaranteeing state and asserting that the state of the application is known at any point, the more secure your applications will be, the less re-work will need to take place, and the faster your ethical hackers will be able to write the report (and the more frustrated they'll feel).

But because our engineers have been raised on a steady diet of trusting the application to behave itself, and because our developers don't understand that users will push the wrong buttons, we really need to do all of the following:
  • Work on making the problem go away.
  • Use ethical hacking at multiple phases of the lifecycle to find logical flaws.
  • Use black-box application scanners throughout the lifecycle to discover semantic flaws in the application.
  • Use application firewalls (yuck!) to make sure that even if the black-box testing didn't catch everything, nothing new gets through.
  • Have security codeheads perform source code analysis to find logical flaws.
  • Have the developers perform static analysis on source code to find semantic flaws (the sketch after this list shows the kind of flaw I mean).
  • Let the American League have their DH.
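For a concrete picture of "semantic flaw", here's a toy Java example - the users table and its columns are made up. The first method glues attacker-controlled input straight into SQL, exactly the kind of thing a static analyzer (or a black-box scanner poking at the running app) should light up; the second binds the input as data, so the flaw never exists in the first place.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    // Illustrative only: the "users" table and its columns are invented for this sketch.
    public class UserLookup {

        // Semantic flaw: user-supplied input concatenated straight into the query.
        ResultSet findUserBroken(Connection conn, String username) throws SQLException {
            Statement stmt = conn.createStatement();
            return stmt.executeQuery(
                "SELECT id, email FROM users WHERE username = '" + username + "'");
        }

        // The academically correct version: the query shape is fixed,
        // the input is bound as data, and there is nothing to inject into.
        ResultSet findUser(Connection conn, String username) throws SQLException {
            PreparedStatement stmt = conn.prepareStatement(
                "SELECT id, email FROM users WHERE username = ?");
            stmt.setString(1, username);
            return stmt.executeQuery();
        }
    }

A decent analyzer should flag the first method without ever running the code; a scanner finds it from the outside by feeding the app a stray quote mark.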
Long story short - just another rant that security flaws aren't only security flaws. They're actually more dangerous-looking versions of flaws that, 30 years ago, computer scientists were trained not to allow to happen. (Not that it always worked then, either.)
