Security practitioners are a large part of the reason developers aren't truly getting better at security. We've told developers to think about security, but because we don't know enough about development, we've reduced the problem to something smaller than it really is. Let me give a few examples, then the results of that...
- Cross-site Scripting - really just an extension of the old problem of unescaped < and > characters. This is also partly the browsers' fault for accepting malformed HTML, but I digress.
- SQL Injection - just an extension of the question "what happens when somebody sends input that mucks up the statement?"
- XSRF - just an extension of the problem we used to solve by adding "Please only submit the form once"
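To make the SQL Injection point concrete, here's a minimal sketch in Python. The table name, schema, and `find_user` helper are all hypothetical, invented for illustration; the point is simply that a parameterized query answers "what happens when somebody mucks up the statement?" by keeping data out of the SQL grammar entirely.

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # Vulnerable version (don't do this): string concatenation lets
    # user input rewrite the statement itself:
    #   "SELECT * FROM users WHERE name = '" + username + "'"
    # Safe version: the ? placeholder binds the input as data only.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

# A classic injection attempt is treated as a literal (non-matching) name:
print(find_user(conn, "alice"))        # (1, 'alice')
print(find_user(conn, "' OR '1'='1"))  # None
```

Notice the fix has nothing security-specific about it; it's just the correct way to pass untrusted values into any statement.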
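And the XSRF point: the single-use form token that once prevented duplicate submissions also defeats forged requests, because an attacker's page can't know the token. A rough sketch, with hypothetical names (`issue_token`, `validate_token`) and an in-memory store standing in for a real session:

```python
import secrets

_issued: set[str] = set()  # stand-in for per-session server-side storage

def issue_token() -> str:
    """Generate a token to embed in the form as a hidden field."""
    token = secrets.token_urlsafe(32)
    _issued.add(token)
    return token

def validate_token(submitted: str) -> bool:
    """Accept a submission only if it carries a token we issued,
    then consume the token so the form can't be submitted twice."""
    if submitted in _issued:
        _issued.remove(submitted)
        return True
    return False

token = issue_token()
print(validate_token(token))  # True  - first submission accepted
print(validate_token(token))  # False - replay or forged request rejected
```

One mechanism, two "problems" solved - which is exactly the point of the list above.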
It's not that those threats aren't real security threats. But the real solutions to those problems are solutions to more general coding problems. As security practitioners, we don't seem to convey that all HTML output should be encoded so that nothing in it can break the validity of the HTML.
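Encoding all HTML output is a one-liner in most languages. A minimal Python sketch (the `render_comment` helper is hypothetical, invented here for illustration):

```python
import html

def render_comment(comment: str) -> str:
    # Encode on output: whatever the user typed becomes inert text,
    # so it cannot break the validity of the surrounding HTML.
    return "<p>" + html.escape(comment) + "</p>"

print(render_comment("<script>alert('xss')</script>"))
# <p>&lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;</p>
```

Do this for every piece of dynamic output, not just the fields a scanner flagged, and the XSS finding disappears as a side effect of writing valid HTML.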
The end result is that we identify the things we deem to be security flaws, and developers "fix" those specific findings to make us go away, but they never really learn to write better code. They don't see the connection between security findings and the underlying solutions to the problem.
Now, security practitioners aren't the only ones at fault:
- Programming books are at fault because they teach you how to write the language, and in focusing on getting functionality right, they completely abandon writing good code.
- Classrooms are at fault because rarely do students take a class on dealing with the unexpected. Most entry-level programmers assume people will use the application just as expected, with no keystroke errors.
- It's the fault of QA testers, because they test the application for functionality and load, but not for how it handles erroneous input.
- It's the fault of timelines (not sure who to really blame that on - customers? managers? CEOs?) because the entire development lifecycle is geared around functionality, not necessarily quality of said functionality.
So the short version of the story is this: when a "security problem" exists, see if you can determine the real cause, so that developers can get into good habits - habits that will be covered in more detail in later posts....