Okay, I've not gone off my rocker. It's almost true.
When performing an application test, you might not find every single instance of a particular vulnerability. Due to time, tool, or other resource constraints, some things simply slip through the cracks. A common response is to enumerate all the instances that were found, with a great big note to the developers that this is a pervasive issue and that a global policy needs to be adopted to fix them all. And of course, there's always the pushback: "we'll fix exactly the instances you find."
This is where I think "old school" static analysis far outshines the newfangled static analysis engines. With a really good developer, grep, and a hammer, fixing semantic flaws really comes down to a few short steps:
- Identify the common idioms used that result in "bad things". These will differ from environment to environment, which is why you have the sharp developer. Some examples:
  - `<%= %>` tags
  - SQL queries built by string concatenation
- Grep the entire source tree for those idioms.
- Replace. The examples above become:
  - `<c:out>` tags (in Java. And I realize there are other things you need to do to make that work)
  - Connection.prepareStatement() and PreparedStatement.set...()
- And when an idiom has no safe replacement at all? Get rid of it.
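To make the second replacement concrete, here's a minimal sketch of what the before and after look like in JDBC. The table and column names are invented for illustration, and the "safe" version assumes you already have a Connection from wherever your app gets one:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class QueryIdioms {

    // The greppable idiom: SQL assembled by string concatenation.
    // Any quote in the input breaks out of the string literal.
    static String vulnerable(String user) {
        return "SELECT * FROM accounts WHERE name = '" + user + "'";
    }

    // The replacement: Connection.prepareStatement() with a ?
    // placeholder, bound via PreparedStatement.setString(). The
    // driver handles quoting, so the input can never alter the query.
    static PreparedStatement safe(Connection conn, String user)
            throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT * FROM accounts WHERE name = ?");
        ps.setString(1, user);
        return ps;
    }

    public static void main(String[] args) {
        // Note the broken quoting the concatenated version produces:
        System.out.println(vulnerable("o'brien"));
    }
}
```

The nice part is that the vulnerable idiom is trivially greppable (search for `"` adjacent to `+` near your query-execution calls), while the fixed form is just as greppable, so the same sweep that finds the problem can verify the fix landed everywhere.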