Making Security Rewarding

One of the problems with the field of Information Security, particularly secure application development, is that it is not inherently rewarding. Development feels rewarding when you add new functionality, or make existing processes easier to use or more efficient; security work rarely offers that kind of payoff.

The really dangerous part of this lack of reward is that hacking is inherently rewarding for those with that mindset. I've told co-workers many times that if I didn't have concerns about the legality or morality of it (we won't get into whether Robin Hood was virtuous or not), I would probably want to be a hacker. In the development world, or even in support of one, you're driven by project deadlines and by the feature sets that users covet. Security is (generally) an afterthought. In the black hat world, unless you're being directly paid to produce something, you're driven by your own creativity and your ability to come up with an effective attack, to pull it off without being caught, and to do so before everybody else does.

To me, the second sounds very rewarding. In the former, you're paid to implement somebody else's creativity, and on their timeline. In the latter, you're driven by the good idea: new and crazy ideas are rewarded, not dismissed.

So how do we apply this kind of reward to security? People liken vulnerability assessment and code review to insurance: you don't know how valuable it is until it's needed. The problem with that analogy is that when you do need insurance, you know precisely how valuable it was. With information security, you really don't know how many dollars you're saving, because you're preventing the breach in the first place. How much money do eating right and exercising save you in the long run? It's hard to measure, but proponents of the behavior would basically say "a lot".

In vulnerability assessment, you get glimpses of that great reward. You can tell your colleagues "I totally pwned this site," and provided you have a contract that releases you from liability for damages because you were performing a sanctioned security test, you can be confident that you probably did it for the right reasons.

How do we apply that to developers? Can we tie their salaries to flaws found in code review? Can we tie engineers' Christmas bonuses to defects found during an ethical hack? Can we dock developers' pay when their work ends up getting hacked?

It's hard to say that I entirely disagree with the functionality-versus-security argument. Security doesn't make money; it has the potential to save money, and until you're hacked, you don't know how much. Even if you prevent a hack, you still can't say how much money you've saved. And it's hard to hang a likelihood number on any particular threat, because that metric depends on a billion factors, including the economy and human emotion.

While we try to bake security into the entire development lifecycle, we need to come up with creative ways to make a practice that doesn't improve efficiency or add fancy functionality rewarding just the same.