What Can We Learn From Sports?
A member of a local coaching staff was recently let go. To argue that he shouldn't have been fired, he produced stats showing that his portion of the team was as effective this year as it had been in a previously very successful season.
IBM, as a sponsor of the NBA, gives out the "IBM Award" each year, based on a formula of really common statistics meant to identify the individual player who made the greatest contribution to his team. The formula I've seen is really simple, but I remember when IBM was really pushing their sponsorship (I believe it was 2000-2001, when Tim Duncan won it), they also published some really convoluted stats that didn't go toward an award - things like "Field Goals Attempted in the 4th quarter of a game the team trailed after the 3rd quarter where the team actually won."
Baseball is the king of stats. There are a million different ways to slice and dice the actions of the game into (somewhat) meaningful numbers.
American football has some really interesting stats that help oddsmakers set the line on a game. The most interesting ones are environmental - remember when Tampa Bay had never won a game with the temperature below 40°F/4°C? And does anybody really understand the quarterback rating?
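As it turns out, the passer rating really is just arithmetic, even if nobody can recite it. Here's a quick sketch in Python of the published NFL formula (the function name and the clamping helper are mine):

def passer_rating(completions, attempts, yards, touchdowns, interceptions):
    # Each component is capped between 0 and 2.375 per the NFL formula.
    clamp = lambda x: max(0.0, min(x, 2.375))
    a = clamp(((completions / attempts) - 0.3) * 5)       # completion percentage
    b = clamp(((yards / attempts) - 3) * 0.25)            # yards per attempt
    c = clamp((touchdowns / attempts) * 20)               # touchdown rate
    d = clamp(2.375 - ((interceptions / attempts) * 25))  # interception rate
    return ((a + b + c + d) / 6) * 100

# A "perfect" game maxes out at 158.3.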
Even football (soccer to Americans) has some interesting stats, even though the game is so simple to understand. There aren't yard lines (except the 18 and the 6, and the penalty spot at 12 yards), but it's amazing what kind of stats they can come up with for a game where there seems to be just one thing going on the whole time.
So why all this talk about sports? If a member of a coaching staff can statistically prove he was no worse this year than a few years ago, why is it so difficult for us to come up with meaningful statistics? I understand there is an OWASP Project to Define Common Metrics, which for a very long time was a very quiet mailing list. But how do we measure security in meaningful ways - not only a single assessment of an application at a single point in time, but one vendor against another, or an application measured against its previous results, including new threat vectors found in the wild? And the real holy grail - how much money or how many customers do we stand to lose if a particular finding sticks around?
I've had this same rant over and over and over. And believe me - my primary goal in documenting findings is documenting them in such a way that the individual problems get fixed. But management makes decisions based on metrics. Managers deal with many, many, many facets of production, so they need things abstracted. They couldn't care less how many reflected XSS flaws there are versus persistent - they want to know how much money they're going to lose if they don't fix it, and how much it's going to cost them to fix it. (See Applying the Formula.)
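To make that concrete, here's a minimal sketch of the comparison management actually wants - a classic annualized-loss-expectancy calculation, not necessarily the formula from that post, and with numbers that are purely made up:

def annualized_loss_expectancy(single_loss_expectancy, annual_rate_of_occurrence):
    # Expected loss per incident times expected incidents per year.
    return single_loss_expectancy * annual_rate_of_occurrence

ale = annualized_loss_expectancy(250000, 0.2)  # $250k per incident, once every 5 years (assumed)
cost_to_fix = 40000                            # remediation estimate (assumed)
print("Expected annual loss: $%.0f, cost to fix: $%.0f" % (ale, cost_to_fix))
# If the expected loss comfortably exceeds the cost to fix, the business case writes itself.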
I've tried metrics for comparing applications, like flaws found per use case, but then not every application has proper use-case analysis (/sigh). And measuring an application over time means new threats will be involved in the new test - so an application reviewed a year ago, with NO changes, will actually have MORE findings this year, because new ways to hack have been documented.
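Here's a rough sketch of the normalization I keep reaching for - flaws per use case, plus a count that ignores techniques published after the earlier assessment so the trend line isn't inflated by new research. The field names and dates are illustrative assumptions:

from datetime import date

def flaws_per_use_case(findings, use_case_count):
    # Normalize finding counts by the size of the application's use-case model.
    return len(findings) / use_case_count if use_case_count else float("nan")

def comparable_finding_count(findings, baseline_date):
    # Only count findings whose attack technique was public at the time of the
    # baseline assessment, so "new ways to hack" don't skew year-over-year numbers.
    return sum(1 for f in findings if f["technique_published"] <= baseline_date)

findings = [
    {"name": "Reflected XSS", "technique_published": date(2000, 2, 1)},
    {"name": "HTTP response splitting", "technique_published": date(2004, 3, 1)},
]
print(flaws_per_use_case(findings, 12))
print(comparable_finding_count(findings, date(2003, 1, 1)))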
I love sports - particularly because of the statistics they use. The basis of their statistics, though, is that they document so many events. I think the starting point for us is to start documenting more than the vulnerabilities themselves, so that maybe we can start to put things together.
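As a starting point, even a slightly richer record per finding would do - the fields below are just my guesses at what's worth capturing, not any standard:

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Finding:
    application: str
    category: str               # e.g. "Reflected XSS"
    use_case: str               # which use case exposed it
    found_on: date
    technique_published: date   # when the attack technique became public
    time_to_fix_days: Optional[int] = None
    retest_passed: Optional[bool] = None
    notes: str = ""

# With enough of these on record, flaws per use case, mean time to fix, and
# honest year-over-year comparisons fall out of simple queries.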