A Non-Technical Rant

First, if you're looking for a post that is exemplary in its technical merit, this isn't it.

Second, my apologies for the long silence. I've been working on exciting stuff, but that's no excuse for posting nothing for such a long stretch.

Now for our regularly-scheduled complaint...

I received some news today that was very disturbing. What bothered me was not that the sender disagreed with me - I'm not bothered by disagreement. I tend to embrace being a bit different.

But what bothered me was a statement at the end of the message: the team I was working with needed to spend more time researching ways to make defensive coding completely invisible to developers. The people who made this statement were way smarter than me, so I must be completely in the dark here.

Defensive coding should be automatic, yes. Invisible? Never. Like shielding your children from every germ that might come their way, only to find they spend the rest of their lives sick, these people think the way to get more defensive code is to make it totally invisible to developers. Train developers less on defensive programming, so that when new attack vectors come out, the only people who can rescue them are security professionals. (Security professionals - what's our track record so far when we do things our way?)

So the gist of the statement was this - developers are too stupid to write defensive code. Functional code is no problem, but defensive code is serious business that we need to leave to the security professionals.

You trust developers to handle your income tax returns, but don't trust them to use prepared statements on purpose?

You trust developers to navigate the space shuttle, but doing some proper input validation is too complex for them?

You trust developers with sensitive health care information, but think that they're too dull to properly obfuscate it when logging?

See, this is exactly what the problem is right now. We security people think that everybody else is too stupid to get it, and that we have to rescue them. We can't possibly hold people accountable for gaining new knowledge? Look - the security vulnerabilities we find today aren't new. People had to program defensively long before there was a security industry. But now that there's a security industry, the best thing we can do is make writing good code transparent? Pshaw! If the coders aren't doing things right, raise your standards. If you are or know a programmer, you know the way to motivate them - tell them it can't be done. That will show you who the real programmers are. (But I'm still convinced that a decent blacklist simply cannot be written.)

A previous group I worked with used a set of tools I had written as a benchmark for hiring people. Not that what I wrote was rocket science - it just took some time, reading, and mostly tinkering. Nobody we hired cracked the nut in a reasonable amount of time, but we weren't looking for people who could figure it out in an hour. The people we hired were the ones who tried - who weren't scared of a challenge, who were convinced it could be done, and who were determined to find a way to do it.

People with that mindset do exist. They are the real programmers. And they're not so stupid that you have to make everything invisible to them. They're smart enough that once you make the need clear to them, they will do the right thing automatically - because it's the right thing. They're the ones who will program defensively when there aren't automated tools around making their code defensive without their knowing it.



50 Ways to Inject Your SQL


If I had to rate the video, it'd be an A+ on musicianship (I mean - who can't get an A+ for a parody of a Paul Simon song?), an A+ on lyrics (that's not the easiest song in the world to write new lyrics to), and a B+ on technology - only because with a video that short, it's really difficult to demonstrate receiving the results out of band - in PDFs, images, or by forcing the database server to do DNS lookups and logging the DNS events. But it's still good fun.


Building Security In Maturity Model


It's no secret I'm a big fan of the work that Gary McGraw, Brian Chess, and Sammy Migues have done on the Building Security In Maturity Model (BSIMM). The idea of any maturity model is to have a set of criteria that demonstrate how maturely a process is being run. While it's not an exact science, it is a pretty good comparative model, and for any process that is not fully matured or innovating, it gives some idea of the "next steps" for improving the process. The good news about the BSIMM is that they didn't just make up the criteria for the model - they worked with ten companies from several industries to get a picture of the mature practices that are out there.

One of the great things about the results of their research is that they've released the results under the Creative Commons Attribution - Share Alike 3.0 License. But it's also good to know that the run-time testing is not limited to penetration testing by the software security group, but includes fuzzing and failure or abuse cases in QA.

Anyway, I can't provide any more valuable information than what's in the model itself. So go take a look.


Dealing with SQL Injection Part I


It turns out the new cool way of spreading malware is by SQL Injection. SQL Injection is also my favorite way of getting almost every piece of data available in the application. But I'm a solutions guy, and this is a solutions blog, so let's learn how to deal with it.

As Jeremiah said in his excellent post on the subject, one way (not necessarily the best way) of finding injection points is outside-in penetration testing. Trying particular sets of metacharacters might yield a database error, and you can often tell whether the application or the database is producing the error. There are a couple of flaws with this approach: 1) programmers know errors are bad and often do what they can to hide them, so you might not get adequate information to formulate a good attack. 2) Black-box penetration testing will only find a subset of the problems. If a particular injection is only exploitable on a Tuesday, for instance, and it's never exercised on a Tuesday, it's never found. (These are called "corner cases".)

The best place to fix SQL Injection vulnerabilities is in the source code. And if you have access to the source code, the absolute best method of finding SQL Injection points is source code analysis - preferably a combination of automated static analysis tools and manual review. Static analysis tools find injection points by marking data that enters the application as tainted or untrusted. The more capable tools can trace the taint through the application - through API calls, assignments, etc. - until the data reaches a sink where untrusted data is dangerous. SQL Injections happen when untrusted data is concatenated into the text of a statement or query. Example APIs include DbCommand.CommandText (and children) in .NET, or Statement.execute* and Connection.prepareStatement in JDBC.

Here's a quick example from a web application:

Connection conn = DBUtil.getConnection();
Statement stmt = conn.createStatement();
// Vulnerable: user input is concatenated directly into the query text
String sql = "SELECT id, surname, givenName FROM person WHERE surname = '"
  + request.getParameter("surname")
  + "'";
ResultSet rs = stmt.executeQuery(sql);

There are two things that need to be corrected in the code above:
  • Rather than using a concatenated query, we should use a prepared statement - parameterized query, bind variables, whatever you want to call it. This lets the database driver properly escape and transmit data to the query. For repeated queries, it also substantially improves performance.
  • Some sort of input validation needs to take place on the request parameter "surname" to verify that it matches legal surname syntax.

Here's the improved code using the first item:

Connection conn = DBUtil.getConnection();
String sql = "SELECT id, surname, givenName FROM person WHERE surname = ?";
PreparedStatement stmt = conn.prepareStatement(sql);
stmt.setString(1, request.getParameter("surname"));
ResultSet rs = stmt.executeQuery();

It's important to note that using PreparedStatement doesn't automatically make the code immune. You have to use it properly, with bind variables. The syntax differs from driver to driver and RDBMS to RDBMS, but question marks are pretty common. Not everything can be bound, however. Only constant values can be bound, so if you're depending on the user to provide table or column names, you need to verify the input is exactly one of the allowed values, or use a lookup such as a map from integers to column names. Also, LIKE statements seem to cause people grief - you need to concatenate the percent signs to the bind variable, as in "WHERE foo LIKE '%' + ? + '%'" or "WHERE foo LIKE CONCAT('%',?,'%')".
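Here's a sketch of those two work-arounds. The class, map contents, and column names are hypothetical, not from any particular application; and as an alternative to concatenating percent signs in the SQL, this version builds the LIKE pattern in application code and binds the whole pattern, which achieves the same thing:

```java
import java.util.Map;

public class QueryHelpers {
    // Allowlist lookup: a user-supplied choice maps to a hardcoded
    // identifier. Unknown choices are rejected, so no identifier from
    // the request ever reaches the SQL text.
    private static final Map<String, String> SORT_COLUMNS =
        Map.of("1", "surname", "2", "givenName");

    public static String resolveSortColumn(String choice) {
        String column = SORT_COLUMNS.get(choice);
        if (column == null) {
            throw new IllegalArgumentException("unknown sort column: " + choice);
        }
        return column;
    }

    // LIKE work-around: wrap the raw value in wildcards and bind the
    // whole pattern, so the driver still escapes the user's text.
    public static String likePattern(String search) {
        return "%" + search + "%";
    }

    // Usage (sketch):
    //   String sql = "SELECT id, surname, givenName FROM person"
    //       + " WHERE surname LIKE ? ORDER BY "
    //       + QueryHelpers.resolveSortColumn(request.getParameter("sort"));
    //   stmt.setString(1, QueryHelpers.likePattern(request.getParameter("surname")));
}
```

Either way, the point is the same: the user's input only ever selects among values you wrote, or travels through a bind variable.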

As above, the other thing that needs to happen is input validation. I picked surname on purpose, because the most common SQL metacharacter used for injection is the apostrophe - and an apostrophe can be perfectly legal in a surname. That makes it a perfect example of why prepared statements work so well: you don't have to ban legitimate apostrophes to stay safe. Here's an updated version that accepts fairly complex US-ASCII surnames, including apostrophes, but still gets them safely to the query.

String search = request.getParameter("surname");
if (search == null || !search.matches("^([A-Za-z][a-z]*[' ])?[A-Z][a-z]+(-([A-Za-z][a-z]*[' ])?[A-Z][a-z]+)?$")) {
  throw new IllegalArgumentException("Invalid surname");
}
Connection conn = DBUtil.getConnection();
String sql = "SELECT id, surname, givenName FROM person WHERE surname = ?";
PreparedStatement stmt = conn.prepareStatement(sql);
stmt.setString(1, search);
ResultSet rs = stmt.executeQuery();

Also, it's worth noting that simply using stored procedures does not remove SQL Injection vulnerabilities. There are two ways that injection can still take place: either through the way the statement is called from the code (an EXEC with user data concatenated into it), or because the stored procedure itself uses concatenation to build a dynamic query. In the latter case, all you've done is move the injection attack into the stored procedure, effectively using the prepared statement to safely transport the attack there.
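To make the first failure mode concrete, here's a minimal sketch (the find_person procedure name is hypothetical): the unsafe style concatenates user data into the EXEC text, while the safe style keeps the call text constant and binds the data, e.g. through JDBC's CallableStatement:

```java
public class ProcCalls {
    // UNSAFE: user data is concatenated into the EXEC text. Even when
    // this string is sent via a prepared statement, the injection rides
    // along inside it.
    public static String unsafeExec(String surname) {
        return "EXEC find_person '" + surname + "'";
    }

    // SAFE: the call text is a constant; user data travels separately
    // as a bound parameter:
    //   CallableStatement cs = conn.prepareCall(ProcCalls.safeCallText());
    //   cs.setString(1, request.getParameter("surname"));
    public static String safeCallText() {
        return "{call find_person(?)}";
    }
}
```

And even with the safe call style, if find_person itself concatenates its argument into a dynamic query, the injection just happens one layer down - the procedure body needs the same discipline.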



Can you smell that? sssnnnnnnifffffff..... Aahhhh yes. It's that time of year. Yeah, regardless of what Punxsutawney Phil might have had to say, it's springtime! (You folks in the colder parts of the Northern Hemisphere that won't thaw until June will just have to bear with the analogy - sorry). Yeah - it's the time of year when we clean up and clean out. Black Hat is about to start their rounds, ShmooCon just wrapped up, and all the new sales pitches start.

But I think this spring is bound to be a much more delightful one. I've become somewhat disgruntled with the AppSec industry because for a couple of years, we've been focused not on app security, but app insecurity. I think we stopped short. I personally think that finding weaknesses is only good if you have one of two end goals in mind: either you intend to exploit the weakness for fun and profit (or friends), or you identify it so that you can fix it. For a couple of years, the industry has grown rapidly, but sadly, mostly to the end that we're getting really good at identifying weaknesses while leaving developers with no indication whatsoever of what to do about them. With no solutions, we've left our developers with two options: give up and cross their fingers hoping the bad guys never find out, or give up and pull the plug on the project.

But there are lots of things going on in AppSec right now which are very promising for those of us who actually want things to get better:

  • Gartner is releasing a Magic Quadrant on Static Analysis tools. While static analysis tools identify weaknesses, they identify them in such a way that developers can actually do something about the problems. (Please don't read that the wrong way. I'm not arguing that static analysis is "better" than blackbox/greybox testing. Lots of other people have those fights. Not for me). This is great news because an industry researcher has put a lot of effort into finding out from the industry what the best tools are in particular areas.
  • WhiteHat Security is providing WAF integration as one of their many services. While WAF's are not a silver bullet, when you don't have the source code, or a lot of cycles to fix issues, WAF's are a good stand-up solution to many semantic types of flaws, and a few logical ones, too. It's a step in the right direction.
  • Gary McGraw, Brian Chess, and Sammy Migues spent a great deal of time working with businesses learning about how they're dealing with application security, and have put together a Software Security Maturity Model to help businesses identify where they are in terms of baking in security, and where they need to go next. My favorite surprise of their investigation? The one thing that all their subjects said was most important in their program was training. And not just training on identifying weaknesses, but developing solutions.
  • RSnake and others are working with browser vendors on solutions to the whole clickjacking thing. I hate to see the standards-compliant focus of the browsers over the past few years give way to renewed browser wars, but the slow movement of the standards has crippled browser advancements that would help protect users.
  • I'm personally doing a little bit of research on developing securely, and might actually get some work done on the project at some point and have a paper to prove it. But right now, it's all just theoretical.

If you're new to the blog, I'm passionate about fixing issues. Certainly, the first step to recovery is admitting you have a problem, but at some point you have to move beyond admitting it and start working on overcoming it.


Twitter Continues to Be Caught With Their Pants Down


flee over at Fortify has an excellent analysis of the recent incidents with Twitter where very high-popularity profiles have been hijacked. The analysis is exceptional, but I have one question:

Does anybody take Twitter seriously? I mean....really?

Yes, certain brands use it as a means to remind deliberate followers that the brand is indeed still alive. In fact, is there really a better way to receive notification of the daily woot? But on the other hand, do followers really trust that Twitter has validated the authenticity of the people sending them tweets?

I suppose that, unfortunately, the answer is yes. Or to be more accurate, I suppose that most users have not really thought about it. In fact, I suppose even a few security-savvy users depend on it, as Bruce Schneier thought it necessary to clarify that @bruceschneier is not Bruce Schneier. (Or at least not the Bruce Schneier who runs schneier.com.) And by not thinking about whether Twitter truly authenticates the source of tweets, users implicitly trust the source.

Expect to see a GPG plugin for laconica soon. (And yes, I do realize how silly of a statement that is.)

And while you're here, be sure to subscribe to the Fortify Blog. Those are all folks who do development and security, and do them both pretty well.

New Year Rundown

I've been away from the blog for a bit lately, mostly because of work on a couple of projects that have not necessarily taken all my free time, but they've not lent themselves cleanly to a bloggable idea.

For security experts, there's no real news here. For developers, some of these tidbits may be of interest.

  • There's a big debate about whether pen-testing is dead, dying, evolving, or thriving. If you really want my opinion on the matter, pen testing is no less important than it has ever been. However, I think that enterprises need to shift focus from picking their jaw up off the floor to seeing how they can actually correct the issues. Pen testing will always find flaws that no other form of testing can. And it can provide good "shock value". But pen testing doesn't solve problems. For those who have pen testers in the enterprise, they need to understand that the pen test is not the end game - more defensive applications are the goal. In fact, I don't think the focus-shift that's necessary reduces the need for pen-testing, but rather increases the need for good pen-testers with good documentation ability.
  • Gary McGraw and Brian Chess did a superb job culling metrics from nine different enterprises to develop a maturity model for where enterprises are in their application security programs. As you've seen me say before, it's time for us to get the information that's in the security experts' hands and properly translate it to developers who are responsible for fixing findings. See the report here.
  • Every so often I get to work on another scenario for clickjacking. It really, really scares me. Certainly, the victim audience would be somewhat smaller than the general XSRF audience, but to think that you can use clickjacking to work around XSRF protections (including CAPTCHA) is frightening. If you're in control of it, make sure your site has some framebusting code - and even that's not 100% effective, considering IE can force an iframe to load in a particular security zone.
  • Regarding the MD5-collision-fake-CA attack demonstrated at 25C3: It's a very cool attack. And it completely negates the purpose of CA's - to validate the authenticity of a site. But who are the victims in a successful attack? The victims would be the people who are susceptible to phishing scams or DNS poisoning, but actually verify the authenticity of certificates. Sure, the browser will be fooled, but even without properly faking a CA, how many people who fall for phishing attacks don't click "Yes, I really want to do this foolish thing" when the browser warns them?
  • Regarding twishing: nifty idea. I wish it were mine. You can actually see the day people got twished by the sudden change in their posts that contain URL's. So why is twishing more severe than making thousands of fake Twitter accounts? There are two reasons: 1) Twitter promptly disables ID's it deems to be automated spam accounts, while normal user accounts that are suddenly used for sending spam will pass more heuristics for legitimate use, and 2) real people already follow (and trust) the real users who got twished.
  • Attackers are continuing to use open redirects in Flash for new phishing attacks. Yes, just like Java applets or ActiveX controls, your Flash movies need to be analyzed for potential security flaws. Although there are lots of nifty attacks involving Flash, the simplest are still the most effective.

And...oh yeah! Happy New Year! It's a delightful time to be in the security industry, and a very good time to be a developer who understands defensive programming.