
20071219

Orkut Worm


Of course, everybody who reads this will have heard about the Orkut worm by now, but it brings up the same points as always.

I'm not at all against Web 2.0. I'm not a gloom and doom type. Many of the ideas behind Web 2.0 are very useful and make the web something that it wasn't before. However, it seems that Google is now facing (although on a smaller scale) the same sorts of pain that MySpace had for years.

My personal opinion is that Google should have waited until they could revamp Orkut's content management to match Blogger's before rolling it out: require valid XML/XHTML coming in, validated against a DTD. Of course, I don't use Orkut, so I'm not positive the initial worm didn't meet those requirements, with the real gap being insufficient attribute checking (unfortunately, the available news on the worm is really thin right now). But it appears as though this was a simple <script> tag with an external source. /sigh

For developers, let MySpace and Google be an example to you - you'd better be really certain you know what you're doing before you let users dictate look and feel.

20071001

Allowing Script and HTML Content from Untrusted Sources


I think I've said a billion times that the MySpace model of allowing HTML and/or script is an exception, not a rule. However, it seems the exceptions are getting more and more prominent as businesses are driven to allow dynamic content from their customers in order to help the company's bottom line.

A colleague today asked me (without reading here first - shame, shame) how a company he is helping could allow markup and script, but not allow just any old markup or script. Of course, my first response is "why?", but I keep that to myself. My second response is always very aggressive white-listing of what you believe is acceptable.

I think the colleague has pretty well told their customer that starting with XHTML and a restrictive DTD (or even better, XML Schema or RelaxNG) is the most beneficial starting place. This way, you can rely on the schema and your really excellent processor to define what is and isn't allowed. Granted, you'll have to disallow entity definitions and other features that could enable XML-processing denial of service. But then you're left with a whitelist of the tags available.
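To make that concrete, here's a minimal Java sketch of the parsing side. It assumes a JAXP parser backed by Xerces (the disallow-doctype-decl feature name is Xerces-specific) and a hypothetical allowed-markup.xsd that holds your whitelist of tags:

    import java.io.StringReader;

    import javax.xml.XMLConstants;
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.validation.Schema;
    import javax.xml.validation.SchemaFactory;

    import org.w3c.dom.Document;
    import org.xml.sax.InputSource;
    import org.xml.sax.SAXException;
    import org.xml.sax.SAXParseException;
    import org.xml.sax.helpers.DefaultHandler;

    public class MarkupValidator {

        /** Parse user markup, rejecting anything that doesn't match the restrictive schema. */
        public static Document parseUserMarkup(String userXhtml) throws Exception {
            // The schema is the whitelist: only the handful of tags we decided to allow.
            SchemaFactory sf = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = sf.newSchema(new StreamSource("allowed-markup.xsd")); // hypothetical schema file

            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            dbf.setNamespaceAware(true);
            dbf.setSchema(schema);
            // No DOCTYPE at all: that kills entity definitions and entity-expansion DoS tricks.
            dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
            dbf.setExpandEntityReferences(false);

            DocumentBuilder db = dbf.newDocumentBuilder();
            // Validation errors are only reported by default - make them fatal.
            db.setErrorHandler(new DefaultHandler() {
                public void error(SAXParseException e) throws SAXException {
                    throw e;
                }
            });
            return db.parse(new InputSource(new StringReader(userXhtml)));
        }
    }

The point is that anything outside the schema dies in the parser before your application ever sees it.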

Once you have a really scaled-down, well-defined list of what is allowed, you can then go through attributes and perform additional whitelisting. Suppose we don't allow javascript: URLs in href attributes - we can just check the href attributes in the DOM (we know we have a valid DOM and a finite set of attributes to test, since it passed the schema) and verify that they all begin with http:// or https:// . Img tags would work similarly.
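A rough sketch of that attribute pass, working over the DOM that came out of the validating parse above (the a/href and img/src pairs are just the obvious examples from a real whitelist):

    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    public class UrlAttributeCheck {

        /** Second pass over the validated DOM: only plain http/https URLs get through. */
        public static void checkUrlAttributes(Document doc) {
            requireHttpScheme(doc.getElementsByTagName("a"), "href");
            requireHttpScheme(doc.getElementsByTagName("img"), "src");
        }

        private static void requireHttpScheme(NodeList elements, String attribute) {
            for (int i = 0; i < elements.getLength(); i++) {
                Element element = (Element) elements.item(i);
                String url = element.getAttribute(attribute).trim().toLowerCase();
                if (url.length() == 0) {
                    continue; // attribute not present; the schema decides whether that's legal
                }
                // Whitelist the scheme rather than hunting for "javascript:" -
                // data:, vbscript:, and whatever comes next all fail the same check.
                if (!url.startsWith("http://") && !url.startsWith("https://")) {
                    throw new IllegalArgumentException("Disallowed " + attribute + ": " + url);
                }
            }
        }
    }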

For those things that do need scripting, you define new sets of tags. This is not dissimilar to Blogger's plugin model: they allow scripting, but they control the script - you put script in by using their pre-defined plugin tags, which (probably) go through an XSLT to translate them into script they can deal with.
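The rendering step can then be a single transform - this sketch assumes a hypothetical plugin-tags.xsl stylesheet (which you wrote, not the user) that turns the whitelisted plugin tags into markup and script you actually control:

    import java.io.StringWriter;

    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    import org.w3c.dom.Document;

    public class PluginTagRenderer {

        /**
         * Translate the whitelisted plugin tags (say, a <video id="..."/> element)
         * into the markup and script we wrote ourselves, via our own stylesheet.
         */
        public static String render(Document validatedUserDoc) throws Exception {
            Transformer transformer = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource("plugin-tags.xsl")); // hypothetical stylesheet
            StringWriter out = new StringWriter();
            transformer.transform(new DOMSource(validatedUserDoc), new StreamResult(out));
            return out.toString();
        }
    }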

It's not going to be without work, but I think my colleague is going to be able to propose a solution to their customer that will be quite secure, and still give the clients the control over their content they crave. Thanks be to Blogger for the model. Blogger is certainly not the only site that operates this way, but it beats the MySpace model - allow anything until a worm starts, then disallow the exact vector that created the worm.

Seems this is precisely what XML and XSLT were invented for...

20070811

More on Output Filtering


Another one of the Justice League guys has posted about the value of output filtering. Now, the people at Cigital are uber smart - not because they're blogging about output filtering, but because they actually look at code from a very academic perspective. If you're not reading Justice League, you should be.

Now, I'm not sure I've ever said that you shouldn't do input validation. I hope I never have, and if I have, I apologize for leading you astray. However, the phrase I've been using in reporting for some time is "business-rule input validation, presentation-specific output filtering". A favorite line of a favorite movie of mine (PCU) is when Gutter is going to see George Clinton, and he's wearing a Funkadelic shirt:

Droz: What's this? You're wearing the shirt of the band you're going to see? Don't be that guy.

While I understand the desire to prepare data early (when it comes in), you've already manipulated it into a form that's wrong for some other context. For example, if you HTML-encode data when it comes in, a month from now when you're using it in a PDF, your customers will complain about all the &lt; entities littering it.

And the very best thing about output filtering is that it can be a habit. Scott hit the nail on the head in his post - the way you properly get input squared away varies all the time, but output filtering for each presentation layer is always the same. In HTML/XML, using .encodeAsHTML() or <c:out /> or Server.HTMLEncode becomes a habit, just as much as the style guides you expect your developers to adhere to. And it's something that can be checked with a couple of regexes, instead of a really expensive tool (not that those really expensive tools aren't good for other things).
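For what it's worth, the habit looks something like this in plain Java (just a sketch - in practice you'd reach for whatever your framework already gives you):

    public final class OutputFilter {

        private OutputFilter() {
        }

        /**
         * HTML-encode at the presentation layer, right before the data hits the page -
         * the same habit as c:out, encodeAsHTML(), or Server.HTMLEncode in their worlds.
         */
        public static String encodeAsHtml(String value) {
            if (value == null) {
                return "";
            }
            StringBuilder out = new StringBuilder(value.length());
            for (int i = 0; i < value.length(); i++) {
                char c = value.charAt(i);
                switch (c) {
                    case '&':  out.append("&amp;");  break;
                    case '<':  out.append("&lt;");   break;
                    case '>':  out.append("&gt;");   break;
                    case '"':  out.append("&quot;"); break;
                    case '\'': out.append("&#39;");  break;
                    default:   out.append(c);
                }
            }
            return out.toString();
        }
    }

And the "couple of regexes" check can be as crude as grepping your JSPs for raw <%= %> or ${} expressions that aren't wrapped in a <c:out/>.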

20070808

Radeox Wiki Rendering


My apologies if the link doesn't work.

In previous posts, I made a recommendation that you use a wiki markup library as an alternate method to allow your users to enter formatted text instead of HTML, in order to reduce the possibility of cross-site scripting. And I mentioned that I did a cursory look for one and couldn't find one.

So I did another search tonight, and Radeox has been split out into a library separate from SnipSnap. And to further add to my excitement, there's already a plugin for it for Grails. The plugin comes with example domain classes, controllers, and views for an uber-simple wiki. But you don't have to use it for a wiki - you could certainly use it just for simpler rich-text editing.
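If memory serves, the core Radeox API is about this simple (class names from memory, so double-check them against the docs before trusting this sketch):

    import org.radeox.api.engine.RenderEngine;
    import org.radeox.api.engine.context.RenderContext;
    import org.radeox.engine.BaseRenderEngine;
    import org.radeox.engine.context.BaseRenderContext;

    public class WikiRenderer {

        /** Turn wiki markup into HTML without ever accepting HTML from the user. */
        public static String toHtml(String wikiText) {
            RenderContext context = new BaseRenderContext();
            RenderEngine engine = new BaseRenderEngine();
            // e.g. "__bold__ and {link:Radeox|http://radeox.org}" comes back as HTML
            return engine.render(wikiText, context);
        }
    }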

Now, it appears that SnipSnap development has stalled (stopped), and the one viable fork has very little available other than some initial mods in Subversion - nothing on SourceForge, and no real momentum (yet). If that's the case (the SnipSnap folks say it shouldn't be), that might explain why Radeox's site is not responding right now. I hope this doesn't go away.

I would like to be able to snap a WYSIWYG editor into Radeox. The one open-source WYSIWYG editor that's not wired into an existing project is FCKEditor, but it doesn't appear you can change what markup it produces - it's HTML only, which would require you to expect XHTML and do really good input validation to ensure that no scriptable attributes are included (not only event handlers, but styles that allow javascript, etc.).

So take a look at the Radeox plugin for Grails, and hopefully somebody with some time will begin to pick up Radeox and/or SnipSnap again.

20070804

Finally, somebody sees it my way


Okay, Pravir had the idea all on his own. But this is the soapbox I almost constantly stand on when it comes to security.

If you don't want to read the article, the gist is that if you're depending on input validation to fix your semantic flaws, you're missing a great deal of the application where the data bounces around and could potentially get re-broken. And you potentially end up denying characters that are perfectly legitimate in some contexts, just not in the one you were worried about.

Now, I know Pravir is running the CTF competition at DEFCON, which makes me wonder why he had time for this post yesterday. Either he has a bunch of them queued up and set to go, or he's listened to one too many speakers say that something is a problem and the way to fix it is better input validation.

20070415

Struts, I never knew thee


It's certainly not a complete transformation yet, but for small projects I think I'm going to give up on Struts Action and take Stripes for a spin. Once I finally got my head around all the configuration (or at least enough configuration to get an application operational), I really began to love Struts, but there was always something very inelegant about it.
Now, all my Ruby on Rails fanboy co-workers make fun of Stripes because, in Java, it's probably too little, too late. But to me, it's a close-to-perfect balance between Struts (where you must understand everything that's going on) and Rails (where you don't know squat about what's going on). I have a problem with not knowing what's going on, as in Rails. I know that if you spend enough time, you can figure out what Rails is doing, and it's ridiculously simple to do the ridiculously simple in Rails. But if you need to customize that in the least (say, remove key exposure), you're going to spend as much time re-implementing things, in which case you save yourself no development time, and at a substantial cost (it's basically CGI, so long-running processes have to be fork()'ed and detached). And with Struts, knowing what's going on comes at a huge cost - you have a bazillion configuration files, and as soon as you use the Struts tag libraries, you have to build all the components (an action, a form bean, and the JSP) at once.
So my task for this week is to get a nice-looking base ActionBean class for Stripes that I can use with utilities necessary to generate and check action tokens to deal with XSRF and Javascript Hijacking. Then on to adding Prototype to some mock-up apps around the house.
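Roughly what I have in mind - a minimal sketch assuming Stripes' ActionBean/ActionBeanContext and the plain servlet session, with "formToken" as a placeholder name for both the session attribute and the request parameter:

    import java.math.BigInteger;
    import java.security.SecureRandom;

    import javax.servlet.http.HttpServletRequest;

    import net.sourceforge.stripes.action.ActionBean;
    import net.sourceforge.stripes.action.ActionBeanContext;

    public abstract class TokenActionBean implements ActionBean {

        private static final SecureRandom RANDOM = new SecureRandom();
        private ActionBeanContext context;

        public ActionBeanContext getContext() {
            return context;
        }

        public void setContext(ActionBeanContext context) {
            this.context = context;
        }

        /** Generate a fresh token, stash it in the session, and hand it back for the form. */
        protected String issueToken() {
            String token = new BigInteger(160, RANDOM).toString(32);
            getContext().getRequest().getSession().setAttribute("formToken", token);
            return token;
        }

        /** Compare the token the form carried against the one in the session. */
        protected boolean checkToken() {
            HttpServletRequest request = getContext().getRequest();
            String expected = (String) request.getSession().getAttribute("formToken");
            String submitted = request.getParameter("formToken");
            return expected != null && expected.equals(submitted);
        }
    }

Forms would render the value from issueToken() in a hidden field, and any event handler that changes state would refuse to run unless checkToken() passes.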
And if you can't tell from this post, I'm really slow about adopting new frameworks.

20070322

Opinion: Jikto - Evil by the Good that will result in Good


At Shmoocon, Billy Hoffman of SPI Dynamics is supposed to release information about a tool he's been working on called Jikto. And a lot of stink has come up about whether the tool is good or evil. And here's my post to let you know that I'm going to sit comfortably on the fence.

If the good guys were always ahead of the bad guys, then this would be evil - but it also wouldn't matter. The thing is, the bad guys have more time than the good guys, and other advantages besides (you have to make your app work exactly as expected under all circumstances; they only have to make it misbehave under one), so the good guys always end up doing things in response to what the bad guys are doing.

See, Firefox and IE didn't add their anti-phishing technology and then watch the bad guys start phishing. We didn't start doing source code analysis and then see attackers respond by inventing SQL injection. We didn't decide to use single-use tokens on form submits and then see the bad guys respond by coming up with XSRF attacks. All of these things happened in the reverse order. I wish it weren't true, but it is. The bad guys work in an environment where:

  • They have all the time in the world because their attacks don't have a project deadline
  • They're rewarded for creativity - or, if their organization doesn't recognize creativity, they'll do really creative stuff on their own and keep all the money for themselves
  • They have all the targets in the world, not just one
  • They have lots and lots and lots of people, because (strangely) they don't have to pay them. (Black markets are really interesting that way - they get more resources precisely because those resources don't have to be paid, so there's no economic limit to the number of people they can add.)
So, Billy is a good guy. He's uber sharp, and I think Jikto will be a killer app. And Billy is on the white side of the fence. But maybe a javascript hacking botnet is exactly what the industry needs to get all those organizations who don't currently take security seriously to begin doing so. And maybe because a good guy released a goodbad or badgood tool to the bad guys (or to the public in general, which includes the bad guys), the good guys will be able to invent a silver bullet to fix this thing once and for all.

20070216

XSS in SVG embedded Base64


Blech!

But then again, the more I thought about it, the more I decided that it's probably not that great of a vector. My first thought was that because the script is embedded in Base64, output filtering is no good there.

But the more I think about it, the more I think it's an unlikely (although really creative) attack vector. Unlikely, because if output filtering is being done, you'd have to already be inside a src attribute of an embed element.

Very nice find, however.

20070125

Stopping XSS but allowing HTML is Hard


RSnake has a post today that made me laugh out loud as soon as I read it. But rather than say "I told you so" (i.e. here, and here) I thought I'd give those of you who INSIST on allowing users to enter HTML into your site a couple of ways you might make it work:

1) You could use a wiki markup library. Many wikis use the same markup semantics (and it isn't HTML), so I'm shocked that a quick search for a standalone markup library didn't turn one up.
2) You could require your users to provide strict XML. The benefit here is that you have good XML parsers available that will puke if the XML is not well-formed. And once you have well-formed XML, you can whitelist the tags and the attributes allowed for those tags. So rather than trying to get rid of anything that looks like it might be script, only allow stuff that you specify - and you specify only the bare minimum folks might need to get by with.
3) Combination of 1 and 2. Not saying it's perfect, but that's what Google does with Blogger. But they allow far more formatting than is probably necessary for most user-controlled display.

So there you go. And of course, even when you use one of these methods, you still need really sharp people like RSnake and kuza55 to check it out. If there's a way to script it, those cats will find it.

20070124

Sylvan von Stuppe: Breaking out of Same-Domain Constraints - Something useful


And now it occurs to me that if there's XSS on the site in question, the injected script doesn't have to send the user's token to the attacker site and get it reflected back - it can just fire off the XSRF with img or script tags on the same site, token included.

Breaking out of Same-Domain Constraints - Something useful


What I had decided earlier was that breaking out of same-domain constraints wasn't that useful. But I thought of an interesting way of using it to break request tokens.

I'm not positive there isn't a better name for it, but one of the biggest recommendations for dealing with XSRF is to use request tokens. You make a really big random number, put it in the user's session, and also put it in a hidden form element on the form. When the user comes back, they carry the token from the form, and the server compares the form token to the session token. If they don't match, fail the request. Theoretically, a user who followed an XSRF link from another site wouldn't be carrying this token.

But what if there's an XSS vulnerability anywhere on the target XSRF site? Here's the workflow:

  1. Attacker gives the victim script to carry to the injection point on the target site.
  2. This script collects the token value from the hidden form element.
  3. The script generates a request back to the attacker's site. The request back will include the request token.
  4. The attacker site generates javascript to request the ultimate target URL with the token included.
Note that the ultimate page we want to call (order widgets) doesn't have to be the page with the XSS vulnerability. If Change Profile has a scripting vulnerability, we can use that page just to gather the token and carry out the remainder of the attack.

This will work in many cases because the token doesn't carry any information about the page or action it was issued for. In order to combat this, the first (obvious) step is to not have any XSS vulnerabilities on pages that hand out tokens. Better yet, don't have any XSS at all. But if there are any, you also have to somehow tie the token to the page that genned it. So maybe your token generation looks something like:
  • k1 = [large random number]
  • k2 = [key signifying the proper target]
  • token = f(k1 + k2)
  • Store k1 in session
Then the check becomes:
  • k1 = [k1 from session]
  • k2 = [the key for this action]
  • token = f(k1 + k2)
  • compare token to request(token)
This ties a request token to the action that uses the token.

Under normal circumstances, if the user carries a random number that matches what's in the session, the request goes through. That number only gets regenerated when they request a new form. And if there's nothing tying the number the user carries to the action, then any form can post to any action.
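Here's roughly what that tying might look like in code - a minimal sketch where f is just a SHA-256 hash of k1 and the action key, and "xsrfSecret" is a placeholder name for the session attribute:

    import java.math.BigInteger;
    import java.security.MessageDigest;
    import java.security.SecureRandom;

    import javax.servlet.http.HttpSession;

    public final class ActionTokens {

        private static final SecureRandom RANDOM = new SecureRandom();

        private ActionTokens() {
        }

        /** k1: the big random number. Generate it once and keep it in the session. */
        public static String issueSecret(HttpSession session) {
            String k1 = new BigInteger(160, RANDOM).toString(32);
            session.setAttribute("xsrfSecret", k1);
            return k1;
        }

        /** token = f(k1 + k2): hash the session secret together with the key for the action. */
        public static String tokenFor(HttpSession session, String actionKey) throws Exception {
            String k1 = (String) session.getAttribute("xsrfSecret");
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            byte[] hash = digest.digest((k1 + ":" + actionKey).getBytes("UTF-8"));
            return new BigInteger(1, hash).toString(16);
        }

        /** The check: recompute the token for this action and compare it to what the form carried. */
        public static boolean check(HttpSession session, String actionKey, String submittedToken) throws Exception {
            return submittedToken != null && submittedToken.equals(tokenFor(session, actionKey));
        }
    }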

kuza55, Anurag Agarwal and all those other cats who like to break out of same-site checks sure are making life difficult. Even with tying like this, your best bet is still to get rid of all your output filtering flaws.

20070123

Breaking out of Same-Domain Constraints


There have been a few articles in the past on working around same-domain policies. I kinda' disregarded them because they still require an XSS injection point, and there are other attacks that are just as effective once you have XSS.

But a coworker today dinged me about trying to make an OTP proxy using a single XSS point. So I did a little futzing around with the DOM and, when the page loaded, replaced the submit button with a new one that evades the same-domain policy by genning a <script> tag sourced from the attacker's site, with whatever is in the form passed along as parameters.

While I was shocked at how easy it was, it still doesn't paralyze me with fear. Here's why:

  1. It still requires an XSS injection point. No XSS, no problem.
  2. Said XSS injection point would, in a perfect world, have to exist on the page that renders the form you want to steal (more below for those of you who are telling me "now just a minute, buster!")
  3. Because of 1 and 2, there are other, really effective, and really simple things to do. Most notably, if I've got a victim who will click on anything, at some point they'll stop paying attention to what site they're on and hand their OTPs to the attacker's site rather than to the real site anyway, right?
Now, for the problem with #2 - I say the injection point has to be on the same page. I understand that if the injection is on another page, I can just render whatever login form I want, which gets me around this whole cross-domain problem anyway, right? I can just render a brand new form whose target action is the attacker's proxy. But I was looking for something where I could swipe form information as it was entered while the user still went to the right site - which ends up being ineffective with OTPs: if I let the user post to the real site, what good does their OTP do me?

So, it was a fun exercise, but not really as scary as the posts I've seen make it out to be.

For those of you who are developers, there's nothing new here - just get rid of all your XSS flaws. Er... ahem... do output filtering on everything, and then you won't need to worry about all this chaos.