Fingerprinting Your AJAX Applications
There's a pretty good article on adding fingerprinting to your Ajax requests so that you know on the server side that a request came from Ajax. However, there's one statement in it that (unless I completely misunderstand) is entirely false (page 25):
This fingerprinting technique helps in determining the type of client code that has sent this request. It is possible to lockdown resources for just the right client on the server-side as well. This type of header is harder to add by automated crawlers and bots since the logic and calls need to be understood first.
Consequently, automated attacks on your Ajax resources can be avoided.
Excuse me? The example they give is just a custom header with a timestamp in it. How is that hard for bot code to add? And how is it hard to falsify? Yes, it prevents XSRF vulnerabilities, because the forged request would have to be made with XmlHttpRequest, which can't go across domains, but writing my own script that adds the header and pulls resources off the server is trivial.
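To make the point concrete, here's a minimal sketch of how little it takes for a standalone script to forge that "fingerprint." The endpoint URL and header names are made up for illustration; they're just stand-ins for whatever the article's server checks:

```typescript
// A bot can attach the same custom header that the page's own Ajax code adds.
// Nothing here requires a browser, a rendered page, or any understanding of
// the site's client-side logic beyond looking at one request in a proxy.
async function scrapeResource(): Promise<void> {
  const response = await fetch("https://example.com/ajax/resource", {
    headers: {
      // The header the server treats as proof of "real" Ajax...
      "X-Requested-With": "XMLHttpRequest",
      // ...and the timestamp-style fingerprint, forged in one line.
      "X-Fingerprint": Date.now().toString(),
    },
  });
  console.log(await response.text());
}

scrapeResource().catch(console.error);
```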
It becomes hard when the Ajax request includes information that could only have been obtained by visiting the containing page first. In the classic Netflix example, you have to visit a Netflix page where a hidden form field is set with a token. Subsequent XmlHttpRequests send that token as either a request parameter or a header, and the recipient of the request validates it against the session. XSRF can't pull this off: the attacker would have to load step 1 first (possible) and then parse that response for the token to attach to the remaining calls (only possible with XHR, which won't go cross-domain).
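Here's a rough sketch of the client side of that token dance. The field name, header name, and endpoint are illustrative, not Netflix's actual implementation:

```typescript
// The page was rendered with a per-session token in a hidden form field.
// A cross-site attacker can't read this page, so they can never learn the token.
const tokenField = document.querySelector<HTMLInputElement>(
  'input[name="csrf-token"]'
);
const token = tokenField ? tokenField.value : "";

const xhr = new XMLHttpRequest();
xhr.open("POST", "/account/queue");
// Echo the token back on every Ajax call; the server compares it to the copy
// stored in the session and rejects requests where the two don't match.
xhr.setRequestHeader("X-CSRF-Token", token);
xhr.onload = () => console.log(xhr.status, xhr.responseText);
xhr.send("movieId=12345");
```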
I hope I'm just completely misreading their claim.