He captured a cross-site request forgery token which was valid for all users.
EDIT: Looks like I am wrong on HN. Damn.
CSRF tokens don't have to be unique per request. That's useful for things like preventing double submissions, ensuring order, or invalidating requests that are still in an active session but have timed out. However, it isn't necessarily more secure than having a secret tied to a user session and generating a token with that secret (which would, by itself, mean that any single token could continue to be used for the life of the session).
See OWASP's recommendation which includes:
In general, developers need only generate this token once for the current session. After initial generation of this token, the value is stored in the session and is utilized for each subsequent request until the session expires. When a request is issued by the end-user, the server-side component must verify the existence and validity of the token in the request as compared to the token found in the session. If the token was not found within the request or the value provided does not match the value within the session, then the request should be aborted, token should be reset and the event logged as a potential CSRF attack in progress.
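The OWASP per-session pattern above is simple to sketch. Assuming a generic session object (the names here are made up, not from any particular framework), it comes down to generating the token once, storing it in the session, and comparing on every state-changing request:

```typescript
import { randomBytes, timingSafeEqual } from "crypto";

// Hypothetical session shape; the field name is illustrative.
interface Session {
  csrfToken?: string;
}

// Generate the token once per session and reuse it until the session expires.
function getCsrfToken(session: Session): string {
  if (!session.csrfToken) {
    session.csrfToken = randomBytes(32).toString("hex");
  }
  return session.csrfToken;
}

// Compare the token submitted with the request against the one in the session.
// timingSafeEqual avoids leaking match information through timing.
function verifyCsrfToken(session: Session, submitted?: string): boolean {
  if (!session.csrfToken || !submitted) return false;
  const expected = Buffer.from(session.csrfToken);
  const actual = Buffer.from(submitted);
  return expected.length === actual.length && timingSafeEqual(expected, actual);
}
```

Per OWASP, a failed check should also abort the request, reset the token, and log the event.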
Assume that everyone was lying. Then this statement is true, but as a lie it must be false. Contradiction. So this case is not possible.
Assume that not everyone was lying, i.e., some other post is true. Then this statement is false. That fits together, as long as at least one other post is true.
Examples like the "this statement is false" thing are just rife with equivocations and other blatant nonsense. Yes, there is such a thing as "neither true nor false", but every example for "both true and false" I've seen is based on lousy thinking and semantic games.
Also, as a JS programmer, the distinction between true/false/neither/ineffable is very familiar (i.e. true/false/null/undefined, with null denoting the absence of a value and undefined denoting the absence of a definition; though of course in practice the distinction is rarely necessary, resulting in a lot of confusion and unnecessary double checks).
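A quick illustration of that four-way distinction, in plain JS semantics (nothing framework-specific):

```typescript
// null = the property exists but deliberately holds no value;
// undefined = the property was never defined at all.
const user: Record<string, string | null | undefined> = {
  name: "Ada",
  nickname: null, // defined, but intentionally empty
};

console.log(user.age);                    // undefined: never defined
console.log(user.nickname);               // null: defined, but no value
console.log(user.nickname == null);       // true: loose equality lumps null and undefined together
console.log(user.nickname === undefined); // false: strict equality keeps them apart
```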
It's got nothing to do with "mysticism". It's just armchair linguistics.
No it isn't: having CSRF tokens be unique per session is perfectly adequate.
There's a good explanation here: http://stackoverflow.com/a/25475141/240921
Just reduces the risk surface a bit.
Also, you may well have different session IDs across different applications anyway - pretty good idea, if you want to be able to selectively deauthorise logins.
I've never heard of using the CSRF token itself to answer the "who is this user" question. Has PayPal been using a single value for both the session cookie and the CSRF token? That seems fundamentally broken to me but it's more likely I'm misunderstanding the bug.
Furthermore, CSRF mitigation does not have to depend on a session existing. The CSRF token value can be written to a separate cookie unrelated to the session, whose value can be randomly regenerated on every page load. For every page view, the CSRF token on the form would be different, matching an ever-changing CSRF cookie value.
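That stateless variant is usually called the "double-submit cookie" pattern. A rough sketch (function names invented for illustration):

```typescript
import { randomBytes } from "crypto";

// On every page render, mint a fresh random value and set it both as a
// cookie and as a hidden form field. No server-side storage is needed.
function issueCsrfPair(): { cookieValue: string; formValue: string } {
  const value = randomBytes(32).toString("hex");
  return { cookieValue: value, formValue: value };
}

// On submission, accept the request only if the cookie sent by the browser
// matches the form field. A cross-site attacker can forge the form value
// but cannot read or set the victim's cookie for your domain.
function checkDoubleSubmit(
  cookieValue: string | undefined,
  formValue: string | undefined
): boolean {
  return Boolean(cookieValue) && cookieValue === formValue;
}
```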
> There is no reason why a session can only be created after login.
In your second scenario, while of course you could store a random value in a "CSRF cookie" on every page load, of what use could it be if you don't also store the "last CSRF cookie value" somewhere on the server and compare against that on new requests?
I wouldn't say $10k isn't nice. Rather, history has shown that people are susceptible to money. If someone can choose between a $10k bounty and selling information worth much more, then, to put it conservatively without pulling numbers out of thin air, people will choose the latter option more often than PayPal ought to want, and that costs them far more than the $10k they'd otherwise pay out. I think that's a pretty fair statement.
Anyway, about the issue itself... I really don't know what to think. It's pretty insane, and it puts a lot of things into perspective once more (the whole 'if PayPal can't secure xyz, can my local hospital keep my health records safe, and am I equipped to handle my own digital security responsibilities?' train of thought).
Oh if anyone feels like thinking this through, how bad do you guys think this hack could have turned out if deployed by a malicious group? Paypal is pretty walled in with various limits, fraud checks, frozen accounts, multi-day bank transaction processing, reversible transactions, partnerships with banks to do chargebacks there, and they do KYC on every account. The offramps are therefore pretty limited unless you completely expose your identity. Of course they could buy a ton of stuff online, but how anonymous would the shipping be, and what could you buy with Paypal that would be liquid enough, would ship quickly and could be received fairly anonymously (you don't want to use this hack and end up with 100 playstations and on 3-day shipping to your own home and have police arrive before the goods do!). I wonder what the best plan of action would be, I can't really come up with any solid way to actually walk away, anonymously, with a ton of money but surely there must be one. Indulge me if you want!
So, you buy some PayPal Prepaid cards, activate them, and set up PayPal accounts with info bought from hacker forums, transfer money from hacked accounts to the new accounts, and use both the instant and regular PayPal load features to load the prepaid cards and get as much cash out as possible. That card has a $940/day ATM limit, so even with 10 or 20 of them you're talking about real money for most of these criminal types. I wouldn't do this obviously, but you asked how someone might take advantage of it.
I also wonder if they have some verification on these cards before they're activated (e.g. the card activation number is sent to your fake address). It'd definitely be a solid plan IF you could create identifying documents, based on stolen identities bought online, that can fool a 17-year-old cashier at a retailer, like an ID card. That's probably not trivial, but very far from impossible. And then use those cards without having to connect them to a verified PayPal account and without needing to activate the card. Nice idea!
Would need a ton of prep though if you want to walk away with more than $10k. If I had to choose (as an evil person, say) I'd still take the bounty over this idea.
I wonder how successful their card program is and what percentage of their fraud is generated by it. We're seeing businesses massively shift away from cash, as are governments. But I also thought the same was happening with cards, to a smaller extent. I wouldn't have expected a physical-card push from a digital money company like PayPal. I guess cards (physical, or stored as digital credentials like Apple Pay) are here to stay for quite some time.
As far as the best way to remove money from the system, I would think the first step would be buying bitcoins with paypal (perhaps from a bunch of different localbitcoins sellers), then using a mixing service to anonymize them, and hold onto them for a reasonably long period of time before spending. I haven't looked into it in any detail myself, but I'm sure there are "best practices" out there for anonymizing your bitcoin transactions.
^I should clarify that I would be one of those people! That doesn't make the intellectual exercise of thinking through how one would pull it off less interesting though.
I used the dupe methods I subsequently found in that game, and in another, more popular game by the same company, to make money selling items to players. I should have reported all of the bugs instead of exploiting them, regardless of the insulting lack of any bounty. We desperately needed money, and I gave in to temptation. I know that I was wrong, and hopefully my comment won't evoke responses that focus on my mistake alone. I certainly would've strongly considered reporting the bugs in exchange for a monetary bounty above $1k, the freedom to publicly explain the dupe method after the bug was patched, and a small acknowledgement from the company that I had found the bug.
Thought experiments about the most effective way to exploit a bug are a great way to discover places that flags should be set to trigger notifications of unusual activity in order to mitigate damage from exploitation. If the company that developed the game had put more effort into considering the question, "If a dupe method did exist in this game, what would be the most effective ways to profit from it?" then I wouldn't have been able to exploit the bugs for nearly ten years.
Holy shit, what game was this, and roughly how much did you make selling duped items over nearly 10 years!? That's a story right there. Did you ever get caught?
As for rewards in virtual games... it's just a much lower priority, I guess, since it's all virtual. There's no direct loss to the company if someone is able to dupe items, unless it's really widespread, but in that case it's noticeable quickly (system-wide deflation, fewer boss runs and less grinding, rapidly rising item counts). It upsets the game balance and affects enjoyability (which indirectly affects subscriptions and thereby company profits), but it's not direct money out of the company's pockets (unless it's a shitty game where you can directly buy items with real money; in that case it can dampen revenues fairly quickly).
> flags should be set to trigger notifications of unusual activity in order to mitigate damage from exploitation.
I was an admin for a while and had access to every single chat log. It's pretty scary how privacy in virtual worlds doesn't exist at all, nor is there any debate about it that I know of. I'm sure companies like Facebook or WoW have restrictions and various policies in place, but in a lot of games an admin can simply read your chat. I frequently had to look into reports of things like racial slurs in private chats to warn or ban users (and yes, people did fake-report other users they hated, so I always had to check the logs for evidence), and just scrolling through the conversations you'd pass the craziest private things. In my early teens, a decade ago now, I couldn't help but read some of them (morally pretty embarrassing to say).

Anyway, as to the relevance of all this: every single time an exploit was found, you'd have users telling their friends in private chat IN game. You could search the logs for 'admin' or 'secret' or 'exploit' and find messages like 'Dude, keep this a secret cause admins will find out, but I just found a money duping exploit'. No admin ever did searches like this in practice, but it was pretty easy to build a flagging system. We eventually did build a proper flagging panel with charts of total credits in the game, total unique boss items, and total XP earned. There'd be trends for the time of day (mostly US/Europe) and for the day of the week (Monday low, Saturday high), and you could easily see deviations from those caused by exploits.

In short, everyone always tells their friends, in-game no less, and they almost always go completely overboard and become insanely wealthy in the span of days. Also, when veteran players complain of another player hacking, it almost always turns out to be true after tons of denial.
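The kind of flagging described above doesn't need anything fancy. A toy version that flags hours deviating from an expected baseline might look like this (the thresholds and names are made up for illustration):

```typescript
// Toy anomaly flag for in-game economy metrics, e.g. total credits
// created per hour, compared against an expected hourly baseline.
function flagAnomalies(
  hourlyTotals: number[], // observed totals, one entry per hour
  baseline: number[],     // expected totals for the same hours
  tolerance = 0.5         // allow 50% deviation before flagging
): number[] {
  const flagged: number[] = [];
  hourlyTotals.forEach((observed, hour) => {
    const expected = baseline[hour];
    if (expected > 0 && Math.abs(observed - expected) / expected > tolerance) {
      flagged.push(hour);
    }
  });
  return flagged;
}
```

A sudden dupe exploit shows up as hours wildly above baseline, exactly the "deviation from the weekly trend" signal described above.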
And it usually takes long to fix because almost half of the time the hacking player is a veteran player who is respected, trusted, well-connected and at times in partnership with admins who play themselves. Man online games are awesome, it's like the bold and the beautiful sometimes haha. Haven't played any in years though, wish I had the time I did as a teen! Feel free to share your stories! :)
> hopefully my comment won't evoke responses that focus on my mistake alone
Really don't care :P It's a game, and somehow the drama of stuff like this is what made persistent online games so much fun (talking about games like EVE Online here; for games like Counter-Strike, cheating is simply never fun or funny). For example, check out this video: https://www.youtube.com/watch?v=XrYe_4vHzgE It's mostly based on a story of two corporations vying for wormhole space by cheating. Completely awesome and epic.
I made enough to pay off a car, credit card debt, student loans, build a small chunk of savings, make a small down payment on a house, and live free of financial stress for a few years. It helped me immensely when I was desperate, but it didn't end very well. I wrote more about it in my blog listed in my profile. Here is an HN discussion about one of the posts that generated some interest: https://news.ycombinator.com/item?id=8439648
This is where your plan fails. Since it is very easy to rollback the payment with paypal, no one is going to sell you bitcoins in exchange for it, except for tiny amounts.
If you look at localbitcoins for example, there are very few sellers who accept paypal, all of them limit trades to very small amounts (mostly $100-200) and require high feedback scores and some form of KYC
Excerpt from one of the offers:
"Due to previous fraudulent transactions. I can only sell you bitcoin via PayPal if you take a photo of yourself holding valid U.S. government issued ID where your face matches the photo. "
And I'd definitely assume it'd be considered insider trading.
But it's a very clean way. There aren't really any direct traces to you as you didn't enter into any actual agreement or conversations with anyone. You'd just dump information one sided, that's pretty easy to do anonymously. And if you look at the volume of trade on eBay, it'd be very unlikely they'd be able to pinpoint who was the source of the hack based on trading alone, if they even look for a link between traders and the hack in the first place. And even if they can reasonably say it might be you cause you just shorted $250k out of the blue (still puny volume for a stock like this), there'd be no way to prove anything else.
The biggest issue I'd say is having been tracked by server logs while finding out about the security leak.
As for whether it's a good plan... I wouldn't take it myself if I were evil. First, it's probable that whoever you leak it to underground will go to PayPal and collect a bounty; it'd probably be fixed within mere hours of them becoming aware of it. It wouldn't show up on the price charts much, if at all; I mean, this story went pretty much unreported, right? Even if you put up $100k, the price might drop 1-3%; that's peanuts for an opportunity like this, and it requires a shitton of spare money. And even if you leverage it, your short might just coincide with some big news. For example, since he reported the bug publicly, the price has risen by 9-10%. If you'd shorted that with leverage, ouch.
Safe but not very effective I think, good try in any case!
Turns out their digital goods checkout is broken for accounts that have 2fa enabled. Their response was "WONTFIX, disable 2FA".
They may have made many mistakes, but some of the smartest people I know work there, and most of the work they do is first-class.
Are we talking about the same PayPal?
Have you seen their web interface or their APIs? It hasn't evolved at all in 10 years. It's the same convoluted, bloated, and slow web experience as it was in 2005. Just because a few smart people work there doesn't mean the project is outputting quality software.
Countless software companies have been able to take aging, massive codebases and evolve them into more modern, usable states. Most of the time these big companies, such as Microsoft or Oracle, still output bad software, but at least they show signs of evolution and investment. From an outsider's perspective, PayPal is particularly bad at this: their software quality and UX have been in perpetual stasis while Stripe and others eat their lunch. And that's not simply because of their famously bad customer service.
>> some of the smartest people I know work there
I have no inside information here, this was told to me by Palantir people when interviewing there.
Security is hard. But activating tokens before a user has actually logged in is a breathtakingly incompetent, fundamental design flaw. How such code ever made it into the production code base of a company responsible for protecting billions of dollars, along with financial information for a significant portion of the world, is incomprehensible. It makes me wonder what else is lurking over there.
This is serious when you consider that they are moving millions of transactions every day...
CSRF valid for ALL users, ouch!
> We may pay beyond the range at times when bugs are found to have significant risk.
If they don't consider this a significant risk, I'm not sure what is.
However a separate salaried 'cracking' team in another city, with bonuses for exploits found might work.
I think that, given you could effectively steal from any account for which you knew the email address, this was worth significantly more than $10k.
If you succeed in CSRF-attacking him, that is.
There is no locale in the URL whatsoever. I have to manually change my country to United States in their menu, and then I'm able to access the URL. So annoying.
Why not show the page with a simple warning such as "Sorry, we don't have this content in your language, here's the original:"...
In Denver, Google has also at times decided I'm in France, then Hungary, with absolutely no way to fully disable it on sites like YouTube or Google.com (you always get the links and logos, and alt-text for images is still in the "local" language). Play Store shows certain section subtitles in one language, even though Android and everything else is in another.
I recently went to buy some books for my kids. The publisher's site detected I was in another country, and hid the book information with a page suggesting I contact a local reseller. Despite that I was just visiting another country at the time. No way to override.
Chrome would do the same on install, and offer no way to remove the default search engine from the localized copy. You could change to another one, but not fully delete it. Meanwhile, the entire UI stays in whatever language they decided. So despite using Windows in English, a browser in English, etc., Google thought it'd be cool to run in Spanish, or Russian, or whatever, based on your IP address. Moronic.
Netflix has similar problems - the site does not respect language preferences completely and things like search autocomplete will use the "detected" language. Traveling and contacting support? Cool. Despite every indication you're using English, they'll route you to support in a local language. It's insane.
The one thing the HTTP spec got right, Accept-Language, and everyone just ignores it and implements this shitty geolocation-based language handling. And they think it's a feature; every time someone implements this, they probably get a pat on the back and a raise for doing something intelligent. FFS.
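For reference, honoring the header isn't even hard. A minimal (decidedly non-production) parser that orders the client's language tags by q-value:

```typescript
// Minimal Accept-Language parser: returns language tags ordered by q-value,
// highest preference first. Tags with q=0 are dropped. Real-world content
// negotiation should use a vetted library; this is just a sketch.
function parseAcceptLanguage(header: string): string[] {
  return header
    .split(",")
    .map(part => {
      const [tag, ...params] = part.trim().split(";");
      const qParam = params.find(p => p.trim().startsWith("q="));
      const q = qParam ? parseFloat(qParam.trim().slice(2)) : 1.0;
      return { tag: tag.trim(), q };
    })
    .filter(({ tag, q }) => tag.length > 0 && q > 0)
    .sort((a, b) => b.q - a.q)
    .map(({ tag }) => tag);
}
```

Given "fr-CH, fr;q=0.9, en;q=0.8", a server could serve French to this client instead of guessing from the IP address.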
Commonly it was API features that work subtly differently depending on the region of the world you're in, but help pages also end up with completely different URL schemes.
In my personal opinion, you want to balance bounties against how much money the person could receive using the exploit, versus the amount of trouble they could potentially get into.
But these don't _seem_ to be flaws that you'd have if you were spending as much money/resources/prioritization on security as I'd expect a business in Paypal's business to be spending.
Am I wrong?
This exploit is the result of several bugs in different systems coming together into one "super bug." If you have the same developers who wrote the code go back over that same code they may not spot the bugs.
Microsoft does code reviews, but they also do fuzzing, unit testing, path diagrams, and other methods, in the hope that each type of testing will uncover different bugs. However, when you have bugs caused by multiple systems working together badly, it might be better solved by policy (e.g. this ONE piece of code does all of our CSRF handling from now on; no re-implementing it five different times).
It wouldn't surprise me if companies such as PayPal have moles who actively compromise software through subtle means. If it can happen in government, it can definitely happen in a huge software company.