PayPal 2FA Bypass (henryhoggard.co.uk)
525 points by Spydar007 413 days ago | 137 comments



Mistakes were made, and there are definitely lessons to be learned, but if we want to improve the state of security, we really need to change the way we react to these types of bugs.

If a service has an outage and the company posts a postmortem, we all think: "wow! that was an interesting bug, let's learn from this". We shouldn't treat security issues differently.

People who make security mistakes aren't idiots. They aren't negligent. They're engineers just like us, who have tight deadlines and blind spots, and who make mistakes. Shaming people and companies for security bugs will only lead to less transparency and less sharing of information, making us all less secure.

This is a really cool bug. Kudos to the researcher for finding it and responsibly reporting it, and to PayPal for fixing it in a timely fashion. Hopefully this type of bug changes some internal processes and the way the company thinks about 2FA.

As for security questions: these are obviously insecure and should really never be relied on. If you can opt out of security questions, do so. If you can't, just generate a random password as the answer. "I_ty/:QWuCllV?'6ILs`O12kl;d0-`1" is an excellent name for your first dog / high school. Just don't forget to use a password manager to store these.


I disagree. Your "let's be super nice to everybody" strategy has been taken to an absurd conclusion. Is no one to be held accountable for the competency they claim, when it comes to computer stuff?

PayPal doesn't write on its websites "We're some enthusiasts with no software or security experience. Let's see how well this works, together!" No, like everyone in this industry, PayPal claims its security experts have your money and financial information super secure. It's one of the first in this space, and has almost two decades of experience.

This wasn't a tricky subtle bug, this was obvious. This should have been caught in code review and tests. PayPal should be afraid of rolling out slick easy-to-use features without code review and tests. It is many years too late for PayPal to be learning the basics.


>I disagree. Your "let's be super nice to everybody" strategy has been taken to an absurd conclusion.

You and I must have read different responses, because I saw nothing in there about "being super nice to everyone." What I saw was a reasonable request not to commit the fundamental attribution error, which can be paraphrased as: when I screw up, there were extenuating circumstances; when you screw up, it's because you're a moron.

https://en.wikipedia.org/wiki/Fundamental_attribution_error


A company comprised of otherwise reasonable people can behave in shockingly dumb ways. The only way to make companies learn is to impact their bottom line, and that means not-nice words need to be said.


-4 but not a single response? Folks, I wasn't aware it was in question that companies can behave in irrational ways.


> If you can't - just generate a random password as the answer. "I_ty/:QWuCllV?'6ILs`O12kl;d0-`1" is an excellent name for your first dog / high school. Just don't forget to use a password manager to store these.

Be wary of social engineering attacks though.

- <support on the phone> I'd also need you to provide me an answer to your security question. What was your first dog's name?

- <me> Oh, you know, it's a long string of random characters I generated, I'd have to give them to you one by one...

- <support> (looks at the answer) uh, right. I see. Let's continue then.


I always fill all social-engineering-vulnerable questions with nonsense, especially on banking sites. I like it when they let you set the question yourself, so you can put something like "Why would a secure financial institution allow such a horrible security hole in its system?" To which the answer is Tyrolese4Tokyo_Beulah!Papuan.


I fill them with nonsense words unrelated to the question. Mother's maiden name? Fire truck. First car? Air conditioner.

If I have to call a company they always ask me why. The explanation is anyone who has me as a Facebook friend can figure out who my first girlfriend was, my maternal grandmother's first name, my mother's maiden name, where I was born, my first car, etc. And if every company has the same data, a data breach at one makes the entire system fall apart.


Same here. But recently, United Airlines changed their system to only allow selecting from a list (your favorite dog breed? Choose 1 of 8. Your favorite movie genre? Choose 1 of 12). I picked a random set and wrote it in my password stash.

Seriously bad security practices.


And the answer is "because, by and large, it works just fine". Yes, people fall afoul of these kinds of questions, but the general public cannot handle proper security hygiene - and educating them takes so much effort on both sides that your customers will just go elsewhere. Proper security procedures would also lock a great many more people out of their own accounts than would be lost to fraud. Can't satisfy the security questions? Well, take Monday morning off work and bring in several forms of identification...

It's why ATM PIN codes are so short - it's easier for the bank to just reimburse losses in case of fraud than to properly/strictly control security access.

Any time I see someone talk about how dumb general banking security procedures are, it tells me that they've spent no time in tech support for the general public :)


Exactly, it's just another opportunity to password protect things.


Great point. "correct horse battery staple" wouldn't be vulnerable to such an attack.


But it must be said that, with GPU evolution and with password-cracking software developers naturally going where the passwords are, this type of simple password design does NOT work anymore.


How so? The point of a random-four-words password isn't that it won't be hit by existing brute force software, it's that it's easy to remember but impractical to brute force with any software - with a 60,000 word dictionary there are more than 2^63 possible passwords.
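
The arithmetic behind that claim, as a quick sketch:

```javascript
// Entropy in bits of a passphrase of `words` words drawn uniformly
// from a dictionary of `dictSize` entries: words * log2(dictSize).
const entropyBits = (words, dictSize) => words * Math.log2(dictSize);

console.log(entropyBits(4, 60000)); // ~63.5 bits, per this comment
console.log(entropyBits(4, 2048)); // exactly 44 bits (the xkcd dictionary)
```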


That's true, but the whole point of the strip was that you use words that evoke an easily-memorable scene in your head.

That will probably mean you can confine your list to words that most people know, which reduces the search space significantly. "correct", "horse", 'battery" and "staple" are all very common words.


The strip used a 2048-word dictionary. 2^44 is still far too many to brute force.


Is it really an easily memorable scene, or has the strip just been referenced in every HN and Reddit discussion about password security? There is no way I'm remembering some random story for an account I log in to once a month. The point is to have a password that is easy to see in a password manager and then type on a different device. Seeing D8hsegfw_#7Ax42 and then trying to type it into a hidden password field is painful, especially on a phone. Seeing Dynamo-Stench3Player and typing it in is very doable.


They are suggesting it for a security answer, especially one you give over the phone to tech support, NOT a password.


Irrelevant. It works fine for passwords too. The security of "correct horse battery staple" method is (nearly) optimally resistant to GPU (or any other) brute force attack.


Oh, of course, right. Misinterpreted that bit.


Yes, having something readable (and "believable") is more useful and secure than having to rely on saying a random string.

Just put "Plymouth Creek High"

(Not to mention the possibility that some "security genius" will ban special characters on those answers)


Generally what I do is put something tangentially related to the question.

For example, "What's the name of your high school?" would be answered with something like "Khan Academy" (the name of a site that helped me) or "Mr. Jefferson" (A teacher, or best friend)


Mine was Rainy Purple Road. Then I get to educate the person on the phone to, in her personal life, never give the correct answer to anything googleable for a security answer. That usually involves a discussion of Sarah Palin...


That's why my answers are "DO NOT ACCEPT THIS ANSWER!!! <long string of random chars>". Hopefully the support person will get the hint. :-/


Unfortunately, if they don't or are forced by policy, then you've just told the Internet your security answers.

If I were you I'd edit that and reword it without specifics.


Thanks for your care, but there is a part that is random, and the wording is probably a bit different. I don't disclose passwords on the internet. :)


At least with one of my bank's customer support centres this wouldn't happen: if you stumble for a split second they shut down the call and tell you to go into a branch to verify your identity. This is pretty annoying...


Good, they should be commended for the practice! I wish I could trust that all companies would do that, though.

(Anyway, I like the idea of using answers to security questions as hard passwords.)


That's terrible, because it makes using password managers impossible (while on your phone, for example, or when you simply don't have it open that instant because you didn't know when/if they would ask).


While I strongly agree with the thrust of your comment, I'd like to chime in and say that this is not a cool bug. On the scale of web security bugs, this is the kind of thing you expect an intern to find.

I actually think the post was written in recognition of that fact, and was amused by the thudding, abrupt conclusion it had; it was like the author was sharing a joke. "Yup, it was that easy".

People who do this kind of security work (check out the rest of the author's posts) tend to be running their browsers piped through a local interception proxy. Once you develop the habit of mind to look for stuff like security parameters, it's hard not to notice these kinds of things. I think more developers should tool up the same way and learn the same habits.


What are some tools you'd recommend running? I'd love to have more awareness as I passively browse.


The open source tooling here is getting better but the gold standard, used by virtually every professional application security worker in the industry, is Burp Suite. Lots of people have tried to make modernized, open source versions of Burp, but at this point cloning it is like cloning Microsoft Word.

If I was your director of security, one of the first things I'd do is build a plan to get all your developers trained up on Burp. It's useful for more than just security testing.


In addition to burp that's already had a mention, I'd recommend looking at OWASP ZAP. It's fully open source, which is nice and has had a lot of new features over the last couple of years.

It can also be integrated into CI pipelines for automated security testing.


All great points, and true! The problem is PayPal hasn't been a great company to so many people; their practices are abysmal. I've had my company account frozen more than once, and it was a terrible experience - and it's happened to lots of people. This is a company that makes a lot of mistakes and has bad judgement. They don't deserve my understanding. They haven't earned it. Other companies have.

But otherwise you are right. Less scrutiny and more understanding, so companies will be open and honest when they screw up.


Indeed - I've long since given up on security questions/answers as being secure. It kind of defeats the purpose of unique passwords if all the answers are common knowledge... I had to laugh at one instance where I actually had to read out a 30-character secret answer on a support phone call :P


The problem is in PayPal's case, 2FA has been terrible for years. I've even been locked out of the account for a whole week because of their shitty SMS sending service. This prompted me to disable 2FA on Paypal, because weirdly enough that makes me feel "safer" (as in safer from losing my money due to Paypal's stupidity by being locked out of the account).

So in this case I'm certainly not one to say "hey, mistakes were made - let's give them another chance." They've been getting reports about their 2FA system for years. So there's no excuse at this point.


> They aren't negligent

What would actually qualify as negligence in your view of the world!? This is as bad as it gets, this isn't an ordinary mistake.


Sounds like a lot of work! PayPal will just turn off two-factor themselves if you ask nicely via an unverified Twitter DM.

http://imgur.com/a/Tu1AN

https://www.reddit.com/r/SocialEngineering/comments/3kgw3s/p...


PayPal's 2FA broke on me when it started locking my account every time I attempted to use it, because I'd previously made it send too many SMSes (poor signal).

I was thankful that support let me disable it, but it was worrying they didn't try to verify that I actually controlled my device first.


It's weird - don't all services that enable 2FA give you reset codes? Shouldn't they ask you to use those, or at least have you provide one before they help you disable 2FA on your account? Kind of odd.


The simplicity of this exploit demonstrates something profound. The most dangerous things in life are not hidden deep in the weeds. Rather, they stare us in the face in the most obvious spots. It isn't the unknown that presents the biggest threat. It is the known that we never gave a second look.


The cardinal rule of security is: you never, ever, trust anything the client sends.

This bypass is a perfect example. Although the author doesn't mention which interception proxy he used, I'm 99% sure it was Burp. Replaying modified content is trivial.


Even with a free software tool like mitmproxy modifying requests is trivial. You don't even need Burp.


The free version of Burp is completely capable of doing this, and much more.


>you never, ever, trust anything the client sends.

The author likely wrote code that correctly validates "for all security questions a correct answer is given" and just forgot about the part where "for-all propositions are trivially true of the empty set."

It's easy to read a for loop for what it's intended as - a loop - and not think about "what if we never enter it at all?"
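
A hypothetical sketch of that failure mode in JavaScript - the names are invented for illustration, this is not PayPal's actual code:

```javascript
// BUGGY: loops over whatever question/answer pairs the client submitted.
// If the client strips out all the parameters, the loop body never runs
// and the check passes vacuously.
function answersValidBuggy(submitted, expected) {
  for (const [name, answer] of Object.entries(submitted)) {
    if (expected[name] !== answer) return false;
  }
  return true; // reached even when `submitted` is empty!
}

// FIXED: iterate over what the *server* expects, not what the client sent,
// so every expected question must be present and answered correctly.
function answersValidFixed(submitted, expected) {
  return Object.keys(expected).every(
    (name) => submitted[name] === expected[name]
  );
}

const expected = { securityQuestion0: "rex", securityQuestion1: "lincoln" };
console.log(answersValidBuggy({}, expected)); // true  -- the bypass
console.log(answersValidFixed({}, expected)); // false
```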


If we think "well, we need to have loops", we might be feeling despair right now; however, array languages don't need loops! I can write:

    min test each args
and I can do the same in JavaScript, it's just uglier:

    args.map(test).reduce(function(x,y){return Math.min(x,y)})
Writing in a functional style makes this kind of programming slightly less onerous, but it still feels strange in languages that are a bad fit.


I've seen multiple major financial companies vulnerable to modification of the page that could be done entirely in inspect element.


Fiddler also has this capability


Heart disease vs. terrorism.

It seems to be an unfortunate emergent behavior of groups of humans.


I've noticed that if a fire kills many people it's only one day of news, while if a bomb kills one person everybody's afraid.


It's not the number of casualties that scares people, but rather the nature of the threat.

Fires have existed for several millennia. Our ancestors who built and lived in the very first settlements suffered from their homes/stores occasionally burning down. We know what types of conditions increase risk of fires and we know how to minimize those risks and put the fires out when they occur.

Bombs on the other hand are unpredictable. They also cause their damage instantly and there is no way to minimize or prevent it. You can escape from a burning building, or if stuck, wrap a piece of wet cloth around your mouth to minimize the amount of smoke you breathe while you wait for rescue. You can't outrun an explosion.

That's why people are a lot more scared of bombs than they are of fires (or car accidents, for that matter, which kill many more people than both fires and bombs combined).


I think perceived danger = (times hearing that people died doing an act) / (times doing the act).

So flying rates much higher than driving:

People drive much more than they fly (twice a day vs. a few times a year) and hear about air crashes (9/11, Malaysia Airlines) more than car crashes.

It's the brain playing games with us.


Availability bias is definitely one aspect, but I think a big part of it is also how easy it is to tell a story that separates oneself from the victims (this often takes the form of victim blaming, but not necessarily). It's easy to tell yourself the story of how heart attacks happen to people with different lifestyles or genetics, or how car crashes happen to drivers who are less attentive, or how violent crime happens to people who live in other neighborhoods. It's a lot harder to tell yourself the story of how you'll avoid the plane with the latent mechanical fault or how you'll never be at a gathering place that would make an attractive terrorist target.


One of my PayPal 2FA phone numbers is listed twice, and neither entry can be removed (I get errors when I try). Their support can't help with the situation because they can't see the duplicate on their side.

This is not surprising to me.


I've been unable to remove a credit card from my account for almost 5 years. It's since expired, and is somehow stuck as the default payment method.


Is 17 days an acceptable turnaround time here? I know investigation and fixes can be a challenge, but given the severity of this exploit and PayPal being a serious financial service, I would hope for a faster fix. Maybe I'm off base... I really don't know; curious what others think.

How much time would've had to pass (without PayPal doing anything) before the author is ethically obligated to post to HN/media/etc about the hack? I believe publicizing an (unpatched) exploit like this crosses into criminality, but it would be essential to demonstrate some kind of proof, for credence and gravity. I'm guessing the community has some standardized guidelines for this sort of thing, but I'm not aware of them.


17 days is fast, relatively speaking.

Security questions are hardly great 2FA protection anyway.


Good to know.

And yeah, a security question to bypass phone 2SV is a joke. It almost entirely defeats the purpose.


Just to be clear, it bypasses any of their 2FA codes, not just SMS-based codes. The security questions bypass "feature" also appears on my account for which I use a VeriSign 2FA dongle.


Notice that 17 days is basically what's needed to add the issue to the next sprint, complete its development along with everything else in that sprint, and deploy to the live site. To me that sounds fair.


The "standardized guidelines" sometimes vary -- mostly dependent on the nature of the vulnerability -- but 90 days seems to be a pretty common timeframe. That's what Google gives others before they publicize the details, for example.


I've seen equally ridiculous web bugs: computing prices browser-side in JavaScript, credit card numbers encoded in REST API endpoints, financial websites not supporting 2FA at all or mixing HTTP requests into their sites. We're still solidly in the dark ages of web security.


When I went to set up my online account at my old bank, I entered a randomly generated 16-character key and got an error: "Maximum password length limited to 6 characters...only alpha-numeric"

I called to inform them that their account creation was broken, because obviously that had to be a bug. They told me that sometimes people have a hard time remembering their password, so they "need to balance between ease of use and security". My jaw dropped and my head rolled off my shoulders.

I didn't set up an online account.


It seems to be standard practice for German banks to limit online passwords to five alpha-numeric characters. Fortunately, you need a TAN (generated by a device or received via SMS) to actually make a transaction. I have no idea why they limit the password length like this.


I'm guessing it's five characters so people don't just use their four digit PIN. I don't have any explanation for why they would limit it to five characters though, or why it has to be alphanumeric.

That said, Comdirect seems to offer regular passwords or six digit PINs and Bank of Scotland (in Germany) seems to also offer regular passwords.

But there are plenty of other offenders. For example, my energy provider E-wie-einfach requires a mix of alphanumeric characters but forbids pasting and autofill (the latter of which Chrome luckily just ignores).

I don't know what idiot came up with the idea that disabling paste makes logins more secure (the only justification I've ever heard was about preventing brute-force attacks, which proves an utter lack of understanding of the technology involved), but sadly it's still a thing, and it still leads to people using trivial, easy-to-type passwords.


The justification is a rootkit which intercepts copy-paste but not the password field


Sure, except then it would intercept the copy, not the paste. And it basically trades clipboard vulnerabilities for keylogging vulnerabilities.

A more realistic exploit is a Flash banner on another tab intercepting the password in the clipboard. This is why offline password managers automatically expire the clipboard though.

The danger of discouraging complex or long passwords is far greater than either of these two attacks, both of which rely on the user's system already being compromised.


Commerzbank actually uses 8 characters, but that’s still horrible.

Luckily, you can also require all transactions to be done via HBCI with proper security and a smart card for auth.


Heh, both my banks (Banco do Brasil and Santander) are worse. 6 characters, numbers only! "For my safety" they recommend not using my birthday - how thoughtful.


Here in Spain it's your personal identifier (kind of like a social security number, I guess? You write it on basically every contract you sign) and a 4-digit PIN. Stupidly insecure.


But then you (you = any person) have to consider that it'll block after a few tries.

That's different from a system that never blocks passwords, security questions, and so on.


Great, then it's a DoS attack. Unless it's limited per IP, and even then it's not effective if the attacker has a botnet.


The attacker's first attempt has a non-negligible chance of success. An attacker can just make one attempt against one account and move on to a different account after each failure.


It was a looong time ago, but I remember when some instant messenger application was found to be performing authentication client-side -- i.e. "Hey server, I'm $user. I promise!" and you were in.

I want to say it was Yahoo Messenger but my memory could very well be lying to me.


WhatsApp used to use your device's MAC address for authentication. A quick screenshot of the victim's settings page would be enough to send and receive messages in their name. Since WhatsApp does not store messages after they have been delivered, the victim would never see the messages sent from their WhatsApp number (except by looking at the recipient's phone). You could, however, realize that your account had been hacked when you noticed that some messages were not arriving (they would arrive at the attacker's client only, and WhatsApp will not transmit an already received message again).

The only fix was to buy a new phone and hope nobody would screenshot your settings page again (or to spoof your MAC address, which would not always work).


Ouch!

Also, PayPal really needs to stop using SMS for 2fa.

I expect more from a payment processor that is linked to my bank account.


What exactly is wrong with offering SMS 2FA? I don't have a smartphone, but I have a great little prepaid phone. Why should I get no features just because they are not necessarily as good as it gets? Also, as far as I'm aware, all of the major "attacks" on SMS 2FA come down to the fact that a smartphone can be compromised in many ways. I have much less attack surface: an attacker would need to reprogram my undocumented, exotic-architecture phone via a bug in a parser which is probably too small to contain bugs of that nature. The other way is SMS MITM, which on some networks has been demonstrated feasible, but it basically requires setting up an SDR near the victim, which is a lot more complicated.

With my prepaid provider, customer service is shoddy but would need considerably more to do a number port than just the number.

By removing SMS 2FA you gain nothing, and I lose my only viable second factor.


All the major 2FA attacks I have read about involved social engineering the phone provider's customer service into a number port. The thing is, since this is not a software system but depends on humans, attackers can keep trying until they get a CS rep they can manipulate.

Examples:

* https://www.wired.com/2016/06/deray-twitter-hack-2-factor-is...

* https://www.hackread.com/gmail-id-hacked-google-two-factor-a...


> All the major 2FA attacks I have read about involved social engineering of the phone provider's customer service to number port.

SS7 attacks don't.

[1] http://www.forbes.com/sites/thomasbrewster/2016/06/01/whatsa...

... hackers can bypass the encryption protections by exploiting SS7 to create duplicate accounts that receive all the messages intended for the target phone.

This is done by tricking the telecoms networks into believing the hacker’s phone has the same number as the target’s. That means they can set up a new WhatsApp or Telegram account with the same number and will receive the supposedly secret code that confirms they are a “legitimate” user. From there, they can impersonate their target, sending and receiving new calls and texts.


There is a TOTP/Google Auth 2FA application for J2ME, which will run on many feature phones: http://totpme.sourceforge.net/

In addition, 2FA systems are not limited to devices the consumer already has -- Paypal could easily send you a device that generates a one-time password, or that uses a challenge-response protocol to do so.


Yeah, I'll need to upgrade to a J2ME-compatible phone when they kill GSM here anyway, so I guess that will be an option. Thanks for the link!


From my limited reading on the issue, SMS in the US is unsafe. Not sure if the same can be said of other places like the EU or Japan.


They should be fixing the service providers rather than blaming Google, as in the story a couple of days ago. (Of course, if a nation state is trying to hack you, good luck!)


The attack angles are slightly different but it's not really any safer. The classic scenario is to call the provider to say the SIM card was "stolen" and either have them send the new SIM card to an address you control, or if that is not an option, snatch the new card as it arrives.


As I just mentioned elsewhere in this thread, SMS isn't the problem here. I use a VeriSign dongle for PayPal 2FA, but PayPal still offers the same option of using security questions instead. I was previously under the reasonable assumption that the security questions form was at least handled correctly, but apparently not.


Agreed. NIST stopped recommending SMS 2FA a few months back (https://www.schneier.com/blog/archives/2016/08/nist_is_no_lo...)

I really wish they had Google authenticator or Yubikey support.


They have both.

I have a "Symantec VIP" co-branded Yubikey that I've used with PayPal for years along with an authenticator app on my phone as a fallback.


I have both a yubikey and an auth app but can't seem to find a way to use them with paypal. Do you have some kind of special account or is that a feature bound to a certain market?


I don't think there's anything special about my account. It was a "personal" account when I created it, probably almost 15 years or so ago, then upgraded to a business account maybe 8-10 years ago.

I don't have my password handy right now so I can't login to check, but look for settings related to their "security key". I don't know if they still do or not but at one point they offered a hardware OTP generator (similar to the old RSA SecurID key fob) for a one-time $5 fee. Alternatively, you could use an existing one you already had just by entering its "ID number"; I used the IDs of my Symantec VIP Yubikey and also the app.

Sorry I can't be more specific or give you better guidance. I know that the option does exist, though; perhaps just explore the available options and maybe you'll stumble across it. Good luck!


Everyone except maybe phone apps should stop using SMS for 2FA.


This seems like a good time to rant about PayPal 2FA and its poor usability.

Every time I open the PayPal app I have to wait for a text message and type the code across. That should not be necessary! PayPal should count the app as the second factor and only ask for the password. I am happy to use 2FA with Google because I only have to use it on a new device, or once a month or so in the browser.

Second, they should support 2FA apps like Authy already. SMS-based 2FA is both insecure and unreliable.


Out of curiosity, how much was the bounty? 3, 4 or 5 digits?


I'm using Verisign's VIP Access app (silly name) to generate PayPal's 2FA tokens.

Good thing is it works without access to my phone.

The bad thing is the app has a unique ID that PayPal only lets me use with one of my three accounts.

I wish they implemented TOTP.


Does anybody know how to activate 2FA for PayPal?

In the security section I don't even have that option.


I don't remember exactly where it is in the settings, but it's not called 2FA or anything obvious; it's called something like "PayPal Security Key".


Might be unavailable for your country.


Yup, I'm in Singapore and they told me that they don't have that feature yet.

I find that really ridiculous.


This is scarily simple. Profit indeed for a black hat. Coupled with a recent post about Gmail and how phone carriers are the weakest link, I just don't feel safe with anything but dongle-based 2FA these days.


That doesn't help in this case. I have a VeriSign 2FA dongle for PayPal and it still offers the same option of logging in with security questions.


Unless the master key is compromised allowing anyone to generate authenticator codes, as I seem to recall happened a few years ago with a major provider.


I think you're referring to RSA's SecurID? That was roughly five years or so ago.


Am I the only one who found it odd that the author had internet access, but there was no phone signal? Maybe it's because I'm Kenyan, where phone penetration is much higher than internet penetration, and where internet access over GSM has the biggest share of the internet access pie chart.


This often happens when I'm travelling internationally. If I plan on buying a local SIM card instead of purchasing a roaming plan, I might not have access to my SMS until I get back home.


Get a next-gen phone; they should all do Wi-Fi Calling now. This makes your phone tunnel its cellular link via the internet, and you get full call and SMS coverage.

Of course, 2FA via SMS is a bad and deprecated pattern and needs to die! But you can use your phone overseas without roaming, which is pretty neat.


Not really. If you're American, international roaming fees are usually pretty steep, so often if you want phone service you get a local number. WiFi is ubiquitous, especially hotel WiFi.


> Am I the only one who found it odd that the author had internet access, but there was no phone signal?

This happens to me at home. Poor cell reception, but WiFi.


The author mentioned being in a hotel, so I assume he was using their wifi.


If I had to guess, this flaw was the result of monkey-patching 2FA support in without quite considering all the different scenarios.

I've come across a few authentication bypass vulns that seem similar.


The lesson from this:

Just looping through input arguments from the client, validating them, and then acting on them gives the client control of the code execution.

It's not enough to validate each input argument. You must also verify that all expected parameters are actually there and that no extra parameters can slip into the system. The whole combination must make sense. Enumerating all valid parameter combinations in a record that can be changed easily is one way to solve this.
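A minimal sketch of that idea (parameter names are made up for illustration, not PayPal's actual API): keep a whitelist of the complete parameter sets each challenge accepts, and reject any request whose keys don't exactly match one of them.

```python
# Hypothetical whitelist of the exact parameter sets each challenge accepts.
ALLOWED_PARAM_SETS = [
    {"challenge", "otp"},                                     # SMS code
    {"challenge", "securityQuestion0", "securityQuestion1"},  # security questions
]

def params_are_wellformed(params):
    """Reject requests with missing or extra parameters, not just bad values."""
    keys = set(params)
    return any(keys == allowed for allowed in ALLOWED_PARAM_SETS)
```

With a check like this in front, a request that simply drops securityQuestion0 and securityQuestion1 fails before any per-field validation even runs.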


I'm assuming that the relevant code is simply an if statement checking for the existence of the URL parameters, not even checking whether the security question answers are correct.

    if (isset($_GET['securityQuestion0'])) {
        // success
    }
This is negligence on the developers part and I think they should be disciplined.


Or they designed it to show a variable number of security questions (so management could come along and say "we need 4 questions now" without causing havoc). Then they'd iterate through the responses, verifying them against the appropriate question. Simply forgetting to enforce that the number of questions asked has to equal the number of responses sent would cause the described vulnerability.
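The count-mismatch bug described here is easy to reproduce; Python's zip truncates silently, so this sketch (entirely hypothetical, not PayPal's code) authenticates a client that sends zero answers:

```python
def check_answers_buggy(expected, submitted):
    # zip() stops at the shorter sequence, so an empty `submitted`
    # list means the loop body never runs and the check "passes".
    for correct, given in zip(expected, submitted):
        if given != correct:
            return False
    return True

def check_answers_fixed(expected, submitted):
    # Enforce that every question asked got an answer before comparing.
    if len(submitted) != len(expected):
        return False
    return all(given == correct for correct, given in zip(expected, submitted))
```

The fix is one line: require that the number of responses equals the number of questions asked.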


That doesn't actually make sense, since the exploit is to leave securityQuestion0 unset...


I imagine you could have got the same results with inspect element and deleting the form fields, rather than using a proxy.


What kind of API design is this? POST data should be sent within the request's body over HTTPS, not as a URL query.


Nowhere in the article does it say that the POST data was in the URL. As I understood it, he was editing the request body before the request was sent to PayPal's server.


The URL is encrypted too, so what's the difference in terms of security?


Does it matter in this case?


Short and sweet. Never seen a bug explained so succinctly.


What is the additional phone verification good for if you can bypass it anyhow?

I mean - if you can chose between pw+phone and pw+pw2 ... why bring the phone into play at all?


What could the backend logic possibly be for this to have worked?


Something like this: (PHP felt like the right approach here :p)

  if ($selectedOption == SECURITY_QUESTION)
  {
      if (isset($_POST["SecurityQuestion0"]) && isset($_POST["SecurityQuestion1"]))
      {
          if ($_POST["SecurityQuestion0"] != $answer0 || $_POST["SecurityQuestion1"] != $answer1)
          {
              // invalid answers
              return;
          }
      }

      // note: if neither answer was posted, we fall straight through
      authenticateUser();
  }


More likely along the lines of

  if ((isset($_POST["SecurityQuestion0"]) && $_POST["SecurityQuestion0"] != $answer0) ||
      (isset($_POST["SecurityQuestion1"]) && $_POST["SecurityQuestion1"] != $answer1))


You should use !==.

isset does not handle all corner cases: it returns true for empty strings and false for NULL. You should use a framework helper like Laravel's Input::has('key').

By design, the type of security challenge should not be an option. The API endpoint should not check for $selectedOption == SECURITY_QUESTION; in that case you're still vulnerable to the same attack.

You should always return something; a bare return; is bad.

Finally, you should use something safer than PHP, since a mistake can cost you money.


likely using the following pseudo-ish code:

  # possibly done using a session variable
  security_questions = []
  # first question
  security_questions.push({question: answer})
  # second question
  security_questions.push({question: answer})
  
  forEach(security_questions as x)
      if(!validate_answer(x))
           return false;

  return true;


Hopefully not, but I've seen worse.

    def validate_security_questions(question_0, question_1):
        if not question_0 or not question_1:
            raise AuthException('Invalid security questions')

    try:
        validate_security_questions(question_0, question_1)
    except AuthException as ex:
        # TODO: present error to user; swallowing it here authenticates anyway
        pass


  return all([is_valid_answer(q, a) for q, a in params])


If there's a SMS challenge, process.

If there's a question challenge, process.

If no exceptions were thrown, you're authenticated.
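That fall-through shape would look something like this (a guess at the structure, with made-up names, not PayPal's actual code); with both challenge fields absent, neither branch runs, no exception is raised, and the request "succeeds":

```python
class AuthException(Exception):
    pass

def two_factor_check(params, expected_otp, expected_answers):
    # Each challenge is only verified if the client bothered to send it.
    if "otp" in params:
        if params["otp"] != expected_otp:
            raise AuthException("bad SMS code")
    if "securityQuestion0" in params:
        answers = (params.get("securityQuestion0"), params.get("securityQuestion1"))
        if answers != expected_answers:
            raise AuthException("bad answers")
    # Bug: reaching this point with NO challenge params also authenticates.
    return "authenticated"
```

The missing piece is a final check that at least one challenge was actually presented and verified.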


reminds me of this paypal 2fa exploit from a couple years ago:

https://duo.com/blog/duo-security-researchers-uncover-bypass...

because it was the same simple exploit on a different field.


It's 2016. They are a financial company. Why aren't they implementing TOTP codes? NIST officially deprecated SMS.
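TOTP is small enough to sketch from the RFCs with nothing but the standard library; this is RFC 4226 HOTP with the RFC 6238 time-step counter on top (a reference sketch, not any particular vendor's implementation):

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(key, counter, digits=6):
    """RFC 4226: HMAC-SHA1 over a big-endian 64-bit counter, dynamically truncated."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32, interval=30, digits=6):
    """RFC 6238: HOTP with the counter derived from the current 30-second window."""
    key = base64.b32decode(secret_b32, casefold=True)
    return hotp(key, int(time.time()) // interval, digits)
```

The hotp function reproduces the RFC 4226 Appendix D test vectors (e.g. counter 0 with the ASCII key "12345678901234567890" gives "755224"), so there is little excuse for a financial company not to ship it.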


Bypass? Haha, it has been quite a while and they still haven't even enabled it for my country. Same goes for Apple.


Oh my god.


This is surreal.

Does PayPal outsource their web development to an anonymous script kiddie on 4chan?


no, they outsource to cheap devs in India


I'm happy to see that the article doesn't have any BS that I have to ignore. It's a simple page that only tells the required story. As a reader, I want more people to skip the filler and get to the subject.


That only works if you can assume your audience has the necessary context.

That being said, I've often thought Hacker News should have a nice crowd sourced tldr summary at the top of all the comments.


Well, here the succinctness is a part of the story. It emphasises just how basic this bypass is.

For what it's worth, I thought the "I was in a hotel..." story was superfluous and probably not true.


Thank you to the author for reporting this bug in a responsible way. They are a credit to our profession.



