Hacker News
Ebay posts every character a user types into the password box (slashcrypto.org)
497 points by slashcrypto on June 29, 2016 | 201 comments



There is also the possibility of timing attacks on either type of request. By the length you can tell when the HTTPS request is most likely POST /PWDStrength, and from the times that the request is initiated, you can guess at some characteristics of the password (maybe they stopped typing for a second to verify requirements after typing 7 characters; maybe they stopped after 8 because they have to move to the numpad on their keyboard).

edit: the best solution for this is probably to wait a specified interval between requests, rather than firing one on each character.
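That throttling idea could look something like this minimal trailing-edge debounce (a sketch, not eBay's code; the timer functions are injectable so the behavior can be tested without real delays):

```javascript
// Trailing-edge debounce: the wrapped function runs only after `wait` ms
// have passed with no new calls.
function debounce(fn, wait, setTimer = setTimeout, clearTimer = clearTimeout) {
  let id = null;
  return (...args) => {
    if (id !== null) clearTimer(id);
    id = setTimer(() => { id = null; fn(...args); }, wait);
  };
}

// Hypothetical wiring: the strength request fires only after the user
// pauses typing for 400 ms, instead of once per keystroke:
// input.addEventListener('keyup', debounce(checkStrength, 400));
```

With this, a burst of keystrokes produces a single request, which removes most of the per-character timing signal.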


Came here to say this.

It is feasible to reconstruct passwords from timing information alone. This has been done against e.g.

SSH http://people.eecs.berkeley.edu/~daw/papers/ssh-use01.pdf and

TLS https://www.schneier.com/blog/archives/2010/03/side-channel_...


That's a very interesting interpretation of the linked papers.

While timing information may make brute force attacks against the passwords easier, it is not feasible to reconstruct passwords based on the timing information exposed by Ebay.

It is also worth noting that the ability to perform more efficient brute force searches doesn't really matter in the case of Ebay, as it will not make such attacks feasible over the internet.


Attacks only get better.


Sometimes they stay at exactly the same level forever.


It's a classic quote from Bruce Schneier. I should have attributed it. I thought the crowd would get it.


While often attributed to Schneier, he attributes it to the NSA https://www.schneier.com/blog/archives/2011/08/new_attack_on...


It is, and will remain, impossible to deduce a victim's password from such a small timing sample.

There simply isn't enough data.


I do trust you aren't an Ebay security team? ;)

http://www.wired.com/2011/10/iphone-keylogger-spying/ etc.


>I do trust you aren't an Ebay security team? ;)

Luckily, not my kind of a gig.

>http://www.wired.com/2011/10/iphone-keylogger-spying/ etc.

This attack depends on being able to identify individual keys so it's not really applicable here. However, a similar attack might be possible here if not for the very small sample size.


It was guessing pairs of keys. But anyway.


There are many failure modes for encryption that most people rarely think about. E.g., if someone encrypts either the US constitution or Hamlet, you can tell from the traffic size which it was. For a physical example: if college rejection letters come as a letter but acceptance letters come as a package, it's obvious to your mail room who got accepted.
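The standard mitigation for that length side channel is padding to fixed-size buckets before encryption. A sketch (illustration only; real schemes also need unambiguous pad removal, e.g. a length prefix):

```javascript
// Pad plaintext to the next multiple of `bucket` bytes, so ciphertext
// length reveals only a coarse size bucket, not the exact length.
function paddedLength(n, bucket = 256) {
  return Math.ceil(Math.max(n, 1) / bucket) * bucket;
}

function padToBucket(msg, bucket = 256) {
  return msg.padEnd(paddedLength(msg.length, bucket), '\0');
}
```

Two messages in the same bucket become indistinguishable by size; the bucket granularity trades bandwidth against how much length information leaks.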

This is probably secure, but non-standard password exchanges open up a lot of possibilities.


Is this another point in the bucket for password managers? Harder to leak any timing related information when a browser plugin auto-fills the form...


Yes, a password manager likely negates this kind of attack. Although the timing info likely gives away that you're using the auto-fill (which isn't useful, just interesting)


I reproduced it for fun with BugReplay, the site I've been working on for the past year: https://app.bugreplay.com/shared/report/3efa632d-5b51-45f1-a... Checks out, password is in the GET param.


Awesome product. Signed up for the beta.

One small idea: Once the network traffic starts flowing, it's difficult to switch to the Javascript tab because it's constantly flowing off the screen. Maybe make the tabs fixed in place? Or have a check mark that can make them fixed or unfixed?


That's a good point, I've noticed that as well. I added that to the TODO list, I'd love any further feedback if you have any thoughts.


Very cool app! I've signed up for the beta; this could be a game changer for my support team.


Thanks! I got the notification, I'll send you over a registration invite, would love to get your feedback.


Perhaps I'm missing something, but this BugReplay appears to provide network request information as requests happen in the video.

I expected network requests to occur in the Network Traffic window, however no requests ever showed up.

Can I make a BugReplay of this BugReplay?


Sure, I'd love that, if you register for the beta at https://www.bugreplay.com/ I'll send you a registration link.


Very cool. You might be able to get some traction by using this to show some examples of common web vulnerabilities (just like this one) in the wild. I'd read a blog with those religiously. This was great.


Thanks! That's a great idea for our blog.


Cool. I was just using ScreenFlow today to capture a video of a bug for my team. This would have saved me a lot of time. Just signed up for the beta!


Shame it's not mobile optimised, I'd love to check this out right now. Guess I'll have to wait until I get to my laptop


Yeah we've put in some work to make it usable on mobile, but I wouldn't call it optimized for mobile yet.


Really nice work on this. I would have loved this tool on my last big web project.


I also just signed up, would love to try this out.


I signed up also


I'll send you out a registration link shortly.


Dear eBay,

Sending a request on each keyboard event to determine password strength is not only a security vulnerability, it's also poor design. APIs should primarily be used to consume external resources, not stand in for client side functionality.

If providing an API for password strength is important (i.e. you want to guarantee the same behavior across clients), think of your business logic as a resource, not a service. Rather than force the API to figure it out, have the API deliver the criteria for this behavior (regex strings, bounds on password length, etc.) and let your clients figure it out. This addresses the security concern, decouples your client-side and server-side logic, and improves performance across the board by reducing network requests and absolving the server of this responsibility.
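The "criteria as a resource" idea could look roughly like this (field names are made up for illustration; a real API would serialize the regexes as strings in JSON):

```javascript
// Hypothetical criteria delivered once by the server; the client
// evaluates them locally, so no password bytes leave the browser.
const rules = {
  minLength: 8,
  maxLength: 64,
  requiredClasses: [/[a-z]/, /[A-Z]/, /[0-9]/], // each must match
};

function violations(password, rules) {
  const errs = [];
  if (password.length < rules.minLength) errs.push('too short');
  if (password.length > rules.maxLength) errs.push('too long');
  for (const re of rules.requiredClasses) {
    if (!re.test(password)) errs.push(`missing ${re}`);
  }
  return errs;
}
```

An empty result means the password meets the delivered criteria; the server still re-validates on submit.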

If you must go with this design, at least move from a `GET` to a `POST`, like others are suggesting.

Just my opinion,

Matyi


This is a terrible idea. It would allow a client to ignore the requirements, and submit an invalid password.


You have heard of server-side validation, right?


Sending your password as you type it as a GET request query parameter seems awfully hazardous. As you point out the password will appear in all manner of places, such as HTTP server logs. As the username/email is not included an ops person might not directly know from the GET request alone what user the password belongs to. It is not difficult to imagine however that they have enough info to correlate the IP address of the password strength request with a user.


Maybe they plan to cache the responses! I mean, from a POST to a GET, there is clearly a trend.


> there are some reasons behind our current solutions but I wouldn’t be able to give you more details on it.

I'd be curious to know if anyone here can come up with a good enough reason for sending out the user's email & their password(-prefix) at every keystroke?


This actually just sounds like a really bad implementation. Some front-end dev wasn't sure what's a good timeout to fire the password to the server on, so he or she just put it on keypress.

And then they included the email too, so the backend could look up the user and build a custom password blacklist for this specific case (e.g. no personal details allowed).

I actually don't disagree with POSTing a password to check its strength server-side. It might be a bit "cheaper" in some cases.

But sending on every keypress and including the email - that's just silly.


> Some front-end dev wasn't sure what's a good timeout to fire the password to the server on, so he or she just put it on keypress.

Ebay is not a two bit software startup, it's an eCommerce powerhouse with extensive QA processes.


I am well aware of that, and agree with you. There had to be some seriously bad decisions made here, and it certainly doesn't look like something someone at a big company would do.

Yet those kinds of bad decisions are made every day, by people all around the world. I wouldn't give the benefit of the doubt to anyone these days.


Yeah, but being a powerhouse doesn't mean they don't introduce silly bugs. They do. E.g. on Facebook, a year or two ago, you could use dev tools to change a hidden input field's value when writing a post and post to anyone's timeline (this story got tons of coverage for a bunch of reasons, the vulnerability itself not being the prime one). Does it seem like a silly bug? Definitely. But it happened; it's not the first one and not the last one.

So it's a bit naive to assume devs at popular companies don't write bugs, that they are superhumans, etc. :)


I worked at a Fortune 100 that does billions in online sales. You'd be surprised at how often little, improper things like this can just percolate into production. And then they're defended by the people who allowed it to happen.


that doesn't mean they don't have bad implementations


I've seen some pretty janky pages on eBay.com, and the windows 'Turbolister' software is one of the worst things I've had the displeasure of using.

eBay is sufficiently large, and old-enough, to have substantial tech debt.


QA just assures that the deliverable meets the spec. It's perfectly possible to write an excellent implementation of a terrible idea.


or a terrible implementation of a good idea.

Manager: "We need password strength validation."

Tech: writes code to send each character of the password to the server as cleartext

Tech: "Done"


>I'd be curious to know if anyone here can come up with a good enough reason for sending out the user's email & their password(-prefix) at every keystroke?

I wonder if it ties into their fraud detection systems somehow.

Fraudsters are lazy - so lazy that, for a good long time, you'd see the exact same few recycled photos of counterfeit items being used in item descriptions. No idea if that's changed recently.

Anyway, going back to my main point: I wonder if something about password entry and email address choice serves as an early warning flag.

I'd kinda be surprised, but I could imagine it potentially being useful.


That might be it. They might use it to detect whether someone is pasting a password in vs typing one in, which might help guard against bots / attackers stealing someone's ebay account.

Which would explain why Ebay would be secretive. Because the detection is easily mitigated if attackers become aware of the detection.


I guess that fraud could be a big part of this; getting every character in sequence says way more about the end user than getting only the final password. I wonder how this affects password manager users, though.


Same thought here. I was imagining that some sort of timing analysis / fingerprinting could possibly be going on.

But to what end? A valid password is a valid password whether it comes from user that takes 2 minutes to type it in, a password manager, or a bot.


My guess is that they're doing bot detection or something similar using things like the additional timing information and detection of typing errors.


If this was the case, I would think that a single request where they record the timing between characters clientside and post that timing information along with the password would work better. Timing incoming POST requests as part of a single password reset "session" seems fraught with problems, I can't see how you could really trust the timing numbers you would get. I type my password pretty fast generally and I wouldn't be surprised if the margin of error on that timing is a significant percentage of the average time per key press.

Of course you can't trust anything from the client and both methods are subject to tampering, I'm not sure which is more tamper resistant.
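The client-side alternative described above could be sketched like this (names are illustrative; the clock is injectable for testing):

```javascript
// Record inter-keystroke intervals locally and ship them once with the
// final submission, instead of inferring timing from request arrival
// times on the server.
function makeKeystrokeTimer(now = () => Date.now()) {
  let last = null;
  const deltas = [];
  return {
    onKey() {
      const t = now();
      if (last !== null) deltas.push(t - last);
      last = t;
    },
    payload() {
      return { deltas: deltas.slice() }; // e.g. POSTed alongside the form
    },
  };
}
```

This removes network jitter from the measurements, at the cost of trusting the client to report them honestly, which is exactly the tampering trade-off discussed above.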


Both use cases could justify calling an API at every keystroke, where you send out either the user's identifier in the one case (to extract the timing info), or the password(-prefix) in the other (to check for typing errors). Linking together these two is where it becomes especially dangerous.


But then what about people who rely on password autofill and password manager ctrl+v users?


They prefer to implement their password strength measurement server side for some reason.

Maybe they don't want to make it public by doing it in JavaScript.

Or they want to disallow reusing previous passwords, without leaking them to the client.


You can use a client check to check for the basic requirements, like minimum and maximum length, characters required or allowed etc. Then when the user submits his password, you can do a serverside check.


The reason for using a server-side solution is for a password strength indicator. You need the full algorithm to run against the current entry, and every user-friendly implementation does this on every character input so you know when what you have typed is "strong enough". I'm not particularly a fan of password strength indicators in general, but if you're going to do it at least do it cleanly.


Is their algorithm so complex that it can't run in a reasonable time using client side javascript? I have trouble thinking of anything that isn't vastly over-engineered and runs slower than the network lag probably is.


Perhaps their algorithm isn't written in javascript?

Or perhaps they want to check the user's password choice against a multilingual dictionary, and they decided to save the user the multi-megabyte download?


No, it's more about leaking information to the JS client. If, for example, their password verification rules stipulate that you can't reuse any of your last N passwords, then they would need to make this check server-side as they don't want to provide that information to the client.


Which sending a POST on every keystroke won't really help with anyway, because they can't tell that you typing "h-u-n" will match your old password of "hunter2", assuming it's properly hashed.


Oh, I agree. Every keystroke seems like overkill.

If I were in charge of both requirements and implementation I'd debounce the input by 300-500ms and display a "loading" spinner in the password complexity box until the debounce timer and network request had fully resolved.

I was just trying to explain why, given some business use-cases, doing password validation on the client isn't always possible.


I think an overly complex password strength checker is probably doing it wrong. Besides, when it comes to password complexity you need to tell the end user what the rules are!


Combining a password strength meter with password rules is the sort of thing where most implementations are going to get it wrong. It clutters up the UI a lot, to the point of being too complicated for "average" users to understand.

I prefer seeing one system or the other, but not both.

If you have rules, don't use a password strength meter. Rules are going to be a binary result - "yes the password is good enough", or "no, this password is not acceptable" - which is represented by showing which rule(s) were not respected by the user, or a green checkmark.

A password strength meter can be used when there are no rules enforced, to let the user decide for themselves whether that red progress bar showing a weak password is good enough for their needs. The meter doesn't need to be binary, with red/orange/yellow/green stages. The only reason I don't like password strength meters in general is that there is no standard across sites/services as to what constitutes a strong password. One site will show "password123" as weak, while another will say it's fairly strong. Each implementation has its own arbitrary algorithm that is likely not representative of "true strength".



This might get downvoted because it's just a link, but:

zxcvbn is actually a great password strength library, JavaScript, client-side, and only about 400 kB or so last time I checked (compressed, including (!) dictionaries). It was developed by a Dropbox engineer for the password setting/changing dialog at Dropbox, and open sourced, if I'm not mistaken.

Again, this is a great tool: client side, small (smaller than most webpages and ads these days at any rate), and it also allows you to provide a list of custom blacklisted words not to use in the password (e.g. username, site name, etc.).

AFAIK, zxcvbn really is the gold standard here.

Given this, I don't really see how a server-side check is better or necessary. Ebay really ought to provide a much better answer than "trust us" here.


You can do a lot of server round trips before you reach 400 kB.


What's better: A) compromising security but using less bandwidth, or B) using more bandwidth, but staying secure?

Besides, the password strength js can easily be loaded async.


Yes, but these 400 kB won't contain any personal data.


Yes, but why do it on the server and not on the client?


Why do it on the client and not on the server? It's just an implementation decision. I would never bug someone for choosing either one over the other. Most implementations probably have issues with the employed algorithm, best not to waste effort debating the delivery method.


> Or they want to disallow reusing previous passwords, without leaking them to the client.

As an aside, I have always wondered how it is possible to disallow reusing previous passwords if the password is only saved on the server as a salted hash, which is recommended I believe.

Is it possible?


Building a user's password dictionary?


With common misspellings.


Timing? Perhaps they are timing the typing speed in some way.


How common is it to have internet fast enough that a POST request completes between characters? I would expect it to take a second or two to complete, enough time for a human to type about 5-15 characters, making the timing information completely meaningless.


This would require that after a second or two the POST requests become instant, and after that second or two they travel back in time to be delivered simultaneously with all the other requests.

The timing information, despite possibly arriving with a bit of a delay, will be just fine. Not only that, but if they really wanted to they could just grab the TCP timestamps.


Well, that's going to trigger on a large group of people that use password safes.


Not a good enough reason, or sane, but what if it's a honeypot of sorts. Perhaps eBay is using the timing information themselves to flag hack attempt sources...


Twitter also sends the password + email + name on each keypress once the user has entered at least 6 characters on its signup page. [0]

  [0]: https://twitter.com/signup


Ugh, I had to do this once on the sign-up page at a small company I worked for about 10 years ago. Ever since then, I've been wary of beginning to fill out any forms unless I really, really want them to have the info. I still think it's messed up to store user data that hasn't been submitted.


Yeah, there are shady techniques from companies offering cart abandonment solutions that do exactly that (e.g. VEInteractive). If you type in an email address and don't submit the form, they'll still send you chaser emails.

I think there are valid reasons to store incomplete form data, but I don't think it should be used for reasons the customer did not intend (e.g. receiving emails).


This one is interesting. Looks like they don't send any requests for username. For email address, they have a delay before sending to the server to see if it's a used email address (if you type quickly enough, it will only send a single request to validate the email).

For password, they start sending every character you type once the field has 6 characters in it. It then sends your full form details on every keypress (plus it has a delayed send to the same password_strength call, similar to what they do for emails). So if you type your password slowly, it will send your details twice for every keypress.


> This is not a security vulnerability itself because I think they have implemented this for some reason

IMO just because the behavior is by design doesn't mean it's not a vulnerability. That said, this one seems like a grey area. I'd be worried about password information leaking by making TLS attacks easier in this mode.


This only affects a specific form that the user might interact with once a year (and that's being really optimistic), I don't really see it generating enough requests to make TLS attacks easier.


If it increases the attack surface at all, it makes it easier. Being that this site facilitates monetary transactions, I would hope they would be trying to limit their attack surface in any way possible.

I think the real point here is that there are more secure solutions. Saying that it's not all that less secure isn't a great argument.


>I think the real point here is that there are more secure solutions. Saying that it's not all that less secure isn't a great argument.

I'd say it's a very good argument, this appears to be a non-issue that doesn't justify the dev time spent on "fixing" it. We don't live in a world with infinite dev resources.

Edit: Since someone appears to disagree, how would you exploit this "bug"?


"Let us help you make your password more secure by sending it over the wire a gazillion times."


Google does the same. They regularly send your password to their server to rate it. A curl example is provided below.

I think I've noticed some websites using Google's API to rate passwords on their own site, but I can't recall where I saw it.

  curl 'https://accounts.google.com/RatePassword' -H 'Content-Type: application/x-www-form-urlencoded' --data 'Passwd=jbcfaihrwefgbGWETZHGAESjbnajfcw24704%$§&%§!vf&Email=notme@useless.domain&FirstName=Hacker&LastName=News'

or another endpoint:

  curl 'https://accounts.google.com/InputValidator?resource=SignUp' -H 'Content-Type: application/json' -d '{"input01":{"Input":"Passwd","Passwd":"GoogleBatteryHorseStaple","PasswdAgain":"GoogleBatteryHorseStaple","FirstName":"Hacker","LastName":"News","GmailAddress":"i-have@none.yet"},"Locale":"en"}'


"Sending your password to the server" -- obviously required, that's what passwords are for.

"Sending each character of your password to the server, before you explicitly agree to submit" is quite another.


Parameters sent via GET can get cached by proxies and they appear in log-files.

Not to argue in favor of sending sensitive data via GET, but I think it is worth pointing out that third-party proxies cannot see the URL or other parts of the HTTP headers or body when the connection is using HTTPS.


But there is a good chance that these GET parameters are logged by the webserver. Even if these servers are very secure and strictly monitored, one bad employee can cause a lot of trouble.


Perhaps, but an employee in that position can steal credentials even without GET logs.

This entire discussion is predicated on a contradictory assumption, that an employee would be corrupt enough to steal credentials from web server logs, but not corrupt enough to steal the same credentials from any other source (inc. database access).

It is like letting a criminal into your home, then being concerned that they might see your security system's pin written on a sticky note on the fridge. Sure, it is a problem, but ultimately the criminal doesn't need that pin to steal your shit, you already let them walk right in.


GET logs end up in all sorts of places. I would not be at all surprised if anyone working at EBay could get access to them. Not to say they should have access to them, but access to the logs is different from access to the server. Log reading permissions have a rightfully lower standard than ssh/deployment permissions.

(But part of what makes it OK to have more people with access to the logs is you don't put things like username/passwords for all of your customers in the logs.)


With that logic it doesn't make sense to store passwords encrypted in the DB either. If an outside attacker gains access to a system, it would really suck to have a bunch of passwords sitting unencrypted in logs. Defense in depth and all...


Oftentimes server logs are shipped to other locations (such as central log stores), sometimes for compliance purposes. I wonder if these requests are logged and sent to some other location; they may be visible to a great many people who don't have direct server access.

In general don't log sensitive information because you don't know how those logs will be used.


Not exactly. In corp/uni environments there may very well be an SSL-intercepting proxy, and it works because in a corporate setting you have the fake CA cert installed by IT, and at a university you often have to accept a cert when first connecting to the university VPN.


In that kind of environment all bets are off already and recovering security is hopeless, no?


In this scenario the distinction between GET and POST becomes irrelevant.


No, it does not, because usually an appliance will have some sort of logging - which will usually include the URL, which in turn contains the GET parameter.


If there is an inserted CA then I believe any cert from any website can be MITM'ed and there are appliances that do this.

From PaloAltoNetworks website:

"... firewall proxies outbound SSL connections by intercepting outbound SSL requests and generating a certificate on the fly for the site the user wants to visit."


I'm writing an HTML5/JS end-to-end encrypted chat app for just this scenario. It won't stop a nation state modifying requests in transit and injecting their own JS, but it will probably stop a nosy sysadmin.


"It won't stop a nation state modifying requests in transit and injecting their own js"

It should stop a nation state if you serve up the JS via HTTPS and use certificate pinning.


Pinning doesn't work against the "corporate CA" scenario, at least if the user is using Chrome:

Chrome does not perform pin validation when the certificate chain chains up to a private trust anchor. A key result of this policy is that private trust anchors can be used to proxy (or MITM) connections, even to pinned sites. “Data loss prevention” appliances, firewalls, content filters, and malware can use this feature to defeat the protections of key pinning.

https://www.chromium.org/Home/chromium-security/security-faq...


Thanks for that, I had no idea. I wish there was a way to grab the certificate in JS, just so you could alert the user that they are being MITMed. As it stands, I will have to instruct them to check manually.


If they're MITMing they could replace whatever JS you put with something else, by definition.


While this is true, a parent comment from the user you're responding to has already explicitly mentioned this as a potential issue:

"It won't stop a nation state modifying requests in transit and injecting their own js but it will probably stop a nosy sysadmin."


You could use https://openpgpjs.org/ for example to provide in-browser cryptography between user and server.


I've considered it, but the overhead on the server side would be too much for the ad-free (ads = loss of privacy, IMO) and non-monetized vision I have. I use Diffie-Hellman to distribute an encrypted master key to each client that is used initially for the chat. I'm going to tell users not to consider that private (I can certainly man-in-the-middle it from the server, as I'm the one who generated the key), but they can use that key to discuss which private key they will use and then enter it manually ("my brown dog's name + my birthday with only the first letter of my last name capitalized", for example). In the master key I sent earlier there is also a salt to add to the hash of the password they select, so even if the key they pick is weak it still might protect them. Everything is wrapped up in AES-256 thanks to the Stanford Javascript Crypto Library.

I may use openpgpjs down the line for private messages within rooms. I also want to experiment with WebRTC for private messages, and maybe offer some opportunistic peer-to-peer connections, but I haven't gotten that far yet.


Wait, what? So I guess our admin can read my gmail :/

Will pay attention to what certs are being served from now on...


you got a point, i will do a short edit here.


Why is it that we didn't improve HTTP Digest Auth, but instead let everyone implement their own mechanism, where the number of sites using a challenge-response protocol is not worth a mention? Do we have to wait until 2018 before https://tools.ietf.org/id/draft-yusef-httpauth-srp-scheme-00... can be a thing? Not saying SRP is the best option, but compared to what's implemented on websites right now, it is much better.

EDIT: I'm probably missing details, but surely some secure challenge-response protocol must be available for broad implementation in browsers without concern for patents, right?


SRP is an "Augmented PAKE" which does not require the server to ever see the plaintext password. I'm not aware of any others that are claimed to be patent-free.


Avoiding the patents on other protocols seems to have been one of the goals, but then Thomas Wu patented SRP itself: https://www.google.com/patents/US6539479 which is set to expire in two years minus 15 days (filed Jul 14, 1998).


Ah right. It is patented, but the most common application of SRP can be used for free.


2 or maybe 4 years would be reasonable to earn back (some or all of) the investment, and allow others to improve upon and maybe even patent the new invention. As it stands, whole industries are held back due to 20 years for patents.


Wouldn't that be 2 years and /a month/ minus 15 days? :)


You're right.


For those who didn't read TFA - it does this for the password strength checker when creating a new password, not when logging in.

Honestly, I can see the challenge here. A truly robust password strength checker would use dictionaries, making it too heavy to run on the client, and for usability reasons you'd want it to check on keypress.

But it would be nice at the very least if they'd send it as POSTs in the body, not GET parameters.


> But it would be nice at the very least if they'd send it as POSTs in the body, not GET parameters.

If the GET is being sent via XHR over SSL, how is doing a POST any more secure?


The general argument here is server logs. You'll see the entire URL show up for a GET. By using a POST and putting the data in the request body, it won't show up in logging.
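For a concrete picture of the concern, here is what a hypothetical access-log line for such a request might look like (the IP, path, and values are invented for illustration):

```
203.0.113.7 - - [29/Jun/2016:12:34:56 +0000] "GET /PWDStrength?password=hunter2 HTTP/1.1" 200 43
```

With a POST, a default-configured web server would log only "POST /PWDStrength"; the body, and hence the password, stays out of the access log.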


My guess is simple things like the url showing up in server logs etc.


Is a dictionary really that heavy? (Honest question.)


The ones used by security experts are in the GB range.

Obviously you could do more efficient approaches like converting characters to recognize that P@ssw0rd is just Password, but then you've increased the algorithmic complexity you're sending to the client. If you want to get super-fancy, you've got to find word boundaries and whatnot to find that MyP45512345 is really just MyPass12345.

Of course, the simple brute-force approach (a server-side check of whether my password is in this 5 GB db of passwords) might be too slow for this use case anyway.
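The character-conversion idea mentioned above can be sketched as a naive substitution map (illustration only, not any site's actual algorithm; as noted, it cannot tell a digit used as a digit from one used as a letter):

```javascript
// Undo common leet substitutions before a dictionary lookup, so
// "P@ssw0rd" is checked as "password".
const LEET = { '@': 'a', '0': 'o', '3': 'e', '4': 'a', '5': 's', '$': 's', '7': 't' };

function normalizeLeet(pw) {
  return pw.toLowerCase().split('').map((c) => LEET[c] || c).join('');
}
```

Real checkers like zxcvbn go further, trying multiple candidate substitutions and scoring word boundaries rather than applying one fixed map.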


> The ones used by security experts are in the GB range.

Citation? The only multi gigabyte "dictionaries" I've seen are rainbow tables. I'm genuinely curious why you'd need multiple gigabytes when the Dictionary.com app a few years ago was no more than 200 megabytes.


The (most excellent) zxcvbn password strength checking library [1] (developed by an engineer at Dropbox) is 400 kB (compressed) including dictionaries.

[1] https://github.com/dropbox/zxcvbn


Depends on how big the dictionary is. :)


I knew there was a reason I always prefer POSTing data as opposed to GET query params.

It still gives attackers the knowledge that if they can get access to the logfiles, they can see passwords. Then the problem becomes getting access to the logfiles!

Any leak of relevant information about security is of potential value.


It's less of a concern about an attacker gaining access to the log files, as it is that passwords should simply not be stored plaintext... anywhere. One doesn't really need to ask "why", it's just good common sense.


I might even go as far as saying that passwords should simply not be stored at all anywhere.


I did some penetration testing on the Snapchat application last year. It was also very chatty, sending a request on every key-press on the create/login screens.


I hope they pad the requests with some random data, otherwise they are sending encrypted requests with very little entropy.


They almost certainly do this to detect bots trying to change passwords. If a bot tries to change passwords for hundreds of accounts at once, it will end up sending thousands of requests to the password checker and get IP-banned, and the server can then silently reject every password it submits, so as not to tip off the attacker that they have been detected.

It is a terrible way to implement bot detection, but with eBay owning PayPal they are on the hook for lost revenue, so bot detection probably takes higher priority than other security concerns: the economic impact of bots that steal hundreds or thousands of accounts at a time is that bad for them.
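The hypothesized scheme could be sketched roughly like this (the threshold, the responses, and the length rule are all invented for illustration; nothing here is known about eBay's actual implementation):

```python
# Rough sketch of the hypothesized bot-detection scheme: count strength-check
# requests per IP and, past a threshold, silently report every password as
# weak so the attacker isn't tipped off that they've been flagged.
from collections import defaultdict

THRESHOLD = 100  # hypothetical per-IP request limit

class StrengthChecker:
    def __init__(self):
        self.requests_by_ip = defaultdict(int)

    def check(self, ip: str, password: str) -> str:
        self.requests_by_ip[ip] += 1
        if self.requests_by_ip[ip] > THRESHOLD:
            return "weak"  # silent rejection: indistinguishable from a normal response
        return "strong" if len(password) >= 8 else "weak"

checker = StrengthChecker()
for _ in range(200):
    checker.check("198.51.100.9", "correcthorsebatterystaple")
print(checker.check("198.51.100.9", "correcthorsebatterystaple"))  # "weak" despite a strong password
print(checker.check("203.0.113.7", "correcthorsebatterystaple"))   # "strong" for a fresh IP
```

A real deployment would use a sliding time window and distributed counters rather than an in-process dict, but the silent-rejection idea is the same.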


How has no one here made the observation that the reason for this is a true password strength check, using existing password-distribution data that is prohibitively large to send to the browser?

They're not doing the wrong thing, and the risk of side-channel attacks on this infrequent behaviour (i.e., not authentication) is trivial compared to the risk of high-entropy passwords that are also highly reused, and are thus vulnerable to trivial brute-force attempts.


All those people who think that eBay doesn't want anyone to work out its password-complexity algorithm: you just write a script that works out the minimum number of characters and then submit a password list to the strength service. Then you'll know all the strongest passwords according to eBay, and from there you can hopefully find patterns to construct rules for running dictionary cracks.


After reading the public post and these comments, do you think they (eBay) will give a better explanation... or better, any explanation... as to why they do this? Passwords are becoming difficult to maintain, even with a password manager. They should, at least, obfuscate it in some way.


One that I use regularly that seems to be missing is 'set <N> <hour/minute> timer'. Also, an amusing request from my father (who uses voice commands far more than I do): 'is there some way to print all of these out?'


I'm guessing you were also reading the post about Google Voice commands before this one, and got confused. https://news.ycombinator.com/item?id=12000264


I wonder if there is any example of a large corporation taking action after a flaw is submitted via an online email form? I think those forms are sent to people whose job it is to disregard their content as much as possible.


How do you know that they didn't disable logs for this URL?


It is far safer not to do it in the first place. I can easily see a new sysadmin coming in, wondering where all the logs are for a URL, and enabling it.

Or they send their logs to an analytics firm. The firm says, innocently enough, "it doesn't look like we are getting all the logs", and then it is turned on.

There's a lot of ways a policy can be circumvented just because people were trying to do their jobs and didn't know better. Also, it is highly unlikely that they have another process to confirm that they aren't logging that URL.


Because (a) it's very unlikely, given the use of POST elsewhere, that they even realised this was using GET, and (b) other services can log URLs, such as your browser's history. By default it may not matter, but perhaps if extensions get involved...

It's true, this isn't a straightforward vulnerability but it doesn't seem to be well-considered given the inconsistent use of both GET and POST for the same terrifying call.


How do you know that they did? It's up to eBay to state that they changed to a non-default setup, not on us to assume so without asking.


> Checking the password completely on the server is OK

I don't even agree with that; I think the best practice should be to hash it on the client side before sending it to the server.


This is not really ideal either, because the hash becomes the password. If an attacker got the hashes from your DB, he would only need to send the stolen hash to the server to authenticate.

The ideal way to deal with passwords would be something like SCRAM [1], but you are adding a bunch of complexity on the client side, and you'd need to trust your JS libraries.

[1] https://en.wikipedia.org/wiki/Salted_Challenge_Response_Auth...
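A drastically simplified sketch of the challenge-response idea (this is NOT real SCRAM: it omits channel binding, the server-side extra hash layer, nonce bookkeeping, and iteration negotiation; all parameters here are illustrative):

```python
# Simplified challenge-response sketch. The point: the password itself never
# crosses the wire. (In this simplified version a dumped key would still let
# an attacker authenticate; real SCRAM stores only a hash of the client key
# to prevent exactly that.)
import hashlib, hmac, os

def derive_key(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# Registration: the server stores the salt and a derived key, never the password.
salt = os.urandom(16)
stored_key = derive_key("hunter2", salt)

# Login: the server issues a fresh nonce; the client proves knowledge of the key.
nonce = os.urandom(16)
client_proof = hmac.new(derive_key("hunter2", salt), nonce, hashlib.sha256).digest()

# The server verifies against its stored key; a fresh nonce defeats replay.
server_check = hmac.new(stored_key, nonce, hashlib.sha256).digest()
print(hmac.compare_digest(client_proof, server_check))  # True
```

The JS-trust problem mentioned above remains: the server ships the client-side code, so a compromised server can just ship code that exfiltrates the plaintext.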


Hashing on the client side doesn't really have any positive effect on security, as the client must then know what salt is used for the hash. This is less secure than hashing only on the server, where the salt and the number of hash iterations remain unknown to the client (and to potential attackers).


I disagree.

Whatever the server receives, it should do all the good things, salted hashing and what-have-you. But no one says what it receives needs to be a plaintext password.

Hash on the client side before sending (unsalted, or salted there as well, passing the salt along to the server), but let's just ensure that the server never has the ability to see a plaintext password. It can't log it; it can't accidentally leak the plaintext.

Will that solve all problems? Oh, hell no. But it at least strengthens the mitigation against certain attacks or mistakes.


If you hash, with or without salt, on the client for changing the password, you'll also need to hash identically when checking it (i.e. for login). In effect, the hash becomes the password; even if the plaintext is never leaked the first-level hash is just as good for access.


Yes. I agree. My point is I don't want to ever see plaintext on the server in any form. It won't stop everything, but it's still worth doing.


Right, but if a hacker releases a password dump for site X, no one has your password in plaintext, just the login hash. That said, that solution requires JavaScript.


Yes, but then the attacker can ignore your JavaScript and just send the hash value they got from the dump. If you calculate hash(password) and send that for comparison to the hashed password stored in the user database, then hash(password) is your password from then on.


Yes, but they can't then use the dumped passwords on 300 other websites.


Isn't the salt not necessarily supposed to be secret from the attacker, though? It's often something like the username, since it needs to be different for each user but still easy for the server to derive.


I'd agree if the client in some way exists independently of the server. For example, if you have a smartphone app, then client-side hashing could be useful.

But for a web page, what's the point? The server is in full control of the JavaScript they send you. If the server is compromised, it can easily bypass the client-side hashing by sending your browser different code.


Either way, this is a password strength checker, sending just a hash to the server would be useless.

A client-side password strength checker would make it functionally impossible to check dictionaries.


Sorry, as I keep saying in this thread, the excellent client-side password strength checking library zxcvbn (https://github.com/dropbox/zxcvbn) is under 400 kB compressed including dictionaries, checking against "30k common passwords, common names and surnames according to US census data, popular English words from Wikipedia and US television and movies, and other common patterns like dates, repeats (aaa), sequences (abcd), keyboard patterns (qwertyuiop), and l33t speak".

Thus, it is, as a matter of fact, quite possible to do reasonable password strength checking on the client side with a footprint that's a small fraction of many of today's ad-infested websites.


If you hash it you can't determine the "strength" of the password except maybe looking up the hash in a rainbow table.


Do your strength checks in javascript client-side, then hash, then send. Server side can do further checks if it wants on the hashed password (hey, this password was already used, etc).


Any client-side validation can easily be bypassed using something like Fiddler.


Password strength checking is (properly understood, in my view) providing help to the user, not enforcing some silly and annoying "password validation" rules.


How would you even validate the password server-side if you're salting it at the client level?


With that approach you would also need to be mindful of pass-the-hash vulnerabilities.


So it's not possible to log on without enabling Javascript?

I guess that's one way to coerce the user into enabling Javascript, at least temporarily.


This is the change-password form, not the login form.


As an attacker, this gives me information about how many characters are in the password, which can be quite useful.


The HN title seems odd. Most websites send all the characters (unless I suppose backspace is used).


Bot detection?


Neither bot nor copy/paste detection require this. If you're checking keystroke pace/timing you could simply send a keystroke or clipboard event without the actual contents.

Also, like, if they're sophisticated enough to be doing that, they should probably get the basics right.


Yeah, or copy/paste detection


Bad title: it works only while you have the password field focused.

Bad content: when you log in or register you send your password to the servers anyway. It's irrelevant, since all connections (as shown in your post) are made with https.

One could argue "they are seeing what you write even if you haven't sent it yet", but meh, it's just a damn password field, not a chat field.

So bad, bad, bad.


As I said in the post, it is not a security vulnerability in itself, but I want to point out that it can be very dangerous to put a password in a GET request. And the response from eBay is bad too. But thank you for your constructive comment ;)


If you're on https ebay, sending GETs to https ebay, then the GET parameters are not sent in plain text. The OWASP article you link to mentions that GETs can be sent in the clear when you have a mixed http/https scenario. I think your screenshots are a little misleading, as not all of the headers and information you show are sent in the clear when using TLS. The response from eBay seems OK; this isn't a big issue at all.

EDIT: Sorry for the misunderstanding: as mentioned elsewhere, the problem is not so much the user-agent end, but the hops between where the decryption happens and where the information is used. Why expose the information more than needed there? So I guess ebay's response is a bit lacking. They could make things more secure with relatively little effort.


>> I guess ebay's response is a bit lacking

Except that ebay's response was to the POST over https he mentions in the first section of his article. There is absolutely nothing at all suspicious about that. He wasn't looking into a potential security hole there, he was just prodding as to why they do server-side validation in a completely secure manner. His email had nothing to do with security; he was wasting someone's time asking about implementation details.

He then went on to find a GET version in another area on the site, for which he makes no mention of having sent an email. This might not be considered a security problem to ebay depending how they manage web server logs, but it's certainly a viable inquiry compared to the POST version he did email about.


He's got a point that sending via GET is kinda dirty. I'd hope ebay logs don't log the full url of requests. Sure, getting to those logs would be super difficult, but you really don't want plain text passwords anywhere.

In general though, yeah, not that exciting of an article.


They're embedding the password in a GET request. That'll get logged all over the place.


But GET arguments are not visible to 3rd parties when using https. Anything after the host name is sent encrypted.


HTTPS is often terminated at a relatively early point, e.g. the load balancer, so that the request can be properly routed (e.g. if you use AWS, it's generally terminated at the ELB). That means the request path may be logged by the load balancer and whatever routers/proxies they're using, as well as in the request logs of the web server itself.

It's completely unnecessary to have everyone's passwords viewable by however many people have access to one or more of those logs (for an org the size of eBay, maybe 10-100 people?).

Sure, it's not as terrible as if it were sent over HTTP, but 'not being the worst it could possibly be' isn't a very high bar.
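As a concrete (entirely hypothetical, not eBay's actual setup) illustration: a typical nginx TLS terminator logs the full request line, query string included, by default:

```nginx
# Hypothetical TLS-terminating reverse proxy. The default "combined" log
# format records $request -- method, full path AND query string -- so a
# GET like /PWDStrength?password=... lands in access.log in plaintext.
server {
    listen 443 ssl;
    ssl_certificate     /etc/ssl/example.crt;
    ssl_certificate_key /etc/ssl/example.key;

    access_log /var/log/nginx/access.log combined;

    location / {
        # Traffic is forwarded as plain HTTP to the backend pool,
        # where the app servers may log the same URL again.
        proxy_pass http://backend_pool;
    }
}
```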


Are you implying that POST data isn't going to be transmitted in cleartext beyond that point? Because that's incorrect - HTTPS doesn't selectively encrypt - the whole connection is encrypted. If you're worried about GET data being sent in cleartext, POST is no different.


The point is that GET parameters are more likely to be stored in server logs or other application logs where POST body is usually discarded from such logs.

So someone getting access to the logs could have access to a lot of potentially sensitive data. It all depends on server and application settings, but by default GETs are more likely to leave traces than POSTs.

It's a subtle but valid concern.


Ahhh, understood. There's someone between what you see as "ebay.com", where the GET is decrypted, and the actual ebay machine that will use the password information. I was thinking of it at the user-agent level. The GETs never leave your machine in plain text. I did not consider that the other end where you send the information could be "flakey". Bloody hell, it's a miracle anything works at all.


Not even necessarily, but the https logs are likely archived to another server, and that server will likely be secured completely differently.


Maybe ebay do the right thing and terminate on the final endpoint, and keep their logs appropriately secured. We shouldn't assume the worst without knowledge.


It's not so much the HTTPS as the logs; server logs are not necessarily designed to store all the users' passwords.


If someone has broken eBay's HTTPS, they will surely be able to catch the whole password at the end anyway.


As others are saying, using a GET request embeds that password in the URL, which means that server logs on eBay's side will have your password in them. Server logs aren't always the most protected thing in terms of locking down systems and permission management. On the flip side, most server logs do not have POST/PUT data logged.


Ummm... you're assuming that eBay is using a standard web server configured in some default manner. It's far more likely that this is communication with a custom authentication server of some sort. (Where server means a very large collection of machines.)


It's likely that eBay's internal infrastructure has compensated for this, but it also seems like a potentially overlooked aspect of their system. Even if there are no server logs per se (unlikely), they might be sending request logging information to some sort of analytics server. Since these requests are internal, it's also possible that it's not SSL-protected meaning that people internally could eavesdrop on the requests.


But the requests were POST in the picture.

Edit: They do also send GET ... that is worse.


Same as every website where you can login.


what do you mean? normally passwords are not stored in logfiles ...


"Normally"? What stops you from logging the HTTP body? It's the same problem as logging the HTTP query string. You should consider everything you send over HTTPS public to the receiver in every way.


The passwords are not necessarily being captured in logfiles, that's a huge assumption. We don't know anything about how eBay stores and manages their web server logs.


It'd be nice if they additionally implemented https everywhere while they're at it


> The main point I think is, that GET Requests are logged in log-files which are usually accessible by more people that the main database.

This is an outright assumption, and it's a bad one.

This is a non-issue, because they do NOT log these requests, and it's https.

So move on, this is just noise.


I don't know where you got the information that they do not log these requests, but it is a good assumption, not a bad one. It would be atypical not to log every https request.


A lot of setups have one machine doing the SSL and then forwarding the requests over HTTP to backend servers which are logging the requests and would include GET parameters in the log file.


Today's xkcd is surprisingly relevant:

http://m.xkcd.com/1700/


Indeed. Who would use that site any more after this “feature” has been publicized?


About 162 million people.


Ouch, that's quite a bit indeed. One reason why I shop at Amazon more often.



> why the website is sending docents of requests to Ebay’s servers.

"dozens" maybe?



