Robinhood Stored Passwords in Plaintext
383 points by bdibs on July 24, 2019 | 167 comments
Just received this email from Robinhood (https://robinhood.com/):

"When you set a password for your Robinhood account, we use an industry-standard process that prevents anyone at our company from reading it. On Monday night, we discovered that some user credentials were stored in a readable format within our internal systems. We wanted to let you know that your Robinhood password may have been included.

We resolved this issue, and after thorough review, found no evidence that this information was accessed by anyone outside of our response team. Out of an abundance of caution, we still recommend that you change your Robinhood password.

We take matters like this seriously. Earning and maintaining your trust is our top priority, and we’re committed to protecting your information. Let us know if you have any questions–we’re here to help.

Sincerely,

The Robinhood Team"

If you've used Robinhood in the past, it's a good idea to check your emails!




This doesn't mean they were stored in a database. "a readable format within our internal systems" could be log files if they didn't scrub passwords when logging requests.


This has happened to basically every large web app company. Turn on debug logging in the app which logs HTTP request headers and likely doesn't strip out sensitive information. Easy mistake, and hopefully it wasn't like this since the beginning, and was maybe only found to be an issue for a subset of Robinhood's logs or something.


This happened to Apple too, while logging API errors (CVE-2014-1317 - in this case it was hex-encoded).

Given the frequency that this occurs, I wonder if it'd be a good practice to use a sentinel value for a password during testing and grep logs for it.


> good practice to use a sentinel value for a password during testing and grep logs

Yes. I thought that was pretty common practice. We catch PII (not just password) log leaks this way often enough that our developers have even learned[1] to check the searches themselves before we yell at them when they try to take something leaky to prod.

If you don't automate detection of things like this you will make the same mistake again.

[1] Not being down on developers, I used to be one and still write a lot of code. Just noting a somewhat less than universal interest in compliance issues.


> Just noting a somewhat less than universal interest in compliance issues.

Is there a compliance document that requires this approach?

Or is it that compliance just requires reasonable safety whilst handling PII and security credentials, and it is down to each individual company's interpretation as to what that means?


GDPR covers fines for logging sensitive things like passwords. There is a separate certification (PCI DSS) needed if you're handling any payment details (i.e., card numbers) yourself.


Any recommendations of tools/libraries that help with this sort of testing/analysis?


So, if I understood correctly: you set up test info (username, password, etc.), grep the logs for those known test values, and if anything turns up, throw an error?


Yes. Ideally this would be an automated process, since it'd be tedious to grep every time you're messing around with logging, and you may sometimes forget to do it. If you're forwarding logs to something like ELK or Splunk, you can set up an alert / scheduled search to continuously check for those canary credentials every few minutes.
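As a sketch, here's a cron-able version of that canary check (the canary values, log path, and exit-code alerting are hypothetical stand-ins for whatever your ELK/Splunk scheduled search would do):

```python
import glob
import sys

# Hypothetical canary credentials: register a dummy account with these,
# exercise the login flow, then make sure the raw values never appear
# anywhere downstream.
CANARIES = [
    "canary-user@example.com",
    "s3ntinel-P4ssw0rd-do-not-log",
]

def scan_logs(pattern: str) -> list:
    """Return (file, line_number) pairs where a canary value leaked."""
    hits = []
    for path in glob.glob(pattern):
        with open(path, errors="replace") as f:
            for lineno, line in enumerate(f, start=1):
                if any(canary in line for canary in CANARIES):
                    hits.append((path, lineno))
    return hits

if __name__ == "__main__":
    leaks = scan_logs("/var/log/myapp/*.log")  # hypothetical log location
    for path, lineno in leaks:
        print(f"LEAK: canary credential in {path}:{lineno}")
    sys.exit(1 if leaks else 0)  # non-zero exit lets cron/CI page on any hit
```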


Maybe we should avoid sending passwords altogether. With some client-side code, you could build a challenge-response system that eliminates this risk entirely.


It is possible. But much easier to get wrong than to get right.

Specifically, if you hash the password client side then the hash fundamentally becomes the password, and having that sit in plaintext in logs is identical to the pre-hash password, since both can be replayed to authenticate.

I believe this is still a respected/secure version of what you describe:

https://en.wikipedia.org/wiki/Secure_Remote_Password_protoco...


No, having a client-side-hashed password sit in your logs is way better, because it only introduces risk for your own service, instead of risk for every user who reuses their password on multiple services. It changes a compromised log from a widespread identity theft risk to a contained privilege escalation risk.

(Provided either you, or all the other services using this scheme, are sensible enough to salt with a string unique to their website such as a domain name)


just "hashing the password" is not a "challenge-response" protocol.


Thank you


Has there been any research into this or any companies that use it yet?


There's this thing: https://trykno.com/

As they're incredibly opaque about what they're doing, it's hard to tell whether they're part of the "get it wrong" crowd, but being incredibly opaque about what they're doing doesn't bode well.


I used this when I was doing web apps for small enterprises in the early 2000s: I sent a random string in a hidden input in the login form and had some JavaScript concatenate the password and the random string, then send me back the username, the random string, and the hash of the concatenated string. I hope I got that right (I haven't done web apps in a long time...).
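A rough reconstruction of that scheme as a sketch (SHA-256 standing in for whatever hash was actually used; shown for illustration only, since as the replies note it needs replay protection and forces the server to keep a recoverable password):

```python
import hashlib
import secrets

# Server side: issue a fresh random string with the login form.
def issue_nonce() -> str:
    return secrets.token_hex(16)

# Client side (originally JavaScript): send back the username, the nonce,
# and the hash of password + nonce -- never the raw password.
def client_response(password: str, nonce: str) -> str:
    return hashlib.sha256((password + nonce).encode()).hexdigest()

# Server side: recompute from its copy of the password and compare.
# Note this forces the server to store a recoverable password, which is
# one reason schemes like SRP exist.
def verify(stored_password: str, nonce: str, response: str) -> bool:
    expected = hashlib.sha256((stored_password + nonce).encode()).hexdigest()
    return secrets.compare_digest(expected, response)

nonce = issue_nonce()
assert verify("hunter2", nonce, client_response("hunter2", nonce))
```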


You would want to use a token alongside the string, and only accept the token to look up the string, honoring it just once and for a limited time.

Otherwise it's vulnerable to replay attacks.


Yes, I think I was caching the generated string server side for a while and checking whether the string I received back had a matching entry in the cache, otherwise rejecting it.

Anyway, it was a long time ago, and if I had to do it again today, I would rather not roll my own.


FIDO's (and now the W3C's) WebAuthn comes to mind.


Chip-and-PIN credit cards use it. 2FA/FIDO/YubiKey use it.


Amazon uses it in their Cognito offering.


If you can encrypt on the client side, you're always going to be able to reverse it too. It's like storing the MD5 of my plaintext password: just because it doesn't LOOK like my password in plaintext, it pretty much still is.


And the hex encoded version of it.


base64, too, in all four alignments.


I guess the struggle is a) what format it's being logged in and b) knowing exactly where things are being logged/stored.


It would be even better practice to use type systems that don't allow the printing of values deemed to be sensitive.
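A minimal sketch of that idea in Python, where it can't be enforced at compile time the way a static type system could, but a wrapper type still makes accidental logging harmless (the names here are illustrative):

```python
class Secret:
    """Wraps a sensitive value so accidental printing/logging reveals nothing."""

    def __init__(self, value: str):
        self._value = value

    def reveal(self) -> str:
        # The only deliberate way to extract the raw value; easy to grep for.
        return self._value

    def __repr__(self) -> str:
        return "Secret([REDACTED])"

    __str__ = __repr__

password = Secret("hunter2")
print(f"login attempt: {password}")   # -> login attempt: Secret([REDACTED])
raw = password.reveal()               # explicit, greppable escape hatch
```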


Perhaps users could be asked to use their password-typed keyboard when entering passwords, and their address-typed keyboard for their addresses too?


Not sure if this is a joke, but iOS does distinguish between these.

"Secure" text inputs get the system keyboard, regardless of the user's custom installed keyboard.


That's no excuse, especially for a financial services company like Robinhood. They have a duty to have process and people in place to prevent this exact type of issue.


Plaintext passwords in HTTP request headers? Wouldn't those usually be in the POST body, which almost never gets logged?


I have, on a not insignificant number of occasions, used mod_dumpio in Apache to debug things. Which means I end up with all POST payloads in log files. Which has occasionally meant I ended up with plaintext passwords in log files... I am extremely careful when doing this. But I have seen coworkers copy my debugging techniques and "forget" to switch off the dumpio logging - which has at least once been discovered when a prod server's log files started blowing out in size dramatically. With real user passwords in them... (Quite how mod_dumpio and its config ended up on a prod server is another rant, which needs to be done over whisky, not in the office...)



Small enough POST bodies could still be worth logging to detect issues. Of course, because of things like this it generally isn't a good idea, but there are still instances where it could be important to log POST bodies, e.g. to understand certain automated attacks.


Headers often include session tokens, though


While careless, that is less problematic than logging passwords. The user probably reuses their password, so as soon as some developer can just look at logs and get a bunch of email/password pairs, they probably have access to other systems that they shouldn't.

That said, there is more than just logs to worry about. Is your service-to-service traffic unencrypted? Then passwords are in the tcpdumps that you took to analyze a strange networking bug (and then copied to your workstation to look at the trace in Wireshark). Do your programs segfault and core dump? Then passwords are probably in that core dump. Do you fail to check the bounds of every memory read and write operation? Then passwords are probably in random variables (see: Heartbleed/Cloudbleed).

Sessions, though, are less scary. Sure, if you steal someone's cookie internally, you can impersonate that user on your own services. Nobody wants that, because it probably bypasses all the internal auditing systems; the auditing system can't tell that request apart from a legitimate user request. But cookies can be revoked and have a shorter lifetime than passwords, so it's not quite as bad.

Ultimately, it's all about limiting risk. If you have a database full of passwords, then someone compromising that database today gets all passwords ever. If you have a disk full of logs full of passwords, someone gets all the passwords that were used to log in within that log server's retention period. If someone hacks in and starts tcpdumping your internal network and it's not encrypted, they only get passwords from users that log in while they're running tcpdump. If you encrypt network traffic, then someone only gets passwords that were in memory when your binary crashed. Nothing is perfect, but you can always do more to add security. Perfection is impossible, but not logging passwords is one step closer to it.


Session cookies should be locked to a TLS channel ID.

Then they are useless to anyone who steals them, since a TLS channel ID can't be recreated except with access to the TPM of the computer which initially created the cookie/session/TLS connection.


Can you post a bit more about how to do that? Your comment is the first result for my googling.


That requires support for token binding, which Google killed.


This is why there is client side hashing. You still hash+salt on the server side, but the client side hashing prevents the worst kind of exposure.


What's the significant difference between sending a plaintext password and a sha256(plaintext password) to the server, except if the user reuses that password on other sites?


> except if the user reuses that password on other sites

Except you can't really except that.


But isn't it bad practice to send plain text passwords to the backend? Shouldn't you encrypt them in the client already?


"This has happened to basically every large web app company."

That is a seriously sad reflection on the poor quality of the software cranked out today.

Someone should be losing their job.


After having the risks explained to them and having received our professional recommendation that they add application-specific logging that had no means of accidentally logging PII, one of our customers (a major bank) still insisted that we implement full HTTP body logging in a proxy.

Nobody is going to lose their job over this because the practical consequences are that you have to send out an "our bad" email and offer a credit protection voucher that most customers won't actually use. There are no costs.


You’re both correct: someone “should” lose their job, but nobody will.

Not that that someone is necessarily the person who actually turned on this logging, it could be their supervisor who let it go live, or QA or… wherever the buck stops.


Sure, people should lose their job because someone's diagnosing a network problem, happens to run tcpdump on a server behind a TLS-terminating load balancer, and sees username/passwords. /s This literally happens all the time.


Depends: how did they get to the server (secure jumpbox)? Did anyone know it'd be a security-related event when they were diagnosing the problem? Did the tech take precautions? In such a case, I don't think robinhood.com would need to notify users. The tcpdump would be handled securely, scrubbed properly, troubleshooting done, and that'd be the end of it.

If that isn't what happened, someone should be losing their job. The minute you think diagnosing network problems is a reason to ignore security to a degree that your users need to be notified you're incompetent.


If they used the info to hack accounts or other malicious activity, then yes, they should lose their job.

If they just left a log or capture file around somewhere, nope. Everyone makes mistakes.


"If they just left a log or capture file around somewhere, nope. Everyone makes mistakes."

I guess that's where you and I differ. There are certain mistakes that are tolerable, but ones that result in mass-emailing your clients that they need to change their passwords are not. I'd never heard of robinhood.com before, but now I have, and my impression is that I'd never trust them, because they have staff that makes mistakes with my secure information.

Think of how robinhood.com's employees feel about this. Management, and the people who are now watching the fallout rob the company of revenue.

Telling me that the person who made that mistake is gone might make me think differently.

The minute you start taking users name and passwords you're in the big boy world and need to treat it as such. Mistakes can be fatal, not just for your job, but your company.



You're proving my point. It's not taken seriously enough, even by huge companies with lots of resources.


I appreciate their transparency. It should make you trust them more because they're letting you know about the problem.

Some organizations would simply ignore this and move on. You'd never know it even happened.


I'll admit, that's a good point that made me go, "Hmmm."


Blameless post-mortems are a thing for a reason. Honesty about mistakes should make you trust an individual/company more not less.

Just my 2 c


Since we now put the user ID first, before the form even goes to check how to try to auth said user, we can just fetch a strong random salt from the server in its response ("just use a simple password, this is not a fancy account"). Some nice salted PBKDF shouldn't be a problem then.

Also, there are zero-knowledge authentication systems, but they are somewhat iterative due to their probabilistic nature. They are secure in the face of an imposter authentication service, as the password (or a hash, obviously) has to be stored in the real authentication service. But it is never transmitted after this initial store, so no amount of logging or posing as an evil imposter risks disclosure.


I'm not an authentication system programmer, so this may be a silly question, but why do clients still send passwords to a server? Doesn't it make more sense to hash the username and password together with some sort of nonce/salt that's sent to the server for validation?


If you hash the password on the client, then the hash essentially becomes the new password and you haven't solved anything. What you're looking for is a password authenticated key agreement in which one party authenticates to another without an eavesdropper learning any secret [1]. For whatever reason, no major websites use PAKEs today.

[1] https://blog.cryptographyengineering.com/2018/10/19/lets-tal...


I agree that it becomes the new password, but it prevents your own site becoming a contributor to credential stuffing attacks - if you don’t have the clear password transmitted to your systems, you can’t leak it accidentally in logs or through poor DB practices.

I wonder why PAKEs haven’t caught on?


Anything faster than bcrypt is practically clear text due to brute forcing, and running bcrypt inside the user interface is needlessly expensive. The common practice is to ensure that passwords are checked and discarded before they can be accidentally disclosed, or to avoid re-usable passwords altogether. Production logs should also only be accessible when there is a direct need. Sometimes this still fails, but it is relatively rare in well run sites.


The AWS Cognito User Pool Authentication Flow utilizes an augmented PAKE (SRP). I imagine there are a number of major sites that use Cognito along with the SRP auth flows baked into their std libs. I know I've used it a number of times.


I implemented SRP a decade ago — it has issues, and thus a lot of revisions. It also leaks your salt, and you can't use a pepper. There is OPAQUE (see the play on PAKE!), but it's new and difficult to search for.


But you have. It means that even if you reuse your passwords elsewhere, only the website that was compromised is compromised.


You can (and should) store the salt server-side. Moving the hashing to the client does not solve anything.


Really, that should be a salted hash to properly defend against password reuse.


Blizzard Entertainment uses a PAKE implementation (SRP).

This article (somewhat ironically) talks about it:

https://arstechnica.com/information-technology/2012/08/hacke...


I don't know PAKE well enough to say whether this is equivalent to it, but you can use a timestamp + hash to avoid sending anything equivalent to a password. The caveat is that your system needs to have the correct time set. But otherwise it works fine.
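A sketch of that timestamp + hash idea (an HMAC over the timestamp with the shared secret; the window size is illustrative). Note the trade-offs: clocks must agree, the server still needs the secret, and a captured proof is replayable within the window:

```python
import hashlib
import hmac
import time

WINDOW_SECONDS = 30  # tolerated clock skew; also the replay window

def make_proof(shared_secret: str, timestamp: int) -> str:
    # The secret itself never goes on the wire, only an HMAC over the time.
    return hmac.new(shared_secret.encode(), str(timestamp).encode(),
                    hashlib.sha256).hexdigest()

def verify(shared_secret: str, timestamp: int, proof: str) -> bool:
    if abs(time.time() - timestamp) > WINDOW_SECONDS:
        return False  # stale or clock-skewed: reject to limit replay
    return hmac.compare_digest(make_proof(shared_secret, timestamp), proof)

now = int(time.time())
assert verify("hunter2", now, make_proof("hunter2", now))
```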


> why do clients still send passwords to a server? Doesn't it make more sense to hash the username and password together with some sort of nonce/salt that's sent to the server for validation?

You must ALWAYS hash the password (as a salted iterated cryptographic hash) on the server. You can in ADDITION also hash the password on the client end, but by itself that's not enough.

The issue, as usual, comes down to "what is the attack you're trying to thwart"? If you hash on the client and not the server, then when (not if) the attacker manages to download the hashed password set, the attacker can create a modified client & send those hashes directly. If you don't hash on the server, it's exactly the same as storing passwords as clear text, because you're storing the data that an attacker can directly use to log in.

You can also hash the password on the client IN ADDITION to the server. In that case, you're hiding from the server the actual password you type in. That's an improvement if you're sharing a single password across many services; in this case the attack you're trying to thwart is to prevent an attacker from actively capturing the password & trying to reuse the password on other systems. However, a much better idea is to not share a single password anyway, so it's not such a great thing.
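A minimal sketch of the mandatory server-side step, using PBKDF2 from the Python standard library (bcrypt/scrypt/Argon2 are common alternatives; the iteration count is illustrative and should be tuned to your hardware):

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative work factor

def hash_password(password: str):
    """Return (salt, digest) for storage; the password itself is never stored."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def check_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)  # constant-time compare

salt, stored = hash_password("hunter2")
assert check_password("hunter2", salt, stored)
assert not check_password("wrong", salt, stored)
```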


Users reuse passwords. The main reason for not having your passwords in clear text is to protect them from that. At least when (not if, as you noted) your site is hacked (and the attacker downloads your password database, probably the same day), it has no impact on your users beyond whatever can happen on your site. (The attacker might be able to use the hashed passwords to log in on your site, but at that point you probably have bigger problems, given that someone was able to download your password database, and you are probably going to reset all those passwords anyway.)


This comment is full of many inaccuracies. Leaked hashed passwords still pose a risk to users on their other accounts that may use the same password. Hashing passwords prevents internal abuse and makes your site a much less juicy target (in addition to making leaks less disastrous). It also makes timing attacks far less feasible by enabling reasonable constant time comparison of the hashed password.


I agree with what you are saying, but I don't see how this conflicts with what I said.


That introduces additional complexity, and the password is already protected by https.

As others have pointed out, if you go for complexity, there are better schemes. However, by avoiding the complexity you keep simpler client side code.

Especially because this is standard, whilst doing extra stuff would involve doing your own crypto code.


The only thing that you nées is a hash fonction. I think thé added complexity ils negligible on any modern System.


Small tip: looking at the spelling errors, I bet you're on mobile, and using the standard Google or iOS keyboard set to French. I constantly have to switch between English and my native language as well. But I found out that with SwiftKey (by Microsoft), you don't need to manually switch between two languages; you just configure two languages at once and the software does autocorrect for both.


Complexity of the code base. The computation of hashes client side requires quite a bit of custom code. Sending a plain text password is built into the browser.


If you take your idea and step through the login/authentication process, you'll find that what you're suggesting isn't really all that different from what is done normally.

Overview, Scheme_0:
- You send your unique identifier (email, username, etc.) and password to the server in plaintext.
- The server looks up the password hashing scheme and salt associated with your identifier.
- The server checks that salt + password + hashing scheme produce the stored hash.
- Server kicks back the proper response.

In your scenario, Scheme_1:
- You prehash the password locally and send your identifier + prehashed_password to the server.
- The server looks up the password hashing scheme and salt associated with your identifier.
- The server checks that salt + prehashed_password + hashing scheme produce the stored hash.
- Server kicks back the proper response.

The only difference is the extra hashing step. From a security perspective, there is no gain; storing the prehashed_password from Scheme_1 in plaintext (e.g. logs) is no different than storing the password from Scheme_0 in plaintext.


The difference is that the password of Scheme_0 is probably reused on many other websites. If, for example, you bcrypt the password client side, you at least don't have to deal with credentials that might work on all the other websites that your user is using.
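A sketch of that client-side prehash with a per-site salt (the domain string and work factor here are illustrative, and the server must still salt-and-hash whatever it receives, as discussed above):

```python
import hashlib

def client_prehash(password: str, username: str, domain: str) -> str:
    # Salting with the domain + username means the value sent to one site
    # is useless against any other site, even if the underlying password
    # is reused everywhere.
    salt = f"{domain}:{username}".encode()
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000).hex()

# Same reused password, different sites -> unrelated credentials on the wire.
a = client_prehash("hunter2", "alice", "example.com")
b = client_prehash("hunter2", "alice", "other.example")
assert a != b
```

The work factor is the catch: it has to stay affordable on the slowest client you support.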


How do you bcrypt at an appropriate difficulty client-side? Consider that it probably needs to work on a mid-range Android phone from years ago.


Password reuse on other sites is not your problem. It is your user's problem.


A "fuck you, got mine" sort of approach to security doesn't work. It's everyone's problem.

PAKEs are probably a good idea across the board.



No it doesn't. In your example, the hash of the password becomes the new password (i.e., the new secret). The nonce/salt adds no security because it must be open to anyone attempting to authenticate.


Surely it adds security in that an attacker cannot take that new password and use it on another site, even if the user re-used the underlying password?


Yes. It adds this feature. But only if you assume that the database was compromised or nefarious logging was enabled and no further access to the server was possible. Otherwise, the attacker can modify the JavaScript that is used by the client to perform the hashing (and remove/adjust the hashing function).

To be clear, it would have prevented these logged passwords from impacting other websites, if the true cause of the Robinhood password reset was a logging issue.

As a user, you can prevent this from happening either way by choosing strong, unique passwords for every service.


This is still a meaningful improvement because accidentally making everything world-readable is in many cases easier than accidentally making everything world-writable.


It’s not assuming that has happened, it’s acknowledging it could happen (or any other bug that reveals database fields, e.g. SQL injection). It seems like it should just be a default practice - generally I try to design systems to fail safe, regardless of what causes that failure.


some additional discussion: https://security.stackexchange.com/questions/53594/why-is-cl...

I've also wondered why this isn't more common.


I wonder if there couldn't be some form of automated tooling which creates dummy accounts in your system and looks for leaks of this type?


That's a really interesting idea. Generate unique tokens, e.g. for detecting leaked passwords, and alert upon any text match within databases, logs, etc.


Azure has this built into some of their DLP...at least when looking at emails


It wouldn't just be for passwords. It would be for any kind of confidential data.


Set a one-time URL as a password. Someone accesses it, bam.


The detector would have to run everywhere, not just on your web application. It'd have to check Linux/nginx/Apache/etc. logs, metric systems' transit pipes and storage, etc.

I guess you could install it as a Linux system which searches through all plaintext traversing the network AND filesystems of the entire infrastructure. Which might be a good business idea if you could pull it off.


I agree that it's likely a log file. I remember hearing about Twitter doing this (I also just found out that GitHub may have done this as well).

https://www.bleepingcomputer.com/news/security/twitter-admit...


I worked at a place where, on an incorrect login, we recorded your attempt (with whatever was in the user/password fields) in the logs. The logs went to the data warehouse too. That was a fun day in the office :P


Exactly. They've got Kibana or a similar setup with URI traces; tonnes of companies have been caught out by this same issue.


Not Palantir; their logging system defaults to not writing unknown values.


+1 on that.

I remember a bank's IVR I was auditing required "_a random 2 out of the 4 characters of your PIN_" (not your card's PIN, but one dedicated to call center support - which gave you full access to your accounts, though), and those digits were being recorded/logged together with every phone key press (e.g. your customer ID). It was possible (if you obtained the logs of a few of a customer's calls) to reconstruct the PIN (e.g. on call #1 you press 1, 3; on call #2 you press 1, 2; on call #3 you press 2, 4). Those logs were only scrubbed/dropped once a year, so for a frequent IVR user it was easy to reconstruct their IVR PIN. The folder containing the logs was a Windows shared folder with Everyone/Everyone (share/modify).

You should have seen the COO's face as I was describing how easily this could have been abused (likelihood), and how it could open a floodgate of serious problems for the bank (impact).


This is what I thought had happened.

They've supported MFA for a while, so it's simple enough to reset a password and make sure MFA is turned on.

I prefer when companies are proactive.


Perhaps it would be helpful to somehow modify the title to reflect this. It doesn't seem at all that Robinhood is being negligent here.


I would agree. Although technically, if they were accidentally in a log (the likely scenario), that does mean they were being stored.


The distinction still matters and I give Robinhood credit for admitting it and immediately notifying their users.

Security is a long term investment and it's easy to mess up. Taking responsibility and being transparent when you don't is the only way you're going to get it right.

There are some reverse proxies being sold as a way to mirror HTTP data to metric/analytics systems, which would bypass built-in filtering mechanisms that automatically remove passwords from logs (as Rails does). Metric/big data systems are an easy place for sensitive data leaks to happen. So your web developers could be getting security 100% right, but some higher-level systems integration with analytics or ops software (requested by marketing or BI guys or whatever) could mess it up.


Logs or audit trace is the first thing I thought of as well. Have seen that happen multiple times. And yeah, logs are stored somewhere so the headline still makes sense (logs are "stored" on disk). Not that it makes it a good thing, but it's a bit of a different story than what is implied.


Yep, probably the same thing that happened with Facebook recently: https://newsroom.fb.com/news/2019/03/keeping-passwords-secur...


Good point. It would still be silly for 'industry standard methods' to log passwords in plain text - even for debugging purposes. But it could be the result of some bug or error which dumped a load of information, or perhaps even crash dumps of processes: they contain memory, which could contain passwords if, for instance, the login handling service crashed. It would be useful if companies were a little clearer about how these things happen; it'd be a good learning point for readers with similar setups in their environments. Since the issue was resolved, it could be 'responsible disclosure'.


The version I got said the following:

>>> When you set a password for your Robinhood account, we use an industry-standard process that prevents anyone at our company from reading it. Additionally, our policy is to not store usernames and passwords for any bank accounts that you link to your Robinhood account.

On Monday night, we discovered that some user credentials were unintentionally stored in a readable format within our internal systems. We wanted to let you know that your Robinhood password and linked bank account credentials may have been included. <<<

What level of incompetence is required to have accidentally logged bank account credentials?

This is not a simple oops-we-logged-the-login; this is a very serious breach of basic security.


My first guess, too. I flagged the story for the misleading headline, but alternatively an admin could change it to "Robinhood security incident" or something to that effect.


> I flagged the story for the misleading headline, but alternatively an admin could change it to "Robinhood security incident" or something to that effect.

Why is the headline misleading? I mean, they say in black and white that they stored passwords in a readable format. The headline is completely factual.

If my password gets compromised, it's irrelevant where Robinhood stored it, just that they did so in plain text. That it was stored in text files, in a database, or handwritten on pink post-it notes doesn't make much difference.


And this is why I still use an old framework like Ruby on Rails in 2019. Highly opinionated, and old, but rock solid and not easy to screw up.


This is yet another reminder: Do Not Reuse Passwords. There’s good password management even for mobile operating systems these days. I’m using Bitwarden on iOS, and it integrates well both with native apps and webpages.

Also, use two factor authentication, of course. If you are going the U2F route, I highly recommend having a permanent key for each computer and a bluetooth+NFC key for on-the-go. It’s a worthwhile investment.

My biggest problem today is that it is tedious to generate and save new passwords on mobile, which is increasingly where I do so... but security always comes with some costs, I suppose.


I haven't looked into password managers for a while. What do you do if you need to login on a random machine?


Don't. You shouldn't be putting your passwords into a random machine, it may be compromised.

20 years ago when I was in high school a friend of mine bought a hardware key logger. It went inline between the keyboard and computer. He would leave it on library computers to get the firewall password and to mess with other students. You never know when a machine is compromised.


2FA would have defeated your friend.


> Don't. You shouldn't be putting your passwords into a random machine, it may be compromised.

Sure, you shouldn't, but sometimes you have no other choice. The simplest case that comes to mind are the tourists who need to print boarding passes using the self-service computers in their hotel (or, even worse, at some printing shop down the street).

I would definitely change my password afterwards, though.


Just upload anything you might need in an unsafe area to a private GitLab repo. Encrypt it if you like. Change the GitLab password when you're back (or just delete the account if you're paranoid).


You would get your password from the mobile app and type it in manually. For a suitably long and random password that takes effort but it should probably happen so rarely that it won't be a common annoyance.


Read it from my phone. The big problem here is remembering to make anything I might want to access without signing into LP on the device in question something I can type relatively easily (not XH6M6cz5d8jZ@$tNOZ5wUHTO3ewxYW@L, for example).


Pull out my phone and lookup the password in the mobile app.


With Bitwarden, I can just log into the web vault. Be aware that this (logging into sensitive accounts on untrusted devices) is a decidedly poor idea and should be avoided at all costs!

You should also memorize important passwords still. I know my personal Google password, for example.


With 1Password, you can print out an emergency access key that can act as fallback for 2FA


The convenience has come a long way on most password managers, though. I started with LastPass when it was released, and it would lose a few generated passwords now and then, and the YubiKey would fail. I've been using Bitwarden for a few years and I'm more than satisfied now; it just works as expected.


Lose a few? Is that like when a password stored by a browser no longer autofills because the website changed the field IDs or the subdomain or whatever else -- not lost but also not being used without digging it up?


I can also attest to LastPass once "losing" a password. My memory of the event is hazy, but it went something like:

1) Logged into a website with LastPass.
2) Had to enter a second piece of identifying info (like a PIN).
3) Once entered, LastPass thought the PIN was a new password and prompted me, "Would you like to update the login to [url] with the new password?"
4) I clicked "No".
5) Some kind of network hiccup or communication failure happened; the pop-up kinda hung there for a bit.
6) I opened the vault and checked the password for that site. LP had apparently not honored my request and replaced the password and, more disturbingly, the password history was missing.

Luckily I had my phone on me, and evidently it hadn't synced with LP's servers in the last few minutes, so I was able to pull the old version of the password out of the cached version of the vault and manually re-enter it on my computer to get the password fixed.

I've never been able to reproduce the issue after that (although I'm not keen to try). But it did happen.

LP also used to be pretty terrible at automatically saving passwords you entered into a form, even if you explicitly used the random password that LP generated. You really had to copy+paste it into notepad just as a backup until you could successfully confirm that it made it to the vault.


> If you are going the U2F route, I highly recommend having a permanent key for each computer

How do you deal with services that only permit a single key to be registered to each account? (Notably AWS. But they're not alone in this limitation.)


Connect AWS to a SAML provider that lets you have multiple security keys.


For personal use, that seems a little overkill.


I got a couple of Google U2F Titan keys and set them up, and quickly realized 99% of websites don't allow you to use them to log in anyway. So I am still stuck with using a password manager. I really wish more companies would allow the use of U2F keys; as it stands, these things only work on a minuscule portion of the internet.


Although U2F keys do resolve the main issue with password reuse, I think it is still good posture to use randomized passwords. It covers you in case there’s ever a vulnerability in a service that bypasses U2F.


Tedious how so? Are you using the latest Bitwarden iOS app? It makes this very easy IMHO!


Similar things have happened to Facebook and Google recently, the two companies that people keep saying have the best security teams in the world. As long as humans are involved in programming - something that will likely be the case for the foreseeable future - these things will happen. But props to them for owning up to it and proactively emailing users.


Genuine question: how would the non-involvement of humans make such issues unlikely? Doesn't the machine learning process involve trying (which, in the case of programming, would probably mean deploying to some users) what is quite certainly a combination of successful and unsuccessful things, to see what sticks?


I would assume, perhaps in error, that when such a system is created, it could be fed enough rules about what not to do that such oversights would not happen. The typical error is: a programmer debugging something serializes a struct to a log file without realizing that there are critical fields in the struct.


Not surprised, considering their security practices... see https://news.ycombinator.com/item?id=15679099


If you ever run a service, make sure you're not storing nginx logs with the contents of POST requests. A few years later you'll realize you "stored passwords in plaintext"

I assume this is what happened here, but maybe Robinhood will elaborate at some point.


Isn't it convenient that they announce this right after they close their funding round?


My thoughts exactly. How the hell do financial applications not take security more seriously? I just don't understand. It isn't that hard to make security a top priority. It isn't even that expensive in comparison to the price they pay for issues like these, yet it seems that time after time, fast growth and dumping shares onto new VCs or public-market investors takes priority over all else...


> How the hell do financial applications not take security more seriously?

This is what taking security more seriously looks like.

The lazy company doesn't even bother to look for problems like this, never finds them, and then an attacker eventually gains access to the plaintext passwords and compromises their customers.

The shortsighted company finds the problem and fixes it silently, even though they should really notify users to change their passwords to mitigate the possibility that the plaintext passwords were already compromised.

The company that takes security more seriously does own up to it despite the PR hit.


Yeah, at least they notified customers. I found a similar issue at a financial services company (money lending) where I previously worked as a junior dev. A dev accidentally added a log statement to debug something, and it made it to production. To make matters worse, the logs were also sent off to a 3rd-party log aggregator that we used and all devs had access to.

The company refused to do anything. No emails sent, not even a forced password reset. The dev who made the mistake responded with "This is not a real concern. I am disappointed we spent so much time working on this." I brought it up with the CTO, who essentially did nothing. Then I brought it up with the CEO, who came to our standup, where the responsible dev then said something along the lines of "we don't serve any heads of state, so it doesn't really matter." The CEO did nothing. I emailed the general counsel, who told me no one else had brought it up with him.

I think I gave notice 2 weeks later. The general counsel apparently left within a year (not sure if related).


I'm at a "post-quantum" security startup right now and let it be known that it's not just the fintech guys who make these mistakes. If security isn't a top priority at a cryptography firm, where the hell is it one?


Because such lapses aren't punished financially. The companies aren't generally held liable for damages resulting from such leaks. I'm not agreeing with the status quo, but that's how it's been.


This very likely surfaced during the auditing by those investors, i.e. they knew.


The fact that they prompt you for your literal bank password when funding your account says everything you need to know about how they look at security.


That's true of nearly every app I've used that links to your bank account (mint and venmo off the top of my head). Super archaic and obtuse and I solely blame the banks for not building a better method.


They could just ask for the routing/account #. Yes, it allows for unauthenticated pushes & pulls, but when they ask for username/password they're just scraping the routing/account number from your account and doing a normal ACH transfer anyway. Except now that they're logged in, they can verify your account has X funds, see transaction history, and be yet another security breach waiting to happen.

Transferwise switched to requiring this and it's a miserable feature. Lost me as a customer because of it.


ACH transfers do NOT require you to give your password to anyone, and they have been around forever.

Never give your bank password to anyone other than your bank.


But ACH transfers require you to give the information needed to print out checks against your bank account (routing number and account number).


The EU is forcing banks to work together and with FinTech. Have a look at PSD2 and the open banking regulations.


No it doesn't, many reputable mainstream banks and financial institutions have "share bank account credentials" as an option for linking accounts. It's standard practice.


The title is so misleading....


Man, I would spend time masking sensitive data in a shop with no traffic, but someone like Robinhood or Facebook can get away with it. They don't sweat the small stuff, do they?


> "we ... recommend that you change your Robinhood password"

This should really be a forced reset. Not finding evidence that it was accessed isn't proof that it wasn't.


So they probably have a request logger that logs request headers, and what probably happened is that they were accidentally logging credentials from those headers. This has hit basically every large web service at some point. Crazy this never came up in an earlier internal security audit, but not surprising it occurred in the first place.


Filtering those parameters should be a fundamental practice.
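As a sketch of the same idea outside Rails, a logging filter that redacts known-sensitive parameters before a record is ever written (the field-name pattern is illustrative):

```python
import logging
import re

SENSITIVE = re.compile(r"(password|passwd|token|secret)=([^&\s]+)", re.IGNORECASE)

class RedactingFilter(logging.Filter):
    """Scrub sensitive query/body parameters out of every log record."""
    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = SENSITIVE.sub(r"\1=[FILTERED]", str(record.msg))
        return True  # keep the record, just redacted

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("app")
logger.addFilter(RedactingFilter())

logger.info("POST /login user=bob&password=hunter2")
# -> POST /login user=bob&password=[FILTERED]
```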


I got an email this morning that my Hulu account had been logged into... my Hulu and my Robinhood did in fact share a password. I have no evidence that these are connected, but better safe than sorry. (And I do use 1Password now, I just didn't back when I signed up for Robinhood and Hulu.)


Have you checked HaveIBeenPwned for your email? If you shared a password between Hulu and Robinhood in your pre-password-manager days, you probably used it on yet another site that was hacked.


The problem is that it says I've been pwned, but it's completely unhelpful because it doesn't say which service(s) it was. I use different passwords for most sites (randomly generated via password manager), though a few unimportant ones share a common password.

I use Bitwarden, which has a way of checking password dumps, but only for individual passwords. I have over a hundred entries, so checking that frequently is quite time-consuming.

I've decided to just switch my email on as many sites as possible, using a few aliases to put them into buckets, but there has to be an easier way...


> The problem is that it says I've been pwned, but it's completely unhelpful because it doesn't say which service(s) it was

This is the most frustrating thing about HIBP. I have hundreds of passwords on hundreds of services. "We found your email in a dump" doesn't help me if I don't know which service you found it for.


"found no evidence that this information was accessed by anyone outside of our response team. Out of an abundance of caution, we still recommend that you change your Robinhood password."

That is complete bullshitting PR spin, and I'm greatly concerned about my account.

Gee whiz, I wonder if anyone on the response team could have done something. Or a dev with nefarious purposes, like the guy that rigged the lottery multiple times.


Probably server logs / an HTTP request handler's console log.


"take matters like this seriously", not really.


I think it was some kind of logs. Probably request logs.


It's hard to believe a financial services company would store passwords in plain text out of stupidity. Could this be a legal strategy to avoid responsibility in some scenario?


Thanks for the heads up. I got the same email earlier.


Plaintext bank account credentials. Wow.


Looks like storing passwords in plain text is now a trend; I'll do this in the rest of my projects.


I smell weapons grade bullshit.

>On Monday night, we discovered

Because one casually does security audits on a Monday night and then releases a "nothing has come to our attention" statement?

The one is proactive (on a Monday night?); the other speaks of a response to an external actor. Which is it?

The mere fact that a release like this happened suggests some sort of legal/SEC/accounting requirement was triggered. I.e., something happened.


I think it's plausible. As other commenters have noted, the leak was most likely in application logs. So on Monday night, SRE gets paged about random issue, pulls up the logs and starts debugging, ends up stumbling across a user's password.



