Twitter urges users to change passwords after computer 'glitch' (reuters.com)
661 points by petethomas | 462 comments



Actual twitter post: https://blog.twitter.com/official/en_us/topics/company/2018/...

"Due to a bug, passwords were written to an internal log before completing the hashing process. We found this error ourselves, removed the passwords, and are implementing plans to prevent this bug from happening again."

Exact same thing that github did just recently.


Genuine question—how would this bug be produced in the first place?

My (limited) experience makes me think that cleartext passwords are somehow hard coded to be logged, perhaps through error logging or a feature that’s intended for testing during development.

I personally would not code a backend that allows passwords (or any sensitive strings) to be logged in any shape or form in production, so it seems a little weird to me that this mistake is considered a “bug” instead of a very careless mistake. Am I missing something?

EDIT: Thank you very much in advance!


Let's say you log requests and the POST body parameters that are sent along with them. Oops, forgot to explicitly blank out any fields known to contain passwords. Now they're saved in cleartext in the logs every time the user logs in.
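
For illustration, a minimal sketch of scrubbing such fields before they reach the log, assuming a Python backend (the key names and helper shape are hypothetical, not Twitter's actual code):

    # Hypothetical request-logging helper: mask known-sensitive keys
    # before the POST body ever reaches the logger.
    import copy
    import json
    import logging

    SENSITIVE_KEYS = {"password", "password_confirmation", "current_password"}

    def scrub(params):
        """Return a copy of the POST params with sensitive values masked."""
        clean = copy.deepcopy(params)
        for key in clean:
            if key.lower() in SENSITIVE_KEYS:
                clean[key] = "[REDACTED]"
        return clean

    def log_request(method, path, params):
        logging.info("%s %s %s", method, path, json.dumps(scrub(params)))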


Even logging a username field is likely to catch a bunch of false positives of users entering their passwords in the username input.


We made this mistake - the trick is determining what fields are sensitive, what are sensitive enough that they should be censored but included in the log, and the rest of the crud.

It turns out that this is non-trivial - when censoring how do you indicate that something was changed, while keeping the output to a minimum? blank/"null" was rejected because it would mask other problems, and "* THIS FIELD HAS BEEN REDACTED DUE TO SENSITIVE INFORMATION *" was rejected for being "too long". Currently we use "XXXXX", which has caused some intern head scratching but is otherwise fine.


Easy: have a framework that validates & sanitizes all your parameters, don't allow any non-declared parameter, make something like "can_be_logged" a mandatory attribute, then only log those & audit them.
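
A rough Python sketch of what that declarative approach could look like; the "can_be_logged" attribute and the parameter schema here are assumptions drawn from the comment, not a real framework API:

    # Every parameter must be declared, and only fields explicitly marked
    # can_be_logged=True are ever handed to the logger.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Param:
        name: str
        can_be_logged: bool

    LOGIN_PARAMS = {
        "username": Param("username", can_be_logged=True),
        "password": Param("password", can_be_logged=False),
    }

    def loggable_view(raw_params):
        unknown = set(raw_params) - set(LOGIN_PARAMS)
        if unknown:
            raise ValueError(f"undeclared parameters: {unknown}")
        return {k: v for k, v in raw_params.items()
                if LOGIN_PARAMS[k].can_be_logged}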


I'd replace redacted fields with [redacted], or maybe U+2588


You prevent a lot of these problems by hashing passwords as soon as possible (i.e., on the client).


Wouldn't that make it easier for someone that has access to hashed passwords in the case of a database leak? They would just have to submit the username and the hashed password (which they now have).


You're right, but the attacker won't get the user's original password that they probably reuse elsewhere.

If it's just your authentication system hashes that are compromised, the damage can be contained.


In this case the client side will have our algorithm (i.e. in JavaScript) plus the private key we use to hash the password. If that's the case, I can't see any difference between giving the attacker the password or the hashed password along with the algorithm and key.


While there is merit to clientside hashing, you should always hash serverside as well, lest a leak prove catastrophic.


In the context of production, why would you need to log anything other than X-Forwarded-For/X-Real-IP, timestamp, and the endpoint that was hit?


Remember that the context is a bug.

So sure you don't want to log everything in Prod, but maybe you do in Dev. In that case, a bug would be to push the dev logging configuration to Prod. Oops.

If you have the cleartext password at any point in your codebase, then there is no foolproof way to prevent logging it unintentionally as the result of a bug. You just have to be extra careful (code review, a minimal amount of code manipulating it, a prod-like testing environment with a log scanner, ...)


Because when fatal exceptions happen you want to know what the request was. It helps debug what went wrong.


Not exactly log files, but I once noticed a C coredump contained raw passwords in strings that had been free'd but not explicitly overwritten. Similar to how Facebook "deletes" files by merely marking them as deleted, "free" works the same way in C, the memory isn't actually overwritten until something else writes onto it.


But if you have access to the programs memory you have access to all the post requests anyway.


Aren't coredumps static copies of the memory state at time of termination - usually unplanned? So not really the same thing as having ongoing access to a program's memory; I can't really see a debugging process that would involve viewing memory in a dynamic way, whereas it's somewhat of a concern if coredumps (an important debugging tool) reveal plaintext passwords.


You're getting a lot of what I would consider bad responses.

There are ways, each with downsides, to mitigate the risk of logging requests.

An HMAC with a time component will render the data useless before long. Essentially an OTP. Downside: client time needs to be accurate.

Negotiate a shared key a la NTLM. Downside: more round trips; essentially establishing encrypted transport inside encrypted transport (https).
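
A minimal sketch of the HMAC-with-time idea, assuming the client already holds a per-user shared secret; the 30-second bucket and one-bucket drift tolerance are illustrative, not a vetted protocol:

    # The client proves knowledge of the secret without sending it; the
    # proof goes stale after the time window, so a logged copy is soon useless.
    import hashlib
    import hmac
    import time

    WINDOW = 30  # seconds per time bucket (illustrative)

    def make_proof(shared_secret: bytes) -> str:
        bucket = str(int(time.time() // WINDOW)).encode()
        return hmac.new(shared_secret, bucket, hashlib.sha256).hexdigest()

    # The server recomputes for the current and adjacent buckets to tolerate
    # a little clock drift (the downside mentioned above).
    def verify_proof(shared_secret: bytes, proof: str) -> bool:
        now = int(time.time() // WINDOW)
        return any(
            hmac.compare_digest(
                hmac.new(shared_secret, str(b).encode(), hashlib.sha256).hexdigest(),
                proof)
            for b in (now - 1, now, now + 1))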


Careless mistakes are probably one of the most common types of bug you’ll find in the wild


In the past, I've seen logs monitored for high-entropy strings that could be API keys or passwords. However, in a NoSQL/UUID-using environment, this could be really hard to implement.


Perhaps implement some type of “password canary” - some type of test account(s) with known high-entropy passwords.

Have an automated system send periodic login requests (or any other requests which contain sensitive information that shouldn’t be logged) for this account, and have another system which searches log files for the password.

If it’s ever found, you know something is leaking.
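
A minimal sketch of such a canary loop; the account name, endpoint, and log path are all hypothetical:

    # Log in as a dedicated canary account, then make sure its known
    # password never shows up in the aggregated logs.
    import pathlib

    import requests  # assumes the `requests` package is available

    CANARY_USER = "log-canary"                      # hypothetical account
    CANARY_PASSWORD = "correct-horse-battery-0xC4"  # known high-entropy value
    LOG_DIR = pathlib.Path("/var/log/myapp")        # hypothetical path

    def send_canary_login():
        requests.post("https://example.com/login",
                      data={"username": CANARY_USER,
                            "password": CANARY_PASSWORD},
                      timeout=10)

    def logs_leak_password() -> bool:
        return any(CANARY_PASSWORD in path.read_text(errors="ignore")
                   for path in LOG_DIR.glob("*.log"))

    if __name__ == "__main__":
        send_canary_login()
        if logs_leak_password():
            raise SystemExit("ALERT: canary password found in logs")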


And regularly check for that password on haveibeenpwned and other breached password databases.


Do you trust the database to not have been hijacked to capture checked passwords?

Better advice is to delete accounts you don't use. If that's not possible (now illegal in the EU), scramble the private data and the password.

Download the databases yourself and check them locally.

Changing passwords regularly also limits the damage.


Log line -> high entropy check -> false positive uuid check -> alerts

I’m not seeing how it would be a challenge in a uuid based environment, unless there’s a nuanced detail I’m missing.
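
A rough Python sketch of that pipeline; the entropy threshold, minimum token length, and UUID filter are illustrative:

    # Flag high-entropy tokens in a log line, but skip UUIDs so they don't
    # drown real leaks in false positives.
    import math
    import re
    from collections import Counter

    UUID_RE = re.compile(
        r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$",
        re.IGNORECASE)

    def shannon_entropy(token: str) -> float:
        counts = Counter(token)
        return -sum((n / len(token)) * math.log2(n / len(token))
                    for n in counts.values())

    def suspicious_tokens(line: str, threshold: float = 3.5):
        for token in re.split(r"[\s\"'=:,]+", line):
            if len(token) < 12 or UUID_RE.match(token):
                continue  # too short, or a UUID false positive
            if shannon_entropy(token) >= threshold:
                yield token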



Of course it could have! No API is foolproof


I think the joke is that both Github and Twitter are famous for being built on Rails (although Twitter just-as-famously required a move off of Rails in order to scale)


There was a great keynote about this at this year's RailsConf

The argument was essentially that if Twitter had instead chosen a language more natively inclined toward scalability, they would have needed 10x as many engineers and would not have succeeded at building the product people use to tell each other what bar they're at. That braindead-simple thing (which you can probably scale just fine in any language) is ultimately what drove their success... it wasn't any great technological feat that made Twitter successful, it was pretty much just the "Bro" app that people loved.

(The talk was called "Rails Doesn't Scale" and will be available soon, but RailsConf2018 evidently hasn't posted any of the videos yet.)


Sounds like the same kind of thing that happened with APFS encryption passwords recently, too.

https://www.mac4n6.com/blog/2018/3/30/omg-seriously-apfs-enc...


Where is it showing a password there? I assume this has been fixed because I can't duplicate it on my machine and the screenshot posted on that article doesn't seem to show any plaintext passwords.


Which makes me wonder, is this really a bug or did someone make it look like a "bug"?

Also they say they found no evidence of anyone stealing these passwords, but I wouldn't be surprised if some companies decide not to look too hard just so they can later say "they found no evidence of such an act."


So best practice would be that the cleartext password is never sent to the server, so they could never log it even accidentally. That means the hashing needs to be done client side, probably with JavaScript. Is there any safe way to do that?


nah, that just makes the "hashed password" the equivalent of the cleartext password. Whatever it is your client sends to the server for auth is the thing that needs to be protected. If the client sends a "hashed password", that's just... the password. Which now needs to be protected. Since if someone has it, they can just send it to the server for auth.

But you can do fancy cryptographic things where the server never sees the password and it's still secure, like the entire field of public key cryptography, Diffie-Hellman key exchange, etc.


But wouldn't random salting at least mitigate the disclosure of the password, which people might reuse elsewhere?

edit: considering someone eavesdrops on the connection, otherwise that's a whole different kind of vulnerability


But then you have to store the password instead of a hash of it because it would change each time thanks to the salt. A much worse situation.


You can store things as follows. Store the salted hashed password with its salt server side. When the user wants to login send them the salt and a random salt. Client side hashes the password + salt then hashes that hash with the random value. What am I missing? Probably something since this is something I rolled my own version of when I was a teenager, but it's not immediately obvious to me.


So let me make sure we're on the same page...

--

Server stores hashed-password, hash-salt, and random-salt.

Server sends hash-salt, and random-salt to client.

Client uses user password and hash-salt to generate hashed-password.

Client hashes hashed-password using random-salt.

Client sends hashed-hashed-password to server.

Server grabs the stored hashed-password, hashes it using the stored random-salt, and checks for a match against the client's hashed-hashed-password.

--

So the only thing this actually does is not share the text of the password that the user typed to the server. But at a technical level, now the hashed-password is the new "password".

Let's say the database is compromised. The attacker has the hashed-password. They make a login request to fetch the random-salt, hash their stolen hashed-password with it and send that to the server. Owned.

Along with being more complicated with no real gain, this also takes the hashing away from the server-side, which is a big negative, as the time that it takes to hash a password is a control method used to mitigate attacks.

Just send the plain-text password over HTTPS and hash it the moment it hits the server. There's no issue with this technique (as long as it's not logged!)
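
For illustration, a stdlib-only sketch of hashing the moment the password arrives; the parameters are illustrative, and a dedicated password hash (bcrypt, scrypt, argon2) would normally be preferred over PBKDF2:

    # Hash the plaintext at the edge and never pass it further; only the
    # salt and derived key are stored or handled afterwards.
    import hashlib
    import hmac
    import os

    def hash_password(password, salt=None):
        salt = salt or os.urandom(16)
        key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt, key

    def verify_password(password, salt, expected_key):
        _, key = hash_password(password, salt)
        return hmac.compare_digest(key, expected_key)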


This is true. It does prevent an attacker from reusing a password they recover from your logs. But as others have pointed out a DB breach means all your users are compromised. Thank you.


No, random-salt is not stored permanently but generated at random by the server every time a client is about to authenticate. Alternatively a timestamp would be just as good.


The random-salt has to be stored, at least for the length of the authentication request, because the server needs to generate the same hashed-hashed-password as the client to be able to match and authenticate.

> Alternatively a timestamp would be just as good.

I don't see how that would work at all.

I also don't see the need to go any further in detail about how this scheme will not be better than the current best practices.

Never. Roll. Your. Own. Crypto. https://security.stackexchange.com/questions/18197/why-shoul...


A timestamp would work the same way it works in (e.g.) Google Authenticator.

Incidentally, I really resent how it's impossible to have a discussion of anything at all related to cryptography on HN without somebody bringing up the "never roll your own crypto" dogma.

If the ideas being proposed are bad, please point out why, don't just imply that everyone except you is too stupid to understand.

Edit:

I just reread your comment above and you did a perfectly good job of explaining why it's a bad idea, I must have misunderstood first time round: it's a bad idea because now the login credentials get compromised in a database leak instead of a MITM, which is both more common in practice and affects more users at once.

Sorry for saying you didn't explain why it is a bad idea.


You have to store the salt somehow, because you need to check that the salted, hashed password matches.


The problem with this scheme is that if database storing the salted hashed passwords is compromised, then an attacker can easily log in as any user. In a more standard setup, the attacker needs to send a valid password to log in, which is hard to reverse from the salted hashed password stored server-side. In this scheme, the attacker no longer needs to know the password, as they can just make a client that sends the compromised server hash salted with the random salt requested by the server.


Very true, I had not considered that possibility.


> Store the salted hashed password with its salt server side.

So now _this_ is effectively just "the password", that needs to be protected, even though you're storing it server side.

If an attacker has it, they can go through the protocol and auth -- I think, right? So you prob shouldn't be storing it in the db.

All you're doing is shuffling around what "the password" that needs to be protected is, still just a variation of the original attempt in top comment in this thread.

The reason we store hashed passwords in the db instead of the password itself is of course because the hashed password itself is not enough to successfully complete the "auth protocol", without knowing the original password. So it means a database breach does not actually expose info that could be used to successfully complete an auth. (unless they can reverse the hash).

I _think_ in your "protocol" the "original" password actually becomes irrelevant; the "salted hashed password with its salt" is all you need, so now _this_ is the thing you've got to protect. But you're storing it in the db, so we lose the benefit of not storing the password in the db, which is why we were hashing passwords in the first place!

I guess your protocol protects against an eavesdropper better, but we generally just count on https/ssl for that, that's not what password hashing is for in the first place of course. Which is what the OP is about, that _plaintext_ rather than hashed passwords ended up stored and visible, when they never should have been either.

Crypto protocols are hard. We're unlikely to come up with a successful new one.


It's unclear to me how your random salt would work. From my understanding, you're suggesting smth like:

register: send (username, user_salt, HMAC(user_salt, pwd))

login: send (username). retrieve user_salt. retrieve a server_salt generated randomly. send HMAC(server_salt, HMAC(user_salt, pwd))

But now your password is effectively just HMAC(user_salt, pwd), and the server has to store it in plaintext to be able to verify. Since plaintext passwords in the db are bad, this solution doesn't sound too attractive, unless you were suggesting something else.


Nope, that's what I was suggesting and I see now where it's weak.


"Since if someone has it, they can just send it to the server for auth" unless it's only good for a few moments (the form you type it into constantly polling for a new nonce).


The server would not be able to verify a changing hash without knowing the password


Or you could just use PAKE or SRP.


Not really... it's not that simple. You could use the time of day as a seed for the hash, for example. There are tradeoffs to be made, which is partly why they don't do it, but the story isn't as simple as "the hash becomes the password".


If the client knows to use the time of day then an attacker also does.

This is exactly the same: the seeded hash is the password.


Then how does the server check that it's valid?


The time of day is known to both the client and the server right? So they check to see that they get the same hash.


And how do you propose to do that when the clocks aren't synchronized? Clock drift is exceptionally common. Not everyone runs NTP or PTP, and probably even fewer use PTP. On desktop/laptop clients it's typically configurable whether or not to attempt clock sync, and I've never seen the level of synchronization documented for PCs. High-precision PTP usually requires very expensive hardware, not something to be expected of home users or even a startup, depending on the industry.


Well how do you think TOTP works?


TOTP works by having huge margins of errors (minutes worth). The original post is suggesting using time of day as seed.


The point was you could do similarly here. Just have a margin of like 30 seconds (or whatever). I never said you have to do this to nanosecond precision.


But the password is only known to the client?


Only if the server only keeps around the hash -- which is why I said there are trade-offs to be made. The point I was making was that the mere fact that you're sending a hash does not trigger the "hash-becomes-password" issue; that's a result of secondary constraints imposed on the problem.


Makes sense, and then you're getting into something akin to SSH key pairs, and I know from experience that many users can't manage that especially across multiple client devices.


There are probably ways to make it reasonable UX, but they probably require built-in browser (or other client) support.

Someone in another part of this thread mentioned the "Web Authentication API" for browsers, which I'm not familiar with, but is possibly trying to approach this?


Web Auth API (authn) does try to make it usable.

It ties in with the credential management API (a way to have the browser store login credentials for a site, a much less heuristic-based approach than autocomplete on forms). The basic principle is: generate a key pair and pass the public key back to be sent to the server during registration; on login, a challenge value is generated for the client to sign. IIRC the JS code never sees the private key, only the browser does.


How does Web Auth API and Credentials Management API address the "manage across multiple client devices" issue?


Useless unless browsers get their act together and encrypt their autocomplete data. I would never trust any API loosely associated with it.


I believe you could use a construction like HMAC to make it so that during authentication (not password setting events) you don't actually send the token. But if someone is already able to MITM your requests, what are the odds they can't just poison the JavaScript to send it in plaintext back to them?


I think their goal is to still use https, but stop anything important from leaking if a sloppy server-side developer logs the full requests after TLS decryption (as Twitter did here).


Couldn't you hash it client-side, then hash it again server-side?


How is that any different to only hashing server-side?


Password reuse wouldn't be as big of an issue if each site hashed the password a different way


No, there fundamentally isn't, because you can't trust the client to actually be hashing a password. If all the server sees is a hash, the hash effectively is the password. If it's stolen, a hacker can alter their client to send the stolen hash to the server.


If a hash is salted with a domain it won't be use-able on other websites. You should additionally hash the hash on the server, and if you store the client hashes, you can update the salts on next-sign in. A better question is why clients should be sending unhashed passwords to servers in the first place. https://medium.com/the-coming-golden-age/internet-www-securi...


This discussion is only relevant with an attacker that can break tls. A hash that such an attacker couldn't reverse might be slow on old phones so there is a tradeoff.

Also, hashed passwords shouldn't be logged either.



>That means the hashing needs to be done client side, probably with JavaScript. Is there any safe way to do that?

No [0,1...n]. Note that these articles are about encryption, but the arguments against javascript encryption apply to hashing as well.

Also consider that no one logs this stuff accidentally to begin with. If the entity controlling the server and writing the code wants to log the passwords, they can rewrite their own javascript just as well as they can whatever is on the backend. There's nothing to be done about people undermining their own code.

[0]https://www.nccgroup.trust/us/about-us/newsroom-and-events/b...

[1]https://tonyarcieri.com/whats-wrong-with-webcrypto


> consider that no one logs this stuff accidentally to begin with

It's possible. You create an object called Foo (possibly a serialized data like a protobuf, but any object), and you recursively dump the whole thing to the debug log. Then you realize, oh, when I access a Foo, sometimes I need this one field out of the User object (like their first name), so I'll just add a copy of User within Foo. You don't consider that the User object also contains the password as one of its members. Boom, you are now accidentally logging passwords.


Any user object on the server should only ever have the password when it is going through the process of setting or checking the password, and this should be coming from the client and not stored. So, your case of logging the user would only be bad at one of those times. Otherwise like in the case of a stored user you should just have a hashed password and a salt in the user object.


Ok.

Creating a User object that holds a password (much less a password in plaintext) seems next level stupid to begin with, but fair enough, I guess it could happen.


> Also consider that no one logs this stuff accidentally to begin with.

It can happen if requests are logged in middleware, and the endpoint developer doesn't know about it. It's still an extremely rookie mistake though, regardless of whether it was done accidentally or on purpose.


As others have stated, you'd just be changing the secret from <password> to H(<password>). The better solution is using asymmetric cryptography to perform a challenge-response test. E.g. the user sets a public key on sign up and to login they must decrypt a nonce encrypted to them.
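
A sketch of that challenge-response idea using signatures (a common variant of the decrypt-a-nonce scheme described above) with the third-party `cryptography` package; this is an illustration, not a vetted protocol:

    # Sign-up: the client generates a key pair and registers only the
    # public key. Login: the server issues a random nonce, the client
    # signs it, and the server verifies the signature.
    import os

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey)

    client_key = Ed25519PrivateKey.generate()
    stored_public = client_key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)

    nonce = os.urandom(32)          # server-issued challenge
    signature = client_key.sign(nonce)

    try:
        Ed25519PublicKey.from_public_bytes(stored_public).verify(signature, nonce)
        print("authenticated")
    except InvalidSignature:
        print("rejected")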


Instead of trying to hash the password, just use SSL so the whole request is encrypted. But that doesn't fix servers accidentally logging passwords.

Maybe there could be a standard way to signal the beginning and end of a password string so logging software can redact that part.


You could do client ssl certs and just skip the password. It would be more work for the user though.


That would transfer it from something you know (a password) to something you have (a device with SSL cert installed) which are meant to protect against different problems.


Hmm why should passwords (hashed or not) be stored in logs though? I don’t see a reason for doing that. You could unset it (and/or other sensitive data) before dumping them into logs.


They shouldn’t. It was an unintentional bug


They shouldn't. It was a mistake.


Probably logging the HTTP/S requests, which included usernames & passwords in plaintext.


Wouldn't it be better to never even send the password to the server, but instead performing a challenge-response procedure? Does HTTP have no such feature built in?


So the us commander in chief can now be impersonated on twitter? I am shocked!


> implementing plans to prevent this bug from happening again

How does one do that?


Being simplistic, perhaps an automated test with a known password on account creation or login (e.g. "dolphins") and then a search for the value on all generated logs.


Is it there on the github blog? Any links would be appreciated


From what I've read it only applied to a small number of users and they were notified by email.


For people that know more about web security than I: Is there a reason it isn't good practice to hash the password client side so that the backends only ever see the hashed password and there is no chance for such mistakes?


Realize the point of hashing the password is to make sure the thing users send to you is different than the thing you store. You'll still have to hash the hashes again on your end, otherwise anyone who gets access to your stored passwords could use them to log in.


In particular, the point is to make it so that the thing you store can't actually be used to authenticate -- only to verify. So if you're doing it right, the client can't just send the hash, because that wouldn't actually authenticate them.


But at least, with salt, it wouldn't be applicable to other sites, just one. Better to just never reuse a password though. Honestly sites should just standardize on a password changing protocol, that will go a long way towards making passwords actually disposable.


I don't think a password changing protocol would help make passwords disposable. Making people change passwords often will result in people reusing more passwords.


No the point is for password manager. The password manager would regularly reset all the password.... until someone accesses your password manager and locks you out of everything!


If by protocol you mean a standard, consistent API that can be used by password managers to update passwords automatically, then I completely agree.


Ultimately, what the client sends to the server to get a successful authentication _is_ effectively the password (whether that's reflected in the UI or not). So if you hash the password on the client side but not on the server, it's almost as bad as saving clear text passwords on the server.

You could hash it twice (once on the server once on the client) I suppose, but I'm not entirely sure what the benefit of that would be.


A benefit would be that under the sort of circumstance in the OP, assuming sites salted the input passwords, the hashes would need reversing in order to acquire the password and so reuse issues could be obviated. But I don't think that's really worth it when password managers are around.

I'm imagining we have a system where a client signs, and timestamps, a hash that's sent meaning old hashes wouldn't be accepted and reducing hash "replay" possibilities ... but now I'm amateurishly trying to design a crypto scheme ... never a good idea.


> meaning old hashes wouldn't be accepted and reducing hash "replay" possibilities

How would the server even verify the hash, then?


Verify the signature, check the time, use the hash as if it were the password to re-hash and compare with DB?


I think there is value in that. I would still be sure to hash it a second time on the server.

My guess is that this isn't popular because of the added client side complexity.

I'm also curious if anyone has considered or implemented this idea.


Ah answered elsewhere, if the client sends the hash and you log the hash then you still have a problem. The user should change passwords.

Although I think this still improves the situation if the password is reused. I.E. I can't use the logged hashed password on other sites.


Assuming that you are referring to browsers as the client here. One simple reason is that client-side data can always be manipulated, so it does not really make any difference. It might just give a false sense of safety but does not change much.

In case we are talking about multi-tier applications where probably LDAP or AD is used to store the credentials then the back end is the one responsible for doing the hashing.


I can't think of a good reason not to hash on the client side (in addition to doing a further hash on the server side -- you don't want the hash stored on the server to be able to be used to log in, in case the database of hashed passwords is leaked). The only thing a bit trickier is configuring the work factor so that it can be done in a reasonable amount of time on all devices that the user is likely to use.

Ideally all users would change their passwords to something completely different in the event of a leak. But realistically this just doesn't happen -- some users refuse to change their passwords, and others just change one character. If only the client-side hash is leaked rather than the raw password, you can greatly mitigate the damage by just changing the salt at the next login.


If you don’t have control on the client, it’s a bad idea: Your suggestion means the password would be the hash itself, and it wouldn’t be necessary for an attacker to know the password.


For one, you expose your hashing strategy. Not that security by obscurity is the goal; but there's no real benefit. Not logging the password is the better mitigation strategy.


>Due to a bug

>Write passwords to a log

Security level - Twitter.


It's funny, I wonder if hearing about that github bug made them check if they had committed the same mistake... only to find that they did :-)


I think I, and everyone here, should check as well. If capable, security-minded companies can make such a mistake, so can you.


We schedule log reviews just like we schedule backup tests. (Similar stuff gets caught during normal troubleshooting, but reviews are more comprehensive.)

It only takes one debug statement leaking to prod - it has to be a process, not an event.


Why not automate this?

Create a user with an extremely unusual password and create a script that logs them in once an hour. Use another script to grep the logs for this unusual password, and if it appears fire an alert.

Security reviews are important but we should be able to automate detection of basic security failures like this.


It would also be a good idea to search for the hashed version of that user’s password. It’s really bad to leak the unencrypted password when it comes in as a param, but it’s only marginally better to leak the hashed version.


This only works if you automate every possible code path. If you're logging passwords during some obscure error in the login flow then an automated login very likely won't catch it.


True, but it is more effective than doing nothing.


But it's not a choice of doing this or nothing. It's a choice of doing this or something else. That something else may be a better use of your time.


Log review is an awesome idea. Do you mind divulging your workplace?


Log review is done for every single project at my workplace too (Walmart Labs). So I don't think this is a novel idea. And it does not stop there. Our workplace has a security risk and compliance review process which includes reviewing configuration files, data on disk, data flowing between nodes, log files, GitHub repositories, and many other artifacts to ensure that no sensitive data is being leaked anywhere.

Any company that deals with credit card data has to be very very sure that no sensitive data is written in clear anywhere. Even while in memory, the data needs to be hashed and the cleartext data erased as soon as possible. Per what I have heard from friends and colleagues, the other popular companies like Amazon, Twitter, Netflix, etc. also have similar processes.


It's novel to me; never worked anywhere that required high level PCI compliance or that scheduled log reviews. Adhoc log review, sure. I think it's a fantastic idea regardless of PCI compliance obligations.


We just realised the software I'm working on has written RSA private keys to the logs for years. Granted, it was at debug level and only when using a rarely-used functionality, but still.


For whatever it's worth, I do security assessment (pentesting and the like).

Checking logs for sensitive data is a routine test when given access, at least.

Being given that access is disappointingly not routine though.


We also do log reviews, but 99% of the time they simply complain about the volume rather than the contents.

Do you enable debug logging in production? In our setup we log at info and above by default, but then have a config setting that lets us switch to debug logging on the fly (without a service restart).

This keeps our log volume down, while letting us troubleshoot when we need it. This also gives us an isolated time of increased logging that can be specifically audited for sensitive information.


Yep, glad I read this thread. We were making the same simple mistake.


We aren't.

Now.

(We caught ourselves doing it 4-5 months back, and went through _everything_ checking... Only random accident that brought it to the attention of anyone who bothered to question it too... Two separate instances by different devs of 'if (DEBUG_LEVEL = 3){ }' instead of == 3 - both missed by code reviews too...)


This is why you should turn on compiler warnings and heed them. It would have caught this.


And consider “Yoda Notation”[0], which some people find annoying, but I found an easy hurdle to clear:

  if ( 3 = DEBUGLEVEL ) 
wouldn’t pass the parser because you can’t assign to an rvalue.

[0] https://en.wikipedia.org/wiki/Yoda_conditions


I know it's irrational but I really dislike Yoda notation. Every time I encounter one while reading code I have to take a small pause to understand them, I don't know why. My brain just doesn't like them. I don't think I'm the only one either, I've seen a few coding styles in the wild that explicitly disallow them.

Furthermore any decent modern compiler will warn you and ask to add an extra set of parens around assignments in conditions so I don't really think it's worth it anymore. And of course it won't save you if you're comparing two variables (while the warning will).


I don't think "Yoda notation" is good advice. How do you prevent mistakes like the following with Yoda notation?

  if ( level = DEBUGLEVEL )
When both sides of the equality sign are variables, the assignment will succeed. Following Yoda notation provides a false sense of security in this case.

As an experienced programmer I have written if-statements so many times in life that I never ever, even by mistake, type:

  if (a = b)
I always type:

  if (a == b)
by muscle memory. It has become a second nature. Unless of course where I really mean it, like:

  if ((a = b) == c)


FWIW I'm pretty sure both the devs who did this and both the other devs who code reviewed it would claim the same thing...

Like other people are saying - the toolchain should have caught this. And it should have, I don't remember how it'd been disabled...


One way to not write any bugs is to not write any code.

If you must write code, errors follow, and “defence in depth” is applicable. Use an editor that serves you well, use compiler flags, use your linter, and consider Yoda Notation, which catches classes of errors, but yes, not every error.


One of the sides is (should be) a CONSTANT. And you can't assign a value to a constant.


Why should one of the sides be a constant? There is plenty of code where both sides are variables.


What about this?

if (env('LOG_LEVEL') = 3) {}

Would throw and everything would be ok. Otherwise, use constants.

Also, lint against assignment in if/while conditions. If you want to assign in those conditions, disable linting for the line and make it explicit.


And if you write F# or Java code?


These kinds of issues are excellent commercials for why the strictness of a language like F# (or OCaml, Haskell, etc), is such a powerful tool for correctness:

1) Outside of initialization, assignment is done with the `<-` operator, so you're only potentially confused in the opposite direction (assignments incorrectly being boolean comparisons).

2) Return types and inputs are strictly validated so an accidental assignment (returning a `void`/`unit` type), would not compile as the function expects a bool.

3) Immutable by default, so even if #1 and #2 were somehow compromised the compiler would still halt and complain that you were trying to write to something unwriteable

Any of the above would terminally prevent compilation, much less hitting a code review or getting into production... Correctness from the ground up prevents whole categories of bugs at the cost of enforcing coding discipline :)


This is why I love F#, but if you jump between F# and C# your muscle memory will suffer.


In Java you usually use `.equals()` to test equality, or if your argument is a boolean value:

    if (myVar) {
        //
    }
Instead of `myVar == true/false`.

The accidental assignment is much less common due to the way equality is tested in Java.

Also, `null` comparisons being assigned will fail to compile (assuming var is a String here):

    TestApp.java:6: error: incompatible types: String cannot be converted to boolean
        if (var = null) {


But in Java it’s much easier to make the mistake of using == instead of equals if you're always jumping between languages.


Sure, but that is such a common mistake that all Java IDE's warn you when you try to use == for Strings and normal non-number objects.


For java code, use final so that you have constants.


Note these are only constant pointers. Your data is still mutable if the underlying data structure is mutable, (e.g. HashMap). Haven't used Java in a few years, but I made religious use of final, even in locals and params.


Good point regarding mutable data. But since we were talking about loglevels, I don't think it's a problem there.


Yep - I pointed out that I used to do this in Perl back in '95 or so. At least one of the devs wasn't born then, none of them had ever used Perl.

(I'm not even sure how they'd ended up with a Grails configuration that'd let them do this anyway...)


Thanks for sharing. I am a less experienced programmer and have never seen this before. The name is so wonderful.


Apt day for discussing Yoda condition :)


Yoda makes code more confusing to read at a glance so I would recommend against it.


I don't really see how unless you've never actually read imperative code before; either way you need to read both sides of the comparison to gauge what is being compared. I'm dyslexic and don't write my comparisons that way and still found it easy enough to read those examples at a glance.

But ultimately, even if you do find it harder to parse (for whatever reason(s)) that would only be a training thing. After a few days / weeks of writing your comparisons like that I'm sure you'll find it more jarring to read it the other way around. Like all arguments regarding coding styles, what makes the most difference is simply what you're used to reading and writing rather than actual code layout. (I say this as someone who's programmed in well over a dozen different languages over something like 30 years - you just get used to reading different coding styles after a few weeks of using it)


Consistency is king.

Often when I glance over code to understand what it is doing I don't really care about values. When scanning from left to right it is easier when the left side contains the variable names.

Also I just find it unnatural if I read it out loud. It is called Yoda for a reason.


But again, none of those problems you've described are things you can't train yourself out of. Source code itself doesn't read like how one would structure a paragraph for human consumption. But us programmers learn to parse source code because we read and write it frequently enough to learn to parse it. Just like how one might learn a human language by living and speaking in countries that speak that language.

If you've ever spent more than 5 minutes listening to arguments and counterarguments regarding Python whitespace vs C-style braces - or whether the C-style brace should append your statement or sit on its own line - then you'd quickly see that all these arguments about coding styles are really just personal preference based on what that particular developer is most used to (or pure aesthetics on what looks prettiest - but that's just a different angle of the same debate). Ultimately you were trained to read

    if (variable == value)
and thus equally you can train yourself to read

    if (value == variable)
All the reasons in the world you can't or shouldn't are just excuses to avoid retraining yourself. That's not to say I think everyone should write Yoda-style code - that's purely a matter of personal preference. But my point is arguing your preference as some tangible issue about legibility is dishonest to yourself and every other programmer.


In this specific case DEBUGLEVEL should be a constant anyways, and thus assignment should fail, no? Also kind of denoted by being all caps.


Conventions cause assumptions.


There are always assumptions being made, no matter what you do. But "uppercase -> constant" is such a generic and cross-platform convention that it should always be followed. This code should never have passed code review for this glitch alone.


Which language would stop/warn you assigning the value of a constant to a variable? Doesn't "var = const" just work in most languages?


It's yoda condition, so const = var would fail.


Yes! I've been doing this in C for years. It's a little weird to read at first, but every now and then it really saves you.


Yeah, exactly. This error shouldn't ever happen, period. All modern development tools give big fat warnings when you do this.


“Should” is a bad word. If you are basing a conclusion off of a “should,” you are skating on thin ice.


This is just one of the many reasons why I like Python.

    > if a = 3:
           ^
    SyntaxError: invalid syntax


Rust has an interesting take on that, it's not a syntax error to write "if a = 1 { ... }" but it'll fail to compile because the expression "a = true" returns nothing while "if" expects a boolean so it generates a type check error.

Of course a consequence of that is that you can't chain assignments (a = b = c), but it's probably a good compromise.


Well in Rust you couldn't have = return the new value in general anyway, because it's been moved. So even if chained assignment did work, it'd only work for Copy types, and really, a feature that only saves half a dozen characters, on a fraction of the assignments you do, that can only be used a fraction of the time, doesn't seem worth it at all.


People (at least me) ignore warnings quite often; they aren't a safe haven if you ask me.


Hey no problem, just add -Werror to your compiler flags (C/C++/Java) or '<TreatWarningsAsErrors>true</TreatWarningsAsErrors>' to your csproj (C#).


This! Treat every warning as a failure, ideally in your CI system so people can't forget, and this problem (ignoring warnings..) goes away.

You will have a better, more reliable, and safer codebase once you clean up the legacy mess and turn this on..


I agree. Having worked in a project with warnings as errors on (c++) I found it annoying at first but it made me a better coder in the long run.

Plus you get out of the habit of not reading output from the compiler because there are so many warnings...


Unless you follow a zero warning policy they are almost useless. If you have a warning that should be ignored add a pragma disable to that file. Or disable that type of warning if it's too spammy for your project.


I'm curious, how often do you actually need to print out the password in a development context?


I've been working on authentication stuff for the last two weeks and the answer is "more than you'd like".

But luckily it's something we cover in code reviews and the logging mechanism has a "sensitive data filter" that defaults to on (and has an alert on it being off in production.)


seems like a bug in your platform


What developer in their right mind would ever log a password in the first place? Are we devolving as a profession?


Can be more accidental. e.g. dumping full POST data in a more generic way (e.g. on exceptions) that happens to also be applied on the login page.


Wasn't this GitLab, not GitHub?


From the email I received: "During the course of regular auditing, GitHub discovered that a recently introduced bug exposed a small number of users’ passwords to our internal logging system, including yours. We have corrected this, but you'll need to reset your password to regain access to your account."


"[We] are implementing plans to prevent this bug from happening again" sure makes it sound like this bug is still happening. Should we wait a couple of days before changing passwords? Will it end up in this log right now, just like the old one?


That sounds more like "We're adding a more thorough testing and code-review process for our password systems to prevent developers from accidentally logging unhashed passwords in the future".


No, it sounds like a reasonable bugfixing strategy. Identify the bug, identify the fastest way to resolve it, then once it's fixed figure out how to ensure it never happens again, and what to do if it does.


I think you read "prevent this bug from happening again" to mean "prevent this particular problem from happening one more time", while the blogpost probably means something like "prevent this class of bug from occurring in the future"


~~Sounds more like "we fixed this bug, and will ignore the processes that led to it happening" bullshit to me.~~


And this comment sounds more like "DAE hate twitter."

Their response is acceptable and textbook. Doesn't really seem like the appropriate place to wage the battle.


Yeah, bad kneejerk response on my part. Sorry.


A couple of steps you can take to reduce the chances of accidentally putting sensitive information in a log.

1. Make a list of all sensitive information that the test users in your test environment will be giving to your application.

As part of your test procedure, search all logs for that information. This can be as simple as having a text file with all the sensitive information, and doing a 'grep -F -f sensitive.txt' on all your log files.

2. If you are working in some sort of object oriented language, make separate classes for sensitive data, such as a class Password to hold passwords, a class CardNum to hold credit card numbers, and so on.

Make the method that whatever you use for logging uses to convert objects to strings return a placeholder string for these objects, such as "Password", "Credit Card Number", and so on.
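
A minimal Python sketch of idea 2; the class name, placeholder text, and reveal() helper are just examples:

    # A wrapper type whose string/repr forms never reveal the secret, so a
    # stray log statement or debug dump prints a placeholder instead.
    class Password:
        __slots__ = ("_value",)

        def __init__(self, value: str):
            self._value = value

        def __str__(self) -> str:
            return "<password redacted>"

        __repr__ = __str__

        def reveal(self) -> str:
            """Only the hashing code should ever call this."""
            return self._value

    # logging.info("login attempt: %s", Password("hunter2"))
    # -> "login attempt: <password redacted>"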


Probably better than 2 is to use actual privilege separation between sensitive data and as much of your service as possible. Handle passwords and password changes in a separate microservice which exchanges it quickly for a session cookie, so that the bulk of your application logic doesn't have passwords in memory at all. For credit card numbers, do something like what Stripe does where one API endpoint exchanges the card number for a token and every other endpoint just needs the token, which means that Stripe can route that one URL to services they pay extra attention to.

This assumes that passwords and credit card numbers are more sensitive than session cookies or tokens, which is usually true because people tend to share passwords across sites and definitely share credit card numbers across sites. So the risk of logging cookies/tokens is much lower. Also, you can revoke all cookies and token much more easily than you can force everyone to change passwords or credit card numbers.


While good advice, your proposal is not necessarily better than #2 because #2 is something that happens automatically once the password hits your object model.

If rather than a naked string you have a Password class with literally no way to extract the plain text password then you can be positive that any code that uses it will never accidentally log the password.

In contrast, if you rely on microservice tokenization you can still accidentally log your password before your tokenization happens, just like people are accidentally logging passwords before they are bcrypted.

Both proposals have a problem of logging raw requests or raw service calls (outside of your object model).

Where plausible it would probably be best for microservice calls to only take already bcrypted passwords, and error out if it detects one that isn't, so there is zero chance of accidentally logging a plain text password when calling a microservice.

Again, an actual object (e.g. HashedPassword) shines here, because your code can automatically detect bad values the instant it hits your object model, and refuse to properly log or give access to anything that looks like it isn't already hashed.


You have to get the password into the object model first, though—and your object model can simply not contain a Password type in this process (and only in the password-handling one). You shouldn't pass the password on as a naked string and then drop it—you should prevent the password from getting past the auth service at all.

I think the risk of logging raw passwords in the auth service model is lower because logging in a password-specific microservice is an intuitively dangerous thing to do, so both code authors and code reviewers will pay heightened attention to it. Meanwhile, "log all requests" is a common thing to want to do and will raise fewer alarms in a primarily-business-logic service. (In fact, another usually reasonable thing to do is "log all requests that don't parse properly and return 500...")


Observation: security really does happen in low-level code quality like this.

This is a hard notion to accept because it has to be built into the organization. No silver bullet, no consultant, no five-step methodology. Just rigorous software engineering up and down the stack, and care, and attention to detail.


You could also grep logs as part of automated testing and system monitoring. Checking for known passwords for test accounts known to be freshly created, logging in, etc.


Do you know of an elegant way to do this when working with protobufs? Ideally, mark a field 'password', and the generated class' __str__ equivalent returns ""


Check out protobuf annotations


We do this extensively. A field can be marked as "redacted", and then interceptors can do things like:

1. (most pertinent in this discussion) The logging framework can redact fields before emitting log entries. 2. Endpoints can redact the data unless the client explicitly requests (and is allowed to receive) unredacted data. 3. Serialization mechanisms (e.g. Gson) can be configured to redact data before serializing. (Again, probably can't always do this, but can make that the default for safety.)

It's also very straightforward to hook up as a Java annotation that does the same things.


I highly recommend using a password manager. I finally bit the bullet and started using 1Password a few weeks ago, and I haven't looked back since. It's just so much better than having to remember a thousand different passwords.

Besides securely managing passwords, you can also use a password manager to secure your digital legacy. 1Password has a feature where you can print out "emergency kit" sheets that has the information required to access your password vault. I printed out two of these sheets and gave them to trusted family members in sealed envelopes. In the event that I become incapacitated, they will be able to access my accounts.


Time for me to advertise my personal setup again!

I use KeePassXC [1] with Syncthing [2] to synchronize my passwords between machines. No third-party!

[1] : https://keepassxc.org/

[2] : https://syncthing.net/


Same thing here. I ditched the online password managers a few years back for a similar setup and it's been just about as good as lastpass was, with the added benefit of being stored locally.


Does anyone have a recommendation for a good keepass client for iOS? Is MiniKeePass still the best option? I've been wanting to switch to KeePassXC + something for iOS for a while but I'm not sure what the best way to go is.


On KeePassXC’s website [1], they recommend MiniKeePass [2] and KeePass Touch[3]. I don’t own any iOS device, so I have tried neither.

[1] https://keepassxc.org/docs/#faq-platform-mobile

[2] https://itunes.apple.com/us/app/minikeepass/id451661808?mt=8

[3] https://itunes.apple.com/us/app/keepass-touch/id966759076?mt...


I use MiniKeePass. Don’t love it, but don’t know a better option.


As I pointed in my reply, you could also try KeePass Touch.


What if you’re on someone else’s machine?


The last time I needed a password on a someone else's machine, I just looked it up on my phone and typed it in.


Not GP, but KeePass user: I store my KeePass database on a small thumb drive (SanDisk Cruzer Fit), together with a copy of the KeePass executable. If I absolutely need to decrypt my password database on someone else's machine I can take the "secure" software from the USB and hope for the best. The USB also stores a copy of Truecrypt and a large Truecrypt container with backups of my encrypted private keys (PGP, SSH).


Key logger + making a cron job that copies everything off your drive = 5 minutes of work? I hope you trust the folks you use this setup on...


Key logger + screenshots and you also get access to a 1Password account

No matter what you do if the computer you are using isn't trustworthy you're losing.


Yeah, totally with you — don't trust devices you (or your employer) doesn't own. I'm borderline still where I trust my employer's devices with my personal passwords sometimes, but even that seems a bit iffy.


I'm going to plug https://bitwarden.com/ since nobody else has yet (I'm not affiliated). Open source, clients for everything, free for personal use, imports from other managers. I had been using a GPG encrypted text file for a long time, then later KeePass (and variants) on dropbox. I switched to Bitwarden a while ago and have been very happy with the whole thing.


Same here, used to use KeePass and then LastPass, then switched to BitWarden and have been happy with everything so far.


KeePass and Dropbox works great for me as a free alternative. I use the Kee plugin on Firefox and KeePass2Android on my phone.

I set it up to need both a private key and password to unlock my password DB. The private key moves around on a thumbdrive only (never in Dropbox).

I like that the only parts of this system I have to trust are open source.


How do you manage the private key on your phone?


I plug my phone into my computer and transfer it over USB. This is actually one of the primary reasons I switched from iPhone to Android.


Everyone should be using a password manager. You can't really trust the average joe to be able to make secure passwords for the potentially dozens or hundreds of sites and services, and even if they do, they probably use just one secure password for everything.

I just wish there was more seamless support for apps to use 1Password to paste in passwords. There are still sites that prevent pasting into password fields!


I have this fantasy that Apple starts rejecting App submissions that don't allow use of a password manager.


It's not that they don't allow the use, it's that they don't have a convenient 1Password icon next to the password field. I've noticed some apps have that. Not sure if it requires some specific integration or some open protocol.


The password manager integration is a public standard. More annoyingly, though, apps can also find out if you've pasted into a field and then immediately clear it (some rinkydink banking apps do this.) So it's a 2-pronged problem.


+1 for 1Password. Never looked back. Great for all sorts of passwords/credit cards/private keys. It also syncs to the 1Pass app on your phone.


+1 from me as well. Great for storing literally anything sensitive, syncs flawlessly across devices (I currently use dropbox to sync the vault, but it supports other options & they have their own syncing/account system (that is not required to use 1password)).

It is not open source but the vault format is open.

I've no affiliation, just a satisfied customer.


+2 for 1Password. Being able to use it as an MFA device has been brilliant. https://support.1password.com/one-time-passwords/


> I highly recommend using a password manager.

I really wish websites would support use of client side TLS certificates as part of the authentication process. Combining that with a username and password would give you two-factor authentication.


Client side TLS certificates get sent in the clear before you authenticate the server. (You can send them in a renegotiation, but renegotiation has been a historic source of both implementation and protocol security bugs because it does complicated things to TLS state.) So you don't want a client-side certificate that includes your name; that's a huge privacy leak.

You could imagine a scheme where you give a user a certificate with a random subject, and you have a server-side map of random string to user account (so leaking that map isn't the end of the world like leaking passwords; it merely reintroduces the privacy leak above). I recall some proposals for that several years ago. Today, Web Authentication does something effectively equivalent, as does U2F, although they don't involve TLS client certificates specifically.


In TLS 1.3 client certs are sent over an encrypted link, and a reasonable client can and should wait for Finished from the server to arrive, at which point they're entirely sure of who their recipient is too.

Another nice thing is that TLS 1.3 servers can send a CertificateRequest asking for a particular _type_ of certificate, so (if that's ever used in anger) it lets us have clients that don't need to waste the user's time when they don't actually have a suitable certificate anyway. In earlier versions servers could only hint about which CAs they trust, not anything else.
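
If anyone wants to experiment with that, here's a minimal sketch of a TLS 1.3 server that demands a client certificate, using Python's stdlib ssl module (the file names and port are placeholders, error handling omitted):

    import socket
    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3   # client cert travels encrypted in 1.3
    ctx.load_cert_chain("server.pem")              # server's own cert + key (placeholder file)
    ctx.load_verify_locations("client-ca.pem")     # CA we accept client certs from (placeholder)
    ctx.verify_mode = ssl.CERT_REQUIRED            # makes the server send a CertificateRequest

    with socket.create_server(("0.0.0.0", 8443)) as listener:
        with ctx.wrap_socket(listener, server_side=True) as tls_listener:
            conn, addr = tls_listener.accept()     # handshake fails without a valid client cert
            print(conn.getpeercert().get("subject"))
            conn.close()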


Oh nice, that might be enough to put me back on team client certs!


> So you don't want a client-side certificate that includes your name; that's a huge privacy leak.

If it matches the username I have on a website like reddit or HN, then is it really a privacy issue? Anyone, regardless of whether they're logged in or not, can see posts I've made under my username. Though what you say can be an issue for websites where privacy from other users is expected (e.g. banks).

> Today, Web Authentication effectively does something effectively equivalent, as does U2F

Both of those seem to rely on HTTP, while TLS could work with any application level protocol.


They can't see that the posts are coming from your IP address, though. That's one of the things TLS protects—I can post from a coffee shop and nobody at the coffee shop can know (except perhaps by traffic analysis) that the person at the table next to them is the person with this username.


I looked into this and the user experience involved is very poor, in particular the browser interfaces. It sounds like a chicken-and-egg problem. On top of this, there's the cleartext sending of the client certificate, as another poster mentioned.


You might want to look into the new web authentication API.


>It's just so much better than having to remember a thousand different passwords.

Login by email should really become a thing. There's just no reason to store passwords for most sites where you can just stay logged in indefinitely. On rare occasion you need your login cookie refreshed, just send a new link to your email. The burden of remembering a thousand secure and unique passwords dissolves immediately.
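
The mechanics are simple, too. A rough sketch of the token half of an email login link, as a signed, expiring token (the secret, TTL, and token format here are purely illustrative; a real deployment would also rate-limit the emails and make the tokens single-use):

    import base64, binascii, hashlib, hmac, time

    SECRET = b"server-side signing key"   # placeholder; keep it out of source control
    TOKEN_TTL = 15 * 60                   # login links expire after 15 minutes

    def make_login_token(email: str) -> str:
        payload = f"{email}|{int(time.time())}".encode()
        sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
        return base64.urlsafe_b64encode(payload + b"|" + sig.encode()).decode()

    def verify_login_token(token: str) -> str | None:
        try:
            email, ts, sig = base64.urlsafe_b64decode(token).decode().split("|")
            issued_at = int(ts)
        except (binascii.Error, UnicodeDecodeError, ValueError):
            return None
        expected = hmac.new(SECRET, f"{email}|{ts}".encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected) or time.time() - issued_at > TOKEN_TTL:
            return None
        return email   # caller emails the link, then sets the session cookie on click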


I keep all my passwords in a text file. I can't imagine remembering them all. I suppose I should keep that file encrypted and synced to multiple devices with rsync or so. Would a password manager give me any advantage over this scheme?


A password manager will have an integrated password generator where you can configure the spec (include special chars, brackets, custom characters, etc., or not), and you can keep password spec "favorites". So you can quickly generate a 20-char password with special chars and accents, or an 8-char, letters-and-digits-only one for those websites that require that (see the sketch at the end of this comment).

It will allow you to organize the passwords in a hierarchical way with folders (banks, administration, forums, whatever), and set icons.

It will also keep the date of the last time you modified it. Sometimes this can be useful to know if you are impacted by a breach revealed after the fact. You can also make passwords expire if you like.

You can also add extra data in a way that doesn't clutter the main view. This can be interesting when credentials are more than login/password. For example you could add a PIN there. For my car radio there is a code to enter to make it work after the battery dies, I added the entire procedure to the extra data as I always forget it and it's not intuitive.

I just checked, I have 957 passwords in my KeePass.
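
To make the generator bit above concrete: under the hood it's just a CSPRNG picking characters from a configurable alphabet, and the "favorites" are saved argument combinations. A minimal sketch (the special-character set is a placeholder):

    import secrets
    import string

    def generate(length: int = 20, use_digits: bool = True, specials: str = "!@#$%^&*") -> str:
        """Random password drawn from a configurable character set."""
        alphabet = string.ascii_letters + (string.digits if use_digits else "") + specials
        return "".join(secrets.choice(alphabet) for _ in range(length))

    # generate(8, specials="")  -> letters and digits only, for the picky sites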


Yes, a password manager is just an encrypted database for your passwords. 1Password synchronizes all of your passwords across devices and makes sure everything is secure. You only need to remember a single "master password", which is never sent outside of your local device. In the event that you lose or forget your master password, the password vault is completely unrecoverable.

1Password can also store other information besides passwords such as credit cards, software license numbers, passport numbers, etc. There is also a secure notes feature for storing arbitrary text.

The other password manager that I tried before 1Password is LastPass. I ended up choosing 1Password since I think it's better designed and overall feels slicker. The /r/lastpass subreddit is littered with complaints about broken updates and bugs...


You might like this password manager:

https://www.passwordstore.org/

It uses a similar philosophy of encrypting plain text files and you can sync them how you wish. It might do some of the 'heavy lifting' for you.


Sync, browser integration, password generation, audits on password age and duplicates, validation against pwned passwords, shared vaults — nothing that you can't do yourself on top of a text file, if you've got the time and energy for that. TOTP, ACL, secure notes and files — these can't easily be done with a text file, but don't need to be part of a single password management system just because the commercial vendors have added these.
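
The pwned-passwords check in particular is easy to reproduce yourself: the Pwned Passwords range API only ever sees the first 5 hex characters of the SHA-1 hash (k-anonymity), and you compare the rest locally. A rough sketch:

    import hashlib
    import urllib.request

    def times_pwned(password: str) -> int:
        """Number of appearances in the Pwned Passwords corpus; 0 means not found."""
        sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]
        url = f"https://api.pwnedpasswords.com/range/{prefix}"
        with urllib.request.urlopen(url) as resp:   # only the 5-char prefix leaves the machine
            for line in resp.read().decode().splitlines():
                candidate, _, count = line.partition(":")
                if candidate == suffix:
                    return int(count)
        return 0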


No, that's basically what they do, but in a more user-friendly format.


Well if he's not keeping that text file encrypted, I'd argue that there is a very significant difference in his methodology vs 1Password et al.


Yes. Among the many features a manager app like 1Password would provide is a way for easily pasting in a password to a login field with a simple keystroke.


I have all of mine on my desktop background, but it's rotated 180 degrees to make it a little harder for would-be hackers.


You don't need to trust others. For example, build the password using some simple algorithm that uses the TLD name. This way you only need to remember the algorithm.
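
A slightly sturdier variant of the same idea is to run the memorized secret and the domain through a slow KDF instead of a simple hand algorithm. A sketch, with the alphabet, KDF parameters, and length as placeholders (and ignoring the slight modulo bias for brevity):

    import hashlib
    import string

    ALPHABET = string.ascii_letters + string.digits + "!@#$%"   # adjust to taste / site rules

    def site_password(master_secret: str, domain: str, length: int = 16) -> str:
        """Deterministically derive a per-site password from one memorized secret."""
        # A slow KDF, so one leaked site password doesn't cheaply reveal the master secret.
        raw = hashlib.pbkdf2_hmac("sha256", master_secret.encode(), domain.encode(),
                                  200_000, dklen=length)
        return "".join(ALPHABET[b % len(ALPHABET)] for b in raw)

    # site_password("one memorized secret", "news.ycombinator.com")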


I used to use this system but moved away from it. The reason is twofold. First, if it's a simple enough algorithm there will be enough 'hash collisions' that if someone gets their hands on one of your passwords and your email address, there's a non-negligible possibility that they will be able to find another domain that has the same password.

Second, sometimes sites mandate that you change your password. Or have rules that are incompatible with that algorithm. And then you need to start remembering exceptions to your algorithm, at which point you're back where you started.


There's also https://lesspass.com/, which is stronger than what the parent mentioned; I used a system like that for several years.

I gave up on it for the same reason. Having to remember exceptions, plus when you change your password, you have to change it everywhere, which is annoying because you can't remember every active account.


The problem with that is that many sites have password requirements: the password has to be a certain length, plus some other arbitrary constraints.


Is there a reason to use 1Password over iCloud Keychain if you're mostly only on Apple devices?


It depends on your requirements.

Do you only use Apple devices? Despite spending most of my time on a laptop with macOS, I also have a gaming PC with Windows, a home server with Debian, a mobile device with Android, and a tablet with iOS. It's nice to have a bit of flexibility available.

If you use an alternative browser such as Firefox you lose access to the built-in integration.

I think their SaaS offering has vault sharing for friends and family, which isn't available through iCloud Keychain.

They provide additional security audit features, such as vulnerability tracking. Quite relevant: I just opened the app and Watchtower had a vulnerability alert notifying me to update my password on Twitter.

It supports one-time passwords (TOTP), which can occasionally be convenient.

Other kinds of item are supported as well, such as credit cards, bank accounts, software licenses, identities, and secure notes. No more having to grab for my wallet when I need to input my credit card or driver's license info. No more having to search for a checkbook to find my bank account number.


Not particularly, if you don't need the 1Password features and use Safari on macOS. The password generation is integrated into the browser UI which is arguably better for non-nerd and lazy-nerd users.


I use 1Password to store everything. Security answers so I can use completely bogus ones, social security numbers for family members, software license keys, membership info, etc.


iCloud Keychain is definitely well-integrated, but I've run into a few edge cases where it doesn't behave the way I need it to. In these cases, 1Password is better since it actually lets me dig in and edit some of the low-level details in a quality UI (versus digging a couple levels deep in system settings/Safari preferences to find/edit the password in question).


My biggest problem is that I sometimes switch between Mac and PC, so I can't have all my website usernames and passwords locked into iCloud.


I'm skeptical that it's really worth it to trust yet another party that could potentially be bribed by intelligence agencies and whatnot, or even hacked. No. I sit down once a year and think of CorrectHorseBatteryStaple-like passwords [1] for each important service (the xkcd-style word picking is sketched after the footnote), where each password is a relatively complex function (involving deletions, insertions, swaps, associations, numbers, and special characters) of details of my life, the current year, the service in question, and the username. That way I have a unique password for each service and can easily reconstruct it based on that sort of easily recallable information.

[1] https://xkcd.com/936/
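
For reference, the scheme in [1] is just N words drawn uniformly at random from a big list (four words from a 2048-word list is about 44 bits of entropy). A minimal sketch, where wordlist.txt is a placeholder for any large word list, e.g. the EFF diceware list:

    import secrets

    with open("wordlist.txt") as f:                 # placeholder word list, one word per line
        WORDS = [w.strip() for w in f if w.strip()]

    def passphrase(n_words: int = 4) -> str:
        return " ".join(secrets.choice(WORDS) for _ in range(n_words))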


How long have you done this, for how many sites, do you rotate passwords (when sites are breached, and/or on a schedule), and have you had to access sites in a mentally compromised state (distracted, sleep-deprived, post-concussion)?

Every once in a while I hear someone explain their system for this (and I used to use a simpler scheme), and I can think of arguments about why it won't work for long, but I'd be happy to update my internal monolog from actual evidence.

I've got about 1K passwords in 1Password. I generally rotate them when they're 1-3 years old, depending on the threat model, cost of compromise, and on what I'm using password review to procrastinate.


My question with these schemes is how they deal with sites that have weird password requirements which don't match the scheme.

Typically they don't remind you when you are logging in that "we made you use an 8-character password but you can't use some special characters" or whatever. So you have to have some way of remembering what crazy password rules they had three years ago when you registered...


The Tweet from the Twitter CTO on this: https://twitter.com/paraga/status/992135139994943488

"We are sharing this information to help people make an informed decision about their account security. We didn’t have to, but believe it’s the right thing to do."

The "we didn't have to" is a little jarring given the scale of this.


Well, nothing ever left Twitter's servers. The logs themselves would probably be uninteresting to outside parties and inaccessible.


I suspect that many more employees at Twitter have access to the logs than have access to a supercomputer and password hashes.

I know I wouldn't trust my password with the number of people that have easy access to logs at other large(ish) tech companies.

I really can't imagine why "we didn't have to" was included in that tweet, at all. What other flaps like this have occurred that exposed my creds or personal data to large numbers of employees, that they didn't have and didn't choose to tell us about?


Even if true, best practice is to strictly restrict access to and create audit trails for reading raw logs from production.

Ideally, you'd only need to read raw logs tied to a test account, or maybe your own personal accounts.

Stack traces and exceptions and the like can be anonymized and collated.


More employees at virtually every major web company have access to instances (and thus instance memory) than have access to supercomputer clusters, too. Every mainstream popular web application is fed a constant high-volume feed of plaintext passwords, right there in memory (or, in typical TLS termination environments, on the wire) to be read by a persistent attacker.


That's true for nearly every single internet facing service, no? A compromise resulting in point-in-time access to traffic is a bit different than a bug that creates a persisted historical record of every single user who signed in for a period.

Maybe I miss the point behind this comparison? I guess I'd understand more if I thought the number of folks with node access and log access were in the same magnitude at Twitter, or if the TLS stack persisted data over time.


Last year a contractor deleted the president’s account.

The fact it didn’t leave Twitter doesn’t mean everything is good. There are still a LOT of people who may have had some kind of access to this data.


Assuming everyone has access to the logs.


> Last year a contractor deleted the president’s account.

The fact that they undeleted it is strong evidence that he didn't have discretion in how he performed his job, and thus was actually an employee and not a contractor.


If my gardener leaves a rake on my driveway, I'll remove the rake. That doesn't make the gardener an employee.


I’m not sure how that follows. Are you suggesting they don’t keep backups or use a “deletion” flag temporarily, e.g. as part of spam account removal?


Indeed. I deleted my Twitter account recently, there was a message that data is retained for 30 days to facilitate un-deletion. I assume their internal process is the same.


>Well, nothing ever left Twitter's servers.

Nothing is known to have ever left Twitter's servers.

FTFY.


>The "we didn't have to" is a little jarring given the scale of this.

How come? I interpreted it to mean that no regulations required this, but they chose to anyway. Which is true.


There is very little to be gained or added from pointing out this was optional -- for some reason the CTO decided to make a point of it.

I think the tone matters. Particularly for matters like this.

To my ear (and many who replied to that Tweet) this reads as "we've decided to do you a favor." Which is very much not the case.


I see it more as "we could have kept it quiet, but we're not keeping secrets from you - it's indeed our open culture".


So Twitter found out they had a bug that caused them to write passwords to an internal log in plaintext. Their response is just a generic 'hey, maybe you want to change your password'.

Compare that to GitHub, who went through the exact same problem just yesterday, except they're requiring affected users to change their password. Their CTO didn't make some 'hey, you should THANK US' claim.


Are they? I just signed in on a machine where I had signed out and got no prompt to change it. Is it only certain users?


Yup. Affected users were emailed 24 hours ago. It only affected people who had initiated a password reset previously (I assume during a certain time frame?).

> During the course of regular auditing, GitHub discovered that a recently introduced bug exposed a small number of users’ passwords to our internal logging system, including yours. We have corrected this, but you'll need to reset your password to regain access to your account.

> GitHub stores user passwords with secure cryptographic hashes (bcrypt). However, this recently introduced bug resulted in our secure internal logs recording plaintext user passwords when users initiated a password reset. Rest assured, these passwords were not accessible to the public or other GitHub users at any time. Additionally, they were not accessible to the majority of GitHub staff and we have determined that it is very unlikely that any GitHub staff accessed these logs. GitHub does not intentionally store passwords in plaintext format. Instead, we use modern cryptographic methods to ensure passwords are stored securely in production. To note, GitHub has not been hacked or compromised in any way.


+1 Thanks for the details.


I think it's part of the problem of writing a statement to be read by a large number of people - it could be read as "There is nothing legally forcing us to do this, but we are doing so anyway as it is in the best interests of our users" (which is positive) OR it could be read as "We didn't have to do this; we did you a favour, but there might be situations where we don't disclose this" (which is fairly negative). FWIW, I think the former is the intent.

Personally, I'd probably just say "We felt that we had to disclose this to protect the interests of our users" (and not acknowledge that they might not, or that they had to make a decision to do so), or just say what they _are_ doing ("We are disclosing this in order to protect our users") and avoid the possibility that it is misinterpreted. I don't think there is anything to be gained by saying that they might not have done this.


It's kind of like pre-emptively defending yourself against an accusation that hasn't been made yet.


Regardless, it seems very defensive. A company that looks after their users "because they choose to" is a lot more suspicious than one where customer care is simply assumed to be inherent to the operation.
