I can give two real-world examples of stupid security, or of rules being enforced without employees truly understanding why.
First, in the UK about a decade or more ago you used to have to sign the receipt and the checkout staff would check it matched the one on your credit card. There came a time when my credit card expired, so I produced a new one from my wallet. "It's not signed" she told me. So I signed it in front of her, then she gave me the receipt to sign, which I did. She then checked to make sure both signatures matched! Even though she'd seen me sign them BOTH in front of her.
Secondly, my wife tried to take 6 items into the changing rooms but was told they have a maximum of 5. I have no idea why. I mean, theft is theft regardless of how many items, right? Anyway, she handed one item over and took 5 in, and they gave her a plastic sign saying 5 on it. The idea is if you take 5 items in then they check that you bring 5 items out with you. Anyway, when she was in there she asked me to swap items around - go and get her 2 belts, swap the jeans for a different size but bring 2, etc
So, at the end she came out of the changing rooms with 7 or 8 items. The assistant took the clothes she didn't want and put the sign back ready for the next customer. At no point was she challenged about why the number of items she had didn't match the sign, all their "security checks" were done to make sure she couldn't try on more than 5 items at a time.
I wrote to the store but naturally received no reply.
I think the story with the credit card makes more sense than it seems at a glance:
They want to make sure you hadn't intentionally produced two different signatures. I think, in theory, you could later dispute the charge and the merchant would have nothing on you, if the signature didn't match the one on the card.
>They want to make sure you hadn't intentionally produced two different signatures. I think, in theory, you could later dispute the charge and the merchant would have nothing on you, if the signature didn't match the one on the card.
It certainly seems to be pretty useless though, and most stores just ignore the signature.
I recall being at a liquor store with my mother (15 years ago or so) and she was buying a bunch of wine. We got to the checkout and she decided she was going back to the car and handed me her credit card in full view of the cashier.
After the cashier ran the credit card, I had to sign the receipt. Since she'd seen my mother hand me the card, I asked the cashier (a teenage or early 20s American) "what dead president's name should I sign the receipt with?" (note that this is in the US)
Depressingly, she replied "Benjamin Franklin." (N.B., for non USAians, while Franklin was a very important member of our founding generation, he was never president).
As such, at least in my experience, it's been a very long time since anyone really cared about signatures on credit card receipts.
These days (in the US, and for much longer elsewhere), chip and PIN are king.
I think the credit card check sort of makes sense.
Suppose Alex gets a credit card, signs it, and then Bob steals the card and tries to buy something with it at the store. I presume this was back in the days when credit card transactions were processed offline. The store doesn't want to find out at the end of the day that the card has been cancelled and they're not getting the money. "Can you produce a plausible replica of the signature on the back of the card" slightly reduces the probability of this scenario happening.
If Alex walks into the store with his shiny new card, but he hasn't signed it yet, the store is actually doing him a favor asking him to sign it - otherwise, if Bob nicks it, Bob can sign the card himself before entering the store, and pass the signature test.
Even the store clerk blindly following the "do the signatures match" procedure, as described above, produces some small but non-stupid net positive security value.
In Germany, when the DHL delivery guy is unable to give you the package in person, they'll put a yellow note into your postbox and bring it to a nearby package store. The next day you can get the package in exchange for the yellow note, provided the recipient address matches your ID card.
If you had it delivered to a different address than the one you're registered at, the store clerk is (technically) not authorized to hand it to you, because the addresses don't match. But the back of the yellow note has a mandate form, which you can use to authorize any person to fetch it for you. So if the clerk refuses to give you the package, you turn the yellow note around, fill in the form, and authorize yourself in front of their eyes to get it anyway.
Yes, but if you were stealing the package, you'd now not only have stolen a package but you'd have committed all kinds of identity fraud or forged a signature, and so on, which is a way bigger deal.
A lot of this is just courier CYA for disputes. DHL (and others) don't really care who collects the package, so long as whoever authorized the shipment is happy with the result of the delivery.
I never understood signatures, but then my handwriting is all over the place and at best my signature is a squiggle.
I haven't a clue how they're verified these days. I suspect they're basically a legal guarantee, where in court you can be asked "is this your signature".
Signatures are basically the same as credit card numbers. Not secure, but everyone keeps pretending they are, so insurance and legal make sure they are.
We can "sign" something by entering our name into a text box on a web form. That's the easiest to reproduce. This works because the contract is between the other party (requestor) and the named person. Entry by any other party not authorized by the signer will render the contract null and void.
The signature answers the question of, "Does X agree to these terms or not?" and not, "Is X who they say they are?"
Depends on the country. In Czechia, some documents (such as the contract about selling/buying property) require verified signatures, where you go to a notary or a post office with your ID card, sign the document in front of them, they note what document you brought to them into a book of signature verifications and stamp the document.
Mail-in voting in my state entails "signature verification", wherein I learned - the hard way - that they do in fact compare the signature on my ballot to some signature they've got stored somewhere to see if it matches.
> so I produced a new one from my wallet. "It's not signed" she told me. So I signed it in front of her
I’ve had this too. It felt weird, but isn’t that bad: You sign just the same before a second attempt, so might as well continue. And the upside of having to do this ad hoc is that it’ll help on future card usage.
But the signature-matching check that followed is hilarious of course. :)
That's the reason (at least in the store I used to work).
Also, in some stores, the clients drop the items they don't want, and the employees have to fold them back (or put them on a hanger) and put them back in their correct place in the store. It builds up quickly, and you want to keep it manageable.
I've seen people in primark pick stuff off a rail, hold it up to themselves to size it, then if not happy just drop it on the floor and reach for another. It amazes me and not in a good way.
My passport is from a “top” country. To get it I only needed a photo and my birth certificate, which I bought a copy of from a government office for a small fee. Then a signature from police department to say it is “my” certificate to accompany my photo.
When I was renewing it in a foreign country I filled out an application online and had it sent to my new foreign address without any checks. Perhaps the embassy checked on my residency but any ID in my new country has been issued based on that passport.
At my work we signed a deal with one of the big three credit reporting agencies that had a well publicized security breach a little while back.
As part of this, they sent us some security due diligence questions, which is fairly routine. One of the things that they wanted us to agree was that we forced all of our employees to change their passwords every $timeperiod.
They insisted that this was important for security, and only backed off after we sent them references from all of microsoft[0], nist[1] and the uk national cyber security centre[2] saying that doing so would reduce security.
My suspicion though is that they only removed it from our specific contract, and they hadn't changed the entire process. I would have hoped that after their security breach (this happened afterwards) they would have reviewed their security and improved it but unfortunately that didn't seem to be the case. There were a fair few other things in the review that were generally poor security wise, but that one is the one that stood out to me.
I protested this exact thing at $workplace last year, but they said that the NIST recommendation was "only a draft", and that the NZ government security services hasn't yet updated their recommendation. So here we are, incrementing a number every couple of months to keep the robots happy.
One of our vendors started requiring a monthly password change. Now, you can walk through the office and see half a dozen passwords on Post-It notes in plain view. I explained this to them, and received a response that seemed to think I was asking what a password was for.
I once went through a couple of cycles of: create account, do things, log out, try to log back in: invalid password. After 5 rounds of resetting the password I had an idea: could it be that they truncate my 32-char random string from Bitwarden? So I pasted just the first 12 chars (after a lot of tries I was down to 12) - voila.
I can't remember the company, but it was not a small one... The stupid thing was that it accepted the original input but simply truncated it, with no warning.
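A minimal sketch of this bug class (all names and the 20-char limit are hypothetical): if the signup path silently truncates before hashing, any password sharing the stored prefix "works" at login.

```python
import hashlib
import hmac

MAX_LEN = 20  # hypothetical legacy field limit


def store_password(pw: str) -> str:
    # The bug: silently truncate instead of rejecting over-long input.
    return hashlib.sha256(pw[:MAX_LEN].encode()).hexdigest()


def check_password(pw: str, stored: str) -> bool:
    # Login applies the same truncation, so only the first 20 chars matter.
    candidate = hashlib.sha256(pw[:MAX_LEN].encode()).hexdigest()
    return hmac.compare_digest(candidate, stored)


# A 32-char password is accepted at signup without warning...
stored = store_password("abcdefghijklmnopqrstuvwxyz012345")
# ...but only its first 20 characters are ever checked:
assert check_password("abcdefghijklmnopqrst", stored)
```

The user pasting their full password-manager string still logs in, which is why the bug can go unnoticed for years until the truncation behavior changes between signup and login.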
Sounds like some banks I’ve dealt with. They all seem to have ridiculously short password limits. I’m guessing they store them in plain text in an ancient mainframe.
Dell iDRACs had the same 5-10 years ago. I only found this out after changing them all across 200+ hosts and trying to log into a few of them again a few months later.
Made for a fun weekend and a fun flight all across Australia to our various DCs.
This happened to me with PayPal, which at one time had a max password length for some reason. When entering the two masked passwords it just discarded anything I typed beyond 20 characters instead of throwing a "password too long" error when I hit submit.
I've run into the same thing a bunch! I wish I could remember the companies to "name and shame". Some of the variations on password complexity I've seen have been:
1. Allows any input length for password, but has a limit of X characters. Annoying because it makes it a guessing game, like you were talking about.
2. Stops accepting input for password after it hits X character limit. Annoying because you can go for months thinking you're using a 30 character password, and come to find out it's actually 8. Hard to catch if you aren't paying attention to how long your obscured password is.
3. 1 or 2, but they have a rule against all special characters. I think this is supposed to be some weird attempt at preventing SQL injection?
4. 1 or 2, but they have a rule against all special characters and numbers. I've only seen this once, but I remember dropping the service after I realized what the problem was. It was years ago, during the era when something like "banana" or "pencil" was considered a strong password.
5. 1 or 2, but they have a rule against some special characters. Usually ! @ and #, or ! # % will be permitted, but other special characters will not be permitted. I suspect this is because customers "keyboard walk" with shift+123. I.e. asdf123ASDF!@# or QWER!#%qwer135 Basically, the company allows for more predictable passwords in order to prevent extra tickets being filed.
Generally, when I run into login problems, I drop my password down to 12, since that's a pretty common length; I believe it meets a minimum length for a DOD standard? If that doesn't work, I drop it down to 8. If it's still not working, then I start removing special characters and capitalization.
It's really gross that I can't expect to use a 30 - 60 character password in a predictable manner across the entire internet. In a way, I almost prefer things to be the way they are though, so I can have a better idea of which companies are strong on security, and which are either uninformed, or justify misconfigurations as "improving customer satisfaction" over minimum security policies.
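A policy sketch along these lines - the limits are illustrative, not from any one vendor - is simply to reject input the backend can't store, rather than mangle it. NIST SP 800-63B says verifiers should allow at least 64 characters and must not truncate.

```python
def validate_password(pw: str) -> None:
    """Reject, rather than silently modify, passwords we can't store."""
    if len(pw) < 8:
        raise ValueError("password too short (minimum 8 characters)")
    if len(pw) > 128:
        # Tell the user the limit up front; never truncate behind their back.
        raise ValueError("password too long (maximum 128 characters)")
```

A 30- or 60-character random string passes unchanged, and an over-limit one fails loudly at signup instead of months later at login.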
> Because security questions are nuts! I mean those ones are extra nuts but in general the whole idea of taking either immutable pieces of data like your mother's maiden name or enumerable questions like the make of your first car or transient ones like your favourite movie...
I always use a randomly generated string with a high entropy as answer to the security question.
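For example, with Python's stdlib (a sketch - keep the result in a password manager, since it is by design unmemorable):

```python
import secrets

# Generate a high-entropy "answer" for a security question instead of
# answering truthfully with guessable facts.
answer = secrets.token_urlsafe(16)  # ~128 bits of entropy, URL-safe text
print(answer)
```

If you expect to have to read the answer over the phone, a string of random dictionary words is easier to dictate than base64 while keeping plenty of entropy.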
It actually doesn't matter if there is a person on the other end of that line. I have forgotten many of these over the years. My go-to was always something like "oh, I always put a random name or phrase, or just mash the keyboard". They always ended up prompting me and I would give a few guesses, and if there was any way for them to bypass it, they would do it at that point. The only way to solve this is for them to not have a way to bypass it.
I used to cross the US border frequently and kept snarky passwords on my laptop and phone, dreaming of the day I’d get to answer honestly with apparent noncompliance or some remark about the ethics of warrantless searches. Alas, I don’t fall into the demographics they tend to prey on.
I once used a randomly generated word as the answer. But the darn thing required five security questions and answers, so of course I just reused the same answer (random, unrelated to the question) across all five.
One fine day, I had to call in to support for something or other. He's like "I need to ask for the answers to the security questions", I'm thinking "no problemo", and I kid you not, the agent asked all five questions. By the last I'm thinking "surely you can't think I'm going to miss this one?"
Edit: it also reminds me of Rackspace! If you initiated a customer support request — for which the button was in the console, after logging in, the agent would ask the security question. The answer to which … was in the same console!
My guess is that the call was recorded and the agent was required by security to always ask the 5 questions, and he was probably thinking "There's no point in asking those five questions since the answer is the same, but I'm not paid enough to risk being reprimanded just to save 10 seconds for the customer."
I recall reading about someone restoring access over the phone who had forgotten that string. If memory serves, the service worker accepted "random string" as the correct answer.
I wish U2F was much more widely accepted than it is.
Hey, I'm a yubikey user. I even have two, as they recommend having a backup. But I still can't bring myself to trust a piece of hardware not to fail me. And I still want to have some backup code printed, or a TOTP token (which itself is backed up), alongside the yubikey etc.
Losing a second factor is an underappreciated threat. Adding 2fa can definitely increase the risk of losing an account, and a responsible person will weigh the costs and risks of doing so.
You can reduce that risk, e.g. by keeping a piece of paper safe somewhere, but it's never gone entirely. What if your house burns down? Oh, a fire safe. What if that gets stolen? A safety deposit box, etc. Those all have their own risks and costs as well.
It's completely reasonable to not add 2fa to your accounts, if you believe that your ability to keep safe a second factor is less than the chance of someone guessing your password.
I have lost (/stolen/destroyed) more physical possessions than I have had accounts hacked. By a lot.
Exactly: There are risks: flood, fire, stolen, lost, (I) broke (Smartphone, yubikey, usb, etc), broke (itself), kids, washing machines, etc.
I really like how our country TLD registrar handles 2nd factor recovery: nic.lv. As long as I'm alive and myself, I can disable 2FA without my second factor.
We have a ubiquitous ID card which serves as a passport. And it has a smartcard used to sign documents and communicate with gov/corp entities. If we lose the 2nd factor, we can submit an application, electronically signed, and email it to request disabling 2FA. Or I can show up in person with my ID card/passport and request disabling 2FA.
What I now noticed is I can pay for my domain and in payment notes request disabling 2FA. This looks like a weak point - I wonder if they correlate WHO paid for that domain name.
This obviously can't work for international services as they won't trust our ID card issuer.
Otherwise, a piece of paper with backup codes is: 1) impossible to retrieve remotely 2) easily replicated and distributed
> This obviously can't work for international services as they won't trust our ID card issuer.
Why not? They don't need to trust the issuer to say "this certificate is X person", they just need the issuer to say "this certificate is the same person as this previous certificate" (presumably via distinguished name matching).
As long as the issuer vouches that it's the same person that initially registered the domain, it should be fine, regardless of whether the actual identity of the person is correct.
Sorry, it can - just not NOW :) Different countries would have to establish trust in various issuers and such, and there would be a "supported countries" list resulting in many countries left out, etc.
I have two U2F keys and I always register them both. I've got an extra set coming with NFC, and I'll probably add them too. I share the same keys with my partner.
But yeah, most services also hand you a set of recovery codes which serve as backup should the hardware fail. I've not actually had any hardware crap out on me yet though.
I trust it far more than easily socially engineered "secret" questions or a phone 2F.
I have 4 (3 Yubikeys, 1 Hypersecu HyperFIDO), and I ordered 6 more, some for my wife and others for offsite backup, sadly some services will only allow you to register two, and AWS only one for root accounts.
1Password has a feature for this now, and I think it's great: it generates and inserts random strings for security questions. I'd consider using something you can read to someone on the phone if it's a company you can call - banks, insurance, ISP, something that's important to you.
1. e-mailing passwords and personal information (because customers need to be doxed by their e-mail providers)
2. shipping commercial beta source code to customers because someone was too impatient to learn how to package the product with a clearly labeled build script
3. shipping master x509 signing keys to random users because they are "good people" and can be trusted
4. deleting random database records with administrative privileges because it looked complicated and messy. Then tell users to pack data into document labels after destroying the key relationships.
5. goofing with passwords, then issuing a support ticket when they get auto-banned from the system
If you think an external adversary is more dangerous than someone who shouldn't have administrative privileges, then you have never dealt with real sentient liabilities.
I am alluding to the "90% of security breaches being internal" stat as likely being a gross underestimate. Implying people preoccupied with external threat vectors are often missing a key concept of risk mitigation.
Ok, I just googled "clod" again, and noticed there is a second definition!
I saw the first definition and looked for some other links to explain it but missed that one so was still confused.
(I'd picked up what you were saying about internal threat vectors from the rest of the comment, I was literally just asking for the meaning of the word "clod")
VMS OS from DEC had exactly the "unique password" stupidity in the late 80s.
In a way, it was much worse than Twitter's, because where I worked there were only about 100 accounts, so you could find the other account by hand in less than an hour.
Is it really worse than Twitter's? The image has the website telling you exactly which account is already using that password (it's actually a joke from reddit).
It's interesting to know that this was actually a thing though...
This is from Troy Hunt of Have I Been Pwned fame and I'm almost positive that even in the last year he still gets legal threats from companies that he contacts about embarrassing security risks.
They now need to be made enforceable, whether by government requiring them in government contracts, or indirectly by insurers excluding coverage if they are not met.
No, because what will happen is that the standards won't be updated for 10 years, and they'll be outdated, just like all "enforceable" standards created by the US government.
I was hoping you'd have given up on UW after that. They're a bunch of scamming bastards who will pester you endlessly to go out and sign up more people to UW.
When you see sense and stop using UW, they'll send you bills for random amounts of money for *years* afterwards.
Unfortunately, right now it is nearly impossible to actually change energy suppliers. Thanks to the insanity of the energy market, all market comparison sites are essentially not working (they list no results) and providers are refusing to show their tariffs, instead telling you to stay with your existing supplier.
They were used by the previous tenants when I moved in, so I basically had to stick with them. Nevertheless, the above really didn't leave me with a good impression of a competent company.
Tangentially related to your last point, I really don't like the dominance of direct debit here. In Australia, I paid everything via my credit card in order to be able to review charges before I paid them. Here, so many places really push direct debit and as a result they can charge basically whatever they want to my account and then I have to fight to recover it if it's wrong.
I am dealing with a problem with UPS MyChoice at the moment. I signed up with them about 3 years ago, and used a long randomly generated password (as you should), which had several punctuation characters in it. Today I am unable to log into the account because of this - they apparently changed the password complexity rules at some point to only allow a single punctuation character. And they seem to be applying the new rules at login.
Bonus: the password reset process is broken - their customer support doesn't understand that I don't have anywhere to enter the PIN they emailed me since the website errors out. "Just enter your PIN" "Where? I only see the LASSO_1010 error message telling me to call you."
After hanging up on them after 40 minutes of frustration I decided to go create a new account. But can't because there is already an account for that address.
There are different kinds of regulation. They don't necessarily need to be prescriptive or to test static quality in the same way that drugs are regulated. (But I think that some security people would like them even less than flat-out being told what code to put in their sites.)
Instead of regulating the product, you regulate the processes around it. Why?
The assumption is that users are stupid, but security people are clever. A side-effect (a lemma of this axiom) is that security is for control rather than safety. Ergo - safety emerges from control. Security becomes power.
But as we see, frequently mother does not know best. The fact is that some security people are stupid (it's difficult and not the highest paying job out there) and a few users are actually very clever. Now, if the clever ones are malicious (bad hackers) you've got problems. But in reality far more clever users are benevolent and would choose to participate in a "security culture" if it were encouraged rather than imposed on them like children. It's their data.
As it stands, our security culture leads not just to dismissive authoritarianism but to unassailable systems that may not be questioned.
Regulation that puts much more power into the hands of all stakeholders can be a great alternative to ever more compliance and auditing imposed top-down (which is really a weak solution to a dynamic problem: security changes almost daily).
Consider a regulatory mechanism like the GDPR that allows users not just to know what data is held, but how it is protected, and to request (with some force) changes to that protection.
Taken to the limit - let's call it "User Side Security" (USS) - we build interfaces so that the user gets to decide their chosen security solutions (obviously compartmentalised so as not to affect any other user's assets or choices).
(I feel a tremor in the Force, as if a million security people suddenly cried out in horror and then fell silent.)
But this would provide the bottom-up incentive for firms to get their PII-security systems back on track without Byzantine top-down regs, which I guess the industry fears more.
> The fact is that some security people are stupid
Bold of you to assume that these companies have any security staff at all.
> Taken to the limit, let's call it "User Side Security" (USS), we build interfaces so that the user gets to decide their chosen security solutions (obviously compartmentalised so as not to affect any other users assets or choices).
> (I feel a tremor in the Force, as if a million security people suddenly cried out in horror then suddenly fell silent.)
And rightly so. There's a lot of things broken in the security industry, but letting users pick: "Hmm, i want AES 256 ECB instead of AES 128 GCM, because 256 > 128" is not the answer.
> And rightly so. There's a lot of things broken in the security industry, but letting users pick: "Hmm, i want AES 256 ECB instead of AES 128 GCM, because 256 > 128" is not the answer.
Not quite the granularity I had in mind. But please say more. What I'm interested in specifically is whether or not you believe the owners of data have no stake in its protection and no say in how that's done?
I wouldn't say they should have no stake, just that it's impractical to ask them.
First of all, security is context dependent. Even a security expert will have trouble making good choices if they don't have the full picture of how the business operates. A non-expert has basically no chance. Just look at how many B2B security companies are basically preying on ignorance to sell useless security solutions. They sell them to businesses which should in principle be able to rationally evaluate the offering, and yet still manage to swindle them. What hope does the average person have?
Second, if you give users real choice, that means you have to implement all the choices, which means you have to spread your focus. Complexity is the enemy of security. The more complexity the more likely you will miss some unintended interaction.
Then there are the other trade-offs. Some security controls can have very real productivity and business costs. For example, if one of your controls is that all staff have to get manager sign-off before accessing any machine with user data on it, that is going to slow down work. Often that is worth it, but the productivity loss can be significant depending on how the business is set up. I'm not sure it makes sense for users to control something like that, except in the sense that they should be informed of the protections in place and can freely decide whether they want to continue doing business. Not to mention: how can you do something like that for half your users?
My general view is that companies should be more transparent in what they do so that people can vote with their feet. Companies should also be liable for breaches, especially ones that would have been prevented by best practises. This punishes companies who play fast and loose, and also might in theory put pressure via insurance requirements. A big part of the problem right now is that it is generally more profitable to not invest in security. Breaches have very minor impacts, even major ones usually just mean a very small temporary dip in the stock price. Companies aren't going to care about security unless it affects the bottom line.
Thanks for this thoughtful response bawolff. I need to digest it, but you make good points that all tally with my experience. Yet I remain convinced that a regulatory approach needs to include the end user as a first-class stakeholder. How to do this without making the life of security professionals an untenable misery is where I want to focus.
After all, people look after their own money, their own homes and their own health. Why do we carve out an exception for their data?
Is it weird to say that these issues are too stupid for software engineer licensing to be a good answer? It's like buying a fleet of cherrypickers to pick lettuce.
And if we can't hold people accountable for their actual products being laughably insecure, I don't see how licensing enforcement is going to go better. For starters, the question of who should be required to hire licensed engineers is the same as who should be regulated/sued into compliance/oblivion regardless of licensing, and we clearly can't do that.
Everyone operates at the limit of their knowledge of the world (and their available resources).
It's just that some people's knowledge is waaay more limited than others'. And all we have is some form of self-regulation - from a science viva to an engineering degree, we have no option other than to say "we think we have a measure of all human knowledge in this subject, and so we can judge whether anyone else has the same knowledge".
Just look at any "building disasters" TV show where unsafe extensions were added to houses etc. At some point someone says "that meets a standard"
Do we do it before the guy leaves college? Do we do it during the build, using independent inspectors a la building codes? Do we do it in court after it's all fallen over?
I am not convinced that software regulation is correct - I prefer to see software as a form of literacy, and as such I am really reluctant to rein in "speech". I think software is so open to composability that best practises can come almost for free. Security is just one of those areas where you need a good understanding of the fundamentals.
> It's just that some peoples knowledge is waaay more limited than others.
This excuse ends when an expert reaches out to you and explains the exploit. At that point you've chosen the way of pain, one way or another.
> I am really reluctant to reign in "speech".
OTOH this question is pretty easy to answer: Regulation (of whatever type) applies to deployments, not code. Deployments are where the harm happens. I think this approach would even align the incentives correctly w.r.t. maintenance funding.
Can you give an example of any other domain where the entire domain is changed every decade?
Space exploration comes to mind, but again, still based on physics and chemistry.
The problem with the digital domain is we're literally just dreaming this stuff up and then being surprised when everyone has a hard time securing those dreams.
I’m very new to this concept.
What would that actually look like? What kind of rules and regulations would be put into place and how would that affect e.g. building websites with logins?
Well, I am not 100% convinced, but the mid-19C move to better steam boiler design is instructive - boiler explosions were so common that there was an economic effect as well as a human cost. In the US only one insurance company would take the risk, and they would only insure when their standards were met. The industry as a whole improved.
> blocking pasting is bad because it blocks my password manager
again, this is just a very ad-hoc opinion: there's nothing stopping a password manager from faking keyboard inputs, or the browser from having a mechanism for supplying a password externally. that said, blocking pasting is fucking insanely stupid and shouldn't even be something web apps have control over. incidentally, the same "we don't have to concretely define anything" mentality is behind both issues here: allowing browser control over paste is a way of letting the app implement 1% more applications that would be impossible otherwise, while breaking stuff by ruining the abstraction; and providing a password externally would require an inter-application API, which always fucks up because nobody has the balls to define anything concretely.
Don't know if I should share it here but hey, we all make mistakes :)
I had to implement a "forgot password" feature in a web application. I implemented it via:
1) Take the user's email
2) Generate a 6 digit code
3) Send the code to the email
4) Send the hash of the code to the frontend and save it in local storage
5) Compare the code from user which they get via email to the hash in local storage
Someone could change the hash in the local storage and bypass this.
Of course, I switched to Redis instead of local storage after about 3 days, fortunately with no mishaps.
I've since then made up my mind to not implement bad workarounds like this because it just felt so wrong.
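For the record, the fix is quite small: keep the hash on the server, never the client. Below is a minimal sketch (all names hypothetical), using a plain dict with an expiry timestamp in place of Redis — with real Redis you'd use `SETEX` so expiry is handled for you:

```python
import hashlib
import hmac
import secrets
import time

# In-memory store standing in for Redis: email -> (code_hash, expiry).
_reset_codes = {}

CODE_TTL_SECONDS = 600  # 10 minutes

def start_reset(email):
    """Generate a 6-digit code, store only its hash server-side,
    and return the code so it can be emailed to the user."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    code_hash = hashlib.sha256(code.encode()).hexdigest()
    _reset_codes[email] = (code_hash, time.time() + CODE_TTL_SECONDS)
    return code  # goes into the email; never to the frontend

def verify_reset(email, submitted_code):
    """Check the submitted code against the server-side hash."""
    entry = _reset_codes.pop(email, None)  # single use
    if entry is None:
        return False
    code_hash, expiry = entry
    if time.time() > expiry:
        return False
    submitted_hash = hashlib.sha256(submitted_code.encode()).hexdigest()
    return hmac.compare_digest(code_hash, submitted_hash)
```

Since the client never sees the hash, there's nothing for an attacker to swap out; the `pop` also makes each code single-use.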
In a security test of a bespoke password reset platform, I came across something quite similar, so you're not alone.
The JSESSIONID of the unauthenticated request for a password reset (as exposed to the client as a cookie) was used as the secret in the email sent to the user. Therefore an attacker knew the emailed token before it was even sent, and could reset the account password and take it over.
TOTP is good in theory but not ideal for sending via email. Users might miss the email, or it gets delayed; then they want to resend it and end up with two or more codes. Which one to use then? And you might want brute-force protection, so you introduce a rate limit, which can lock users out in those scenarios.
I would not use a TOTP but a stateless HMAC token in this case. I only mentioned TOTP because the original comment described a 6-digit code (which is not a proper way to reset a password).
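A minimal sketch of such a stateless HMAC token, assuming a server-held secret and an `email|expiry` payload (all names hypothetical). Nothing is stored server-side; the MAC proves the server minted the token:

```python
import base64
import hashlib
import hmac
import time

# Hypothetical secret; in practice load from config, never hardcode.
SERVER_SECRET = b"example-secret-do-not-hardcode"

def make_reset_token(email, ttl=600):
    """Build `email|expiry|mac` where mac = HMAC(secret, email|expiry)."""
    expiry = str(int(time.time()) + ttl)
    payload = f"{email}|{expiry}".encode()
    mac = hmac.new(SERVER_SECRET, payload, hashlib.sha256).hexdigest().encode()
    return base64.urlsafe_b64encode(payload + b"|" + mac).decode()

def verify_reset_token(token):
    """Return the email if the token is authentic and unexpired, else None."""
    try:
        raw = base64.urlsafe_b64decode(token.encode())
        payload, mac = raw.rsplit(b"|", 1)        # hex MAC contains no '|'
        email, expiry = payload.decode().rsplit("|", 1)
    except ValueError:
        return None
    expected = hmac.new(SERVER_SECRET, payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(mac, expected):
        return None
    if time.time() > int(expiry):  # expiry is trusted only after the MAC check
        return None
    return email
```

The token can be emailed as a link; the server verifies it without any lookup, which sidesteps the "two codes in flight" problem since every token is independently valid until its own expiry.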
One of my favorite "Stupid Security Things" is when creating accounts on banking, government, and other critically important websites and I'm met with stupid password requirements which actually forcibly reduce password security like "Sorry, your password may only contain letters and numbers. Special characters are not allowed."
(Believe it or not, this has been a far more common occurrence than it should be… Sometimes it even goes a step further by additionally limiting the length of the password to some insanely short length. "Must be between 2 and 8 characters." for example.)
I always worry about those, because it suggests that they might be using those passwords to build a string of some sort, like a database query or a JSON object. This is a problem firstly because string-building code like that is notoriously vulnerable to injection, secondly because the password is being sent somewhere, and thirdly because it's probably being stored somewhere. Stop doing this! Un-hashed* passwords should never leave the browser.
* don’t forget salt. Mmmm… salty hash brown passwords
Encrypted over TLS is fine. You don't gain anything by digesting salts, peppers, and passwords on the client side. You're ultimately handing a string to a server, and if your backend just takes that scrambled text and compares it to a database entry, it's no better against MITM attacks than not digesting.
You’ve also increased the complexity and bug surface area of the client-side code for any client that needs to log in.
Just don’t send plaintext passwords over raw HTTP, enforce TLS.
Passwords have low entropy and high reuse, so they can be used outside of their purpose, which is to allow access to a single resource. A salted hash is only useful to the resource it provides access to, and a salted and peppered hash provides resistance from external unsalted disclosures.
TLS in the browser is intentionally insecure. All browsers allow a backdoor: a certificate installed to MITM every connection through a proxy. Guess what happens to that traffic? Decrypted, stored, sent off to third parties, who knows what. Every major corporate network does this. An actually-secure protocol would pin certificates to the device manufacturer's authority, or at least to the browser vendor's authority.
Please do not send passwords off the client. Use a standard (I’m surprised that no such thing exists in an accessible way from the browser), but the protocol should look something like: enter password, digest, salt, digest, tls, pepper, digest, encrypt, store.
Asking HN: is there an open standard that actually spells out 100% of a password management protocol, with highly trusted implementations on both sides?
That specific complaint seems to me more about trusted roots - corporate can just install their own CA cert on each machine. What backdoor are you talking about? Sounds like administrative access to the computers owned by the company.
The seasoning and digesting doesn’t solve that problem. Every company also has the wherewithal to copy/store/transmit anything typed into an input field on computers they own.
If there’s some other intentional “browser backdoor” (not social engineering attempts against end users) for installing random signing certs, I’d like to read about it.
If I own the hardware and control the software installed on it (notwithstanding infiltration), none of my TLS traffic is peeled open on the way to its destination.
> Asking HN: is there an open standard that actually spells out 100% of a password management protocol with highly trusted implementations on both sides?
If you don't want to go full asymmetric cryptography, there's SCRAM. It doesn't specify a pepper, but a pepper can be added to the salt as in any salted scheme: salt = hash(pepper, stored_salt).
There’s nothing that can be done on the client to prevent this. Either your plaintext password can be used directly to log in to a system, or your seasoned (on the client) password is effectively the same. Either of these, if logged, can then be used to impersonate you.
Yes there is. Use a PBKDF to turn the password into a (public key,signature of session cookie or challenge nonce) pair, and send that to the server. The server stores the public key in place of a salted password and validates the signature in place of password comparison. The 'seasoned' key-signature pair is useless for any later session.
It prevents leak of plaintext passwords. I suppose impersonation by a twitter employee is not a concern, because they already have access to the twitter system from inside.
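A toy version of that challenge-response idea, using `hashlib.scrypt` as the PBKDF and HMAC as a stand-in for the signature (a real implementation would seed an Ed25519 keypair from the derived key, so the server stores only the public half; with HMAC the server holds the shared key, which is a deliberate simplification here):

```python
import hashlib
import hmac
import os

def derive_key(password, salt):
    """PBKDF step: stretch the password into a fixed 32-byte key."""
    return hashlib.scrypt(password.encode(), salt=salt,
                          n=2**14, r=8, p=1, dklen=32)

def sign_challenge(key, nonce):
    """Client: answer the server's fresh nonce. The response is useless
    for any later session, because the nonce changes every time."""
    return hmac.new(key, nonce, hashlib.sha256).digest()

# --- simulated exchange ---
salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)

nonce1 = os.urandom(16)              # server issues a fresh challenge
response1 = sign_challenge(key, nonce1)
assert hmac.compare_digest(response1, sign_challenge(key, nonce1))

nonce2 = os.urandom(16)              # next session: old response fails
assert not hmac.compare_digest(response1, sign_challenge(key, nonce2))
```

The point the comment makes survives the simplification: the plaintext password never leaves the client, and a captured response can't be replayed.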
Also, there are people at Apple that still think that I remember my Apple ID password and can type it into a modal popup on an iPhone without accessing 1Password.
For something as important as Apple ID, how can Apple expect me to use an easily-remembered password?
The checkout one is downright stupid. You have to put in effort to design something like that - it is so easy to NOT do it.
Not sure how people end up designing websites like that, but damn. Also, that Xbox HDMI cable was hilarious - I need one too, because those viruses can easily enter my home network through virus noise (like COVID) that attaches to the HDMI!
Isn't there a massive security concern in exposing the username of whom you share a password with?
I type `password123` or a range of other common passwords, find a silly user using one, try every other major account provider with that username/password pair, and I have access to that person's accounts?
It's also often not difficult to guess someone's email address from their username (to find more logins), since there are only a few major providers, which such a silly user would definitely use.
> Isn't there a massive security concern in exposing the username of whom you share a password with?
The big security problem there is that the stored password hashes are not salted with username+$RANDOM_NUMBER. If they were, there'd be no way to check whether two users shared the same password.
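A sketch of that salting scheme, assuming `scrypt` as the KDF (function names hypothetical). With a per-user random salt mixed with the username, two identical passwords produce unrelated hashes, so the "you share this password with user X" check becomes impossible:

```python
import hashlib
import secrets

def hash_password(username, password):
    """Hash with a per-user random salt, with the username mixed in
    for extra domain separation. Store (salt, digest) per user."""
    salt = secrets.token_bytes(16)
    digest = hashlib.scrypt(password.encode(),
                            salt=salt + username.encode(),
                            n=2**14, r=8, p=1)
    return salt, digest

# Two users pick the same weak password...
salt_a, hash_a = hash_password("alice", "password123")
salt_b, hash_b = hash_password("bob", "password123")
# ...but the stored hashes share nothing, so neither the site nor an
# attacker holding the database can cheaply spot the reuse.
assert hash_a != hash_b
```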
At this point I'm tempted to think we ought to be managing our own domain for email, so we can ditch a breached address whenever it is leaked, together with using virtual credit cards to limit the exposure. It doesn't stop websites collecting other kinds of sensitive information like addresses and phone numbers, though.
It's very interesting seeing how companies get breached. The most common culprit is outsourcing email marketing to third-party firms with spotty security, but when I started receiving pornographic spam at my Dell vendor email address, I knew their security was worthless.
I mean... that's a clear GDPR violation[0] if it's in the UK or EU. Do we know if this is still happening, and if so, how do we get the ICO involved in this?[1]
In theory, yes. In practice, good luck getting your complaint acted on. Enforcement of the GDPR is a sad joke, to the point that there are lawsuits and criminal charges[0] against data protection authorities in some countries.
I don't think he advocated for getting rid of the legislation? He was just making an observation of the current state of things (which I confirm - GDPR enforcement is basically non-existent).
There are millions of websites that serve EU customers/users. Enforcing GDPR through the courts is expensive, because courts are expensive.
So GDPR enforcement amounts to making an example of prominent offenders, where the case is cut-and-dried. But before taking enforcement measures, the authorities will notify the offender, in the hope they will come into compliance voluntarily.
Honestly, I think cajoling people into compliance with the law is a better enforcement model than the "drag em into court, then fine the hell out of them" model, when the number of offenders vastly exceeds the legal capacity of the enforcement authorities.
Data protection authorities don’t need to go through the courts. They have the authority to levy fines. They are by and large slow, understaffed, and in some cases allegedly criminally negligent or coerced into inaction. Try to file a GDPR complaint and see where it gets you. I’ve done it a couple of times, once with the backing and support of NOYB (arguably one of the most prominent privacy organizations), and didn’t get any sense of the system actually working. Yes, there are some prominent fines trying to set an example, but as you say, with millions of websites and thousands upon thousands of violations, this is a drop in the ocean.
yes, and that’s why some countries are far friendlier destinations for HQs of global orgs. But even the more privacy friendly countries are (badly) handling huge backlogs, and are far from efficient in enforcing legislation.
Corporate answering like you're some kind of clueless idiot is the worst.
I'm a tenant. My real estate agency uses a website through which you can, among other things, send them files. Including sensitive files, such as passport scans etc (often required in my country when you want to sign a lease).
The files are put inside a cache.
The cache is not secured, and not even protected against directory listing on their web server. I could list the entire cache and see hundreds and hundreds of files there, including passport scans, salary certificates, etc. I wrote to them to say the issue existed and that I wanted to talk to a person in charge of the website to disclose the details (I didn't want the details, including the URL, going back and forth in emails, knowing that they probably don't secure their email correctly either).
The next day some lady calls me on my phone. She wouldn't understand the problem. "No Sir, in your personal account, you can only see YOUR documents, Sir." She was basically telling me what the correct behaviour would have been, completely unable to even process the idea that their website might be leaking highly sensitive documents in the wild.
I remember an admissions form for children (not sure which Indian state) that contained Aadhaar numbers (India's UID), sitting on a server with directory listing enabled. The response was: the site is hosted in a secure government datacenter, so there is no chance of leaking data.
I remember accessing SEBI and PWD records through Google dorking. They are probably still live; I haven't checked. I reported it to all the correct authorities and tried to get in touch with various organizations, to no avail.
I got myself involved in an argument with the Post Office recently. I was posting a letter to an international address and they asked me for - I kid you not - a colour print of my Aadhaar (black and white was rejected).
For people unaware, the document contains your photo, address, date of birth, and a very important number that the government itself 'advises' to not disclose.
I really needed to post that letter so I had to cave in.
When the letter was received, the receiving party told me they also received the Aadhaar stapled with the original letter.
Ironically, this practice of sharing your Aadhaar everywhere seems to stem from trying to "increase security".
It makes sense if you know who the Government thinks is the threat to that security - namely Indians against the government. What they don't seem to realize is that they're leaving the population completely vulnerable from both internal and external actors.
> The next day some lady calls me on my phone. She wouldn't understand the problem. "No Sir, in your personal account, you can only see YOUR documents, Sir." She was basically telling me what the correct behaviour would have been, completely unable to even process the idea that their website might be leaking highly sensitive documents in the wild.
And at that point, you should hand it over to regulatory agencies. It's one thing if some random person emails them, but a government agency letter will get the attention of at least some sort of legal department, if not the corporate leadership.
Can't you just hand that case to your national CERT (Computer Emergency Response Team) and let them handle this? After all, they may have more effective instruments against corporations.
> Corporate answering like you're some kind of clueless idiot is the worst.
I regularly deal with UK banks, many of which use an SMS to your phone as the 2nd auth step. I regularly ask for other ways of doing the 2nd step, such as TOTP or even hardware tokens. I get similar responses back that what they have is secure.
It is not secure. The banking sector's regulations consider it an acceptable 2nd step, which they conflate with being secure. So I regularly get condescending responses about it.
Don't worry, they'll "fix" it as most of continental EU banks did: by forcing you to use their app which will break on every OS update (when Android 13 rolled out, one of the banks had their app broken for months and their users couldn't authorize any payments).
You'll also be forced to use a mobile phone of a brand they personally deem "secure". Don't even think about any kind of privacy-respecting phone like GrapheneOS. But you'll be secure. At least by the definition of their "security consultants".
And if your phone breaks? Well, good luck buying a new one because the payment confirmation app is on that same broken phone.
Access to my HSBC accounts is via a digit-only PIN and SMS (to the same phone the app is running on) for the second factor. The PIN used to be a max of 9 alphanumeric characters, which is still bad, and when they changed it (earlier this year) to be shorter and numeric-only, all the communications were at great pains to assure me that things are just as secure as before… (Which by some definitions may be true: it went from not secure enough to not secure enough.)
Luckily for me the only significant thing I have with them is a mortgage that will be paid off in January at which point I'll be closing all accounts I have with them. In the meantime I'll keep pushing about it in every relevant thread in the hope someone high enough up there gets wind and is embarrassed enough to make change happen.
FirstDirect is very similar, though those accounts at least still have 9-character case-sensitive alphanumeric passwords not just a numeric-only pin.
When I had HSBC accounts, their security was laughable but in the other direction: IIRC, in addition to just the plain password which you had to type by hunt-and-peck on an onscreen keyboard, you also had to make up some "security phrase" which they would make you type a few letters from, like the 1st, 3rd, and 24th letters. Maybe their new security approach is a response to that.
In France, for some strange reason, banks are forced to have a stupid 6-to-8-digit number as a password.
Literally [0-9]{6}. They have "secured" it by forcing each bank to implement their own keypad, that randomises the order of keys in order to make scripting much harder.
It makes no sense; my password manager freaks out about it every time. There is 2FA (thank goodness), but still, it feels so stupidly insecure.
I wonder why, and who to complain to, if anyone has any info on that I'll take it
I know at least one french bank which accepts regular strings as passwords, so it may not be a legal requirement. Which makes it confusing why it's so widespread...
this post demonstrates how laughably highly specific and ad-hoc the current infosec religion is
> first example
everyone will freak out here because the password is therefore stored in plaintext (well not necessarily but maybe depending on the hashing scheme). the cringe here is that the most immediate reaction of the security junkie will be OMFG PLAINTEXT, rather than the more obvious "problem" that it just told you the password of another account
of course none of this matters since you could just brute force every account anyway since the password is short
> second example
this is not a bad guess at how to implement this feature. not everyone knows about gotchas like secure cookies, despite being 20 year old problem
passwords also shouldnt need to be "securely stored somewhere". and you shouldnt expect any web dev to get that right
in reality, the web should be encrypted/authenticated by default, and not using CA bloat. a public key should uniquely identify a website. dns shouldnt exist. if the web was for static documents like it was meant to be and without millions of unneeded complications like CSS, you could replace a http://longkeyblahblah with <Google>. yes, really, not even css should exist. something you use for banking should not be the same thing you use for looking at magazines and ads.
> But it's still a password in a cookie and it's still not HTTP only and they had reflected XSS risks on the site.
more web infosec pro dogma. just don't have XSS. of course that's too much to ask for web standards, since they will just extend the grammar in some way that adds XSS to your existing XSS-free code, but whatever, let's pretend mitigation is the most important thing in the world while anyone can just use your account without needing your password in 99% of the attacks concerning things that are mitigated with hashing and keeping passwords in "secure places".
now everyones gonna reply to me saying "well the layperson...". it doesnt matter. all the things i talked about are things the user has to solve himself otherwise he will be hacked, no matter how many bandages and "best practices (TM)" you use.
tl;dr: you're all stupid, your beliefs about security are WRONG, and your software has the same needlessly stupid shit, like the absolutely critical Apache Commons vuln last week that exists for absolutely no reason.
see, the problem here is that for 15 years i've been saying "just do it right", and people have been "arguing" that it will take a year to fix all the current broken standards (like the web not having a proper way to do authentication, being vulnerable to CSRF by default, not providing safe ways to compose the DOM, etc). instead, year after year you give yourselves applause for implementing the latest password hashing algo and cargo-cult things like JSONP to work around fundamental problems that will never be fixed in a web that should have died in 2003. you think my post is toxic, but really the fact that the 99%er webdev mocks people for not knowing about secure cookies or some other web workaround makes the 99%er webdev toxic (and they toxicly spread their bad products around, to boot).
> Yes, that's just a Base64 encoded version of your password in a cookie and yes, it's being sent insecurely on every request and also yes, it's not flagged as "secure" therefore it's being sent in the clear.
The domain has HSTS enabled with a duration far exceeding the cookie lifetime. A cookie not being marked Secure does not imply that the cookie "is being sent in the clear" and I'm tired of security consultants implying otherwise. Do we know whether HSTS was set in 2017? Not for sure, but I'd bet some money on it being set.
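The two mechanisms complement each other, and both boil down to a couple of header values. A hedged sketch (parameter choices are illustrative, not a recommendation for any particular site):

```python
def hsts_header(max_age=31536000, include_subdomains=True):
    """Build a Strict-Transport-Security header value. Once a browser
    has seen it, it refuses plain-HTTP requests to the domain, so a
    cookie can't actually travel in the clear even without `Secure`."""
    value = f"max-age={max_age}"
    if include_subdomains:
        value += "; includeSubDomains"
    return value

def session_cookie(name, value):
    """Belt-and-braces Set-Cookie value: mark the cookie Secure and
    HttpOnly anyway, since HSTS only protects browsers that have
    already visited the site once."""
    return f"{name}={value}; Secure; HttpOnly; SameSite=Lax; Path=/"
```

The `Secure` flag is still worth setting (first visits and browsers with an empty HSTS cache aren't covered), but the comment's point stands: its absence alone doesn't prove cleartext transmission.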