In the end, I find a lot of Chrome's decisions to implement spec-breaking behavior awful in the context of having a website that works forever (looking at you, SameSite). But this behavior rarely breaks functionality and on the whole makes the web a lot more secure.
* Username and Password fields must not autocomplete
* Username and Password fields must not allow text to be pasted in to the field
* Password must be at least 8 characters with lower case, upper case, numbers, and special characters (they didn't care it had a maximum length of 8 characters)
I straight up told our project management it was actively hurting our security, and was told the point here was to fulfill a regulatory requirement to complete and resolve all issues from an independent "pentest", not to improve security.
Having to tell a client another company they’ve hired are absolute clowns, without making it seem like we’re trying to save our own skin, is certainly interesting.
This isn't the time to tread lightly, but to go scorched earth. This isn't an "oh, we disagree on the finer points!" debate between peers kind of situation, but a flat-out "these knuckleheads are putting you at risk and you need to know it". You want to get the point across that you're not messing around or leaving room for doubt.
Source: have had these conversations several times over the years. I normally pride myself on tact, but in my experience tact is the exact wrong approach here as it gives the client the impression that there's a wiggle room of doubt.
“I’m sorry if this means we can’t do business any more, but this situation has gotten so severe, that I just have to tell you the unvarnished truth, and ….”
Hummmm. So a couple of years back, I was working on some internal tools that passed sensitive information around and I found some interesting info.
Some bloggers INCORRECTLY thought that HTTPS didn't secure the URL. In fact, parameters passed in the URL, like ?item=bla, are encrypted.
Also, some cloud providers' load balancers (e.g. AWS) allow you to offload HTTPS encryption/decryption - so there REALLY IS plain-text traffic on the final leg of the journey (e.g. from the LB to the server)
In the end, the biggest thing I learned is that HTTPS is hard and it sucks.
It’s still good practice to keep sensitive info out of URL query parameters, which often leak into server logs.
This is customizable by setting the referrer policy: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Re...
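To make that concrete, here is a minimal, illustrative WSGI middleware (the function name and chosen policy value are my own; `Referrer-Policy` itself is the standard header from the MDN page above) that keeps full URLs, query strings included, from leaking to other sites via the Referer header:

```python
def referrer_policy_middleware(app, policy="strict-origin-when-cross-origin"):
    """Wrap a WSGI app so every response carries a Referrer-Policy header."""
    def wrapped(environ, start_response):
        def sr(status, headers, exc_info=None):
            # Append the policy header to whatever the app already set.
            headers = list(headers) + [("Referrer-Policy", policy)]
            return start_response(status, headers, exc_info)
        return app(environ, sr)
    return wrapped
```

This is a sketch, not a drop-in for any particular framework; most frameworks let you set the same header in one line of configuration instead.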
At first I thought this must have been what they meant; perhaps there was some configuration thing we got wrong.
So we asked for clarification and nope, the example given was that someone logging in from an office could have their credentials sniffed freely by anyone else on the office LAN.
I sent the client back a list of government and military websites that respond to ping. As an extra bonus, it turned out the pentester's own website responded to ping.
Sheesh, this one line in their report caused around 3 hours of meetings with around 10-20 people in them... and there were a lot of lines like this.
They tried to sell us an external/internal auth service, similar to Keycloak, with their support. What the pentesters wanted to achieve was not improved security, but to sell their services as DevOps and developers. This was not what we expected from pentesting.
In the very last paragraph, as a conclusion to YOUR exercise, explain how the utter lack of competence in the subject matter displayed by the consultant has resulted in blah, blah, dollars, time, effort, all down the drain. Emphasize the harm to the organization and how it affects the trust required between different groups.
I guarantee it will get you promoted or fired. Which one depends on the organization and I expect you already know what will happen.
Interestingly, I checked a few big sites, and while Google doesn’t, Facebook and Amazon both use client-side encryption. Is it just to provide some extra protection for pwned users who have trusted bad certs? I’m no security expert, and I’m struggling to think of any real benefit.
For the JS/CSS thing, I have literally no idea.
> Offer the option to display text during entry, as masked text entry is error-prone.
And under 10.2.1:
> Support copy and paste functionality in fields for entering memorized secrets, including passphrases.
(... snip ...)
> Allow at least 64 characters in length to support the use of passphrases. Encourage users to make memorized secrets as lengthy as they want, using any characters they like (including spaces), thus aiding memorization.
> Do not impose other composition rules (e.g. mixtures of different character types) on memorized secrets.
> Do not require that memorized secrets be changed arbitrarily (e.g., periodically) unless there is a user request or evidence of authenticator compromise. (See Section 5.1.1 for additional information.)
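Taken together, the quoted NIST SP 800-63B guidance boils down to a surprisingly small check. A sketch (the function name and the breached-password screening detail are my own illustration; the rules themselves come from the quotes above):

```python
def acceptable_password(pw, breached=frozenset()):
    # Per the NIST guidance quoted above:
    #  - require a minimum length (8 characters)
    #  - support long passphrases (no low maximum; spaces and any
    #    characters allowed - nothing here forbids them)
    #  - impose NO composition rules and NO arbitrary rotation
    if len(pw) < 8:
        return False
    if pw.lower() in breached:  # screen against known-compromised passwords
        return False
    return True
```

Note what is absent: no required symbol classes, no maximum of 8 or 16 characters, no "must change every 90 days" timer.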
Oh, and that password? Not case sensitive.
What, you expect them to make a case-sensitive version of NTFS just to store your password??
So, it will show you what was entered and make you think it’s case-sensitive, but then when you go to do the comparison, it actually ignores case.
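The effect described above - case-preserving display, case-insensitive comparison - is what you get when a legacy scheme normalizes case before checking, as the old LAN Manager hash did by uppercasing the password before hashing. A toy sketch of the behavior (not any real system's code):

```python
def legacy_password_check(entered, stored):
    # The UI echoes exactly what you type (case-preserving), but the
    # comparison normalizes case first - in the spirit of the old LM
    # hash, which uppercased before hashing - so case never matters.
    return entered.upper() == stored.upper()
```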
The stupid thing is that MacOS was also case-preserving but not case-sensitive for a long time.
[nathell@macmini /tmp]$ echo first > A
[nathell@macmini /tmp]$ echo second > a
[nathell@macmini /tmp]$ cat A
The auditors are typically 10 to 15 years behind technical security expertise.
If I can play devil's advocate for a moment—isn't this just how insurance necessarily works? Your car insurance company isn't going to interview your teenage son; they don't care that he's a particularly mindful individual, who never speeds because he remembers the time a close friend died in a car crash. "The policy says 17-year-olds are high risk, pay us a zillion bucks a month."
Of course, guidelines that have literally zero value still have zero value. But they have to come up with something concrete...
The only way to check the "Has taken a driving class and has at least 20 hours behind the wheel" box is to verify exactly that. How many different ways could you check the "Secure password requirements are enforced for users" box? How many ways could you check the "physical security for encrypted systems" box?
I'm not quite sure where I'm going with this. Something about, maybe things are broken because they don't fit in the insurance company model, and someone needs to solve for that before anything gets better.
Probably not, but they are there to be paid by their customers. Does the customer have to mark a checkbox on a regulatory form? Give the customer some answer which is not blatantly false or useless, get the money, come back next year.
It's because it's in some dumb regulatory pentest manual or something. OK.
I still can't believe that whole business managed to interpret 2FA for the whole EU as "you MUST use SMS for 2FA!".
We're actively harming the user experience (and driving paying customers away) because of some "expert" advice.
I'm not really sure what the best fix is; there are many possible ones. I've seen total clowns pushing decades-old nonsense be taken seriously by competent businesses simply because they thought "hiring an expert" was enough, like they're a plumber or something.
I think that’s a big difference.
I believe there was an article on HN recently about a startup that used a "lawyer" who wasn't one, because they didn't check their credentials after getting a great reference. Just because there are consequences doesn't mean it doesn't happen.
I feel quite certain that I haven't, I just think the point is poorly made and I've spoken specifically to why I think that to be the case. You can get all the recommendations and referrals you want for an infosec professional; nothing stops that person from holding themselves out to be such a professional, quality of work or competency performing it notwithstanding.
You can absolutely suck as a pentester, but still legally hold yourself out to be one and advertise yourself as one to anyone who will hire you.
You can NOT do the same, holding yourself out as an attorney or a doctor, without very real risk of legal action if you are in fact not licensed to do either. There are bar associations and medical boards that govern various aspects of their work and how it is conducted, perform ethics and competency investigations on license holders, and can take away their license to continue working in such a capacity if said investigations deem it fit. No such governing or ethical board exists for infosec professionals.
That is a pretty important difference that shouldn't be ignored just to make a petty point about how easy it is to ask for a referral.
Just because there are consequences doesn't mean it doesn't happen.
Which is only supplemental to all of this. My entire point is that it happens, and the prudent do the diligence to make sure it doesn't.
People who are wrong usually do.
> You can get all the recommendations and referrals you want for an infosec professional; nothing stops that person from holding themselves out
Here is where you missed the point.
You are correct that we do not license, say, pen testers the same way we license doctors. You are incorrect in thinking that this matters.
The point is that in both cases, reputation is the best general-purpose measure of who you want. That's all.
My mentioning certs may have steered you wrong, and that was a bit of a distraction. My point there was that certs tell us something, usually not much, but are still better indicators than their self-advertising.
Does reputation matter? Yes. This I will openly concede. Do I think credentials are meaningless? No.
Where we disagree is "thinking that this matters". I still think it absolutely does, and think the analogy is a poor one. You clearly think it doesn't; that's fine, but I don't think it makes either one of us more or less wrong. Perhaps that's all there is at play here: a difference of opinion in how an organization conducts the search for a qualified expert in security, medicine, or law. And I think it's disingenuous to frame such organizational decision-making and risk tolerance in rigid, inflexible absolutes of "right way" or "wrong way", or of method A mattering while method B doesn't.
Also the computer itself solves this problem for you in many cases, a guest profile typically deletes all browser session info when you log out.
Many sites? Probably.
You're assuming people log out reliably or otherwise behave in the most secure way. They don't.
I also don't see how logging out/killing a session after 15 minutes of inactivity is much of a hardship for the user.
And it's not just extremely annoying, it's also completely unnecessary. Just put a "trust this browser" checkbox on the sign-in page and adjust the session timeout accordingly.
That works. It defaults to the "safe" behavior, but allows users to self-select into other behavior that they find less objectionable.
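The checkbox-plus-timeout idea above can be sketched in a few lines (the specific TTL values are my own illustration, not anyone's recommendation):

```python
import time

SHORT_TTL = 15 * 60        # default: expire after 15 minutes idle
LONG_TTL = 30 * 24 * 3600  # user ticked "trust this browser": 30 days

def session_expired(last_seen, trusted, now=None):
    # Safe-by-default: the short timeout applies unless the user
    # explicitly opted into the longer-lived session at sign-in.
    now = time.time() if now is None else now
    ttl = LONG_TTL if trusted else SHORT_TTL
    return now - last_seen > ttl
```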
The client can only use numerical passwords. When loading the login page, their site also loads the number pad, which consists of an HTML pad containing the 10 digits. The digits are displayed as base64 images and in a random order, so it's impossible to determine which digit is which from parsing the HTML alone. In the HTML, the images of the digits are each associated with a random 3-letter string. This string will be sent to the server instead of the plain digit.
With the number pad, the site also loads a "challenge", and this challenge is sent to the server when connecting. My guess is that this challenge is an encrypted string that indicates which digit corresponds to which 3-letter string.
I made a script that logs in to my bank account to get some information, and I was able to do it without using OCR on the images of the number pad because the images never change, so their base64 strings are always the same. I was a bit disappointed when I realized it; I thought that the people who came up with such a twisted login form would have added random noise to the images, just for fun.
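The workaround is worth spelling out: because the base64 image strings never change, one manual labeling session replaces OCR forever after. An illustrative sketch (all names and data shapes here are my own, not the bank's actual format):

```python
def encode_pin(pin, pad, digit_by_image):
    """Translate a numeric PIN into the tokens the server expects.

    pad:            random 3-letter token -> base64 image it displays
                    (scraped fresh from the login page each session)
    digit_by_image: base64 image -> digit, labeled by hand exactly once
                    (possible because the images never change)
    """
    # Invert via the hand-labeled table to find which token encodes
    # each digit of the PIN in this session's shuffled pad.
    token_by_digit = {digit_by_image[img]: tok for tok, img in pad.items()}
    return [token_by_digit[d] for d in pin]
```

Had the bank added per-session noise to the images, this shortcut would fail and the script would actually need OCR.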
When I was a kid, a teacher told me learning was supposed to be hard and unpleasant, and I believed her for a long time. Only when I started enjoying myself in spite of that did I see it was wrong, and I started doing well in school, and (more importantly) pursuing my own interests.
There's a similar thing with security - people assume good security must be painful, so making it painful becomes a goal. Sometimes this is sincere, sometimes (TSA) intentional theater. But either way, the result is intentional hostility to the people who use the system.
I'd bet money they have a one-sentence answer for why it does each of those things ("order is scrambled to prevent shoulder-surfing"), but have done zero testing to determine whether those theories are correct.
Another favorite of mine is password composition rules, which do nothing but reduce security and are everywhere :(
I'm familiar with two (2) common kinds of "2FA" implementations. TOTP and SMS.
Of those two, only SMS is actually a second factor, albeit not a particularly secure one. TOTP is fundamentally a password, and two passwords are no different than one password.
I see this view a lot. It's wrong. TOTP is fundamentally different to a password, as the stored "password" (by which I presume you mean the key) is never transmitted anywhere.
TOTP in fact has one property that makes it potentially* the most secure of all 2FA methods: it can be used airgapped. As the credential you type into the 2FA form is not the saved secret.
* I say "potentially" because the relative inconvenience + human factors conspire to make it less secure than e.g. U2F in most cases. But assuming hypothetical perfect conditions, there would be nothing more secure than TOTP for 2FA.
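To illustrate why the stored secret differs from what gets typed, here is a minimal RFC 6238 TOTP generator (the standard algorithm, not any particular vendor's code). The base32 secret stays on the device; only the short-lived derived code is ever transmitted:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    # The shared secret (the "seed") never leaves the device; only this
    # short-lived code derived from it is typed and transmitted.
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

An eavesdropper who captures the 6-digit code gets at most `step` seconds of use out of it, and learns nothing usable about the seed, which is exactly the property a replayed static password lacks.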
You’d need to type a nonce into the dongle, then type the result into your computer.
TOTP is just a password. Also, in practice, the server has to have non-air-gapped access to a TOTP generator, so it’s not really air-gapped at all.
Read up on the great RSA key fob recall for an example of TOTP-style auth gone horribly wrong.
> You’d need to type a nonce into the dongle, then type the result into your computer.
That would be a cool augmentation of digest auth, but afaik is hypothetical currently (at least as far as common use goes). I can use TOTP airgapped right now.
> in practice, the server has to have non-air-gappped access to a TOTP generator
This is a fair point, but requiring full server compromise is still a nice step up from being mitm-able.
> so it’s not really air gapped at all
That seems like a rather extreme conclusion to draw. Client-side only air gapping is still airgapping, the fact it doesn't extend to protection from server compromise doesn't completely invalidate the benefits.
Are you familiar with SRP?
TOTP has all of the properties of passwords, and no properties that passwords don't have. That makes it... a password.
I would say SRP is strictly a misnomer (though it's a useful conflation). Generally speaking password is a value provided for authentication (if it's no longer being "provided", as in SRP, it's something different... but I understand using a familiar word for that something different is helpful when communicating).
Either way, in saying TOTP was "just a password", the point you were trying to make was that TOTP is "no different than, and therefore no better than, a 2nd traditional password". The fact that it's not transmitted makes it very different from, and better than, a traditional password. So however you want to define the term, the point stands.
> and no properties that passwords don't have
It has 1 property that passwords don't have: it is not transmitted!
TOTP is a password. The fact that it is a password doesn't matter though since it is something you have (and can't know) which augments the something you know. This satisfies the intent of MFA.
It kills me that most enterprise environments use Kerberos via Active Directory, LDAP, or NIS. So, your workstation probably has Kerberos tickets sitting on it, which would allow very lightweight 2-way authentication and encryption of internal flows.
TLS client certificates and TLS-everywhere would be another good option, but it's particularly frustrating that the Kerberos TGTs are already on the client machines. The key management part is already solved in the Kerberos case.
Kerberos is even potentially resistant to quantum cracking. (Grover's quantum search algorithm effectively halves the key size of ideal symmetric ciphers, so you'd want 256-bit keys.) Forward secrecy is an issue, but there are proposals to incorporate DH key exchange in the pre-auth to give imperfect forward secrecy. A post-quantum key agreement protocol, like RLWE, would be fairly straightforward to incorporate, with standardization being the main hurdle.
Part of the problem is that it's "enterprise" tech, which means all sorts of "enterprise" middleware claims to support it with some half-assed concoction that worked on the presales demo environment once, back in 2001, and nobody else has touched since. And it's also old and pretty obscure, with documentation lost to the fog of time, and very few people who remember how it was supposed to work - a bit like MS DCOM...
Slight detail that’s of course completely irrelevant.
You realize that, out of the many comments I've made in this tree, the one you responded to was the one that said
> Are you familiar with SRP?
There are more ways of compromising someone's information than capturing it in transit. If you give me your phone, I can read your TOTP seeds straight out of Google Authenticator.
The "Password" named in "Time-based One Time Password" is the temporary generated value you transmit. It's not what's stored on the TOTP device, so in the context of this discussion, that temp value isn't what the gp was referring to.
Careful; "one-time password" is in the name, and it certainly isn't that. Your TOTP seed stays valid forever.
After the security backlash they now backpedaled and implemented 2FA with ONLY apps. Apps that ONLY work on iOS and Google Android. I had endless calls from family where they couldn't access their banks anymore because they had a Huawei phone or a dumb phone. Banks are citing "security" as explanation why they can't use smartcards, hardware tokens or even bring apps to desktop computers or phones without Google services.
The funny part is - ALL banks did this at once. Why? Because the security consultants had "must have app" and "must check Google Safety net" on their check lists.
What country are you talking about? In regards to the EU 2FA thingy, I'm starting to see a pattern. In countries that had established online banking standards with 2FA, nothing changed. But countries without went ballistic: SMS or app-only 2FA on every login and on every transaction. Yeah, I can see that this is annoying.
Meanwhile, with my German banks I still access them using the FinTS protocol with banking software of my choosing. For transactions above 20€* I need a TAN from my chipTAN/Sm@rt-TAN device (which shows you the transaction details). Optionally, I could choose an app. SMS was phased out years ago (by my banks; others perhaps still have it).
(*only for 3 transactions a day, I believe. You can deactivate that so that you get asked for a TAN every time.)
It's a minor inconvenience for someone who is organised or is used to store secretes securely but a complete nightmare (including a security nightmare) for your average Joe.
Thanks EU, thanks governments for your precious regulations that keep us safe.
I wonder how many similar stories there are in fields I'm not an expert in.
I talked with fintech founders and they mostly say "sure, we could give better user experience and then have a fight on our hands with auditors because we didn't fill out all the checkboxes from the reputable security consultancy that 'interprets' the requirements"
Indeed, this is an argument you can reasonably make.
> TOTP is no more a password than whatever one-time code you'd get by SMS.
But this isn't; this is just a blatant lie.
A second factor is something you have, i.e. your phone, a hardware token, or access to a shared secret you don't store in your head.
Password managers kind of mangle the idea and turn the password from something you know to something you have.
The idea of "something you have" is that the thing can't be duplicated. As soon as it can, it's no longer "something you have". Any number of people might have it. A person who has it might not be you.
SMS hijacking, for example, converts your phone-based authentication to a password, where the password is your phone number. (Since an attacker who knows that number can pass the test.)
TOTP starts its life as a password.
SMS hijacking doesn't "convert" anything any more than someone with a telephoto lens "converts" an old-style hardware token to a password. (Yes, I know the P in OTP is "password", called that because it's entered by the user. It's not a password in terms of a factor you "know", because it's time-limited.)
These are also fluid ideas that are used to describe roughly different failure modes for different types of authentication:
Passwords are thought of as things the user can disclose.
Totp and other "second factors" are thought of as things that must be stolen, or if disclosed have a very short viability time.
Biometrics are things that can't be disclosed, but can be lost, and (when properly implemented) not stolen.
You're trying to argue that these categories of authentication factors have hard lines and definitions when they're fluid categories being used to think about failure modes of a method. Each specific authentication method has its own strengths and weaknesses.
Also, SMS hijacks require a lot more than simply "knowing" a phone number. While SIM cloning and SS7 attacks are known and very possible, they're still fairly complex. You can also socially engineer tech support at phone companies to activate your SIM for an account, but that is also significantly more difficult than simply "knowing" a phone number, and is also a failure of the authentication the phone carrier is using.
I didn't notice this sentence before. Compare the issue of releasing photographs of master keys.
Compare the (correct) comment from that post:
> the press has helpfully published a photograph of the keys, so you can make your own, even if you didn’t win the eBay auction.
with this official statement from the government of New York:
> “If you’re selling it, it’s in your possession for an unlawful reason,” said City Councilmember Elizabeth Crowley, chairwoman of the Fire and Criminal Justice committee.
( https://nypost.com/2015/09/20/the-8-key-that-can-open-new-yo... )
Saying "you're not supposed to have this" won't stop people from having it. These keys are regulated as if they are "something you have", but the facts are otherwise.
TOTP gets set up in the first place when the website discloses your seed to you. It's not something that can't be disclosed. Seeds get disclosed all the time; workflows are built around it.
> Biometric are things that can't be disclosed
Huh?? Biometrics are things that it's impossible to avoid disclosing. If you're ever in a police station, they are free to sample your DNA. You shed it all over the place. If you ever handle something, you just disclosed your fingerprints. If there are any pictures of you out there, your face is public information.
> sms hijacks require a lot more than simply "knowing" a phone number.
I didn't claim otherwise. The intent of my sentence above is to say that a context which involves a working hijack attack converts an SMS challenge from a second factor into a password. If your attack is working, knowing the phone number is sufficient to authenticate as the victim.
It seems to me you are ascribing properties to "something you have" that aren't warranted. The "something you have" needs to prove you were party to the initial exchange, not necessarily that you were the only one present -- that's why we use two factors, and not only TOTP.
> The "something you have" needs to prove you were party to the initial exchange
This is not something that can be proven at all. Accordingly, proving it is not a goal. Anything that can be had can also be transferred. Your delegated agent's login attempt is just as valid as yours is.
Similarly, they can grab the shared secret from the server.
It’s marginally better than a password manager (though some of those support TOTP now), since they can’t pull all your credentials by keylogging your master password.
The seed that generates the one-time password is tied to the device.
All I need for password authentication is the password and a device that can generate a one time proof that I know the password.
TOTP just seems more secure because the password is never displayed to the end-user.
A password/passphrase/passcode is something you know.
A hash for a TOTP is something you have. 2 factor means something you know and something you have (or something you are): https://dis-blog.thalesgroup.com/security/2011/09/05/three-f...
(And yes in theory you could remember the hash, and have a custom TOTP client that lets you enter it in. But unless you do this it is a theoretical argument only).
In fact, Google Authenticator even lets you conveniently export all running TOTP secrets to another Google Authenticator without any connection to the apps or anything else whatsoever.
I was in total agreement with him - you can in theory run the algorithm by hand.
It isn't especially relevant though - 2-factor is "something you know, something you have". You need to have the hash.
Would you not install two deadbolts on your door if you needed the extra security?
I agree this is probably product managers, but may also be engineers who have strongly held "security" opinions and nobody to check them.
But I fully agree with the disable-paste stuff. Very few (web-related) things get as annoying as that.
As a low-risk privacy defect, yes, because things like bank account and routing numbers would be stored in autofill for certain banking sites that don't require authn/authz to initiate a transfer.
(I can think of a handful of platforms frequently used for common services like paying HOA fees which are currently vulnerable to this, meaning another user sharing the machine can simply hit ⬇ on the keyboard in form fields on a page that doesn't require authn/authz to initiate an external transfer in order to capture any stored banking details that were previously entered into the form.)
Source: I was one of those brain-damaged appsec pentesters.
My biggest security vuln is Google. And I've seen too many new account usernames out there like forgotlastpasspw to use an external manager.
Firefox, thankfully, keeps the passwords.
One of our local banks disabled autofill without warning, and they went out of their way to detect if someone was pasting a password.
There was backlash and frustration, and they eventually reversed the decision.
After reversing it, they still put a disclaimer about not pasting passwords, but that disappeared after a few weeks.
I recall working with some folks who supported load balancers when Chrome decided that something seemed 'unnecessary' and they updated Chrome and ... it broke load balancing.
Thankyouverymuch. I am gonna keep using my password book.
There is no sure way, as a private person who is not an expert in security, to secure your browser. But there are ways to limit the damage that can be done. Maybe just don't make it too convenient, and don't keep a database of all your passwords on all your devices?
Besides, you can encrypt the local storage with a master password (and if you accept online as a requirement, you could even add 2FA to that).
Not only that, I would argue that a physical booklet is not only more secure but also safer. Nothing short of a house fire will destroy the booklet, and however much I like to rave about old-school ThinkPad durability, I don't think my locally stored encrypted database would survive that either.
The modern security hazard is not someone reading the post-it sitting on your desk; it is someone remotely getting access to some part of your computer, or some service you own, that can tell them what the password is.
The post-it note in our world is more secure than lots of things that have replaced it.
on edit: I see Mordisquitos said it better than I.
Is it? If someone is physically in your home you are in greater trouble anyways and even then they likely aren't going to be grabbing a notebook. Just keep it somewhere nearby but hidden (notebook in a drawer on the desk).
I believe most browsers will use the system keyring (which is usually encrypted based on your login password or a TPM) if present, or use a master password to encrypt them at rest.
IMHO, the decision of whether to show auto-complete should be with the user and not with the website. When I install an auto-complete add-on or activate a browser feature, I expect the AC to be available on ALL input fields, whether the site owner thought that would be a good idea or not.
Now, there is a valid question on how the user should be able to configure the AC behavior, and how the website may help inform this configuration, but the decision should be with the user. The website should not have the final say.
So I would see this as more of a shortcoming of the HTML Spec.
1. A "name" field on a dialog for creating values in a controlled vocabulary (e.g. genres in fiction) -- Chrome thinks this is a username field so brings up a user autocomplete. I guess it thinks that "Jane Smith" is a valid label!
2. Editing user details (username, full name, email, etc.) -- Firefox thinks the email is a good place to autocomplete the password.
With these, I've had to employ several workarounds to tell the web browsers that these are not login forms, so please don't autocomplete them as such, all because they ignore `autocomplete="off"`. I've got these working now, but if Chrome/Firefox decide to ignore the markup because of sites misusing them (like they've done before), I'll need to work out how to avoid this again.
Even if you add `autocomplete="email"` to that field?
However, conceptually the right place to fix/configure this is the browser. So the correct long-term approach is to open a bug/feature request and get this properly addressed. Everything else is, well, -- a workaround.
(Again: I understand that the correct approach can take years, and it is unclear if it will succeed at all - so it may be impractical.)
For example, in a multiplayer game I worked on, you could set a password when you created a private room in the game. The browser always auto-filled it with your account password, which is definitely not good because you have to share the room password with others. Telling the browser not to autocomplete that field didn't work, because of the "the browser should know better than the website" thing you mentioned.
There's a setting in Chrome where you can disable auto-complete on a field-by-field basis?
Well, the alternatives may not be perfect, but this clearly isn't either. They could create videos rebuking disabled password fields, or put warnings in the webmaster console, or apparently just release a vague statement that "disabling password fields or disabling pasting into them will now majorly detract from your placement in search results" and turn the marketing/SEO team against bad security contractors.
It's fundamentally wrong to decide what 'rights' website users have (aside from when it comes to privacy).
There are myriad ways a website can become un-user-friendly to the point of being unusable, not the least of which is that you can completely disable the cursor, or completely hide certain parts which are really there (e.g. display: none).
The point being, there is a fundamental 'trust' a user gives to a website developer: that the website they visit will behave as the developer intended. The user even expects to get the site just as the developer created it, however 'bad' that may be.
Now of course it is in the interest of the web developer to make their site user-friendly if they want to appeal to a wide populace. But it is totally in the purview of the developer to make the site even completely unusable.
I don't understand how a browser has the audacity to force its assumptions about site behavior onto the user/developer.
So it's not even those "corner case big boring CRM business apps" that had to find workarounds to forced-autocomplete, it's "real" user-facing ones too. Very frustrating.
The only way I was able to fix it was renaming the field.
The recommended alternative solution posted by a Googler in the above Chromium thread is to specify:
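(The actual snippet didn't make it into this comment; as I recall it was along these lines, with the exact token being my assumption rather than a verbatim quote:)

```html
<!-- "new-password" tells the browser this is a fresh credential, -->
<!-- so it offers to generate/save rather than autofill the stored one. -->
<input type="password" name="room-password" autocomplete="new-password">
```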
Browsers as they stand now can't truly block autocomplete, or block pasting into an input field. Unless a site implements its own text field with a canvas and takes keystrokes itself, it isn't really blocking paste. (And even if it does that, I can still Tampermonkey my way into a "paste".)
> Conforming to the spec is not a virtue.
> When the spec is malicious, conforming to the spec is malicious behavior.
> I'm comfortable calling it a bug in the spec. `a << 40` needs to have 0 in the lowest 40 bits. It does not need to have random values in bits 8-31.
> This behavior is documented, but that doesn't make things better, it makes them worse.
> But the philosophy that says "if it's documented, then it's OK" doesn't even allow for the concept of a bug in the spec.
Implementing a bad idea doesn't become a good idea just because someone once wrote that it was.
Remember, there are autocomplete values like "current-password" to accommodate exactly this. If your bank has a password field without that attribute, do you think that's following the spec?
I encountered quite a few myself and was very annoyed. I guess devs took the "usability" side of the question.
Well apparently it is, because they're doing it.
That's a tad over-dramatic. And context matters, surely I don't need to remind you why Google is spending so much money on Chrome?
Having a company control 70% of the browser market is bad enough; we don't need people telling them to go ahead and ignore specs. Remember that they don't make those decisions out of goodwill for us.
But as soon as browsers stop autocompleting fields marked with autocomplete="one-time-code", won't website developers start marking _all_ input fields with this tag? After all, why do people put autocomplete="off" on input fields anyway?
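For reference, one-time-code is meant for OTP fields, where the browser (Safari, notably) suggests a code it just saw arrive via SMS instead of stored credentials:

```html
<!-- The browser may offer a just-received SMS code here, -->
<!-- but should never offer a saved password. -->
<input type="text" name="otp" inputmode="numeric" autocomplete="one-time-code">
```

So slapping it on every field would mostly just break useful autofill rather than gain anything.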
It's broken as fuck.
We have customer service representatives that accept orders over the phone, including credit card numbers. These should not get stored by the browser as autocomplete data.
The other side is the situation we have now: autocomplete doing the wrong thing all over the place with no way to stop it. Stomping on my app's own database-driven autocomplete really hurts the user experience. It also autofills fields without the user noticing, entering wrong data into forms. What a mess.
Because they do put autocomplete="off" on login form, username, and password fields. At least for me:
UPDATE: please help me write a sarcastic comment about Apple Store team putting autocomplete="off" there, and Apple Safari browser ignoring it.
One app is a kiosk that keeps saving people's passwords and autofilling them for the next user. Another app has its own address dropdown, but Chrome hides it and keeps autofilling the same address over and over, making the app useless. A third app is for admins creating users, and it keeps autofilling the admin's own details, so that info keeps accidentally leaking into the new user accounts. Another app is for applying for a bank service with very strict requirements: names get autofilled in a way that doesn't follow the requirements, users assume the autofill is correct, then they get rejected and have to physically go to a branch to fix it.
Don't be a know-it-all. Go actually learn something.
Having a browser second-guess its own markup after that markup has already been established to work a certain way is really dangerous. We're talking about the web, the most popular platform in the world, and Chrome is the most popular browser. Making changes like this on a whim is irresponsible handling of that burden by Google.
Try again, but with less personal invective. You're listing a few bad things that happen because Chrome ignores autocomplete="off", but you're not listing all the bad things that would happen if Chrome didn't ignore autocomplete="off" --- namely, users using weaker passwords and getting compromised more.
Sorry, all the things you mention sound like minor annoyances to me. It's much more important that websites not block secure password storage features in browsers.
To me as a web developer (among other things :D) this is quite annoying because password managers often hijack our forms when they decide that the label (or id or classname etc.) sounds suspiciously usernamely, passwordly or credit cardly.
I don't care what reason they have to be so intrusive in UX, probably some malware fight and/or prevention. The fact is that if I am going to use 1Password or another password manager per site, with 25-character passwords full of symbols and numbers, I want to be able to somehow fill that in without typing each letter. Some sites don't care about these use cases, as they are trying to cover the asses of non-tech-savvy users. They must protect the password123 crowd, right? So password managers need to fight back, unfortunately.
Also, if you have a problem, contact their customer support. I had a tweet get a few hundred likes about a non-pastable field on a transportation website, and they actually changed it later that week!
The absolute worst are fields where paste is disabled, and the characters are also echoed as "*" so you can't even see what you are typing. I saw this with SSNs when I submitted some tax forms on my state's website recently.
The only argument I can think of for disabling paste (and I think it's pretty weak) is on a form to set a new password, where you need to input the password twice (and the form validates that they match) you might want to make the user actually type the same password twice, rather than let them copy/paste the first entry into the second field.
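Concretely, the pattern being debated is usually just a paste handler on the confirmation field; a sketch of the anti-pattern (not an endorsement):

```html
<!-- The anti-pattern: swallow paste events on "confirm password". -->
<!-- Trivially bypassed via devtools or an extension, and hostile to -->
<!-- password-manager users who generate and paste into both fields. -->
<input type="password" id="confirm-password"
       onpaste="event.preventDefault(); return false;">
```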
Please no. I generate a password in bitwarden, save it, copy and paste twice. Don't do that. I really don't want to type a 24 character password with lower / upper letters and special characters. If you do that to me, I will leave your website and never come back.
This is the only issue I've ever had with copy/pasting passwords, it only happened once, and the site preventing me from pasting would have done nothing to prevent it.
I don't understand the rationale either.
Also, double-validated password forms should allow pasting, to promote the use of managers. Forcing users to type them in creates more opportunity for mistakes - you can type the same wrong password twice... Muscle memory is funny that way.
I think this is also why lastpass clears your clipboard a few moments after you click the “copy to clipboard” button.
Lastpass and other password managers like 1password wipe the clipboard after a few seconds to minimize native app access to the secret.
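A browser-side sketch of that clear-after-delay behavior (assuming the async Clipboard API; the managers themselves do this natively, with direct OS clipboard access):

```html
<script>
// Copy a secret, then overwrite the clipboard after 30 seconds so it
// doesn't sit around for other apps to read. Sketch only: a web page can
// only clear clipboard contents while it has focus and permission, which
// is why native password managers handle this themselves.
async function copySecretBriefly(secret) {
  await navigator.clipboard.writeText(secret);
  setTimeout(() => navigator.clipboard.writeText(""), 30_000);
}
</script>
```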
Cargo-cult internet "security" practices are legion in the retail-banking sector. Like with most things it starts with good-intentions but when modern research suggests better-things the worst of them just knuckle-down with hypertension-inducing results: https://www.troyhunt.com/tag/banks/
* Banks think that having users commit their banking passwords to memory is far preferable to having them use password managers.
** Password managers on Windows can theoretically get hacked by malware:
*** Sure, the data is encrypted at-rest, often with your DPAPI key (e.g. Chrome and Edge's built-in manager) or with 2FA (e.g. LastPass), but none of the password-managers I've used on Windows (Chrome, Edge, IE's, Firefox's, LastPass, etc.) take any steps to protect their hWnds from inspection by other userland processes running at the same privilege level. This does surprise me - I honestly would have hoped/thought that by now password managers would use Office IRM-style protections (e.g. `SetWindowDisplayAffinity` https://stackoverflow.com/questions/21268004/how-does-office... ) and/or access the password-database and show results in an elevated hWnd, to protect them from lower-privileged hWnds and processes.
* Banks believe that password-managers present a risk to their customers (and by-extension: their own bottom-line) because:
** If they do recommend users use a password-manager then they run the risk of a user downloading and using a scam or malicious password-manager and then blaming the bank once their account gets hacked and drained.
*** Banks don't want to get into the business of recommending any particular password manager: there's too many to choose and it's not their business to vet the good ones from the bad ones.
*** So it's easier just to not recommend using any password-manager. This then logically extends to recommending not using a password-manager, using whatever weak reasons exist for arguing against them.
* As for why paste is disabled: This notable article by Troy Hunt deals with this exact issue https://www.troyhunt.com/the-cobra-effect-that-is-disabling/
** The first reason blame-shifts to the bank's accreditation/certification/PCI/EV/etc. process - which seems sus, though plausible, depending on exactly which certification's rules and guidelines could be broadly misinterpreted by whatever technophobic upper-executive is in charge of a bank's retail online banking user-experience.
** The other examples listed seem (to me) to be all about discouraging users from copying their passwords to the clipboard and pasting them into websites, so that they eventually give up, stop copying at all, and instead type the password in by hand - the concern being that malware running in the background on the user's machine could monitor the clipboard and steal passwords that way. I'll agree that's a real concern to have, but users will try to copy and paste at first anyway, typing it in renders them vulnerable to keyloggers, and if a program is already monitoring the clipboard, that program could just as easily be a keylogger.
Banks do have skin in the game here, because they'll likely be found liable for losses caused by unauthorized customer account access due to phishing, etc. Their liability varies between jurisdictions, though I haven't noticed a correlation between jurisdictional liability and banks' general intransigence towards modern evidence-based infosec...
Yeah, but what happens in reality is that the user copies the password, and then discovers that paste is disabled. By that time, the password is already on the clipboard.
I don't log in to any particular websites often enough to remember ahead of time which ones let me paste passwords and which ones don't.
> Verifiers SHOULD permit claimants to use “paste” functionality when entering a memorized secret. This facilitates the use of password managers, which are widely used and in many cases increase the likelihood that users will choose stronger memorized secrets.
I use the "Don't Fuck With Paste" add on for Chrome/Firefox, which mostly works well.
We've spent countless developer hours trying to work around password managers. I agree that sites shouldn't attempt to disable password management for login and sign up pages, but it's annoying how often these password managers do the wrong thing and break the user experience for pages… like Safari is doing for livewire-ui/spotlight.
As an administrator, I was trying to work through a user's problem. But their account details all matched mine. It took an embarrassing amount of time for it to click.
I've also had to scrub data when users somehow put their credit card numbers into public fields. Still no explanation on that one, but it happened with enough users that our only guess was browser auto-fill gone awry and people blindly hitting submit.
And it's the browser itself rather than an electively installed plugin where you asked for it.
It's outrageous. By rights, modifying the content this way should be seen as utterly outrageous by both site authors and users, not as some quirky glitch that isn't smart enough yet, sometimes does the modification in the wrong place, and will shortly be improved to false-positive less often.
Also, the confirmation requires authentication (at least by default, unsure if this can be changed).
> The phrase “welcome back” on a page causes Safari to autofill a password
I use 1Password, with browser integrations (it works better with Safari than Chrome).
I don't know most of my passwords; I rely on 1Password to access the strings of garbage I autogenerate.
So I am constantly using it to fill forms.
It keys on things like attached <label>...</label> elements. Not all sites use these. Some sites also sometimes add some kind of junk that causes 1Password to fail.
Other times, 1Password insists that the field I just selected needs an autofill; even for non-auth fields.
Not really a big deal for me. No one that shouldn't gets my auth, and I ignore the prompt when it is not necessary.
This is more likely normal behavior than abnormal. The more sites a person uses, the greater the chance they don't actually know most of their passwords. The default "flow" becomes "password reset and recovery", which makes most services about as secure as the system being used for recovery when the password is reset.
It's important to understand the value of the data or service being "protected" by authentication. Banks should probably continue using passwords. Bookmarking sites, or things like Discord can get away with token logins. This eases the burden on the user.
Gmail leaves me logged in for long periods of time once I've authenticated on a given machine/browser. This is also a form of "autocompletion" in a way, allowing me to access sensitive data (my email) without having to re-authenticate with a password (by using a stored cookie). Anything using my email for password recovery is susceptible to being attacked through my persistent session, but then again I do a pretty decent job of retaining possession of my laptop physically.
Rather than resetting a password that will likely be forgotten again, one could use email tokens and just skip straight to logging the user in with one-time tokens, which are as secure as the system used to transmit the token.
There is no icon in any of the fields to click to populate them.
There is no auto filling.
You have to cursor into the field, right click and manually select the relevant entry to fill.
From a security standpoint this is much better and safer overall.
It also prevents accidental autofilling and login of an account you're trying NOT to login with on sites where you have multiple accounts and need to keep things carefully separated.
Additionally, if you have Bitwarden in your toolbar, you can click the Bitwarden icon, then click the entry for the site, and it will auto-fill in the page for you.
I'm surprised anyone uses context menus to do this, though I agree with you that it's probably safer.
I have encountered this mentality often. I'm not sure if Apple users have so many bugs that they are used to it, or if it's part of the fanaticism.
I had so many bugs on my iPhone 6 that I was baffled, given the "It just works" marketing. Upon voicing my issues, I was told by numerous people, "it's probably just doing X, Y, Z". Like that's an acceptable excuse for bugs.
Thanks for the insult.
I'm not an "Apple fanatic," but I do develop for the platform.
I don't rail against other platforms (I spent 25 years managing a cross-platform team), and I would suggest that you may be doing yourself a real disservice by writing off an extremely lucrative venue.
I do support you, however, in demonstrating a commitment to your principles, by ignoring and insulting a gigantic swath of monied customers.