Safari tries to fill username (github.com/livewire-ui)
628 points by knorthfield 17 days ago | 376 comments



Related: there is a "bug" in Chrome that makes it ignore autocomplete="off" on input elements, marked as won't-fix:

https://bugs.chromium.org/p/chromium/issues/detail?id=587466


The nuance here is that brain-damaged appsec pentesters reported this as a vulnerability for years, and so tons of websites followed that advice and dutifully disabled the functionality. But autocomplete has advantages: it lets users easily use long, random, per-site passwords without ever having to remember them. And when they can't do that, a pretty large percentage of them just give up and write the password down somewhere.

In the end, I find a lot of Chrome's decisions to implement spec-breaking behavior awful in the context of having a website that works forever (looking at you, SameSite). But this behavior rarely breaks functionality and on the whole makes the web a lot more secure.


I used to support a client facing app at a bank and the appsec pentesters were a joke:

* Username and Password fields must not autocomplete

* Username and Password fields must not allow text to be pasted into the field

* Password must be at least 8 characters with lower case, upper case, numbers, and special characters (they didn't care that it had a maximum length of 8 characters)

I straight up told our project management it was actively hurting our security, and was told that the point was to fulfill a regulatory requirement to complete and resolve all issues from an independent "pentest", not to improve security.


I am currently arguing with the bargain-basement pentesters one of our clients hired. They are claiming the system we built is vulnerable because, and I quote, “any credentials sent over HTTPS are transmitted in plain text until they leave the user’s local network”. Not sure how exactly they think HTTPS works, but five minutes on Wikipedia could debunk that one.

They also flagged up that users can access JavaScript and CSS files. Not the original source files mind you, nor is directory indexing enabled or anything like that. They pointed to our compiled and minified app.js and app.css, and suggested we block access to these files as the source code to the app is “sensitive information”.

Having to tell a client another company they’ve hired are absolute clowns, without making it seem like we’re trying to save our own skin, is certainly interesting.


"Look, I'm going to be honest with you: your pentesters are morons. They're grossly incompetent and should be embarrassed. I can give you a list of qualified alternatives you might want to choose from, and not just to test the work I've done for you, but for all your other projects too. Seriously, their advice is just awful and you really need to switch."

This isn't the time to tread lightly, but to go scorched earth. This isn't an "oh, we disagree on the finer points!" debate between peers kind of situation, but a flat-out "these knuckleheads are putting you at risk and you need to know it". You want to get the point across that you're not messing around or leaving room for doubt.

Source: have had these conversations several times over the years. I normally pride myself on tact, but in my experience tact is the exact wrong approach here as it gives the client the impression that there's a wiggle room of doubt.


The key here is to make this a do-or-die conversation. Tell the customer the truth, and then tell them you’re not going to work for them any more if they keep the other morons on the payroll — you’re not going to risk your reputation and your business on being associated with that other company.

“I’m sorry if this means we can’t do business any more, but this situation has gotten so severe, that I just have to tell you the unvarnished truth, and ….”


Yep. This isn’t just complaining about someone saying something you don’t like. You mean business, literally.


>any credentials sent over HTTPS are transmitted in plain text

Hummmm. So a couple of years back, I was working on some internal tools that passed sensitive information around and I found some interesting info.

Some bloggers INCORRECTLY thought that HTTPS didn't secure URL parameters. Correct fact: parameters passed in the URL like ?item=bla are encrypted.

Also, some cloud providers' load balancers (AWS) allow you to offload HTTPS encryption/decryption, so there REALLY IS plain text on the final leg of the journey (e.g. from the LB to the server).

In the end, the biggest thing I learned is that HTTPS is hard and it sucks.


> Some bloggers INCORRECTLY thought that HTTPS didn't secure URL parameters. Correct fact: parameters passed in the URL like ?item=bla are encrypted.

It’s still good practice to keep sensitive info out of URL query parameters, which often leak into server logs.
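One common mitigation is to redact known-sensitive query parameters before a URL ever reaches a log line. A minimal Python sketch (the parameter names in SENSITIVE are just example assumptions):

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

# Hypothetical set of parameter names to treat as sensitive.
SENSITIVE = {"token", "password", "session"}

def redact_url(url: str) -> str:
    """Replace sensitive query parameter values before logging a URL."""
    parts = urlsplit(url)
    query = [(k, "REDACTED" if k.lower() in SENSITIVE else v)
             for k, v in parse_qsl(parts.query, keep_blank_values=True)]
    return urlunsplit(parts._replace(query=urlencode(query)))
```

Of course the better fix is to not put secrets in URLs at all; this just limits the blast radius in logs you don't fully control.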


And are (were? maybe modern browsers have fixed that by now) sent in HTTP Referer headers to linked sites, end up in browser history, ...


The current default for the Referer header is to send the complete referrer for same-origin requests, to send the origin for cross-origin requests, and to send nothing if going from HTTPS to HTTP.

This is customizable by setting the referrer policy: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Re...


> Also, some cloud providers' load balancers (AWS) allow you to offload HTTPS encryption/decryption, so there REALLY IS plain text on the final leg of the journey

At first I thought this must have been what they meant; perhaps there was some configuration thing we got wrong.

So we asked for clarification and nope, the example given was that someone logging in from an office could have their credentials sniffed freely by anyone else on the office LAN.


I had someone complain that they could ping the public address of our load balancer.

I sent the client back a list of government and military websites that respond to ping. As an extra bonus, it turned out the pentesters' own website responded to ping.


Some hired "pentesters" found in our ASP.NET application that "connection to the prod database is established before the user credentials have been validated". They even insisted that this came from some ISO security guideline.

Sheesh, this one line in their report caused around 3 hours of meetings with 10-20 people in them... and there were a lot of lines like this.


This is the DB that contains the usernames and (hashed) passwords right? What do they expect? That you have a separate DB for authentication from everything else? What does that achieve? If you DoS the auth DB, you still DoS the application in this scenario.


The application has a much larger attack surface than the auth/user system, so it makes sense to store PII separately.


Actually yes.

They tried to sell us an external/internal auth service, similar to Keycloak, with their support. What the pentesters wanted was not improved security, but to sell their services as DevOps and developers. This was not what we expected from a pentest.


In the biz. What you need to do is address each issue with dispassionate detail in the response. Make no value judgements in the individual responses. Feel free to use words like “incorrect”, “false”, and my personal favorite, “logical inconsistency”. Quote specs, RFCs, platform dynamics, everything. Use diagrams, flowcharts, whatever it takes. But again, dispassionate, detached, and nonjudgmental. Then...

In the very last paragraph, as a conclusion to YOUR exercise, explain how the utter lack of competence in the subject matter displayed by the consultant has resulted in blah, blah, dollars, time, effort, all down the drain. Emphasize the harm to the organization and how it affects the trust required between different groups.

I guarantee it will get you promoted or fired. Which one depends on the organization and I expect you already know what will happen.


That is dire. Almost as bad as the NCC reports we had for an old client.


wait, so what do they suggest you do instead?


For the HTTPS thing, they’re suggesting client-side encryption. Which, to me, seems to be a combination of no real benefit and opens a window to introduce vulnerabilities if we get anything wrong.

Interestingly, I checked a few big sites, and while Google doesn’t, Facebook and Amazon both use client-side encryption. Is it just to provide some extra protection for pwned users who have trusted bad certs? I’m no security expert, and I’m struggling to think of any real benefit.

For the JS/CSS thing, I have literally no idea.


If you're stuck following their recommendations, you could try to sniff Accept headers or user agent or something to "block access" to JS/CSS but still allow the browser to load them in your page. Might risk breaking the app though.
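If you went that route, modern browsers send a Sec-Fetch-Dest header on subresource loads, so the filter could key off that instead of fragile user-agent sniffing. A Python sketch (trivially spoofable, so this is box-checking, not security):

```python
def allow_static_asset(headers: dict) -> bool:
    """Serve .js/.css only when the request looks like a browser
    subresource load (script/style tag) rather than a direct
    navigation or a curl fetch. Anyone can forge the header, so
    this only satisfies the checkbox, not an actual attacker."""
    return headers.get("Sec-Fetch-Dest", "") in {"script", "style"}
```

The risk mentioned above stands: older browsers don't send Sec-Fetch-Dest at all, so a strict check like this would break the app for them.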


In these cases, it makes sense to point people to NIST Special Publication 800-63B (Digital Identity Guidelines) https://pages.nist.gov/800-63-3/sp800-63b.html — their guidelines are pretty good and eliminate much of the braindead nonsense that is considered "accepted practice in the industry".


Nice read! Specifically, under 10.1:

> Offer the option to display text during entry, as masked text entry is error-prone.

And under 10.2.1:

> Support copy and paste functionality in fields for entering memorized secrets, including passphrases.

(... snip ...)

> Allow at least 64 characters in length to support the use of passphrases. Encourage users to make memorized secrets as lengthy as they want, using any characters they like (including spaces), thus aiding memorization. Do not impose other composition rules (e.g. mixtures of different character types) on memorized secrets. Do not require that memorized secrets be changed arbitrarily (e.g., periodically) unless there is a user request or evidence of authenticator compromise. (See Section 5.1.1 for additional information).
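Condensed into code, those rules amount to something like this sketch: length bounds and a breached-password blocklist, with no composition rules at all (the blocklist entries here are placeholders; 800-63B expects a real breach corpus, and the upper bound should be at least 64):

```python
# Placeholder blocklist; a real deployment would check a large
# corpus of breached/commonly-used passwords.
COMMONLY_BREACHED = {"password", "12345678", "qwertyuiop"}

def acceptable_password(pw: str) -> bool:
    """NIST 800-63B-style check: length only, any characters
    (including spaces) allowed, no composition rules."""
    return 8 <= len(pw) <= 64 and pw.lower() not in COMMONLY_BREACHED
```

Note what's absent: no forced character classes, and nothing about expiry, which per the quoted text should only happen on request or on evidence of compromise.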


Taken to the extreme is the US Government's TreasuryDirect website, where individuals can buy savings bonds. Instead of allowing you to type your password, they render a "virtual keyboard" that you have to use your mouse to click the keys one by one.

Oh, and that password? Not case sensitive.


> Oh, and that password? Not case sensitive.

What, you expect them to make a case-sensitive version of NTFS just to store your password??


NTFS is case-sensitive.


They made a case-insensitive version of NTFS just to store your password


It is case-preserving, but not case-sensitive.

So, it will show you what was entered and make you think it’s case-sensitive, but then when you go to do the comparison, it actually ignores case.

The stupid thing is that macOS was also case-preserving but not case-sensitive for a long time.
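The distinction fits in a few lines of Python: names keep whatever case they were first entered with, but lookups ignore case. A toy sketch (not how either filesystem is actually implemented):

```python
class CasePreservingDir:
    """Toy model of case-insensitive, case-preserving name lookup."""
    def __init__(self):
        self._entries = {}  # casefolded name -> (name as first entered, data)

    def write(self, name: str, data: str):
        # "a" and "A" collide, but the first-entered spelling is preserved.
        key = name.casefold()
        shown = self._entries.get(key, (name, None))[0]
        self._entries[key] = (shown, data)

    def read(self, name: str) -> str:
        return self._entries[name.casefold()][1]
```

Writing to "A" then to "a" hits the same entry, which is exactly the behaviour shown in the APFS transcript below.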


APFS still defaults to case-insensitive (but case-preserving):

    [nathell@macmini /tmp]$ echo first > A
    [nathell@macmini /tmp]$ echo second > a
    [nathell@macmini /tmp]$ cat A
    second


That's how Windows will behave but not actually how the underlying filesystem does.


I think it’s internally case sensitive and provides case insensitive APIs to users, right?


Right, win32 is insensitive


I heard that systems like this were designed at a point in time (this may be erroneous, and such a time may never have existed) when keyloggers were more common than RATs, so government websites would often have this requirement due to the higher probability of access from public computers (libraries, etc.), since that was also a time when fewer people had their own computer at home.


Hard to believe it requires a mouse. The government (everyone really, but especially the government) generally needs to follow basic ADA guidelines...


Yes, the authors of that document did a great public service in letting us point to an "official government standard".


This is the continued dilution of security with audit/compliance. It's a mindless, check the box mentality. They don't care about real-world security, they offer insurance to cover the losses. But many insurers are no longer paying due to the volume of incidents and the lack of sound security.

The auditors are typically 10 to 15 years behind technical security expertise.


> This is the continued dilution of security with audit/compliance. It's a mindless, check the box mentality.

If I can play devil's advocate for a moment—isn't this just how insurance necessarily works? Your car insurance company isn't going to interview your teenage son; they don't care that he's a particularly mindful individual, who never speeds because he remembers the time a close friend died in a car crash. "The policy says 17-year-olds are high risk, pay us a zillion bucks a month."

Of course, guidelines that have literally zero value still have zero value. But they have to come up with something concrete...


Bad example. They aren't going to interview your son, but _most_ will take his high GPA and certificate of completion of Driver's Education class, and give you a discount for it, which is the next best thing without spending the time to interview him.


But isn't that driver's education class certificate basically a “checkbox”? I don’t think it’s so different from those IT certifications.


I think the difference is that a driver's education class (in my experience, at least) involves actual hands-on driving experience. An IT certificate or security audit is a lot more abstract.

The only way to check the "Has taken a driving class and has at least 20 hours behind the wheel" is to do just that. How many different ways could you check the "Secure password requirements are enforced by users" box? How many ways could you check the "physical security to encrypted systems" box?


Totally—but I think that's actually what leads to the dumbest requirements people are complaining about. "Don't allow autofill." "Don't allow pasting passwords." "All passwords must contain at least five special characters and your first born son." Those are boxes that can only be checked one way.

I'm not quite sure where I'm going with this. Something about, maybe things are broken because they don't fit in the insurance company model, and someone needs to solve for that before anything gets better.


> The auditors are typically 10 to 15 years behind technical security expertise.

Probably not, but they are there to be paid by their customers. Does the customer have to mark a checkbox on a regulatory form? Give the customer some answer which is not blatantly false or useless, get the money, come back next year.


This is spot on... I wish I could share my recent experience


Ahhhh, so that's why banks specifically often don't allow automatic filling/pasting.

It's because it's in some dumb regulatory pentest manual or something. OK.


I think PCI standards are pointing to some of this nonsensical advice.


It's a problem, to put it mildly. There is humongous growth in this space and not enough skilled people to fill the gap. I'm lucky that my current employer is more discerning, but I frequently get reports from previous assessments that are just the uninterpreted results of automated tooling :(


Oh man, enterprise "security" firms used by banks and other old behemoths are a cancer for users. If you want your website to actively abuse users (especially those with special needs, and pretty much anyone who doesn't fit into a made-up "average person" mold), get those people on board and listen to the dumb things they say.

I still can't believe that whole business managed to interpret 2FA for the whole EU as "you MUST use SMS for 2FA!".


The product I work on now logs users out after 15 minutes. It's a service where the average user would probably spend a good few hours of their day.

We're actively harming the user experience (and driving paying customers away) because of some "expert" advice.


The ones that puzzle me even more are the intranet websites that log you off after x minutes even though they use single sign-on, i.e. no password entered, so I'm not sure what security benefit that achieves. But they make you lose whatever you were doing in the process.


Usually that comes down to single sign-off being a lot harder to do than single sign-on, so they just use a timer in each app for the sign-off.
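A per-app inactivity timer really is about as simple as it sounds, which is probably why it wins over proper single sign-off. A sketch (names and the 15-minute default are illustrative, matching the policy complained about above):

```python
import time

class IdleTimeout:
    """Per-app sign-off timer: reset on activity, expire after quiet."""
    def __init__(self, timeout_s=15 * 60):
        self.timeout_s = timeout_s
        self.last_activity = time.monotonic()

    def touch(self):
        """Call on every user action (request, keypress, ...)."""
        self.last_activity = time.monotonic()

    def expired(self, now=None):
        now = time.monotonic() if now is None else now
        return now - self.last_activity > self.timeout_s
```

Note that "activity" is whatever the app chooses to count, which is exactly why a long form fill-out often registers as none at all.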


What continues to grimly amuse me is that many of these websites that also have a mobile app will basically keep you signed in forever on the mobile app. It's just the website, where most people would prefer to do their heavy lifting work on, that has the anti-usability nonsense that makes you install plugins like auto-refresh-every-10-minutes.


The problem with the security industry is that there's no way for non-experts to reliably assess "I'm an expert, trust me!" from a practitioner.

I'm not really sure what the best fix is; there are many possible ones. I've seen total clowns pushing decades-old nonsense be taken seriously by competent businesses simply because they thought "hiring an expert" was enough, like they're a plumber or something.


It is no different than doctors or mechanics or lawyers. Reputation is your best guide. In security-land, there are some certifications that are fairly rigorous; some of those can serve as a distant second.


Doctors and lawyers are professions regulated by licensure, where unauthorized practice comes with real (not made-up) legal consequences. Where is the similar licensure that regulates tech security professionals?

I think that’s a big difference.


You may have missed the point being made. You find a good security professional the same way you find a good lawyer or doctor. Ask around for a reference for a good one. Then check their credentials (e.g., what certifications they have).

I believe there was an article on HN recently about a startup that used a "lawyer" that wasn't because they didn't check their credentials after getting a great reference. Just because there are consequences doesn't mean it doesn't happen.


> You may have missed the point being made

I feel quite certain that I haven't, I just think the point is poorly made and I've spoken specifically to why I think that to be the case. You can get all the recommendations and referrals you want for an infosec professional; nothing stops that person from holding themselves out to be such a professional, quality of work or competency performing it notwithstanding.

You can absolutely suck as a pentester, but still legally hold yourself out to be one and advertise yourself as one to anyone who will hire you.

You can NOT do the same, holding yourself out as an attorney or a doctor, without very real risk of legal action if you are in fact not licensed to do either. There are bar associations and medical boards that govern various aspects of their work and how it is conducted, perform ethics and competency investigations on license holders, and can take away their license to continue working in such a capacity if said investigations deem it fit. No such governing or ethical board exists for infosec professionals.

That is a pretty important difference that shouldn't be ignored just to make a petty point about how easy it is to ask for a referral.

> Just because there are consequences doesn't mean it doesn't happen.

Which is only supplemental to all of this. My entire point is that it happens, and the prudent do the diligence to make sure it doesn't.


> I feel quite certain

People who are wrong usually do.

> You can get all the recommendations and referrals you want for an infosec professional; nothing stops that person from holding themselves out

Here is where you missed the point.

You are correct that we do not license, say, pen testers the same way we license doctors. You are incorrect in thinking that this matters.

The point is that in both cases, reputation is the best general-purpose measure of who you want. That's all.

My mentioning certs may have steered you wrong, and that was a bit of a distraction. My point there was that certs tell us something, usually not much, but are still better indicators than their self-advertising.


Let's dispense with the "right or wrong" aspect of this, because I don't think it's helpful towards moving the needle on this, and instead evaluate this as a matter of complementary perspectives.

Does reputation matter? Yes. This I will openly concede. Do I think credentials are meaningless? No.

Where we disagree is on "thinking that this matters". I still think it absolutely does, and I think the analogy is a poor one. You clearly think it doesn't; that's fine, but I don't think it makes either one of us more or less wrong. Perhaps that's all there is at play here: a difference of opinion in how an organization conducts the search for a qualified expert in security, medicine, or law. And I think it's disingenuous to frame such organizational decision-making and risk tolerance in rigid, inflexible absolutes of "right way" or "wrong way", or of whether method A matters while method B doesn't.


It is normal to get a little confused when you ignore half the comment.


Are you unironically comparing a certification in technology to a license to practice medicine or law?


This one is based on security standards :( https://security.stackexchange.com/questions/45455/which-sec... (the link talks about screen locking, but it's a similar vibe for app logout across various certification bodies)


At least this is based on "inactivity", compared to "authentication tokens must have a maximum lifespan of 15 minutes"


IMHO it would be better if the site just asked for 2FA again after submitting the form.


After 15 minutes, or 15 minutes of inactivity? The latter is defensible at least, in e.g. a public area where there is a risk of people leaving their desktops without locking them. I mean that's another policy issue that can be addressed (a policy that locks a system after x amount of inactivity), but as an app developer you can't know much about the system things are running on.


Careful. Filling out a long form isn’t 15 minutes of inactivity, but a huge range of websites assume it is.


PTSD causes me to copy and paste big blocks of text out of a text area before submitting every time.


Sounds like something an extension could do: store the last hour of forms in localStorage. I especially hate clicking submit, getting an error, and seeing an empty form again.


Ugh, a form that takes 15 minutes or more to fill out, without any feedback or other interaction, is itself a UX problem. It should at least be auto-saving.


More likely it will have a "submit" button that runs a script that blocks submission when you miss a field, and wipes out a couple of other fields (usually passwords) so that you have to re-enter those after hitting that "submit" button again.


But should all sites really be optimized for the user at a public library computer? At the expense of convenience for the large majority of users that are on a personal or work computer? Doesn’t make much sense to me.

Also the computer itself solves this problem for you in many cases, a guest profile typically deletes all browser session info when you log out.


All sites? Probably not.

Many sites? Probably.

You're assuming people log out reliably or otherwise behave in the most secure way. They don't.

I also don't see how logging out/killing a session after 15 minutes of inactivity is much of a hardship for the user.


I hate _all_ sites that do this and I actively avoid them. There are many very good reasons why I might not be able to complete a form without interruption. It's not for them to second-guess me.

And it's not just extremely annoying, it's also completely unnecessary. Just put a "trust this browser" checkbox on the sign-in page and adjust the session timeout accordingly.


> Just put a "trust this browser" checkbox on the sign-in page and adjust the session timeout accordingly.

That works. It defaults to the "safe" behavior, but allows users to self-select into other behavior that they find less objectionable.

FWIW, my end-users are using public computer labs, so we have to build for the worst-case in terms of user security habits.


I use Coface for work to check credit for potential customers. Instead of a password, they require a 6-digit pin. It can't be auto-filled or entered with the keyboard. There's an on-screen number pad that you have to click on and the numbers are scrambled - they show up in a different arrangement every time. Such a pain!


Yeah. One of my banks uses something like this. Here's how it works:

The client can only use numerical passwords. When loading the login page, their site also loads the number pad, which consists of an HTML pad containing the 10 digits. The digits are displayed as base64 images, in a random order, so it's impossible to determine which digit is which by parsing the HTML alone. In the HTML, the images of the digits are each associated with a random 3-letter string. This string is sent to the server instead of the plain digit.

With the number pad, the site also loads a "challenge", and this challenge is sent to the server when connecting. My guess is that this challenge is an encrypted string that indicates which digit corresponds to which 3-letter string.

I made a script that logs in to my bank account to get some information, and I was able to do it without using OCR on the images of the number pad, because the images never change, so their base64 strings are always the same. I was a bit disappointed when I realized it; I thought the people who came up with such a twisted login form would have added random noise to the images, just for fun.
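Since the images never change, the whole trick reduces to a one-time, hand-labelled lookup table keyed by each image's hash; after that, "reading" a freshly scrambled pad is a dictionary lookup. A sketch of the idea (all names hypothetical, with short byte strings standing in for the image bytes):

```python
import hashlib

def build_lookup(labelled_images: dict) -> dict:
    """Map sha256(image bytes) -> digit, labelled by hand exactly once."""
    return {hashlib.sha256(img).hexdigest(): digit
            for digit, img in labelled_images.items()}

def read_scrambled_pad(pad_images: list, lookup: dict) -> list:
    """Recover the digit order of a freshly scrambled pad, no OCR needed."""
    return [lookup[hashlib.sha256(img).hexdigest()] for img in pad_images]
```

Adding even trivial per-request noise to the images would break this and force actual OCR, which is presumably the disappointment being described.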


I think this is the manifestation of non-logical associations humans make.

When I was a kid, a teacher told me learning was supposed to be hard and unpleasant, and I believed her for a long time. Only when I started enjoying myself in spite of that did I see it was wrong, and I started doing well in school, and (more importantly) pursuing my own interests.

There's a similar thing with security - people assume good security must be painful, so making it painful becomes a goal. Sometimes this is sincere, sometimes (TSA) intentional theater. But either way, the result is intentional hostility to the people who use the system.

I'd bet money they have a one-sentence answer for why it does each of those things ("order is scrambled to prevent shoulder-surfing"), but have done zero testing to determine whether those theories are correct.


I always associated these with keylogging prevention. What drives me nuts, however, is websites/apps that allow me to type my password but not paste it. It's like they want to make sure a keylogger can grab it?

Another favorite of mine is password composition rules, which do nothing but reduce security and are everywhere :(


Ah yes, the 2005 RuneScape bank method of security, very high tech


One of my banks has that for login. The worst part is that they also force you to use the randomized numpad in their mobile app.


So, they never want users with a disability to be able to use their site. Nice.


You could probably outsource the pin entry to a human or AI based third party service.


Such as Amazon’s Mechanical Turk?


> I still can't believe that whole business managed to interpret 2FA for whole EU as "you MUST use SMS for 2FA!".

Weeeeeelll...

I'm familiar with two (2) common kinds of "2FA" implementations. TOTP and SMS.

Of those two, only SMS is actually a second factor, albeit not a particularly secure one. TOTP is fundamentally a password, and two passwords are no different than one password.


> TOTP is fundamentally a password

I see this view a lot. It's wrong. TOTP is fundamentally different to a password, as the stored "password" (by which I presume you mean the key) is never transmitted anywhere.

TOTP in fact has one property that makes it potentially* the most secure of all 2FA methods: it can be used airgapped. As the credential you type into the 2FA form is not the saved secret.

* I say "potentially" because the relative inconvenience + human factors conspire to make it less secure than e.g. U2F in most cases. But assuming hypothetical perfect conditions, there would be nothing more secure than TOTP for 2FA.
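For the curious, the whole scheme fits in a few lines; this is a minimal RFC 6238 generator, illustrating the point above: the stored key never leaves the device, only the short derived code does.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, timestamp=None, digits=6, period=30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second counter,
    dynamically truncated to a short numeric code (RFC 4226 truncation)."""
    pad = "=" * (-len(secret_b32) % 8)
    key = base64.b32decode(secret_b32.upper() + pad)
    t = time.time() if timestamp is None else timestamp
    counter = struct.pack(">Q", int(t // period))
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # low nibble of last byte picks the window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The code the user types is a throwaway derivative; intercepting it buys an attacker at most one `period` window, while the base32 seed stays on the device (and, as noted elsewhere in the thread, on the server).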


Digest authentication allows passwords to be authenticated without sending the “key”, and could also be used airgapped.

You’d need to type a nonce into the dongle, then type the result into your computer.

TOTP is just a password. Also, in practice, the server has to have non-air-gapped access to a TOTP generator, so it's not really air-gapped at all.

Read up on the great RSA key fob recall for an example of TOTP-style auth gone horribly wrong.
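The nonce-into-the-dongle idea sketches out as a simple challenge-response where only an HMAC of the secret crosses the wire (a hypothetical illustration of the principle, not actual HTTP Digest per RFC 7616):

```python
import hashlib, hmac, secrets

def make_nonce() -> str:
    """Server side: fresh random challenge per login attempt."""
    return secrets.token_hex(16)

def prove(secret: str, nonce: str) -> str:
    """Client side (or dongle): key an HMAC with the secret over the
    nonce. The secret itself is never transmitted, and the proof is
    useless once the nonce expires."""
    return hmac.new(secret.encode(), nonce.encode(), hashlib.sha256).hexdigest()

def verify(secret: str, nonce: str, proof: str) -> bool:
    """Server side: recompute and compare in constant time."""
    return hmac.compare_digest(prove(secret, nonce), proof)
```

Unlike TOTP there's no clock involved: freshness comes entirely from the server-chosen nonce, which is what would let the dongle stay fully air-gapped.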


Digest auth can be air gapped but the time aspect of TOTP still makes digest comparatively less secure (plus digest isn't typically even done separately to the primary client device, nevermind airgapping, whereas TOTP is at least most commonly used via an entirely separate device).

> You’d need to type a nonce into the dongle, then type the result into your computer.

That would be a cool augmentation of digest auth, but afaik is hypothetical currently (at least as far as common use goes). I can use TOTP airgapped right now.

> in practice, the server has to have non-air-gappped access to a TOTP generator

This is a fair point, but requiring full server compromise is still a nice step up from being mitm-able.

> so it’s not really air gapped at all

That seems like a rather extreme conclusion to draw. Client-side only air gapping is still airgapping, the fact it doesn't extend to protection from server compromise doesn't completely invalidate the benefits.


> TOTP is fundamentally different to a password, as the stored "password" (by which I presume you mean the key) is never transmitted anywhere.

Are you familiar with SRP?

TOTP has all of the properties of passwords, and no properties that passwords don't have. That makes it... a password.


I guess you can argue the definition of the word "password"; language is fluid, especially English.

I would say SRP is strictly a misnomer (though it's a useful conflation). Generally speaking, a password is a value provided for authentication; if it's no longer being "provided", as in SRP, it's something different... but I understand that using a familiar word for that something different helps when communicating.

Either way, in saying TOTP was "just a password", the point you were trying to make was that TOTP is "no different than and therefore no better than a 2nd traditional password". The fact it's not transmitted makes it very different to, and better than, a traditional password. So whatever you want to define the definition as, the point stands.

> and no properties that passwords don't have

It has 1 property that passwords don't have: it is not transmitted!


There are authentication mechanisms that rely on passwords but work without transmitting the password, too. One example is Kerberos.

TOTP is a password. The fact that it is a password doesn't matter though since it is something you have (and can't know) which augments the something you know. This satisfies the intent of MFA.


> One example is kerberos.

It kills me that most enterprise environments use Kerberos via Active Directory, LDAP, or NIS. Your workstation probably already has Kerberos tickets sitting on it, which would allow very lightweight two-way authentication and encryption of internal flows.

TLS client certificates and TLS-everywhere would be another good option, but it's particularly frustrating that the Kerberos TGTs are already on the client machines. The key management part is already solved in the Kerberos case.

Kerberos is even potentially resistant to quantum cracking. (Grover's quantum search algorithm effectively halves the key size of ideal symmetric ciphers, so you'd want 256-bit keys.) Forward secrecy is an issue, but there are proposals to incorporate DH key exchange in the pre-auth to give imperfect forward secrecy. A post-quantum key agreement protocol like RLWE would be fairly straightforward to incorporate, with standardization being the main hurdle.


I agree Kerberos is somewhat under-used, but man isn't it half a pain to set up integrations with...

Part of the problem is that it's "enterprise" tech, which means all sorts of "enterprise" middleware claims to support it with some half-assed concoction that worked on the presales demo environment once, back in 2001, and nobody else has touched since. And it's also old and pretty obscure, with documentation lost to the fog of time, and very few people who remember how it was supposed to work - a bit like MS DCOM...


Aside from the fact that I never transmit the actual password. So the password that you’d potentially get only works for you for 30 seconds.

Slight detail that’s of course completely irrelevant.


> Aside from the fact that I never transmit the actual password.

You realize that, out of the many comments I've made in this tree, the one you responded to was the one that said

> Are you familiar with SRP?

There are more ways of compromising someone's information than capturing it in transit. If you give me your phone, I can read your TOTP seeds straight out of Google Authenticator.


Yeah, TOTP is a password. Hell, it is in the name. One property it has that differs from classic passwords is the authentication factor: for TOTP, it changes from something you know to something you have. However, lots of passwords are now randomly generated and are no longer "something you know" either.


> it is in the name

The "Password" named in "Time-based One Time Password" is the temporary generated value you transmit. It's not what's stored on the TOTP device, so in the context of this discussion, that temp value isn't what the gp was referring to.


> Yeah, TOTP is a password. Hell, it is in the name.

Careful; "one-time password" is in the name, and it certainly isn't that. Your TOTP seed stays valid forever.


The issue was that it was ONLY SMS - they immediately deprecated private certificates, 2FA "calculators" and other 2FA schemes.

After the security backlash they now backpedaled and implemented 2FA with ONLY apps. Apps that ONLY work on iOS and Google Android. I had endless calls from family where they couldn't access their banks anymore because they had a Huawei phone or a dumb phone. Banks are citing "security" as explanation why they can't use smartcards, hardware tokens or even bring apps to desktop computers or phones without Google services.

The funny part is - ALL banks did this at once. Why? Because the security consultants had "must have app" and "must check Google Safety net" on their check lists.


> The funny part is - ALL banks did this at once.

What country are you talking about? In regards to the EU 2FA thing, I'm starting to see a pattern. In countries that had established online banking standards with 2FA, nothing changed. But countries without them went ballistic: SMS- or app-only 2FA on every login and on every transaction. Yeah, I can see that this is annoying.

While for me, with my German banks, I still access them using the FinTS protocol with banking software of my choosing. For transactions above €20* I need a TAN from my chipTAN/Sm@rt-TAN device (which shows you the transaction details). Optionally I could choose an app. SMS was phased out years ago (by my banks; others perhaps still have it).

(*and only 3 transactions a day, I believe. You can deactivate that so that you get asked for a TAN every time.)


The benefit of apps and SMS over hardware tokens, TOTP, smartcards, etc. is having an out-of-band communications channel, not merely a second factor. This is crucial for dealing with malware that can change the transactions a user is entering on a banking site in a way that is literally impossible for them to notice in the browser alone. With apps/SMS, they can be informed of the transaction details as part of the verification process on a secondary communications channel that hopefully is not affected by the malware.


A chipTAN/Sm@rt-TAN device shows you the transaction details before showing you the TAN. These devices receive their information visually, either via a blinking code or via a coloured QR code, so they are air-gapped.


Noticed this as well.

It's a minor inconvenience for someone who is organised or used to storing secrets securely, but a complete nightmare (including a security nightmare) for your average Joe.

Thanks EU, thanks governments for your precious regulations that keep us safe.

I wonder how many similar stories there are in fields I'm not an expert of.


The thing is - I read both EU and local regulations and they don't demand any certain approach to security. Nothing is stopping banks from providing a better experience except dire warnings and prescriptivism of security consultancies.

I talked with fintech founders and they mostly say "sure, we could give better user experience and then have a fight on our hands with auditors because we didn't fill out all the checkboxes from the reputable security consultancy that 'interprets' the requirements"


My bank (one of the largest in the US) supports 2FA with SMS, their app, or a physical hardware token (which you buy from them for $20).


TOTP is no more a password than whatever one-time code you'd get by SMS. In fact, TOTP is arguably more secure, since it isn't nearly as vulnerable to hijacking as SMS is.


> In fact, TOTP is arguably more secure, since it isn't nearly as vulnerable to hijacking as SMS is.

Indeed, this is an argument you can reasonably make.

> TOTP is no more a password than whatever one-time code you'd get by SMS.

But this isn't; this is just a blatant lie.


Doesn’t answering a TOTP challenge prove that you “have” the HMAC shared key that seeds the code generator?


Yes, that shared key is a password, a piece of knowledge known in common between you and them.


A password is something you're supposed to "know", i.e. something in your head.

A second factor is something you have, i.e. your phone, a hardware token, or access to a shared secret you don't store in your head.

Password managers kind of mangle the idea and turn the password from something you know to something you have.


A password is information, something that can be freely duplicated.

The idea of "something you have" is that the thing can't be duplicated. As soon as it can, it's no longer "something you have". Any number of people might have it. A person who has it might not be you.

SMS hijacking, for example, converts your phone-based authentication to a password, where the password is your phone number. (Since an attacker who knows that number can pass the test.)

TOTP starts its life as a password.


I would argue that since the totp secret is never in my head it is not a password.

SMS hijacking doesn't "convert" anything any more than someone with a telephoto lens "converts" an old-style hardware token to a password. (Yes, I know the P in OTP is password, and it's called that because it's entered by the user. It's not a password in terms of a factor you "know", because it's time-limited.)

These are also fluid ideas that are used to describe roughly different failure modes for different types of authentication:

Passwords are thought of as things the user can disclose.

Totp and other "second factors" are thought of as things that must be stolen, or if disclosed have a very short viability time.

Biometrics are things that can't be disclosed, but can be lost, and (when properly implemented) not stolen.

You're trying to argue that these categories of authentication factors have hard lines and definitions when they're fluid categories being used to think about failure modes of a method. Each specific authentication method has its own strengths and weaknesses.

Also, SMS hijacks require a lot more than simply "knowing" a phone number. While SIM cloning and SS7 attacks are known and very possible, they're still fairly complex. You can also socially engineer tech support at phone companies into activating your SIM for an account, but that is significantly more difficult than simply "knowing" a phone number, and is also a failure of the authentication the phone carrier is using.


> SMS hijacking doesn't "convert" anything any more than someone with a telephoto lens "converts" an old-style hardware token to a password.

I didn't notice this sentence before. Compare the issue of releasing photographs of master keys.

https://www.schneier.com/blog/archives/2012/10/master_keys.h...

Compare the (correct) comment from that post:

> the press has helpfully published a photograph of the keys, so you can make your own, even if you didn’t win the eBay auction.

with this official statement from the government of New York:

> “If you’re selling it, it’s in your possession for an unlawful reason,” said City Councilmember Elizabeth Crowley, chairwoman of the Fire and Criminal Justice committee.

( https://nypost.com/2015/09/20/the-8-key-that-can-open-new-yo... )

Saying "you're not supposed to have this" won't stop people from having it. These keys are regulated as if they are "something you have", but the facts are otherwise.


> Totp and other "second factors" are thought of as things that must be stolen, or if disclosed have a very short viability time.

TOTP gets set up in the first place when the website discloses your seed to you. It's not something that can't be disclosed. Seeds get disclosed all the time; workflows are built around it.

> Biometric are things that can't be disclosed

Huh?? Biometrics are things that it's impossible to avoid disclosing. If you're ever in a police station, they are free to sample your DNA. You shed it all over the place. If you ever handle something, you just disclosed your fingerprints. If there are any pictures of you out there, your face is public information.

> SMS hijacks require a lot more than simply "knowing" a phone number.

I didn't claim otherwise. The intent of my sentence above is to say that a context which involves a working hijack attack converts an SMS challenge from a second factor into a password. If your attack is working, knowing the phone number is sufficient to authenticate as the victim.


Yes, it starts its life as a password. After that, it is never communicated ever again, and therefore, after the initial exchange, it's something you have.

It seems to me you are ascribing properties to "something you have" that aren't warranted. The "something you have" needs to prove you were party to the initial exchange, not necessarily that you were the only one present -- that's why we use two factors, and not only TOTP.


I can't follow what you're trying to say.

> The "something you have" needs to prove you were party to the initial exchange

This is not something that can be proven at all. Accordingly, proving it is not a goal. Anything that can be had can also be transferred. Your delegated agent's login attempt is just as valid as yours is.


Not entirely. The thing I know turns into the main password to get into the password manager for every site.


The key thing is that an attacker won't be able to keylog the shared secret, or trick me into typing it on the wrong site.


They will be able to trick you into typing it on the wrong site (more likely, wrong terminal) if they’ve compromised your machine. They just need to wait for you to log in.

Similarly, they can grab the shared secret from the server.

It’s marginally better than a password manager (though some of those support TOTP now), since they can’t pull all your credentials by keylogging your master password.


TOTP is a second factor.

The hash seed that generates a password is connected to the device.


The seed is all you need. The device is unnecessary.


So, it’s a password.

All I need for password authentication is the password and a device that can generate a one time proof that I know the password.

TOTP just seems more secure because the password is never displayed to the end-user.


It's not a password.

A password/passphrase/passcode is something you know.

A hash for a TOTP is something you have. 2 factor means something you know and something you have (or something you are): https://dis-blog.thalesgroup.com/security/2011/09/05/three-f...

(And yes in theory you could remember the hash, and have a custom TOTP client that lets you enter it in. But unless you do this it is a theoretical argument only).


Sure.


No need to be sarcastic. He is absolutely right. The seed is all you need in the case of the common TOTP algorithm. There is no connection to the device.

In fact, in Google Authenticator you can even conveniently export all your running TOTP seeds to another Google Authenticator, without any connection between the apps or anything else whatsoever.


That wasn't supposed to be sarcastic at all.

I was in total agreement with him - you can in theory run the algorithm by hand.

It isn't especially relevant though - 2-factor is "something you know, something you have". You need to have the hash.


You’re not familiar with U2F?

Would you not install two deadbolts on your door if you needed the extra security?


Top of the pops?


I don't even know if it was security consultants who ever recommended that. It's the same thing with disabling pasting into password fields. A lot of websites used to do that, many probably still do, but I have never seen a security team, no matter how braindead, recommend that nonsense. Rather, it's well-intentioned but stupid project managers following industry worst practices. You can't get in trouble for doing what everybody else is doing, no matter how terrible, I guess.


If you're on *nix, I've found that middle-click will usually work even if "CTRL-V" or right-click->paste is disabled. Something about the handling of Primary Selection vs. Clipboard in X11.


Ditto for credit card number entries. I use Dashlane to copy my CC info out of, and if that doesn't work, there is a good chance I'm not buying on your site. Maddening and pointless.

I agree this is probably product managers, but may also be engineers who have strongly held "security" opinions and nobody to check them.


This really used to be an issue when some JavaScript code constantly checked for new data in those inputs in hopes to find something interesting, like some personal info which shouldn't be there.

But I fully agree with the disable-paste stuff. Very few (web-related) things get as annoying as that.


I'd go a step further and say if my password manager doesn't play nice with a website, I'm less likely to use that website.


> The nuance here is that brain-damaged appsec pentesters reported this as a vulnerability for years

as a low-risk privacy defect yes, because things like bank account and routing numbers would be stored in autofills for certain banking sites that don't require authn/authz to initiate a transfer.

(I can think of a handful of platforms frequently used for common services like paying HOA fees which are currently vulnerable to this, meaning another user sharing the machine can simply hit ⬇ on the keyboard in form fields on a page that doesn't require authn/authz to initiate an external transfer in order to capture any stored banking details that were previously entered into the form.)

Source: I was one of those brain-damaged appsec pentesters.


Maybe it's just me, but I can't trust Chrome with my passwords anyway. It seems like every update wipes out the password store. So I only use Chrome for GSuite (or whatever they call it now). And, of course, I have to use a password I can remember.

My biggest security vuln is Google. And I've seen too many new account usernames out there like forgotlastpasspw to use an external manager.

Firefox, thankfully, keeps the passwords.


This stuff matters, and I hadn't thought of it in the way you're putting it.

One of our local banks disabled autofill without warning, and they went out of their way to detect if someone was pasting a password.

There was backlash and frustration, and they eventually reversed the decision.

After reversing it, they still put a disclaimer about not pasting passwords, but that disappeared after a few weeks.


>i find a lot of chrome's decision to implement spec-breaking behavior awful

I recall working with some folks who supported load balancers when Chrome decided that something seemed 'unnecessary' and they updated Chrome and ... it broke load balancing.


Autocomplete has one huge, glaring disadvantage: the passwords are stored on your computer, in reversible form.


Not really a glaring disadvantage. If someone has physical access to your unlocked computer and wants to do bad stuff to you, you are going to have a very bad day.


Consider that Chrome, automatically and by default, replicates all your passwords to all the devices on which you are signed in.

Thankyouverymuch. I am gonna keep using my password book.

There is no sure way, as a private person who is not a security expert, to secure your browser. But there are ways to limit the damage that can be done. Maybe just don't make it too convenient by having a database of all your passwords on all your devices?


Yes, but let's be fair, it's a galaxy better than writing it on a post-it or password booklet, and still way better than using a memorable passphrase which will get reused and then leaked.

Besides, you can encrypt the local storage with a master password (and if you accept online as a requirement, you could even add 2FA to that).


A (well handled) physical password booklet is much more secure for the average home user, who is unlikely to ever be individually targeted by a third-party attacker, let alone to the level of an attacker physically breaking into their home. My parents being victims of a zero-day vulnerability or installing a malicious application by mistake are much more realistic scenarios than their house being broken into and their password booklet being stolen by a thief who is meticulous and observant enough to take it and know how to make use of it.

Not only that, I would argue that a physical booklet is not only more secure but also safer. Nothing short of a house fire will destroy the booklet, and however much I like to rave about old-school ThinkPad durability, I don't think my locally stored encrypted database would survive that either.


A password booklet works well at home, but it's obviously much less secure if you want to sign in to a service while in public, on your phone for example. One of the major benefits of a password manager is that your passwords are present, encrypted, on all of the devices you need them on. Most people don't only need passwords at home, so the odds of theft or loss of the password book are much higher than your example makes them out to be. If we're talking about an average user, only signing in to services at home isn't really an option.


You are correct that the access security of a booklet is almost certainly better than that of a password manager. The issue with the booklet is that humans do not like transcribing long strings between computer and paper so (at least in my experience) people who use the booklet method tend to eschew longer passwords, they tend to avoid creating new passwords when they can re-use an old one, and they don’t change the passwords very often (if at all). Also in the event that the booklet is ever lost or stolen (which is made significantly more likely by the fact that you must carry it around with you everywhere in this age of the pocket computer), you are suddenly in a very bad place.


>Yes, but let's be fair, it's a galaxy better than writing it on a post-it

the modern security hazard is not someone reading your post-it that is sitting on your desk, it is someone remotely getting access to some part of your computer or some service you own that can tell you what the password is.

The post-it note in our world is more secure than lots of things that have replaced it.

on edit: I see Mordisquitos said it better than I.


The password booklet can be secure if you have good physical security, and is immune to a software zero-day and autocomplete exploits.


This x 10. Computer security is usually flawed; personal security is (bar a few war zones) much better.


There are a lot of war zones in this world, though. Given that, and the number of Third World countries with high levels of crime and poor public security, I suspect that a significant percentage of the world's technology-using population might have better digital security than physical security.


It's always about evaluating your OpSec and tailoring it to your needs and threat assessment.


>but let's be fair, it's a galaxy better than writing it on a post-it or password booklet

Is it? If someone is physically in your home you are in greater trouble anyways and even then they likely aren't going to be grabbing a notebook. Just keep it somewhere nearby but hidden (notebook in a drawer on the desk).


When you connect to a website over SSL, your sensitive data is transmitted in a reversible form as well.

I believe most browsers will use the system keyring (which is usually encrypted based on your login password or a TPM) if present, or use a master password to encrypt them at rest.


Most websites are data sinks for anything that can be taken. No reason, IMHO, the login page shouldn't always send a hash over SSL (which is then hashed again server-side to verify it).
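A minimal sketch of that idea in Python (all names hypothetical). Note that on its own this just makes the client-side hash the effective password — anyone who captures it can replay it — which is why protocols like SRP/PAKE go further:

```python
import hashlib
import hmac
import os

def client_hash(password: str, site: str) -> str:
    # What the login page would send instead of the raw password.
    # Binding in the site name keeps the value per-site.
    return hashlib.sha256(f"{site}:{password}".encode()).hexdigest()

def make_record(sent: str, salt: bytes) -> bytes:
    # The server hashes the transmitted value *again* before storing it,
    # so a database leak doesn't reveal what clients actually send.
    return hashlib.pbkdf2_hmac("sha256", sent.encode(), salt, 100_000)

def verify(sent: str, salt: bytes, record: bytes) -> bool:
    # Constant-time comparison of the re-derived record.
    return hmac.compare_digest(make_record(sent, salt), record)

salt = os.urandom(16)
record = make_record(client_hash("hunter2", "bank.example"), salt)
assert verify(client_hash("hunter2", "bank.example"), salt, record)
assert not verify(client_hash("wrong", "bank.example"), salt, record)
```

The benefit is only that the server never sees (or stores anything derived directly from) the user's raw, possibly-reused password.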


I'm not sure what you mean by hash, but I think you're trying to describe mutual authentication, where the service also authenticates itself to the user. Look up things like PAKE, SRP, and TLS client certificates for more information.

https://en.m.wikipedia.org/wiki/Mutual_authentication

https://en.m.wikipedia.org/wiki/Secure_Remote_Password_proto...

https://en.m.wikipedia.org/wiki/Password-authenticated_key_a...

https://en.m.wikipedia.org/wiki/Client_certificate


I tend to side with Chrome here.

IMHO, the decision of whether to show auto-complete should be with the user and not with the website. When I install an auto-complete add-on or activate a browser feature, I expect the AC to be available on ALL input fields, whether the site owner thought that would be a good idea or not.

Now, there is a valid question on how the user should be able to configure the AC behavior, and how the website may help inform this configuration, but the decision should be with the user. The website should not have the final say.

So I would see this as more of a shortcoming of the HTML Spec.


The problem is when the web browser gets it wrong and decides to show autocomplete for an unrelated field, or a field that is not a login/enter password page. Some examples I've had to deal with:

1. A "name" field on a dialog for creating values in a controlled vocabulary (e.g. genres in fiction) -- Chrome thinks this is a username field so brings up a user autocomplete. I guess it thinks that "Jane Smith" is a valid label!

2. Editing user details (username, full name, email, etc.) -- Firefox thinks the email is a good place to autocomplete the password.

With these, I've had to employ several workarounds to tell the web browsers that these are not login forms, so please don't autocomplete them as such, all because they ignore `autocomplete="off"`. I've got these working now, but if Chrome/Firefox decide to ignore the markup because of sites misusing them (like they've done before), I'll need to work out how to avoid this again.


Another great example is GitHub: for a while, trying to request a review from someone on a pull request would cause LastPass to "helpfully" pop up and prompt for an auto-fill, completely obscuring the list of users underneath.


I write web apps for a living and literally ran into this last week...and was promptly annoyed when I realized chrome was ignoring the attribute to disable it.


> 2. Editing user details (username, full name, email, etc.) -- Firefox thinks the email is a good place to autocomplete the password.

Even if you add `autocomplete="email"` to that field?


It's been a while since I had that issue, so can't remember the exact details of what I tried at the time, and the workaround for that hasn't broken recently.


I understand the pain, and the need to somehow work around this.

However, conceptually the right place to fix/configure this is the browser. So the correct long-term approach is to open a bug/feature request and get this properly addressed. Everything else is, well, -- a workaround.

(Again: I understand that the correct approach can take years, and it is unclear if it will succeed at all - so it may be impractical.)


No, the website should have the option to overwrite the browser decision.

For example, for a multiplayer game I worked on, you could set a password when you create a private room in the game. The browser always auto-filled it with your account password, which is definitely not good, because you have to share the room password with others. Telling the browser not to autocomplete that field didn't work, because of the "the browser should know better than the website" thing you mentioned.


For this specific case, the website could generate a password on room creation and show it to the user in a non-editable field.


Users want to choose the password themselves because they have to share it with others, so they usually create a funny password.


>IMHO, the decision of whether to show auto-complete should be with the user and not with the website.

There's a setting in Chrome where you can disable auto-complete on a field-by-field basis?


As far as I know there is not, but I wish there was! Or even on a website-by-website basis. On the UPS website there's a screen where I can't use autofill to enter an email address for shipping notifications without it also overwriting the shipping address fields to whatever address I have stored for that email address.


I think that what this ignores is that if anybody has clout over behavior, it's Google. They don't have to break their own Angular Material autocomplete or burden devs with unpredictable behavior like this.

While the alternatives may not be perfect, this clearly isn't either. They could create videos rebuking disabled password fields, or put warnings in the webmaster console, or simply release a vague statement that "disabling password fields or disabling pasting into them will now majorly detract from your placement in search results" and turn the marketing/SEO teams against bad security contractors.


> IMHO, the decision of whether to show auto-complete should be with the user and not with the website

It's fundamentally wrong to decide what 'rights' website users have (aside from when it comes to privacy).

There are myriad ways a website can become un-user-friendly to the point of being unusable, not the least of which is that you can completely disable the cursor, or not display certain parts which are really there (e.g. display: none).

Point being, there is a fundamental 'trust' a user gives to a website developer: that the website they visit will behave as the developer intended. The user even expects to get the site just as the developer created it, however 'bad' that may be.

Now of course it is in the interest of the web developer to make their site user-friendly if they want to appeal to a wide populace. But it is totally in the purview of the developer to make the site even completely unusable.

I don't understand how a browser has the audacity to force their assumptions on site behavior on the user/developer.


It's especially bad because the holier-than-thou attitude broke real, commonly-used websites, the Chrome team was made aware of the use cases, and they just didn't care. For example, Chrome tried auto-completing my home address into Expedia for where I'd like to vacation.

So it's not even those "corner case big boring CRM business apps" that had to find workarounds to forced-autocomplete, it's "real" user-facing ones too. Very frustrating.


I had a form with an input field named “accessibility-accommodations”. Chrome was seeing the “cc” and was assuming it was a credit card field, and thus prompting to autofill a credit card number. Occasionally a customer didn’t notice and sent us their credit card number via the form.

The only way I was able to fix it was renaming the field.


Worth mentioning that Firefox & Safari also have the same "bug", and IE has no autocomplete support whatsoever (making the bug moot).

The recommended alternative solution posted by a Googler in the above Chromium thread is to specify:

  autocomplete="semantic-description-of-field"
And the MDN docs recommend specifically doing:

  autocomplete="new-password"


Yes. The Chrome devs refuse to accept there are viable cases for not allowing autocomplete.


I'm sure there are valid reasons. Unfortunately, many sites disable it without a good reason, and in those cases, I am glad Chrome hinders their misguided efforts. Many banks, for instance, think password managers are bad and disable it. Chrome preventing them doing so is a good thing.


They can be persuaded to change these policies. Asking why they aren’t following the current NIST (US) or NCSC (GB) password guidelines is helpful.


I can't persuade my bank to revisit their security decisions in any reasonable time frame or within any reasonable amount of effort.


I do not. Ultimately it is up to the website owners; it shouldn't be ignored by the browser if it's part of the spec.


Why do you think it is up to website owners, and not website users?


The website users do not write the HTML? They can set their browser to override whatever they want, but it should not be the default.


You can strengthen that... it is up to the users, as a matter of practical fact. I've right-clicked -> edit attribute -> autocomplete=true more than once. I've cleared the right-click handlers and keypress handlers that were blocking paste, or run $0.setAttribute("value", "paste your password here on the console where they can't stop you") (after you select the element in the inspector).

Browsers as they stand now are not capable of truly blocking autocomplete, or pasting into a field with an input box. If they aren't implementing their own text field with a canvas and taking keystrokes themselves they aren't blocking paste anyhow. (And if they do that I can still tampermonkey or something my way into a "paste".)


It's not up to the Chrome devs to accept or deny viable use cases. As someone in the comments mentions, it's in the spec, and the Chrome devs should not deviate from it regardless of what they think is or isn't an accepted use case. Or they should go and push for a spec change.


I feel like repeating an old comment of mine ( https://news.ycombinator.com/item?id=27231194 ) here:

> Conforming to the spec is not a virtue.

> When the spec is malicious, conforming to the spec is malicious behavior.

> I'm comfortable calling it a bug in the spec. `a << 40` needs to have 0 in the lowest 40 bits. It does not need to have random values in bits 8-31.

> This behavior is documented, but that doesn't make things better, it makes them worse.

> But the philosophy that says "if it's documented, then it's OK" doesn't even allow for the concept of a bug in the spec.

Implementing a bad idea doesn't become a good idea just because someone once wrote that it was.


I think predictability is important, and specs define what you can expect. A system with undefined/unpredictable behaviour complicates life in the long run, even if at the moment it looks more convenient.


If the topic is predictability, I would expect banks to use the spec to disable only predictably non-autofillable fields, with the user's best experience in mind. Disabling autocomplete on username and password fields in the name of some nebulous 'security' goal is neither predictable nor in line with most users' expectation of usability; it also doesn't make the system more secure. I could argue that these sites themselves aren't following the spec by disabling the fields.

Remember, there are autocomplete values to accommodate "current-password"[1]. If your bank has a field representing a password without that attribute, do you think that's following the spec?

[1] https://html.spec.whatwg.org/multipage/form-control-infrastr...
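For reference, a sketch of what a spec-conforming login form might look like with the WHATWG autofill tokens applied (field names here are illustrative), so the browser can fill both fields predictably instead of guessing:

```html
<!-- Sketch only: "username"/"current-password" are the WHATWG autofill
     tokens; the name attributes are invented for this example. -->
<form action="/login" method="post">
  <label>Username
    <input name="username" autocomplete="username">
  </label>
  <label>Password
    <input name="password" type="password" autocomplete="current-password">
  </label>
  <button>Sign in</button>
</form>
```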


I guess there are just too many pages that break autocomplete, e.g. for username/password as a "security feature".

I encountered quite a few myself and was very annoyed. I guess devs took the "usability" side of the question.

EDIT: phrasing


But it’s not usable at all. A form without autocomplete is perfectly usable. A form with autocomplete where it’s not wanted is an absolute hindrance.


A non-autocomplete password field is an absolute hindrance.


The spec is driven by browser implementations rather than the other way around, is it not?


It should not be so. Otherwise the spec would just read "do as Chrome does".


> It's not up to Chrome devs

Well apparently it is, because they're doing it.


Why? The spec ain't God given.


That's how we ended up with decades of Internet Explorer.


> Or they should go and push for spec change


That attitude basically endorses the idea that the spec is God-given. There's nothing so important about getting the spec changed before you start ignoring it.


> That attitude basically endorses the idea that the spec is God-given

That's a tad over-dramatic. And context matters, surely I don't need to remind you why Google is spending so much money on Chrome?

Having a company control 70% of the browser market is bad enough, we don't need people telling them to go ahead and ignore specs, remember that they don't make those decisions out of goodwill for us.


Are there any viable cases?


Any business SaaS app where users like customer service representatives input data about their customers. Name, address, email, payment information and so on. Under no circumstances should these sort of input fields be autofilled.


Or the data stored in the CSR browser history.


OTP one-time-password fields


Every time I logon to a certain system, Bitwarden types my password then I get a TOTP prompt. And it offers a pulldown menu half a screen long of previously entered codes.


autocomplete="one-time-code"

Any others?


Admin pages where you create users for other people. Not cool when browsers try to set your password as the password for all the users you create.


Then there's

autocomplete="new-password"
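A sketch of how those two token values might appear in markup (per the WHATWG autofill tokens; the name attributes are invented for this example):

```html
<!-- Change-password form: the manager should offer to generate/save a
     new password rather than fill in the current one. -->
<input type="password" name="new_pw" autocomplete="new-password">

<!-- OTP field: mobile browsers may suggest a code from a recently
     received SMS instead of a saved password. -->
<input inputmode="numeric" name="otp" autocomplete="one-time-code">
```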


good point!

But as soon as browsers stop autocompleting fields marked with autocomplete="one-time-code", won't website developers start marking _all_ input fields with that value? After all, why do people put autocomplete="off" on input fields anyway?


autocomplete="one-time-code" causes a different type of autocomplete behaviour, it doesn't disable it. Specifically for example it will suggest a one time code you received by sms if one was recently sent (on mobile at least).


Chrome recommends wrong passwords, passwords from wrong subdomains, and passwords for pages that will never accept custom passwords.

It's broken as fuck.


Search fields. I don't want it filling in the users address or email address when they're searching for a customer.

We have customer service representatives that accept orders over the phone, including credit card numbers. These should not get stored by the browser as autocomplete data.


I can make my web site/app extremely hard to use in all sorts of bad ways; the developer being able to disable autocomplete should be the least of anyone's worries.

The other side is the situation we have now: autocomplete doing the wrong thing all over the place with no way to stop it. Stomping on my app's own database-driven autocomplete really hurts the user experience. So does autofilling fields without the user noticing, entering wrong data into forms. What a mess.


Safari also completely ignores autocomplete="off" when it thinks something is a username or password field. Even when, as a dev, I know it definitely isn't.


I assume you're not in Apple Store team, then.

Because they do put autocomplete="off" on login form, username, and password fields. At least for me:

https://imgur.com/a/Ygb371g

UPDATE: please help me write a sarcastic comment about Apple Store team putting autocomplete="off" there, and Apple Safari browser ignoring it.


I honestly believe that some of the people that work on apple.com don't test the website in Safari.


tip: Using autocomplete="new-password" at least fixes change-password forms (so it won't pre-fill the password there).

See https://developer.mozilla.org/en-US/docs/Web/Security/Securi...


At least safari lets you force autocomplete by adding "Welcome back"


Where do you see it marked as won't fix? Status is assigned and open according to the information on the left side.


The Chrome people are right. The browser is a user agent.


You should sit down and read the reports and realize users are harmed by this.


Are they? I think users are harmed by overzealous webmasters breaking a browser security feature. Sorry, but the people who disabled autocomplete unnecessarily ruined that control for everyone.


I thought I told you to sit down and read the reports. Why are you so insistent on speculating based on no information instead of actually reading the specific cases described there?

One app is a kiosk that keeps saving people's passwords and autofilling them for the next user. Another app has its own address dropdown, but Chrome hides it and keeps autofilling the same address over and over, making the app useless. A third app is for admins creating users, and it keeps autofilling the admin's own details, so that info keeps accidentally leaking into the user accounts. Another app is for applying for a bank service with very strict requirements: names get autofilled in ways that don't follow the requirements, users assume the autofill is correct, then they get rejected and have to go to a branch physically to fix it.

Don't be a know-it-all. Go actually learn something.

Having a browser second-guess its own markup after this markup has already been established to work a certain way is really dangerous. We're talking about the web, the most popular platform in the world, and Chrome is the most popular browser. This is irresponsible handling of that burden from Google to make changes like this on a whim.


> Don't be a know-it-all. Go actually learn something.

Try again, but with less personal invective. You're listing a few bad things that happen because Chrome ignores autocomplete="off", but you're not listing all the bad things that would happen if Chrome didn't ignore autocomplete="off" --- namely, users using weaker passwords and getting compromised more.

Sorry, all the things you mention sound like minor annoyances to me. It's much more important that websites not block secure password storage features in browsers.


Not surprisingly facts made you dig your heels deeper.


This is not really a Safari-only thing. All password managers that I have used in the past had some kind of heuristic to decide whether a field should be auto-filled or not. Here is a nice explanation by a (former?) 1Password employee (https://1password.community/discussion/94198/autocomplete-of...).

To me as a web developer (among other things :D) this is quite annoying because password managers often hijack our forms when they decide that the label (or id or classname etc.) sounds suspiciously usernamely, passwordly or credit cardly.


As well they should. I sometimes hate the password managers too as a web developer. I am also a 1Password user, and I hate sites that block clipboard, block pasting, block right click, basically block any kind of way I have to type even my username, not to mention annoying full size on screen keyboards that can only be used with the mouse.

I don't care about the reason they have to be so intrusive in UX, probably some malware fight and/or prevention. The fact is that if I am going to use 1Password or another password manager per site, with 25-character passwords with symbols and numbers, I want to be able to fill that in without typing each letter. Some sites don't care about these use cases, as they are trying to cover the asses of non-tech-savvy users. They must protect the password123 crowd, right? So password managers need to fight back, unfortunately.


I wrote a one-line AutoHotkey script for typing strings into fields that don't allow paste. It was originally intended for a tax program that doesn't allow pasting banking passwords. The pain of making a mistake and having to enter a 30+ character password over and over still haunts me.

Also, if you have a problem contact their customer support. I had a tweet get a few hundred likes about a non pastable field for a transportation website and they actually changed it later that week!


What is the rationale for disabling paste on passwords, account numbers, other "sensitive" data?

The absolute worst are fields where paste is disabled, and the characters are also echoed as "*" so you can't even see what you are typing. I saw this with SSNs when I submitted some tax forms on my state's website recently.

The only argument I can think of for disabling paste (and I think it's pretty weak) is on a form to set a new password, where you need to input the password twice (and the form validates that they match) you might want to make the user actually type the same password twice, rather than let them copy/paste the first entry into the second field.


> you might want to make the user actually type the same password twice, rather than let them copy/paste the first entry into the second field

Please no. I generate a password in bitwarden, save it, copy and paste twice. Don't do that. I really don't want to type a 24 character password with lower / upper letters and special characters. If you do that to me, I will leave your website and never come back.


I do agree -- it was just the only semi-reasonable argument I could think of. It probably made some amount of sense before password managers were really a common thing, and you wanted to be sure that users didn't typo a new password and lock themselves out of accounts.


It never makes sense. I know how to use dev tools to remove your no paste option, my mother doesn't. She will simply use Password1!. That's how you get weak passwords. Don't make it difficult using strong passwords.


When I used to have a multi-monitor dev environment, I did accidentally paste a password into Slack (left screen) and not Chrome (right screen). Immediately deleted the chat message and had to cycle the password.

This is the only issue I've ever had with copy/pasting passwords, it only happened once, and the site preventing me from pasting would have done nothing to prevent it.

I don't understand the rationale either.

Also, double validating passwords should allow for pasting to promote the use of managers. Forcing users to type them in creates more possibility for mistakes - you can type the same wrong password twice... Muscle memory is funny that way.


I’ve also accidentally typed a password into a chat app when I meant to type it into a browser. Just zoned out instead of looking at the password field where stars should have been showing up. Ultimately, people are just going to make mistakes!


With unique passwords, the OS could introduce a filter so that when you paste a password into anything but a password field, it gets replaced by ******* à la hunter2 https://www.urbandictionary.com/define.php?term=hunter2
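That hypothetical OS-level filter could be sketched as a pure function. Everything here is invented for illustration: it assumes the OS knows the clipboard text, the set of strings a password manager has marked as secrets, and whether the paste target is a password field.

```javascript
// Hypothetical sketch of the OS-level paste filter described above.
// All names are invented; nothing like this exists in current OSes.
function filterPaste(clipboardText, knownSecrets, targetIsPasswordField) {
  // Pasting a password into an actual password field is the intended
  // use case, so let it through unchanged.
  if (targetIsPasswordField) return clipboardText;
  // Anywhere else, mask a known secret, à la hunter2.
  if (knownSecrets.has(clipboardText)) {
    return "*".repeat(clipboardText.length);
  }
  return clipboardText;
}
```

With this, accidentally pasting a vaulted password into a chat box would produce only asterisks, while pasting it into a login form would still work.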


If anything, pasting a password into a password field should be explicitly allowed, whilst pasting it anywhere else should either be forbidden or, possibly, prompt for confirmation first.


The clipboard is accessible from the javascript runtime from any page in any tab. Maybe disabling paste is intended to discourage the behavior?

I think this is also why lastpass clears your clipboard a few moments after you click the “copy to clipboard” button.


JS has write-only access to the clipboard, for precisely this reason. It would be a security disaster if JS could snoop your clipboard.

Lastpass and other password managers like 1password wipe the clipboard after a few seconds to minimize native app access to the secret.


Sounds like a security issue in Javascript to me


> What is the rationale for disabling paste on passwords, account numbers, other "sensitive" data?

Cargo-cult internet "security" practices are legion in the retail-banking sector. Like with most things it starts with good-intentions but when modern research suggests better-things the worst of them just knuckle-down with hypertension-inducing results: https://www.troyhunt.com/tag/banks/

TL;DR:

* Banks think that having users remember their banking-passwords and commit them to memory is far preferable to having users use password-managers.

** Password managers on Windows can theoretically get hacked by malware:

*** Sure, the data is encrypted at-rest, often with your DPAPI key (e.g. Chrome and Edge's built-in manager) or with 2FA (e.g. LastPass), but none of the password-managers I've used on Windows (Chrome, Edge, IE's, Firefox's, LastPass, etc) take any steps to protect their hWnds from inspection by other userland processes running at the same privilege level. This does surprise me - I honestly would have hoped/thought that by now password managers would use Office IRM-style protections (e.g. `SetWindowDisplayAffinity` https://stackoverflow.com/questions/21268004/how-does-office... ) and/or access the password database and show results in an elevated hWnd to protect it from lower-privileged hWnds and processes.

* Banks believe that password-managers present a risk to their customers (and by-extension: their own bottom-line[1]) because:

** If they do recommend users use a password-manager then they run the risk of a user downloading and using a scam or malicious password-manager and then blaming the bank once their account gets hacked and drained.

*** Banks don't want to get into the business of recommending any particular password manager: there's too many to choose and it's not their business to vet the good ones from the bad ones.

*** So it's easier just to not recommend using any password-manager. This then logically extends to recommending not using a password-manager, using whatever weak reasons exist for arguing against them.

* As for why paste is disabled: This notable article by Troy Hunt deals with this exact issue https://www.troyhunt.com/the-cobra-effect-that-is-disabling/

** The first reason blame-shifts to the bank's accreditation/certification/PCI/EV/etc process - which seems sus, though plausible, depending on exactly which certification's rules and guidelines could be broadly misinterpreted by whatever technophobic upper executive is in charge of a bank's retail online-banking user experience.

** The other examples listed seem (to me) to be all about discouraging users from copying their passwords to the clipboard and pasting them into websites, so that users eventually give up and stop copying at all, typing the password in by hand instead. The concern is that malware running in the background on the user's machine could monitor the clipboard and steal passwords that way - which I'll agree is a real concern - but users will try to copy and paste first anyway, and typing it in leaves them vulnerable to keyloggers (and a program that's already monitoring the clipboard could just as easily be a keylogger).

[1] because they'll likely be found liable for losses caused by unauthorized customer account access due to phishing, etc. Their liability varies between jurisdictions, though I haven't noticed a correlation between jurisdictional liability and banks' general intransigence towards modern evidence-based infosec...


> discouraging users from copying their passwords into their clipboard

Yeah, but what happens in reality is that the user copies the password, and then discovers that paste is disabled. By that time, the password is already on the clipboard.

I don't log in to any particular websites often enough to remember ahead of time which ones let me paste passwords and which ones don't.


I'm pretty sure Keepass/Keepassx etc do this


They do. It's the "auto type" feature. Quite handy when sites disable paste. It also keeps passwords out of your clipboard history.


NIST actually recommends allowing users to paste exactly for this reason:

> Verifiers SHOULD permit claimants to use “paste” functionality when entering a memorized secret. This facilitates the use of password managers, which are widely used and in many cases increase the likelihood that users will choose stronger memorized secrets.

https://pages.nist.gov/800-63-3/sp800-63b.html

I use the "Don't Fuck With Paste" add on for Chrome/Firefox, which mostly works well.


Here's a bookmarklet version of "Don't mess with paste" for those who don't want to install the add-on:

    javascript:void(document.documentElement.addEventListener(
    'copy',e=>e.stopPropagation(),true),
    document.documentElement.addEventListener(
    'paste',e=>e.stopPropagation(),true))


Automatic field detection is fine and good UX for password managers. What is bad is auto-fill without user action.


For exactly this reason I wrote a script that reads from the clipboard cut buffer and inserts the characters one at a time into the keyboard input stream; voilà, a "paste" that side-steps asinine browser page restrictions.


Not to mention the 2-step flow that’s so predominant now.


This has been annoying for me. I build healthcare EMR software, and the browser trying to autofill the employee's information into every patient field is often a problem. We accidentally end up with patients' phone numbers, addresses, and emails set to the employee's information. Since it's software, I don't have control over those employees, but we have had to recommend disabling autofill in the browsers being used to cut it down.

We've spent countless developer hours trying to work around password managers. I agree that sites shouldn't attempt to disable password management for login and sign up pages, but it's annoying how often these password managers do the wrong thing and break the user experience for pages… like Safari is doing for livewire-ui/spotlight.


I’ve made this error and will admit to being utterly baffled the first time I hit it.

As an administrator I was trying to work through a user's problem, but their account details all matched mine. It took an embarrassing amount of time for it to click.


It's such a pain. I've had all kinds of oddities when using type="password" for private data. A lot of password managers would see that, assume it was a password, and fill an email into whatever form element came before it. You can't tell them not to, either!

I've also had to scrub data when users somehow put their credit card numbers into public fields. Still no explanation on that one, but it happened with enough users that our only guess was browser auto-fill gone awry and people blindly hitting submit.


Fine if there was a field there. This is creating a field where there was none.

And it's the browser itself rather than an electively installed plugin where you asked for it.

It's outrageous. By rights, modifying the content this way should be seen as utterly outrageous by both site authors and users, not just as some quirky glitch that isn't smart enough yet, does the modification in the wrong place sometimes, and will shortly be improved to false-positive less often.


Definitely a terrible feature from a web developer's perspective. I stopped counting how many bug reports I've had to close because users thought we were the ones auto-populating their fields with crap.


It _wants_ to autofill, but it doesn't without the user actually confirming the autofill. Pretty important distinction to make I think


Not just that, the "a password" is also not leaking a stored password to a random website that contains this string, it's really just popping up the autofill prompt with the passwords that you explicitly stored for this specific website.


Agree. The current title is inaccurate and clickbaity.

Also, the confirmation requires authentication (at least by default, unsure if this can be changed).


In case it changes, for context, the current title is

> The phrase “welcome back” on a page causes Safari to autofill a password


I don't see this as a bug. Password autocomplete is kind of a dumpster fire. It varies, depending on which sites I visit.

I use 1Password, with browser integrations (it works better with Safari than Chrome).

I don't know most of my passwords; relying on 1Password to access the strings of garbage I autogenerate.

So I am constantly using it to fill forms.

It keys on things like attached <label>...</label> elements. Not all sites use these. Some sites also sometimes add some kind of junk that causes 1Password to fail.

Other times, 1Password insists that the field I just selected needs an autofill; even for non-auth fields.

Not really a big deal for me. No one that shouldn't gets my auth, and I ignore the prompt when it is not necessary.


> I don't know most of my passwords; relying on 1Password to access the strings of garbage I autogenerate.

This is more likely normal behavior than abnormal. The more sites a person uses, the less likely they are to actually know most of their passwords. The default "flow" becomes "password reset and recovery", which makes most services about as secure as the system being used for recovery if the password is reset.

It's important to understand the value of the data or service being "protected" by authentication. Banks should probably continue using passwords. Bookmarking sites, or things like Discord can get away with token logins. This eases the burden on the user.

Gmail leaves me logged in for long periods of time once I've authenticated on a given machine/browser. This is also a form of "autocompletion" in a way, allowing me to access sensitive data (my email) without having to re-authenticate with a password (by using a stored cookie). Anything using my email for password recovery is susceptible to being attacked through my persistent session, but then again I do a pretty decent job of retaining possession of my laptop physically.

By using email tokens to log in to a site, instead of resetting a password that will likely be forgotten, one could just skip straight to logging in the user with the one time tokens, which are as secure as the system being used for transmitting the token.


I use BitWarden and have come to prefer something about BitWarden that initially irked me coming from LastPass.

There is no icon in any of the fields to click to populate them.

There is no auto filling.

You have to cursor into the field, right click and manually select the relevant entry to fill.

From a security standpoint this is much better and safer overall.

It also prevents accidental autofilling and login of an account you're trying NOT to login with on sites where you have multiple accounts and need to keep things carefully separated.


Auto-filling on page load in Bitwarden is an opt-in feature.

Additionally, if you have Bitwarden in your toolbar, you can click the Bitwarden icon, then click the entry for the site, and it will auto-fill in the page for you.

I'm surprised anyone uses context menus to do this, though I agree with you that it's probably safer.


Well, there is auto-filling in the sense that if you press Ctrl-Shift-L (at least on the browser extension), it will find the user/pass fields and fill them in for you. But it requires you to press the shortcut, so it doesn’t do so unprompted.


Apple bug- it's not really a bug

I have encountered this mentality often. I'm not sure if Apple users have so many bugs that they are used to it, or if it's part of the fanaticism.

I had so many bugs on iphone 6 I was baffled because the marketing "It just works". Upon voicing my issues, I was told from numerous people, "it's probably just doing X,Y,Z". Like that's an acceptable reason for bugs.


> or if it's part of the fanaticism.

Thanks for the insult.

I'm not an "Apple fanatic," but I do develop for the platform.

I don't rail against other platforms (I spent 25 years, managing a cross-platform team), and I would suggest that you may be doing yourself a real disservice by writing off an extremely lucrative venue.

I do support you, however, in demonstrating a commitment to your principles, by ignoring and insulting a gigantic swath of monied customers.

