JavaScript is now required to sign in to Google (googleblog.com)
593 points by amaccuish 4 months ago | 499 comments



When I was at Google I started both the login risk analysis project and the Javascript-based bot detection framework they're now enforcing, so it's a pity to see so many angry comments. Maybe a bit of background will make it seem more reasonable.

Firstly, this isn't some weird ploy to boost ad revenue. This is the login page - users are typing in a long term stable identifier already! The JavaScript they are requiring here is designed to detect tools, not people. All mass account hijacking attacks rely on bots that either emulate or automate web browsers, and Google has a technology that has proven quite effective at detecting these tools. There's a little bit of info on how it works scattered around the internet, but none of the explanations are even remotely complete, and they're all years old now too. Suffice it to say: no JS = no bot signals.

Google had the ability to enforce a JS-required rule on login at least 6 years ago and never used it until now. Without a doubt, it's being enforced for the first time due to some large account hijacking attack that has proven impossible to stop any other way. After so many years of bending over backwards to keep support for JS blocking users alive, it's presumably now become the weakest link in the digital Maginot line surrounding their network.

For the people asking for the risk analysis to be disable-able: you used to be able to do that by enabling two-factor authentication. You can't roll back your account to just username and password security though: it's worth remembering that account hijacks are about more than just the immediate victim. When an account is taken over it's abused, and that abuse creates more victims. Perhaps the account is used to send phishing mails to your contacts, or to run financial scams, or perhaps it's used to just do normal spamming, which can - at enough volume - cause Gmail IPs to get blocked by third party spam filters, and users' emails to get bounced even if their own accounts are entirely secure. The risk analysis is mostly about securing people's accounts, but it's also about securing the products more generally too.


Sorry, but at this point it is pretty obvious that big tech companies care about account security only as far as it impacts their services. The recent revelation about Facebook abusing 2FA phone numbers for marketing is a great demonstration of how that works.

Google too does some really funny things to make it nearly impossible to create and maintain anonymous accounts not tied to a phone number. Even when those accounts are just used for browsing and bookmarking, without sending any outgoing information.

>When an account is taken over it's abused and that abuse creates more victims.

Pushing JavaScript everywhere increases the attack surface for every single user on the web. Except it doesn't happen overnight, and big companies who do it (by affecting standards, or by doing stuff like this ^) aren't affected by client-side exploits and privacy loss.

If someone hijacks my browser through some clever JS API exploit and steals my credentials, what is Google's response? "Just use our 2FA." What about smaller websites that don't have resources to maintain 2FA? "They should authenticate through us." All roads seem to conveniently lead to centralization.

BTW, it is worth noting that the impact of a compromised account isn't nearly as significant if a single account doesn't hold keys to pretty much everything you do online. Somehow this is rarely factored in during such discussions.


> Google too does some really funny things to make it nearly impossible to create and maintain anonymous accounts not tied to a phone number.

Well yes, these kinds of accounts are highly susceptible to being bot accounts. What obligation does Google have to be the place for people's free, anonymous accounts? In any case, I haven't had a problem with the number of secondary accounts I've created that are tied to me only by another email address (which can point to another provider, like Yahoo).

> BTW, it is worth noting that the impact of a compromised account isn't nearly as significant if a single account doesn't hold keys to pretty much everything you do online. Somehow this is rarely factored in during such discussions.

How should this be factored into the current discussion? The use of JS is to ostensibly make it more difficult for automated hijacking to prey on users.


This is true. Google has explicitly never put the user first. We should be grateful to give them our information in the first place in the way they deem is best.


Sorry if I don't agree that giving Google a throwaway email address is a significant concession of personal information. Which free services do you recommend that do things better?


You can't give it a throwaway email address - they want your phone number now and will accept nothing less in my experience.

If that doesn't appear true for you, try over Tor and you'll see what happens to many people...


I am surprised to see a long thread about nothing. Google does force you to give them your phone number and they do not let you register new emails after you hit some limit.


I just tried creating a new account from a VPN, using a browser that I haven't used to log into Google previously. It allowed me to create a new account without a backup phone or email, and allowed me to send an email.

How does Google know you've hit a limit when you've registered new emails? And, not to belabor the point, but why should they be expected to give you an unlimited number of free accounts -- or, what service do you recommend that will be that generous?


I mean, they could accept my cash instead of offering a service paid for with my information.

Oh that's right, they'll never give the consumer power over their data, because that's Google's entire value proposition.


So, don't use Google, and run your own DNS that returns 0.0.0.0 for all Google domains. I mean, I'm all about having some responsibility at the corporate level, and people working for companies showing ethics.

But this is just cringeworthy. How do you propose getting your cash to said company? I mean, most methods will leak personal information the company could then use anyway.


The moment you try to login from another location, they will lock you out and ask you to enter a phone number, which from that point forward will be tied to your account.

I am not 100% sure whether they use geolocation, or just trigger this when your new IP doesn't match the last IP you logged in from.


> Pushing JavaScript everywhere increases the attack surface for every single user on the web.

I understand where you're coming from, but most users browse the web with Javascript = on. Even as a NoScript user I have Google whitelisted because most of their services are unusable without Javascript. Even automated tools have good Javascript engines now thanks to headless mode in popular browsers.

I suspect the next steps in browser security will not be to blanket-deny scripting, but instead focus on containers and sandboxing to make script-based attacks less worthwhile.


I am not talking about merely enabling JavaScript. I am talking about normalizing more and more APIs accessible to every website I visit. Sound. Canvas. 3D. Local storage.


> "Just use our 2FA." What about smaller websites that don't have resources to maintain 2FA?

I'm not going to say that providing 2FA is "free" in the time sense (both in implementing it initially and supporting people who lock themselves out) but on the surface 2FA requires just a library to verify 2FA codes and a column in your users table to store the shared secret.


Yeah, it's a bullshit argument. 2FA is a very cheap solution to a problem that could end up very expensive. If you can afford to (securely!) store account information and have a login infrastructure, 2FA is a minimal amount of effort to implement. You could add 2FA from scratch in less than 50 lines of code and one extra column in your account DB. There are no excuses.
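To give a rough sense of what "less than 50 lines" means here: below is a minimal TOTP (RFC 6238) verifier in Python using only the standard library. It's a sketch, not any particular site's implementation; the shared secret would live in that extra database column the parent mentions.

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a big-endian counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_totp(secret: bytes, code: str, step: int = 30, window: int = 1) -> bool:
    """RFC 6238 TOTP: accept codes from the current 30s slice, +/- one slice of drift."""
    counter = int(time.time()) // step
    return any(hmac.compare_digest(hotp(secret, counter + drift), code)
               for drift in range(-window, window + 1))
```

The verification logic really is this small; the real costs are elsewhere: secret provisioning (QR codes), rate limiting, and the support burden of locked-out users.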


>2FA is a very cheap solution

If you don't know the technologies the website is built upon, or how much it will be impacted by an increased barrier of entry for users, this statement is baseless.


Who uses malbolge to create servers?


I take offense at your tone, and the implied judgement. Malbolge was the right tool for us, and let us tap into a talent pool that was otherwise going unused.


The one thing I dislike about 2FA as a user is, if I drop my phone in a lake, can I safely recover my account? I have a lot of time, money, effort, etc invested in my accounts, and I really don't want to lose that


If you use something like Authy or 1Password to store your 2FA tokens, then they aren't lost if you lose the phone. Does this mean a single point of failure and in some ways undermine the use-case for 2FA? Sure, but as with all things security, you need to decide where on the spectrum you want to be. It's a game of trading security for usability based on what your situation requires.


You can write down on paper the private key you get when enabling 2FA. Some providers also give you a list of recovery keys.
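Generating such recovery codes is trivial for a provider. Here's a hypothetical sketch in Python (the format and alphabet are my invention, not any specific provider's):

```python
import secrets

# Alphabet omits easily confused characters (0/O, 1/l/I).
ALPHABET = "abcdefghijkmnpqrstuvwxyz23456789"

def recovery_codes(count: int = 10, groups: int = 2, group_len: int = 5) -> list[str]:
    """Return `count` random one-time codes like 'k3fwd-9mhqx'."""
    def one_code() -> str:
        return "-".join(
            "".join(secrets.choice(ALPHABET) for _ in range(group_len))
            for _ in range(groups)
        )
    return [one_code() for _ in range(count)]
```

The provider would store only salted hashes of these and invalidate each code after a single use.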


If a website is smaller, it may not be sticky enough for a user to feel they get enough value to put in the effort to do 2FA.


That's totally fair, but you don't have to force 2FA on your users.


For good and bad, I happen to like social media for logins... Twitter, fb, google, ms all offer them. In the end if you get their "real name" and email address, I find that generally sufficient. You don't have to do the actual authentication, users can configure 2fa on their own.

I mean, some sites do ask for way more than they need here, and that's often bad. In the end, I think it's a reasonable trade-off for end-user convenience, which I often favor.


Not to detract from your work there, but there are actually some great research papers about how Botguard itself is easy to bypass, and Google cookies provide most of the heavy lifting when it comes to bot detection.

I've snooped around a bit myself and it doesn't seem like botguard does anything much more advanced than other fingerprinting solutions.

I just don't buy that this is all about detecting more bots; every sophisticated bot I've seen in the ad world runs javascript as it's better to pose as a normal user, and only a tiny fraction of users would have javascript disabled.

To be honest, I also don't think requiring javascript is a bad thing either.


Ah, you're assuming it's the same strength on all places it's used - and also that it actually has been bypassed.

There never used to be any public bots that could beat the strongest version, and from a quick Googling around I don't see that that has changed. Someone took apart a single program manually, years ago, but the programs are randomly generated and constantly evolve. So that's not sufficient to be able to bypass it automatically/repeatedly.

> every sophisticated bot I've seen in the ad world runs javascript as it's better to pose as a normal user, and only a tiny fraction of users would have javascript disabled

It's a lot faster and more scalable to not automate a full web browser. Bot developers would rather not do it; they only do it because they're forced to. Forced to ... by requiring Javascript, like this.


I understand this is not a subject where details can be shared, but - I'm sorry - at this level, this sounds like marketing speak. "You can't possibly comprehend just how advanced our AI is. If it appears stupid to you, then it's because we intentionally want it to appear stupid..."


Yes, I know. Nothing much that can be done about that, sorry.

The point I'm trying to get across is that companies use these techniques because they are effective - it isn't as simple as "some junk that was beaten ages ago" - and the collateral damage is very small, relative to other techniques. Far fewer users run with JS disabled than the number of users who struggle with CAPTCHAs.

We can see the direction things are going with reCAPTCHA v3, which appears to be the logical end of the path Google started walking 8 years ago - reCAPTCHA v3 is nothing but risk analysis of anti-automation signals.


> It's a lot faster and more scalable to not automate a full web browser. Bot developers would rather not do it; they only do it because they're forced to. Forced to ... by requiring Javascript, like this.

In other words, the bot developers are still getting through, and meanwhile it's the actual humans who don't want JS who get screwed. Reminds me of DRM... honest customers are the most inconvenienced, while crackers still break it.


> In other words, the bot developers are still getting through

There's no perfect solution, but any solution is still better than none. Why do you keep a lock on your door if I can break it in 30 seconds? Your computer is right there! I could easily attach a keylogger, so why do you even have a password? By that logic, since you aren't stopping me, any protection you add is meaningless.

Let's say that running a full web browser takes 50% more resources (if you block JavaScript, you probably already use the argument that it eats 99% of your phone battery, so you can agree that 50% is pretty conservative); then you've just blocked 50% of the attempts JUST by requiring it. That seems pretty effective already, and you haven't done much yet.

Now add all the information that you can gather using JavaScript. Aren't you also blocking JavaScript because of its capacity to fingerprint you? Again, another easy gain.

> honest customers are the most inconvenienced

A tiny fraction of the honest customers are inconvenienced; a huge portion of them allow JavaScript. They're probably more inconvenienced by blocking older versions of TLS.


You are putting words in his mouth that he didn't say. JS provides a vast reactive surface with which to identify automated tools.


Javascript allows them to identify bots who automate a full web browser.


Javascript is also required for the vast majority of web-based exploits. I find it somewhat strange that you ask me to make my system less secure so you can better secure my account.


Like the blog post mentioned, 99.9% of users already have JS enabled, and this number is only going to go up as websites rely more and more on JS. For them, this is a purely beneficial change, with no downsides. It's somewhat selfish for you to ask that your system be made more secure, even at the cost of security for 99.9% of other users.


I'm hijacking this thread to say we need a better ID system for the web! One that preferably works without JS. Something built into browsers that also allows you to create as many identities as you want.

When an id-signup header is detected, the user sees a signup button and can choose what information is sent to the web site/app. The user can log in to any site with the push of a button, or even automatically.

With a built-in public-private key ID solution, your friends will have the same ID public key on both site A and site B. The contact list can even live inside the browser, and web sites can ask for it, allowing for example white-listing in messenger apps, or letting the user pick who is allowed to see their family pictures, etc.

And web sites/apps no longer have to store usernames/passwords/keys; they only have to issue a "challenge", which the browser answers automatically to prove the ID. The private key should be exportable and standardized, and it should be possible to also use smart cards and second-factor logins. Having ID built into the browser means every site/app no longer has to build and manage all this functionality independently.


In a certain sense this existed with the `<keygen>` element (https://developer.mozilla.org/en-US/docs/Web/HTML/Element/ke...) which is unfortunately deprecated. But you still have all the same issues of moving and safeguarding key material.

These days with browser/mobile sync, maybe it's actually possible. But like a synced password manager, it makes a primary account breach that much more devastating.


The problem with certificates is they require a central authority, i.e. politics. We need something that can work anywhere, without any third party or central authority.


There's nothing about <keygen> specifically that requires a central signing authority. You could just as well generate a public/private pair, feed the website your public key, and sign challenges with the privkey to log in.
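As a toy illustration of that challenge-response idea, here is a Schnorr-style signature over a deliberately tiny group. The parameters are hopelessly insecure and purely for demonstration; a real system would use Ed25519 or WebAuthn.

```python
import hashlib
import secrets

# Toy parameters: P = 2Q + 1 with Q prime; G = 4 generates the order-Q subgroup.
# These numbers are far too small for real use -- illustration only.
P, Q, G = 467, 233, 4

def keygen():
    """Return (private key x, public key y = g^x mod p)."""
    x = secrets.randbelow(Q - 1) + 1
    return x, pow(G, x, P)

def _challenge_hash(r: int, msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(str(r).encode() + msg).digest(), "big") % Q

def sign(x: int, msg: bytes):
    """Schnorr signature: commit to g^k, derive challenge e, respond s = k + x*e."""
    k = secrets.randbelow(Q - 1) + 1
    r = pow(G, k, P)
    e = _challenge_hash(r, msg)
    return e, (k + x * e) % Q

def verify(y: int, msg: bytes, sig) -> bool:
    e, s = sig
    r = (pow(G, s, P) * pow(y, -e, P)) % P  # g^s * y^-e reconstructs g^k
    return _challenge_hash(r, msg) == e
```

The site stores only the public key y; at login it sends a random challenge, the browser signs it, and the same public key could identify you on every site you choose to reuse it with.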


You can also create a public-private key pair using JavaScript. The only problem is that when you go to another site, it won't let you identify yourself using the public key you generated on the first site. And the browser won't answer the challenge automatically. Let's say we make a browser plugin as a proof of concept. Then when it has "product-market fit", browsers can make it built-in functionality. It will however be hard to ask a user to install a plugin just to sign up for/identify to your site/app.


FWIW, this is something U2F somewhat solves. There is still the following exhaustive list of problems:

1. Education for and purchase of U2F keys

2. Key loss recovery mechanism

3. Key stolen defense (you can't just rely on the U2F key alone, there must be a pw or other type of second factor)

4. Widespread browser & device support (without it, a user/pass is required as a backstop)

Nevertheless, it is progress.


A problem with U2F and prior solutions is that they are hard to implement, and public/private key pairs are specific to the website origin, meaning I can not use the public key as an identifier. We need something much simpler; basically all it needs to do is prove possession of the private key. It would however have the same problems you mentioned, which are difficult problems. My idea for making those problems less painful is to require key rotation at regular intervals, so that people will automate away those pain points. For example, by generating two backup keys that the user is instructed to store offline, which are then used every second month to rotate keys, and can serve as proof that you own a lost key. Then you can implement certificates etc. independently, e.g. proof that the person holding the key is a certain person. But it's important that such things are not in the spec - or it would be too complicated, leading to no or slow adoption.


This is brilliant! Has the IETF put out any proposals for a standard of this sort or is this just an idea you had?


Unsure if sarcasm, but OpenID has been a thing for over ten years, and the only place it's ever gained significant traction is with Facebook and Google as the providers: https://en.wikipedia.org/wiki/OpenID#History

Other than StackExchange, I can think of no other major site that looked at the over-engineered protocol, the implementation headaches, the confusing user experience (NASCAR board of provider logos), and said "yes, that is definitely what we want to have rather than a validated user email address."


Chrome supports U2F and that is basically a client cert based ID. But yes, the browser should be able to manage identities on a per domain basis.


> The contact list can even be inside the browser, and web sites can ask for it, allowing for example white-listing in messenger apps, or let the user pick who are allowed to see their family pictures, etc.

How would that work for sites like Hacker News, Reddit, and Twitter? Does every single user have to preview and approve every single other user on the website? That doesn't scale at all.


It would work the same, except you can sign up with one click, log in with one click, and the server wouldn't store any password/key. But it also allows for additional functionality; for example, in a photo-sharing app/site you could say that only those with certain IDs should be able to see the picture, without the site/app needing to know who those people are. They don't even need to have an account on that site.


This is basically U2F


One of the reasons that the percentage of users who have JS enabled continually goes up is because web developers make their sites non-functional when JS is disabled.


An error only becomes a mistake when you refuse to correct it.


> the vast majority of web-based exploits

Do you have a source for that? As far as I know, most web-based exploits came from external plugins such as Flash, PDF, video, etc...


Then just turn Javascript on to log in, then turn it back off again. You just need Javascript for the sign-on page.


Then just get your wallet out to use the ATM in the dodgy neighbourhood and put it away again. You just need your wallet for the ATM in the dodgy neighbourhood.


How else are you supposed to get to your money?


You go elsewhere.


> Javascript ist also required for the vast majority of web-based exploits.

So are browsers...


Interesting to hear from someone involved.

> This is the login page - users are typing in a long term stable identifier already!

There are so many other considerations at work here though, and I can't imagine that they're not obvious to you as well? For starters, we're creatures of convenience, and this makes it significantly inconvenient to block google scripts on other websites even when not signed in. It also guarantees that you have the chance to produce a (likely unique) JS-based fingerprint of every google user that can then be used for correlation and de-anonymization of other data.

But really the most basic point that probably makes folks here suspicious: if this were really only about preventing malicious login attempts by bots, then why not give users a clear, explicitly stated choice: either JS or 2FA.


I understand what you're saying and it makes sense. I think in my mind it's the fact that javascript has the potential to do so many things, not that it's being used that way today.

To use a bad car analogy, the in-car entertainment used to be just a dumb radio. Now that it's a computer connected to the main car network, it has a lot more potential to do things, whether it's a feature, bug, or an exploit.


It's unfortunate that your work is being used in such a massively economically wasteful manner.

The core issue is - it's all computers. The reason you can't reliably detect the difference between an average user using a computer to log in to your computer versus a fairly sophisticated computer using a computer to log in to your computer is that the transition between user and tool is not smooth. It is always going to be easier to find the various boundary points between user and tool than it is to construct a passable simulation of the user using that tool.

Yes, bot detection does make account hijacking attempts more expensive, but it makes all logins more expensive, and the rate of expense increases faster for you than it does the account hijackers.


Thanks Mike,

So the way I understand it, roughly it works like this: JS tries to gather some set of information about the browser's environment (capabilities, network access, reaction to edge cases, ...), sends it to Google, and Google decides whether to allow the user to continue authentication (by providing a crypto signature of request+nonce or something like that... or just by flipping a key in the session).
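That guess can be sketched concretely. The following is entirely my speculation mirroring the parent's description, not Google's actual design: the server issues a nonce to the login page, the page's JS posts back environment signals, and the server returns an HMAC-signed token that the next authentication step requires.

```python
import hashlib
import hmac
import json
import secrets
import time

SERVER_KEY = secrets.token_bytes(32)  # per-deployment signing secret
_pending = set()                      # nonces issued but not yet consumed

def issue_nonce() -> str:
    nonce = secrets.token_hex(16)
    _pending.add(nonce)
    return nonce

def looks_automated(signals: dict) -> bool:
    """Placeholder risk decision; a real system weighs many opaque signals."""
    return not signals.get("js_ran", False)

def mint_continuation_token(nonce: str, signals: dict):
    """Return a signed token if the signals pass, else None. Nonce is one-shot."""
    if nonce not in _pending or looks_automated(signals):
        return None
    _pending.discard(nonce)
    payload = json.dumps({"nonce": nonce, "ts": int(time.time())})
    sig = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def token_valid(token: str) -> bool:
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

With no JS, `mint_continuation_token` is never called with passing signals, so the login flow simply stalls - which matches "no JS = no bot signals."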


> All mass account hijacking attacks rely on bots that either emulate or automate web browsers, and Google has a technology that has proven quite effective at detecting these tools

It sounds untrue and short-sighted. Even Google provides the tools for anyone to automatically navigate a JavaScript-enabled website with Chrome Headless. All this will do is provide short-term security before the bots can again perfectly mimic humans, this time with JavaScript enabled.

Most "mass bots" can be stopped by logging the access attempts on the server side, with plain old HTML on the client side.


I'd be curious whether the tools are only effective because so few bots currently bother to evade them. That is to say, right now, many malicious users are easy to spot using these tools precisely because they haven't had to be hard to spot.

That is to say, in the long run, it will be interesting if this actually reduces malicious use. Seems like it would be just as easy to avoid.


Thanks so much for chiming in here.

Admittedly, bot detection is really interesting to me as a subject. It's not the kind of thing you can throw infinite ML at and expect it to break even in terms of scaling; you need careful tuning and optimization baked in from the ground up, which means a fundamentally "manual" approach involving lots of lateral creativity and iteration. That creativity and one-upmanship (along maybe with the gigantic piles of money, because manual ;)) is what makes the field so interesting to me - but alas, I cannot pepper you with questions/topics/discussion/anecdotes/stories for two hours for obvious reasons :)

So, instead, a couple of questions, considering datapoints likeliest to benefit others, and perhaps (hopefully) provide some appropriately dissuading signals as well.

- How does Botguard detect Chromium running in headless X sessions? By looking at the (surely very wonky) selection of pages visited, or...? (Obviously IP range provenance is a major factor, considering the unusable experience Tor users reportedly have)

- Regarding the note about "bending over backwards", while playing with some old VMs very recently I observed with amusement that google.com works in IE 5.5 on Win2K but not IE 6 on Win2K3SP4 (https://imgur.com/a/zg7FoAW), possibly due to broken client-side config, I'm not sure. In any case, I've also observed that Gmail's Basic HTML uses very very sparing XHR so as not to stackoverflow JScript, so I know the bending-over-backwards thing has stuck around for a long time. Besides general curiosity and interest in this practice, my question is, I wonder if this'll change going forward? Obviously some large enterprises are still stuck on IE6, which is hopefully only able to reach google.com and nothing else [external] :)

- I wonder if a straightforward login API could be released that, after going through some kind of process, releases Google-compatible login cookies. I would not at all be surprised if such ideas have been discussed internally and then jettisoned; what would be most interesting is the ideation behind _why_ such an implementation would be a bad idea. On the surface the check sequence could be designed to be complex enough that Google would always "win" the ensuing "outsmart game", or at least collect sufficient entropy in the process that they could rapidly detect and iterate. My (predictable) guess as to why this wasn't implemented is high probability to incur technical debt, and unfavorable cost-benefit analysis.

I ultimately have no problem that JS is necessary now; if anything, it gives me more confidence in my security. Because what other realistic interpretation is there?


I'm glad you're interested! The world could use more people tackling spam.

I'm not going to discuss signals for obvious reasons. Suffice it to say web browsers are very complex pieces of software and attackers are often constrained in ways you might not expect. There are many interesting things you can do.

I have no idea how much effort Google will make to support old browsers going forward, sorry. To be double-super-clear, I haven't worked there for quite a while now. Over time the world is moving to what big enterprises call "evergreen" software, where they don't get involved in approving every update and things are kept silently fresh. With time you'll see discussion of old browsers and what to do about being compatible with old browsers die out.

Straightforward login API: that's OAuth. The idea is that the login is always done by the end user, the human, and that the UI flow is always directly with the account provider. So if you're a desktop app you have to open an embedded web browser, or open a URL to the login service and intercept the response somehow. Then your app is logged in and can automate things within the bounds set by the APIs. It's a good tradeoff - whilst more painful for developers than just asking for a username/password with custom UI each time, it's a lot more adaptable and secure. It's also easily wrapped up in libraries and OS services, so the pain of interacting with the custom web browser needs to be borne by only a small number of devs.
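For anyone unfamiliar, the authorization-code flow described above looks roughly like this. The endpoint and client values are made up for illustration; real providers document their own URLs and parameters.

```python
from urllib.parse import parse_qs, urlencode, urlparse

def build_authorization_url(auth_endpoint: str, client_id: str,
                            redirect_uri: str, scope: str, state: str) -> str:
    """Step 1: the app sends the user's browser to the provider's login page."""
    return auth_endpoint + "?" + urlencode({
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,  # CSRF protection: the app checks it matches on return
    })

def extract_code(redirected_url: str, expected_state: str) -> str:
    """Step 2: after the human logs in, the provider redirects back with a code.
    The app then exchanges that code (plus its client secret) for tokens,
    server-to-server -- the user's password never touches the app."""
    params = parse_qs(urlparse(redirected_url).query)
    if params.get("state", [None])[0] != expected_state:
        raise ValueError("state mismatch; possible CSRF")
    return params["code"][0]
```

The key property is the one the parent names: the human always authenticates directly with the provider, and the app only ever sees short-lived, scoped tokens.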


Couldn't the attacker just switch to IMAP/POP3 for those attacks? (And why aren't they doing that already?)


pop3 / non-oauth imap login attempts are blocked unless the user has explicitly opted in to allow those protocols:

https://support.google.com/accounts/answer/6010255


"Google had the ability to enforce a JS-required rule on login at least 6 years ago and never used it until now."

Um ... this should terrify everyone.

'We had the power to impose this, and we graciously chose not to. You should be thankful for what we have done.'

No. Just ... no.


What exactly? The fact that the login owner can change the login system at any point?


Do you glow in the dark?


> Firstly, this isn't some weird ploy to boost ad revenue.

Everything Google does is a ploy to boost ad revenue. That's the whole business model.

> Without a doubt, it's being enforced for the first time due to some large account hijacking attack that has proven impossible to stop any other way. After so many years of bending over backwards to keep support for JS blocking users alive, it's presumably now become the weakest link in the digital Maginot line surrounding their network.

You're doing two things here:

1. Reasoning from no evidence, and ignoring a much simpler reason in the process.

2. Acting like the honest web users, the ones blocking malware-laden JS, are the ones who are wrong.

It's much simpler to conclude that Google engineers simply got lazy and decided to punt the hard work of security to some JS library, instead of looking at it honestly.


ITT: people dramatically under-estimating the risk to their accounts from credential stuffing and dramatically over-estimating their security benefits from not running JS.

They're probably right that not running JS is privacy accretive, but only if you consider their individual privacy, and not the net increase in privacy for all users by being able to defend accounts against cred stuffing using JS. The privacy loss of one account being popped is likely far greater than the privacy loss of thousands of users' browsing patterns being correlated.

tl; dr: Good luck detecting and preventing automation of sign in pages at scale without robust JS based defenses. I think there's a shortsightedness and self-centeredness to a lot of these comments.


Your first statement is incompatible with your second. (I think the second statement is reasonable, although I disagree with the conclusion).

People aren't underestimating the risk to _their_ accounts, they are discounting the risk to _others'_ accounts.

That is, they're essentially saying, 'well, other users chose to have bad passwords, so bully them'.

I think that's a fair viewpoint to have. We've entered a world in which computer literacy is a basic requirement in order to, well, exist.

That said, what's reasonable, and what actually occurs, are two different things. A company isn't going to ideologically decide "screw the users that use bad passwords" if it loses them money.

So we get _seemingly_ suboptimal solutions like this.


I think you may be giving people more credit than they deserve, but I'm willing to accept that they're making that argument. Even if that is their argument - that their personal habits around password use and being attentive to not being phished are so good they don't need Google's help defending themselves, so bully for everyone who does - I'm not convinced it's a good one.

There are a few things needed for that to be a good argument:

1) Their security really is so good. (I'd bet it isn't. I saw a tenured security professor/former State Department cyber expert get phished on the first go by an undergrad.)

2) Google isn't improving their security posture on top of that. (I'd be shocked if Google isn't improving theirs, and I'm certain having JS required to sign into Gmail closes a major hole in observability of automation.)

3) There are real harms to their security/privacy posture from the JS being there. (As I've said elsewhere, I'm unconvinced Google's own privacy policy would allow them to do anything untoward here.)

As to your point about computer literacy and existence, I think the sad truth is that computer engagement is required, but literacy is optional. When that's the case, large companies are in the position of having to defend even the least computer literate against the most vicious of attackers.


> I think the sad truth is that computer engagement is required, but literacy is optional.

You're right on, but I wouldn't call it sad.

The population is expected to operate vehicles without putting others in danger, not credentialize in how cars work. There are endless amounts of things we could demand people spend their precious time deeply understanding. We just like to demand tech-savviness because it's self-aggrandizing.

Like everything else, the solution is to help people on their own behalf.

At an online casino I once worked at, we ended up generating random passwords for our users. We had to, because otherwise attackers would lookup usernames in the large password dumps online and log in as our users. No amount of warnings on our /register page stopped password reuse. So we decided we could do better than that, and that "well, we warned you" was not an appropriate response.

If you look around at everyday objects, everything is designed to protect the user. But for some reason in computing we're still in the dark ages of snickering and rolling our eyes at users for making mistakes.


> The population is expected to operate vehicles without putting others in danger, not credentialize in how cars work.

Exactly! We require "car literacy" in drivers before we allow them to use them. Pretty much every advanced economy has mandatory driver licensing.

A driver can trivially press a few levers and slam themselves into a barrier at 100mph. But they don't do that, because they know, through experience and education, that it's a terrible idea.

That's the exact opposite of the approach that would have cars restrict their own usage into a narrow set of patterns and refuse to function otherwise.

WRT the last half of your comment: I think that's reasonable. Generating random passwords for users is a fair approach.

Account security exists on a spectrum. I don't think anyone (reasonable) is arguing against that, we're talking about mutable state here, actual _actions_.

What I'm railing against, is this idea that every webpage on the internet needs to be behind a CAPTCHA that does a bunch of invasive data collection including probably asking the user to perform a Mechanical Turk task in order to _access a website_ without even logging in.

It happens all the time. A website doesn't like my IP block -> forced through a bunch of nonsense. The site operator probably isn't even aware because they're using an upstream service which does it for them.


If by ‘cred stuffing’ you mean brute forcing accounts, that’s what short lockouts and 2 factor authentication are for. JavaScript is just a layer of obfuscation and doesn’t fundamentally help.


Credential stuffing more commonly refers to the practice of getting valid sets of creds from various password database dumps and retrying them across common/popular systems.

2FA is a good defence against it, but lockouts less so, as the attacker will be going broad rather than deep (possibly a single request per user account).


IP-based lockouts, as opposed to account-based lockouts, do better against cred stuffing, because there is a cost to getting more IP addresses. Maybe carrier-grade NAT would lead to too many false positives?


Most bad actors doing abuse at scale have access to large networks of proxies on residential or mobile IPs, usually backed by malware on workstations, laptops and mobile phones.

Even as a newcomer without the right contacts on the black market you can get started with very little upfront investment, using services like https://luminati.io/ (they pay software developers to bundle their proxy endpoints within their apps).


IPv6 addresses aren’t really scarce.


/64 and /48 are pretty much on the same order of magnitude as IPv4s in terms of difficulty of acquisition, and I don't know why you would ever look at more than /64 when most major operating systems randomize the last 64 bits anyway (RFC4941).
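A hedged sketch of what that implies for throttling (the function name and bucketing policy here are my own illustration, not any particular product's logic): collapse IPv6 clients to their /64 prefix before counting attempts, since a single host can present countless addresses within its own /64.

```python
import ipaddress

def throttle_bucket(ip_str: str) -> str:
    """Bucket a client address for rate limiting.

    IPv4: the address itself is the scarce resource, so use it directly.
    IPv6: RFC 4941 privacy extensions randomize the low 64 bits, so any
    host can rotate through its /64 freely; throttle on the /64 prefix.
    """
    ip = ipaddress.ip_address(ip_str)
    if ip.version == 6:
        net = ipaddress.ip_network(f"{ip}/64", strict=False)
        return str(net.network_address)
    return str(ip)
```

Two addresses in the same /64 (e.g. `2001:db8::1` and `2001:db8::dead:beef`) then land in the same bucket, while distinct /64s remain distinct.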


Cred stuffing is using botnets, you aren't going to have more than a couple login attempts per IP.


I don’t see how JavaScript is anything more than a bandaid for that. The assumption is that the attacker has the username and password combination, and then you want to prevent them from logging in.


I'm not up to speed with the latest and greatest of what JavaScript can do, but isn't the source code fundamentally user-visible?

We always used to laugh at people who did website security with javascript, the whole idea was that security processing had to be done server-side.


Javascript can be served dynamically as well, even per user/connection. So an attacker would have to investigate and counter each new version of the scripts. Even if this could be done automatically, it greatly increases the cat-and-mouse factor for Google.


So, basically javascript is used for security through obscurity?


Obscurity is just another layer you add onto your security. As with all security methods, none is perfect, and it's always a balance with usability.

But with security at this level, nowadays every added layer helps, even if it is not used in the initial authentication step. Think of classifying certain patterns in the attacks and retroactively de-authorizing after login, increasing the time-cost for the attacker.


security through obscurity may be no security at all, but security without obscurity is probably not as good as security with obscurity for many security scenarios that one can imagine.


Can't you use Javascript to implement challenge-response authentication, which meaningfully improves security by:

1. Preventing interception of passwords on the wire

2. Allowing a tunable "difficulty" parameter which makes brute-force attacks cost ineffective

3. Requiring that brute-force attackers either run a Javascript interpreter (dangerous, because the web site chooses what they do and could make them mine Bitcoins) or rewrite their brute-forcer each time the JS-driven network communication channel is altered

It seems to me that having a client-and-server protocol beyond just "POST this data here" can be more secure than sending a password to the server for verification...
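Point 2 can be made concrete with a hashcash-style client puzzle (a sketch with invented names, not any real site's scheme; real deployments tune this far more carefully): the server hands out a random challenge, and the client must find a nonce whose hash clears a difficulty threshold before its login POST is accepted.

```python
import hashlib
import itertools
import os

def make_challenge() -> bytes:
    """Server side: issue a random challenge alongside the login form."""
    return os.urandom(16)

def _digest_value(challenge: bytes, nonce: int) -> int:
    h = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big")

def solve(challenge: bytes, difficulty_bits: int) -> int:
    """Client side: brute-force a nonce; expected work grows as 2**difficulty_bits."""
    target = 1 << (256 - difficulty_bits)
    for nonce in itertools.count():
        if _digest_value(challenge, nonce) < target:
            return nonce

def verify(challenge: bytes, difficulty_bits: int, nonce: int) -> bool:
    """Server side: a single hash to check, however hard the nonce was to find."""
    target = 1 << (256 - difficulty_bits)
    return _digest_value(challenge, nonce) < target
```

A legitimate user pays the cost once per login; a credential stuffer pays it per attempt, which is where the "cost ineffective" lever lives.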


>1. Preventing interception of passwords on the wire

Isn't this solved by https? I have no idea, but I hope at least that https protects my passwords.

>2. Allowing a tunable "difficulty" parameter which makes brute-force attacks cost ineffective

I don't want to wait for a login more than a second. Actually, I don't want to wait at all.

>3. ... or rewrite their brute-forcer each time the JS-driven network communication channel is altered

How is this different from altered HTML/CSS? An attacker has to adapt to the altered login page. It is not an argument for javascript.

>It seems to me that having a client-and-server protocol beyond just "POST this data here" can be more secure than sending a password to the server for verification...

You say it: a protocol! not a piece of javascript.


> Actually, I don't want to wait at all.

Neither do I. But I also accept that, given the sheer volume of stolen creds and bots out there, sites that damage their bang/buck performance, even at the cost of very minor inconvenience to users, are likely to be targeted less frequently and in lower volume. Even if I wasn't begrudgingly willing to pay that price, I'd at least admit to the logic of making the process more time-consuming as a deterrent.


> I don't want to wait for a login more than a second.

Do you log in that often?


I find the idea of detecting someone's trying to bust your login page with some kind of automated system and deciding to serve them a ridiculously aggressive Bitcoin miner rather amusing.


You risk setting up a little cold war that you probably don't have time for though...

"Think you are clever, eh, try this for size..." -- some attackers in response to being affected by your counter measures.


What's stopping that war from happening at any other time? If an attacker has the resources and carelessness to mount such an attack at a whim, you should be prepared for it.


Nothing, but trying to hack them back seems like a way to invite more personal attention than your services might otherwise get.


I believe some US banks have been doing this for a while.


> Can't you use Javascript to implement challenge-response authentication

> 1. Preventing interception of passwords on the wire

It can, but challenge-response that isn't PKI-based requires the remote side to have the secret stored, or the local side to know how to generate the value that is stored instead, which goes against other recommended practice (with PKI the remote side can store the public key and ask for something to be signed with the private key).

Protecting passwords on the wire is better done with good encryption and key exchange protocols - in the case of web-based systems that is provided by HTTPS assuming it is well configured.

> 2. Allowing a tunable "difficulty" parameter which makes brute-force attacks cost ineffective

Could you give an example of that? If you are tuning difficulty based on the computation power of the other side, surely the other side could lie about being low powered and get an easier challenge?

> 3. Requiring that brute-force attackers either run a Javascript interpreter (dangerous, because...)

A knowledgeable attacker doing this would be safe: they'd make sure the interpreter was properly sandboxed (to avoid reverse hacking) and given execution resource limits (to avoid resource waste). Then, if the site/app is important enough that they really want in, they modify their approach if the resource limits are hit.

> or rewrite their brute-forcer each time the JS-driven network communication channel is altered

If your method is only used by you (and you aren't a Google or similar so you are big enough to be a juicy target on your own) and you enter into this arms race you might find it takes so much resource that it gets in the way of your other work. You are only you, the attackers are legion: put one off and another will come along later. Also there is the danger in rolling your own scheme that you make a naive mistake rendering it far less useful (potentially negatively useful: helpful to the attacker!) than your intention.

If the method is more globally used then it is worth the attackers being more persistent.

> It seems to me that having a client-and-server protocol beyond just "POST this data here" can be more secure than sending a password to the server for verification.

It can, though often only against simple fully automated attacks. Cleverer automated attacks may still succeed, as may more manual ones, and targeted manual attacks will win by inspection & replication.

Or they get in through an XSS, injection, or session hijacking bug elsewhere (bypassing the authentication mechanisms completely) that you missed because you spent so much time writing an evolving custom authentication mechanism.


Regarding point 1, you can combine challenge-response with Diffie-Hellman to:

1. Have the server side not know the password

2. Be secure against replay attacks even given attacker access to plaintext.

The scheme is something like:

Setup:

Server generates a key pair (x, xG), where G is some elliptic curve base point. It stores x and sends xG to the client.

The client computes y = H(password) and sends yG to the server.

The server stores the shared secret x·(yG).

Authentication:

Server generates a nonce r and sends r·(xG).

Client computes y = H(password) and responds with y·r·(xG).

Server verifies that the response equals r·(x·yG).

End

In this protocol, an attacker with access to plaintext, even during setup, still can't do anything.

This method is weak against MitM, but that can be solved on auth by doing a fully ephemeral Diffie-Hellman there.

I concocted this scheme in like 10 minutes, so there might be mistakes, and it is probably suboptimal.
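The algebra above can be run as a toy, substituting a multiplicative group mod a prime for the elliptic curve (so "xG" becomes g^x mod p). This is purely to illustrate the math: toy parameters, no MitM protection, no salt or stretching on the password hash, nowhere near production-grade.

```python
import hashlib
import secrets

# Toy group: integers mod a Mersenne prime, generator 3 (illustration only).
P = 2**127 - 1
G = 3

def h(password: str) -> int:
    # Stand-in for a real password hash (no salt or stretching here).
    return int.from_bytes(hashlib.sha256(password.encode()).digest(), "big") % (P - 1)

# --- Setup ---
x = secrets.randbelow(P - 2) + 1   # server's long-term secret
X = pow(G, x, P)                   # "xG": sent to the client
y = h("hunter2")                   # client derives y = H(password)
Y = pow(G, y, P)                   # "yG": sent to the server
S = pow(Y, x, P)                   # server stores "x(yG)" = g^(xy), never the password

# --- Authentication ---
r = secrets.randbelow(P - 2) + 1   # fresh nonce per login attempt
C = pow(X, r, P)                   # server sends "r(xG)" = g^(xr)
R = pow(C, h("hunter2"), P)        # client replies "yr(xG)" = g^(xry)
assert R == pow(S, r, P)           # server checks against "r(x(yG))" = g^(xyr)
```

The check passes because scalar multiplication (exponentiation here) commutes: both sides are g raised to x·y·r.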


No not necessarily, and if it is, it's bad practice.

Client-side js will always be readable, even if you obfuscate it, you can't trust it to never being decompiled.

But server-side js never has to reach the client, it can be used to dynamically generate basically anything.


> Good luck detecting and preventing automation of sign in pages at scale without robust JS based defenses

Why is it not sufficient simply to throttle logins at the server?


Modern cred stuffing is done by botnets. When I see a cred stuffing attack, it's maybe 1-3 attempts per IP address spread over 100-500k IP addresses. Often you'll have a family of legitimate users behind an IP address that's cred stuffing you at the same time.

Throttling by IP address may have worked 10 years ago, unfortunately it's not an effective measure anymore.

Modern cred stuffing countermeasures include a wide variety of exotic fingerprinting, behavioral analysis, and other de-anonymization tech - not because anyone wants to destroy user privacy, but because the threat is that significant and has evolved so much in the past few years.

To be entirely honest, I'm kinda surprised Google didn't require javascript enabled to log in already.


Any advice on where to read more about these modern cred stuffing countermeasures? I'd love to learn more.


Unfortunately I don't have much reading material to provide. It's a bit of an arms war, so the latest and greatest countermeasures are typically kept secret/protected by NDA. The rabbit hole can go very deep and can differ from company to company.

The most drastic example I can think of was an unverified rumor that a certain company would "fake" log users in when presented with valid credentials from a client they considered suspicious. They would then monitor what the client did - from the client's point of view it successfully logged in and would begin normal operation. If server observed the device was acting "correctly" with the fake login token, they would fully log it in. If the client deviated from expected behavior, it would present false data to the client & ban the client based on a bunch of fancy fingerprinting.

Every once in a while, someone will publish their methods/software; Salesforce and their SSL fingerprinting software comes to mind: https://github.com/salesforce/ja3


A relatively successful company in the area is Shape Security. Their marketing is a bit painful, but they invented the concept of cred stuffing. Disclaimer: I worked there for four years.


Fundamentally it's a question of fingerprinting the behaviours of humans versus bots. The problem is that it's becoming increasingly difficult to distinguish them, particularly when bots are running headless chrome or similar, and real users are automating their sign-ins with password managers.

I don't do much of this sort of thing, but numerous things come to mind. Aim to identify and whitelist obviously human browsers, blacklist obviously robot browsers, and mildly inconvenience/challenge the rest.

For example, an obvious property of a real human browser is that it had been used to log in successfully in the past. Proving that is left as an exercise for the reader, though it inevitably requires some state/memory on the server side.
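A minimal sketch of that tiering idea (all names and thresholds invented for illustration, not any real product's logic): allow devices with a successful login history, block obvious automation, and challenge the ambiguous middle.

```python
from dataclasses import dataclass

@dataclass
class LoginSignals:
    # Illustrative inputs only; real systems use far richer signals.
    device_logged_in_before: bool   # e.g. a signed, server-issued device cookie
    headless_indicators: bool       # JS probes suggesting automation
    recent_failures_from_ip: int    # server-side counter

def decide(signals: LoginSignals) -> str:
    """Return 'allow', 'challenge', or 'block' for a sign-in attempt."""
    if signals.device_logged_in_before and not signals.headless_indicators:
        return "allow"       # obviously human: known-good device, no bot signals
    if signals.headless_indicators or signals.recent_failures_from_ip > 50:
        return "block"       # obviously robot
    return "challenge"       # mildly inconvenience the rest (CAPTCHA, 2FA prompt)
```

The interesting engineering lives in producing those input signals, not in the final decision rule.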


This is a rare paper published on the topic.

https://link.springer.com/chapter/10.1007%2F978-3-319-07536-...


A company I am considering investing into: https://fingerprints.digital/


Are they looking for funding? They appear to be privately funded.


They have been at https://www.wolvessummit.com/ - they are preparing for a funding round. You can find them at other events listed on their page: https://fingerprints.digital/event/


But you don't have thousands of families logging in from thousands of different servers: in your case a max of 10 login attempts would prevent it.


Throttle based on what? IP address? This works for domestic IT departments looking to shut out automated attempts from specific ranges but at Google's scale IP based filtering could end up shutting out an entire country.


> Throttle based on what?

User Id?


That's a terrible idea. Back when MSN was one of the most common instant messengers, there was a common prank called "freezing" where you just continuously kept trying to log into someone's account, and it would lock itself out for 15 minutes or more, depending on how long you kept doing it.

There were automated tools that did this too!


That's the first obvious countermeasure, and it will prevent hackers targeting a specific account. But there are other ways to crack passwords; one is to try the same password but iterate over user IDs instead. As hackers would start with the most common password, you can't throttle globally on same-password attempts either, because, well, it is by definition the most commonly used one, which sees a lot of legitimate traffic.


Google can ban common passwords, or passwords that look like they’re being targeted (over the long run).


This has nothing to do with anything but I don't know how else to get in touch with you. Could you upload your zero spam email setup guide somewhere? Your site was hacked so the link I had doesn't work:

http://iamqasimk.com/2016/10/16/absolutely-zero-email-spam/


I’m sorry, I changed the domain to QasimK.io, but neglected to set up forwarding. I will do that.

http://qasimk.io/2016/absolutely-zero-email-spam/


"Credential stuffing" as I've heard it used refers to taking username/password combos from one breached site and trying them in other sites.

So for example LinkedIn has a breach, which reveals to evildoers that user 'johnsmith@example.com' uses the password 'smith1234' then they test that username and password in Amazon, Netflix, Steam and so on.

They only make one attempt per account, because they only have one leaked password per account. Hence, throttling per account isn't an option.
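A tiny hypothetical illustration of that point (sample "dump" invented here): per-account lockout counters never trip on credential stuffing, because each leaked pair is tried exactly once.

```python
from collections import Counter

LOCKOUT_THRESHOLD = 5        # typical per-account failure limit
failures = Counter()

# Invented sample of leaked credentials from some other site's breach.
leaked_pairs = [
    ("johnsmith@example.com", "smith1234"),
    ("jane@example.com", "hunter2"),
]

locked_out = []
for user, password in leaked_pairs:
    failures[user] += 1              # one attempt per account...
    if failures[user] >= LOCKOUT_THRESHOLD:
        locked_out.append(user)      # ...so this branch never fires

assert locked_out == []              # lockouts see nothing unusual
```

The attack stays invisible to per-account throttling precisely because it's one guess per victim, spread across millions of victims.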


That would create an easy denial of service attack: if I wanted to deny you access to your account I'd spam it with bad login attempts.


Happens weekly to my Sears account.


With credential stuffing, isn't it unlikely the perpetrator wants to make more than one or two attempts per user ID?


Which country uses a single IP address for all its devices/citizens?


All of Qatar's traffic used to be routed through 82.148.97.69, though that was back in 2006-2007. At one point it was banned from Wikipedia, which unintentionally affected the whole country.

https://simple.wikipedia.org/wiki/User_talk:82.148.97.69


China Telecom does something weird with NAT, not sure what exactly but I've seen it mentioned here before


And indeed it's time to give up on the web being a document format only. The internet is about loading remote applications in your local sandbox. That's what it is. It sucks, but it is what it is. As part of loading remote applications, we now might be asked to compute whatever anti-abuse puzzles are required. So it goes.


If something shitty is happening, you don't have to shrug your shoulders coswhatyagonnado. Understanding the human reason why something shitty is happening doesn't mean you have to accept it. So it goes, until it doesn't.


Passwords are obsolete - actual security would involve keys. The fact that they have to care about automation for security instead of availability is a sign they have already lost. If your disposable EC2 server's administration password is accessible, you are already doing it horribly wrong, because you /will/ get attacked frequently.

Javascript opens an attack surface for what will certainly turn into an arms race anyway, instead of ending it.

Given that they aren't pushing a new standard for what has long been a problem, while introducing a vector for abuse both to and from it, Google can be criticized for both of those sins far more.


To be fair, Google released their own OTP hardware keys and have already made 2FA login mandatory for accounts that they deem "high risk."

I don't think it's fair to blame them for the facts that most folks are not willing to give up passwords yet. Given that passwords are the current reality, shouldn't they do everything in their power to make them as secure as possible?


Calling other people, or their opinions, shortsighted and self-centered is usually not the start of a good conversation.


I'll be the guy who says that while I recognize those are insults, they are also sufficiently descriptive of a point of view... it's not like he called someone Mr. Poopypants.


You're right. I was shortsighted and self-centered.


Not to a good conversation, no. But a conversation isn't always what's desirable.

Sometimes one just wants an accurate depiction of a situation -- and those might still be totally accurate characterizations...


I mean, why not cut to the chase?


So what about an opt-out at account level? Something in the account settings, like this:

[check] Allow sign-in from javascript disabled browsers. WARNING etc. (usual warnings about security etc.)

Edit: because users who know to use long passwords and 2FA do exist and don't need all that extra security stuff ...


> So what about an opt-out at account level? Something in the account settings, like this:

> [check] Allow sign-in from javascript disabled browsers. WARNING etc. (usual warnings about security etc.)

It sounds a bit like what Gmail's doing with their "allow less secure apps" login option, except that's more for allowing IMAP logins using password instead of OAuth.


I used a long and supercomplicated password for one of my accounts that i access intermittently. Why I have it is a long story, but I only log into it once or twice a month to check if there is something that needs my attention.

Usually the login is in incognito or guest mode, and even from different locations and machines. Google asks for a second factor (I don't have it on for my accounts), like phone verification, for my usual accounts (with not-so-complicated passwords), but not for the one with the complex password. So I think the level of extra steps/security is linked to how complex your password is. Not so sure if this is a good thing or bad, but I hope they continue basing their security measures on the security measures you take.


I asked LastPass to generate me a long and complicated password for a new Office 365 account only to have it rejected as too long because it was over 16 characters. Sigh.


It's probably the switching of devices that raises the level of security.


Maybe one reason is that Google doesn't know which account is trying to log in before the login page, so how could they remember that security setting before attempting to serve JS?


I don't understand why anybody concerned about having JS on a login screen would want to log into Google in the first place. I imagine there's a tiny overlap between "Runs NoScript" and "Trusts Google"


It costs money to support, and a minuscule number of users would care.

The majority of Google's customers also don't pay for an account.


Even when I ran NoScript, Google domains were allowed. It's an extra step for a small number of technical users.


> ITT: people dramatically under-estimating the risk to their accounts from credential stuffing and dramatically over-estimating their security benefits from not running JS.

Passwords are effectively obsolete, and everyone should be using multi-factor authentication of some kind. Keys with passphrases. 2FA. Whatever.

Making 2FA auth mandatory would be substantially more effective than bot signaling.

> tl; dr: Good luck detecting and preventing automation of sign in pages at scale without robust JS based defenses. I think there's a shortsightedness and self-centeredness to a lot of these comments.

If they really wanted to stop bots, 2FA would be mandatory, with additional phone-based (i.e. SMS) verification whenever you try to log in from a new geographic area. That would stop anything short of a targeted hack.

Instead, they created an attack on the bot maker's profit margins. Cloudflare, Google, et al. are really just trying to increase the cost of making bots. They are not really trying to _stop_ bots.

Stopping bots requires making unpopular choices.


XSS vulnerabilities are everywhere. You obviously don’t realize that.

Note that I do use JS, because it makes life easier. But you've got to realize that not using JS will at some point protect you against an XSS vuln. They are that prevalent.


There's absolutely nothing in the above comment that indicates the person you're replying to doesn't know the prevalence of XSS vulnerabilities.


I'm pretty familiar with XSS prevalence and I agree with them.


Running JavaScript means parsing text from an outside source plus executing a program from an outside source. Both require really complicated code, measured in millions of lines.

The risk is still being underestimated.


> The privacy loss of one account being popped is likely far greater than the privacy loss of thousands of users' browsing patterns being correlated.

That's quite the hand-wave. How do you even measure privacy loss? And given that browsing history is not in your inbox, why are you so confident that one compromised email account is a bigger deal?


This is coming right after the reCAPTCHA v3 announcement

https://news.ycombinator.com/item?id=18331159

Sorry, you don't have enough Google Points to browse the web. Please enable JavaScript and install Google Chrome.


Recent new version of Google Mail flat out doesn't work to any usable standard in Firefox. Ten seconds to open a new 'compose mail' window. A context menu does a multi-second HTTP fetch before showing. The previous version worked great.

Either the dev team has just given up on quality or they're intentionally goading me into installing Chrome. I'm not going to play that game -- at this point Thunderbird works better.


Switching email providers is reasonably painless, fwiw. Set up forwarding, migrate mail when you can.

Even better if you set up the majority of your non-security-essential mail to be at your own domain, hosted by Fastmail/etc. Then you can easily change your email provider and your contacts don't even care. I've yet to implement this in my own life; I just switched to Fastmail, so I can't speak from personal experience on the domain portion of it.

NOTE: I mentioned non-security-essential email in reference to things like your bank login, or things that could threaten your life essentials. I say this because theoretically (and it has happened before), using your own domain increases the attack surface area. My personal plan is to set up custom-domain email with Fastmail, but still use the plain me@fastmail.com for my security-focused emails. The majority of my email will be based on my custom domain for easy portability, but I plan to avoid that for my bank, for example... assuming Fastmail lets me.


I can speak from experience regarding FastMail because that's exactly what I did. In fact, I migrated off a grandfathered Google Apps account with my custom domain to FastMail with that same domain. Yeah, it's a bunch of steps, but I'm very comfortable with making DNS changes. My wife and I have an account; it's worth every penny.

Also, FastMail allows for subdomain handling. I use this feature with nearly every site. You can have *@<YourFastMailId>.<YourDomain>.com route to <YourFastMailId>@<YourDomain>.com just as you'd expect. The way this handling works is even configurable.


Another very happy user of FastMail here, with our own domain. I initially was excited by subdomain handling, but switched back to only using my main account.

Using FastMail-specific features will lock you into this specific vendor once again, one of the main reasons to switch in the first place!


To be fair, how FastMail does catch-all delivery like this is standard and easily reproduced at any mail vendor (except Office 365) that supports catch-all, which is most of them. I use a catch-all address with FastMail that is @asubdomainichose.mydomain.org, and it is the same subdomain I used with my previous setup before moving to FastMail.

Using a subdomain for catch-all is great because spammers can’t easily discover and flood the subdomain.


I'm in the weird Google Apps for Your Domain limbo right now myself. I've wondered what would happen if I switched to something other than GMail but kept my google account with that email address.

I know a long time ago you could set up a Google account using a non-GMail email address but I'm not sure if that's even a thing anymore. That's what I want though. Keep the email address with my own domain that I've used for 17 years and just have a regular old Google account using that email (and keep all my Google services and purchases associated with it).

Google has been absolutely terrible to Google Apps for Your Domain users (who were often Google's biggest supporters back in the day). They've been shoved into this weird second class status where their Google accounts only partially work with Google services. I completely regret ever setting it up.


You absolutely can set up a Google account with any email address you want.

https://accounts.google.com/SignUpWithoutGmail

I use Google services heavily at work, all on a Google account that was created with my work email address. And we are not a Google shop; my employer's email is self-hosted Exchange.


You can continue using your email address for your Google account even if you've got someone else handling the mail now. You can also sign up for a Google account with an email account from any domain or provider.


I switched to Fastmail years ago and it was the best mail-related thing I ever did. I was dreading the migration but it literally took ten minutes, switch DNS records (I have my own domain), run Fastmail's import, done.

I still can't believe how fast the UI is. It's by far the fastest web app I've ever used, and the same goes for the service in general.

Seriously, just ditch Gmail now, the alternatives are great.


Gmail is more than just mail, it's also integration with other Google services, like calendar. How does Fastmail fare in that regard?


FastMail supports CalDAV. I use my FastMail calendar with Thunderbird (Lightning) and on my iPhone; works great. They also support CardDAV for contacts. /satisfied FM customer since ~2008 or so


I had to purchase a CalDAV and CardDAV app (which were extremely cheap, mind) for Android, so it's not quite as plug'n'play there.


Why is jjawssd's (sister) comment dead? Davdroid works great and is free (as in beer and speech), though I would encourage people to donate if it's useful to you.


I wouldn't know, I have a self-hosted calendar. From the little I've seen, though, the calendar part of Fastmail is very good too.


Which self-hosted calendar do you use? would you recommend it? I'm in the market for a new one, but the current offerings that I've seen aren't great.


I use Radicale and find it great, but there's no UI, so you need to use whatever client you want that supports CalDAV (I use Lightning and the calendar on my phone). Lately I've been liking Nextcloud a lot, and that's a one-stop solution for lots of things, so nowadays I would recommend that if you have a home server or want to pay someone to host it.


Thanks!

I've looked at nextcloud, but IIRC, you have to have the whole suite installed, right? I'd love a way to just use the calendar function.


Yeah, you do. As I said above, Fastmail's calendar is very good too, and you can load your self-hosted/CalDAV calendars into it, so that's a good option.


Last time I tried fast mail they didn’t really support labels, only folders. Is that still the case, or is there a good workaround?


FastMail is standards-based, so it does not support labels. This is a good thing, and you should stop depending on Google-specific proprietary features. Even when I was on Gmail, I had a lot of issues with labels because the third party mail clients I needed to use didn't support them. The inbox tabs I ended up replacing in Gmail with rules/filters, that moved my social updates, for instance, to an actual social folder which worked properly on third party clients.

That being said, FastMail is also the leading developer/champion of a new mail standard called JMAP, which supports both labels and folders. I suspect, therefore, if it takes off, they may consider supporting labels themselves.


I used fastmail for a year and can recommend it, but if you’re European you should probably look up runbox instead as it’s housed in Norway.

That’s what I eventually switched to and it works fine.


> assuming fast mail lets me

It does let you, you can create as many aliases as you want (I'm assuming) on any of their or your domains.


Fwiw, I use Gmail exclusively in Firefox and have no problems at all. (And my machines are fairly dated.)


I've recently switched to MacOS's built in mail client with IMAP to Gmail, never have to wait for my UI to do something. So count me in as surprised how far gmail has gone downhill.


That's the thing that gets me. So many optimisations have gone into user interface software over the years. And some of the stories of early Apple work, like 'round rects [0]' are truly inspirational.

I wrote software using Cocoa about a decade ago (so I may be out of touch), and it was clear how much thought and effort had gone into making the user interface responsive. And it generally shows.

The idea that you would just give up on that precedent is baffling. And let's face it, email's important but it's not rocket science.

[0] https://www.folklore.org/StoryView.py?story=Round_Rects_Are_...


Never liked webmail anyways.

Thunderbird might be superior, but I really like that App because it's so light and fast.


What version of Firefox are you running? You are either exaggerating greatly or have other issues with your system. I run the latest stable release of Firefox and the performance of Gmail (particularly the features you mention) is fine. I’d be happy to upload a screen recording to verify.


He's not the only one. It's a recurring comment here on hacker news and a problem I've encountered as well, and I'm running the latest stable release.


Same here, I run the latest Firefox on both Windows and Linux. Gmail always takes at least 5 seconds to load.


If you experience a reproducible Firefox performance problem, please consider using the Firefox profiler add-on [1] to record a profile and file a bug with "[qf]" added to the whiteboard field. These "[qf]" Firefox performance bugs get reviewed by engineers twice a week. Having a profile makes the bugs much easier to diagnose.

[1] https://perf-html.io/docs/#/


I think that Chrome also suffers on this front? But it's better at doing pre-fetching than Firefox is

This could really just be that part. I have a hard time imagining explicit sabotage of FF on the gmail frontend. The likeliest explanation is that perf testing and the like only happens in Chrome


The classic question to ask at this point is whether this affects you with a fresh install, or only once you have added all your extensions?


Firefox 63.0. Fibre internet connection. 3.1 GHz Mac, 16 GB RAM.

It's tricky to share a screen recording because there's personal information. But I just did two for my own curiosity. From a fresh load, once the "Loading Gmail" screen has gone away, it took 8 seconds and 11 seconds respectively from clicking 'Compose' to having a new window open.

Maybe there is variability. There are a million combinations of factors out there. I suppose as an engineer you make the trade off of "do I hope for the best case" vs "do I make something that works for a broad audience". The previous version shows that they can make something that works for my own anecdatapoint if they want to.


I have a lot of issues with google apps for business. Sometimes I have to refresh the browser 5-6 times before it will display any email in the primary inbox as well.

It's just horrible to use in firefox (in arch linux) and I'm currently looking for a new provider.

I might just go all in and use protonmail.


Protonmail is great but doesn't offer custom domains. If you need custom domains, people mostly mention Fastmail, but I think there are far better choices like mailbox.org and Kolab Now. Mailbox doesn't look like much from their homepage, but it has an awesome web client and is an extremely reliable private provider that's been in business since the 90s. I've had an account there for the last 5 years without a single problem.


This is actually not quite right, we have offered custom domain support since 2016 :)

https://protonmail.com/support/knowledge-base/custom-domain-...


I've experienced both very fast and very slow with the new Gmail on the same machine with Firefox on Linux. It's currently faster than Chromium, but maybe tomorrow I'll see ten-second load times. Who knows? For the record, I use uMatrix (and uBlock Origin on easy mode for client-side cleanup), which might be affecting it somewhat.


>and the performance of Gmail (particularly the features you mention) is fine

>ten seconds to load your inbox

>16 GB, i7, SSD, 100 MB/s internet etc.

>fine


Exactly what I was thinking. How is that even close to fine? WTF?


I’m a little surprised to read this because Google Mail works fine for me in Firefox (ArchLinux). In fact it’s smoother than some of the Electron-based clients I’ve tried and less painful than trying to get push messages on Thunderbird working (sure, there is always IMAP but that requires regular fetches).


FYI, IMAP actually allows "push messages" via the IDLE extension. If you use K9 on Android, it's enabled by default. I never used Gmail, but I'd be surprised if the Gmail IMAP server didn't support it (and I would dismiss Gmail entirely if it didn't).


Is this a new thing? I don't recall seeing an option for that in Thunderbird (desktop version by the way; not the mobile / Android version) the last time I looked (~9 months ago).


IDLE is an old extension. Unfortunately Thunderbird is a crappy client.


What would you recommend then?

I don't use Thunderbird myself - was just following the discussion on from the OP who did use it. However I've yet to find a client I like so genuinely interested in any suggestions you might have.


I'm currently using mutt with "getmail" (which does support IDLE), which I can recommend -- it's an excellent client, but only if you're fine with tweaking.

I used TB until two years ago, but I gave up on its unfixed bugs and quirks. I do prefer graphical clients, but not if they are clunky or buggy.

I used Sylpheed and Claws for years, but Sylpheed locks (or used to lock) the UI during fetch (unacceptable IMHO), while Claws has some critical bugs in the filter/rule logic that made me lose mail on several occasions by refiling into the wrong folder while processing a lot of messages. If you aren't a heavy filter user you might be fine with it though; I think Claws gets a lot of things right.

KMail wasn't bad when I used it, but it was too long ago to make an honest comment today.


Same issue here. Mails not loading, poor initial load time. That is with zero extensions enabled.

I am now using mutt/notmuch/mbsync to prevent having to go through their horrendously slow web interface, and eventually move away from Gmail completely (probably to ProtonMail or fastmail).


> A context menu does a multi-second HTTP fetch before showing.

Where? The only one I can trigger that does any kind of network is in the inbox, and that's only to get some icons. The text for the options is already loaded.


I'm talking about the RSVP box for integrated calendar invites. Not a right-click context menu.


Yup, that was all the push I needed to migrate all my Google stuff to fastmail.


I have the same problems on Chrome.


Ditto, they really need to work on speed on the new gmail.


Firefox on mobile or PC? I haven't experienced that on PC.


I've had the same experience: poor performance and display anomalies.


If you enable privacy.resistFingerprinting in Firefox, you automatically fail the v3 Captcha with a score of 0.1. People who want to try it out: https://recaptcha-demo.appspot.com/recaptcha-v3-request-scor...


Thanks! I filed https://bugzilla.mozilla.org/show_bug.cgi?id=1503872

When we have time we'll have to trace through what it's doing and what components of RFP are causing the failure. (If anyone wants to do that and report in the bug, we (Mozilla/Tor) would much appreciate the contributions!)


Sounds like the feature is working exactly as intended and the problem is with reCAPTCHA.


Google has become increasingly annoying. Every time I browse from work, where I have to use Internet Explorer, I have to suffer Chrome ads. They also require me to solve a Captcha every time I change the number of results per page via the search settings.


> Sorry, you don't have enough Google Points

In the past few months all our domestic devices have gradually hit that notional condition with Google Search. All the laptops one by one, and then last night my phone. My wife's phone is the only one that can still use their search without a ten-round Recaptcha challenge.

As each device was locked-out from Google I switched the default over to DDG.


Is there any traffic coming from your IP that would make the Google fingerprinting bot angry at you?


Me too. It's creepy being locked out of half the internet for daring to block ad servers.


What is especially interesting is that this will allow Google to track you on more pages, but that in this case, you can by definition not block the tracker. I've checked, but reCAPTCHA just falls under the general Google Terms of Service.

I don't believe this to be done with that goal, but it is an unfortunate side-effect.


ReCaptcha is like Cloudflare's free DDoS protection: we like to point at these services and complain how people are "ruining the web" by using them because that's what we do on HN. We ignore the big picture and whine.

But I encourage everyone to consider a darker reality: that centralized services by large companies are becoming more and more necessary in a world where it's becoming easier and easier to be an attacker. The internet is kinda broken. Like how half the ISPs in the world don't filter their egress for spoofed IPs because there's no real incentive. That every networked device in every household could unknowingly be part of a botnet because we aren't billed for externalities.

Yeah, maybe it's kinda spooky that now ReCaptcha v3 wants to be loaded on every page. But is that really the take-away? What about the fact that this is what's necessary to detect the next generation of attacker? That you can either use Google's omniscient neural-network to dynamically identify abuse or you can, what? Roll your own? What exactly is the alternative?

Do HNers think this stuff is a non-issue because nobody has ever attacked their Jekyll blog hosted on Github Pages (btw, another free service by a large company)?


That is exactly what I was trying to say with the final line in my comment: I do believe that this is necessary; it's just unfortunate that it comes with the tracking side-effect.

So no: the take-away is that this improves reCAPTCHA. A side remark to that is that it also improves Google's ability to track you, and hampers your ability to fight that.


Haha yeah, thanks for the recommendation Google. Chrome is never gonna come back on my PC.


reCAPTCHA can go frick off into a hole. I've stopped using all websites that use reCaptcha because it takes me sometimes 10 minutes to login to them. I also don't feel right providing free data so Google can help a military drone bomb children on busses one day.

I miss old captchas.


> I also don't feel right providing free data so Google can help a military drone bomb children on busses one day.

reCAPTCHA v4: please click on all the pictures of insurgents.


They are such a pain point. Especially if you fill out a form accidentally, and have to go through the re-captcha again, and again and again for the most mundane of services.


Tbf you don’t require Google to browse the web.


There are also employers who don't treat their employees like children.


"When your username and password are entered on Google’s sign-in page, we’ll run a risk assessment and only allow the sign-in if nothing looks suspicious."

In my experience (it is already the case with gmail and outlook up to now), this means I will not be able to log in to my account when on holiday in another city or country, or when I use a borrowed device, or when I am behind VPN/Tor, etc., unless I give Google my phone number and can afford to get a call/SMS at that point in time to unblock the account.

It should be my choice, as it is my account that is at risk, to turn such dubious security measures on or off. It is fine to have these features on by default, but I would like to turn this particular feature off for my account. Any clever "risk assessment" thing where a computer decides without an option to turn it off/on is problematic.

I sometimes have the feeling they know this and it is on purpose. They want not only to collect data, they want to collect high-quality data, and these measures help clean their data sets at the time of collection.


I travel frequently and have multiple Google accounts (3x G-Suite and one Gmail) and have never had any problems accessing any of them anywhere in the world. I do occasionally get alerts saying that they've blocked a login attempt from India or South America, though. It seems their system works pretty well.


This is actually surprising enough that I'm glad to hear it even as an anecdote.

My experience with changing devices or cities (or god forbid both at once) is that it always requires further authentication, and often fails outright. I have an account which is simply disabled because I didn't set a recovery phone # or email and then changed machines. Everyone I've ever discussed the topic with has described similarly pervasive problems.

Which makes me wonder: what's so different between usage patterns? Obviously Google's auth approach is working for lots of people, so what's distinctive about this block of users it's constantly failing for?


> In my experience (it is already the case with gmail and outlook up and now), this means I will not be able to login to my account when in holiday in another city, country, or when I use a borrowed device, or when I am behind VPN/Tor, etc, unless I give google my phone number, and can afford to get a call / sms at that point of time and unblock the account.

That's exactly it, there's already two Gmail accounts from high school I can't access despite knowing the passwords.


I get this on my own computer that I've been using for 6 years on a static IP. It happens at least once a month, sometimes several times. Each time they ask for a phone number confirmation when no phone number is linked to the account (and never will be).

Google™ employees have come in and found mind-bending ways to excuse it when I've mentioned this before.


I also have no faith in their risk assessment. For a very long time I have only used one computer from one location to log into my Gmail account and every time I log in they consider it a suspicious activity. They even forced me to confirm my identity on my last login. What's their risk assessment doing if it can't get the baseline right?


What makes you think that their end goal wasn't getting your identity confirmed?


I don't think they are questioning Google's end goal, but more the effectiveness of the current system.


It used to be the case that these checks were not in place when you're using 2FA. The downside is that you cannot use it without a phone to register in the first place (though you can use your own generators afterwards).


Yep, this happened to me when I created a separate account for travel. Immediate, permanent lockout.


They're giving you a free account to burst out mails with. Your account will most likely contain a lot of private or privileged information about other people, e.g. their mails, pictures, contact data, etc. You have a responsibility so why should you be allowed to reduce the security of your account?


Because if I "have a responsibility", and it is truly mine, I should be allowed to.

But just as you said, they are giving it away for free, so it is technically theirs, we are not paying customers. (Except for G-Suite users)


> But, because it may save bandwidth or help pages load more quickly, a tiny minority of our users (0.1%) choose to keep it off. This might make sense if you are reading static content, but we recommend that you keep Javascript on while signing into your Google Account so we can better protect you.

They don’t seem to explain why though? Did I miss it? Are they fingerprinting the JavaScript environment of my browser? Why? The 0.1% are the people who would like to know why they need it, but this message is written ironically for those who don’t know what JavaScript is.


Additionally they imply the only motivation for disabling JavaScript is to increase performance and decrease bandwidth. They conveniently don’t mention the other, arguably more prevalent motivations: to increase privacy and security.


Yeah, it struck me as extremely disingenuous; while I like the other benefits, I disable JS mostly because I hate being tracked and noticed that many (most?) browser exploits require JS to run.


...and speed, and decreasing the amount of arbitrary code execution on your machine.

Most people don't disable JS entirely, but use something like uMatrix or noscript. It takes more work, but you can turn off a significant number of things that just don't need to be executed and get around a lot of annoying modals and paywalls (or see a lot of blank pages; that happens a lot too).


0.1% of Google accounts is a huge number of people. Millions?


They are trying to detect state actor hacking and track individual devices and whether they are new or impersonations of devices.

Some states have mitm certs on all their domestic machines but (hopefully) not much competence except on whatever schedule they buy updates.

I would be implementing a U2F soft client in JS if I were Google. IMO you need a private key a state would need to retrieve by tampering with JS, and that isn't being sent over the wire with every connection. (Just to give them their first level of headache when it comes to transitioning from observing to impersonating.)


To keep your account secure, turn on Javascript?? If anything is making your web browsing less secure, it's JS.

I don't particularly care that Google isn't letting you sign in without JS, but the message is just plain wrong.


I'm not sure about you, but most people with JS "disabled" don't browse with JavaScript disabled entirely; instead they use a whitelisting/blacklisting plugin, since otherwise they wouldn't be able to access many essential sites (e.g. banks). Under this setup, whitelisting Google isn't going to decrease security unless you think they're going to serve a 0day when you sign in.


Passwords can be hashed directly client-side with JavaScript, which is way more secure than sending them in the clear on the wire, so I don't disagree with Google's stance here and don't understand the hate.


Hashing passwords client side has no benefit if a site uses HTTPS.

If a site uses HTTP, then hashing the password client-side and sending it up to the server is equivalent to sending a cleartext password. If an attacker can already read your traffic, what is stopping them from using your password's hash to log in to your account?


It stops them from using the password to log in to your other accounts.

It stops a compromised server from silently leaking unhashed passwords.

It makes password hashing user auditable.

You could even do a call-and-response model to stop the hashed password from being used to log in at all. Here is a primitive scheme for such a model (public key crypto probably enables more clever schemes, not sure):

- Upon signup, generate hashes of "$password$site$i" for i in 1 to 1000. Send these to the server and have the server hash them again.

- Upon login, after the user has entered their password into the box, send an integer from i from 1 to 1000 to the browser, have the browser send back the hash of "$password$site$i".

Now a compromised hash can only let you log in 1 time in 1000. Combine that fact with the other available signals for "is this who we think it is" and you should be able to reject people who stole the hash reasonably reliably. Meanwhile since you are still hashing the password on the server (again) you have lost literally nothing but a tiny bit of computation time.
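The primitive scheme above could be sketched like this (a toy illustration only — the "$password$site$i" construction and the 1-in-1000 challenge are straight from the comment, SHA-256 is an arbitrary stand-in, and as the comment itself says, nobody should actually deploy this):

```python
import hashlib
import secrets

def h(s: str) -> str:
    return hashlib.sha256(s.encode()).hexdigest()

def signup(password: str, site: str, n: int = 1000) -> list[str]:
    # Client precomputes hashes of "$password$site$i" for i in 1..n;
    # the server stores a *second* hash of each, so a database leak
    # alone doesn't reveal valid challenge responses.
    client_hashes = [h(f"{password}{site}{i}") for i in range(1, n + 1)]
    return [h(x) for x in client_hashes]

def login(password: str, site: str, server_store: list[str]) -> bool:
    # Server challenges with a random i; client answers with the i-th hash.
    i = secrets.randbelow(len(server_store)) + 1
    response = h(f"{password}{site}{i}")       # computed client-side
    return h(response) == server_store[i - 1]  # server re-hashes and compares
```

A stolen response only works for the one index it answers, which is the "1 time in 1000" property the comment describes.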


Use a password manager and don't reuse passwords. If your randomly generated, unique password has good enough entropy then why go through all of the trouble of the rest of the client side hashing?

There's nothing stopping you from hashing your own passwords client side and sending your bcrypt hash up to the server except some sites still truncate the passwords to 32/16 chars etc.

When you need that level of security, client-side hashing will not be as good as the dedicated HSMs that many services now use for authentication.

Writing your own crypto flows can be extremely dangerous as you open yourself to all kinds of side channel attacks.


A password manager is a client-side method that only works for people who opt into it; Google needs to deploy a server-side method. Likewise with hashing my own passwords client side, and with HSMs.

As for writing my own crypto. Indeed, if anyone actually used the scheme I suggested they would be making a mistake. I wrote it not to be used but to demonstrate that we can do better in an easy to understand way. Unlike me, Google has the resources to read the papers, do the math, carefully implement this, and do it properly.

Keywords for how to do it properly include "zero knowledge password proof" and "password authenticated key exchange".

PS. It's irrelevant to this conversation, but putting all my passwords into one program has always struck me as a monumentally stupid idea. I use one for passwords I don't care about, I memorize unique passwords for passwords I do care about.


worshipping an arbitrarily contrived measure of password entropy makes for good security theatre, but there's a lot that goes into maintaining anything resembling actual security. How many people use "password generators" and trust that they'll come up with "random" words? What about that old saying about putting eggs in a basket?


> It stops a compromised server from silently leaking unhashed passwords

If you trust the site to deploy correct JavaScript to do this, then that's the same level of trust that they implemented password salting and hashing server side. You don't gain any robustness by moving this to JavaScript.

Your scheme is just a weak salting technique. You'd be better off with just using a longer salt and hash function.


I separately assume a salt is part of my hash function. Salts only help with rainbow tables (an admirable goal, but not my one here).

I can trust the site to deploy the correct javascript more than I can trust it not to steal passwords because

- That is auditable - it is impossible for a malicious site to do so without risking being caught.

- The HTML/JS can be served from static cloud storage that is far less likely to be hacked than the server running a DB verifying passwords.


> - That is auditable - it is impossible for a malicious site to do so without risking being caught.

Hardly. Minimization and obfuscation is trivial, and you can ensure the output is always different in order to defeat auditing. Not great for caching obviously, but 'auditability' is not achievable if the server is determined to fool you.

> - The HTML/JS can be served from static cloud storage that is far less likely to be hacked than the server running a DB verifying passwords.

Passwords are simply not where you want to leverage your security. If you can find a documented example of a real threat that this approach would have mitigated, then I'll take it seriously.


So the malicious site can run risk assessment first and then if it thinks nobody's looking, send different code for hashing to this particular user.


This still feels vulnerable to XSS. Better would be to have browsers provide an API to do this so that $site is trusted.

The downside is not a tiny bit of computation time. It's also increased latency for the customer.


This is completely wrong. HTTPS is what secures this, not client side password hashing. If you don't use HTTPS, you can just get MITM'd to disable any kind of client side hashing.


You are wrong. Client-side hashing CAN be a silly thing, but it can also prevent a (compromised) server from seeing your password which you probably use on other websites (which is what most people do unfortunately).


>but it can also prevent a (compromised) server from seeing your password

If the server is compromised, then there is no protection of your cleartext password at all. This is because the entity that compromised the server can replace the original JS with anything, including new JS that sends your cleartext password off to their own host as you type each character.

The only activity on your part that can save you against compromised servers is having a unique password per server (i.e., not reusing any passwords).


Not true in modern architectures; that situation only applies to more traditional file & API server combos. If you statically serve your site with a service like S3 and have a backend running on Lambda or EC2, the attacker cannot modify the static assets, and client-side hashing will prevent them from seeing the plaintext password.


Again, this is wrong depending on how the client is implemented, if updates are signed, if we are talking about a protocol, etc.


and if said "compromised" server simply decides to not supply the js that hashes the password?


Thanks for saying it. Client-side scripting can't protect against a compromised server when the client scripts are provided by that same server.


The answer is that it depends. We could be talking about protected js with SRI, signed updates with an electron client, a browser plugin or native hashing, a protocol similar to SSH that hashes the client pw, etc.


This is only true when client-side hashing is under control of the client. In a web browser, it is not. The browser will happily run whatever JS the server sends it. So if the server is compromised, it can send compromised JS, and there goes your client-side hashing protections.

An example of where it might work is in an app, where you're getting the client code from a separate channel like an app store.


It can protect you against non-malicious issues on server-side. If I recall correctly, twitter recently discovered that they were logging passwords in plaintext by accident. With hashed password you reduce exposure of actual passwords in this type of situation.


or a separate channel like another server - which is the standard in every large web application I've ever seen.


This is why server side HSMs (hardware security modules) are a thing.


Is this true? Over time I have seen user passwords end up in a variety of strange internal places accidentally, like log files or crash dumps.


See: https://blog.cryptographyengineering.com/2018/10/19/lets-tal...

About client side benefits. I'm not advocating for JS in the browser but there are benefits to doing some work client side.


Who is sending passwords in cleartext on the wire?


I think totony meant sending passwords without pre-hashing, but yeah it doesn't make sense to send any confidential information in clear text that should be sent via E2E encrypted TLS channels.

Furthermore, pre-hashing doesn't necessarily make transmitting confidential information safer, as one would argue that your client side javascript can be reverse-engineered and give the attacker more information about how you hash your data.


Really your back end should just treat hashed passwords like any other password.

Ideally, if TLS was being MITMed somehow, such as via a dodgy root cert, it would shield the user's plaintext password so it could not be used to log in to other services. The problem is, as soon as there is a TLS issue, an attacker can modify the JS to just send the password in the clear. It really would require code that can't be modified by the attacker, which means there would have to be some sort of browser support. Otherwise it does nothing against the attack it's supposed to protect against.

The main benefit is offloading some computational workload to the client's machine. This could allow you to increase the work required to brute-force the password hashes, assuming your database leaks (i.e. increase iterations or memory requirements).

Your last argument is security through obscurity; if exposing how you hash makes it easier to brute-force the passwords, your password hashing sucks.


Yes I meant sending the password cleartext inside the transport protocol*

Pre-hashing doesn't prevent an attacker from stealing your account if it can read the communication, but it prevents the attacker from having your actual password and using it everywhere else you might re-use the password or a permutation of it.


Almost any http site with a login form is sending your password in cleartext. Thankfully, initiatives like Let's Encrypt have made plain http sites much less common than they used to be.

Hashing the password before sending it doesn't really help you much - the naïve approach is vulnerable to "pass-the-hash" (where you basically send the hash instead of the password as the authentication token). The secure approach involves either some kind of challenge-response or a nonce salt, but these aren't as easy to implement correctly.


Indeed. And: who is hashing passwords on the client? This would require either not using a salted hash, or sharing the server's salt with the client, in order to obtain identical hash values for comparison. In either case, that system's entire password inventory would be a lot more vulnerable.

TLDR don't do that, send passwords over SSL and use a good password hashing algorithm on the server like BCrypt.
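To illustrate the server-side approach the TLDR recommends: bcrypt itself needs a third-party package in Python, so here's a hedged sketch of the same idea using the standard library's scrypt, another memory-hard password hash (the cost parameters are illustrative, not a tuning recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # New random salt per credential; store both salt and digest.
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(candidate, stored)
```

The password travels over TLS in plaintext; only the server ever runs the slow hash.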


Yep. Proper password hashing requires a per-credential salt, a pepper (shared across all credentials), and a strong algorithm (IV, iterations, etc.). Revealing all that information is a leak, and arguably makes client-side hashing less secure (by giving away a lot of parameters for attackers to attack).


NIST may say that you should use "peppers" for passwords, but nobody else does.

None of bcrypt, scrypt, or Argon2 use them and are not materially worse for it.


Yes, adding a pepper is a recommendation, not a mandatory step. But a lot of sites do, e.g. PagerDuty [1], paired with PBKDF2, which many apps require to meet FIPS certification or enterprise support on many platforms. [2]

[1]: https://sudo.pagerduty.com/for_engineers/

[2]: https://www.owasp.org/index.php/Password_Storage_Cheat_Sheet
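For illustration, one common way to apply a pepper is an HMAC over the password with a server-held secret before the slow, salted KDF runs. A minimal sketch — the env-var name and iteration count are my own assumptions, not PagerDuty's actual scheme:

```python
import hashlib
import hmac
import os

# Secret pepper kept outside the database (env var, KMS, HSM, etc.);
# "PW_PEPPER" is a made-up example name.
PEPPER = os.environ.get("PW_PEPPER", "example-secret").encode()

def hash_password(password: str, salt: bytes) -> bytes:
    # Pepper first (HMAC with the server-side secret),
    # then the salted slow KDF (PBKDF2 here).
    peppered = hmac.new(PEPPER, password.encode(), hashlib.sha256).digest()
    return hashlib.pbkdf2_hmac("sha256", peppered, salt, 200_000)
```

A leaked database then isn't crackable offline without also stealing the pepper from wherever it lives.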


Salts are not meant to be secret, nor are the hashing functions. You gain little by hiding them


If you're in a position to, or are developing an app, use Argon2!


Judging by his comment, totony is.

Your password _is_ whatever you send over the wire. Doing a hash in JavaScript before sending it won't obscure the user's password from anyone who can see their traffic; it will only obscure the user's password from the user.


Nope, the password is what people type in. They may type the same things at many websites. We should not care what that exactly is.

Why would you want to see the actual user password if you can just not see it?

If you see a password, you can leak it by screwing up in any number of ways. If you never see a password, you just can't leak it.

E.g. Twitter recently discovered that they were storing passwords in plaintext in logs; GitHub had a similar issue.

Take a look here: https://arstechnica.com/information-technology/2018/05/twitt....

Of course, a hash received from the client should be treated as a normal password, with all the usual good practices applied.


No, the password is whatever you send over the wire. If a website processes your attempt to type "password" into "5f4dcc3b5aa765d61d8327deb882cf99" before sending that to the server, then your password for that website is 5f4dcc3b5aa765d61d8327deb882cf99. That's what the server sees and how it recognizes you. The only effect of this is to make it less likely that the user knows his own password.
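For what it's worth, the value quoted in this thread really is just the unsalted MD5 digest of "password", which any rainbow table reverses instantly:

```python
import hashlib

# The hash quoted above is simply MD5("password"):
print(hashlib.md5(b"password").hexdigest())
# → 5f4dcc3b5aa765d61d8327deb882cf99
```

So an attacker who captures this value gets both a replayable credential for this server and, via a lookup table, the original password itself.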


If the user's password is "password", they may be reusing it across 50 other websites. If you leak the information that "password" is linked to "email@gmail.com", I can hack those 50 other websites. If you never knew that the user's password was "password", you can't leak it and I can't use it to log in to the 50 other websites. Leaking "5f4dcc3b5aa765d61d8327deb882cf99" is useless to a hacker, because they can't go and use it to log in to another website.

So, there are properties that differentiate "password" and "5f4dcc3b5aa765d61d8327deb882cf99", even if for the server it's all the same.


The distinction you're trying to draw vanishes as soon as this becomes a standard practice. Passwords are already stored hashed and salted. They get compromised anyway, because the data is valuable. Under the circumstances you describe, cracking 5f4dcc3b5aa765d61d8327deb882cf99 (which takes less than a second) is just as valuable as cracking a password database entry is now, because the underlying issue (reuse of credentials) hasn't gone away. (In fact, you're encouraging it, so it's probably somewhat worse.)

As long as people are reusing credentials across multiple websites, those credentials will have value greater than that associated with their use on any particular site, and other people will put in the effort to crack them. Even when you're generating and submitting a cryptographically secure salted hash, you haven't improved on the situation now, where databases store a secure salted hash of the password.


Lots. But even the sites that don't still tend to send the raw password to the server, which is also bad.


How is sending the password to the server over HTTPS bad? What would you do otherwise? Hash it on the client? Then are you not using salted hashes for your password store? That's far worse. Or you're hashing twice: first with no salt client-side, then again with a salt server-side. That's fine, but the client-generated hash must be unsalted, so it's basically the password itself: stealing the client-generated hash instead of the original password is just as good, with only a minor loss in value (you might not be able to reuse it on other sites for the victim; though perhaps you still could, if you can build a reverse index of common passwords hashed with whatever algorithm is in use).

And if you don't trust HTTPS to protect sensitive information, why would you send the auth cookies over it that have virtually as much power the password that was given in exchange for them in the first place?


Why would you want to see the actual user password if you can just not see it?

If you see a password, you can leak it by screwing up in any number of ways. If you never see a password, you just can't leak it.

E.g. Twitter recently discovered that they were storing passwords in plaintext in logs; GitHub had a similar issue.

Take a look here: https://arstechnica.com/information-technology/2018/05/twitt...

Of course, a hash received from the client should be treated as a normal password, with all the usual good practices applied.


> Hash it on the client? So are you not using salted hashes for your password store?

There is no reason you can't also salt on the client. Salts do not need to be secret. The substantial constraint you outlined in your comment isn't a problem.
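That combination (a public per-user salt used by the client, plus a server-side re-hash of whatever arrives) can be sketched like this. A stdlib-only illustration under assumed names; `scrypt` stands in for whatever slow KDF the client runs:

```python
import hashlib
import hmac
import os

# --- Client side: salts are not secret, so the server can hand the
# per-user salt to the client, which runs the expensive KDF locally.
def client_prehash(password: str, salt: bytes) -> bytes:
    return hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)

# --- Server side: the prehash is now the effective password, so it is
# hashed again (a cheap hash suffices here) before being stored.
def server_store(prehash: bytes) -> bytes:
    return hashlib.sha256(prehash).digest()

def server_check(prehash: bytes, stored: bytes) -> bool:
    return hmac.compare_digest(hashlib.sha256(prehash).digest(), stored)

salt = os.urandom(16)                 # generated once at signup, public
stored = server_store(client_prehash("pa55", salt))
assert server_check(client_prehash("pa55", salt), stored)
# The raw password never crosses the wire, and a database leak yields
# neither the password nor a value that can be submitted to log in.
```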


Literally almost everyone. (Wrapped in a TLS connection of course.)


> (Wrapped in a TLS connection of course.)

So, not cleartext over the wire then.


If the client hashes the password, then the hash itself becomes the password. Stealing the hashed passwords is then the same as stealing the plaintext passwords they're based on, since you can submit them directly.

Blizzard Entertainment does half-client, half-server hashing, which is rather clever; it's one of the few examples where client-side hashing makes sense.


Nope, the password is what people type in. They may type the same things at many websites. We should not care what that exactly is.

Why would you want to see the actual user password if you can just not see it?

If you see a password, you can leak it by screwing up in any number of ways. If you never see a password, you just can't leak it.

E.g. Twitter recently discovered that they were storing passwords in plaintext in logs; GitHub had a similar issue.

Take a look here: https://arstechnica.com/information-technology/2018/05/twitt....

Of course, a hash received from the client should be treated as a normal password, with all the usual good practices applied.


I'm curious, how is half-hashing the password different from really hashing it?

The best protocol I know of is to derive a signing keypair from your (salted, stretched) password, and store the public key on the server instead of a password hash. Then during login, the server sends a challenge to the client, and the client signs it. The server never sees any secret material at all. Keybase uses a version of this protocol.
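A sketch of that idea, assuming the third-party `cryptography` package for Ed25519; this is an illustration of the general protocol described above, not Keybase's actual implementation, and all names and parameters are assumptions:

```python
import hashlib
import os
from cryptography.hazmat.primitives.asymmetric import ed25519

# Derive a deterministic Ed25519 keypair from the salted, stretched
# password. At signup the server stores only the public key.
def derive_keypair(password: str, salt: bytes):
    seed = hashlib.scrypt(password.encode(), salt=salt,
                          n=2**14, r=8, p=1, dklen=32)
    private = ed25519.Ed25519PrivateKey.from_private_bytes(seed)
    return private, private.public_key()

salt = os.urandom(16)                # public, stored server-side
private, public = derive_keypair("pa55", salt)

challenge = os.urandom(32)           # server-issued, fresh per login
signature = private.sign(challenge)  # computed client-side
public.verify(signature, challenge)  # raises InvalidSignature if bad
```

The server never sees the password, the prehash, or the private key; it only ever checks signatures over nonces it generated itself.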

Unfortunately all the magical client side crypto in the world doesn't save you if the attacker can compromise your server and then send clients bad JS :p


Have you heard of this new technique called HTTPS?

