Google had the ability to enforce a JS-required rule on login at least 6 years ago and never used it until now. Without a doubt, it's being enforced for the first time due to some large account hijacking attack that has proven impossible to stop any other way. After so many years of bending over backwards to keep support for JS blocking users alive, it's presumably now become the weakest link in the digital Maginot line surrounding their network.
For the people asking for the risk analysis to be disable-able: you used to be able to do that by enabling two-factor authentication. You can't roll back your account to just username and password security, though: it's worth remembering that account hijacks are about more than just the immediate victim. When an account is taken over it's abused, and that abuse creates more victims. Perhaps the account is used to send phishing mails to your contacts, or financial scams, or perhaps it's used to just do normal spamming, which can - at enough volume - cause Gmail IPs to get blocked by third-party spam filters, and users' emails to get bounced even if their own accounts are entirely secure. The risk analysis is mostly about securing people's accounts, but it's also about securing the products more generally too.
Google too does some really funny things to make it nearly impossible to create and maintain anonymous accounts not tied to a phone number. Even when those accounts are just used for browsing and bookmarking, without sending any outgoing information.
>When an account is taken over it's abused and that abuse creates more victims.
If someone hijacks my browser through some clever JS API exploit and steals my credentials, what is Google's response? "Just use our 2FA." What about smaller websites that don't have resources to maintain 2FA? "They should authenticate through us." All roads seem to conveniently lead to centralization.
BTW, it is worth noting that the impact of a compromised account isn't nearly as significant if a single account doesn't hold keys to pretty much everything you do online. Somehow this is rarely factored in during such discussions.
Well yes, these kinds of accounts are highly susceptible to being bot accounts. What obligation does Google have to be the place for people's free, anonymous accounts? In any case, I haven't had a problem with the number of secondary accounts I've created that are tied to me only by another email address (which can point to another provider, like Yahoo).
> BTW, it is worth noting that the impact of a compromised account isn't nearly as significant if a single account doesn't hold keys to pretty much everything you do online. Somehow this is rarely factored in during such discussions.
How should this be factored into the current discussion? The use of JS is ostensibly to make it more difficult for automated hijacking to prey on users.
If that doesn't appear true for you, try over Tor and you'll see what happens to many people...
How does Google know you've hit a limit when you've registered new emails? And, not to belabor the point, but why should they be expected to give you an unlimited number of free accounts -- or, what service do you recommend that will be that generous?
Oh that’s right, they’ll never give the consumer power over their data because that’s Google’s entire value proposition.
But this is just cringeworthy. How do you propose getting your cash to said company? I mean, most methods will leak personal information the company could then use anyway.
I am not 100% sure whether they use geolocation, or just trigger this when your new IP doesn't match the last IP you logged in with.
I suspect the next steps in browser security will not be to blanket-deny scripting, but instead focus on containers and sandboxing to make script-based attacks less worthwhile.
I'm not going to say that providing 2FA is "free" in the time sense (both in implementing it initially and supporting people who lock themselves out) but on the surface 2FA requires just a library to verify 2FA codes and a column in your users table to store the shared secret.
If you don't know the technologies the website is built upon or how much it will be impacted by increased barrier of entry for users, this statement is baseless.
I mean, some sites do ask for way more than they need here, and that's often bad. In the end, I think it's a reasonable trade-off for end-user convenience, and I'm often that end user.
I've snooped around a bit myself and it doesn't seem like botguard does anything much more advanced than other fingerprinting solutions.
There didn't use to be any public bots that could beat the strongest version, and from a quick Googling around I don't see that that's changed. Someone took apart a single program manually, years ago, but the programs are randomly generated and constantly evolve, so that's not sufficient to bypass it automatically/repeatedly.
The point I'm trying to get across is that companies use these techniques because they are effective - it isn't as simple as "some junk that was beaten ages ago" - and the collateral damage is very small, relative to other techniques. Far fewer users run with JS disabled than the number of users who struggle with CAPTCHAs.
We can see the direction things are going with reCAPTCHA v3, which appears to be the logical end of the path Google started walking 8 years ago - reCAPTCHA v3 is nothing but risk analysis of anti-automation signals.
In other words, the bot developers are still getting through, and meanwhile it's the actual humans who don't want JS which get screwed. Reminds me of DRM... honest customers are the most inconvenienced, while crackers still break it.
There's no perfect solution, but any solution is still better than none. Why do you keep a lock on your door if I can break it in 30 seconds? Your computer is even in there! I could easily attach a keylogger, so why do you have a password at all if all I need to do is that? You aren't stopping me, thus any protection you add is meaningless.
> honest customers are the most inconvenienced
These days with browser/mobile sync, maybe it's actually possible. But like a synced password manager, it makes a primary account breach that much more devastating.
1. Education for and purchase of U2F keys
2. Key loss recovery mechanism
3. Key stolen defense (you can't just rely on the U2F key alone, there must be a pw or other type of second factor)
4. Widespread browser & device support (without it, a user/pass is required as a backstop)
Nevertheless, it is progress.
Other than StackExchange, I can think of no other major site that looked at the over-engineered protocol, the implementation headaches, the confusing user experience (the NASCAR board of provider logos), and said "yes, that is definitely what we want to have rather than a validated user email address."
How would that work for sites like Hacker News, Reddit, and Twitter? Does every single user have to preview and approve every single other user on the website? That doesn't scale at all.
Do you have a source for that? As far as I know, most web-based exploits came from external plugins such as Flash, PDF readers, video players, etc...
So are browsers...
> This is the login page - users are typing in a long term stable identifier already!
There are so many other considerations at work here though, and I can't imagine that they're not obvious to you as well? For starters, we're creatures of convenience, and this makes it significantly inconvenient to block google scripts on other websites even when not signed in. It also guarantees that you have the chance to produce a (likely unique) JS-based fingerprint of every google user that can then be used for correlation and de-anonymization of other data.
But really the most basic point that probably makes folks here suspicious: if this were really only about preventing malicious login attempts by bots, then why not give users a clear, explicitly stated choice: either JS or 2FA.
To use a bad car analogy, the in-car entertainment used to be just a dumb radio. Now that it's a computer connected to the main car network, it has a lot more potential to do things, whether it's a feature, bug, or an exploit.
The core issue is - it's all computers. The reason you can't reliably detect the difference between an average user using a computer to log in to your computer versus a fairly sophisticated computer using a computer to log in to your computer is that the transition between user and tool is not smooth. It is always going to be easier to find the various boundary points between user and tool than it is to construct a passable simulation of the user using that tool.
Yes, bot detection does make account hijacking attempts more expensive, but it makes all logins more expensive, and the rate of expense increases faster for you than it does the account hijackers.
So the way I understand it, it roughly works like this: JS gathers some set of information about the browser's environment (capabilities, network access, reaction to edge cases, …), sends it to Google, and Google decides whether to allow the user to continue authentication (by providing a crypto signature of request+nonce or something like that, or just by flipping a key in the session).
Most "mass bots" can be stopped by logging the access attempts on the server side, with plain old HTML on the client side.
That is to say, in the long run, it will be interesting if this actually reduces malicious use. Seems like it would be just as easy to avoid.
Admittedly, bot detection is really interesting to me as a subject. It's not the kind of thing you can throw infinite ML at and expect it to break even in terms of scaling; you need careful tuning and optimization baked in from the ground up, which means a fundamentally "manual" approach involving lots of lateral creativity and iteration. That creativity and one-upmanship (along maybe with the gigantic piles of money, because manual ;)) is what makes the field so interesting to me - but alas, I cannot pepper you with questions/topics/discussion/anecdotes/stories for two hours for obvious reasons :)
So, instead, a couple of questions, considering datapoints likeliest to benefit others, and perhaps (hopefully) provide some appropriately dissuading signals as well.
- How does Botguard detect Chromium running in headless X sessions? By looking at the (surely very wonky) selection of pages visited, or...? (Obviously IP range provenance is a major factor, considering the unusable experience Tor users reportedly have)
- Regarding the note about "bending over backwards", while playing with some old VMs very recently I observed with amusement that google.com works in IE 5.5 on Win2K but not IE 6 on Win2K3SP4 (https://imgur.com/a/zg7FoAW), possibly due to a broken client-side config, I'm not sure. In any case, I've also observed that Gmail's Basic HTML uses very sparing XHR so as not to stack-overflow JScript, so I know the bending-over-backwards thing has stuck around for a long time. Besides general curiosity and interest in this practice, my question is: I wonder if this'll change going forward? Obviously some large enterprises are still stuck on IE6, which is hopefully only able to reach google.com and nothing else [external] :)
- I wonder if a straightforward login API could be released that, after going through some kind of process, releases Google-compatible login cookies. I would not at all be surprised if such ideas have been discussed internally and then jettisoned; what would be most interesting is the ideation behind _why_ such an implementation would be a bad idea. On the surface the check sequence could be designed to be complex enough that Google would always "win" the ensuing "outsmart game", or at least collect sufficient entropy in the process that they could rapidly detect and iterate. My (predictable) guess as to why this wasn't implemented is a high probability of incurring technical debt, and an unfavorable cost-benefit analysis.
I ultimately have no problem that JS is necessary now; if anything, it gives me more confidence in my security. Because what other realistic interpretation is there?
I'm not going to discuss signals for obvious reasons. Suffice it to say web browsers are very complex pieces of software and attackers are often constrained in ways you might not expect. There are many interesting things you can do.
I have no idea how much effort Google will make to support old browsers going forward, sorry. To be double-super-clear, I haven't worked there for quite a while now. Over time the world is moving to what big enterprises call "evergreen" software, where they don't get involved in approving every update and things are kept silently fresh. With time you'll see discussion of old browsers and what to do about being compatible with old browsers die out.
Straightforward login API: that's OAuth. The idea is that the login is always done by the end user, the human, and that the UI flow is always directly with the account provider. So if you're a desktop app you have to open an embedded web browser, or open a URL to the login service and intercept the response somehow. Then your app is logged in and can automate things within the bounds set by the APIs. It's a good tradeoff - whilst more painful for developers than just asking for a username/password with custom UI each time, it's a lot more adaptable and secure. It's also easily wrapped up in libraries and OS services, so the pain of interacting with the custom web browser need be borne by only a small number of devs.
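The "open a URL and intercept the response" step is usually done with a loopback redirect: a sketch below, where the endpoint and client id are hypothetical placeholders, not any real provider's values:

```python
import urllib.parse
import webbrowser
from http.server import BaseHTTPRequestHandler, HTTPServer

AUTH_URL = "https://accounts.example.com/authorize"  # hypothetical provider
CLIENT_ID = "my-desktop-app"                         # hypothetical client id

def obtain_auth_code() -> str:
    """Open the provider's login page in the user's browser, then catch
    the authorization code when the provider redirects back to localhost."""
    result = {}

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            query = urllib.parse.urlparse(self.path).query
            result["code"] = urllib.parse.parse_qs(query).get("code", [""])[0]
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"Login complete - you can close this tab.")

        def log_message(self, *args):  # keep the console quiet
            pass

    server = HTTPServer(("127.0.0.1", 0), Handler)  # OS picks a free port
    redirect = f"http://127.0.0.1:{server.server_port}/callback"
    webbrowser.open(f"{AUTH_URL}?client_id={CLIENT_ID}"
                    f"&redirect_uri={urllib.parse.quote(redirect)}"
                    "&response_type=code")
    server.handle_request()  # blocks until the provider redirects back
    server.server_close()
    return result["code"]
```

The app then exchanges the code for tokens at the provider's token endpoint; note the login UI itself is never the app's, which is the whole point.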
Um ... this should terrify everyone.
'We had the power to impose this, and we graciously chose not to. You should be thankful for what we have done.'
No. Just ... no.
Everything Google does is a ploy to boost ad revenue. That's the whole business model.
> Without a doubt, it's being enforced for the first time due to some large account hijacking attack that has proven impossible to stop any other way. After so many years of bending over backwards to keep support for JS blocking users alive, it's presumably now become the weakest link in the digital Maginot line surrounding their network.
You're doing two things here:
1. Reasoning from no evidence, and ignoring a much simpler reason in the process.
2. Acting like the honest web users, the ones blocking malware-laden JS, are the ones who are wrong.
It's much simpler to conclude that Google engineers simply got lazy and decided to punt the hard work of security to some JS library, instead of looking at it honestly.
They're probably right that not running JS is privacy accretive, but only if you consider their individual privacy, and not the net increase in privacy for all users by being able to defend accounts against cred stuffing using JS. The privacy loss of one account being popped is likely far greater than the privacy loss of thousands of users' browsing patterns being correlated.
tl;dr: Good luck detecting and preventing automation of sign in pages at scale without robust JS based defenses. I think there's a shortsightedness and self-centeredness to a lot of these comments.
People aren't underestimating the risk to _their_ accounts, they are discounting the risk to _others_ accounts.
That is, they're essentially saying, 'well, other users chose to have bad passwords, so bully them'.
I think that's a fair viewpoint to have. We've entered a world in which computer literacy is a basic requirement in order to, well, exist.
That said, what's reasonable, and what actually occurs, are two different things. A company isn't going to ideologically decide "screw the users that use bad passwords" if it loses them money.
So we get _seemingly_ suboptimal solutions like this.
There are a few things needed for that to be a good argument
1) Their security really is so good (I'd bet it isn't. I saw a tenured security professor/former State Department cyber expert get phished on the first go by an undergrad.)
2) Google isn't improving their security posture on top of that (I'd be shocked if Google isn't improving theirs, and I'm certain having JS required to sign into gmail closes a major hole in observability of automation)
As to your point about computer literacy and existence, I think the sad truth is that computer engagement is required, but literacy is optional. When that's the case, large companies are in the position of having to defend even the least computer literate against the most vicious of attackers.
You're right on, but I wouldn't call it sad.
The population is expected to operate vehicles without putting others in danger, not credentialize in how cars work. There are endless amounts of things we could demand people spend their precious time deeply understanding. We just like to demand tech-savviness because it's self-aggrandizing.
Like everything else, the solution is to help people on their own behalf.
At an online casino I once worked at, we ended up generating random passwords for our users. We had to, because otherwise attackers would look up usernames in the large password dumps online and log in as our users. No amount of warnings on our /register page stopped password reuse. So we decided we could do better than that, and that "well, we warned you" was not an appropriate response.
If you look around at everyday objects, everything is designed to protect the user. But for some reason in computing we're still in the dark ages of snickering and rolling our eyes at users for making mistakes.
Exactly! We require "car literacy" in drivers before we allow them to use them. Pretty much every advanced economy has mandatory driver licensing.
A driver can trivially press a few levers and slam themselves into a barrier at 100mph. But they don't do that, because they know, through experience and education, that it's a terrible idea.
That's the exact opposite of the approach that would have cars restrict their own usage into a narrow set of patterns and refuse to function otherwise.
WRT the last half of your comment: I think that's reasonable. Generating random passwords for users is a fair approach.
Account security exists on a spectrum. I don't think anyone (reasonable) is arguing against that, we're talking about mutable state here, actual _actions_.
What I'm railing against, is this idea that every webpage on the internet needs to be behind a CAPTCHA that does a bunch of invasive data collection including probably asking the user to perform a Mechanical Turk task in order to _access a website_ without even logging in.
It happens all the time. A website doesn't like my IP block -> forced through a bunch of nonsense. The site operator probably isn't even aware because they're using an upstream service which does it for them.
2FA is a good defence against it, but lockouts are less so, as the attacker will be going broad and not deep (it could be a single request per user account).
Even as a newcomer without the right contacts on the black market you can get started with very little upfront investment, using services like https://luminati.io/ (they pay software developers to bundle their proxy endpoints within their apps).
But with security at this level, nowadays every added layer helps, even if it is not used in the initial authentication step. Think of classifying certain patterns in the attacks and retroactively de-authorizing after login, increasing the time-cost for the attacker.
1. Preventing interception of passwords on the wire
2. Allowing a tunable "difficulty" parameter which makes brute-force attacks cost ineffective
It seems to me that having a client-and-server protocol beyond just "POST this data here" can be more secure than sending a password to the server for verification...
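One concrete reading of point 2 is a hashcash-style client puzzle: the server tunes how many hash attempts each login attempt costs the client, while its own verification stays a single hash. This is only one interpretation of "tunable difficulty", sketched here as an illustration:

```python
import hashlib

def leading_zero_bits(digest: bytes) -> int:
    """Count leading zero bits of a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def solve(challenge: bytes, difficulty: int) -> int:
    """Client side: brute-force a nonce whose hash has `difficulty`
    leading zero bits - expected cost ~2^difficulty hashes."""
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce
        nonce += 1

def check(challenge: bytes, nonce: int, difficulty: int) -> bool:
    """Server side: verification is one hash, regardless of difficulty."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= difficulty
```

Raising `difficulty` by one bit doubles the brute-forcer's per-guess cost while leaving the legitimate user's single login barely slower, which is exactly the asymmetry being described.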
Isn't this solved by https? I have no idea, but I hope at least that https protects my passwords.
>2. Allowing a tunable "difficulty" parameter which makes brute-force attacks cost ineffective
I don't want to wait for a login more than a second. Actually, I don't want to wait at all.
>3. ... or rewrite their brute-forcer each time the JS-driven network communication channel is altered
>It seems to me that having a client-and-server protocol beyond just "POST this data here" can be more secure than sending a password to the server for verification...
Neither do I. But I also accept that, given the sheer volume of stolen creds and bots out there, sites that damage their bang/buck performance, even at the cost of very minor inconvenience to users, are likely to be targeted less frequently and in lower volume. Even if I wasn't begrudgingly willing to pay that price, I'd at least admit to the logic of making the process more time-consuming as a deterrent.
Do you log in that often?
"Think you are clever, eh, try this for size..." -- some attackers in response to being affected by your counter measures.
> 1. Preventing interception of passwords on the wire
It can, but challenge-response that isn't PKI based requires the remote side to have the secret stored or the local side to know how to generate the value that is stored instead, which goes against other recommended practise (with PKI the remote side can store the public key and ask for something to be signed with the private key).
Protecting passwords on the wire is better done with good encryption and key exchange protocols - in the case of web-based systems that is provided by HTTPS assuming it is well configured.
> 2. Allowing a tunable "difficulty" parameter which makes brute-force attacks cost ineffective
Could you give an example of that? If you are tuning difficulty based on the computation power of the other side, surely the other side could lie about being low powered and get an easier challenge?
A knowledgable attacker doing this would be safe: they'd make sure the interpreter was properly sandboxed (to avoid reverse hacking) and given execution resource limits (to avoid resource waste). Then if the site/app is important enough that they really want in, they modify their approach if the resource limits are hit.
> or rewrite their brute-forcer each time the JS-driven network communication channel is altered
If your method is only used by you (and you aren't a Google or similar so you are big enough to be a juicy target on your own) and you enter into this arms race you might find it takes so much resource that it gets in the way of your other work. You are only you, the attackers are legion: put one off and another will come along later. Also there is the danger in rolling your own scheme that you make a naive mistake rendering it far less useful (potentially negatively useful: helpful to the attacker!) than your intention.
If the method is more globally used then it is worth the attackers being more persistent.
> It seems to me that having a client-and-server protocol beyond just "POST this data here" can be more secure than sending a password to the server for verification.
It can, though often only against simple fully automated attacks. Cleverer automated attacks may still succeed, as may more manual ones, and targeted manual attacks will win by inspection & replication.
Or they get in through an XSS, injection, or session hijacking bug elsewhere (bypassing the authentication mechanisms completely) that you missed because you spent so much time writing an evolving custom authentication mechanism.
Have the server side not know the password
And be secure against replay attacks given attacker access to plain text.
The scheme is something like:
Server generates key (x, xG) where G is some elliptic curve base point. It stores x and sends xG to the client.
The client computes y = H(password) and sends yG to the server.
The server stores the shared secret x·(yG).
Server generates nonce r and sends r·xG.
Client computes y = H(password) and responds with y·r·xG.
Server verifies that the response equals r·(x·yG).
In this protocol, an attacker with access to plain text, even during setup, still can't do anything.
This method is weak against MitM, but that can be solved during auth by doing a fully ephemeral Diffie-Hellman exchange there.
I concocted this scheme in like 10 minutes, so there might be mistakes, and it is probably suboptimal.
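As a sanity check, the scheme above translates almost line-for-line into code. Here it's rendered with exponentiation in a multiplicative group mod p instead of elliptic-curve scalar multiplication - same algebra, and the toy 61-bit prime is for illustration only, not secure parameters:

```python
import hashlib
import secrets

# Toy group: a real deployment would use an elliptic curve or a large
# safe prime. P here is the Mersenne prime 2^61 - 1.
P = 2305843009213693951
G = 3

def h(password: str) -> int:
    """y = H(password), reduced into the group's exponent range."""
    return int.from_bytes(hashlib.sha256(password.encode()).digest(), "big") % P

# --- enrollment ---
x = secrets.randbelow(P - 2) + 1        # server's long-term secret
server_pub = pow(G, x, P)               # analogue of xG, sent to client
y = h("correct horse battery staple")   # client: y = H(password)
client_pub = pow(G, y, P)               # analogue of yG, sent to server
shared = pow(client_pub, x, P)          # server stores x·(yG) analogue

# --- one authentication round ---
r = secrets.randbelow(P - 2) + 1        # server nonce
challenge = pow(server_pub, r, P)       # r·xG analogue, sent to client
response = pow(challenge, y, P)         # client: y·r·xG analogue
assert response == pow(shared, r, P)    # server check: r·(x·yG) analogue
```

The check works because both sides compute G^(x·y·r); the server never sees y itself, only G^y, matching the "server doesn't know the password" goal.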
Client-side JS will always be readable; even if you obfuscate it, you can't trust that it will never be decompiled.
But server-side JS never has to reach the client; it can be used to dynamically generate basically anything.
Why is it not sufficient simply to throttle logins at the server?
Throttling by IP address may have worked 10 years ago; unfortunately, it's not an effective measure anymore.
Modern cred stuffing countermeasures include a wide variety of exotic fingerprinting, behavioral analysis, and other de-anonymization tech - not because anyone wants to destroy user privacy, but because the threat is that significant and has evolved so much in the past few years.
The most drastic example I can think of was an unverified rumor that a certain company would "fake" log users in when presented with valid credentials from a client they considered suspicious. They would then monitor what the client did - from the client's point of view it successfully logged in and would begin normal operation. If the server observed the device was acting "correctly" with the fake login token, it would fully log it in. If the client deviated from expected behavior, it would present false data to the client & ban the client based on a bunch of fancy fingerprinting.
Every once in a while, someone will publish their methods/software; Salesforce and their SSL fingerprinting software comes to mind: https://github.com/salesforce/ja3
I don't do much of this sort of thing, but numerous things come to mind. Aim to identify and whitelist obviously human browsers, blacklist obviously robot browsers, and mildly inconvenience/challenge the rest.
For example, an obvious property of a real human browser is that it had been used to log in successfully in the past. Proving that is left as an exercise for the reader, though it inevitably requires some state/memory on the server side.
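One way to sketch that proof is a "known device" cookie signed after a fully verified login, so the only server-side state strictly required is the signing key itself. This is an illustration of the idea, not any vendor's actual mechanism:

```python
import hashlib
import hmac
import os

DEVICE_KEY = os.urandom(32)  # hypothetical server-held signing key

def mint_device_cookie(user_id: str) -> str:
    """Set after a successful, fully verified login; the signature is what
    proves this browser has logged in as `user_id` before."""
    sig = hmac.new(DEVICE_KEY, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}:{sig}"

def is_known_device(cookie: str, user_id: str) -> bool:
    """On a later login attempt, a valid cookie lets the server whitelist
    this browser and skip the heavier challenges."""
    claimed, _, sig = cookie.partition(":")
    expected = hmac.new(DEVICE_KEY, user_id.encode(), hashlib.sha256).hexdigest()
    return claimed == user_id and hmac.compare_digest(sig, expected)
```

A real deployment would bind the cookie to an expiry and a device identifier rather than just the user id, but the shape is the same.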
There were automated tools that did this too!
So for example LinkedIn has a breach, which reveals to evildoers that user 'firstname.lastname@example.org' uses the password 'smith1234' then they test that username and password in Amazon, Netflix, Steam and so on.
They only make one attempt per account, because they only have one leaked password per account. Hence, throttling per account isn't an option.
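A countermeasure that does see this pattern is to aggregate failures by source rather than by account, flagging sources that touch many distinct usernames, each perhaps only once. A minimal sketch of that idea:

```python
from collections import defaultdict

class StuffingDetector:
    """Flags sources that fail logins across many *distinct* accounts -
    the credential-stuffing pattern per-account throttling cannot see."""

    def __init__(self, max_distinct_accounts: int = 10):
        self.max_distinct = max_distinct_accounts
        self.accounts_per_source: dict[str, set] = defaultdict(set)

    def record_failure(self, source: str, username: str) -> bool:
        """Record one failed attempt; returns True once the source has
        failed against more than `max_distinct_accounts` usernames."""
        self.accounts_per_source[source].add(username)
        return len(self.accounts_per_source[source]) > self.max_distinct
```

In practice "source" would be a fingerprint rather than a bare IP, since stuffing operations rotate through residential proxies - but the aggregation axis is the point.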
Given that they aren't pushing a new standard for what has already been a problem for a long time, while also introducing a vector for abuse both to and from it, Google can be criticized for both of those sins far more.
I don't think it's fair to blame them for the facts that most folks are not willing to give up passwords yet. Given that passwords are the current reality, shouldn't they do everything in their power to make them as secure as possible?
Sometimes one just wants an accurate depiction of a situation -- and those might still be totally accurate characterizations...
Edit: because users who know to use long passwords and 2FA do exist and don't need all that extra security stuff ...
It sounds a bit like what Gmail's doing with their "allow less secure apps" login option, except that's more for allowing IMAP logins using password instead of OAuth.
Usually the login is in incognito or guest mode, and even from different locations and machines. Google asks for a second factor (I don't have it enabled on my accounts) like phone verification for my usual accounts (which have not-so-complicated passwords), but not for the one with a complex password. So I think the level of extra steps/security is linked to how complex your password is. Not sure if this is a good thing or bad, but I hope they continue basing their security measures on the security measures you take yourself.
The majority of Google's customers also don't pay for an account.
Passwords are effectively obsolete and everyone should be using multi-factor authentication of some kind. Keys with passphrases. 2FA. Whatever.
Making 2FA mandatory would be substantially more effective than bot signaling.
> tl; dr: Good luck detecting and preventing automation of sign in pages at scale without robust JS based defenses. I think there's a shortsightedness and self-centeredness to a lot of these comments.
If they were, 2FA would be mandatory with additional phone-based (e.g. SMS) verification whenever you try to log in from a new geographic area. That would stop anything short of a targeted hack.
Instead, they created an attack on the bot maker's profit margins. Cloudflare, Google, et al. are really just trying to increase the cost of making bots. They are not really trying to _stop_ bots.
Stopping bots requires making unpopular choices.
Note that I do use JS, because it makes life easier. But you've got to realize that not using JS will at some point protect you against an XSS vuln. They are that prevalent.
It's still underestimating.
That's quite the hand-wave. How do you even measure privacy loss? And given that browsing history is not in your inbox, why are you so confident that one compromised email account is a bigger deal?
Either the dev team has just given up on quality or they're intentionally goading me into installing Chrome. I'm not going to play that game -- at this point Thunderbird works better.
Even better if you set up the majority of your non-security-essential mail to be at your own domain, hosted by Fastmail/etc. Then you can easily change your email provider and your contacts don't even care. I've yet to implement this in my own life; I just switched to Fastmail, so I can't speak from personal experience on the domain portion of it.
NOTE: I mentioned non-security-essential email in reference to things like your bank login or things that could threaten your life essentials. I say this because theoretically (and it has happened before), using your own domain increases the attack surface area. My personal plan is to set up custom domain email with Fastmail, but still use the plain email@example.com for my security-focused emails. The majority of my email will still be based on my custom domain for easy portability, but I plan to avoid that for my bank, for example... assuming Fastmail lets me.
Also, FastMail allows for subdomain handling. I use this feature with nearly every site. You can have *@<YourFastMailId>.<YourDomain>.com route to <YourFastMailId>@<YourDomain>.com just as you'd expect. The way this handling works is even configurable.
Using FastMail-specific features will lock you into this specific vendor once again, defeating one of the main reasons to switch in the first place!
Using a subdomain for catch-all is great because spammers can’t easily discover and flood the subdomain.
I know a long time ago you could set up a Google account using a non-GMail email address but I'm not sure if that's even a thing anymore. That's what I want though. Keep the email address with my own domain that I've used for 17 years and just have a regular old Google account using that email (and keep all my Google services and purchases associated with it).
Google has been absolutely terrible to Google Apps for Your Domain users (who were often Google's biggest supporters back in the day). They've been shoved into this weird second class status where their Google accounts only partially work with Google services. I completely regret ever setting it up.
I use Google services heavily at work, all on a Google account that was created with my work email address. And we are not a Google shop; my employer's email is self-hosted Exchange.
I still can't believe how fast the UI is. It's by far the fastest web app I've ever used, and the same goes for the service in general.
Seriously, just ditch Gmail now, the alternatives are great.
I've looked at nextcloud, but IIRC, you have to have the whole suite installed, right? I'd love a way to just use the calendar function.
That being said, FastMail is also the leading developer/champion of a new mail standard called JMAP, which supports both labels and folders. I suspect, therefore, if it takes off, they may consider supporting labels themselves.
That’s what I eventually switched to and it works fine.
It does let you: you can create as many aliases as you want (I'm assuming) on any of their domains or your own.
I wrote software using Cocoa about a decade ago (so I may be out of touch), and it was clear how much thought and effort had gone into making the user interface responsive. And it generally shows.
The idea that you would just give up on that precedent is baffling. And let's face it, email's important but it's not rocket science.
Thunderbird might be superior, but I really like that App because it's so light and fast.
This could really just be that part. I have a hard time imagining explicit sabotage of FF on the Gmail frontend. The likeliest explanation is that perf testing and the like happens only in Chrome.
It's tricky to share a screen recording because there's personal information. But I just did two for my own curiosity. From a fresh load, once the "Loading Gmail" screen has gone away, it took 8 seconds and 11 seconds respectively from clicking 'Compose' to having a new window open.
Maybe there is variability. There are a million combinations of factors out there. I suppose as an engineer you make the trade off of "do I hope for the best case" vs "do I make something that works for a broad audience". The previous version shows that they can make something that works for my own anecdatapoint if they want to.
It's just horrible to use in firefox (in arch linux) and I'm currently looking for a new provider.
I might just go all in and use protonmail.
>ten seconds to load your inbox
>16 GB, i7, SSD, 100 MB/s internet etc.
I don't use Thunderbird myself - I was just following on from the discussion with the OP, who did use it. However, I've yet to find a client I like, so I'm genuinely interested in any suggestions you might have.
I used TB until two years ago, but I gave up on it because of its unfixed bugs and quirks. I do prefer graphical clients, but not if they are clunky or buggy.
I used Sylpheed and Claws for years, but Sylpheed locks (or used to lock) the UI during fetch (unacceptable IMHO), while Claws has some critical bugs in the filter/rule logic that made me lose mail on several occasions by refiling into the wrong folder while processing a lot of messages. If you aren't a heavy filter user you might be fine with it, though; I think Claws gets a lot of things right.
KMail wasn't bad when I used it, but it was too long ago to make an honest comment today.
I am now using mutt/notmuch/mbsync to avoid having to go through their horrendously slow web interface, and eventually to move away from Gmail completely (probably to ProtonMail or Fastmail).
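For anyone curious what that setup looks like, here is a minimal ~/.mbsyncrc sketch for Gmail over IMAP (account names, paths, and the PassCmd are placeholders; recent mbsync versions use Far/Near in the Channel section):

```
IMAPAccount gmail
Host imap.gmail.com
User you@example.com
PassCmd "pass show mail/gmail"
SSLType IMAPS

IMAPStore gmail-remote
Account gmail

MaildirStore gmail-local
Path ~/mail/gmail/
Inbox ~/mail/gmail/INBOX
SubFolders Verbatim

Channel gmail
Far :gmail-remote:
Near :gmail-local:
Patterns * !"[Gmail]/All Mail"
Create Near
SyncState *
```

mutt or notmuch then just reads the local Maildir, so the slow web interface never enters the picture.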
Where? The only one I can trigger that does any kind of network is in the inbox, and that's only to get some icons. The text for the options is already loaded.
When we have time we'll have to trace through what it's doing and what components of RFP are causing the failure. (If anyone wants to do that and report in the bug, we (Mozilla/Tor) would much appreciate the contributions!)
In the past few months all our domestic devices have gradually hit that notional condition with Google Search. All the laptops one by one, and then last night my phone. My wife's phone is the only one that can still use their search without a ten-round Recaptcha challenge.
As each device was locked-out from Google I switched the default over to DDG.
I don't believe this to be done with that goal, but it is an unfortunate side-effect.
But I encourage everyone to consider a darker reality: that centralized services by large companies are becoming more and more necessary in a world where it's becoming easier and easier to be an attacker. The internet is kinda broken. Like how half the ISPs in the world don't filter their egress for spoofed IPs because there's no real incentive. That every networked device in every household could unknowingly be part of a botnet because we aren't billed for externalities.
Yeah, maybe it's kinda spooky that now ReCaptcha v3 wants to be loaded on every page. But is that really the take-away? What about the fact that this is what's necessary to detect the next generation of attacker? That you can either use Google's omniscient neural-network to dynamically identify abuse or you can, what? Roll your own? What exactly is the alternative?
Do HNers think this stuff is a non-issue because nobody has ever attacked their Jekyll blog hosted on GitHub Pages (btw, another free service by a large company)?
So no: the take-away is that this improves reCAPTCHA. A side remark to that is that it also improves Google's ability to track you, and hampers your ability to fight that.
I miss old captchas.
reCAPTCHA v4: please click on all the pictures of insurgents.
In my experience (it is already the case with Gmail and Outlook as of now), this means I will not be able to log in to my account when on holiday in another city or country, when I use a borrowed device, when I am behind VPN/Tor, etc., unless I give Google my phone number and can afford to get a call/SMS at that point in time to unblock the account.
It should be my choice, as it is my account that is at risk, to turn such dubious security measures on or off. It is fine to have these features on by default, but I would like to turn this particular feature off for my account. Any clever "risk assessment" where a computer decides without an option to turn it off/on is problematic.
I sometimes have the feeling they know this and it is on purpose. They don't just want to collect data; they want to collect high-quality data, and these measures help clean their data sets at the time of collection.
My experience with changing devices or cities (or god forbid both at once) is that it always requires further authentication, and often fails outright. I have an account which is simply disabled because I didn't set a recovery phone # or email and then changed machines. Everyone I've ever discussed the topic with has described similarly pervasive problems.
Which makes me wonder: what's so different between usage patterns? Obviously Google's auth approach is working for lots of people, so what's distinctive about this block of users it's constantly failing for?
That's exactly it, there's already two Gmail accounts from high school I can't access despite knowing the passwords.
Google™ employees have come in and found mind-bending ways to excuse it when I've mentioned this before.
But just as you said, they are giving it away for free, so it is technically theirs; we are not paying customers (except for G Suite users).
Most people don't disable JS entirely, but use something like uMatrix or noscript. It takes more work, but you can turn off a significant number of things that just don't need to be executed and get around a lot of annoying modals and paywalls (or see a lot of blank pages; that happens a lot too).
Some states have mitm certs on all their domestic machines but (hopefully) not much competence except on whatever schedule they buy updates.
I would be implementing a U2F soft client in JS if I were Google. IMO you need a private key that a state would have to retrieve by tampering with the JS, one that isn't being sent over the wire with every connection. (Just to give them their first level of headache when it comes to transitioning from observing to impersonating.)
I don't particularly care that Google isn't letting you sign in without JS, but the message is just plain wrong.
If a site uses HTTP, then hashing the password client-side and sending it up to the server is equivalent to sending a cleartext password. If an attacker can already read your traffic, what is stopping them from using your password's hash to log in to your account?
It stops a compromised server from silently leaking unhashed passwords.
It makes password hashing user auditable.
You could even use a challenge-response model to stop the hashed password from being usable to log in at all. Here is a primitive scheme for such a model (public-key crypto probably enables more clever schemes, not sure):
- Upon signup, generate hashes of "$password$site$i" for i in 1 to 1000. Send these to the server and have the server hash them again.
- Upon login, after the user has entered their password into the box, send an integer i from 1 to 1000 to the browser, and have the browser send back the hash of "$password$site$i".
Now a compromised hash can only let you log in 1 time in 1000. Combine that fact with the other available signals for "is this who we think it is" and you should be able to reject people who stole the hash reasonably reliably. Meanwhile since you are still hashing the password on the server (again) you have lost literally nothing but a tiny bit of computation time.
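The scheme above can be sketched in a few lines of Python. This is a toy, as the author says: SHA-256 stands in for a proper KDF, there are no salts, and the 1000-slot table is illustrative.

```python
import hashlib

def h(s: str) -> str:
    """Toy hash; a real design would use a slow, salted KDF."""
    return hashlib.sha256(s.encode()).hexdigest()

# Signup: client precomputes one derived hash per challenge index.
def client_signup(password: str, site: str, n: int = 1000) -> list:
    return [h(f"{password}{site}{i}") for i in range(1, n + 1)]

# Server stores only the re-hashed values, never the client hashes.
def server_store(client_hashes: list) -> list:
    return [h(x) for x in client_hashes]

# Login: server challenges with index i; client answers from the password.
def client_login(password: str, site: str, i: int) -> str:
    return h(f"{password}{site}{i}")

def server_verify(stored: list, i: int, response: str) -> bool:
    return h(response) == stored[i - 1]
```

A response sniffed for index i only works again if the server happens to re-issue challenge i, which is the 1-in-1000 property the comment describes.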
There's nothing stopping you from hashing your own passwords client side and sending your bcrypt hash up to the server, except that some sites still truncate passwords to 32 or 16 chars, etc.
When you need that level of security, client-side hashing will not be as good as the dedicated HSMs that many services now use for authentication.
Writing your own crypto flows can be extremely dangerous as you open yourself to all kinds of side channel attacks.
As for writing my own crypto: indeed, if anyone actually used the scheme I suggested, they would be making a mistake. I wrote it not to be used but to demonstrate, in an easy-to-understand way, that we can do better. Unlike me, Google has the resources to read the papers, do the math, and implement this carefully and properly.
Keywords for how to do it properly include "zero knowledge password proof" and "password-authenticated key exchange".
PS. It's irrelevant to this conversation, but putting all my passwords into one program has always struck me as a monumentally stupid idea. I use one for passwords I don't care about, I memorize unique passwords for passwords I do care about.
Your scheme is just a weak salting technique. You'd be better off with just using a longer salt and hash function.
- That is auditable - it is impossible for a malicious site to do so without risking being caught.
- The HTML/JS can be served from static cloud storage that is far less likely to be hacked than the server running a DB verifying passwords.
Hardly. Minification and obfuscation are trivial, and you can ensure the output is always different in order to defeat auditing. Not great for caching obviously, but 'auditability' is not achievable if the server is determined to fool you.
> - The HTML/JS can be served from static cloud storage that is far less likely to be hacked than the server running a DB verifying passwords.
Passwords are simply not where you want to leverage your security. If you can find a documented example of a real threat that this approach would have mitigated, then I'll take it seriously.
The downside is not a tiny bit of computation time. It's also increased latency for the customer.
If the server is compromised, then there is no protection of your cleartext password at all. This is because the entity that compromised the server can replace the original JS with anything, including new JS that sends your cleartext password off to their own host as you type each character.
The only activity on your part that can save you against compromised servers is having a unique password per server (i.e., not reusing any passwords).
An example of where it might work is in an app, where you're getting the client code from a separate channel like an app store.
About client-side benefits: I'm not advocating for JS in the browser, but there are benefits to doing some work client side.
Ideally, if TLS were being MITMed somehow, such as via a dodgy root cert, it would shield the user's plaintext password so it could not be used to log in to other services. The problem is that as soon as there is a TLS issue, an attacker can modify the JS to just send the password in the clear. It would really require code that can't be modified by the attacker, which means there would have to be some sort of browser support. Otherwise it does nothing against the attack it is meant to protect against.
The main benefit is offloading some of the computational workload onto the client's machine. This could allow you to increase the work required to brute-force the password hashes if your database leaks (i.e. increase iterations or memory requirements).
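A minimal sketch of that offloading idea, assuming the expensive KDF runs in the client and the server only applies a cheap re-hash so a DB leak doesn't expose a directly usable login token (function names, the salt construction, and the iteration count are all illustrative):

```python
import hashlib

def client_prehash(password: str, email: str) -> bytes:
    # The slow, CPU-bound step runs on the client's machine.
    # Salting with the site and account keeps the prehash unique per login.
    salt = f"login:example.com:{email}".encode()
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

def server_store(prehash: bytes) -> bytes:
    # The server still hashes again (cheaply), so the stored value
    # cannot be replayed as the login token if the DB leaks.
    return hashlib.sha256(prehash).digest()
```

The client does the 200k iterations; the server's cost per login stays negligible.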
Your last argument is security through obscurity: if exposing how you hash makes it easier to brute-force the passwords, your password hashing sucks.
Pre-hashing doesn't prevent an attacker from stealing your account if it can read the communication, but it prevents them from having your password and using it everywhere else where you might reuse the password or a permutation of it.
Hashing the password before sending it doesn't really help you much - the naïve approach is vulnerable to "pass-the-hash" (where you basically send the hash instead of the password as the authentication token). The secure approach involves either some kind of challenge-response or a nonce salt, but these aren't as easy to implement correctly.
TLDR don't do that, send passwords over SSL and use a good password hashing algorithm on the server like BCrypt.
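A sketch of that server-side recommendation, using Python's standard-library scrypt as a stand-in for bcrypt (both are deliberately expensive password hashes; the cost parameters here are illustrative):

```python
import hashlib
import hmac
import os

def hash_password(password: str):
    """Return (salt, digest); store both, never the password itself."""
    salt = os.urandom(16)  # random per-password salt
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # Constant-time comparison avoids leaking match length via timing.
    return hmac.compare_digest(candidate, digest)
```

The password travels inside the TLS tunnel, and only the salted, slow hash ever touches disk.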
None of bcrypt, scrypt, or Argon2 uses them, and they are not materially worse for it.
Why would you want to see the actual user password if you can just not see it?
If you can see a password, you can leak it by screwing up in any number of ways. If you never see the password, you just can't leak it.
E.g. Twitter recently discovered that they were storing passwords in plaintext in logs; GitHub had a similar issue.
Take a look here: https://arstechnica.com/information-technology/2018/05/twitt....
Of course, a hash that you receive from the client should be treated like a normal password, including all good practices.
So, there are properties that differentiate "password" and "5f4dcc3b5aa765d61d8327deb882cf99", even if for the server it's all the same.
And if you don't trust HTTPS to protect sensitive information, why would you send auth cookies over it? They have virtually as much power as the password that was given in exchange for them in the first place.
There is no reason you can't also salt on the client. Salts do not need to be secret. The substantial constraint you outlined in your comment isn't a problem.
So, not cleartext over the wire then.
Blizzard Entertainment does half-client, half-server hashing, which is rather clever; one of the few examples where client-side hashing makes sense.
The best protocol I know of is to derive a signing keypair from your (salted, stretched) password, and store the public key on the server instead of a password hash. Then during login, the server sends a challenge to the client, and the client signs it. The server never sees any secret material at all. Keybase uses a version of this protocol.
Unfortunately all the magical client side crypto in the world doesn't save you if the attacker can compromise your server and then send clients bad JS :p