Folks complaining about how this doesn’t solve the “I don’t trust Google” threat model are missing the point - this is a compliance feature so you can work with PII and still comply with regulations like PCI that require you to encrypt everything in transit and at rest.
Previously you’d need an expensive third party product like Virtru, so it’s great news for users that this is just a bundled feature now.
“Security theater” is probably fair, as is much of the compliance regime. However this probably does make it harder for a back-office admin to accidentally/absentmindedly save files with PII to their machine in breach of policy, as the UI makes it clear the files/content are Top Secret.
But if it is security theater, how is that compliant with regulations? Hopefully regulators can see through the curtain, or can be shown how to. I guess plenty of people will be monitoring the packets transmitted over their networks.
If I got it right, Google generates a key and sends it to the external service that stores the user's keys. That service uses a Google API, presumably with both Google's and the user's keys [1], to encrypt the message (and this is quite disturbing: imagine if every company required key management services to use its own API - Office365, Apple, second-tier mail providers, etc.), and finally Google stores the encrypted message. The procedure to decrypt is similar.
At first glance Google is not able to see the message, except that they provide the browser to most people and the OS too (Chromebooks and especially Android). However, we're back to monitoring the network packets for suspicious traffic to Google.
What I'm more concerned about is regulatory capture through compliance. If E2E messages become the new normal, either clients like Thunderbird (desktop) and K9 (mobile) implement an interoperable E2E system, or sooner rather than later it will be impossible to use them to mail customers and maybe friends too. The same goes for the countless IMAP / POP3 / SMTP libraries for a lot of different languages: they eventually have to display the message and encrypt it before sending it.
Of course I'm all for encrypted email, but let's hope everybody can jump on it no matter the mail client they use. I'd hate for the only choices left to be GMail, Apple and Office365 (if they're even able to talk to each other).
> But if it is security theater, how is that compliant with regulations?
I think you are fundamentally misunderstanding what these regulations are. Regs get defined, then someone makes sure that the boxes are checked.
They are not the same thing as (often being orthogonal to) having a strong security posture, that being far too dynamic and technical to encode in regulations.
> that being far too dynamic and technical to encode in regulations
First, that would require the regulating body to actually know what security is and how it works in the real world, in real companies.
Then, it would require having an open mind regarding the risks and forgetting about 30 years of long-gone history. If you look at ISO 27002 (derived from the British BS 7799 30 years ago) you will note that it addresses very general ideas without a link to the reality of the field.
If we had 5 rules that address real-world issues we would be light years ahead. Something like "a patch from the vendor must be applied within 24 hours" (yes, once in a blue moon it will break things and slow down some fantasy "quarter end"), "MFA on all accounts used by humans" and 2-3 more like these.
Just ask the people doing actual security from 9 to 12 and then "compliance" from 13 to 17 - two totally disconnected activities.
What a pointless display of security theater. As long as both end-points are Google, it doesn't matter. It's like a centralized company using a blockchain instead of a database. Yeah, neat, you implemented it... but it doesn't get you anything except press.
This is my feeling about... well, basically every "end-to-end encrypted" product. With Signal, both endpoints are Signal-controlled (yes, the apps are open source, but at least on the App Store and Google Play you can't verify that the open source code is what's shipped). WhatsApp also claims to be "end-to-end encrypted", but there the app isn't even open source so both ends are just 100% Facebook-controlled without even a facade of transparency. Same with Apple's end-to-end encrypted stuff, where everything is encrypted on their servers but for all I know they can just ask my device that's running their closed software to send them information.
I don't understand the hype around end-to-end encrypted software where you have to trust the organization behind the software to not steal your data through their complete control of the endpoints.
You are correct to identify that end to end encryption without control of the software is not perfect.
However, you have reduced the attack surface to the organisation distributing a signed and compromised version of their software to your phone. This is much less likely than any admin in the company looking at your stuff, or anyone who can hack into the company.
That's not a bad point. That moves the conversation from "which organisations must we trust" to "which individuals in the organisations must we trust".
After thinking about it for a bit, I find that argument rather convincing in the case of Signal: very few people would be in a position to add code to the distributed binary without it ending up in the git repo. So with E2EE, we still have to trust the organisation overall, but we don't have to trust any individuals with access to the servers, and the open-source nature lets us avoid trusting any of the individual app developers other than the person/group responsible for producing the final binaries.
It's slightly less convincing in the case of WhatsApp or this GMail thing, though. A back-door doesn't have to avoid ending up in the git repo; it just has to make it past code review. So in that case, with E2EE, we still have to trust the organisation overall and the individuals with access to the app source code, but we avoid having to trust any individuals with access to the back-end.
Still a win I suppose. Just a pretty far cry from how I usually see it advertised, which is more like "thanks to E2EE you don't even have to trust them".
Just beware that there are Signal forks with modifications like being able to ignore delete commands. That's the downside of open source: people can easily modify programs. It's also why Signal keeps tight control over their servers. I'm happy that Signal has things open source, and I think it is the better option, but I want to point this out because we need to recognize that the problem itself is over-constrained and there are no globally optimal solutions. It then becomes easy to justify several solutions and have us fight and go nowhere, because we are arguing with different weighting criteria than the person we're arguing with. Those discussions are fruitless because we're speaking different languages despite being able to understand one another.
And just to be clear, Signal says that they try to reduce the trust on them to as little as possible. Not zero trust/trustless. They sell "low trust" instead.
I don't really see modified clients as having any impact on the security aspects. From a security perspective, once you have sent something, it is sent. There is no un-sending. A recipient running a completely unmodified client might have simply been on their phone at the time and seen the message notification come up, or the notification might not have disappeared when the message was deleted (this seems to happen with Telegram IME, notifications for deleted messages sometimes linger), or the recipient might have set up a computer to screenshot every incoming message with an unmodified client.
I don't understand what "low trust" means. It makes sense to talk about removing trust for individual employees in the organization, but when we're using a client distributed as a binary with no way to verify the source code used to compile it, we're forced to place 100% of our trust in Signal as an organization. Is that "low trust"?
> I don't really see modified clients as having any impact on the security aspects.
Your messages are only as secure as the least secure part of the network. In an absurd example, that other client could take your messages and publish all conversations straight to Twitter in the public. Effectively rendering the encryption pointless. While this is an absurd example, there are things people do that aren't drastically different, including unencrypted plain text backups.
> I don't understand what "low trust" means.
Trust isn't a binary option of "I trust this person/thing/group" vs "I don't trust it." There is a spectrum. The question is not "do you trust x?" but "how much do you trust x?" This is more obvious with people you deal with in your daily life. There are people you trust with some things but not others. I am willing to bet that you don't have two sets of people in your life: those that you're a complete open book with and those you are a closed book with. You share different pages with different people, and this is okay. Signal's mission is to read as little of that book as possible while maximizing your security and privacy. That is low trust. I'm actually not even aware of a zero-trust system in existence. All the ZKPs I know of still require trust in the setup process, so they aren't completely trustless. Either way, you still need to trust that things aren't improperly implemented. Even if you check it yourself, you have to trust that you yourself didn't make a mistake. Trust is not binary but a spectrum. There is no global optimum, and we need to be aware of the trade-offs being made.
> Your messages are only as secure as the least secure part of the network. In an absurd example, that other client could take your messages and publish all conversations straight to Twitter in the public.
Wait, are you talking about malware clients now? Because the original example was a client which does what its user wants to. Nothing can save you if the human at the other end wants to do something evil with the messages you send them, you don't need to use a modified client to share screenshots on Twitter.
> [The trust thing]
I suppose I don't see what the difference is though. Signal has complete control of the app, which means they have full access to all my messages if they want. Where does the "low trust" come in? How is it different from, say, Facebook Messenger, where Facebook also has access to all my messages?
> Wait, are you talking about malware clients now?
Not necessarily malware, but borderline malicious. I would consider someone uploading our chats in plain text (even to their backup) a "malicious" actor, even though they are probably naive. The point I'm making here is that modified clients can be harmful to _your_ security and that open sourcing a program makes it easier to modify in a way that does this. There are trade-offs here.
> I suppose I don't see what the difference is though.
I'm sorry, you don't see the difference between a binary outcome and a spectrum? There are just very few things in the world that are actually binary. People trying to convince you that they are, are typically bad actors.
> Signal has complete control of the app, which means they have full access to all my messages if they want.
Under what mechanism? The upside of an open-source app is that you can actually verify that they don't have such a mechanism. That's where the low trust comes in: they show evidence for their claims. They release court documents detailing exactly what they've released to governments[0], in combination with the ACLU. Which now means the ACLU is putting their reputation on the line as well. There are a lot of third-party actors putting their reputations on the line here, and Signal goes out of its way to demonstrate that they have nothing to hide. "Don't trust us, this is all the code here. Check it yourself."
> at least on the App Store and Google Play you can't verify that the open source code is what's shipped
Is there a reason Google and Apple couldn't change their deploy flow to pull from a git repository? As in:
1. Developer pushes to a specific remote (or pushes a specific tag or branch)
2. Apple/Google run the build on their systems, sign it, etc
3. If the build passes, the app deploys and the published app is annotated with the specific git hash (which can also be displayed in the app store)
For open source, it gives the public verification ability.
For closed source, it is still useful for internal verification. I work adjacent to some mobile teams and most still do builds on a developer's desktop and upload that. I am working towards getting CI for everything, partly to reduce dependence on any one developer's system (e.g. I hear things like "Only Tom knows how to do builds to give to Apple, but he's out today") and partly for security. As much as I believe we don't have developers with malicious intent, there's still no way to know that a build uploaded from a developer's desktop really is what's in source control. A rough sketch of the verification step I have in mind is below.
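Something like this (Python; the build command is a placeholder, not any particular project's tooling):

    # Rough sketch: build from a pinned git commit and record the artifact hash,
    # so the store listing (or an auditor) can tie the binary back to the source.
    # The build command below is a placeholder, not any real project's tooling.
    import hashlib
    import subprocess

    def build_and_fingerprint(repo_url: str, commit: str, artifact: str) -> dict:
        subprocess.run(["git", "clone", repo_url, "app-src"], check=True)
        subprocess.run(["git", "checkout", commit], cwd="app-src", check=True)

        # Placeholder build step -- substitute the project's real, pinned toolchain.
        subprocess.run(["./build.sh", "--release"], cwd="app-src", check=True)

        with open(f"app-src/{artifact}", "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()

        # This pair is what the store listing could display next to the app.
        return {"git_commit": commit, "artifact_sha256": digest}

Anyone re-running the same steps with a reproducible toolchain should land on the same SHA-256, which is what would make a published hash meaningful.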
> Is there a reason Google and Apple couldn't change their deploy flow to pull from a git repository?
Because doing so isn't something they'd realistically care about. They'd still have to maintain the existing flow for closed-source apps (the vast majority of them) that don't want to give outside parties access to their source code, so this would just mean more work for them.
Another big downside is that app authors would not be signing the final product with their own keys. That would have to be done by Google/Apple, so authors would lose a bit of control over future distribution of their app.
Even if the app is open source with reproducible builds, you have to trust the OS. Are you going to audit every line of the entire operating system? If you do that, are you going to audit every transistor in the SoC to ensure the hardware isn’t compromised in some way? No one person can do that. Even if a team of people could do it, do you trust the other people? At some point, you have to trust _something_. If you don’t, you simply can’t use computers.
What you're saying isn't wrong, but it misses the point. I'm not saying, "you have to trust someone so you shouldn't use it". I'm questioning whether the E2EE is adding anything or if it's security theatre.
Take the case of WhatsApp. Without E2EE, you have to trust the platform (Apple) and the people behind the app (Facebook). With E2EE, you still have to trust Apple, but you also still have to trust Facebook, since Facebook controls the endpoint and you have no insight into what it does or if there are any back doors. The same applies to Signal obtained from the App Store or (AFAIK) Google Play. And Apple's E2EE stuff is obviously not adding anything, since we're already trusting them as the OS vendor.
Something like Signal on the desktop is different though. There, we still have to trust the OS vendor, but we and anyone else can inspect the source code, verify that it does what it's supposed to and doesn't have any back-doors, compile it for ourselves, and then we can exchange messages without trusting Signal. In the case of Signal on the desktop, E2EE lets us distrust the Signal organisation, which has value even though we still need to trust our OS and our hardware.
You could argue that we should trust the Signal organisation. That's not unreasonable, but that still makes E2EE no better than normal client-server encryption a la TLS and the promise of a solid no-logging policy.
(We could also discuss the feasibility of verifying that there are no back-doors purely from reading source code and how well a back-door could be hidden. But that's sort of a tangent.)
This does not take away from the parent's point. You need to trust something, yes; we just don't trust Google in particular in that whole chain of trust.
I haven't said I'm not willing to trust Apple. The Apple example is included exactly because you have to trust Apple, whether they use E2EE or not, so E2EE doesn't make a difference and is security theatre.
E2EE has value in the case where it (at least in principle) allows you to remove something from the list of trusted entities. Signal on the desktop is one such case, where E2EE lets us avoid trusting the Signal organisation.
I'm not trying to protect myself from Apple. Apple doesn't have a reason to lie. I want to protect myself from the police, and I trust Apple much more than I trust law enforcement. My worry is that if Apple were forced to compromise its encryption, it could remain secret for too long.
Besides, they can afford better lawyers than Signal.
Can you jailbreak the latest iOS? Or do you think running an outdated OS doesn't create bigger problems security-wise? And if not, and you run the latest OS on your main device, how can you be sure the App Store sends it the same non-backdoored version of Telegram that you would verify on the outdated jailbroken OS?
You don't need to jailbreak a new version of iOS. You can just use the existing jailbroken phone (on an older version of iOS) to install the latest version of the Telegram app, and then verify it.
Yes, eventually you'll be unable to install the latest versions of some apps on your increasingly-older-by-the-day version of iOS, but presumably some recent-enough version of iOS is jailbreakable at all times.
Still, you could download the latest Telegram build you suspect onto a new, patched phone, disconnect it from the network so it doesn't auto-upgrade, and keep it that way until a jailbreak is available.
At this point (attacks can only be carried out against users of the newest versions of the OS, and you still risk getting caught, squandering the whole operation) I cannot see how it would be worth it.
Unlike Signal, Telegram provides both end-to-end encrypted chats and point-to-point-encrypted chats.
(But unlike Signal, Telegram also hasn't been caught with glaring zero-days, or caught sending images to anyone except the intended recipient, in the last few years.)
> Unlike Signal, Telegram provides both end-to-end encrypted chats and point-to-point-encrypted chats.
Source on this? Signal can't even send messages to users with their phones off. Telegram (like WhatsApp) will buffer the message on the server (to the best of my knowledge) and can relay that message quite some time after. Signal clearly provides E2EE chats and group chats, so I'm not sure why that's stated here. For point-to-point, my understanding is that the functionality is extremely limited here and that you can't actually chat. That's what I found googling anyways. While Signal doesn't have this, there is an open feature request for something like this[0], but traction is often low on topics like these. Maybe if HN users were as passionate on Signal forums as they were here we'd get more of these features? Who knows. Devs may just ignore those too.
> But unlike Signal, Telegram also hasn't been caught with glaring zero-days, or caught sending images to anyone except the intended recipient, in the last few years.
I'd actually like a source on this. I'm not aware of any Signal zero days... ever. Or hearing about Signal users receiving wrong messages. I have heard about several zero days with Telegram though.
> Signal can't even send messages to users with their phones off. Telegram (like WhatsApp) will buffer the message on the server (to the best of my knowledge) and can relay that message quite some time after.
That is exactly what Signal has done for years (if not from the beginning?). It's why there's a distinction between one vs. two checkmarks on a sent message:
(But unlike Signal, Telegram did agree to cooperate with the Russian government to help fight 'extremism'.)
Even if you trust Telegram's e2e it's not on by default. Secure chats are second class and missing lots of features (plus glitchy, I lost multiple entire chat histories), so almost no one uses them.
> Even if you trust Telegram's e2e it's not on by default.
I have not seen anyone posting any reason not to trust their end-to-end encryption, so this shortens to a criticism of point-to-point encryption, which is (and I'm feeling generous here) about as useful as criticism of postcards.
Note, I'm not going after you but after the HN tradition of trashing Telegram.
Thanks for the clarification. I don't care about the tradition; it is my personal distrust. There's no visibility into the organization, they struggle for money, and if the FSB comes after them or their relatives in Russia with a rubber hose, I don't see why they wouldn't cooperate.
It doesn't even require breaking E2E to make an impact, since no one I know uses secret chats regularly anyway.
> any reason to not trust their end-to-end-encryption
Durov claiming in 2014 to assist Russian government in fighting 'extremism' (we all know what it means) is enough for me. I couldn't find any debunking, clarification or walking back those words.
Maybe some clarification: Telegram has said that they cooperate with authorities almost everywhere when it comes to open groups and channels. This means terrorist groups like ISIS/Daesh get taken down.
As for closed groups I think they claim they don't have access and police have to get someone on the inside.
As for cooperating with Russian authorities in particular, Telegram has a history of open confrontation with Russian authorities, and for a long time had to maintain a proxy network for Russian residents.
Today we also see that both Russians and Ukrainians use Telegram to reach out, but I suspect at least Ukrainians use something else between themselves.
They had a confrontation where the Russian government blocked Telegram until they gave access. Then, lo and behold, Telegram was later unblocked alongside the above statement by Durov. I don't see how my interpretation of those events is too uncharitable.
That Russians and Ukrainians use Telegram is the exact issue here! Tons of my fellow Russians, and I imagine most Ukrainians, use Telegram to exchange statements that literally make them extremists or just criminals in the eyes of the Russian government - the very government Durov declared collaboration with to get unblocked in the country where TG has the most users. They do it every day, and without the impaired secret-chats feature.
Secret chats are inconsistent across devices, lack features like message preview in notifications, can't unsend messages, entire chat histories are lost, the list goes on.
End-to-end-encrypted means that, without significant, unexpected breakthroughs in mathematics, no one can read it between the sender and the recipient, even if the traffic passes through FSB, NSA, Telegram, Google and Facebook headquarters and they all conspire to break it.
Point-to-point-encrypted means it can theoretically[1] be read by the vendor (Telegram in this case) or anyone who can coerce the vendor. This is the standard for mail, online banking etc.
The reason for providing both end-to-end-encrypted and point-to-point-encrypted is that point-to-point encryption is significantly simpler, which makes it easier to create useful and/or cool features. (A toy sketch below tries to make the difference concrete.)
[1]: vendors can do a number of things to make it hard or next to impossible for employees and others to access data, like for example restricting access to user data to service accounts, only allowing debugging access in special circumstances, and auditing such debugging access.
Telegram used to claim they solve it by sending/storing encrypted data through/in different data centers from the keys, and keeping these data centers in different jurisdictions. Done properly, this should mean that two or more employees across the company would have to conspire to get access to customer data, or two or more judges in different countries would have to demand the data.
But unlike with end-to-end-encryption we have to take their word for it.
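A toy sketch of the difference (Python, with Fernet standing in for real key agreement - obviously not how any actual messenger implements it):

    # Toy contrast between end-to-end and point-to-point encryption.
    # Fernet is a stand-in; real messengers use proper key agreement (e.g. X3DH).
    from cryptography.fernet import Fernet

    # --- End-to-end: only sender and recipient ever hold the message key. ---
    shared_key = Fernet.generate_key()          # agreed between the two endpoints
    e2e_ciphertext = Fernet(shared_key).encrypt(b"meet at 6")
    # The server only ever relays e2e_ciphertext; it cannot read it.

    # --- Point-to-point: each hop is encrypted, but the server sees plaintext. ---
    server_key = Fernet.generate_key()          # stand-in for "TLS to the server"
    hop1 = Fernet(server_key).encrypt(b"meet at 6")
    plaintext_at_server = Fernet(server_key).decrypt(hop1)   # vendor can read/log this
    hop2 = Fernet(server_key).encrypt(plaintext_at_server)   # re-encrypted to recipient

In the second case the vendor (or anyone who can coerce the vendor) has the plaintext at the midpoint, which is exactly the distinction described above.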
Building on eitland's reply: Telegram allows users to see messages on several devices. This is because they terminate encryption on their servers and can send a copy of the plaintext message (re-encrypted) to every other device of the same user. They also store a copy, because you can start from scratch with no surviving devices and a brand-new one; all you need is access to your account. E2E chats work only on one device and are invisible to the user's other devices [1]. More technical details at [2].
I think this can be misunderstood in at least two ways.
My most charitable reading is that you mean Telegram receives the data in plain text at their servers.
That reading implies that they don't do this whole "encrypted data one way, keys another way" thing. (If we know they don't do that I am interested in knowing.)
But a number of people here on HN will read it as "Telegram sends data unencrypted", which is definitely incorrect.
> With Signal, both endpoints are Signal-controlled (yes, the apps are open source, but at least on the App Store and Google Play you can't verify that the open source code is what's shipped).
While I agree with the concern here, is there an alternative mass market model in which software can be distributed in a way that ensures no one but the user controls the device? I mean mass market, so the barrier is that my tech illiterate grandma has to be able to install this with the help of my tech illiterate parents.
Where in that word vomit from Bleeping Computer are the keys stored? It says that decryption is handled by the browser, so unless you synchronize browsers the painful way, I assume Google has the key somewhere. I also wouldn't be surprised if the shit only works in Chrome.
Also, wasn't Zoom sued for bastardizing the concept of "end-to-end encryption" to mislead people?
They're either stored in your choice of third-party service, or you can host them yourself if you really want.
I don't see if the private half of the key is shared with the web app to decrypt the cypher text, or if the cypher text is sent to the key service which responds with plain text.
> I also wouldn't be surprised if the shit only works in Chrome.
I don't see any evidence of this in the documentation. I can't think what API it would require beyond the widely adopted fetch API.
> Also, wasn't Zoom sued for bastardizing the concept of "end-to-end encryption" to mislead people?
I don't see any reason to think this is much different from any other end-to-end system. All the mobile end-to-end apps require you trust the code they run on your device. As described this requires you to trust the JavaScript they run on your browser.
> I don't see if the private half of the key is shared with the web app to decrypt the cypher text, or if the cypher text is sent to the key service which responds with plain text.
I checked out the ref for that encryption API. Seems like the service stores the key, and the API exposes encrypt/decrypt calls:
So yeah, go trust that third-party provider, or if you have the technical skill, set up your own. Since most people would do the former, this is basically back to square one, and to call this "end-to-end" is misleading.
The idea of storing your keys on an Internet-facing server baffles me too. It will 100% get hacked sooner or later.
> most people would do the former, this is basically back to square one, and to call this "end-to-end" is misleading.
I'm not sure I agree that this is misleading. Google, who is storing the data, never holds the key. Likewise, the key provider never holds the data. To compromise the data you'd need to compromise both gmail and the key provider at the same time. The fact that organizations are delegating the key management is an implementation detail.
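Concretely, the split looks something like this - a minimal sketch with made-up names and endpoints, not Google's actual API (Fernet just stands in for whatever cipher is really used):

    # Illustration of the split: Google stores ciphertext plus a wrapped key,
    # while the key service only ever sees the key. Endpoint names are invented.
    import requests
    from cryptography.fernet import Fernet

    KEY_SERVICE = "https://kacls.example.com"   # the org's own key service

    def encrypt_for_gmail(message: bytes) -> dict:
        data_key = Fernet.generate_key()                    # generated in the browser
        ciphertext = Fernet(data_key).encrypt(message)      # encrypted client-side
        wrapped = requests.post(f"{KEY_SERVICE}/wrap",
                                json={"key": data_key.decode()}).json()["wrapped_key"]
        # Google ends up storing only these two opaque blobs:
        return {"ciphertext": ciphertext.decode(), "wrapped_key": wrapped}

To read the mail you need the key service to unwrap the key AND Google to hand over the ciphertext, which is the "compromise both" property.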
> The idea of storing your keys on an Internet-facing server baffles me too. It will 100% get hacked sooner or later.
I mean you can separate it out. You just need to implement the API on an internet-facing server.
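A self-hosted key service could be pretty small - here's a Flask sketch with invented endpoint names and no auth, just to show the shape; the real API contract will have its own spec and needs real access control in front of it:

    # Minimal sketch of a self-hosted key service: it wraps/unwraps per-message
    # data keys under a master key that never leaves this box. Endpoint names and
    # JSON shapes are invented, and a real deployment needs authentication,
    # authorization and audit logging in front of these handlers.
    from flask import Flask, request, jsonify
    from cryptography.fernet import Fernet

    app = Flask(__name__)
    MASTER = Fernet(Fernet.generate_key())   # in practice: fetched from an HSM/KMS, not generated here

    @app.post("/wrap")
    def wrap():
        data_key = request.json["key"].encode()
        return jsonify({"wrapped_key": MASTER.encrypt(data_key).decode()})

    @app.post("/unwrap")
    def unwrap():
        wrapped = request.json["wrapped_key"].encode()
        return jsonify({"key": MASTER.decrypt(wrapped).decode()})

    if __name__ == "__main__":
        app.run(port=8443)

The original concern still stands, of course: whatever box serves this API has to be reachable by clients, so it needs the same care as any other internet-facing service that can touch key material.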
This is nice for protecting against compelled legal disclosure in the US, but Google could just serve js in the future (targeted to specific users only, if they want) to capture password/key. This is the problem with repeatedly downloading a client which isn't inherently trusted/from the same party holding the ciphertext. Same thing happened LONG ago with hushmail, which used a java applet to do encrypted email -- at some point they chose to serve a "special" applet to some users.
It is trivial to serve different javascript to different clients on a request by request basis, making it very unlikely that highly targeted malicious js will be detected.
Most thick client update mechanisms make this more difficult (but certainly not impossible) thus greatly increasing the risks of embedded malware being detected.
Signal, for example, is reproducibly built, so you can diff the source of each new update and then verify that you get the same binary. This gives you very good assurance that your audit of those diffs is valid to the binary on your phone.
Thanks. I understand autoupdate and web infrastructure, as well as the Ken Thompson hack that resolves all these cases into the same threat. The question was rhetorical.
I doubt that someone who has invested the time in developing code-auditing skills also values their own time so little that they'd audit and build their chat client. And if they're willing to farm auditing out to someone else, or to vouch to other users of the app, then they've lost the plot.
Not saying such a person couldn't exist. But the intersection in the Venn diagram seems small.
This gross oversimplification precludes one from understanding anything but a yes/no answer. The answer is not binary. Like the parent said, delivering the client on every single call makes the rogue update trivial and possibly targeted. Having your software come from, e.g., a bi-annual release Linux distribution, and/or one that focuses on security, makes the rogue update less likely. Not impossible, but still more difficult.
As long as the yes/no doesn't have an inversion error, I'm fine with the coarse granularity. Unless you're targeted by an APT, the threat model would consider all of this to be noise. It's just odd to single out Gmail E2E of all things for this reason in a top-level HN comment on this article. Within a reasonable confidence interval, it's irrelevant to anyone already concerned about using Gmail for their high-security data (because they aren't using Gmail!), and irrelevant to everyone else because they don't care (because it's just their email).
I'm way more worried about all the dirty little fingers on the hundreds of Rust crates, Python packages, Java libraries, and NPM packages that get slurped, unreviewed, into so much software these days. (But I'm still not actually going to do anything about it.)
Much larger cross section when you expand "person" to companies. I probably wouldn't bother with chat, but absolutely would/have with smart contracts and wallet software. (Was a multi-human effort within an organization.)
How is this different from the long existing support for S/MIME which has been available for years in gmail (and available for admin setup via Google Workspace)?
Is the main news that S/MIME encryption can now happen in the browser instead of Google’s backend?
> When you open Gmail, you'll see ads that were selected to show you the most useful and relevant ads. The process of selecting and showing personalized ads in Gmail is fully automated. These ads are shown to you based on your online activity while you're signed into Google. We will not scan or read your Gmail messages to show you ads.
AFAICT this feature is mostly for internal (to your domain) emails as you need to supply your own identity provider for key management.
I don’t believe a random external sender will be able to send an E2E encrypted email. Or at least not in a way that would be readable by the recipient.
I'm surprised they are implementing this, as it will reduce their ability to do Google Inbox-esque classification of emails for things beyond spam, like smart labeling, highlighting important emails, or flagging emails that need an urgent response. It just significantly reduces the amount of NLP processing they can do.
The organizations that need this for compliance reasons likely don't care about that. This is likely going to be a very niche feature (in the grand scale of all Gmail users) that really only exists so that it can be sold to large enterprises.
This would be amazing if Google used anything like an open standard for their cryptography, so that anyone could write and use interoperable mail client plugins, or use GPG to talk to gmail users E2EE. Why am I haunted by the suspicion that won't be the case?
That opens it right up. Pleased to be wrong in my suspicion. I see that it's possible to configure Mutt [1] and Gnus [2] with S/MIME. Perhaps 2023 will finally be the year of encrypted email.
I wouldn't get too excited. This only seems to be the addition of S/MIME client capability for Google corporate email customers. So the Google corporate customer will still have to procure, renew and pay for a S/MIME certificate for each and every employee. That isn't something that companies have shown much interest in up to this point.
Back in 200X, a graduate student acquaintance of mine had, for a research project, written a browser extension which transparently encrypted your emails in Gmail to people and decrypted those from them (using GPG, as I recall). I recall how pleased he was demonstrating it to people, and how much smoother and simpler it was than integrating GPG manipulation into a separate mail client.
All that to say, this is something at least some people have thought would be useful for some time, and it's nice to see some people getting access to that functionality.
(I now work for Google, not on anything remotely related to Gmail, remarks are my own, etc. etc.)
I don't actually know. I know a few people have linked things which solve the same problem in other comments, but I checked, and neither of the ones linked so far appear to be related, just nice illustrations that yes, other people have had the same problem to solve.
But to the best of my knowledge, he graduated without suggesting we all start using his extension to take over the world, and I've not found any suggestions it was published.
Well, I say good for Google! I recently, after years of being a paid ProtonMail customer, cancelled, and settled for Apple Mail with a custom domain. Apple specifically does not encrypt email, but with their new Advanced Data Protection, which I use (I have also been using Lockdown Mode for a few months), I decided that while no large corporation is truly trustworthy, we all need to make a decision of what our personal requirements are, and Apple was “good enough.”
What I like about Google’s announcement is that even though this is only enabled now for Workspaces, if Google opens this to all GMail users then a large percentage of e-mail will be gmail to gmail encrypted.
This is what disappointed me about ProtonMail: they didn’t have a critical mass of users. I only had one other non-work friend who used ProtonMail.
>Client-side encryption (as Google calls E2EE) was already available for users of Google Drive, Google Docs, Sheets, Slides, Google Meet, and Google Calendar (beta).
Those things are client-side encryption. End-to-end implies more than one end. Which is a good thing: who would want to have to compare a key fingerprint just so they could securely store files in the cloud? With client-side encryption the identity management comes for free...
Not really. Other vendors take a date range for your search, download that range to a client-side cache, and then search it locally. It takes more resources, but it is possible.
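Roughly like this (toy sketch; fetch_encrypted_messages is a made-up stand-in for whatever API the vendor exposes):

    # Toy sketch of search over client-side-encrypted mail: pull down the
    # ciphertexts for a date range, decrypt locally, then search locally.
    # fetch_encrypted_messages() is a made-up stand-in for the vendor's API.
    from cryptography.fernet import Fernet

    def search_mailbox(fetch_encrypted_messages, key: bytes, start, end, needle: str):
        f = Fernet(key)
        hits = []
        for blob in fetch_encrypted_messages(start, end):   # server only sees the range
            body = f.decrypt(blob).decode()                  # plaintext exists only here
            if needle.lower() in body.lower():
                hits.append(body)
        return hits

More bandwidth and CPU on the client than a server-side index, but no plaintext ever leaves the device.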
Of course, right after I get locked out of the account I'd had since I was a teenager (and after years of encryption advocacy), both Gmail and iCloud roll out changes that I wanted back in 2009.
Too little, too late, we will never regain what was lost.
Open source has the same problem. You can review source, great, but you can’t prove it’s the same source that’s executing.
From evil compilers to source/binary modification in flight to supply chain attacks, it is probably impossible to prove secure operation of open (or closed) source.
All we have are mitigations and risk reduction. If the bar is provably perfect security, nothing can meet it.
How do you know the tools you use to verify weren’t modified in memory?
Seriously. It is not possible to prove 100% security. Between human, electrical, hardware, firmware, and software vulnerabilities, there is always a gap.
You’ve absolutely illustrated my two points: 1) nothing is ever 100%, and 2) everyone thinks their personal choices are the ideal; anything less secure is playing with fire, anything more secure is unnecessary overkill.
There is no need to be 100% secure at all. All you need is to make hacking you more expensive than what you get if successful. I'm sure it works well with Qubes, if you use it in the right way.
I don't think it can be done today, with the remote domain (gmail.com) controlling all the JS on the page. But maybe future hardened browsers could attach a text-input box directly to a sandboxed service worker that can't communicate with the rest of the JS code? The worker could be updated at a different cadence than the rest of the client, be audited separately, and have its script contents hashed. We rely on browser sandboxing of different domains today; why not extend it to different parts of the same page? It's probably a hard problem still.
This is the development we want to see. It would be great if this were the main mission for Google for the next 5 years: to enable E2E for most private data (Mail, Drive, Notes) and stop location tracking.
Yes you can; not everything we use needs to have military-grade encryption. Things like email should be kept simple. This is the problem we have with software today: everything is just glued together to produce a pile of complex shit. "Oh, let's add end-to-end encryption for Gmail so people can rest easy when receiving their spam promotion emails."
I just want them to let me set the same theme we had up until recently. I don't want blueish or soft gray; I want what it was for the past several years. If they can't do that, then give me the legacy look back.