
More Encryption, More Notifications, More Email Security - nailer
https://security.googleblog.com/2016/03/more-encryption-more-notifications-more.html
======
codelitt
This seems like much ado about nothing.

I certainly appreciate that this effort is better than nothing; however, how
often are those notices served to US/European citizens? It's one thing to stand
up to government overreach in foreign countries, but how about the country where
you (and a large percentage of your users) reside? They specify attackers, but
I'm assuming this notice to the end user does not apply when the US/EU
governments request your data and Google complies?

Another gripe I have is that TLS has probably been broken by the NSA [1]. It's
better than nothing to alert us when the other party isn't using it, but it
really provides limited protection. PGP/GPG is the only real assurance you have,
and the plugins for different desktop apps are nearly always buggy. I end up
just manually encrypting/decrypting with GPG, because a buggy encryption
integration is not a comforting thought. If they really cared about keeping your
privacy safe, they'd have an end-to-end encryption tool/integration.

[1]: http://blog.cryptographyengineering.com/2013/12/how-does-nsa-break-ssl.html

~~~
nxzero
If they had end-to-end encryption, then Google wouldn't be able to read the
emails; meaning that, to gain value, Google would have to charge for the
service.

~~~
delroth
It's not (only?) a question of "gaining value". End-to-end encryption is
fundamentally incompatible with many features that Gmail users rely on. I
would recommend reading
https://moderncrypto.org/mail-archive/messaging/2014/000780.html from an
ex-Gmail anti-abuse tech lead. And for 99.99% of Gmail users, protecting
automatically against untargeted phishing and malware attacks is a larger
security improvement than having e2e encryption.

~~~
mtgx
Even so, they could provide the option, which probably no more than 1-2% of
users would ever use - those who actually need their privacy protected - while
the rest probably "won't care".

Leave it up to the users to decide whether they want to use end-to-end
encryption despite a potential spam problem. It's also not like people
couldn't use multiple email addresses.

~~~
relearn
If you were developing a product, would you invest time, money, and research
into a feature that (maybe) one or two percent of your users would utilize?

~~~
mtgx
Google already said in the above article that "only" 0.1% of its users get
targeted by state-sponsored attacks (which, by the way, is about 500,000 users)
- so why even bother building that, by your logic? Clearly it's just a waste
of resources (and probably the same goes for two-factor auth, Security Key,
etc.).

How many times have we heard companies say "China has a billion people -
imagine if we only got 1% of that market with our product!"? But we're talking
about a feature of a product here, not an entire product that only gets 1% of a
market's userbase.

0.1% here, 1% there, another 10% over there - all of these features add up to
create a great product that everyone loves, both because of the aggregate of
features and because of that "one" feature they love _individually_.

Another thing to remember is that the enthusiasts _are_ the market-builders.
You can't just win with a product that surveys well with 80% of the market. I
don't think most of the phone or smartphone customers in 2007 wanted a
touchscreen phone. Probably (well, _literally_, actually) only 1% of the
market wanted it then.

Also, we don't know how important this feature could be in gaining Google more
trust. Telegram, for instance, has been promoted as a private messenger that
uses end-to-end encryption - and yet its end-to-end encryption isn't even
enabled by default (so, the same scenario I was talking about), while its
"normal" encryption is probably worse and less secure than what Google uses
for Hangouts.

~~~
Kadin
I'd argue that Google implemented it because they don't want their product to
be implicated in a high-profile attack; if someone disappeared because their
Gmail credentials were phished, it could easily blow back on Google and
contribute to a perception that Google services are fundamentally insecure.

It might be a harder sell to implement E2E crypto, although perhaps the same
argument might apply some day. The notifications are probably just low-hanging
fruit.

------
hartator
I think the red lock next to the recipient email is more confusing than
anything.

It strongly suggests some kind of end-to-end encryption, like PGP, when there
is still nothing of the sort. Google still has full access to the plain text
of these emails, as do the receiving email providers.

It creates a false sense of security, which can be more damaging than
anything.

~~~
stingraycharles
I'm not sure if it's far-fetched, but instead of creating more security for
Gmail users, perhaps this is a deliberate attempt to get people (especially
businesses) to start requesting that "lock" next to their email address, and
as such create a surge in providers adopting secure communication? Because
that's the biggest impact this will have, and as such, it will mostly
indirectly increase security for Google's users.

------
rubyfan
I agree with much of what others are commenting here. The IETF strict
transport security draft is ridiculous. If every carrier that passes the
message can (1) read it, (2) potentially change the content, and (3)
promiscuously route messages to each other, then why does it really matter if
they pass it amongst themselves securely? Line security is easily defeated by
other 'features' of SMTP.

End-to-end encryption is the only thing that will really matter in email
security. And even with end-to-end encryption, email is a flawed medium, since
it leaks metadata in the process of message delivery. That is kind of a
barrier to secure messaging.

~~~
sliverstorm
_Line security is easily defeated by other 'features' of SMTP._

I'm not enough of an SMTP/security guru to know what you're referencing. I'm
curious, can you share?

~~~
rubyfan
TLS cipher downgrades and DNS weaknesses. SMTP, like almost every protocol out
there, whether SSL/TLS-enhanced or not, is designed for least-common-denominator
interoperability and legacy compatibility.

SMTP, and really email as we know it, is inherently insecure. Without
end-to-end encryption, relying on hosted mailboxes invites "service
providers", governments, and hackers alike to read and tamper with our email.
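
To make the downgrade concrete: SMTP capability negotiation happens in
cleartext, before any TLS, so an active on-path attacker can simply delete the
STARTTLS line from the server's EHLO reply and the client silently falls back
to plaintext. A minimal sketch (the server name and capability list are made
up):

```python
# The classic STARTTLS-stripping downgrade, in miniature. Because the
# EHLO exchange is unencrypted, a man-in-the-middle can rewrite it.

def parse_ehlo(response):
    """Extract the advertised capability keywords from an EHLO response."""
    caps = set()
    for line in response.splitlines()[1:]:  # first line is the greeting
        if line.startswith("250"):
            caps.add(line[4:].split()[0].upper())
    return caps

def mitm_strip_starttls(response):
    """What an on-path attacker does: drop the STARTTLS capability line."""
    return "\n".join(
        line for line in response.splitlines()
        if "STARTTLS" not in line.upper()
    )

EHLO_RESPONSE = (
    "250-mail.example.org Hello\n"
    "250-SIZE 35882577\n"
    "250-STARTTLS\n"
    "250 8BITMIME"
)

print("STARTTLS" in parse_ehlo(EHLO_RESPONSE))                       # True
print("STARTTLS" in parse_ehlo(mitm_strip_starttls(EHLO_RESPONSE)))  # False
```

Unless the client hard-fails when STARTTLS is missing (which opportunistic
encryption by design does not), the downgrade goes unnoticed.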

------
Tepix
Google has no interest in end-to-end encryption: It would put them out of the
loop. It would be contrary to their mission to analyze the customer data to
deliver better ads.

Gmail is a big privacy problem. Even if you don't use it yourself, nowadays a
large percentage of your emails will end up there. And why? It's all about
laziness and low friction.

Computer-literate persons (that's you, right?) should really consider getting
off their butts and hosting their own email. It's not hard, it's not
expensive, and it's not a lot of work to maintain either. It can even be fun
and informative. By sticking with Gmail, you're no longer credible when
complaining about the erosion of privacy on the internet.

~~~
Zigurd
I pay for various kinds of Google services already. Making a business case for
secure paid email is not hard.

~~~
morley
Why does Google have to be the one to provide secure paid email to you? Gmail
obviously operates at a vastly different scale than their similar paid
software products. If you really want secure paid email, I'm sure there are
plenty of other companies willing to take your money.

------
theandrewbailey
> In the 44 days since we introduced it, the amount of inbound mail sent over
> an encrypted connection increased by 25%.

I'm surprised that it's not more than that. I can imagine executives
everywhere asking their IT people "Why do all of our company's emails have
this error on them? They are all red and scary!"

~~~
mhurron
You're assuming anyone noticed.

~~~
comex
Well, _someone_ noticed if the percentage of encrypted mail increased that
quickly.

------
akerro
[https://www.salon.com/2014/11/16/googles_secret_nsa_alliance...](https://www.salon.com/2014/11/16/googles_secret_nsa_alliance_the_terrifying_deals_between_silicon_valley_and_the_security_state/)

------
ryporter
Why "state-sponsored" attacks specifically? If any group of attackers is
targeting me, then I'd be just as concerned. Introducing that distinction
seems like it will force Google to determine whether a group is backed by a
government.

~~~
huntsman
There are many other warnings and notifications that Google gives users about
phishing and hijacking attempts. This specific warning exposes the fact that
we think someone is being targeted by a government-backed attacker. We believe
this is information that could be useful to some people and that we shouldn't
keep to ourselves.

------
technion
My major concern here is that end users are reading way too much into this.

I say this because I caught a developer a few days ago implementing an online
payment gateway using a WordPress "form to email" plugin. The ensuing argument
came down to his firm belief that email to Gmail is now "encrypted", and thus
this is perfectly safe.

We need to be careful about sending this sort of message.

------
Animats
The mail content is still in the clear inside Google. As long as Google does
mail that way, it's not secure. Only end to end encryption can provide any
security.

------
lallysingh
Ignoring kuschku's telling, this is a big move for a big ship (email
security). Congrats!

------
daviddahl
But, no protection _from Google_ in any of this. Sigh.

------
dadrian
Congrats Jon!

------
kuschku
Well, if GMail were open source, and we could self-host it, we could get the
same advantages.

Giving all your private data to a foreign company that serves the interests of
investors and acts directly against your and your nation's interests is NOT
acceptable, and should NOT be common.

~~~
delroth
Please read the link I posted before replying. Mike Hearn explains that the
biggest advantage large email service providers have in the war against spam
is their centralization: because of their access to large amounts of data
(obviously), but also because there is basically no known way of writing
decentralized anti-spam computation engines that cannot be gamed by spammers.

Also, would you rather have users willingly give their data to a foreign
company (legally liable, bound to a published privacy policy), or unwillingly
to malware authors and credential phishers? In the current security landscape
this is a very real tradeoff to think about.

~~~
kuschku
You realize that almost all the email filtering GMail does nowadays is based
on networks trained on the content of the mail, not on the domains?

And in fact, if you train your own neural networks to do this same task – as
I’m currently doing – you get the same quality of categorization and spam
filtering that Google gets.

I consider Google, the NSA, and so on just as trustworthy as a Nigerian
Scammer, so I see no difference in giving my data to Google, or giving it to
the phisher.

They operate under laws I can’t control, use my data in ways I can’t control,
and don’t ask me if they wish to use my data for more other purposes later on.
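
As a toy illustration of what "training on the content of the mail" means, here
is a bare-bones multinomial naive Bayes over word counts, stdlib only. This is
nothing like Gmail's (or kuschku's) actual models, and all the example messages
are invented; it only shows that the training signal is the message text
itself:

```python
# Minimal content-based spam filter: multinomial naive Bayes with
# add-one smoothing, trained on raw message text.
import math
from collections import Counter

class NaiveBayes:
    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.doc_counts = {"spam": 0, "ham": 0}

    def train(self, text, label):
        self.doc_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def classify(self, text):
        vocab = set(self.word_counts["spam"]) | set(self.word_counts["ham"])
        total_docs = sum(self.doc_counts.values())
        best_label, best_score = None, float("-inf")
        for label in ("spam", "ham"):
            # log prior + sum of smoothed log likelihoods
            score = math.log(self.doc_counts[label] / total_docs)
            total_words = sum(self.word_counts[label].values())
            for word in text.lower().split():
                count = self.word_counts[label][word] + 1
                score += math.log(count / (total_words + len(vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

nb = NaiveBayes()
nb.train("win free money now claim prize", "spam")
nb.train("free viagra click here now", "spam")
nb.train("meeting notes attached see agenda", "ham")
nb.train("lunch tomorrow at noon", "ham")
print(nb.classify("claim your free prize now"))  # spam
print(nb.classify("agenda for the meeting"))     # ham
```

The hard part, as the rest of this thread discusses, is not the classifier but
getting enough recent labelled mail to train it on.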

~~~
thomasahle
But how do you get access to large enough sets of training data? Wouldn't that
always require plain-text access to other people's mail? What's more, the data
would have to be recent, so as to take into account trends in approaches.

~~~
kuschku
Simple: By hosting email for other people. /s

But yes, sharing training data (or trained networks) between people is the
best solution.

~~~
kbenson
The training data is emails. Sharing training data means your email is no
longer contained within your end-to-end encryption; it's leaking all over the
place. If you can find a way to extract useful training data from emails while
also making it so it doesn't identify anything about you or the emails you've
been receiving, I'm sure you can make a _lot_ of money with that. I suspect
that if it's not impossible, it's extremely hard, and even harder to do
_right_ (that is, in a way that we don't later find is susceptible to some
partial reconstruction of attributes).

~~~
kuschku
Well, there is a way: Share the whole network.

Then you combine all networks that others share with you into a new one, and
use that yourself.

Continuous recombination and cross-breeding of networks is the idea.

~~~
kbenson
If you share the network, and allow members of the network to use their own
data to help train, how do you prevent spammers from joining and submitting
garbage, or worse, targeted updates to make specific spam pass?

~~~
kuschku
Well, the idea is that you can check networks against your own set of
organised data — if adding network X reduces the overall effectiveness, you
just stop using network X and X's score is reduced.

EDIT: As HN prevents me from adding new comments right now (Seriously, HN,
allow us to post more than 3 comments per hour, it’s seriously hard to hold a
conversation like this), I’ll answer your comment here:

Users would train networks locally based on their own decisions. Those
networks would then be submitted to a repo, and you’d get other networks in
return. If a network sorts badly (aka, you always undo its sorts manually),
you will not get networks with similar sorting capabilities next time.

The concept would automatically prevent people from adding malicious networks
– as they’d end up in the local blacklist of users.

Obviously you wouldn’t blacklist the network itself, but a representation of
its concept of sorting.
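
The scheme described above can be sketched roughly like this: score each shared
"network" (here just a classifier function) against your own labelled mail,
drop the ones below a threshold, and vote among the rest. All names,
thresholds, and heuristics here are illustrative:

```python
# Validation-filtered ensemble: distrust shared classifiers that
# perform badly on your own labelled data, then majority-vote.

def accuracy(clf, validation):
    """Fraction of (text, label) pairs the classifier gets right."""
    return sum(clf(text) == label for text, label in validation) / len(validation)

def filter_and_vote(classifiers, validation, text, min_accuracy=0.6):
    kept = [c for c in classifiers if accuracy(c, validation) >= min_accuracy]
    votes = [c(text) for c in kept]
    return max(set(votes), key=votes.count), len(kept)

# Two honest heuristics and one malicious "network" that calls
# everything ham (what a spammer would contribute).
honest_a = lambda t: "spam" if "free" in t else "ham"
honest_b = lambda t: "spam" if "prize" in t else "ham"
malicious = lambda t: "ham"

validation = [
    ("free prize inside", "spam"),
    ("free money now", "spam"),
    ("meeting agenda", "ham"),
    ("lunch at noon", "ham"),
]

verdict, n_kept = filter_and_vote([honest_a, honest_b, malicious],
                                  validation, "claim your free prize")
print(verdict, n_kept)  # spam 2 — the malicious network was filtered out
```

This only works as well as your local validation set, which is exactly the
objection raised in the replies: someone still has to label recent mail.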

~~~
kbenson
So, how are these networks getting their data? Users submitting it? That means
users are reducing their individual security to increase the security of the
group as a whole. You're then presented with either just consuming this data
(and staying secure) or contributing, and we're back at the same point: data
needs to be shared so it can be trained against.

Let's also look at the incentives for these networks whose data you can
subscribe to. How are they supposed to keep spammers out? Any sort of vetting
and management of the individual networks will be non-negligible, and if it's
not funded it will be at a disadvantage to the spammers, who are doing this
for profit.

Finally, I'm not sure that training sets for data like this can be easily
combined without a massive amount of reprocessing, if at all. I'm not familiar
enough with the classifying networks involved to know, but I suspect that
problem alone ranges somewhere from "non-trivial" to "very hard", if it hasn't
already been solved.

It sounds good, and in a perfect world we'd have well run and managed shared
networks of fully anonymized spam/phishing classification training data that
was easy to combine into individual personal classifiers without having to
heavily re-process large training sets.

I'm just not sure how feasible the individual parts of that are, much less
them combined into a whole.

------
mtgx
What's the progress on the End-to-End tool? What's the progress on making
Hangouts end-to-end encrypted for that matter?

I feel that these improvements, while useful, are a sideshow to stop privacy
enthusiasts from switching to better encrypted services or tools, while Google
(and Microsoft, and Yahoo) continues to mine all of your private conversations
for advertising purposes.

~~~
danieldk
Indeed, it's surprising that there is no sign of end-to-end encryption in
Hangouts, while the competition (with market share) is getting there. iMessage
has had end-to-end encryption since the beginning. WhatsApp has end-to-end
encryption on Android based on TextSecure (though it is still hard to see
whether it's active). Telegram has support, though it's not the default.

Meanwhile, Apple also has encrypted notes, phones that are encrypted by
default, etc.

Google seems to focus purely on transport security.

~~~
ikeboy
E2E would mean you can't log in to Hangouts on a new device and access your
history, which is a useful feature.

~~~
ComputerGuru
No it doesn't. It just makes that a little harder to implement. You can set up
a side channel between your own devices to sync that history. Or you can use a
doubly-encrypted key and just change the outer layer of that onion when a new
device is added.
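
The "onion" idea can be sketched with key wrapping: the history stays encrypted
under one master key, and only the re-wrappable outer layer changes per device.
A toy sketch, using XOR as the wrap (acceptable here only because each pad is a
full-length random key; a real system would use a proper AEAD or key-wrap
scheme, and all names below are invented):

```python
# Toy key-wrapping: the server stores only the wrapped master key,
# one copy per enrolled device. Adding a device adds a new wrap;
# the history itself is never re-encrypted.
import secrets

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

master = secrets.token_bytes(32)                  # protects the chat history
device_keys = {"phone": secrets.token_bytes(32)}  # never leave the device
wrapped = {name: xor(master, key) for name, key in device_keys.items()}

# Enrolling a new device: wrap the same master key under its key.
device_keys["laptop"] = secrets.token_bytes(32)
wrapped["laptop"] = xor(master, device_keys["laptop"])

# Each device recovers the master key locally; the server, which only
# ever sees `wrapped`, learns nothing without a device key.
assert xor(wrapped["laptop"], device_keys["laptop"]) == master
```

The hard parts the replies point out remain: getting the new device's key
enrolled securely, and recovery when every device is lost.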

~~~
ikeboy
>You can set up a side channel between your own devices to sync that history.

Then

1\. Some device would need to be logged in and online (not always true)

2\. You'd need a way to authenticate the new device, which means not only do
you need the other device on, but you need it close by when first setting it
up, and you'd need to manually transfer keys or something of similar bandwidth

3\. Either all history would need to transfer, which is a bandwidth hog if the
history is large (although if the device is close you could transfer over
Wi-Fi Direct or Bluetooth or something), or that other device would need to be
kept online whenever I want to query history (currently, Google serves this
function)

4\. If you lose a device, there's no recovery (this is probably the main
reason Google won't do it.)

If you really want it, there are apps that do E2E and support Google Talk. See
[https://chatsecure.org/](https://chatsecure.org/)

~~~
kuschku
Conversations.im [1] fulfils the same purpose, has the same features, is also
open source, is on F-Droid, and looks better.

________________________

[1] [https://conversations.im/](https://conversations.im/)

------
satbyy
By chance, I happened to type (note the www.)

    https://www.security.googleblog.com/

This immediately popped up a red warning in Chrome:

    Your connection is not private
    
    Attackers might be trying to steal your information from
    www.security.googleblog.com (for example, passwords,
    messages, or credit cards). NET::ERR_CERT_COMMON_NAME_INVALID

It seems that the SSL certificate is issued to *.googleusercontent.com. Given
that we're talking about Google, I expected the URL would redirect to the
non-www https site, but apparently not.

~~~
philip1209
This seems to be an edge case between how wildcard certificates work and how
HSTS works.

Wildcard certs only validate one subdomain of depth (so a *.foo.com cert does
not validate a.b.foo.com), while HSTS "includeSubDomains" requires a valid SSL
cert for all recursive subdomains.
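
For illustration, the one-label wildcard rule can be sketched as follows (a
simplified toy matcher, not a full RFC 6125 implementation):

```python
# Toy check of the wildcard rule: a leftmost "*" matches exactly one
# DNS label, so *.foo.com covers a.foo.com but not a.b.foo.com, and
# never the bare domain itself.

def wildcard_matches(pattern, hostname):
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):
        return False  # "*" stands in for exactly one label
    return all(p == "*" or p == h for p, h in zip(p_labels, h_labels))

print(wildcard_matches("*.googleusercontent.com", "blog.googleusercontent.com"))   # True
print(wildcard_matches("*.googleusercontent.com", "www.security.googleblog.com"))  # False
print(wildcard_matches("*.foo.com", "a.b.foo.com"))                                # False
```

Which is why a cert for *.googleusercontent.com can't cover
www.security.googleblog.com, whatever the redirect setup.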

It's a problem, but I don't think it's a problem worth solving.

