
Dell, Google, and Internet Security - gasull
https://blog.okturtles.com/2015/11/dells-tumble-googles-fumble-and-how-government-sabotage-of-internet-security-works/
======
ender7

      Please note, I am not accusing any specific person of being a government spook 
      intent on weakening security, but I can’t help notice the parallels in this
      situation with what is known about how Internet standards get compromised
    

Except that's exactly what the author is doing. Otherwise why did I have to
wade through the first 50% of this blog post? Palmer's blog on the subject [1]
is well-reasoned and makes the motivations for the decision clear. Slepak may
not like those reasons, but adding a veiled accusation of stoogery to an
otherwise weak rebuttal just makes you look like an asshole.

I'll requote Palmer's fundamental assertion:

    
    
      However, it is not possible for a low-privilege application to defend against
      the platform it runs on, if the platform is intent on undermining the
      application’s expectations. To try would be futile, and would necessarily
      also violate a crucial digital rights principle: The computer’s owner should
      get to decide how the computer behaves. Dell and Lenovo let their customers
      down in that way, but for better and for worse, it’s not something that a web
      browser can fix.
    

I can appreciate this argument. Today it's pre-installed certificates, but
tomorrow it'll be some malware that hijacks the owner's machine in some other
way. If Chrome's responsibilities expanded to include the security of the
entire system, it would place an impossible burden on its engineers.

[1] [https://noncombatant.org/2015/11/24/what-is-hpkp-for/](https://noncombatant.org/2015/11/24/what-is-hpkp-for/)

~~~
shkkmo
> Except that's exactly what the author is doing.

What specific person is the author accusing of being "a government spook
intent on weakening security"?

I find Palmer's assertion to be flawed and full of hyperbole and misdirection:

> To try would be futile,

This is like saying that because perfect security is impossible, all security
measures are futile.

> and would necessarily also violate a crucial digital rights principle: The
> computer’s owner should get to decide how the computer behaves

In no way is it necessary to violate that principle (unless you view the
hardware manufacturer as the 'owner', which does seem to be an increasingly
prevalent view).

> it’s not something that a web browser can fix.

Except that we've had at least two specific instances that the browser could
have mitigated by warning when HPKP is violated.

> Today it's pre-installed certificates, but tomorrow it'll be some malware
> that hijacks the owner's machine in some other way.

I do not find your 'slippery slope' argument at all convincing. I'm not asking
the browser to fix my ACL permissions; I'm asking it to tell me when my OS
asks the browser to do something that is potentially insecure (use a
certificate other than the one pinned).
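The check being asked for here is mechanically simple. Below is a minimal sketch of an RFC 7469 pin comparison, assuming the DER-encoded SubjectPublicKeyInfo of each served certificate has already been extracted; the byte strings standing in for real SPKI structures are hypothetical:

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    """RFC 7469 pin: base64(SHA-256(DER-encoded SubjectPublicKeyInfo))."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

def pin_matches(served_chain: list, pinned: set) -> bool:
    """True if any key in the served chain carries a pinned hash.
    A browser would run this after ordinary chain validation succeeds."""
    return any(spki_pin(spki) in pinned for spki in served_chain)

# Hypothetical pins, in the style of a Public-Key-Pins header's
# pin-sha256 values (placeholder bytes stand in for real SPKI DER).
pins = {spki_pin(b"site-key-spki"), spki_pin(b"backup-key-spki")}
```

A warning rather than a hard failure is then one conditional: when `pin_matches` is false for a pinned host, show an interstitial instead of silently proceeding.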

~~~
magicalist
> _I find Palmer's assertion to be flawed and full of hyperbole and
> misdirection_

yes, we wouldn't want to be hyperbolic, would we.

> _This is like saying that because perfect security is impossible, all
> security measures are futile_

No, it's saying security theater is just that, and there would be significant
downsides for the stated gain. Where you and he disagree is the magnitude of
that gain. You're well within your rights to disagree on that, but seriously,
let's refrain from being so hyperbolic.

It's not like this was unexpected. This has been noted for years in their
security FAQ:

[https://www.chromium.org/Home/chromium-security/security-faq#TOC-How-does-key-pinning-interact-with-local-proxies-and-filters-](https://www.chromium.org/Home/chromium-security/security-faq#TOC-How-does-key-pinning-interact-with-local-proxies-and-filters-)

It was also extensively discussed when the Lenovo/Superfish incident happened.

It's also notable that no other browser will reject a connection based on a
certificate differing from the pinned one either, so it's not clear why Chrome
is being singled out here. Firefox was only immune because it doesn't use the
OS's cert store; Superfish happily injected its certificates into Firefox's
store too.
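The FAQ behavior linked above comes down to a single conditional. A rough sketch of that policy (not Chromium's actual code; the types here are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class ValidatedChain:
    spki_hashes: list      # RFC 7469 pins of each certificate in the chain
    anchored_locally: bool # root came from a user- or admin-added store entry

def enforce_pins(chain: ValidatedChain, pinned: set) -> bool:
    """Pins are only enforced when the chain terminates in a built-in
    (public) root. A locally installed anchor -- a corporate proxy, an AV
    product, or eDellRoot -- bypasses the pin check entirely. That skip
    is the behavior in dispute throughout this thread."""
    if chain.anchored_locally:
        return True
    return any(h in pinned for h in chain.spki_hashes)
```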

~~~
shkkmo
> No, it's saying security theater is just that

The possibility of working around a security measure doesn't make that measure
'security theater' as long as the workaround requires more resources and
effort.

> there would be significant downsides for the stated gain.

What exactly are the downsides to the browser notifying a user when a cert
different from the one pinned is used? The potential for some confused
questions to antivirus providers and corporate IT departments?

> but seriously, let's refrain from being so hyperbolic.

What hyperbole did I use? I pointed out specifically what I found hyperbolic
and flawed about Palmer's assertion. You did not.

> it's not clear why Chrome is being singled out

We are discussing Chrome because that is the subject of the article in
question. Obviously these arguments apply to other browsers as well.

------
DanielDent
While I appreciate the intent of the okTurtles folks, I think their reasoning
about Google Chrome is flawed.

If I, as the user, want to override HPKP, it should be easy for me to do so.
Perhaps, for instance, I'd like to MITM my own traffic for debugging purposes,
or to get an idea of what data an app is transmitting.

Let's imagine for a moment that Google modifies Chrome the way the okTurtles
people propose. Nothing stops Dell from writing a kernel patch which detects
that code path and alters the functionality at runtime.

If you own the system, you can make it do whatever you want. And Dell starts
out owning the system here.

~~~
JoshTriplett
> If I, as the user, want to override HPKP, it should be easy for me to do so.

Absolutely. For debugging purposes, browsers should have a mechanism where you
can purposefully use your own local certificate, with a big unremovable
warning at the top telling you that. It should _not_, on the other hand, ever
look like a valid secure site.

No legitimate reason exists for a browser to silently show a site as secure
that uses certificate pinning and doesn't serve the pinned certificate. Ever.
For debugging purposes, you don't need it to work silently.

And if someone started intentionally compromising browsers to make such
warnings go away, we'd have a clearer indication that they'd progressed past
ignorance into malice.

~~~
maccam94
I've heard that some corporate networks do this to enable filtering and
monitor employee internet usage. I could see that being a somewhat legitimate
use case.

~~~
shkkmo
So you don't think that corporate employees have the right to know when their
company is doing MITM to monitor their HTTPS traffic?

~~~
euyyn
I guess if you're using a company's computer on that company's network, assume
it's in their best interest to monitor some of the stuff.

~~~
itistoday2
They can monitor, that's fine, and the correct/ethical thing would be to
inform the employees about it and show them a little thing in Chrome to
indicate they're being monitored.

Informed consent, in other words.

pdkl95 also makes an excellent point about the current _default behavior_
being the problem:
[https://news.ycombinator.com/item?id=10630375](https://news.ycombinator.com/item?id=10630375)

------
FiloSottile
Two points about HPKP bypass:

First, keep in mind the alternative. AVs, security suites, legitimate
corporate monitoring tools (and malware) will hook the browser function calls
to remove any warning/block, causing crashes and vulnerabilities. If they can
plant a root, they can do that. (Unless you give them an easy flag to toggle,
but then how is it different from just trusting the certificate?)

Consider that _most_ AVs are doing exactly what Dell did, but "correctly".
There are more than you probably think. And they will keep doing it (now via
horrible binary hooking), because an AV that "protects your HTTPS traffic"
sells more than one that doesn't. THAT will increase the attack surface
exposed to the outside world (see Tavis Ormandy's work), only to reduce the
"attack surface" exposed to _your own manufacturer or system_.

Second, please consider with skepticism any proposal that solves any concern
by adding more UI. Users can barely understand the green lock (as anyone
following the excellent work of Adrienne Porter Felt knows), how is the user
supposed to react to "hey, this site has an HPKP pin that is being overridden
by a local root, are you sure it's the IT Dept or the AV vendor but not your
manufacturer screwing up?".

We settled the "should you defend from root" discussion in Linux years ago; do
we have to start again from scratch with Windows?

~~~
shkkmo
> hey, this site has an HPKP pin that is being overridden by a local root, are
> you sure it's the IT Dept or the AV vendor but not your manufacturer
> screwing up?

Yeah, that is a crappy error message that will confuse and scare people.
However, it is not that hard to write a better one:

"Your connection to {this site} may be using a security certificate other than
the one specified by {this site}. This may be a part of your Antivirus or
Corporate network firewall. However, be aware that it is possible that groups
other than you and the operator of this site may have access to any data you
view at or send to this site."

Then handle this warning the same way that self-signed or expired certificates
are handled.

> THAT will increase the attack surface exposed to the outside world (see
> Tavis Ormandy's work), only to reduce the "attack surface" exposed to your
> own manufacturer or system.

How does it increase the attack surface? The attack surface of 'installing an
antivirus that has access to your https traffic and can perform a MITM on it'
is the same either way, right?

~~~
FiloSottile
I don't think any Chrome errors mention "certificates" anymore, and for good
reason. Anyway, what is the user supposed to do after reading your (totally
easier) message? It still lacks any actionability, because there is no clear
action the user can take if their system is that compromised.

I consider hooking into an X.509 verification stack some orders of magnitude
harder than parsing HTML.

~~~
shkkmo
> Anyway, what is the user supposed to do after reading your (totally easier)
> message? It still lacks any actionability, because there is no clear action
> the user can take if their system is that compromised.

Display a "Back to Safety" and a "Continue Anyway" button, and then don't
re-display the warning for that site until the browser is re-launched or some
period of time passes.
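The per-site snooze proposed here needs only a little state. A minimal sketch, assuming an arbitrarily chosen six-hour window (a real browser would also clear it on relaunch):

```python
import time

SNOOZE_SECONDS = 6 * 60 * 60  # arbitrary example window

class PinWarningPolicy:
    """Remember 'Continue Anyway' clicks per site so the warning is not
    re-shown until the window lapses."""

    def __init__(self, now=time.time):
        self._now = now        # injectable clock, for testing
        self._accepted = {}    # hostname -> timestamp of last override

    def should_warn(self, hostname: str) -> bool:
        ts = self._accepted.get(hostname)
        return ts is None or self._now() - ts >= SNOOZE_SECONDS

    def continue_anyway(self, hostname: str) -> None:
        self._accepted[hostname] = self._now()
```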

------
sgentle
The best thing I read about modern computer security came from Google, I'm
fairly certain, though I can't find the reference now.

Basically, the old model was two-party: you vs adversary. It could be assumed
that your computer was trustworthy unless it had already been compromised by
the adversary. Security in that model was largely based around the "computers
should do what their users want" idea. Unfortunately, it doesn't line up with
the actual state of computers and computer users today.

Modern security systems (especially those in mobile devices and web browsers)
are designed to be three-party: you, your agent, and the adversary. You can be
trusted, the adversary can't be trusted, and your agent (software on your
computer) can be trusted sometimes. It's not automatically true that code
running on your computer, on your behalf, is actually safe. Most users have no
idea what that code is even doing.

This three-party model is what underpins the app sandbox, it's what underpins
web app security including the same-origin policy, and it's what underpins
things like Mozilla and Chrome's plugin and extension blocking. You can't
always assume that computers are doing what their users want.

I think what is difficult about this is that, as your technical expertise
rises, the two-party model becomes closer to the truth and the three-party
model less so. As a developer I find my computer telling me I can't do
something enormously frustrating. However, after the thousandth time cleaning
malware off my mother's helpful computer, I really wish it would tell her what
she can't do more often.

So, while I buy the "we can't control the layer above us" argument, I don't
find "do what the user wants" compelling at all. Where was that argument when
the users wanted to install old insecure versions of Flash? Or had apps that
Google remotely removed from Android phones?

------
posnet
"Is it necessary to have a local trust store that can override HPKP pins in
order to debug a TLS connection? No, it is not."

Isn't that exactly what is needed in a corporate or home network where the
administrator wishes to monitor/proxy TLS/SSL traffic?

Whether that is OK or a good idea is a whole other discussion, but maybe I am
misunderstanding the author's point.

~~~
JoshTriplett
That doesn't mean it needs to happen _silently_; such interception should
result in an unremovable browser indicator warning that it looks like your
traffic is being intercepted.

If you consider it acceptable for the network you're on to intercept your
traffic, then you can certainly continue to browse despite that warning. You
might decide not to transact any private business on such a network, of
course.

------
bad_user
I don't agree with the article. The author talks about reducing the attack
surface, yet there's nothing you could do against Dell abusing its control
over the laptops it sells.

The author is saying that Dell would have to "_literally install malware on
their computers. Dell’s stock would plummet_".

Oh, like how they installed a root certificate in the local trust store? How
is that any different from malware? And where is Dell's stock plummeting?
Because I'm not seeing it.

~~~
glasshead969
> Dell’s stock would plummet.

Dell went private, so a reaction on Wall Street is a non-starter anyway.

------
TazeTSchnitzel
> If I were a decision-maker at Dell faced with the decision of “how do I
> quickly get my support team the model number of a computer?”, I would not
> even consider the idea of doing what Dell did. It is simply unnecessary to
> install a root cert into Google Chrome’s local trust store to solve that
> problem.

It's unnecessary, but it's a clever hack. The browser will send that cert to
any server that asks for it. Thus, any browser consulting the Windows trust
store will give Dell the number it wants.

Is it a massively flawed approach? Absolutely, but the explanation does make
sense, at least to me.
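The trick described above relies on a standard TLS feature: a server may send a CertificateRequest naming acceptable issuers, and the client answers with a matching certificate from its store. A toy model of that exchange (not real TLS code; the names and the service-tag format are hypothetical, following the claim in the comment above):

```python
from dataclasses import dataclass

@dataclass
class StoredCert:
    subject_cn: str  # Dell reportedly encoded the machine identifier here
    issuer: str

def answer_certificate_request(store, acceptable_issuers):
    """What a client naively consulting the OS store sends back when a
    server's CertificateRequest lists one of the store's issuers."""
    for cert in store:
        if cert.issuer in acceptable_issuers:
            return cert.subject_cn  # the machine's identifier leaks out
    return None
```

Any website, not just Dell's support tools, can issue such a request, which is why the approach is "massively flawed".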

------
devit
A more general problem is that A LOT of Windows software is actually released
on non-HTTPS sites, so any given Windows installation is highly likely to
eventually run an executable downloaded over plain HTTP.

All the NSA has to do against random Windows users is wait until an HTTP
request for a .exe file comes in and automatically infect the downloaded
executable.

------
ryan-c
Unless it has changed recently, Chrome on Windows and OS X uses the system
trust store - it does not have its own. Not sure why Chrome's being singled
out here.

------
bitmapbrother
Unfortunately, pinning doesn't solve everything.

[https://noncombatant.org/2015/11/24/what-is-hpkp-for/](https://noncombatant.org/2015/11/24/what-is-hpkp-for/)

~~~
itistoday2
Did you miss the part of the article that responded to that link?

~~~
bitmapbrother
The system was compromised locally. Had this option not been available another
local "solution" would have been used.

~~~
itistoday2
The article replies to that argument, and you are welcome to respond to it.

~~~
bitmapbrother
You mean the part where they could have still compromised Chrome or used an
alternate method to accomplish what they were trying to do?

~~~
shkkmo
From the link you posted:

> However, it is not possible for a low-privilege application to defend
> against the platform it runs on, if the platform is intent on undermining
> the application’s expectations. To try would be futile, and would
> necessarily also violate a crucial digital rights principle: The computer’s
> owner should get to decide how the computer behaves. Dell and Lenovo let
> their customers down in that way, but for better and for worse, it’s not
> something that a web browser can fix.

To which the response from the original article is:

> One of the most fundamental principles of information security is reducing
> the attack surface, meaning that when something is unnecessary and can lead
> to a vulnerability, it must be removed. Is there anything Google could do to
> make it more difficult for Dell to compromise their customers? Yes, there is.
> The most obvious thing would be to not allow roots within the local trust
> store to override HPKP pins. Had Google removed the red carpet to disabling
> all of its safeguards, where would that have left Dell? Sure, they could
> still compromise Chrome, but there would be no “Google Approved” method of
> doing so. Dell would either: Have to give up, or: Literally install malware
> on their computers. Dell’s stock would plummet.

Thus fixing this would:

1) Reduce the attack surface created by 'accidentally' publishing the private
key for a root certificate (especially when removing those certs is beyond the
technical ability of the majority of users)

2) Improve the ability of the computer OWNER to decide how the computer
behaves, since they would be aware when HPKP pins are being overridden.

3) Force OEMs that wish to compromise Chrome to write explicit malware that
will be much harder to handwave away and thus be a bigger risk to their
reputation.

~~~
thomastankeng
"3) Force OEMs that wish to compromise Chrome to write explicit malware that
will be much harder to handwave away and thus be a bigger risk to their
reputation."

THIS. Shame on you, Google, for allowing anyone to defeat the certificate
pinning you advocated for in the first place!

