> Contrary to widespread belief, public key pinning — an HTTPS feature that allows websites to restrict connections to a specific key — does not prevent this interception. Chrome, Firefox, and Safari only enforce pinned keys when a certificate chain terminates in an authority shipped with the browser or operating system. The extra validation is skipped when the chain terminates in a locally installed root (i.e., a CA certificate installed by an administrator).
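In other words, the rule the browsers implement amounts to something like this (a sketch of the described policy, not actual browser code):

    # Pins are only enforced when the chain terminates in a root that
    # shipped with the browser or OS; admin-installed roots bypass them.
    def should_enforce_pins(chain_root, builtin_roots, local_roots):
        if chain_root in local_roots and chain_root not in builtin_roots:
            return False  # locally installed CA: pin validation is skipped
        return True       # built-in root: pinned keys are checked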
Seems like a strange default to me. I feel like the user should be notified of this, for instance if they're using a work computer to access their bank account or something like that.
That's not to say I disagree with the sentiment that this is something employers (and other organizations providing access to devices) should be obliged to disclose, but that is perhaps more of a legal and educational issue than a technical one.
Hah. That's precisely the argument I have made when arguing that there should be an opt-out for addon signature verification (requiring admin permissions to toggle it, if they insist), because you have already utterly lost the security game if someone has admin on the machine.
But no, they argue that they must defend against malware with admin permissions injecting addons into the browser. Because that's a fight worth fighting and the perception of the browser's security is somehow more important than user freedom.
My first instinct is to say "it's important not to install crap software; you need to reasonably trust the software you install". But I immediately recognize that it's unintuitive that Adobe and Microsoft and Symantec and McAfee are not on the "trusted" list. (Office and .NET have silently installed problematic Firefox extensions in the past.)
I don't really have a conclusion here, just, it sucks.
The "quietly" adjective suggests they are malicious. Which means they should be reported to AV vendors (including microsoft) instead of being used as a boogeyman when arguing against user freedoms.
Employees are often required to install local certs (or applications/scripts that do that) - that doesn't mean the host is entirely compromised.
If they are forced to install those certs, then the computers they use belong to their employers, and those computers are obeying their proper owners. I fail to see the problem.
Don't use your work computer for things you don't want your work to be able to detect, intercept & modify.
> Don't use your work computer for things you don't want your work to be able to detect, intercept & modify.
You know who also has that right? Prisons.
I don't have things sent to the office unless they comply with my employer's policies (e.g. I'd never have a weapon mailed here), and unless I'm happy with my employer having information about my packages.
I'm curious what line of reasoning would justify me doing such a thing and expecting privacy.
> You know who also has that right? Prisons.
You know who also has that right? You, on any network and hardware you own. My employer owns my laptop and the network it's connected to: of course it has every right to inspect its own property.
Careful there: there's a difference between capability and right. Many jurisdictions forbid employers from putting surveillance cameras in the bathroom, for instance. There is a level of privacy employees can legally expect.
E-mail is similar: if your employer intercepts your e-mail just because you read it at work, it is likely a breach of correspondence secrecy (in France it would be). This likely applies even if you're at fault for using the company's resources on personal matters.
While "don't trust your employer's network" is elementary operational security everyone should be taught at school, 90% of the population don't know what computers can do, let alone how they work. Because of that, their expectations are social, not technical.
The law says that if employers own the computers they provide to employees, then employers have broad latitude to monitor how those computers are used.
A different law says that only the recipient of a US Postal letter may open it. If you receive a personal letter at your office via US Postal Service, your employer cannot legally open it.
I'm not up to date on what the law says about FedEx or UPS.
For my last gig, I worked at one company on behalf of another. This workplace was quite explicit about intercepting and monitoring everything. I pondered for a second whether I should use this place's computers to log in to my employer's webmail. I gave up and did it, because I wasn't going to read or write work emails outside of office hours.
Personal stuff, however, I didn't dare.
This might be true in the US, but it's illegal in various other countries!
Work stuff… When you use the work computer to do stuff on behalf of your employer, it's not really private, and could arguably be monitored. (There are work regulations that limit how you can use that data, however. Using it to benchmark employees, for instance, is forbidden in France.)
The problem is, since proxy servers cannot automatically distinguish work-related activities from personal errands, they tend to cast a wide net. The implied distrust kinda disgusts me, but I reckon this puts big companies in a delicate situation: one does not simply trust thousands of people — too many single points of failure.
If it involves patching and recompiling the browser, it wouldn't be that trivial for your average sysadmin. Besides, I don't see why the admin would be hostile to the users being aware that they're being monitored. As you point out, companies generally disclose that anyway.
Agreed. We could argue all day about companies who think they need to intercept traffic, but why would anyone who believed they had a legitimate reason to do so want to do so silently without any notification?
A persistent infobar near the address bar, for instance, would work nicely. And anyone working in a hostile environment with such monitoring imposed on them (a bank, for instance) would then have a much clearer warning that they shouldn't use their work device for anything they want to keep private.
I don't really know how widespread key pinning is, but if it's reserved for the more sensitive websites (banking, e-commerce, etc.) it might make sense to at least issue a warning.
Most Google properties use key pinning in some form (though AFAIK through static pins rather than HTTP headers). I would suspect that most users in that group would see such a warning at least daily.
> I don't really know how widespread key pinning is [...]
"Visitors may be presented with a warning if they're behind a middlebox and you deploy HPKP" would probably be a good way to slow down HPKP deployment even further.
Had public key pinning existed before companies started intercepting their internal TLS traffic, that wouldn't be an issue. But since TLS interception was unfortunately already common when public key pinning was developed, dropping that exception would have meant that nobody could ever use it: enabling it would instantly break whichever site turned it on for every user behind a middlebox.
And in a certain way, users are already being warned, as long as their bank uses an EV certificate: the EV green bar won't appear, since EV certificates are only allowed from a hardcoded list of CAs (unfortunately, according to https://www.grc.com/ssl/ev.htm that's not the case for Internet Explorer).
That's the point...
Seems like it should be feasible to develop modules for HTTP frontends to detect policy MitM based on the techniques described in this article and enable conditional denial of service.
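Something like this, perhaps, assuming the frontend can see a JA3-style hash of the ClientHello (the fingerprint values below are made-up placeholders, not real browser fingerprints):

    # Compare the TLS fingerprint of the ClientHello with the browser
    # claimed in the User-Agent header, as in the paper's heuristics.
    EXPECTED_FINGERPRINTS = {
        "Firefox": {"placeholder-ja3-1", "placeholder-ja3-2"},
        "Chrome": {"placeholder-ja3-3"},
    }

    def looks_intercepted(user_agent: str, ja3_hash: str) -> bool:
        for browser, fingerprints in EXPECTED_FINGERPRINTS.items():
            if browser in user_agent:
                # Handshake doesn't match the claimed browser:
                # likely a middlebox re-originating the connection.
                return ja3_hash not in fingerprints
        return False  # unknown client, can't conclude anything

    # A frontend module could then return 403 when this is True, which is
    # the conditional denial of service mentioned above.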
Unfortunately, the user interface for client certificates is a complete pain, so they are rarely used. But they're the only true way for a server to make sure it's talking directly to a client, in the same way server certificates can allow a client to make sure it's talking directly to a server.
I'd like to see TLS Channel ID become available in browsers for this very reason, but it still doesn't seem to have much buy-in.
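For illustration, here's roughly what demanding a client certificate looks like on the server side in Python; a middlebox can't pass this check without holding the client's private key (file names and port are placeholders):

    import socket
    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="server.pem", keyfile="server.key")
    ctx.verify_mode = ssl.CERT_REQUIRED          # demand a client certificate
    ctx.load_verify_locations(cafile="clients-ca.pem")

    with socket.create_server(("", 8443)) as listener:
        with ctx.wrap_socket(listener, server_side=True) as tls_listener:
            conn, addr = tls_listener.accept()   # handshake happens here and
            print(conn.getpeercert())            # fails without a valid client cert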
Someone once proposed that browsers add a JS API for getting information about the certificate with which the page was loaded. The discussion is on a mailing list somewhere; you can find it if you try. It was shot down on the grounds that a page may contain resources from many origins, loaded over many different TLS connections, or served from cache, etc., so the notion of "what connection" a page was loaded over isn't necessarily very clear-cut. If this API were added, it would enable JS-based detection, which obviously wouldn't be foolproof against a determined attacker but would enable some monitoring of this issue to be done.
(Another hare-brained scheme I conceived of to detect this sort of thing is to have a special domain with a HPKP header set that doesn't match the certificates served, and a report-uri. If the browser makes a report, it means it's a normal browser acting correctly. If it doesn't, it means the page was served under a custom CA and HPKP has been disabled. Of course this is incredibly erratic as a means: it doesn't work if a browser doesn't support HPKP, you have to wait for the report to be made, or decide how long you wait before you conclude it's not coming, etc.)
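A sketch of what that canary header could look like, using the report-only variant so that a conforming browser POSTs a report when the deliberately wrong pins fail validation, without caching anything (the pins are placeholders and the report endpoint is hypothetical):

    Public-Key-Pins-Report-Only:
        pin-sha256="AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
        pin-sha256="BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=";
        report-uri="https://canary.example.com/hpkp-report"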
The MITM proxy is operated by the same department that has root on all the endpoints it's intercepting. If necessary, the "endpoint protection" product will grab the private key, or just scrape the details of the browser session from the browser's memory rather than at network level.
Chris Palmer also wrote a really great blog post about this: https://noncombatant.org/2015/11/24/what-is-hpkp-for/
This is one reason why Squid's ssl_bump feature has a splice action. Look for the SNI in Wireshark, then configure Squid to let Netflix packets go through untouched. I am not aware of any other way to get Netflix and Snapchat to work on a MITM'd network. My experience is that you have to create an exception and not intercept. Not 100% sure. YMMV.
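Something like this peek-and-splice setup, for instance (Squid 3.5+; the domain lists are illustrative):

    # Peek at the TLS handshake to learn the SNI, splice (pass through)
    # pinned apps untouched, and bump (intercept) everything else.
    acl tls_step1 at_step SslBump1
    acl no_bump ssl::server_name .netflix.com .nflxvideo.net .snapchat.com

    ssl_bump peek tls_step1
    ssl_bump splice no_bump
    ssl_bump bump all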
This still should not be the default; rather, corporations should have an easy about:config switch they can flip. The default should protect private users.
Meanwhile, of course, if someone did install a third-party root cert on my phone somehow, I'd never know because I always ignore & dismiss the wolf-crying warning.
The fundamental reason I disagree with you is that a computer should do exactly what its administrator wants it to do. If I install a root cert, it should trust that cert exactly as much (if not more than) every other root cert in the world.
Allowing uninspected outbound traffic makes it trivial for an attacker to exfiltrate data, an employee to accidentally or purposely release data, etc.
The arguments around warning fatigue are specious. The exact same mechanism that currently sets the number of warnings you get due to a pinning failure stemming from a user-added certificate to 0 could easily make it 1 instead, or be tied to a "don't show me these warnings again" checkbox. Experimentation and data could determine whether and to what degree this was effective, as is routinely done with related warnings changes that have far less potential upside. But when you bring up the possibility of solving the question with real data, the argument morphs into pure philosophy.
The philosophical points are twofold: first, a claim already raised here that fighting local admins is pointless because they'll always win and you don't want to get into an arms race. I attribute this to the fact that browsers developed on poorly sandboxed desktop platforms where admins are de facto root and no intelligent statement of any kind can be made about limitations of their behavior. On those platforms this isn't a crazy approach (although its shoulder-shrugging fatalism is distasteful to me even there). Fortunately, those aren't the only platforms we have today: on systems like Android the expectation is that corporate admins act through narrow, carefully controlled channels and will have no powers beyond those. There, the platform wins arguments with admins pretty much all the time. The arms race was over before it began. Without the risk of escalation from admins, the only question is whether the user is properly aware of the consequences of having had an extra CA added to their trust store, and again I refer to the point I made above: this can be settled with data. Rather than bend over backwards to give admins the benefit of the doubt, let's gather actual data on the degree to which users are comfortable with this behavior. And if they aren't, well, then the admin is an adversary and we have a duty to protect the user.
When you make this argument, however, the discussion becomes /really/ philosophical: people will start saying that limiting admin powers is anti-user-freedom, despite the fact that the user of the device clearly has a greater ability to make decisions for themselves about their security than in the free-for-all common to platforms of yore. Why that matters in this discussion is beyond me: even if you subscribe to this belief the horse is out of the barn and no amount of smugly screwing users will fix that. And some will assert that admins are users too, and that we need to serve those markets well. But the fact that people will give you money does not mean you should take it: if the data gathered above indicates that users do not want their traffic intercepted, then that, in my mind, should be final; if the amount of money on the other side convinces members of the security community to hurt users, then in my view we should just give up the pretense that we're the good guys.
Except it isn't. Even simple things like Cloudflare's SSL termination allow the traffic to go unencrypted over the internet and be intercepted by third parties.
> We deployed these heuristics on three [services]: (1) Mozilla Firefox update servers, (2) a set of popular e-commerce sites, and (3) the Cloudflare content distribution network. In each case, we find more than an order of magnitude more interception than previously estimated, ranging [...]

> As a class, interception products drastically reduce connection security. Most concerningly, 62% of traffic that traverses a network middlebox has reduced security and 58% of middlebox connections have severe vulnerabilities. We investigated popular antivirus and corporate proxies, finding that nearly all reduce connection security and that many introduce vulnerabilities (e.g., fail to validate certificates).
This won't solve all use cases, but selfishly, it will solve mine at DNSFilter: if a browser could recognize our SSL cert, or a special field in our cert, and present the user with a block message and a static link to learn more, it would eliminate the need for us to have our customers install a CA of ours and MITM traffic. We have not yet done so, and I'd prefer not to, but it seems to be the industry-standard way of avoiding users being confused by errors when we block/MITM an SSL site.
I might open-source it if there's interest but it's relatively basic.
I have a little side project where I try to implement a proxy for myself. I want to remove ads and be able to scan and cache downloads. I trust the adblock plugins and endpoint security products far less than a MITM proxy I wrote myself.
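The skeleton is roughly this, as a mitmproxy addon (a simplified sketch rather than the actual project; the blocklist hosts are placeholders):

    # Run with: mitmdump -s adblock.py
    from mitmproxy import http

    AD_HOSTS = {"ads.example.com", "tracker.example.net"}

    def request(flow: http.HTTPFlow) -> None:
        if flow.request.pretty_host in AD_HOSTS:
            # Answer locally with 204 No Content instead of forwarding
            flow.response = http.Response.make(204)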
Maybe this will have to wait until after the team from this paper releases their fingerprints: https://github.com/zakird/tlsfingerprints
I hate the AV industry in infosec. It does not work well and in most cases reduces security. Unbelievable that it's still required by a lot of compliance regimes.
It's an IP routing concept: AS (Autonomous System) Numbers are used to refer to the different networks (run by different ISPs and providers) that make up the internet.
If you've read that out of the paper, you read a different one. Quote:
"Our grading scale focuses on the security of the TLS
handshake and does not account for the additional HTTPS
validation checks present in many browsers, such as HSTS,
HPKP, OneCRL/CRLSets, certificate transparency validation,
and OCSP must-staple. None of the products we tested
supported these features."
Read: Some products got the absolute basics right. None of the solutions did anything that can reasonably be called "good".
> I expected much higher general standards.
I didn't. I don't expect anything from security appliance vendors.