
New Tools for Detecting HTTPS Interception - grittygrease
https://blog.cloudflare.com/monsters-in-the-middleboxes/
======
RKearney
> When a proxy root certificate is installed, Internet browsers lose the
> ability to validate the connection end-to-end, and must trust the proxy to
> maintain the security of the connection to ensure that sensitive data is
> protected.

Sort of like how CloudFlare does with their "Flexible SSL". As an end user, I
have no way of knowing if CloudFlare is proxying my credit card information
over clear-text to an insecure origin server.

~~~
rarecoil
> Sort of like how CloudFlare does with their "Flexible SSL". As an end user,
> I have no way of knowing if CloudFlare is proxying my credit card
> information over clear-text to an insecure origin server.

Cloudflare should really surface this to visitors when it is the case for a site behind their edge. Small UI changes to note it would likely go a long way toward encouraging better overall security.

When I use Cloudflare as a proxy, I also configure authenticated origin
pulls[1] for better endpoint hardening. This makes it a bit more difficult to
bypass the CF proxy, since hunting around on Shodan etc. for the server in the
IPv4 space echoing the same content will not work.

[1] [https://blog.cloudflare.com/protecting-the-origin-with-tls-authenticated-origin-pulls/](https://blog.cloudflare.com/protecting-the-origin-with-tls-authenticated-origin-pulls/)
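For reference, a minimal sketch of what the origin side of authenticated origin pulls can look like, assuming an nginx origin (the certificate paths are placeholders): the server only completes TLS handshakes from clients presenting a certificate signed by Cloudflare's origin-pull CA.

```nginx
server {
    listen 443 ssl;
    ssl_certificate         /etc/nginx/ssl/example.com.pem;
    ssl_certificate_key     /etc/nginx/ssl/example.com.key;

    # Only accept TLS clients (i.e. Cloudflare's edge) that present a
    # client certificate signed by the origin-pull CA.
    ssl_client_certificate  /etc/nginx/ssl/cloudflare-origin-pull-ca.pem;
    ssl_verify_client       on;
}
```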

~~~
tgsovlerkhgsel
Just a note, unless you're also validating the Host: header (and possibly even
then), Authenticated Origin Pull can be bypassed if someone does find the
right server:

[https://medium.com/@ss23/leveraging-cloudflares-authenticated-origin-pulls-for-pentesting-565c562ef1bb](https://medium.com/@ss23/leveraging-cloudflares-authenticated-origin-pulls-for-pentesting-565c562ef1bb)

(Could have been fixed in the past couple months, but I doubt it.)

Same for Access, by the way.
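The mitigation hinted at above — rejecting requests whose Host header doesn't belong to the site — can be sketched as a small check on the origin (the hostnames are placeholders):

```javascript
// Hypothetical allow-list of hostnames this origin actually serves.
const ALLOWED_HOSTS = new Set(['example.com', 'www.example.com']);

// Returns true only when the Host header names a site we serve, so a
// client that found the origin IP can't fetch content by sending an
// arbitrary Host value past the client-cert check.
function isAllowedHost(hostHeader) {
  if (!hostHeader) return false;
  // Strip an optional port (e.g. "example.com:443") before comparing.
  const host = hostHeader.split(':')[0].toLowerCase();
  return ALLOWED_HOSTS.has(host);
}
```

Wiring this into whatever server the origin runs (and returning 421/403 on a mismatch) is left out; the point is just that Authenticated Origin Pull alone doesn't pin the hostname.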

~~~
prdonahue
Sometime in 2Q you'll be able to upload your own client certificate (issued
under your own CA if you like) to be used with Authenticated Origin Pulls.

~~~
rarecoil
Note: This user is a (the?) Director of Product at Cloudflare.

------
parliament32
I like how this is published by Cloudflare, who is literally the biggest TLS
interceptor in history -- their entire business model is based around MITMing
connections.

If I were a group that needed to get eyes on TLS traffic without it looking too
suspicious, offering free reverse-proxy services would be the way to go (for
attack protection and CDN-like features, of course).

~~~
bsamuels
> If I was a group who needed to get eyes on TLS traffic without it looking
> too suspicious, offering free reverse-proxy services would be the way to go

that's a pretty over the top accusation to make without citing evidence

~~~
parliament32
Not really accusing them of anything, but CF is a giant vuln in how you'd
expect TLS to work. TLS is supposed to guarantee that data between your
browser and the web server is encrypted in transit, but with the CF business
model there's a very convenient decryption/re-encryption step right in the
middle of that.

Infiltrating CF is far, far easier than any of the other TLS-snooping methods
(breaking the encryption, generating a fake cert via a bad CA and intercepting,
etc.); it's not ridiculous to think the bogeyman-du-jour probably has fingers
in CF (with their knowledge or not, doesn't really matter), and it'd be
irresponsible to assume that TLS traffic going through CF is any more secure
than plaintext.

~~~
profmonocle
If you rely on any third-party for data processing/storage, you're accepting
some risk of them being compromised.

If you use CloudFlare/Akamai/Cloudfront/etc. as a CDN, a hacker could view
your site's traffic.

If you use G Suite/Microsoft 365/etc. for email or document storage, a hacker
could access your corporate documents and communications.

If you use EC2, Azure, or GCE, a hacker could access your storage buckets or
dump your VM's RAM.

It all comes down to your threat model. Is your threat model such that you
absolutely can't trust any third party with your data? If the answer is "yes"
then you should completely self-host and not use a CDN or anything similar.
(E.g. an email provider that specializes in serving
whistleblowers/political dissidents should definitely not use CDNs or public
cloud providers.)

But for most businesses it's an acceptable risk, especially since these giant
tech companies probably have better security than the businesses themselves do.

------
robocat
A lot of our clients use proxies, and they sometimes have terrible bugs that
cause connection problems. E.g. the other day we detected an obsolete Cisco
device that was leaking memory from one HTTPS session into another (a
government department too!).

We now log whether HTTP/2 or HTTP/1.1 is used by the browser via
JavaScript: `window.performance.getEntries()[0].nextHopProtocol`, which is
supported by most modern browsers.

This works because we use CloudFlare, so most of our users get HTTP/2 unless
they are behind a corporate proxy, which often downgrades the browser
connection to HTTP/1.1; e.g. Cisco WSA doesn't support HTTP/2 yet[1].
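The trick above can be wrapped in a small helper and reported back; `/log` is a placeholder endpoint, and the helper is written to be safe when no timing entries exist:

```javascript
// Pull the negotiated protocol ("h2", "http/1.1", ...) out of the
// performance timeline; "unknown" when entries are missing.
function negotiatedProtocol(entries) {
  const nav = entries && entries[0];
  return (nav && nav.nextHopProtocol) || 'unknown';
}

// In the browser, report it for later analysis:
// navigator.sendBeacon('/log', JSON.stringify({
//   proto: negotiatedProtocol(performance.getEntries()),
// }));
```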

We also log response headers on XMLHttpRequests that fail, because sometimes
the proxy inserts a header with its name and version (though the browser
sometimes strips headers for security reasons, e.g. CORS, and timeouts usually
have no response headers).

1\. [https://quickview.cloudapps.cisco.com/quickview/bug/CSCuv32968](https://quickview.cloudapps.cisco.com/quickview/bug/CSCuv32968)
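The failed-XHR header logging can be sketched as a filter over `xhr.getAllResponseHeaders()` output; the header patterns here are illustrative guesses at how intercepting proxies identify themselves:

```javascript
// getAllResponseHeaders() returns CRLF-separated "name: value" lines.
// Keep only headers that look like a proxy announcing itself.
function proxyHeaderHints(rawHeaders) {
  return rawHeaders
    .split('\r\n')
    .filter((line) => /^(via|server|x-.*proxy.*):/i.test(line));
}
```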

~~~
acdha
One other handy tool can be
[https://badssl.com/dashboard/](https://badssl.com/dashboard/) for doing a
quick scan for surprises — this is especially useful if you're testing managed
workstations and want to confirm that someone hasn't “fixed” a problem by
deploying a group-policy update which breaks TLS security.

------
hexadec
I dislike this as a user, but like it as a security professional. It is
critical to data loss prevention (sending SSNs to an HTTPS site could be hidden
otherwise) but is rarely done well.

The ability to degrade encryption cipher suites, and the inability of most of
these boxes to invalidate certificates, results in lower security for most
users. I have seen sites with expired certs passed through to users, since the
interception replaces the site's cert with one issued under the proxy's
trusted root. This means the browser ends up trusting that cert and showing
content that would normally be blocked. This is an interesting mess we have
gotten ourselves into. Also interesting when taken in light of the BITS/Andrew
Kennedy comments on TLS 1.3, which directly impact this ability.
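The DLP use case above amounts to pattern-matching request bodies after the proxy has decrypted them; a toy sketch (the regex is illustrative, not production-grade detection):

```javascript
// Match US SSNs in dashed form, e.g. 123-45-6789; word boundaries keep
// longer digit runs from matching.
const SSN_PATTERN = /\b\d{3}-\d{2}-\d{4}\b/;

// A TLS-terminating proxy would run a check like this over decrypted
// outbound bodies before forwarding them.
function containsSSN(body) {
  return SSN_PATTERN.test(body);
}
```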

~~~
jcims
[https://badssl.com/](https://badssl.com/) is a great way to test how much
your MITM proxy is masking insecure HTTPS comms. If a test passes that
shouldn't, ship an email to your proxy team.

------
kodablah
I think the next logical step is to give those of us who care more info on the
desktop about what certs/chains are being used. While FF has extension
support for viewing cert info, Chrome does not yet[0]. Once there, it would be
reasonable to be able to easily pull up my root CA list and see which roots my
browser actually chains to and how often (I'd love to trim out the mostly
unused ones). Of course this does nothing for a process using its own HTTP
client, hence the MITM checking.

0 -
[https://bugs.chromium.org/p/chromium/issues/detail?id=628819](https://bugs.chromium.org/p/chromium/issues/detail?id=628819)

~~~
isostatic
I'd like to attach my own restrictions to a given root. I'm happy for my
company's root certificate to identify as *.company.com, but I don't want it
identifying as www.mybank.com; have that as an option under "edit trust". Same
goes for public roots: if I choose to disallow "China Financial Certification
Authority" as a normal root cert, and I go to "chinabank.com" or wherever, I
should have a message pop up saying "this site isn't allowed", and be able to
add an exception for that specific certificate from China Financial
Certification Authority (although not if it's MITMed).

These settings should persist through browser upgrades too.

------
asaph
This is rather ironic coming from Cloudflare given that their main product is
a TLS proxy which essentially has man-in-the-middle access to all https
requests running through their systems.

~~~
acdha
“interception” implies that this is being done without your knowledge and
permission. That isn't a useful way to talk about a common class of service
which you explicitly opt-in to use with a contract.

~~~
asaph
Website owners may have knowledge that Cloudflare sits between their users and
their site. But the end users generally do not. They think they have an
encrypted connection directly to the website. They would be surprised to learn
that a third party has eavesdropping capabilities.

~~~
acdha
Ultimately you’re trusting the site owner to be responsible and almost never
have the ability to audit them. Using CloudFlare is another trust point, just
like using shared hosting. If you don’t complain about AWS, Digital Ocean,
etc. in the same way, that’s just a sign you need to think about the threat
model more.

------
aboutruby
There is a public dashboard:
[https://malcolm.cloudflare.com](https://malcolm.cloudflare.com)

------
rocqua
They hate on TLS-terminating proxies, and are jubilant about TLS-terminating
reverse proxies.

That is: clients don't get to decide about encryption; only servers do.

Partially, this makes technical sense. There are fewer servers, and the
chance that they get it right is a lot higher. On the other hand, this is
nothing more than the platforms pulling all power towards themselves, getting
users used to the paradigm "we will decide what kind of encryption you get".

------
fulafel
WTF, the rate of interception is so high. (search for "prevalence of HTTPS
interception")

I think browsers are way too friendly to this practice. IT departments &
oppressive governments are the main culprits obviously, but the browser and
the TLS implementation are supposed to be on the user's side.

------
jve
I wonder if Cloudflare interprets our connections as MITMed or not. We have
group policies, configuring hosts to have specific cipher suite order and
disabling weaker ones. So basically adjusting TLS settings, but not actually
MITMing.

------
userbinator
While I'm sure a lot of people read this and think "awesome, more security", I
think "no, another hurdle in the DRM-ish battle to keep control over what the
devices on your network are doing"; especially after seeing some comments here
describing the logging (and potential acting on) of the results from these
fingerprinting techniques.

I MITM my network so I can filter out ads and other crap, inject custom
stylesheets, and otherwise modify pages so that I can maintain a sane browsing
experience even on devices with severely castrated browsers. Need to control
JS on something that can't even let you turn it off? What better than
_stripping out the <script> tags completely_ before it even gets there. Want
to see the full version of the page instead of some mobile portal? I can
change the user agent and other headers on-the-fly. I can also check if
something is phoning home, and what exactly its communication is:

[https://news.ycombinator.com/item?id=6759426](https://news.ycombinator.com/item?id=6759426)

Given the situation with IoT and other "smart" things these days, along with
the trend of walled garden ecosystems and HTTPS Everywhere (even for DNS!), I
would almost consider an HTTPS intercepting proxy essential for security and
privacy purposes. Funny that the article makes no mention of this, but only
the usual "evil corporate proxies" scaremongering... then again, it wouldn't
fit their narrative. Proxomitron, Proxydomo, Proximodo(!), AdSubtract,
Ad Muncher, and the list goes on. These were quite popular a decade ago, and
would've remained so had the "security-cult" not driven them into obscurity.

This feels like just another one of those "we want to ensure we force all our
content down your throat and make you powerless to stop it" schemes, and I'm
pretty confident that I'm already seeing it in action. The previous technique
was running JS on the page to detect modifications (including those produced
by adblockers), now they're moving that war deeper.

edit: Wow, downvoted already.

tl;dr: My network, my traffic. Piss off with your nannying!!!

~~~
isostatic
You're right. Many tech people today tend to be hostile toward the sort of
freedom and hacking that geeks used to have.

There are ways around this - the detection seems to work by inspecting which
TLS ciphers are offered and comparing them with what the claimed user agent
should send.

A MITM proxy could easily mimic the expected list. On the flip side, Cloudflare
could easily get false positives for people with non-default settings (which I
suspect is measured in the <0.0001% range, so websites won't really care)

These are the default firefox cipher settings on Firefox 65

[http://imgur.com/fVvUBdUl.png](http://imgur.com/fVvUBdUl.png)

And here's my desktop's current settings

[http://imgur.com/A72WA2hl.png](http://imgur.com/A72WA2hl.png)

(which disables ciphers without a DH key exchange - I also block TLS 1.0)
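A sketch of the comparison described above, assuming illustrative per-user-agent baselines (these are placeholders, not real browser fingerprints):

```javascript
// Hypothetical baselines of cipher suites the stock browser offers.
const EXPECTED = {
  'Firefox/65': ['TLS_AES_128_GCM_SHA256', 'TLS_AES_256_GCM_SHA384'],
};

// A user who disabled ciphers offers a subset of the baseline; a MITM
// box typically offers suites the browser never would, so flag only
// suites that were added.
function looksIntercepted(userAgentKey, offeredCiphers) {
  const expected = EXPECTED[userAgentKey];
  if (!expected) return false; // no baseline, no verdict
  return offeredCiphers.some((c) => !expected.includes(c));
}
```

Flagging only additions keeps the disabled-cipher case from being misclassified; real heuristics (e.g. JA3) hash the full ordered list and are stricter, which is where those false positives would come from.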

------
drsopp
> a “monster-in-the-middle” or MITM

What happened to man-in-the-middle?

~~~
badsectoracula
Eaten by the monster, probably.

First time I heard this, and I already prefer it to man-in-the-middle as it
sounds funnier :-).

