Step 2) leak cleartext from said MITM'd connections to the entire Internet
I recently noted that Cloudflare have probably caused more damage to popular cryptography than any other entity since the 2008 Debian OpenSSL bug (thanks to their "flexible" ""SSL"" """feature"""), but now I'm certain of it.
"Trust us" doesn't fly any more; it simply isn't good enough. Sorry, you lost my vote. Not even once.
edit: why the revulsion? This bug would have been caught with Valgrind, and by the sounds of it with nothing more complex than feeding their httpd a random sampling of live inputs for an hour or two.
I'd guess it's because of the crude and reductive way you describe the service Cloudflare provides. I don't know what type of programming you do, but many small services don't have the infrastructure to mitigate the kind of attacks Cloudflare deals with, and they wouldn't be around without services like this.
I don't like the internet becoming centralized into a few small places that mitigate DDoS attacks like this, but I like the alternative (being held ransom by anyone with access to a botnet) even less.
I'm going to take a more even-handed approach than what you're suggesting. Any time you work with a service like this you risk these kinds of things - it's part of the implicit cost/benefit analysis humans do every day. I'm not ready to throw out the baby with the bathwater because of one issue. I'm not sure what alternative you're suggesting (I didn't see any suggestions, just a lot of ranting, which might also contribute to the 'revulsion'), but it doesn't sound any better than what we have.
Using services like Cloudflare as a 'fix' is wrecking the decentralized principles of the Internet. At that point we might as well just write all apps as Facebook widgets.
That is a separate step. First you either take cover or help.
Do you see a problem with that?
Everyone in the "cloud" is able to do the migration even without having prepared a disaster recovery plan ahead of time.
Extreme centralization of the Internet is not a "baby", except maybe in the sense of a cuckoo's egg.
But I'm willing to bet the mentality of this comment is highly representative of many web developers and service providers. They will not seek to fix anything, because they don't see this state of things as a problem in the first place.
Cloud means extreme centralization.
It means giving your data to a third party you don't control.
Why does our networked software have to assume a centralized topology?
In the days when developed countries had dialup, protocols (IRC, Email, etc.) were all decentralized. Today, all the famous developers live with fancy broadband internet connections and forgot what it's like to have to think about netsplits.
The result... all the software is either "online" or broken.
There shouldn't be an "online" or "offline". There should be "do I have access to server X currently?"
Why do we need Google Docs to collaborate on a document if we are all in the same classroom?
Why do we need centralized facebook server farms whose engineers post on highscalability how they enable us all to post petabytes of photos and comment to our friends?
Why do we need centralized sites to comment at all? Each thread is local to its parent.
Why does India need internet.org from facebook?
If communities could have a network that survives without an uplink to the outside world, then a DDoS from the global internet would just cut off that network's hosting of documents to outsiders. They'd still be able to do EVERYTHING locally - plan dinners, book a local appointment, send an email, etc. - and even post things out to the greater internet.
This is a future I want to see.
We already have mesh networks. We need more web based software to run these things.
That's what we are building at qbix.com btw.
Tim Berners-Lee, the "father" of the World Wide Web, is currently advocating for exactly what you are asking for.
(Now I'm trawling Crunchbase to see if I can work out which investors are NSA front companies, then I'm gonna look to see what _else_ they and their partners have invested in...)
I don't actually believe that, but it isn't an unreasonable theory.
I once came up with that exact concept for a nation-state subversion. It would even pay for itself over time. I kept thinking back to it seeing the rise of the CDN's and the security approaches that trust them.
After the Snowden leaks it really seems nonsensical to give Cloudflare the benefit of the doubt and assume that they aren't compromised.
Or prevented with abstractions that do bounds checking. Or even by just using Ragel with a memory-safe language, which would have prevented all issues like that from ever happening - and probably would have been less work, even counting the reimplementation of an HTTP proxy from scratch.
Drastically reduced, but never quite eliminated.
For instance, even in a GC'd language - especially in this domain - you might do some data pooling to reduce GC overhead. Forget to clear data in the pool, and the same kind of error can result.
But yes, I feel like security sensitive stuff like this shouldn't be done in C / C++ any more.
I think you are overestimating the number of people doing their regular browsing through Tor.
I think the decision that goes on in the minds of most site operators is "fuck convenience and sleazy Tor users, I want my site to be as safe as they can make it".
It's worth noting that other reverse proxy providers I worked with when freelancing expose the very same controls to site owners. Based on anecdotal knowledge, I'd say anonymized users accessing a site behind CF are subject to less hassle than those accessing a site behind something like X4B with comparable settings.
Sure, requests passing through Tor are more likely to be malicious, but given Tor's bandwidth constraints the adversary seems limited.
The costs aren't only the lost business from people like you, but also the people who should be using Tor giving in. There's some wisdom in people researching even something as mundane as what their dog ingested through anonymized services, much less other medical questions.
Over in App Engine land, someone bypassed their JVM sandbox and managed to extract a copy of their JVM image, which included much of their revered base system statically linked into something like a 500 MB binary.
Sorry, I'd have to go digging to find references to either of these incidents. At least customer data wasn't leaking in either case, but suffice to say it's a little bit of the pot calling the kettle black.
And finally, let's not forget the China incident which, rumour has it, resulted in a system compromise at Google reaching right to the heart of their engineering organization. Of course, they didn't get roasted the way Yahoo recently did over their password leak.
A site using Flexible SSL is no less secure than one using http://, and in fact is more secure, because nobody can MitM the connection between CloudFlare and the end user. The only thing vulnerable is the connection between the website and CloudFlare (~~and only to MitM, not to passive sniffing~~ EDIT: this isn't true, see the edit below), but that's a much smaller and much better-protected surface area.
Now it's quite obvious that the alternative SSL options are much better because they secure the data properly the whole way. But claiming that Flexible SSL is somehow undermining the security of the web is extremely hyperbolic.
EDIT: The connection between the origin server and CloudFlare can in fact be passively sniffed. I thought Flexible SSL was the option to use an arbitrary self-signed cert, but it actually means no encryption.
Edit: Dear downvoters, can you please explain why you disagree? What I wrote really shouldn't be controversial in the least, so I don't understand the drive-by downvotes.
No company is likely to handle your payment details completely securely. You're relying on it working out on sheer luck most of the time and chargebacks on the rest.
Then there's the whole lone-auditor thing where a very large data-center or three are being audited by a single person over the course of two weeks, or less. That person is absolutely bombarded with information about an environment that is foreign to them. The end result I think is that so far companies have had it very easy to get by. They only have to pay for a week, or two at most, and whatever limited findings they get are fixed and they move on to the next year.
If companies actually had to live with a slower and more methodical audit, there would be many more findings and a lot more money spent, both on the auditing process and the resulting cleanup. The upshot is this would drive actual innovation in the space of having proper logging, file integrity, encryption, access controls, etc.
The whole audit industry is just... icky. It needs a massive overhaul, and the financials need to be forced to pay for it.
This is true, but conversely there is no legitimate use case for Flexible SSL. Having a datastore like Redis or MongoDB that by default listens insecurely on any address is almost as bad, and such defaults often compromise the security of a site if it e.g. sends your data across the internet to one of those. But at least there's a more-or-less legitimate use case for that default on a secured network: it's at least possible that someone using it isn't deceiving their users. Whereas anyone using Flexible SSL is necessarily deceiving their users (you could argue users might genuinely think "I don't trust my local cafe operator but I do trust the completely public, unsecured internet", but I don't think that's a coherent position for anyone to take).
That said, now that we have Let's Encrypt, and as more tooling gains support for automatically handling that, the value of Flexible SSL is going down, and I do hope they retire it eventually.
That's putting the cart before the horse. "Every website should offer" authentication and confidentiality, that's why we want every website to use HTTPS; having a URL that starts with https:// is not a goal in itself.
Security is not binary, but you keep treating it like it is. Security is a continuum, and any progress you make towards perfect security is good.
I would strongly dispute the "much". If anything the local network is more likely to be trustworthy than the remote network - people keep talking about cafe wifi, but the user likely knows who's running the cafe wifi and can complain if they start injecting ads etc. Whereas the user has literally no idea who might be on the connection path between cloudflare and the website and listening in, MitMing or anything.
http:// versus https:// is inherently binary; there's no way to display a connection as half-https. If it doesn't mean "encrypted while transiting the public Internet" at least, then what does it mean?
Indeed - so we should be applying all of those against CloudFlare, and any other organization that offers or uses a "Flexible SSL"-like product, as firmly as we can.
If the company is handling sensitive data, such as credit card information or medical information, there are already regulations to handle that. There's literally no point in trying to add regulations around Flexible SSL specifically, since using Flexible SSL with that sensitive data likely already contravenes those regulations, and companies handling such data therefore shouldn't be using it.
If the company isn't handling sensitive data, then again there's no point in adding regulations around Flexible SSL, because what possible benefit would that serve?
Flexible SSL is simply one tool that websites can use. It's intended to be used by sites that would otherwise just be using http://. Sites that do protect more sensitive information certainly could use it, but that would be a bad decision on their part. And we don't need regulations around it specifically, because there's also a million other bad decisions that company could make that would expose that data, and there's really nothing special about Flexible SSL that makes it in particular need of regulation.
I think serving a site over https:// amounts to advertising that information sent to/from that site will not be sent unencrypted over the public internet, and users will use that when deciding what things are or aren't safe to enter into that site. Surely there are regulations that already apply to that? And in any case regulations are only one of the options you mentioned; we should be applying a lot more shame to CloudFlare and anyone who uses "Flexible SSL".
In their defense, this is a flaw of the whole SSL/TLS security model. I think even Google did this before Snowden: presented you with https:// URLs but proxied everything internally in clear text (they claim they don't do it now). Still, you can be pretty sure that many https websites pass traffic in clear text to their backends and don't necessarily take security even a little bit seriously.
EDIT: Original comment said he could pull content off Google results. To respond to the new one:
No, they're not worlds apart when you're on the backbone. They still go through other people's datacenters and that's what causes the problem - we're not talking about stuff that goes over wifi or corporate networks here - we're talking generally just big ISPs in both cases.
It can be, in several ways. Most critically, it stops browsers from detecting the connection as insecure and applying mitigations.
Browsers also prevent HTTPS sites from embedding active content from HTTP sites.
The reality is, you're much more likely to get sniffed on public wifi, or even your school or workplace network, than on the link between CloudFlare and a datacenter; generally speaking, if someone can sniff traffic at a DC they can already do much more. So it's still a respectably huge security gain for users.
And they do offer a good way to secure this connection too: full SSL, using a certificate signed by them.
Would you be more comfortable if they offered another way to represent this to the browser? An X-Endpoint-Insecure header or something like that?
Yes, definitely, _Cloudflare_ should own this and push it through. You know they won't though because that would inconvenience their customers.
So no, it's not 100% secure, but it's far far better than having an unsecured http:// connection.
As for the green lock, you can blame that on Chrome. I have no idea why they insist on using a green lock and green "Secure" text for DV certs. Safari only uses a green lock / green text for EV certs, which is a lot better (I don't know offhand what Firefox or Edge do). Of course, you could have an EV cert and still use Flexible SSL, but anyone who cares enough to get an EV cert should know better than to use Flexible SSL anyway. And there are a great many ways to make your server insecure; using Flexible SSL is very far from the worst.
All that said, it would be great if CloudFlare would just stop offering Flexible SSL in favor of the self-signed CSR approach. Any CloudFlare customer who can create their own cert to talk to CloudFlare can also create a CSR to get a cert from CloudFlare just as easily, so it's not clear to me why they still even offer Flexible SSL.
EDIT: I thought Flexible SSL was the option to use an arbitrary self-signed cert on the origin server. gkop pointed out that, no, Flexible SSL means no encryption at all.
How is it secure? CloudFlare allows you to send this traffic in the clear. If they required this traffic be HTTPS, that would be far better for web security.
When observing non-technical users, I still see people clicking through blatant full page cert errors after connecting to WiFi because they've been implicitly trained that it's the captive portal making them sign in.