As would be the preferable end-of-life date for SSLv3 and HTTP.
(Related, a big thanks to Google for un-trusting that whole big Symantec security chain. Yeah, I realize they weren't competent, but I also realize that it had no practical effect on my site's security, as I don't have nation states or motivated hackers in my threat model.)
Security measures should be weighed like everything else - as cost/benefit. In many cases the cost of the security is not worth it.
Edit: I'd just like to point out the irony in some of the replies to this comment. I'm complaining about zealotry, and the vast majority of nasty replies I've received to this comment are using language that only zealots and ideologues would use. My god, you'd think I'm killing puppies based on some of these responses. Nope, just advocating for using HTTPS where it makes sense, and not having it forced down your throat.
Those devs are gonna be really surprised when they find out that unencrypted connections are routinely tampered with.
> they either don't know or don't care about all the effort and pain they're creating
You have not been paying attention to the hundreds of tools available to make HTTPS painless.
> until someone at Big G decided they weren't.
And Mozilla. And countless research papers. And real-world attacks that are reported over and over again. The fact is that the global Web has become hostile, regardless of your prejudice against Google's Web security teams.
> In many cases the cost of the security is not worth it.
The problem is that it's not YOUR security, it's other people's. If websites don't implement HTTPS, it's the users of the Web who pay the price. It's their privacy that suffers. And the website becomes easy to impersonate and manipulate, increasing the liability of having a website. HTTP is bad news all around.
I hardly ever see people talk about this use case and how to solve it with HTTPS everywhere. And it's super widely used, e.g. Debian repositories:
deb https://deb.debian.org/debian stable main
And because HTTPS is nothing more than baseline security, it's possible to automate it with things like Let's Encrypt and not add any more checking beyond current control of DNS or HTTP traffic to the domain.
(Another confusion along these lines is assuming HTTPS is useful as an assertion that a site isn't malware. It asserts no such thing, only that the site is who it claims to be and network attackers are not present. If I am the person who registered paypal-online-secure-totes-legit.com, I should be able to get a cert for it, because HTTPS attests to nothing else.)
Don't get me wrong: GPG signatures with a pinned public key are a lot better than trusting the TLS of a random mirror.
But isn't it nice to have two layers? The two key systems are independent and orthogonal; that seems like a solid win.
Need I remind you of Heartbleed (OpenSSL), or the very Debian-specific key-generation bug (the crippled OpenSSL RNG) from years ago?
There will always be bugs, we can only hope they aren't exposed concurrently :)
The majority of HTTPS traffic is sniffable and largely non-confidential, unless you pad every file and web request to several gigabytes in size.
Does your website use gzip? Good, now padding won't help you either, unless the padding is far bigger than the original content. Oh, and make sure you defend against timing attacks as well! Passive sniffers totally won't identify a specific webpage based on its generation time, will they?!
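The size-fingerprinting claim above is easy to demonstrate. Here is a minimal sketch under stated assumptions: the page contents are hypothetical (random bytes stand in for real HTML), and the TLS overhead constant is illustrative, not an exact figure. The point is only that an observer who sees nothing but ciphertext lengths can often tell which page was fetched.

```python
import os
import zlib

# Three hypothetical pages; random bodies stand in for real content so the
# compressed sizes come out distinct (real pages also differ, just less neatly).
PAGES = {
    "/index.html": os.urandom(4000),
    "/about.html": os.urandom(1500),
    "/admin.html": os.urandom(9000),
}

# What an on-path observer sees: compressed size plus a roughly constant
# TLS record overhead (ciphertext length tracks plaintext length).
TLS_OVERHEAD = 29  # illustrative constant, not a real protocol figure

def observed_length(body):
    return len(zlib.compress(body)) + TLS_OVERHEAD

# The observer builds a fingerprint table by crawling the public site once.
fingerprints = {observed_length(body): path for path, body in PAGES.items()}

def guess_page(sniffed_len):
    """Match a sniffed record length against known page sizes; None if unknown."""
    return fingerprints.get(sniffed_len)

# Later, a victim fetches a page over HTTPS. Nothing is decrypted, yet:
assert guess_page(observed_length(PAGES["/admin.html"])) == "/admin.html"
```

Real-world traffic-analysis attacks are noisier than this (multiple records, shared resources, padding schemes), but the underlying leak is exactly this mapping from length to identity.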
As for authenticity… Surely you are going to use certificate pinning (which has already been removed from Google Chrome for political reasons). And personally sue the certificate issuer when Certificate Transparency logs reveal that one of Let's Encrypt's employees sold a bunch of private keys to third parties. Of course, that won't protect authenticity, but at least you will avenge it, right?
SSL-protected HTTP is just barely ahead of unencrypted HTTP in terms of transport-level security. But it is being sold as a silver bullet, and people like you are the ones to blame.
I bet the SNI issues will eventually be fixed too.
And yes, with momentum behind Certificate Transparency, it could definitely hold CAs' feet to the fire :)
TLS is no silver bullet, but it's a good base layer to always add.
Consider you're a cloud provider running customer images. If everyone downloaded the same package via HTTPS over and over again, the incurred network utilization would be massive (for both you and the Debian repository in general) compared to everyone using HTTP and verifying via GPG, all served from the transparent Squid cache you set up on the local network.
It would probably be better to use a distributed system design for this: BitTorrent, or maybe IPFS.
If you're doing this, then you've made your own HTTP client so you can do whatever you want.
"HTTPS Everywhere" is a web browser thing.
Because the rest of the content is not verified? That's the whole point of HTTPS!
The whole point of having GPG is that you (as the distributor/debian repo/whatever) have already somehow distributed the public key to your clients (customers/debian installations/whatever). Having HTTPS is redundant as it is presumed that initial key distribution was done securely.
I wonder if anyone will be surprised when they learn how HTTPS and HTTP/2 will be used to push more advertising to users and exfiltrate more user data from them than HTTP would ever allow.
Will these "advances" benefit users more than they benefit the companies serving ads, collecting user data and "overseeing the www" generally? Is there a trade-off?
To users, will protecting traffic from manipulation be viewed as a step forward if as a result they only see an increase in ads and data collection?
Even more, perhaps they will have limited ability to "see" the increase in data collection if they have effectively no control over the encryption process. (e.g., too complex, inability to monitor the data being sent, etc.)
We're talking only about HTTPS. Adding HTTP/2 just muddies the conversation.
Care to give any argument for how adding a TLS layer over the exact same protocol (HTTP/1.1) will be used to do that?
Except most big orgs now employ MitM tools like BlueCoat to sniff SSL connections too.
> You have not been paying attention to the hundreds of tools available to make HTTPS painless.
I have, and they don't. They make it easier, but you know what's truly painless? Hosting an html file over HTTP. What happens when Let's Encrypt is down for an extended period? What happens when someone compromises them?
> And real-world attacks that are reported over and over again.
Care to link to a few?
> The problem is that it's not YOUR security, it's other people's.
Oh, so you know better than me what kind of content is on my site? So a static site with my resume needs SSL then to protect the other users?
From Friday, in which Turkey takes advantage of HTTP downloads to install spyware on YPG computers: https://citizenlab.ca/2018/03/bad-traffic-sandvines-packetlo...
Without TLS, how do YOU know that the user is receiving your static resume? Any MitM can tamper with the connection and replace your content with something malicious. With properly configured TLS that's simply not possible (with the exception you describe in corporate settings, where BlueCoat's cert has to be added to the machine's trust store in order for that sniffing to be possible). Hopefully in the future even that won't be possible.
The content of your site is irrelevant. We do know that your lack of concern for your users' safety is a problem, though.
I also wish that managing certs was better, but until then, passing negative externalities to your users is pretty sleazy.
Absolutely yes. Without that layer of security, anyone looking at your resume could either be served something that's not your resume (to your professional detriment) or more likely, the malware-of-the-week. (Also to your professional detriment).
Do you care for the general safety of web users? Secure your shit. If not for them, for your own career.
But how likely is it to actually happen? For the former, someone would need to target both you and specifically the person who you think will view your resume, and that's, let's be honest, completely unlikely for most people. The second case I can see happening more in theory as it's less discriminating, but does it actually happen often enough in real life to the point where it's a real concern?
FWIW, I have HTTPS on all my websites (because, as everyone mentioned already, it's dead simple to add) including personal and internal, but I still question the probability of an attack on any of them actually happening.
Basically, I see it this way:
- You can be MitMed broadly, like the Xfinity case, but the company in question can't really do anything crazy like inject viruses or do something that would cause the user to actually notice because then their ass is going to be on the line when it's exposed that Comcast installed viruses on millions of computers or stole everyone's data.
- Or you can be MitMed specifically, which will cause professional detriment, but would require someone to specifically target you and your users. And I don't see this as that likely for the average Joe.
Really, what I would like to know is: How realistic is it that I, as a site owner, will be adversely affected by the MitM that could theoretically happen to my users on HTTP?
Consider the websites you view every day.. most of them are probably HTTPS by now.
It's the wild west, basically. Regardless of how likely it is that someone is waiting for you to hit a HTTP site right now so they can screw with it, why even take that risk when the alternative is so easy?
I've already covered the general case above. Anyone in a position to intercept HTTP communications like that (into every unencrypted connection) is in a position where if they intercept and do enough to materially harm me or my users through their act, then they will likely be discovered and the world will turn against them. They have far more to lose than to gain by doing something actively malicious that can be perceived by the user. So I don't realistically see it happening.
> Regardless of how likely it is that someone is waiting for you to hit a HTTP site right now so they can screw with it, why even take that risk when the alternative is so easy?
I already said I use HTTPS, so your advice isn't really warranted. I also specifically asked how likely it is, so you can't just "regardless" it away. I get that there's a theoretical risk, and I've already addressed it. But as a thought experiment, it is helpful to know how realistic the threat actually is. So far, I haven't really been convinced it actually is anything other than a theoretical attack vector.
Internet providers have been injecting ads into websites for years. Hackers and governments have been doing the same to executables and other forms of unprotected payload.
Hashes, cryptographic signatures, executable signing, Content-Security-Policy, sub-resource integrity: numerous specifications have been created to address the integrity of the web. There is no indication that those specifications have failed (and in fact, they remain useful even after the widespread adoption of HTTPS).
For the most part, the integrity of modern web communication is already protected even in the absence of SSL. The only missing piece is somehow verifying the integrity of the initial HTML page.
"Injection" is the process of inserting content into the payload of a transport stream somewhere along its network path other than the origin. To prevent injection, you simply need to verify the contents of the payload are the same as they were at the origin. There are many ways to do this.
One method is a checksum. Simply provide a checksum of the payload in the header of the message. The browser would verify the checksum before rendering the page. However, if you can modify the payload, you could also modify this header.
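A toy version of that checksum-in-a-header scheme (the header name here is made up for illustration) shows both the mechanism and the weakness just described:

```python
import hashlib

def make_response(body):
    # Origin includes a checksum of the payload in a (hypothetical) header.
    return {
        "headers": {"X-Checksum-SHA256": hashlib.sha256(body).hexdigest()},
        "body": body,
    }

def verify(response):
    # Client recomputes the checksum before rendering the page.
    expected = response["headers"]["X-Checksum-SHA256"]
    return hashlib.sha256(response["body"]).hexdigest() == expected

resp = make_response(b"<html>hello</html>")
assert verify(resp)  # untampered payload verifies

# But an on-path attacker who can rewrite the body can rewrite the header too:
resp["body"] = b"<html>injected ad</html>"
resp["headers"]["X-Checksum-SHA256"] = hashlib.sha256(resp["body"]).hexdigest()
assert verify(resp)  # verification still passes; a bare checksum proves nothing
```

This is exactly why the argument moves on to signatures and then PKI: the verifier needs something the attacker cannot forge, not just something it can recompute.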
The next method is to use a cryptographic signature. By signing the checksum, you can use a public key to verify the checksum was created by the origin. However, if the first transfer of the public key is not secure, an attacker can replace it with their own public key, making it impossible to tell if this is the origin's content.
One way to solve this is with PKI. If a client maintains a list of trusted certificate authorities, it can verify signed messages in a way that an attacker cannot circumvent by injection. Now we can verify not only that the payload has not changed, but also who signed it (which key, or certificate).
Note that this does not require a secure transport tunnel. Your payload is in the clear, and thus can be easily cached and proxied by any intermediary, but they cannot change your data. So why don't we do this?
Simple: the people who have the most influence over these technologies do not want plaintext data on the network, even if its authenticity and integrity are assured. They value privacy over all else, to the point of detriment to users and organizations who would otherwise benefit from such capability.
However, it's not that hard to avoid replay after the cache expires. HTTP sends the Date of the response along with Cache-Control instructions. If the headers are also signed, they can be verified by the client. If the client sees that the response has clearly expired, it can discard the document. As a dirtier hack, it can also retry with a new unique query string, or provide an HTTP header with a token which must be returned in the response.
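That expiry check is straightforward to sketch. Assuming the Date and Cache-Control headers are covered by the signature, a client could do something like the following (a simplified max-age parse, not a full HTTP caching model):

```python
from datetime import datetime, timedelta, timezone
from email.utils import parsedate_to_datetime

def is_expired(headers, now=None):
    """Reject a (signed) response whose freshness lifetime has passed."""
    now = now or datetime.now(timezone.utc)
    date = parsedate_to_datetime(headers["Date"])
    # Naive parse of "max-age=N" out of Cache-Control; real clients do more.
    max_age = 0
    for directive in headers.get("Cache-Control", "").split(","):
        directive = directive.strip()
        if directive.startswith("max-age="):
            max_age = int(directive.split("=", 1)[1])
    return now > date + timedelta(seconds=max_age)

headers = {"Date": "Mon, 26 Mar 2018 10:00:00 GMT",
           "Cache-Control": "max-age=3600"}
fresh = datetime(2018, 3, 26, 10, 30, tzinfo=timezone.utc)
stale = datetime(2018, 3, 26, 12, 0, tzinfo=timezone.utc)
assert not is_expired(headers, fresh)  # within the hour: accept
assert is_expired(headers, stale)      # two hours later: discard as replayed
```

The key assumption is that the attacker cannot re-sign a fresher Date, so an old captured response eventually becomes useless on its own.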
By the way, signing is not equal to "null encryption". Signing can be done in advance, once. Signed data can be served via sendfile(). It does not incur CPU overhead on each request. Signing does not require communicating with untrusted parties using vulnerable SSL libraries (which can compromise your entire server).
As we speak, your SSL connection may be tampered with. Someone may be using a Heartbleed-like vulnerability in the server or your browser (or both). You won't know about it, because you aren't personally auditing the binary data that goes over the wire… Humorously enough, one needs to actively MITM and record connections in order to audit them. Plaintext data is easier to audit and reason about.
Literally in the time you've spent thinking about and composing your reply you could have implemented free, secure TLS for your users.
Are you name dropping wosign just to be obtuse? They were untrusted because they were untrustworthy, not because Google just doesn't like them. https://www.schrauger.com/the-story-of-how-wosign-gave-me-an...
It just coincidentally happens that the US controls 100% of root CAs and Kazakhstan (most likely) controls 0. So the latter needs more audacious measures, while the former can just issue a gag order to Symantec (or whoever is currently active in the market).
The CA system is inherently vulnerable to government intervention. There is no point in including defense against state agents in the HTTPS threat model. It is busted by default.
Marking non-HTTPS sites as non-secure is a result of the network having proven itself to be unreliable. That's both the Snowden revelations and the cases of ISPs trying to snoop.
Besides, HTTPS isn't hard to get. The worst case means you install nginx, Apache, or the like to reverse proxy and add in TLS. Things got even simpler when Let's Encrypt came along. Anyone can get a trusted cert these days.
It isn't your threat model that is important here. It is the users' threat models. Maybe you have full control of that too (the simplest case where that would be true is if you are your only user) but most sites aren't.
You will see the same sort of anger at e.g. parents who refuse to get their kids vaccinated (they're my kids, they say; Big Pharma can't make decisions for me, if you want to get your kids vaccinated, that's fine but there's a cost-benefit analysis, I just don't want it forced down my throat). It would be incorrect to conclude that the angry people are the wrong people.
Speaking as someone who's maintained a lightweight presence on the Web for over 20 years, I've thought about the tradeoff and I think it is worth it. Our collective original thinking about protocols skipped security and we've been suffering ever since. I was sitting in the NOC at a major ISP when Canter and Siegel spammed Usenet. Ow. Insecure email has cost the world insane amounts of money in the form of spam. Etc., etc., etc.
You and I probably disagree on the cost/benefit analysis here, which is OK. It'd be helpful in discussion if advocates on both sides refrain from assuming zealotry on the other side.
That machinery has a cost. With every barrier we throw up on the web, it makes it harder to build a reliable site. I also realize this is an argument I've lost. It's so much easier to just say "HTTPS everywhere" than to examine the tradeoffs.
This touches on the real point of all this, which doesn't seem to have been contained in any replies to you.
There's no real choice in the matter: https is a requirement if, and that's the very big "if" right there, we truly acknowledge that the network is hostile. With a hostile network the only option is to distrust all non-secure communication.
https isn't about securing the site as you know, it's about securing the transmission of data over the transport layer, and it's needed because the network is hostile.
It doesn't matter one little iota what the data is that's traversing it, as there's no way to determine its importance ahead of time. A resume site might not be of much worth to the creator, but the ecosystem as a whole ends up having to distrust it without a secure transport layer because the hostile network could have altered it.
It doesn't matter that the effect of that alteration might be inconsequential, as there's also no way to determine that effect ahead of time. The ecosystem's 'defense' is to distrust it entirely.
And that's the situation the browsers/users/all of us are left with. There is no option but to distrust non-secured communication if the network is hostile.
Even places like dreamhost give you a letsencrypt cert for free on any domain.
There is no case to be made for not securing your site, on principle or based on what's already happening out in the world, with shady providers injecting code into non-secure HTTP connections.
You see it as "a simple resume site," and I see it as a conduit for malicious providers to inject malicious code. Good on the browser folks for pushing back on you.
The warning used to be the absence of a padlock, but who notices that?
In any case, (a small subset of) the random enthusiast sites and such are close to the only reason I use a browser recreationally anymore. I absolutely agree with you.
The answer isn't to stop fixing things. The answer is to make it easier and cheaper to be secure.
Kinda like what LE is doing, no?
Your failure to grasp this is fairly evident from the rest of your comment.
It sucks badly. I'd prefer a less hostile network myself. Even back then there were bad actors but at least you could somewhat count on well-meaning network operators and ISPs. Nowadays it's ISPs themselves that forge DNS replies and willfully corrupt your plaintext traffic to inject garbage ads and tracking crap into it. And whole nation states that do the same but for censoring instead of ad delivery.
Can you explain why you think Symantec demonstrating incompetence is completely isolated from your Symantec SSL protected website?
I sense a lot of hostility coming from you. It seems like you think we do these things for fun. Do you imagine a bunch of grumpy men get together, drink beer, and pick a new SSL provider to harass and bully?
Oh, I get it. I've worked with lots of people like you.
As an infosec practitioner, I'm the one that cleans up after the people who claim good current infosec practices are "too hard" or "impractical" or "not cost-effective", which all boil down to sysadmins and developers like you creating negative externalities for people like me. I have heard all of these arguments before. "Oh, we can't risk patching our servers because something might break." "Oh, the millisecond overhead of TLS connection setup is too long and might drive users away." "Oh, this public-facing service doesn't do anything important, so it's no big deal if it gets hacked."
I'm not at all sorry that the wider IT community has raised the standards for good (not best, just good) current infosec practices. If you're going to put stuff out there, for God's sake maintain it, especially if it's public-facing. If using the right HTTPS config is that difficult for you, move your stuff behind CloudFront or Cloudflare or something and let them deal with it. If you can't be bothered with some minimal standard of care, you need to exit the IT market.
And good luck finding a job in any industry, in any market, where anyone will think that doing less than the minimal standard, or never improving those minimums, is OK.
My goodness, you just nailed it.
The IT job market is so tight that complete incompetence is still rewarded. Incompetence and negligence that would get you fired immediately or even prosecuted in many if not most other professions.
If restaurant employees treated food safety the way most developers treat code safety, anyone who dined out would run about a 5-10% chance of a hospital visit per trip.
I was just arguing with a “senior developer” who left a wide open SQL injection in an app. “But it will only ever be behind the firewall, it’s not worth fixing.”
That’s like a chef saying “I know it’s old fish but we’ll only serve it to people with strong stomachs, I promise”.
To your parent comment -
No, I don't think it's a cabal of "grumpy old men". I think it's a cabal of morally righteous, security-minded people who have never worked for small companies, or who don't realize that most dev teams don't have the time to deal with all this forced entropy.
You care about security, I care about making valuable software. Security can be a roadblock to releasing valuable software on time and within budget. If my software doesn't transmit sensitive data, I surely do not want to pay the SSL tax if I'm on a deadline and it's cutting in to my margins.
Most people who advocate for security, including myself, have worked on small teams and understand the resources involved. Putting a TLS certificate on your shit with LE takes minutes. Doing it through another CA is minutes, in a lot of cases. You spent more time downloading, installing, and configuring Apache, then configuring whatever backend you want to run, and writing your product or blog post or whatever it is you’re complaining about securing.
Honestly, in the time you’ve been commenting here, you could have gotten TLS working on several sites. Managing TLS for an operations person is like knowing git for a software developer. It’s a basic skill and is not difficult. If it’s truly that difficult for your team, (a) God help you when someone hacks you, they probably already have and (b) there are services available that will front you with a TLS certificate in even less time than it takes to install one. Cloudflare and done.
> Security can be a roadblock to releasing valuable software on time and within budget.
Great, you've pinpointed it. Step two is washing it off. Ignoring security directly impacts value, and I'm mystified that you don't see this.
But I guess I'm a zealot ¯\_(ツ)_/¯
If you have one server, yes.
Otherwise it's the other way around, because if you have multiple servers you need to do a lot of fancy stuff.
And LE also does not work on your internal network if you do not have some stuff publicly accessible.
And it also does not work for different ports.
Oh, and it's extremely hard to run a TLS <-> TLS proxy that talks to TLS backends; that's useful behind NAT if you only have one IP but multiple services behind multiple domains.
IPv6 fixes a lot of these issues.
I don't understand your last point. Where do you see the problem with letting a reverse proxy talk to a TLS backend?
You get the requested server name from the SNI extension and can use that to multiplex multiple names onto a single IP address. The big bunch of NATty failure cases apply to plaintext HTTP just as well, no?
This means the backend server certificates are only ever exposed to your reverse proxy. There's no need to use publicly-trusted certificates for that. Just generate your own ones and make them known to the proxy (either by private CA cert or by explicitly trusting the public keys).
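For reference, choosing a certificate per SNI name is only a few lines in most stacks. A sketch with Python's `ssl` module (the hostnames and cert paths are hypothetical, and `sni_callback` requires Python 3.7+):

```python
import ssl

# The proxy's routing table: SNI hostname -> (cert chain, private key).
# Paths are hypothetical; backend certs can come from your own private CA.
CERTS = {
    "app.example.com": ("/etc/proxy/app.pem", "/etc/proxy/app.key"),
    "blog.example.com": ("/etc/proxy/blog.pem", "/etc/proxy/blog.key"),
}

def pick_cert(server_name):
    """Choose the cert for the name the client sent in the SNI extension."""
    return CERTS.get(server_name, CERTS["app.example.com"])  # default vhost

def sni_callback(ssl_sock, server_name, initial_context):
    # Called by the ssl module during the handshake, before cert selection.
    cert, key = pick_cert(server_name)
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(cert, key)
    ssl_sock.context = ctx  # swap in the per-hostname context

# Wiring it up on a listening context would look like:
#   context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
#   context.sni_callback = sni_callback
```

The same pattern (a name-to-cert map consulted during the handshake) is what nginx and HAProxy implement internally, which is why one IP can serve many TLS domains.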
If you need lots of different domains, use one of the auto certificate tools.
If you can't use one of those yourself, consider hosting on a platform that can automatically do this for you for all your sites, like cPanel (disclaimer: I work for cPanel, Inc).
If your stuff is never publicly accessible because you're in a fully private network, just run your own CA and add it to the trust root of your clients.
If you need an SNI proxy, search for 'sniproxy' which does exist.
If you're so small that you can't afford an infrastructure person, a consultant, or a few hours to set such things up yourself, then maybe you should shorten the HN thread bemoaning doing it and use the time to learn how.
Funny you mention this.
With this new functionality, I can register valid certs for any domain in the world if their DNS is insecure, or if I can spoof it.
Have we made any headway yet on that whole "anyone can hijack BGP from a mom-and-pop ISP" thing?
How many CAs are still trusted by browsers, again? How many of those run in countries run by dictators?
HTTPS doesn't secure the Internet. It's security theater for e-commerce.
This is just one anecdote, but I worked at a company small enough that I was the only developer/ops person. Time spent managing HTTPS infrastructure couldn't have been more than a handful of hours a year.
What is so painful to you about running your website(s) on HTTPS?
It may be easier to be more empathetic.
It's not that ominous. It's not even red!
I think it's pretty obvious to most users that "Insecure" doesn't matter as much on some random blog, but does matter a lot on something that looks like a bank or a store.
SSL has a history of being a pain in the ass. There are a lot of pain in the ass implementations out there. Everyone gets that.
At the same time, it's never been easier, and basic care for what you're serving your users demands taking that extra step. What Google is doing amounts to disclosing something that's an absolute fact. Plain HTTP is insecure (in the most objective and unarguable way possible), and it is unsuitable for most traffic given the hostile nature of the modern web.
Do you want your users to be intercepted, social-engineered, or served malware? If the answer is no, secure it. The equation is that simple. Any person or group of people who in 2018 declines to secure their traffic is answering that question in the affirmative and should be treated accordingly!
That's not "zealotry" friend, that's infosec 101.
So in a way, you're right. I'm not sure why that's a negative.
Your software does not work if it is not secure. Security is a correctness problem.
Yes, sometimes it's a pain to solve TLS-based errors, and I do miss the ability to debug each transmitted packet with tcpdump. But I also appreciate that the continuous focus on TLS improves the tooling and libraries, and each day it gets a little bit easier to set up a secure, encrypted connection.
Do they keep their servers up to date? Why is it so much easier to do that than getting an SSL cert four times a year?
I hope they update their servers more often than that.