So in airports I rack my brain for a website that might serve content over HTTP, not HTTPS. Alas, they're getting harder to find, and it's getting harder to keep those airport wifi sessions going. I half considered registering "alwaysinsecure.com" or "alwayshttp.com" so I could reliably find one... Now I'll probably just use qq.com, it's short and easy to remember.
EDIT: Thanks all :-)
I also created http://httpforever.com/ for this purpose, until such time as the problem is solved properly.
Operating system maintainers have added captive portal detection (CPD) to improve user experience. Hotspot owners often sabotage it by whitelisting Apple's and Google's CPD URLs! Apple and Google have to change these URLs periodically, and 802.11 vendors add the new ones, in a cat-and-mouse game. So the customer's phone, and thus the customer, thinks it's connected, but they end up hitting a captive portal anyway. Why? Because the CPD browser closes immediately upon detecting unencumbered Internet access. While many owners just want the legal-liability-limiting "I Agree" button to be tapped, many others want customers to see their store promotion/ad flyer page for longer than a second, place a tracking cookie, and do other "value adds".
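The probe logic behind CPD is simple enough to sketch. Here's a minimal version in Python, using the real Android connectivity-check URL (other OSes use similar endpoints, e.g. Apple's captive.apple.com/hotspot-detect.html); the decision function is the part that matters:

```python
from urllib.request import urlopen

# Android's connectivity-check endpoint; an open internet connection
# returns HTTP 204 with an empty body. A captive portal intercepts the
# request and serves its login page (or a redirect to it) instead.
PROBE_URL = "http://connectivitycheck.gstatic.com/generate_204"

def looks_captive(status: int, body: bytes) -> bool:
    """True if the probe response suggests a captive portal: anything
    other than an empty 204 means something rewrote the response."""
    return not (status == 204 and body == b"")

def check() -> bool:
    """Perform the actual probe (requires network access)."""
    resp = urlopen(PROBE_URL, timeout=5)
    return looks_captive(resp.status, resp.read())
```

This is why whitelisting the probe URLs defeats detection: the probe sees a clean 204, the OS declares the network open, and the user's first real request still slams into the portal.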
The interests of Wi-Fi network operators and users are not aligned. Right now only equipment manufacturers directly profit off Wi-Fi service. Hotspot owners can only indirectly earn money through attracting more hotel guests, cafe customers, etc. Some now harvest customer MAC addresses and in-store walking behavior. Ideally hotspot owners could receive a revenue share for offloading traffic from LTE, which would give them an incentive to be more loyal to users and maintain quality of experience.
Thanks for making me smile!
In my experience most portals' firewalls block all traffic (at IP level) except ports 80 and 443, which are transparently redirected to their auth server. Tunnelling isn't an option because you just can't contact anything else.
And it's not like I'm describing something that's hard to do. You've been able to script this stuff in iptables forever.
Edited parent for clarity.
My point is you can't "tunnel" out. Not via DNS, VPN or anything else. It's all blocked. The only services they let you access are their own, to get you to authenticate. Even if you walked in with a complete hosts file (so you didn't need to query any DNS server), you still wouldn't be allowed to connect, at the firewall level, until you authenticated.
Yes that's expected. So what does their DNS server do when I ask it for the IP address of google.com?
Which you actually need to do, to prevent locking the user out from the website they tried to access in the first place after authenticating, thanks to DNS pinning.
> The connection you make to that result is still going to be redirected to the portal.
Correct, but who cares? This is about DNS tunneling; there will be no connection anywhere. The DNS queries and their replies ARE the connection, so to speak. How about reading up on what a DNS tunnel actually is? Judging by your posts in this thread, you either have no clue how it works or have huge misconceptions about it.
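For anyone following along, the mechanism is easy to sketch: outbound data rides in the labels of the query name, and the reply comes back in the resource records. A toy encoder/decoder in Python, assuming you control the authoritative nameserver for a hypothetical domain (real tools like iodine and dnscat2 are built on this idea):

```python
import base64

MAX_LABEL = 63                      # DNS caps each label at 63 bytes
TUNNEL_DOMAIN = "t.example.com"     # hypothetical domain whose NS you control

def encode_query(payload: bytes) -> str:
    """Pack outbound data into the labels of a DNS query name.
    (A real tunnel also caps total name length at 255 bytes and
    chunks larger payloads across multiple queries.)"""
    b32 = base64.b32encode(payload).decode().rstrip("=").lower()
    labels = [b32[i:i + MAX_LABEL] for i in range(0, len(b32), MAX_LABEL)]
    return ".".join(labels) + "." + TUNNEL_DOMAIN

def decode_query(qname: str) -> bytes:
    """What the tunnel's authoritative server does with an incoming
    query name; its answer (e.g. a TXT record) carries the reply data."""
    data = qname[: -len("." + TUNNEL_DOMAIN)].replace(".", "").upper()
    data += "=" * (-len(data) % 8)   # restore base32 padding
    return base64.b32decode(data)
```

The hotspot's resolver happily forwards these queries to the tunnel domain's nameserver, because to it they're just ordinary recursive lookups.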
For 1. it's not open very often, indeed. But for 2. I see it open all the time, even in banks...
I'm not talking about just blocking DNS, or just blocking known DNS services. A good captive portal denies everything except the web/DNS traffic redirected to an internal server running on the portal. That is, until you log in and get explicit allow rights.
If your portal isn't doing this, if you can escape and actually connect to external IPs, it's not captive.
So they need to look up the actual IP address on your behalf and then in turn redirect traffic to that IP to their CP. Boom: you just managed to communicate with the outside world.
Also, services like Cloudflare wouldn't work if low TTLs didn't work; CF rotates IPs often as part of its DDoS protection.
I mean, if it weren't the case, why would CPs work that way anyway? It wouldn't be an issue to have a DNS server in the mix that resolves everything to your CP address with a TTL of one second, and then, in addition to your port 80 TCP redirect, just add another one for 53 UDP. It's not done by any CP solution I have ever encountered.
neverssl.com, mentioned below, is one whose name I always seem to struggle to remember when I need it. I end up typing things like nossl.com or neverhttps.com and such...
(I'm still sad that example.com responds to anything at all. It was originally defined as a completely unused, thus unresponsive, domain.)
I love this site.
I totally understand that, by adding a popular but historically unsafe cryptography implementation, I can help web clients avoid MITM shenanigans while reading my content. MITM indeed is a problem. But I'm not willing to make that my problem. The vast majority of my customers don't experience MITM data modification while reading my content, and the vast majority wouldn't care if the whole world knew that they have read it.

Perhaps in the future, code for all the required cryptography layers will be just as provably correct as code for reading the cleartext HTTP protocol. When that day comes, I will add HTTPS. Not before.
1. You don’t have to write any code to use HTTPS, it has already been written.
2. Are you saying that since HTTPS isn’t perfectly secure you’re gonna use the definitely insecure HTTP? What kind of logic is that?
Different sort of security. While I don't agree with the OP, the logic here is somewhat sound: he is concerned about the threats against the server. It is reasonable to claim that TLS increases the attack surface, because any vulns in the TLS implementation would be purely additive to the potential vulns in the HTTP server.
HTTP worst case: your reader is harmed.
HTTPS worst case: you yourself are harmed (e.g., if someone finds a zero-day in a cryptography library and uses it to run arbitrary code on your server)
Of course, a more typical HTTP/HTTPS server will be using the Linux kernel for networking (with some userland bits for DHCP and DNS), and OpenSSL for TLS – overall representing multiple orders of magnitude more code. But I suspect the proportion between TLS and non-TLS would be somewhere in the same ballpark, depending on how you count.
The problem is that on Linux, the TCP/IP stack is in-kernel, universal for everything on the system, and overall unobtrusive – it mostly 'just works' – while OpenSSL is in userland, has an unwieldy API and unstable ABI, can be linked in many different ways, requires a lot of configuration, and is overall a pain in the neck. So as a developer, the complexity of TLS weighs heavily on you, while the complexity of TCP/IP is something you can largely forget about.
Personally, I think TLS adoption would have been quicker if it had been implemented in the kernel and it just became a switch you could flip in the socket API; something like:
socket(AF_INET, SOCK_STREAM, PF_TLS)
I'd even wager that a typical homebrew HTTP server is going to have way more RCE problems than an existing stack that includes TLS.
Users have no idea if what you are serving them is what they are receiving. If it's content like code samples, it can be corrupted by malicious actors. If it's political content, the message can be altered.
<hyperbolic>I mean, we all know rurcliped is a Nazi, their website said so the other day. </hyperbolic>
The best solution is to get your site into the HSTS preload list. You can do that here: https://hstspreload.org/ - this adds the site to the Chrome list, and most other major browsers (Firefox, Opera, Safari, IE 11 and Edge) have an HSTS preload list based on the Chrome list. Once that happens and the browser updates, users will always use https, even if they ask for http.
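To be preload-eligible, a site has to serve an HSTS header that meets hstspreload.org's submission rules. A quick checker in Python; the one-year minimum max-age reflects their current requirement, which has changed over time, so treat the exact figure as an assumption:

```python
import re

def meets_preload_requirements(header: str, min_age: int = 31536000) -> bool:
    """Check a Strict-Transport-Security header value against the
    hstspreload.org submission rules: max-age of at least min_age
    seconds (one year by default), plus the includeSubDomains and
    preload directives."""
    directives = [d.strip().lower() for d in header.split(";")]
    max_age = None
    for d in directives:
        m = re.fullmatch(r"max-age=(\d+)", d)
        if m:
            max_age = int(m.group(1))
    return (max_age is not None and max_age >= min_age
            and "includesubdomains" in directives
            and "preload" in directives)
```

The preload site also requires a valid certificate and an HTTP-to-HTTPS redirect on the base domain, which this header check obviously can't see.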
A 301 redirect will offer some lasting protection as it can be cached but it's not really that great. The goal here is to take the first step to get on HTTPS and then longer term the sites can consider HSTS and eventually preloading.
Most people making web sites, even many devs, don't necessarily want to, or have the wherewithal to deal with SSL.
Setting up SSL on AWS recently was a monster pain in the rear for our little nothing project site; granted, a lot of that was AWS-oriented.
It needs to be easier; if it were easy, everyone would be doing it.
I can buy this for a non-dev clicking "Install Wordpress" in a legacy cpanel of a cheap shared web host (though in that case the web host should be setting up certs automatically for them), but what exactly is complicated about setting up certs for a dev? https://certbot.eff.org/ holds your hand through the entire ~30 second process. It's simpler than most other tasks a dev needs to do while setting up a website.
The current alternative is to simply use HTTP, which doesn't yield any warnings. Hopefully this will change when Chrome starts marking all non-HTTPS sites as insecure.
There's also the obvious question of having user login/signup on a non-HTTPS site in the first place (pseudo-anonymous user accounts or otherwise).
What server software are you running?
It is difficult if you want to understand the process, though no more so than, say, using Git IMO.
In short, cPanel and other shared-host panels make it as easy as the clicky-clicky Wordpress install.
Given how arcane and confusing anything beyond the most basic Git usage can get, that's hardly a glowing endorsement. :)
Seeing as how he mentioned AWS, it's a bit more complicated if you have a cluster of servers that are automatically managed. You have to set up a reverse proxy and integrate that into your cloud provider's workflow.
It may not be complicated, but you can't beat doing nothing.
Our biggest problem with the transition was mixed content issues with hardcoded HTTP URLs hidden away on long-forgotten pages.
Doesn't seem that bad
AWS has become very complex over the last few years - there was a period where you didn't need admins. Now it seems you do again, just cloud admins, not local machine admins.
Those devs need to be shamed and taught a lesson.
>many train operators don't [...] have the wherewithal to read the safety manual
>many police officers don't [...] have the wherewithal to attend gun safety classes
>many senators don't [...] have the wherewithal to listen to their constituents
Example : http://www.leboncoin.fr/ redirects to https://www.leboncoin.fr/
The biggest thing we've come across so far is geo-sensitive handling of requests. Some sites will redirect you to HTTPS or not based on where you are making the request from! This of course means you might see HTTPS and we see HTTP.
I think it's still fair to include those sites in this case because they aren't serving all traffic securely.
Most pressing ones are uva.nl (major University) and at5.nl (local but relatively large news site).
Then I checked. They are secured. Not sure since when, but maybe the data from whynohttps isn't as fresh as one might think.
And a lot of bigger press sites (spiegel.de, faz.net, computerbild.de) still aren't secured. Kind of a shame, imho.
The site I was debugging was a WordPress site that had somehow gotten the images in its opening carousel set to http:// despite the fact that the "header" module had a default of https://. Very useful to just feed it the URL and notice these were the broken images; I could have grep'd through a "show source" page, but whynopadlock.com made it easy for me to identify that it was the header module of a site for which I had little familiarity.
curl -IL https://www.bbc.com
HTTP/1.1 200 OK
date: Tue, 24 Jul 2018 08:55:34 GMT
Expires: Tue, 24 Jul 2018 08:56:35 GMT
Most BBC sites in the UK have already been switched over the past few months. https://www.bbc.co.uk/news was finished about a month ago.
The BBC is a complicated beast and these sort of things take longer than you would think.
We redirect to HTTP in the meantime to avoid having both versions indexed.
I put my sites on free Cloudflare and I get HTTPS without having to do anything. Is that enough, or...?
China used its influence at the UN to have that included in ISO: https://en.wikipedia.org/wiki/ISO_3166-1#Naming_and_dispute
It’s called the One China policy, i.e. strong-arm governments into not recognising Taiwan in order to do business with China.
only 17.7% of HTTPS sites support HTTP Strict Transport Security (HSTS) == 82.3% of HTTPS sites are still reachable over plain HTTP!!!
google.cn is an interesting one though: It redirects HTTPS to http, but the site consists of an image that redirects to google.com.hk which is https-only.
Who knows what you see in China when accessing google.cn...
Sites 1, 2, 3 are for pirating movies and TV shows :))))
Ouch number 5 in NZ is a Credit card payment gateway
I'd be more concerned with the ecommerce sites on the list, like Rebel Sport. Kmart at least does seem to redirect to HTTPS.
I'm biased, but I do think it's a pretty painless way to enable HTTPS for sites both big and small. No uploading of certs, no modifications to your web app are necessary, it just works.
I can't empathize with this perspective.
"Encrypt every packet with strong cryptography" has been the mission statement of the information security community ever since Edward Snowden went public.
The "mofos" have "forced" you to protect your users from attacks like QUANTUMINSERT.
The "mofos" have "forced" you to protect your users from abusive ISPs injecting advertisements that track them into web pages that gain no revenue from these ads.
The "mofos" have "forced" you to protect your users from being hit with increased malvertising and watering hole attacks because ISPs generally cannot secure their own systems.
I think the wins here far outweigh the temporary inconvenience of having to install/use certbot.
Why would strong-encryption be necessary for a video game guide web-page? Say, one about Factorio?
Some game communities are toxic. E.g., Minecraft guides I'd host with HTTPS due to the threat of scumbags and hackers. But Factorio's community is incredibly lax and laid-back, so I would consider HTTPS a waste of effort and resources.
 Unencrypted, pretty much everything about your internet traffic is laid bare for any middleman - your ISP, the person who controls the router you're connected to, some script kiddie on the same network as you - to both read and write. Injecting ads or cryptominers, tracking what pages you visit, changing what a page says, even straight up serving a different page. Why let your site/server be a (potential) vehicle for exploitation, disinformation and privacy invasion?
For everyone else, its a game guide. The worst that can happen is that they get the wrong information about how Trains work in Factorio. There wouldn't be a need for me to track users or clicks or whatever in a hypothetical game guide community website.
As I stated before: I know some game communities (e.g. Minecraft or Eve Online) can be toxic. But the Factorio community isn't like that, so I'd be comfortable hosting a Factorio webpage under HTTP.
If I were hosting a Minecraft or Eve webpage however (warning: toxic community ahead), then I'd host it under HTTPS due to the dastards who troll and harass others in that community.
That's the more important part btw: understanding your audience. Some game communities are toxic and full of harassers, trolls and so forth. But other communities are lax, friendly, and can get away with lesser amounts of security.
And the last mile is still going to be just as unencrypted, non-private, and tamperable as before.
You literally cannot get end-to-end encryption/privacy without the host supporting TLS.
It's really not an optional thing to support for you as an operator, and especially now that Let's Encrypt is a thing, there's really no excuse for not doing it.
(Now that we're on the subject of toxicity anyway: I'd say that depriving your users from the ability to secure their network traffic, just because you're trying to die on a weird hill, is pretty toxic behaviour.)
But some webpages are simple, static one-off projects that I put out on behalf of a community. I don't believe in ads and would rather pay for all the bandwidth that my users would use. Consider it a donation "for the love of the game".
Very, very simple, nearly static webpages, close to "neocities" level of web design. No users, no passwords, just information I'm publishing to help a game community out.
Nothing to steal, nothing to phish, nothing. Pure text, maybe a few images and videos to elaborate on specific points.
I understand that TLS is important for any website with interactivity for privacy reasons. But the above webpage is completely static and non-interactive. Its old-school Web 1.0 stuff. There's nothing to steal, phish, or cheat here. Literally nothing.
I just don't see the point in HTTPS-ing this site.
I'm not sure why you aren't listening to other people. TANSTAASWS.
There ain't no such thing as a static website... When someone can MITM you, the simple page you serve them can have all your content with a complete redesign: scripts, login forms, anything the hacker chooses to put on it.
When you don't use HTTPS any middle man can take your content and do anything they want with it.
So I was reading this site and am particularly concerned about the crypto miner present on the page. Care to explain this to me? Hint: it's a MITM due to the insecure context, and the miner isn't coming from the site itself. But as a user, I'm going to blame the site, because it happens on the insecure site.
If you think a MITM can't do any harm with a static page then you simply aren't being creative enough.
 Reusing a previous post of mine: https://news.ycombinator.com/item?id=17509373
Then you understand wrong. It's important for any website, interactive or not, for privacy reasons. Reader privacy is a thing regardless of whether something is interactive. I don't know where you're getting the idea from that 'static' sites are somehow special.
For example, Eve Online would 100% be under HTTPS. Period. That online community is incredibly secretive, incredibly untrustworthy, full of scammers and requires every bit of security on EVERY webpage.
Factorio's community? Erm... no. Trolls just don't exist in that community. Unlike Eve Online, there are no warring factions of spies trying to take over each other's online turf "outside of the game". Factorio is a lax community without any trolls or hackers.
A lot of it is understanding the userbase and the general security posture. If I were a serious Eve Online player, I'd use 100% secure settings, as much as possible, due to the shenanigans that community is known for pulling.
> Factorio's community? Erm... no. Trolls just don't exist in that community.
HTTPS isn't about protecting "secretive" shitty people. It's about protecting everyone.
But ultimately, I don't think that this vague concept of "privacy" when applied to a game guide really matters. People normally don't shuffle books and anonymize themselves as they put books back onto the library cart for example.
And I'm old enough to remember physical library cards with the names of everyone who checked out a book. I don't recall any privacy concerns about that. But maybe I'm just old-skool or something.
With regards to malicious ISPs MITMing their users: they kinda control your DNS requests, so good luck with that. I'm not sure if there really is a way to fully secure against an ISP-level attack against the users.
An ISP can always inject into the HTTP -> HTTPS redirection, and serve HTTP right there and then. HSTS assumes that the user has visited a clean version of your site before, if a new user comes in without ever seeing the HSTS, then the ISP still "wins" and captures your users on a fake HTTP version of your site.
So no, the level of attacks you've described, I don't believe HTTPS solves the problem.
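The attack being described here is essentially sslstrip: the MITM intercepts the victim's initial plaintext HTTP request, talks HTTPS to the real server itself, and rewrites secure references before passing the page along. The core rewrite is trivially simple (a toy sketch, not the actual tool):

```python
import re

def strip_tls(html: str) -> str:
    """The heart of an sslstrip-style downgrade: rewrite every https://
    reference to http:// in the page served to the victim, so the victim
    never leaves plaintext HTTP while the attacker keeps speaking HTTPS
    to the real server upstream."""
    return re.sub(r"https://", "http://", html)
```

This is exactly the window HSTS targets: a browser with a cached (or preloaded) HSTS entry refuses to send the plaintext request at all, so there is nothing for the attacker to intercept; a first-time visitor without that entry remains exposed, as the parent notes.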
HTTPS security doesn't depend on DNS.
Please keep in mind: I do app security for a living.
> An ISP can always inject into the HTTP -> HTTPS redirection, and serve HTTP right there and then.
...they said, in a thread about a popular browser marking HTTP insecure.
Do you really think HTTPS-by-default is out of the question in the future, especially if adoption rates exceed 99%?
If it were a big network, like GameFAQs, Reddit, or you know, something where you can steal a password or something. Maybe you'd have a point.
But random blog with a bit of information on the game??
Real story, btw. I'm paying a bit of money for the host (not much, though); I don't believe in ads taking money from the few reads I do get. It's basically a static page that costs pennies. I'm no longer updating the webpage, but I'm leaving it up just in case someone out there wants to learn more about the game. (It doesn't seem like any of the information is out of date. It's a few years old, but the game hasn't changed in this aspect, so the information is still solid.)
I just don't see any point converting this webpage into HTTPS.
There's literally nothing to phish here.
So I'm not convinced that HTTPS is the solution for that hypothetical attack. Not while untrustworthy certificate authorities are default-enabled on most clients anyway. At best, HTTPS complicates the attack but it doesn't make you immune to it.
A hypothetical MITM attacker can just get a fake certificate from a low-security vendor (ex: Comodo), and serve that to get a nice "trusted" version of the fake webpage. If you control the network, you control the certs that are eventually served to the users.
That's literally all security. It isn't binary. It never is. At best, ASLR complicates ROP. At best, salts complicate breaking password hashes. At best, memory safe languages complicate buffer overflow attacks.
One could use your argument to dismiss basically all security. You've chosen zero mitm protection rather than a lot of mitm protection.
If you aren't using https then a network attacker with no preplanning can cause problems. If you are using https then a network attacker needs to get a bogus cert ahead of time. This costs money and time and does not scale well. Security is an economics issue. Making it more expensive to attack people is good.
example screenshot: https://www.pcrisk.com/images/stories/screenshots201703/zeus...
You're saying that someone can inject a "redirect header" into a fake webpage, force that upon my users through the control of a network (WiFi router or whatnot), and use my domain name and my trust to take advantage of the users.
(Your example with the Zeus malware is bad because Zeus attacked the OS directly, so it wasn't a network attack. But hypothetically, let's say it was a network attack, so that it remains applicable to my example.)
Alas, HTTPS does NOT solve that, at least not while globally trusted HTTPS certificate roots remain insecure. They only need to get one HTTPS certificate signed by Comodo (or some other low-security HTTPS vendor) to attack my domain name in a manner like that.
HTTPS raises the bar. There's no happily ever after in security. Maybe in five years domain hijacking and cert abuse will be as common as aforementioned fake tech support scams that prevent users from closing the tab. Some of them even set full-screen on desktop browsers and vibrate your phone (grr).
> That scam is mostly used through ad network vector not MITM.
Just one more reason why I'm not going to use ads to fund any web-projects I do.
I agree that HTTPS raises the bar and makes it more difficult for certain scams. Indeed, I'd go as far as to say that any webpage with user-inputtable data (ie: username, passwords, etc. etc.) is required to be HTTPS. The risks are too great and that's the minimum security users expect these days.
But I'm still of the opinion that Web 1.0-style static sites can be served over HTTP just fine. If there are no usernames, no interactivity, and PURELY static content, in a community that's relatively lax (again: Minecraft and Eve Online fail this test; I'd use HTTPS even for a static site if I were doing Minecraft or Eve Online stuff), then I'd think HTTP is just fine.
Dude, I'm TRYING to have a discussion here. And frankly, I'm not fully convinced about the arguments that a lot of people are making here. Lobbing personal attacks is not cool, no matter the subject matter.
So hypothetically, you're saying a man-in-the-middle attack is going to change the content of my files. I understand.
Now tell me: how does HTTPS secure against that if Comodo certificates (or other poor-security certificate authorities) continue to exist?
A SINGLE bad certificate authority trusted by any of the major vendors would allow the attack you so described happen over HTTPS.
I have seen these things happen. HTTPS is not a magic cure-all, not while globally trusted bad CAs exist. Bad actors can still MITM HTTPS sites with a fake cert.
ISPs typically have some degree of HTTP proxying and caching available btw. You don't benefit from HTTP proxies / caches if you encrypt everything. So there's a reason. And if the bad actors can attack HTTPS with a MITM attack anyway, there's not much to gain from HTTPS from my viewpoint.
If a user goes to http://site.com, even if you have a redirector to the https://site.com version, the https version can be MITM'd with a self-signed certificate.
At which point their browser will give them a big scary security warning.
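Non-browser clients behave the same way by default. For instance, Python's standard ssl module (shown here just to illustrate the defaults) requires a CA-signed certificate chain and a matching hostname, so a self-signed MITM certificate aborts the handshake rather than silently succeeding:

```python
import ssl

# The default client context verifies the peer: the certificate must chain
# to a trusted root CA and match the requested hostname. A self-signed
# certificate injected by a MITM fails both checks and the handshake errors
# out instead of connecting.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True
```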
In this hypothetical example, you're clicking on a video game guide. Someone watching you buy games from Gamestop would have more information than someone watching you click on "How Factorio Trains Work" or something else on this hypothetical example.
If the reverse DNS points to the IP address of the blog (ie: people see that you're browsing "FactorioGuide.com"), they're gonna figure out that you're learning how to play the game Factorio in any case. Even if all the traffic were encrypted.
The only way people don't know what you're doing is if the guide were on a shared host with many webpages on a single IP address. But otherwise, the typical website (i.e. self-hosted on a VPS) would have a unique IP address and a unique reverse-DNS entry, and people could figure out how long you've been browsing and what you've been looking at, even through HTTPS.
These are not issues that are likely to cause great damage, so no, there is no need to encrypt every last bit of traffic. But the bigger question is "Why wouldn't you encrypt it?" There's really not a lot of reasons not to.
> But the bigger question is "Why wouldn't you encrypt it"?
You're right. There's not a lot of good reasons I can think of. The best argument I've been able to come up with is ISP-level caching of HTTP traffic, which may save on bandwidth. But my host doesn't even charge for the measly amounts of bandwidth I use, so that's certainly not a concern.
Modern servers have HTTPS hardware-acceleration in the form of AES-NI, so it doesn't even use much more CPU power to use TLS these days.
So really, bandwidth savings from ISP-level caching is the best counter-argument I've got. Which is to say: not a very big concern of mine.
People will assume that your page is installing malware on their hardware. Non-tech people will not understand that it was the free hotspot.
Now move a bit further: someone at an ISP or network somewhere in the world injects malware only into your page's traffic. If someone visits your page and gets malware installed, or their AV starts raising alarms, would they know that it was not your site serving the malware?
I'm not entirely convinced that it solves the MITM attack still. But I'm still not convinced that the arguments a lot of people are making around here necessarily make sense either. A lot of these attacks are fundamentally theoretical and don't seem broadly applicable.
The main argument that convinced me is: it's easy to do, so why not? But the scare tactics that a lot of people in this discussion are using are unsavory to me and, IMO, unintelligent.
Installing certbot isn't something you can do on 100% of hosts. Switching hosts is also a pain.
If you bring up that you don't control the host (shared hosting), then we should shame the shared hosting provider, who has no excuse.
> Ask Heroku to support LetsEncrypt on all dynos.
> Vote with your wallets and move away from Heroku.
> Launch a competitor that allows others to move away from Heroku.
It's almost like this isn't a super simple process for everyone.
> "Conditional support for OpenBSD's sandbox, Mac OS X, or experimentally on Linux."
What now? I get to experience experimental functionality?
Remember that https offers both privacy and integrity, so even if you don't care about the privacy, you should care about the integrity.
I mean, there's the option to not use my site. Can I take a stand against HTTPS because I believe PKI to be a dumpster fire?
But the heavily flawed PKI is rapidly improving from the dumpster fire it has been. The glaring 'blindly trust every CA to never go rogue' problem is on the edge of being solved, with browsers beginning to require CAs to submit all new certificates to Certificate Transparency logs in order to be accepted. Attackers would have to either compromise multiple targets in detectable ways, or publicly disclose their forged certificate to the world before they can use it, at least once the older certificates from the dark ages of 2017 have all expired in a few years.
In any case, HTTPS doesn't protect your site, it protects the users of your site (by protecting the confidentiality and integrity of the data in transit). If you don't care about your users, then those potential users should avoid your site.
MITM attacks have become pervasive. HTTPS was less important years ago, but that time has passed. For example, ISPs, hotels, airlines, and many others have decided that it's okay to attack their customers. Supporting HTTPS is an easy way to help those users. It doesn't need to be perfect to be useful.
It's no secret ISP's are consolidating and being bought up by media conglomerates who want to secure the distribution method for their product and exclude competitors in a new market. Those companies view the internet like they view cable TV; it exists to deploy entertainment and advertising to the public and to convert people into products for companies. But they need a mechanism to figure out what is engaging, e.g. Nielsen Ratings.
I'm really sure they have a sour taste in their mouth left over from Google Fiber, due to the politics of ILECs trying to protect their literal state-granted monopoly. And in all seriousness, with POTS being retired, the government is granting monopolies over last-mile wire to cable companies.
So Google decided on the nuclear option. They've decided: we're going to get everyone to encrypt their traffic, and not just with easily SSLStrip-able stuff. We're going to push the standards organizations to deploy SSL/TLS in a way nobody can crack, then we're going to provide resources like Let's Encrypt to make getting keys cheap, and then we're going to be belligerent whenever the government complains about not being able to get at the traffic.
Now all the ISPs can see on their equipment are HTTPS connections to AWS, Azure and Google; using IPs to try to figure out what site is what doesn't work. They can still attempt to MITM the traffic, replace keys and impersonate key repos, and I'm sure in a lot of cases they do. However, that opens them up to class-action lawsuits and criminal hacking charges. You don't want a citizen organization charging your network architect with violating the wiretap act; network architects are hard enough to work with, and if you give them a reason to tell their employer they are not going to commit a crime, they'll end up fighting their employers tooth and nail. Both of which, I'm sure, Google would be more than happy to fund to protect its business.
The other advantage you get out of HTTPS is session re-use. When someone reconnects to your webpage, instead of establishing a new connection, they can re-use the session they had before; this reliably tags a device and allows you to identify connections from it even when IPs are changing, and would be more accurate than a browser fingerprint. This is not a new technology or mechanism; IPSEC IKE sessions can last years.
In all candor, we've got military-grade encryption protecting general internet traffic at this point and everyone on this forum is going to argue about privacy without understanding where the market is at or why. That speaks volumes to the times we live in and how effective astroturfing and misinformation campaigns are.
Every mainstream browser sends the server's domain name in plaintext at the start of the TLS connection, so (short of domain-fronting, which browsers don't do) it's generally not a mystery what site clients are talking to, even if they used DNS-over-TLS. ISPs still have that metadata.
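To see why: in a TLS ClientHello, the server_name extension (RFC 6066) carries the hostname as plain ASCII before any encryption begins. A minimal sketch that hand-builds just that extension (not a full handshake) makes the point:

```python
# Build only the server_name (SNI) extension from RFC 6066 by hand,
# to show the hostname travels as readable ASCII in the ClientHello.
host = b"example.com"

sni_extension = (
    b"\x00\x00"                           # extension type 0 = server_name
    + (len(host) + 5).to_bytes(2, "big")  # extension data length
    + (len(host) + 3).to_bytes(2, "big")  # server_name_list length
    + b"\x00"                             # name_type 0 = host_name
    + len(host).to_bytes(2, "big")        # host_name length
    + host                                # the hostname itself, in the clear
)

# Any on-path observer scanning the handshake bytes can read the domain:
print(b"example.com" in sni_extension)  # True
```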
TLS session resumption could theoretically be used for tracking users, but why would Google benefit from doing that when it already uses HTTP cookies? The actual benefit is one fewer round trip, making the web, and all of their sites, faster to load.
It's far more plausible that they're pushing to secure the web because an insecure web ecosystem is just plain bad for business and makes us all less secure. It doesn't require spite or a zero-sum game of tearing down the competition to explain pushing HTTPS and Certificate Transparency, which lack real nefarious downsides for users.
I'll agree with you that there are substantial ancillary benefits to added browser security, well beyond the scope of this discussion, and to continuously hardening any infrastructure in general. From a business perspective, those are always worthwhile investments in and of themselves, simply because good security means you have a disciplined, well-thought-out system in place. But you cannot discount the fact that traditional television is a direct competitor to Google and the internet in general, and that this is a primary motivating factor in their decision to enforce encryption. Large companies simply do not mess with their core products for the good of the public; doing so shows the company is not loyal to employee or stockholder interests, both of which shouldn't be disregarded, for a variety of reasons. I'll agree companies can decide not to take things too far, but they won't disregard their own interests either. Assuming as much is a naive belief.
In short: I have yet to see a really convincing argument in favor of HTTPS in these scenarios.
The sad reality is that http->https redirects are like vaccination. In some specific cases they are needed (such as login pages), but for others it's more about herd immunity (normalizing https usage and ensuring availability). Mind you, there's a solid argument for allowing self-signed certs to permit encrypted but unauthenticated transfer. This mode allows MitM, yet it does protect against the threat model of a passive eavesdropper.
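In Python's ssl module, for instance, that encrypted-but-unauthenticated mode is roughly what you get by disabling verification on the client; a sketch of such a context (useless against an active MitM, but it still defeats a passive sniffer):

```python
import ssl

# Opportunistic encryption: accept any certificate, including self-signed.
# A passive eavesdropper sees only ciphertext, but an active MitM can
# present their own cert and this client will not notice.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False       # must be disabled before changing verify_mode
ctx.verify_mode = ssl.CERT_NONE  # skip certificate chain validation entirely

print(ctx.verify_mode == ssl.CERT_NONE)  # True
```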
"Every single issue mentioned in that post only affects end-users. Not a single issue for the operator" - so don't care about the user and the risks we expose them to, only ourselves? This isn't really an approach I'm happy taking.
My poorly communicated point is that using extreme language like "I'm going to hack your static site" dilutes the message and makes average operators less receptive to more advice in the future. Troy does a lot of good work on reducing friction and on advocacy, but sometimes he puts out more extreme content like this, which makes me worry that it may have the opposite effect.
PS - Do you think the vaccine analogy works? I'd appreciate some advice on how to improve it
I'm a hacker at a local Starbucks. I go there every Thursday and use a WiFi Pineapple in my backpack. By naming my WiFi access point similarly to the Starbucks' free WiFi, I trick a few dozen people a day into connecting through my Pineapple instead of the Starbucks-provided WiFi. Over a period of a few weeks I log all traffic and devices. I see a number of regulars - many with their own unique browsing habits. I create a few phishing sites to target these unfortunate users who routinely browse at the coffee shop. Over the course of the next few days I MITM all traffic in the shop and successfully phish a small number of the users. Now imagine a wider net: a collection of compromised networks that don't require my physical presence in a coffee shop, and a small team of individuals selecting vulnerable targets based on their browsing patterns.
Neither you nor your users need to be individually targeted by some 3 letter government agency for this attack to work. They only need to be an unfortunate victim and you only need to be too lazy to spend 10-15 minutes setting up a TLS certificate.
This attack is heavily thwarted by sites using TLS certificates. I'd need to get my hands on a number of fraudulent certificates, and even that can be thwarted by HSTS. Instead of my attack being completely transparent, I'd need to worry about raising the suspicion of users when their https:// browsing starts throwing invalid-certificate errors.
That's why I cringe whenever I hear the mass propaganda that "HTTPS is secure". It encrypts traffic between the two endpoints, that's it.
If they send your user to https://dvfjsdhgfv.com (malicious server) instead of https://dvfjsdhgfv.com (your server), the browser will yell at them about the site being insecure. If they try to use http://dvfjsdhgfv.com, your user can see that it isn't secure. They would need a fake certificate for dvfjsdhgfv.com to serve with their malicious version of the site. Arguing against the increased security because theoretical attacks exist is a bit misguided - especially when certificates have been revoked and CAs blacklisted or driven out of business for this behavior. It's extremely uncommon - there have only been a handful of instances of it occurring/being caught (an important distinction, I'm sure you'd bring up). Because of the difficulty of getting a fraudulent cert signed by a CA, attackers tend to only go after the big fish (Google/Alibaba/Facebook) and hope they don't get caught quickly.
If fake certificates were as common as an unlocked bike left in central L.A. getting stolen, the argument would be a lot stronger.
>It encrypts traffic between the two endpoints, that's it.
Which is why it is important. The attack is called "man in the middle" and not "man at the ends". Also "mass propaganda"? Propaganda from who exactly?
I don't understand the refusal to implement https, even on static sites. It takes literally minutes and provides additional security to your readers/users. Refusal to do so is laziness at best and maliciousness at worst. I have a personal file host that receives <5 unique views/day, mostly from friends, and 99% of all traffic comes from me - I still took the time to set up TLS. It took me under 10 minutes to implement and it was my first time ever doing so. If you expect to have 0 visitors ever, why not just use localhost?
Only if the attacker is very, very stupid. They will happily redirect the request for paypal.com to https://www.xn--paypl-7ve.com (which resolves to https://www.xn--paypl-7ve.com, and Let's Encrypt will happily give you a certificate for it). The latter looks exactly like paypal.com and has a green padlock - so for an unsuspecting user it's "secure". Only with DoH implemented correctly could you talk about the benefits you mentioned; without it, this only gives the user a false sense of security. Seriously, people need to be aware of that.
EDIT: HN formats the IRIs so that the above makes little sense, see https://people.csail.mit.edu/ayf/IRI/index.htm for more examples.
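The trick is reproducible in a couple of lines of Python; here U+0430 is the Cyrillic small "a", visually identical to the Latin one (the spoofed domain is just an illustration):

```python
spoof = "p\u0430ypal.com"   # second letter is Cyrillic U+0430, not Latin 'a'
real = "paypal.com"

print(spoof)                 # renders indistinguishably from paypal.com
print(spoof == real)         # False: a completely different domain
print(spoof.encode("idna"))  # the xn-- punycode form the DNS actually sees
```

Browsers that "patched" the attack simply display that punycode form in the address bar instead of the deceptive Unicode rendering.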
Chrome patched it in version 58, and Opera patched it not long after. Safari and Edge quickly followed suit (or always displayed the punycode), and I believe IE has always shown punycode. That leaves Firefox as the only browser with significant user share that's susceptible to this attack - at least for users who haven't enabled `network.IDN_show_punycode` in about:config, which is probably most (if not all) users who haven't heard of this attack. Firefox has ~6% market share, so this attack would fail on ~94% of your viewers as long as they were paying any attention to the domain. Probably the only way Mozilla will stop dragging their feet in joining everyone else is if someone creates a malicious punycode version of Mozilla with a cert and brings the battle to their doorstep.
This isn't an argument against TLS/HTTPS - this is an argument against Firefox as far as I'm concerned.
Even if you don't use punycodes, many users are still vulnerable to another type of attack that Let's Encrypt allows:
Even without altering the network traffic many people fall victims to these vicious tricks. The big question here is how much attention do you pay to the address bar.
Nevertheless, the benefits of HTTPS are obvious - there definitely is some protection when the user is sending some data. But for reading a static website, I'm sorry, but I hardly see any benefit. I installed Let's Encrypt on all my websites, but each time I see someone calling it "secure" I really get frustrated.
Every single one of those paypal.com phishing URLs issued could have been prevented if users understood how domains work. That's asking an awful lot, I know.
Security, much like any other form of personal safety, is equal parts of following protocol and being educated about the dangers. You can't reasonably protect yourself from something you don't know exists and you won't protect yourself from danger if you don't follow protocol (see also: OSHA, lab safety). If the user's understanding is that "https/green padlock = correct site!" then that's terrible, I agree. If the understanding is "https=secure!" that's better but a bit misguided. A secure connection with a malicious server probably isn't what the user has in mind when thinking "secure". But even this misguided approach is a vast improvement over the alternative of "nothing at all" which is why there has been such a strong push towards it. It's quite literally "something is better than nothing" being applied to the general population of users who will probably never be educated enough to protect themselves properly.
By your last reply I had kinda pieced together that your issue is more with the "https=secure!" generalization and not necessarily an "https isn't any more secure than http" argument.
>The big question here is how much attention do you pay to the address bar.
I check the cert for every site I visit - although if I were to become the victim of a MITM attack using DNS spoofing while in the middle of browsing a site and it was targeted directly at me... I don't check the cert on every page load so would probably be fooled for that small window. I also don't lock my front door when I go get the mail, it's a risk I'm willing to take. I understand this makes me in the 0.001% of "maybe a bit too paranoid" users - if there are even that many of us.
>But for reading a static website, I'm sorry, but I hardly see any benefit.
The benefit is very small but still existent. Simply because the time to implement is in the order of minutes instead of days/weeks - I can't see a good argument against taking the time to implement it. Even if it only ever protects a single user.
Well, it's almost nonexistent. To reiterate: if the attacker can only sniff your traffic, they will see what static websites you visit and that's it - whether you use HTTPS or not. On the other hand, if the attacker can modify your network traffic, they will attack you in a million ways, but via a dynamic website (i.e. one requiring some interaction on your part - sending a login etc.). Such an attack on a static website doesn't make any sense when you can do so much damage everywhere else. Can we agree on that? If so, I find Google's past policy of marking websites with forms etc. as insecure pretty responsible, and I applaud it. Whereas now it looks like blackmail on their part. And I still don't have the feeling I'm protecting any of the users who visit my static websites; I'm just forced to do it because Google rules the Internet now.
Doesn't HSTS thwart this? Unless paypal.com were omitted from the preload list, AND you had never browsed to paypal.com before with that browser, it should refuse to connect over HTTP and the attacker won't be able to issue their redirect. It will try HTTPS instead and immediately fail out of certificate validation.
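Right - conceptually the browser consults its HSTS store before ever issuing a plaintext request, so there's no HTTP traffic for the attacker to redirect. A toy model of that upgrade logic (the store contents here are illustrative):

```python
# Toy model of a browser's HSTS upgrade: known-HSTS hosts are rewritten
# to https:// locally, before any network request is made.
hsts_hosts = {"paypal.com"}  # seeded from the preload list and past visits

def navigate(url: str) -> str:
    scheme, _, rest = url.partition("://")
    host = rest.split("/", 1)[0]
    if scheme == "http" and host in hsts_hosts:
        return "https://" + rest  # upgraded locally; no HTTP packet is sent
    return url

print(navigate("http://paypal.com/login"))  # https://paypal.com/login
print(navigate("http://example.com/"))      # unchanged: not in the store
```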
 https://www.xn--80ak6aa92e.com/ (this will show the punycode domain on HN, visit it in Firefox it will look just like https://apple.com )
 The cert still shows punycode though most users wouldn't check the cert: https://kimiwo.aishitei.ru/i/MMFWtvE5ZgYdHWki.png
The problem with https everywhere is - for all its good aspects - that it adds a layer of fragility to the web. It seems we're leaving behind the days when a website could simply be, untouched, for decades. Now if you don't update your TLS certificates every few months, the thing goes poof.
It would be nice if there was a good way to publish content to the web without having to tend it constantly.
Installing certs is just as regular as installing patches, do it every 6 months if you like, but certainly not every 10+ years!!
The issue is key management. Both parties need the same key and it has to be at least as large as the data you want to send. Each set of parties needs a different key.
If you had a method to securely transmit such keys then you could just transmit your data over it instead.
This is why one time pads are only used by countries to communicate with staff overseas. You can send the pads by diplomatic courier for use in communication later. There is no equivalent mechanism for your web activity and every site on earth.
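The mechanics themselves are trivial, which is part of the appeal; a minimal sketch showing that the pad must be at least message-length and that the same XOR both encrypts and decrypts:

```python
import secrets

def otp(data: bytes, pad: bytes) -> bytes:
    # One-time pad: XOR each byte with a pad byte. The pad must be at
    # least as long as the data, and must never be reused.
    assert len(pad) >= len(data)
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"attack at dawn"
pad = secrets.token_bytes(len(message))  # key as large as the data itself

ciphertext = otp(message, pad)
print(otp(ciphertext, pad) == message)   # True: XOR twice restores the text
```

The hard part is everything outside this snippet: getting `pad` to the other party without the attacker seeing it.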
That's what happens with passwords.
If I connect to https://www.SomeWebsiteIveNeverVisited.com/, how is the web server supposed to tell me where to get the key? Or if I, the client, am choosing where to get the key, how do I securely tell the server where to get it?
Passwords work because they're being sent over TLS which we've decided is "good enough".
Both parties can deduce that offset from the messages they exchange.
Before reaching the end of that key, they can agree on another source for a new key to continue the communication.
Honestly, it feels like you're treating "one-time pad" as a buzzword without understanding what it actually is. It's just an encryption technique. It doesn't fix the PKI problem. And your one-time pad key needs to be sent over a secure channel. How do you suppose that happens?
If I need encryption for one of my projects, I'll try that.
Experts are often wrong. They exist because we don't know. When we know something we don't need experts anymore. We just know and apply our knowledge.
Yes, sometimes experts get it wrong. Yes, non-experts can sometimes find solutions that the so-called experts couldn't find. I'm not arguing against those claims.
But suggesting one-time pads as a solution to PKI is like seeing someone on the side of the road with a flat tire and suggesting they refill their gas tank.
IMHO most people defending HTTPS do so out of loyalty, because they invested so much time in it, and not because they understand all the details of the crypto behind it.
My message is just: "It's overcomplicated. I quickly found an alternative. I don't buy the meme".
That's exactly my point though. Your proposed alternative does not solve the problem.
We didn't reject your alternative because we think you're incompetent. We didn't reject your alternative because we think HTTPS is fine.
We rejected your alternative because it DOES NOT SOLVE THE PROBLEM. AT ALL. And rather than admit that, you keep defending a point that nobody is arguing against.
You're talking about who and not what because the "what" is proven to be unbreakable. You're dishonest.
Earlier, I asked you a question to try to lead you to understand why your proposal was wrong, and you told me to answer my own questions and called me patronizing.
You continued to defend a point (OpenSSL and PKI have problems) that nobody argued against.
Even now, you keep acting like I'm telling you wrong simply because you admitted you're not into crypto.
And you're calling ME dishonest?
I give up. At this point, I'm quite certain I'm being trolled. Or you think being told you're wrong is a personal attack. In either case, you're not worth my time.
You feel threatened because you invested time in those tools. It's not rational. It's an emotional reaction.
I use upper case because your responses are frustrating me, because you continue to insist that your suggestion is being dismissed simply because you're not into crypto, when I have said over and over that it was dismissed because it is simply not a valid solution to the problem originally brought up.
Your claim that I keep trying to change the subject instead of agreeing on the problem is baffling me. Which problem are you referring to here?
IMO you just wanna be loyal to the group you think you belong to (you said "we"). I don't do that. I decide for myself only.
You don't admit your part. You just justify your feelings. If you feel threatened, it's your problem. Not mine.
Maybe I'm just entirely misinterpreting your messages, because you're never specific about what you're trying to refute.
> It's an emotional response.
An emotional response to what? Please be clear here. Are you still thinking the rejection of one-time pads as a solution to the problems of OpenSSL and PKI is not based on logic or merit, but somehow based on emotion? If the answer to this is yes, then just come out and say so, because I would gladly explain what is wrong with your proposal.
> IMO you just wanna be loyal to the group you think you belong to (you said "we")
What group do you think I'm trying to be loyal to? The fans of OpenSSL? The people that believe everyone should use HTTPS?
> You don't want to admit your part.
What "part" am I supposed to be admitting?
> You just justify your feelings.
> If you feel threatened, it's your problem.
I don't feel threatened. Why would I feel threatened? You're the one that is acting persecuted for having a proposal get rejected.
If you find this patronizing, I apologize. I do not mean to be. But you admit that you're not into crypto, and so I'm assuming you don't understand why your proposal wouldn't work. At the very least, I don't know what you know and don't know.
So, as mentioned, supporting HTTPS on your site adds the problems of OpenSSL's large code base, a history of OpenSSL vulnerabilities (Heartbleed being the best known), and the problems of PKI. These are all well known, and are deemed an acceptable risk because the risks of NOT using HTTPS are much higher.
Now, one-time pads definitely have a lot of benefits. First, they're the only algorithm mathematically proven to be unbreakable when used properly. They are extremely simple to implement, and using them for encryption instead of having to choose between the dozens of algorithms that OpenSSL uses could reduce your encryption library's memory and storage footprint.
Of course, the big caveat here is that they are unbreakable when used properly. This means:
- A pad can never be used more than once -- If a pad is used twice, an attacker who has sniffed two encrypted messages can recover the XOR of the two plaintexts (and, for text data, usually both messages and thus the pad). This derivation is quite trivial. (see https://crypto.stackexchange.com/questions/59/taking-advanta...)
- The pad must be at least as large as the payload -- If the pad is too short, you have to repeat the pad, causing the problem mentioned above.
- The transferring of the pad needs to be done over a secure channel -- You have to assume that an attacker is always listening and can see all traffic. Even if the server tells the client "Go get the pad from this URL", then the attacker will get that URL as well. Also, keep in mind that the pad needs to be at least the size of the data being sent, which means that if you have to fetch a pad, you're doubling all bandwidth requirements.
- The random number generator used to create a pad must be cryptographically secure -- Predicting the supposedly random numbers another system generates is a well-known attack with several proofs of concept. Generating the volume of cryptographically secure random numbers needed for a one-time pad large enough to transfer very large amounts of data just isn't feasible.
- One-time pads do not solve the problems of PKI -- One-time pads do not offer any method of authentication. They're nothing more than a form of symmetric encryption, and symmetric encryption does not attempt to solve the problem of authentication.
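The first caveat is easy to demonstrate: reuse a pad and the pad itself cancels out of the XOR of the two ciphertexts, handing the attacker the XOR of the plaintexts without the pad ever being exposed. A small sketch:

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

m1 = b"attack at dawn"
m2 = b"retreat at six"
pad = secrets.token_bytes(len(m1))  # mistakenly reused for both messages

c1 = xor(m1, pad)
c2 = xor(m2, pad)

# The attacker never sees the pad, yet it drops out entirely:
print(xor(c1, c2) == xor(m1, m2))   # True: a strong toehold for cryptanalysis
```

From `xor(m1, m2)`, crib-dragging with guessed words recovers both texts for typical English plaintext.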
That's basically the gist of it. If you want me to explain anything better (Like what symmetric encryption is, and how it differs from asymmetric), I'd be more than happy to. I'm not a crypto expert by any means, but I like to think I have a strong grasp of the basic concepts and ideas.