1. Create a local CA
2. Create a certificate signed by that local CA
3. Add the CA to your trusted authorities (Firefox needs an extra step: either enable the "security.enterprise_roots.enabled" flag, or import the CA certificate manually).
Details at: https://gist.github.com/cecilemuller/9492b848eb8fe46d462abeb...
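The three steps above can be sketched with openssl roughly like this (all filenames and the CN are illustrative; the subjectAltName line matters because modern browsers ignore the CN):

```shell
# 1. Create the local CA: a private key plus a self-signed root certificate
openssl req -x509 -nodes -new -newkey rsa:2048 \
  -keyout ca.key -out ca.crt -days 825 -subj "/CN=My Local Dev CA"

# 2. Create a key + CSR for the site, then sign the CSR with the CA,
#    attaching a subjectAltName (browsers require SAN, not just CN)
openssl req -nodes -new -newkey rsa:2048 \
  -keyout site.key -out site.csr -subj "/CN=localhost"
printf "subjectAltName=DNS:localhost,IP:127.0.0.1\n" > san.ext
openssl x509 -req -in site.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out site.crt -days 825 -extfile san.ext

# 3. Check the chain; then import ca.crt into your OS/browser trust store
openssl verify -CAfile ca.crt site.crt
```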
I mean, if there are adversaries out there trying to hack our communications, then we need to let these adversaries try, so that we can engage them head on and their methods become published, public, and thoroughly analyzed by the people in charge of strengthening our protocols.
This approach would be like fracturing the bone to make it stronger: we allow nation states to hack us in order to figure out ways to stop and prevent such hacks using open and transparent software alone. Trusting any group of people anywhere "just because they're trustworthy" feels like a variable defining <the contents of its data> as equal to <the contents of its data>. It just doesn't make sense for a variable to trust itself "just because", and it makes me wonder if something fishy is going on under the hood.
"Certificate Transparency helps eliminate these flaws by providing an open framework for monitoring and auditing SSL certificates in nearly real time. Specifically, Certificate Transparency makes it possible to detect SSL certificates that have been mistakenly issued by a certificate authority or maliciously acquired from an otherwise unimpeachable certificate authority. It also makes it possible to identify certificate authorities that have gone rogue and are maliciously issuing certificates."
"Certificate Transparency logs use a special cryptographic mechanism to facilitate public auditing of certificates and logs. This special cryptographic mechanism, known as a Merkle hash tree, is a simple binary tree consisting of hashed leaves and nodes (see figure 1)."
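A toy version of that Merkle hash tree, to make the quoted mechanism concrete (simplified sketch: real CT logs follow RFC 6962's exact rule for splitting odd-sized levels, whereas this just duplicates the last node):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    # RFC 6962-style domain separation: leaves and interior nodes are
    # hashed with distinct prefixes (0x00 / 0x01) so an attacker cannot
    # pass off an interior node as a leaf (second-preimage resistance).
    level = [h(b"\x00" + leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:              # odd count: duplicate the last node
            level.append(level[-1])     # (simplification vs. RFC 6962)
        level = [h(b"\x01" + level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Any change to any logged certificate changes the published root,
# which is what makes the log publicly auditable.
root = merkle_root([b"cert-1", b"cert-2", b"cert-3", b"cert-4"])
print(root.hex())
```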
EDIT: And you're saying the SSH model is broken then. Also, you can verify the certificate signature via another channel, like a git repo of the signatures of the most important websites (I know, it looks like a CA).
1. SSH's whining about first connection fingerprint trusting is needlessly petty and nobody actually checks the fingerprints, and in many cases they have no need to do so anyways.
2. Almost all cert errors a user will encounter in the real world are the fault of misconfiguration (wrong domain) or pathological/greed-driven behavior (expiration) rather than something that actually impacts the confidentiality of the connection (which is what we care about).
3. The fact that all cert errors are treated as the same severity (red screen! exclamation points!!1 YOU ARE IN DANGER!!!1one) conditions people to click by them without thought.
> 1. SSH's whining about first connection fingerprint trusting is needlessly petty and nobody actually checks the fingerprints, and in many cases they have no need to do so anyways.
I disagree, but this is really a question of configured defaults and security UX. The first connection you make to a server is not secure, and impacts the security of all subsequent requests to that server.
> 2. Almost all cert errors a user will encounter in the real world are the fault of misconfiguration (wrong domain) or pathological/greed-driven behavior (expiration) rather than something that actually impacts the confidentiality of the connection (which is what we care about).
This is the great success of TLS - attacks are so rare that most users won't encounter them. Misconfiguration is indistinguishable from an attack, so the only reasonable thing to do is to warn the user as if it is an attack. Expiration is not a money grab, especially since the CA with the shortest expiration is also completely free. Expiration is a great thing. It limits the window of vulnerability for compromised certificates, and means that revocation lists like those shipped by chrome do not have to grow endlessly large, since expired certificates can be pruned.
> 3. The fact that all cert errors are treated as the same severity (red screen! exclamation points!!1 YOU ARE IN DANGER!!!1one) conditions people to click by them without thought.
With HSTS, that's not an option - and chrome can be configured by sites and enterprises to disallow bypassing certificate warnings. For example, try bypassing this one:
SSH actually follows the same model as SSL in this respect. It's just that basically everyone goes self-signed and there isn't a big institutional system to distribute SSH CAs.
It's more common in enterprisey environments where you have config management to distribute the CA but you can do it right now https://www.lorier.net/docs/ssh-ca.html
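A minimal sketch of what that looks like (key types, identities, and hostnames here are made up; see the linked page for the full setup):

```shell
# Generate the CA keypair that your config management would distribute
ssh-keygen -t ed25519 -f ssh_ca -N "" -C "example-ssh-ca"

# Generate a host key and sign its public half with the CA
# (-h marks a host certificate; -n lists the names clients may use;
#  -V bounds the validity window)
ssh-keygen -t ed25519 -f host_key -N "" -C "web01 host key"
ssh-keygen -s ssh_ca -I web01 -h -n web01.example.com -V +52w host_key.pub

# Clients then trust every host the CA signed via one known_hosts line,
# instead of per-host fingerprint prompts:
#   @cert-authority *.example.com <contents of ssh_ca.pub>
ssh-keygen -L -f host_key-cert.pub   # inspect the resulting certificate
```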
But when I spin up a new cloud server built on some image from a cloud provider, I am not sure how I can verify this certificate. How can I verify I am not being MITM'ed on this new server?
Or maybe the cloud provider is dropping the ball in not giving me the fingerprint when I request its creation?
But yes, it does seem broken to me.
1. Maximum cert lifetimes are falling. Once upon a time you'd just pony up the cash and get five years; a year ago it was 36 months, for a few months now it's been 825 days, and there is continued downward pressure. So you are still going to need to renew this cert, and that means...
2. You can and should automate. Imagine buying a device in 2018 that expects you to manually input an IP address because "Eh, we could do DHCP but this was less effort (for us)". That'd be crazy right? Time to feel the same way about certificate automation.
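With Let's Encrypt, that automation is essentially free. A hedged sketch (the domain and web server here are illustrative, and certbot normally installs its own renewal timer during setup anyway):

```shell
# One-time issuance, here via certbot's nginx plugin (needs root and a
# publicly reachable domain, so this is illustrative only)
certbot --nginx -d www.example.com

# Renewal can be as simple as a cron entry; `renew` is a no-op until a
# certificate is near expiry, so running it twice daily is harmless:
#   0 3,15 * * * certbot renew --quiet

# Dry-run to confirm the renewal path works end to end
certbot renew --dry-run
```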
Look at the trend. First it was just a small SEO bump.
But so often the device doesn't have any name at all. It's maybe 10.0.0.1, and so is everything else; the problem only appears to be in the security layer because that's the first place which absolutely insists that you can't have a situation where everybody is just named "Bruce" with no other identifier.
Where it does have a name, the name is often not part of the global namespace. At least here we can fix that with a namespace suffix. Sold five million routers with serial numbers? Name them $serialnumber.routers.your-company.example and problem solved. Now that they have a name, issuing them certificates isn't difficult.
(Yes, a commercial vendor who'll hook you up with five million certificates won't do it for free. The little rubber feet and the half-arsed English translation of the instruction manual weren't free either. Too bad)
The thing issuing DHCP leases has full control over your ability to connect to the internet anyways, so around here seems like the right place to put it.
My only qualm is that I trust router manufacturers to implement this correctly about as far as I can throw a sheet of paper.
As a user, I don’t want local networks setting me up to make me recognize their CA services.
At first I liked SSL everywhere, but now I’m seeing a lot of hacks that are going to make SSL less useful.
You say that as if users don't already mindlessly dismiss most warnings already. I'm not convinced this would be that big of a difference from the current system.
If this trend continues, it means you will no longer be able to configure these devices with a web browser, but will be forced to use the manufacturer's "cloud solution" or install an app, where both ends of the TLS connection can be controlled and you're not bound to public CAs.
It’s not a good trend, but this is a plus for most consumers who don’t care about local network security.
The browser has no realistic way to conclude that your "local" network is secure. It probably isn't. So there's no sane policy that says that's OK.
If you are using localhost (or another HTTP site instead of HTTPS) intentionally, it's not going to cause any problems.
Many small devs don't want to deal with the complexity of HTTPS and the extra fees. It's a lot better with Let's Encrypt, but I've talked to non-technical people who have shelled out $300/year to their host providers just to have HTTPS, and inevitably lots of things break due to hard-coded links in their outdated software.
If authentication happens through a 3rd-party provider and there isn't any need for a site to be secure, why force the matter?
Broken sites lead to a massive drop in sales. All because Google thinks it knows best.
If they truly wanted to solve the problem, why don't they offer a proxy that converts HTTP traffic to HTTPS traffic for use in Chrome?
Instead they force people who don't have the technical knowledge that they can get HTTPS for free to pay huge fees and inevitably have their sites broken in the process.
For example, a while back Chrome changed their porn viewing mode ("Incognito") to label HTTP Not Secure, and changed normal mode to mark pages Not Secure if the user seems to be filling out a form.
In a world of state surveillance and invasive data practices by SV-based companies, it's a difficult to understand this obsession with HTTP scaremongering by some to perpetuate more centralization.
You surely meant it's not difficult, right? The first part of your sentence is exactly the answer.
Everyone is concerned about centralization in other contexts but do not see the downsides of certificate centralization and control? How is it that there is no technical solution that does not involve 'authorities'?
This is how control works: first it's innocuous and harmless - just get a cert, it's even free from Let's Encrypt. Then after that is accepted it's x, y, z. Then it's x, y, z and your firstborn. And now you have a way to effectively prevent people from publishing, and can silence dissent and anything you don't like under the cover of 'process'.
Sarcasm aside, I think that the big organizations pushing for HTTPS everywhere also tend to employ a lot of people who visit HN; company culture does have an effect.
If you need to simulate HTTPS for your local host, but you actually control all the moving parts (e.g. a dev environment) you can use any private key + associated certificate for a DNS FQDN you control, then use /etc/hosts or its moral equivalent to tell your local machine that this name is on the local loop, and the key + certificate will validate.
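Concretely (the name dev.example.com is hypothetical, and this assumes you already hold a valid key + certificate for it):

```shell
# Append to /etc/hosts (Windows: C:\Windows\System32\drivers\etc\hosts)
# so the real FQDN resolves to the loopback interface:
#   127.0.0.1   dev.example.com

# Serve locally with the real key + cert (server command illustrative):
#   some-server --cert dev.example.com.crt --key dev.example.com.key --port 8443

# The browser (or curl, with no -k flag) now validates the full chain,
# because the name it resolved matches the certificate it was shown:
#   curl https://dev.example.com:8443/
```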
You must not ship this as a "product" because when you do that all the end users end up with the private key, which both destroys the whole _point_ of public key cryptography AND violates the terms of whichever CA issued you with the certificate.
Still self-signed, but generates a CA that gets added to your browser. It is all pretty seamless.
No way to verify what you're sending on the wire if the application is proprietary (and statically compiled) without dumping memory, which would be quite odious.
You own the client. You can watch the traffic in the browser before it is encrypted.
Now, imagine you are Joe Blow hosting his blog on some small web host that barely supports Wordpress. Logging into CPanel is confusing to you. How do you deploy SSL?
However, I have to wonder how many hosts actually enable it...
This is really going to create an additional layer of inconvenience for people who just want to drop some html documents in an ftp folder and be done with it.
So I was reading your blog and am particularly concerned about the crypto miner present on the page. Care to explain this to me? Hint: MITM due to insecure context and the miner isn't coming from you but as a user, I'm going to blame you because it happens on your insecure blog page.
Both my personal static site and my "literally only I can use it I've disabled user registration" file host use https. I can think of no good reason not to - to which people always link me that stupid anti-https n-gate article. The same site where the owner links to a Patreon account that I cannot verify is them and not a malicious actor looking to get donations from readers of the site. They also link to a Twitter account that may or may not be them.
And people that _don't_ understand cyber security will have no context for what "not secure" means, and may needlessly avoid a variety of HTTP static-HTML sites, where these security issues aren't that great a concern.
Preventing MitM attacks is the only thing I can think of.
You don't have to wonder all that hard given how publicly Google has discussed their stance on this. They have been using their leverage to try to force SSL usage for some time, including adversely affecting search rankings for sites that don't use it. They have clearly articulated many times they think SSL everywhere is important for the web, and they have the leverage in search/browser marketshare to try to make this a reality.
The Google I/O talk on Google's desire for "HTTPS everywhere"
For what it's worth, most metrics show a significant jump in SSL usage in 2016/17 following the announcement that it could adversely affect search rankings, although who knows if the two are related.
This is not a theoretical vulnerability. Comcast routinely adds stuff to unencrypted web pages.
In a way, it's a little like a public health argument. You might not be worried about measles but you should still be vaccinated for the sake of the herd.
"I'm lazy, so I don't want to set up encryption on my website, but please don't tell my site visitors. They don't need to know"
Good education >> Browser gimmicks.
Also I get a huge red alert when I follow that link. Seems like chrome is doing a good job telling people it isn't secure.