We use this sort of identification for any joint services in public-sector digitalisation, which covers a lot, because much of our foundation is shared.
Reading this I’m fairly happy we do it mainly with C# at my place. All that configuration required in Java seems crazy to me: why would you want unsafe settings enabled by default? You can turn the safe behaviour off in C# too, opening yourself to the same vulnerabilities, but it’s an active and very obvious choice to do so. Though it’s likely been fixed in the seven years that have passed since this article.
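To make the complaint concrete: in Java, a raw `SSLSocket` validates the certificate chain but does not check that the certificate was issued for the host you asked for, unless you opt in. A minimal sketch of the opt-in (class and method names here are mine, not from the article):

```java
import javax.net.ssl.SSLParameters;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class SafeSocket {
    // A raw SSLSocket does NOT verify the hostname against the server's
    // certificate unless endpoint identification is enabled explicitly.
    public static SSLSocket connect(String host, int port) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        SSLSocket socket = (SSLSocket) factory.createSocket(host, port);
        SSLParameters params = socket.getSSLParameters();
        // Without this line the handshake accepts any trusted cert chain,
        // regardless of which host the certificate was issued to.
        params.setEndpointIdentificationAlgorithm("HTTPS");
        socket.setSSLParameters(params);
        socket.startHandshake();
        return socket;
    }
}
```

Note that the unsafe behaviour is the default: a freshly constructed `SSLParameters` has no endpoint identification algorithm set at all. Higher-level APIs like `HttpsURLConnection` do enable it, which is exactly why hand-rolled socket code is where this bug hides.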
I do wish we had a better system to identify IT systems. When you operate more than a thousand connections, maintaining certificates that expire every four years becomes really fucking tedious. We’ve automated most of it, but some of it still requires hands-on work, and we’re not perfect: I’ve seen project managers e-mail private keys around when someone bought a system without going through IT...
This is the default implementation; our WebKit port (the main consumer of this API) has a subclass of this which shows a "this certificate is invalid, do you want to continue?" message and actually returns false.
But it's been multiple years since that comment was added; it's probably past time to fix that. Thanks for the (inadvertent) reminder :)
Those dialogs were a bad idea in browsers (and have been gradually going away) too. Never do this in new software.
Treat "Cert is broken" the exact same way you'd handle not being able to connect. Don't treat it as something you can just cross your fingers and ignore.
As a user I really dislike this, because more often than not certs are broken because the server is temporarily misconfigured (rather than for some malicious reason), and I don't even care about that (or even if it really was compromised). All I care about is reading some page's content, connecting to some messaging server (Pidgin often had issues with, IIRC, the MSN servers), etc.
Of course, if it's about, e.g., downloading some application (like an auto-update mechanism), then sure, treat it like that. But in other cases let the user decide, even if that sort of override is opt-in.
The device doesn't _need_ a certificate for a "local" (presumably RFC1918) IP address.
It needs a certificate for its name, and arranging to have a valid (by which I'm assuming you mean trusted in browsers) certificate for a name isn't hard. Sectigo and DigiCert both offer vendors a suitable product for that purpose last I checked. If you're making a short run hobby product you could just use Let's Encrypt.
In a typical consumer network every device gets a (more or less) random IP address from DHCP. So there is no guarantee that a DNS name will always point to the right IP address, which makes a fixed DNS name impossible, and that in turn makes it impossible to get a valid certificate, especially from Let's Encrypt.
In simple terms:
You use example.com for your IoT device. Where should example.com point to? To all possible ip addresses? I don’t think so...
Besides that, you need a private key on the IoT device to terminate the TLS traffic. If every device ships with the same key, imagine somebody extracts it from one device and can now basically MITM all the other devices...
If you think the lesson here is "SSL/TLS is terrible, look at the bad implementations people have done" then you screwed up.
What's notable is that TLS is good enough that this even matters. Compare the situation with PGP and S/MIME. Instead of a list of bad examples, as a contrast to how it should be done, all you can say for those entire ecosystems is "Well, this is terrible, never do any of this".
The same story applies for the Web PKI. There have been a bunch of problems with the Web PKI over the years. But rather than "This PKI is terrible" the lesson is actually "This PKI is so good that it actually matters if things go wrong".
I have to agree with Frank. TLS has proven itself extremely hard to implement or use correctly. And why use certificates if you can use plainkeys? There are other choices, like Noise.
TLS is for use to connect arbitrary peers on the Internet, which means you're going to need a PKI. So that means certificates.
I don't happen to agree with Noise's philosophy about agility, but that's one of those things where we'd have to agree to disagree in the short term. In terms of replacing TLS, Noise just isn't in the picture at all.
Nope. As I said, if you need a transport layer to connect arbitrary peers over the Internet, TLS is exactly what the doctor ordered and it already exists. Already studied, already got libraries that implement it, already got test frameworks: everything is in place. An alternative, even if it were technically no worse (which isn't guaranteed) and available today (impossible), wouldn't have those things.
> You can do PKI with plainkeys.
Nope. A PKI specifically involves the binding of keys to identity, that's what the certificates are for. You don't have to have X.509 (though I'll argue you might as well) but you need that binding.
SSL/TLS in anything that's not a browser is a total shit-show, and this is ignored by security professionals, ISVs, developers, and network architects alike.
As an example, take a look at something like a Citrix NetScaler, a popular network load balancer and security appliance (similar to an f5 BIGIP LTM):
Until recently, it was flat out unable to validate host names because like all network devices, it assumes that "IP address == the host".
Some dingbat put the "host name" part of the SSL validation into the SSL Profile. So you now have to make a separate profile for each and every host name, making this feature practically unusable.
By default it'll accept any certificate for a back end, signed by any CA. Or self-signed. Or whatever. 512 bits? No worries! It's a cert! It's good! We're SSL now!
Recently "server authentication" was added so you can actually validate the cert chain of a back-end service. Except for one minor flaw: it lets you pick exactly one signing certificate to validate against. So even if you know ahead of time that a back-end server is about to have its intermediate CA change, you're facing at least a temporary outage while you quickly switch out this parameter on the NetScaler.
For some baffling reason, the back-end and front-end SSL capabilities are wildly different. You read the manual and think: yay, there's TLS 1.3 support now! Nope... front-end only.
The stupid things still generate 512-bit keys by default, and this can't be overridden for some scenarios, making them so insecure out of the box that Chrome refuses to talk to one.
Validating CRLs or OCSP is so difficult that I've never seen it set up on a NetScaler. I tried once and gave up.
Sure, you're keen. You want to validate CRLs and use OCSP like a good boy. Bzzt... chances are that some Security Troll has blocked outbound port 80 from the NetScaler because everybody knows that it's an "insecure protocol". So you're now facing a multi-month argument with a whole team of people convinced that you're trying to undermine their precious firewall rules.
There's no supported way of renewing a certificate automatically on one of these things, so of course, certificate expiry is like the #1 reason for outages in any NetScaler deployment.
Etc... it just goes on and on.
A lot of SSL/TLS design for network appliances was very obviously hacked in to support one scenario only, and anything else is going to be dangerously insecure. NetScaler was originally designed to do front-end SSL offload for HTTP-only servers in the same broadcast domain on a physically secured network. For any topology or scenario more complex than that it just falls apart and provides essentially zero protection against a MitM attack or anything similar.
Hostname verification got much better. It's default-on in a lot of common software these days where it might have been optional or even entirely broken back in 2012.
People do struggle with IDNs. The key trick here is to understand that SANs are mandatory in the Web PKI and by definition they are in DNS's own internal A-label format ("punycode") because the character set used for SANs deliberately isn't Unicode capable. You will need to know the A-labels in order to successfully resolve the hostname to get an IP address, so you can re-use those A-labels to match the SANs, and doing so will make your software work how programmers expect it to. You don't need the U-labels at all, they are only for display purposes. If you find yourself providing a separate "Hostname to verify" API distinct from the interface where you learn the name to connect to that's where you're going to trip yourself up, so don't.
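The "A-labels only" rule above maps directly onto Java's standard library: convert the user-facing name once with `java.net.IDN.toASCII`, then use that single string both for resolution and for SAN comparison. A minimal sketch (class and method names are mine; it deliberately ignores wildcard SANs to keep the point visible):

```java
import java.net.IDN;
import java.util.Locale;

public class HostMatch {
    // Convert a user-facing (possibly Unicode) hostname to its A-label
    // ("punycode") form. This one string serves both purposes: DNS
    // resolution and comparison against the certificate's SAN entries,
    // which are always A-labels themselves.
    public static String toALabel(String host) {
        return IDN.toASCII(host).toLowerCase(Locale.ROOT);
    }

    // Exact-match check against one SAN dNSName entry. Real verifiers
    // also handle wildcard labels; omitted here for clarity.
    public static boolean matchesSan(String host, String sanDnsName) {
        return toALabel(host).equals(sanDnsName.toLowerCase(Locale.ROOT));
    }
}
```

For example, `toALabel("bücher.example")` yields `xn--bcher-kva.example`, which is the form that appears in both the DNS query and the certificate. The U-label form never enters the matching logic at all.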
But yes hostname checks are still embarrassingly missing in plenty of less obvious places if you go looking, they did not vanish entirely overnight. The obscure Haiku operating system is an example, they offer a "secure socket" abstraction but it doesn't implement hostname validation so in essence it's worthless, yet it is used by their native web browser and other software.
> The obscure Haiku operating system is an example, they offer a "secure socket" abstraction but it doesn't implement hostname validation so in essence it's worthless
Probably very true. But the topic deserves renewed attention, since all the voice-controlled IoT devices require SSL verification at some point if popular services are in use, and that happens outside of any browser. It also creates a few requirements for software maintenance.