Hacker News

I've talked about TLS before; in its current form it is a bane on the Internet.

Here is what users need from any browser that cares about the user:

- An option to not use OS supplied root certificates. Instead, to customize and set up user allowed certificates on the first run/first site use.

- An easy UI for trusted certificates that shows categories and explains what each certificate's scope means.

E.g. 'This certificate is issued by a telco provider on behalf of the Government of Canada; it is a blanket certificate for all domains and can let the government see all your encrypted traffic.'

'This certificate is issued by the IT department of your organization and lets them see all encrypted traffic.'

- The browser should warn when one of these blanket certificates is used on a general-purpose website (like Google services), and give the user a way to tell whether a potential MITM is happening.

'Normally this website would use a certificate provided by Google.com Inc, but your government-issued certificate is being used instead, which can let the government see all encrypted traffic you are sending to Gmail.com.'

- Warn meaningfully about algorithm choices: 'This connection uses 2048-bit keys; it is 2023, and well-funded governments may already have computers capable of breaking encryption at this strength.'




Requiring users to manage their own root bundle is a recipe for disaster for ordinary users. You shouldn’t need to understand path building to securely initiate a connection with a website.

(Besides, macOS and Windows do let you tweak the trust store from their respective GUIs. There’s nothing stopping you from doing so.)


Anyone who would suggest putting security decisions in the face of users should be reminded of Windows Vista and its User Account Control. Dialog fatigue is not to be taken lightly.

See also, cookie consent banners.


Then a first-time setup may be okay. People assume everything is safe because there's a green lock icon, but that has to change if we want to improve users' actual privacy.

While dialog fatigue is real, browsers could take care of offloading the cookie dialogs if there were a standard for it.

Doing what's necessary for privacy is no excuse to throw bad UI at users. There has to be a way to customize our in-transit security, and with it our privacy. It's a step in the right direction.


I disagree with the current approach.

Corporate users do not have control over this, nor will people in countries where governments propose laws forcing them to accept crony CAs into the trusted list.


These are orthogonal issues: it’s not enough for governments to have CAs in trusted roots, as this attempted regulation demonstrates. OS-supplied roots exist because ordinary users can’t and shouldn’t be expected to make PKI policy decisions to use the Internet securely.

If you’re trying to circumvent your corporate laptop’s MITM, you’re playing a lost game. It’s also a game that has nothing to do with ordinary users on ordinary, non-work computers.


Corporate users don't generally have control period. Keep your personal stuff away from the work laptop. TLS is useless on a machine you don't control.


Government crony CAs are already in the trusted list. How does this change anything?


This makes it harder for them to actually misuse their certificates in blanket spying programs. Every browser that allows you to configure your trusted list in a more manageable way is one more hurdle these CAs have to cross.


The default trusted list has many government CAs on it which can be used for blanket spying programs.


Yes - and the point of adding this is to ensure that the user is warned when this happens.


Warnings like that would likely violate eIDAS' requirements for "user-friendly" recognition, as well as its prohibition against "mandatory requirements".

This is why eIDAS is so nuts - it forbids browsers from rolling out pro-user security measures.


> 'This certificate is issued by the IT department of your organization and let's them see all encrypted traffic'

I'm a bit out of the loop on modern security implementations. In theory though, I thought the purpose of certificates and certificate authorities was to certify that you're actually talking to who you think you're talking to?

A browser can still communicate securely with any computer out there, regardless of corporate certs, etc. Or do the corporate firewalls block communications that they cannot inspect?


> A browser can still communicate securely with any computer out there, regardless of corporate certs, etc. Or do the corporate firewalls block communications that they cannot inspect?

Yes, you’ve got it. Many corporate firewalls require you to install their own root CA certificate and “transparently” intercept and rewrite traffic leaving the network. It looks like you’re negotiating with the external site, but you’re actually doing TLS negotiation with an on-prem proxy that connects to the destination site on your behalf.

I say “transparently” in quotes because it definitely, in practice, breaks things here and there.


I'm sure it's well established practice, and the courts would find it legal, but it seems like fraud. My computer says "I want to connect to my bank, are you my bank?" and the corporate firewall says "yes, I am your bank".


I think the loophole and why it's legal is this:

> My computer says

It's not your computer. It's the company's computer and they installed the root certificate on it. For any non-provisioned machine you'd get a certificate error because the corporate-issued certificates are signed by the corporate CA that isn't browser-approved by default.


Your IT department owns a certificate authority. They add its self-signed root certificate to the trust stores of all the machines in your firm using group policies or whatever. They then install something like a Bluecoat browsing proxy and configure your browsers to use it, again via group policy. The Bluecoat terminates the TLS connection by transparently providing certificates for the sites you're trying to access, which your browser trusts. It then makes the outbound connection to the actual site, if appropriate.

It's called TLS interception.
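The trust logic behind that flow can be sketched as a toy model, with no real cryptography and all names made up for illustration:

```python
# Toy model of TLS interception: just the trust relationships, no real crypto.
# All CA and host names here are made up.

CORPORATE_ROOT = "Corp IT Root CA"

def proxy_mint_cert(hostname: str) -> dict:
    """The intercepting proxy mints a fresh cert for whatever host the
    client asked for, signed on the fly by the corporate root."""
    return {"subject": hostname, "issuer": CORPORATE_ROOT}

def client_verifies(cert: dict, hostname: str, trust_store: set) -> bool:
    """A client accepts any cert whose subject matches the hostname
    and whose issuer is in its trust store."""
    return cert["subject"] == hostname and cert["issuer"] in trust_store

# A provisioned work laptop has the corporate root installed...
work_laptop = {"DigiCert Global Root", CORPORATE_ROOT}
# ...a personal, non-provisioned machine does not.
personal = {"DigiCert Global Root"}

cert = proxy_mint_cert("bank.example")
print(client_verifies(cert, "bank.example", work_laptop))  # True: MITM succeeds silently
print(client_verifies(cert, "bank.example", personal))     # False: certificate error
```

This is also why, as noted upthread, a non-provisioned machine behind the same firewall gets a certificate error instead of a silent interception.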


Your understanding is correct, but if your IT department installs its own root cert on your machine, the firewall can MitM all your traffic (e.g. your company's firewall can say "Hi, I am $SITE, here's a certificate to prove it" and your browser will accept it). Traffic between you and the firewall, as well as between the firewall and the actual site, is still encrypted.


In theory, you know whom you are talking to. In practice though, every company and authoritarian government wants to sniff, be it through https proxies or enterprise-wide certificates.


User-friendly recognition means some way to say this is a certificate from John Doe, France. Probably a green box in the URL bar like EV certificates used to be, but maybe blue instead.

Someone has to explain the threat model to me better. If I go to google.com and the certificate says the site is owned by John Doe (France) I should be suspicious. Is the problem that the certificate validates something other than the domain name? Any government can already easily get a domain certificate for google.com.


TL;DR:

The certificate is associated with a private key, and whoever holds that key can terminate TLS for the names in the cert. The issue is that, given a bogus certificate from a trusted authority, someone can impersonate any site they want, then proxy the traffic to the real site and spy on your session.

Longer explanation:

If an organization is able to obtain a certificate that the browser trusts, which matches the domain name you are visiting, then they can easily terminate the TLS at the organization's server, then make a new request to the site, where the encrypted session is between the site and the organization's proxy server. They get an effective man-in-the-middle attack and your browser pretends like everything is secure because there is a trusted certificate matching the domain name you visited. The browser doesn't throw up any red flags because it doesn't differentiate between certificates, only certificate authorities.

So if a 3rd party can acquire an arbitrary certificate for some domain, and the 3rd party has the private key associated with that cert, and the same 3rd party can intercept the traffic between you and the domain in question, then they can get your traffic in the clear and there are no obvious signs that they did it.

Given the current state of TLS and the way browsers deal with certs, the only way for a user to discover that something is amiss is to investigate the details of the cert. Then they might notice that it was issued by a suspicious certificate authority. Unfortunately, the proposed EU law is attempting to prohibit browser vendors from withholding trust from (or otherwise scrutinizing) the state-controlled (or state-influenced) certificate authorities, with the intention of preventing browsers from interfering with governments' ability to issue illegitimate/insecure (but you must trust them, or else!) certificates.
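One way a cautious user or tool can detect this today is to pin the certificate itself rather than trust the CA. A minimal stdlib-only sketch of fingerprint pinning (the cert bytes below are placeholders; a real check would use the DER-encoded cert from the live connection):

```python
import hashlib

def sha256_fingerprint(der_cert: bytes) -> str:
    """Hex SHA-256 fingerprint of a DER-encoded certificate,
    the same value shown in browser cert-details dialogs."""
    return hashlib.sha256(der_cert).hexdigest()

def pin_matches(der_cert: bytes, pinned: str) -> bool:
    # Even a CA-trusted cert fails this check if it is not the exact
    # cert the user pinned earlier, which is how pinning catches a
    # bogus-but-validly-signed MITM certificate.
    return sha256_fingerprint(der_cert) == pinned
```

In practice `der_cert` would come from `ssl.SSLSocket.getpeercert(binary_form=True)`; the usual caveat is that pins break on legitimate certificate rotation, which is why browsers moved away from HPKP.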


Wait, your government issued certificate? That makes no sense. You think the hypothetical government MITM device will identify who's connecting to it, then sign google.com with their personal identity certificate? Instead of just signing all of them with a specific real or fake person's?


Stuff like this comes up regularly; more folks should pay attention.

https://www.securityweek.com/turkish-ca-issues-fraudulent-ce...


I think something more like ssh, which exchanges keys on first connection, would be better, maybe with expiry, pinning, and sending new certs before an old one expires. On top of that, some way of sending certificates, maybe offline, for really critical things like banking.
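The ssh model described here is trust-on-first-use (TOFU). A rough sketch of the known-hosts logic, with a dict standing in for ssh's known_hosts file (expiry and pre-rotation, as suggested above, are left out for brevity):

```python
# Trust-on-first-use (TOFU), sketched: remember the first fingerprint seen
# per host, accept it thereafter, and warn loudly if it ever changes.

known_hosts: dict[str, str] = {}  # host -> pinned fingerprint

def check_tofu(host: str, fingerprint: str) -> str:
    pinned = known_hosts.get(host)
    if pinned is None:
        known_hosts[host] = fingerprint  # first contact: trust and pin
        return "first-use: pinned"
    if pinned == fingerprint:
        return "ok: matches pin"
    return "WARNING: fingerprint changed, possible MITM"
```

The known weakness, as with ssh, is the first connection itself: if the MITM is already in place on first use, you pin the attacker's key, which is why the comment suggests an out-of-band (offline) channel for critical sites like banks.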



