In this email, Comodo discloses the security issue to Mozilla. The email was sent 26 days after researchers Florian Heinz and Martin Kluge of Vautron Rechenzentrum AG informed them of the bug.
Comodo clearly states that they used OCR for .eu and .be domains because the TLD registrars redacted their port 43 WHOIS data, and only provided an image of an email address on their web WHOIS pages. There was apparently no other way to obtain the email address.
Rather than flagging ambiguous OCR results for human review, they used automated heuristics to correct the OCR output, as the security researchers determined. However, the heuristics chose the wrong output for the domain @a1telekom.at, producing @altelekom.at (a lowercase L in place of the digit one). The researchers registered altelekom.at and obtained a cert for a domain owned by A1 Telekom, a major ISP.
There is an accusation in the comment thread of the article that Comodo only disclosed this issue to Mozilla after it was reported publicly by the news media.
> steffen 2016-10-20 08:35:58 PDT
> In fact, the linked incident report refers to the heise article I also linked. So Comodo chose to "publish" this immediately after it was made public by others. That would be quite a coincidence. This raises the question of whether Comodo would've informed Mozilla at all if the media hadn't picked up on it.
A lone security researcher can find a bug, write it up, and share it a lot more quickly than a corporation can.
A corporation has to write, test, verify, share internally, review, and approve before anything can be released. Bureaucracy. They also needed to patch their systems.
Ryan Sleevi (Google) asked the question, and Robin Alden (Comodo) stated a reasonable timeline. There is no conspiracy here.
>The OCR has a reproducible bug and has trouble differentiating small l and the number 1. It also has trouble differentiating the number 0 and the small o. Instead of fixing the bug or not using such obviously unsuitable software the software apparently evaluates the following characters - if there is a number after the small l it reads the l as the number 1. Similar issues with o/0.
So what they're saying is y0u can fo0l their servers with 1eetspeak?
I mean - these 'fixes' are common as far as I can tell (working in/around OCR). But then again - I'm not issuing certificates.
Quite often you try to eliminate uncertainty by being clever: Sure, OCR engine: Go ahead and recognize O and l and B if you want. If I know that the context of this text is an amount, I'll still replace those chars with 0 and 1 and 8 afterwards.
(Engines usually allow you to configure the allowed character set, but in practice it seems to be easier/more reliable to work like a parser: Lenient in what you accept and strict in what you pass on)
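A minimal sketch of the kind of context-driven cleanup described above, assuming a field the surrounding parser already knows is numeric (the substitution table is illustrative, not Comodo's actual heuristic):

    # Sketch of context-based OCR post-processing: if a field is known to be
    # numeric, coerce commonly-confused glyphs to digits. Illustrative only.
    CONFUSABLE_TO_DIGIT = {"l": "1", "I": "1", "O": "0", "o": "0", "B": "8", "S": "5"}

    def normalize_numeric_field(ocr_text: str) -> str:
        """Replace letter glyphs with digits when the context says 'this is a number'."""
        return "".join(CONFUSABLE_TO_DIGIT.get(ch, ch) for ch in ocr_text)

    print(normalize_numeric_field("1O0.B5"))  # -> "100.85"

The danger, as the thread above shows, is applying that kind of coercion to free-form identifiers like email local parts, where "a1telekom" and "altelekom" are both plausible strings.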
I remember reading through the forum post for that Let's Encrypt brouhaha when Comodo filed for trademark infringement against Letsencrypt.
The CEO seemed really self-absorbed and saw himself as an innovator, with letsencrypt stealing business processes from him (which makes filing for trademark infringement make even less sense). He took it as a personal insult, and did not even appear to understand that letsencrypt was a public service rather than a for-profit product.
There was a fan who claimed to be a paralegal cheering on the CEO's rant yet appeared to confuse trademarks with patents with copyrights.
Going by the forums alone, that thread read like a teenage high-school soap opera, not a business. It was like an echo chamber for the CEO's fixation.
And now this. It sounds like there was a deliberate business decision to use faulty tech, one that forgot the wider social impact of operating a CA.
I recently replaced an expired cert with one from LE and the entire process was super simple. Their CLI automatically handles all the tedious steps like creating the signing request, proving server ownership, and creating the certificate bundle. What's more, it also provides tools for renewing certs in an automated way. I really can't see myself using anything else from here on out.
How does this meme keep getting repeated over and over and over when it was wrong the day LE launched? It doesn't even make sense if you just think about it: why would you need root access to make some HTTP calls? Most ACME clients don't require root. The one-click-and-reconfigure-my-webserver mode obviously requires root.
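For the common webroot-style setup, the reason root isn't needed is that the ACME client only has to drop a token file where the already-running webserver will serve it. A minimal sketch (the paths and token values are made up for illustration):

    import os

    # Hypothetical values: a real ACME client receives the token from the CA and
    # derives the key authorization from the token plus its account-key thumbprint.
    webroot = "/var/www/example"          # writable by an unprivileged deploy user
    token = "example-challenge-token"
    key_authorization = token + "." + "account-key-thumbprint"

    challenge_dir = os.path.join(webroot, ".well-known", "acme-challenge")
    os.makedirs(challenge_dir, exist_ok=True)
    with open(os.path.join(challenge_dir, token), "w") as f:
        f.write(key_authorization)
    # The webserver, already running and bound to port 80, serves this file;
    # the ACME client itself never needs elevated privileges.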
This really comes down to the weakest link though. You can use letsencrypt if you want, but someone else can still spoof your domain with a fake Comodo cert and any of your users' browsers will happily trust it.
Public key pinning seems like it would prevent this attack though.
Yes, this is called domain validation and most CAs operate like this for regular non-EV certificates. Sometimes it's an email, sometimes it's a file with certain content at a certain URL, sometimes it's a DNS record. But beyond that, no further validation is required for regular certificates.
I don't know why you were down voted. Yes, Let's Encrypt does verification by requiring a site to host a string on port 80. They discover the site via DNS, and they do NOT require DNSSEC. Thus you can absolutely trick Let's Encrypt into issuing a bad cert if you can serve them bad DNS responses.
This OCR issue with Comodo in TFA concerns WHOIS data, which may or may not be more reliable than unsigned DNS data. Regardless, your point remains valid.
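To see how thin this kind of validation is, a CA-side check essentially boils down to resolving the name and fetching a URL; a rough sketch (not any CA's actual code, and the .well-known path is just one of the permitted variants):

    import urllib.request

    def domain_control_check(domain: str, token: str, expected: str) -> bool:
        """Sketch of an HTTP-based domain-control check: whoever answers for this
        hostname on port 80 'proves' control. If an attacker can control the DNS
        answers the CA sees, they pass this check for a domain they don't own."""
        url = "http://%s/.well-known/pki-validation/%s" % (domain, token)
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read().decode().strip() == expected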
I am usually not good with donations but one company that I gladly donated to has been letsencrypt.
They have made life so simple. Please donate[0] or become a sponsor[1] if you can.
For the love of God, why has Mozilla not suspended Comodo yet? Too big to fail, my ass -- give a few months of warning before the notBefore cutoff date, and everyone will have plenty of time to switch over to a competent CA.
Comodo has had many serious security problems, including issuing fraudulent SSL certificates and releasing browser extensions that break security. Details: https://news.ycombinator.com/item?id=11962371
Comodo should be put out of business. They stole $100 from me for a certificate then gave me the run around for months while I tried to get a refund for a certificate I never received. Still haven't gotten my money back.
Yep, sounds like it would be useful for this case.
Not sure why people are down voting my answer. The other person posted "Chargeback?" - note the question mark. So I took that as if they were asking what a chargeback is, and was trying to explain it as simply as possible.
You shouldn't be downvoted, but I do believe you misread the intent. Avenger42 was responding to a post which did not mention "chargeback", so it wouldn't make much sense to interpret it as, "What is a chargeback?". Rather, "Why don't you use a chargeback?" would seem to be a better interpretation.
Well, their backdating SHA-1 certs and issuing certs without proper proof of ownership and refusal to properly audit or disclose had me on the fence, but this, THIS is what finally convinced me they need to be put out of business.
The underlying issue here is that WHOIS is still not standardised despite being around for over 30 years, and the registrars do not have any other common interface that can be used to discover domain owners and other metadata. Is there no workable solution to this problem?
I work for CertSimple and we automatically use `(site)/.well-known/pki-validation` if we see whois privacy, because in practice customers using whois privacy are inevitably uncontactable. We may use `.well-known` by default in future. Our friends at LE/Certbot (ie, ACME) use `(site)/.well-known/acme-challenge`.
Also, there are various efforts to replace/standardize whois, e.g. WEIRDS and RDAP (https://tools.ietf.org/html/rfc7483). However, .well-known resources work well enough for this purpose and in practice are more widely used.
It is possible that the people drafting that ballot didn't feel they ought to approach IANA until after it was voted through, and then they simply forgot.
It is also possible that the subsequent IP fuss (a bunch of CA/B members turned out to have patented some methods listed in Ballot 169) distracted everybody.
Finally it's possible IANA just didn't get around to updating the list yet.
Sorry what I meant is proving control of the domain by the specified legal entity.
If I apply for an ev cert for ACME Inc to use on acme.com and have "ACME Inc" as the registered owner in Whois, does that satisfy the domain control check?
EV doesn't necessarily promise "this company owns this domain", does it? I thought it says "the certificate holder is this company" + "the certificate holder controls this domain" (which is the standard DV check).
Which is all I want, anyway. If I'm typing my Bank of America login info, I care that it's being encrypted to a private key in Bank of America's control, and it doesn't really matter if that happens through a DNS name registered to Bank of America or not. It would be weird if it didn't, but it wouldn't inherently impact my security if the key was still BoA's.
Yes, your understanding is correct, though a CA can use domain ownership to skip DV checks (if you validate requester is X Inc and X Inc is listed in WHOIS, no DCV needed).
Understood, but that's still not correct. WHOIS is only used for proving control by whatever contact is mentioned in WHOIS. The company name in WHOIS isn't used for EV verification of legal entities at all.
If you're willing to go for "current control of the domain" instead of "owner" (and in practice we are, because a ton of CAs including Let's Encrypt allow that), there's hitting /.well-known/ via HTTP, as 'nailer points out, and there's also emailing admin@, administrator@, webmaster@, hostmaster@, or postmaster@ the domain, as explicitly permitted by the CA/Browser Forum baseline requirements (3.2.2.4 #4).
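For reference, the "constructed email" part of that rule is small enough to sketch; this is illustrative, not any CA's code:

    # The five local parts explicitly permitted by BR 3.2.2.4 #4 ("constructed email").
    CONSTRUCTED_LOCAL_PARTS = ["admin", "administrator", "webmaster", "hostmaster", "postmaster"]

    def constructed_validation_addresses(domain: str) -> list[str]:
        """Addresses a CA may email to confirm control of `domain`."""
        return [f"{local}@{domain}" for local in CONSTRUCTED_LOCAL_PARTS]

    print(constructed_validation_addresses("example.com"))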
No, that's not the issue at all. CAs like Comodo are paid very handsomely for what is essentially a zero-marginal-cost item. It is not WHOIS's problem that they want to skirt their responsibilities and think some rent-a-coder OCR bullshit is an adequate solution.
There is. There is a relatively new IETF standard called RDAP that provides JSON-structured WHOIS data responses (i.e. https://tools.ietf.org/html/rfc7483) along with automatic discovery. ICANN is working on making it a mandate for gTLDs to support it.
It is already supported for IP address lookups by all 5 regional Internet registries.
Actually, as of 2013 ICANN required registrars on the latest contract to standardize WHOIS output; furthermore, next year a new protocol, RDAP, will be available that adds to this. So I'd say there are much more workable solutions here, with more coming soon.
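RDAP returns structured JSON over HTTPS, so registration data stops being a scraping/OCR problem altogether. A minimal sketch, assuming the rdap.org redirector as a convenient entry point (registries publish their own base URLs via IANA's bootstrap registry):

    import json
    import urllib.request

    def rdap_domain(domain: str) -> dict:
        """Fetch RFC 7483-style JSON registration data for a domain.
        The rdap.org bootstrap URL is an assumption; a registry's own
        RDAP base URL can be used instead."""
        url = f"https://rdap.org/domain/{domain}"
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)

    data = rdap_domain("example.com")
    # Contact details live in structured 'entities' with vCard arrays --
    # no free-text parsing, no images, no OCR.
    print([e.get("roles") for e in data.get("entities", [])])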
Universities that are part of InCommon paid to get unlimited Comodo SSL certs. Their API was pretty terrible and we ended up finding quite a few issues.
Every time I hear about these Comodo breaches, I'm not surprised. Supposedly, Iran was able to get them to issue fake certs for some major sites:
It's a plausible suggestion that all services that provide basic critical infrastructure for the Internet (e.g. DNS, Certificate Authorities, ISPs and network providers) should have a very high level of security testing given their attractiveness to attackers.
The only problem is... who's going to pay for it? The CA model and the DNS model tend to be very low-cost ones, which means there's unlikely to be the kind of money available to pay for expensive red-teaming on a regular basis, let alone for running infrastructure that would resist their attacks.
In my limited experience (infosec for a big 4 firm), the answer is no. The audits are done as cheaply and as quickly as possible. I worked alone, in fact, and essentially did process testing (read: document review).
I work for one of the big CAs out there. For those TLDs, a human actually performs a manual WHOIS query and then manually enters the email address associated with the domain contact, and they are required to include a screenshot of the WHOIS details for verification of the information. A second individual is then required to verify that the email address entered is correct per the attached screenshot.
All of this, plus the querying of the organization's legal registration, business address and contact, is done by trained people, and thanks to internal efficiencies and workflows we can complete it in a matter of minutes from the time a customer places an order.
In the end, even an organizational vetted certificate is still completed just as fast as it takes customers usually to click on the approval email to authorize issuance and submit the CSR for the certificate creation.
I'd expect HTTP, DNS-based, or email-to-{hostmaster,postmaster,etc.}@domain validation to be more common than validation based on WHOIS data. There are probably very few domains for which only the WHOIS-based validation is doable.
I think that is a great anti-spam measure. Most web scrapers are not gonna run OCR on the images of your contact page, and you save yourself huge pains and loads of spam. Tell me, what is a better way? Using obscured JavaScript code to inject the address into the page? CSS hacks? HTML comments in between parts of the address?
I have had my email address open on most websites I've designed and programmed for decades now, and I don't receive more spam at these addresses than at those which aren't public. Spammers have been harvesting for years using many channels: buying addresses, hacking databases, using viruses that steal complete address books from people.
Using images as an anti-spam measure really gives you a lot more problems than benefits (unless you use random single-use addresses).
Don't obfuscate, but fix your spam filters or switch your email provider. What helps a lot is server-side moving of detected spam emails to your junk folder and looking through this folder from time to time.
A better way is to just list your email address and accept that you will get some spam. My email address is already in plaintext on many webpages and WHOIS entries. It's not that big a deal.
I have a few domains with public whois, the spam from that today is significantly less than it was when I first registered around 2000. The paper spam is almost zero, just a scammy registrar every once in a while. Email spam is probably less than one a week.
Web security based on a PKI model with hundreds of "trusted" authorities is just broken. And yet, the "security industry" continues doubling down on the "moar TLS", "moar green locks" model instead of coming up with a better one.
The tragedy is that most internet access is now happening from mobile devices, and the majority of that is coming from native apps. The apps neither need the same trust model nor have any "green locks". But the PKI/TLS-based orthodoxy has such a death grip on the industry that people continue to use this broken model for native apps, where it makes even less sense than it does for browsers.
I suspect that this would be a "perfect is the enemy of good" type of situation w.r.t. native apps. Key pinning is certainly the preferable solution for native apps, but defaulting to at least the security properties provided by the Web PKI (which, obviously, isn't perfect, but is a significant hurdle for an attacker) is still worthwhile. I mean, we've had plenty of reports of various mobile apps not getting certificate validation right in the last couple of years; imagine what the situation would be like if TLS backed by the Web PKI weren't the default. How many apps would even bother to implement pinning, and how many would get it right?
I think defaulting to the Web PKI and strongly encouraging use of key pinning (with good tooling, etc.) is the right approach here.
You can still promote using TLS while recognizing the flaws in the current PKI-based authentication system. From a purely technical perspective, a TLS connection without authentication is still better than no encryption at all.
And yet there are millions of people using the Internet and we have to protect them with the army we have. I hope we'll see something like Marlinspike's Notaries becoming widespread in our lifetime, though.
Well, unless app authors are writing their HTTPS clients from scratch, I suppose the HTTP client APIs provided by the major mobile OSes do actually check certificates?
I think chetanahuja is saying apps don't have to rely on CAs. They can distribute a single trusted certificate, only trust a single CA or use key pinning.
Did Comodo admit to using OCR for this, and that it wasn't a human transcription mistake (humans mistake 1's and l's too)?
It just seems odd for them to use an image of a web page to transcribe information from a web lookup when they could just scrape the text off the web page directly, without using the intermediate image and OCR.
However, I could see them using a human in the chain to look up the whois information; it just seems strange to come up with a complicated OCR solution (and if they did, that they couldn't find a font that makes 1's and l's look more distinct, like http://forum.high-logic.com/viewtopic.php?t=4004).
>when they could just scrape the text off the web page directly without using the intermediate image and OCR
Try looking up whois info on google.eu. Most tools will simply output `NOT DISCLOSED! Visit www.eurid.eu for webbased whois.`. Now you can search https://whois.eurid.eu for the whois information of google.eu. You will find that the email address is only available as an image.
That's exactly the situation Comodo tried to solve.
Given that it's a simple font with no obfuscation, a small pattern-matching Python program should give you near 100% accuracy. Apparently Comodo used some off-the-shelf software instead, and that software seems to make assumptions it shouldn't make.
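A sketch of what such a small pattern-matching program could look like: with a fixed font and no obfuscation, every glyph renders to the same pixels each time, so exact bitmap lookup beats general-purpose OCR. Pillow and the pre-built glyph table are assumptions here:

    from PIL import Image  # assumes Pillow is installed

    # Pre-built table mapping exact glyph bitmaps (as bytes) to characters,
    # harvested once from sample images of the registrar's fixed font.
    GLYPHS: dict[bytes, str] = {}  # placeholder; fill from known samples

    def read_address(image_path: str, glyph_width: int, glyph_height: int) -> str:
        """Slice the image into fixed-width cells and look each one up exactly.
        Any cell not in the table is flagged instead of guessed."""
        img = Image.open(image_path).convert("1")
        chars = []
        for x in range(0, img.width, glyph_width):
            cell = img.crop((x, 0, x + glyph_width, glyph_height)).tobytes()
            chars.append(GLYPHS.get(cell, "?"))  # '?' -> send to a human, don't guess
        return "".join(chars)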
Ahh interesting, so it's a case of the cure being worse than the disease -- spammers know how to use OCR too and they don't care about transcription errors.
So it seems like there's little point in deliberately obscuring the email address in an image, and certainly no reason to do it with a font that doesn't more clearly distinguish between letters and numbers.
Is this sarcastic? I can't tell. How is it unrivalled, and why would you trust it coming from a company with an abysmal track record of security? Just Google 'Comodo Project Zero' for a taste. Or read this[1], one of the bad ones.
Also, what's wrong with ufw? GitHub is down so I can't view that link.
> The quality of the software both good and bad doesnt' apply here.
Well, it clearly does, because this post is about how their software did some crazy roundabout stuff to validate domains that didn't work.
ufw, and by extension iptables, lacks features such as per process rules. You have to do hacks like assign rules to users, and run the processes under different users. Tails does this to isolate the Tor Browser process.
If you check out the link, you'll see that nothing like this exists on Linux. The closest thing on OSX would be Little Snitch.
GitHub isn't down; flush your DNS. It's left-over cached NXDOMAINs from this morning's outage.
you can also clear your browser dns cache with chrome://net-internals/#dns
I think my original meaning was that there are lots of teams and not all of them are bad :-)
> ufw, and by extension iptables, lacks features such as per process rules.
iptables --pid-owner[1]
> I think my original meaning was that there are lots of teams and not all of them are bad :-)
Sure, you find diamonds in the rough, but when their AV and their certificate teams are trash, does that inspire confidence? When their antivirus software bundles a 'secure browser' that disables CORS (!!!) and can compromise you merely by scanning a file, why is it safe to assume that they know what the fuck they are doing as a company?
That's like assuming it's safe for a doctor to operate on your kidney, despite killing all their patients when operating on other organs, because you know, it's different.
The firewall might work and have the slickest interface, but if it's full of buffer overflows and written by an idiot then it's not the best, is it?
That's broken (it only matches the exact PID, not child processes, and according to the documentation it does not work on SMP systems). It also got removed at some point. There is a cgroup match which may be usable instead; network namespaces may also be a good solution in some cases.
> lacks features such as per process rules. You have to do hacks like assign rules to users
From a practical standpoint, I find it hard to imagine that the cost of added complexity for configuring application rules per user would outweigh the benefits of simply configuring them system wide. I remember the days of terminal clients logging into mainframes but all I see are single user desktops. Things like location on the network matter more in an application firewall than which user is accessing the desktop.
I didn't mean per-user vs. per-system so much as per process image, i.e. firefox can get out to *, but the sandbox cannot.
If you have never used something like Little Snitch on macOS, it is very surprising to see all the outgoing connections from processes. It returns some control to the user to block cloud services, applications dialing home, etc.
Being able to interactively allow/deny access to resources (say via hostname or via IP) per connection per process (image?) is very valuable.
This is hard to do in Linux. There are several good solutions for this on macOS and Windows.
The fact that it's "purchased" has nothing to do with it. The important part is a "trusted" third party (note my use of quotes). Without authentication, the guarantees of confidentiality and integrity are pretty much empty.
(In fact, I would argue that free certificates from Let's Encrypt are more trustworthy since the ACME protocol drastically reduces the surface area for attacks on the verification process. See slides[1].)
This comes up so often, and I really wonder what the logic behind this should be.
"We have identified a security problem. Therefore instead of fixing it we should abandon all security." Doesn't seem to make a lot of sense to me.
No the problem is that we centralized trust instead of designing a distributed trust model. I suspect we centralized trust at the encouragement of folks like the NSA and similar ilk.
Regardless, look at the vast majority of technical users relying on SSH without using a CA for secure communications; had browsers done a better job with self-signed certs, we could be doing the same on the web.
> Regardless, look at the vast majority of technical users relying on SSH without using a CA for secure communications; had browsers done a better job with self-signed certs, we could be doing the same on the web.
How many technical users actually check the fingerprint matches the expected one of the server, out-of-band? Almost everyone I know just accepts the unknown fingerprint, so almost nobody knows who the endpoint they're actually connected to is.
> I suspect we centralized trust at the encouragement of folks like the NSA and similar ilk.
In the mid-90's, CA certs were put into Netscape Navigator (IE joined later) in order to facilitate the new wild wacky concept that someone might buy something online. They called it "e-commerce".
Trust was centralized because it was far easier to add the then ~half-dozen CAs rather than somehow vetting every joe that wanted to self-sign their certs. PGP's web-of-trust existed but it was deemed less viable.
Besides, the whole SSL certs thing was a major business premise behind creating Netscape - profits. Without that there might not have been a dot-com and the huge amount of money that followed since then.
If the browser adopts a trust-on-first-use policy with self signed certificates and the certificate is replaced (possibly because it has expired), how do you know whether it's MiTM or benign?
I recall a website with a self-signed cert, but they had a non-HTTPS page with their TLS cert fingerprint signed with their GPG key, which effectively moved the trust from the centralized CA system to the PGP web of trust.
I think it would be cool to have a standard URL (/.well-known/certificate or something) that explains why you should trust their self-signed certificate, and have the browser show that as part of the view when you encounter a self-signed cert.
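A rough sketch of what per-site TOFU pinning could look like on the client side: remember the leaf certificate's fingerprint on first contact and flag any change (the pin-store location is illustrative):

    import hashlib, json, os, socket, ssl

    PIN_STORE = os.path.expanduser("~/.tofu_pins.json")  # illustrative location

    def leaf_fingerprint(host: str, port: int = 443) -> str:
        """Fetch the server's leaf certificate (without chain validation) and hash it."""
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        return hashlib.sha256(der).hexdigest()

    def check_tofu(host: str) -> bool:
        pins = json.load(open(PIN_STORE)) if os.path.exists(PIN_STORE) else {}
        fp = leaf_fingerprint(host)
        if host not in pins:
            pins[host] = fp                      # trust on first use
            with open(PIN_STORE, "w") as f:
                json.dump(pins, f)
            return True
        return pins[host] == fp                  # changed pin: rotation or MITM?

The unsolved part, as the replies below note, is distinguishing a legitimate key rotation from a MITM once the pin stops matching.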
Then you have leaked material over http or some other cleartext protocol. If you try to add encryption to that you just have turtles or move the key exchange somewhere else.
The current PKI system allows you to make an unsolicited encrypted connection to an internet origin over an untrusted connection with strong server authentication.
Self signed can not provide this. We do not want the web to be like SSH.
Also, PGP is a very different security model - in order to effectively use PGP, you need to go disclose your government identity to a handful of other people who are already in the strong set. That's not a great fit for the web.
Just like you do with ssh: you ask the site administrator.
Now, if you have a CA who gives out certificates to parties who should not have them, and that CA can sign certificates for any domain, how do you know if any connection is a MiTM or benign? Do you actually check the issuer every time you connect to a secure site?
CAs give a false sense of security. We need to move off this model.
> Just like you do with ssh: you ask the site administrator.
How?
> CAs give a false sense of security. We need to move off this model.
Maybe, but not by adopting the politician's syllogism. We should only move to a better model. As easy as it may be, it's still slightly harder to get a CA to sign a cert for a domain you don't control than it is to sign them yourself, and that shouldn't be dismissed.
TOFU + a standard or convention for validating keys with face-to-face verification?
If I go to a bank or business, and as part of setting up a relationship they hand me a pamphlet printed with their public key fingerprint, AND the browser shows me that fingerprint image at first use, then that's pretty reliable verification, isn't it?
A standard for face-to-face TOFU checking would be in addition to the PKI infrastructure, and to facilitate a secure way to use self-signed certs.
Obviously we've never had face-to-face relationships with the likes of Paypal, Stripe, etc, and would still rely on our browser and operating system vendors to validate the chain of authority.
Have a self-signed CA certificate with a longish expiration and sign the actual keys the webservers use with that. Apply TOFU to that CA certificate (on a per-domain basis).
There's IIRC no mechanism for that, but for the single-domain CA key it'd make sense to sign the new one with the old one.
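A minimal sketch of that arrangement with the `cryptography` package: a long-lived self-signed CA for one domain, and a short-lived server certificate signed by it (the domain, key sizes, and lifetimes are arbitrary for illustration):

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    domain = "example.com"  # illustrative
    now = datetime.datetime.utcnow()

    # Long-lived, self-signed per-domain CA -- this is what clients would TOFU-pin.
    ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, domain + " private CA")])
    ca_cert = (
        x509.CertificateBuilder()
        .subject_name(ca_name).issuer_name(ca_name)
        .public_key(ca_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now).not_valid_after(now + datetime.timedelta(days=3650))
        .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
        .sign(ca_key, hashes.SHA256())
    )

    # Short-lived leaf certificate actually used by the webserver, signed by the CA above.
    leaf_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    leaf_cert = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, domain)]))
        .issuer_name(ca_name)
        .public_key(leaf_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now).not_valid_after(now + datetime.timedelta(days=90))
        .add_extension(x509.SubjectAlternativeName([x509.DNSName(domain)]), critical=False)
        .sign(ca_key, hashes.SHA256())
    )

Rotating the leaf then doesn't disturb the pinned CA; as noted above, cross-signing a replacement CA key with the old one would be the missing piece.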
Self signed certificates are always less valuable than purchased ones. The worst case CA breach is equivalent to the normal case of self-signed certificates: a certificate is misissued and an attacker can carry out an active MITM attack. A CA breach cannot enable passive attacks because they don't know the private keys of certificates used by websites.
I don't think you can say that self-signed is strictly better than plaintext.
Plaintext HTTP gives no illusion that your connection has confidentially, authentication nor integrity.
HTTPS is meant to provide all of these things. With a validated, trusted certificate, you get these things. With a self-signed cert, you have none of these things.
Knowing that I don't have these things is better than thinking I have them when I don't.
So no, untrusted self-signed can't ever be better.
Note: can't reply to the reply, but all I can say is
"With a self-signed cert, I know I am talking to exactly one party and that no third party can surveil or monitor that communication. "
(This is not true, as this could easily be my MiTM proxy substituting your self-signed public key.) Once you introduce 'trust' you are back to CA land.
No. This is absolutely false. HTTPS is a transport protocol (that's the second T) and provides point to point security, only. The fact that we have grafted onto this protocol the notion of authenticity as verified by a warden is an historical accident (and frankly the cause of many problems).
> With a self-signed cert, you have none of these things.
With a self-signed cert, I know I am talking to exactly one party and that no third party can surveil or monitor that communication. This is a channel over which the second party and I can then negotiate authenticity, which is a much better way to do it than the two-headed monster we've built for ourselves.
Furthermore, for the vast majority of websites I visit, an assurance from a third party that they are who they say they are is of absolutely no value to me. Take HN: I do not particularly trust ycombinator.com more than I would trust a phisher trying to convince me he is ycombinator.com, so a third party's assurance that this is in fact ycombinator.com doesn't give me any useful information.
And it shouldn't have been. These are two separate questions.
Think about it. If you connected to my blog, would it actually give you any useful information if Thawte assured you I am who I say I am? You don't have any reason to trust me more than you would trust someone pretending to be me, because I'm some rando with a blog you got linked to.
What use is TLS if I can't know that when my browser says I'm talking to example.com, I'm actually talking to example.com? I've got a secure connection... to literally anyone who can mitm me. PKI isn't 100% secure, but it raises the barrier significantly. I, as someone who owns a coffee shop or runs a corporate network, can't just go out and get a cert for google.com.
So do you trust that Comodo (or any one of the other 180 or so CAs your browser trusts) will not issue a certificate for your bank to the wrong person, after reading this bug report?
I mean, obviously my dream solution where my bank hands me a USB stick with their public key on it when I open an account isn't really feasible, but this bug report makes me wonder what value PKI is really adding.
If that happens (and there are a fair number of pieces that had to fall into place for this attack to work) it puts us back to where it would be if we had no CA. So, do I have absolute confidence? No. But I think the risk is significantly reduced.
Substitute TLS then; this isn't about the semantics of names.
Server Authenticated TLS provides these things.
"With a self-signed cert, I know I am talking to exactly one party and that no third party can surveil or monitor that communication."
You can't state this. The self-signed cert you are seeing could be the one that my MiTM proxy has substituted on your connection! The only thing that saves you is your Trust Store.
I feel like you don't quite understand public key substitution and MiTM attacks.
This isn't a phishing problem, it's a 'breaking the encryption between client and server' problem.
"Furthermore, for the vast majority of websites I visit, an assurance from a third party that they are who they say they are is of absolutely no value to me. Take HN: I do not particularly trust ycombinator.com more than I would trust a phisher trying to convince me he is ycombinator.com, so a third party's assurance that this is in fact ycombinator.com doesn't give me any useful information."
This is HIGHLY ALARMING! If this is the case then please use my proxy server and do your banking. I'm sure it's cool; you don't need to check with a third party that I'm not messing with your encrypted connections.
> The self-signed cert you are seeing could be the one that my MiTM proxy has substituted on your connection!
Which makes that MiTM the one and only one party with whom I am communicating. And I know no other party is surveiling or altering this phishing attempt.
Over this secure channel, my phisher and I can negotiate authenticity, which hopefully he will fail.
> If this is the case then please use my proxy server and do your banking.
Sigh. My bank is not ycombinator.com. I don't think YC is even a bank. My bank is one of the few sites I mentioned where I do care about their authenticity because I do trust them more than I trust a potential phisher.
You are not simply substituting the origin with a phish; you are, as a third party, reading/modifying traffic without any possible detection by the server.
There are three parties: you, me (the MiTM), and your bank. (Assuming your bank uses self-signed, which it doesn't.) I've substituted my self-signed cert for the bank's in your TLS handshake. I then re-encrypt everything you send me to the bank. The bank can't distinguish between you and me.
I think you're still misunderstanding MiTM attacks. There is no "third party". I'm not communicating with my intended recipient at all: I'm communicating with precisely one person, the phisher.
And this can only be "solved" with PKI if we have CA's who don't do the sort of thing outlined in this bug report. But we do.
The MitM is in the middle of something, no? You're "A", the site you intend to connect to is "Z", and the MitM is "M".
In this scenario, you're communicating with Z, while M listens in. Anything you send down the pipe can be transparently proxied by M and relayed to Z.
What mechanism can you use to verify the authenticity of the other end that can't be spoofed by M, now that they're inserted into the communication path?
> Plaintext HTTP gives no illusion that your connection has confidentially, authentication nor integrity.
Without an explicit warning, it's not going to be at all obvious to the average user - or anyone who forgets to check the address bar/isn't enforcing HTTPS - that the connection lacks those properties.
As the system grows I think there are more and more players, competing for market share, trying to get low prices and high profits, and more shit happens as a result. On top of that, since it's getting more and more important, we also hear about it more.
More worrying than some OCR silliness is that Comodo is issuing certificates based solely on WHOIS data. I don't think it is intended for such security critical use.
If you read the report linked from the bugzilla, you will read:
One of the methods that Comodo uses to validate that a certificate applicant
owns or controls a domain to be included in the subjectAlternativeName of a
server authentication certificate is set out in the CA/B Forum's Baseline
Requirements document [2] at section 3.2.2.4.2.
That method may be summarized as the sending of an email to an email address
(and obtaining a confirming response) where the email is identified as
belonging to the Domain Name Registrant, technical contact, or
administrative contact as listed in the WHOIS record of the domain.
So the browser (Google, Mozilla, Microsoft) and CA (Comodo, Symantec, LE) industry working group agreed that was acceptable.
Maybe it is time to review that policy. As far as I can tell, WHOIS protocol and data are both completely unauthenticated, and as such relatively easily manipulated by mitm.
Baseline Requirements also allow domain validation via DNS or an HTTP request. Essentially all domain validation methods break down once you're in a position to MitM the CA.
And if RDAP or whatever had been used here, then the whole problem would never have occurred. But it wasn't, because it wasn't required. Ergo, the requirements should be reviewed to avoid repeating this sort of thing.
How people still think the PKI system is actually delivering security is beyond me.
We have zero idea how many bad certs like this may be out there (the nefarious people won't publish their results, after all), and yet a browser will still treat a Comodo cert as better than a self-signed one (it's identical to a self-signed cert, since Comodo is a known bad actor now). It's better than plaintext, of course, but that's not saying much.
"Dont throw the baby out with the bathwater."
We have certificate transparency, the browsers are more responsive and reporting, we have HPKP and other browser countermeasures.
It's not perfect, but with enough eyeballs it can get close.
It provides meaningful security. CA-signed is clearly not the same as self-signed; maybe you misunderstand what this means?
How is a certificate from Comodo any different from a self-signed certificate? There is actual evidence that they gave a certificate to a third party. That means you should treat any certificate from them as self-signed, because you cannot trust Comodo to do their jobs.
Assuming you understand the structural difference between a self signed and a CA signed certificate (ie: subject pubkey sig vs issuer pubkey sig respectively) the difference is clear.
You cannot determine provenance of a self-signed certificate. The sig matches the subject.
With a CA-signed cert, the holder of the CA private key is the only source (with high probability), so it is attributable.
Whether you trust the company or not -- play with your trust store. Otherwise this is apples and oranges.
The only time when this comparison would be apt would be the compromise of the Comodo Private Key. This would allow anyone to issue Comodo certificates, thus removing their provenance.
Of course then their cert would be revoked and we wouldn't have this conversation.
I think I understand why you are not following what I am saying.
You prove that the certificate came from comodo, and only comodo. It can't have come from anyone else.
This isn't trust -- it's public key crypto. Only the issuer could sign it. Whether you trust it or not is irrelevant; only that it could only have originated from there.
If you believe that the public key is truly owned by the subject because the issuer said so -- this is trust.
OK, but that's only helpful here in the sense that I could remove Comodo from my trust store, but nobody's going to do that. Not even me, and I'm the one complaining about this. What I can't do is have any confidence in the provenance of a CSR they signed: did it actually come from the organization that controls that domain? (That was what I meant by "provenance")
I still prefer the present situation (States or other resourceful actors might be able to MITM me on some sites) to the scenario without https (any random person whose Wi-Fi you connect to can trivially MITM everything)
I mean, sure, from a practical standpoint I agree we're stuck with PKI, but I think the browsers need to cull, and cull hard, the number of CAs that they trust (and, yes, that would absolutely suck for both CAs and their customers). They also need to be a lot more open to per-user, per-site cert pinning so that we can build trust networks on our own.
Most of the sites I connect to I don't actually "trust" in any real sense, so a third party assuring me they are who they say they are isn't useful information for me. (Am I being phished right now? It doesn't matter, because I don't trust news.ycombinator.com with any information I wouldn't also give to a phisher.)
Not really; self-signed certs prevent non-resourceful actors (to use your phrase) pretty well too, well enough that I don't care about the difference for the vast majority of my traffic (how many sites do I actually trust more than I trust some rando pretending to be them? Not many, and those few are the only ones where 3rd-party verification gives me any useful information). If we decoupled the need to just encrypt transport (which is easy) from the need to verify authenticity (which is hard), we wouldn't need so many CAs to begin with (because far fewer sites would need them) and stuff falling through the cracks like this would be both less likely and easier to spot.
> How people still think the PKI system is actually delivering security is beyond me.
It has its problems, but it's what we've got to work with right now, and it's not as bad as you make it seem.
> it's identical to a self-signed cert, since Comodo is a known bad actor now
That's not technically true. I dislike Comodo as much as the next person, but this statement is a bit disingenuous, since you're implying that you can trust anyone who generates their own certificate for google.com or yourbank.com more than a CA-signed certificate. There's a reason PKI exists; it's because self-signed certificates by themselves aren't trustworthy in the wild.
But Comodo can issue a certificate for google.com or yourbank.com
They may have, for that matter. Do you check the issuer every time you go to a secure site? Do you trust them to do a better job securing either of those domains than they did with a large telecom in Austria?
And they have to do that precisely because there are 170 or so entities of various levels of dubiousness that are by default authorized to sign any certificate for any domain, and everybody knows that this is fundamentally a horrible idea.
For that matter, since HPKP requires sideband key management, it's
A) an admission that PKI doesn't actually work, and
B) a pretty clear example of why the CAs are unnecessary
If you have pinned certificates you are essentially operating exactly the way 99% of us do with SSH (only rather better because there are several million eyes on high-value certificates), which means you don't need CAs anymore.
Pins are either built into the browser by the vendor, or assigned on first access AFTER a successful server authentication (via the cert).
Pins are trusted/verified through another channel.
SSH TOFU trust occurs pre server authentication. It has no defense against an ALWAYS-MITM. Of course there are far fewer ALWAYS-MITM situations for SSH than for TLS.
You have no way of validating the pubkey you have written to known_hosts against anything else.
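For comparison, verifying an SSH host key out of band means computing the fingerprint of the entry in known_hosts and checking it against one obtained through another channel; a small sketch of the OpenSSH-style SHA256 fingerprint:

    import base64, hashlib

    def known_hosts_fingerprint(line: str) -> str:
        """Compute the OpenSSH-style 'SHA256:...' fingerprint for one
        known_hosts entry (hostname keytype base64-blob [comment])."""
        _host, _keytype, blob = line.split()[:3]
        digest = hashlib.sha256(base64.b64decode(blob)).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # Compare the result with a fingerprint the server operator gave you
    # out of band (console access, printed pamphlet, signed web page, ...).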