Obtaining Wildcard SSL Certificates from Comodo via Dangling Markup Injection (thehackerblog.com)
258 points by pfg on July 29, 2016 | 58 comments



The timeline looks awesome:

  June 4th, 2016 – Emailed security@comodo.com and reached out on Twitter to @Comodo_SSL.
  ...
  July 25th, 2016 – Robin from Comodo confirms a fix has been put in place.
50+ days! Funny that the entity in question is the biggest CA.

That said, one should never click on links and buttons in emails. It's more comforting to be able to copy a link instead.


The part that makes this a real exploit is that the administrator of the target domain would only have to open the e-mail with a client that displays images in order to be exploited.


And that's true even if the client, like gmail, proxies the images: https://gmail.googleblog.com/2013/12/images-now-showing.html


I wouldn't even copy links. Too many unicode shenanigans and other URL display problems out there.

I made an exception for sites I was going to immediately use 1Password's autofill with, trusting its URL validation to save me from fakes. I hadn't considered that merely loading a malicious URL could cause trouble. I think I'll have to reconsider my policy on that.


I don't even open links from emails anymore; I just log in to the company's dashboard and find it instead. :/


That's how I handle phone calls, too. For example, if I get a call from a bank then as soon as they begin asking for personal information I inform them that I'll have to call them back. I do this even if I was _expecting_ a call from them.

I've never had a representative balk at this, but many times they'll give me a number and extension to call. But that would defeat the whole purpose. I tell them that I need a published number that I can independently confirm. Usually they just say it's fine to go ahead and use the number on their website.

None of this is foolproof. My confirmation of a phone number is usually rather cursory--a quick Google search comparing results with what I find on the company's website. But infosec crimes are becoming increasingly common and I'd rather not be the low-hanging fruit.


I heard of an attack that causes even calling back to fail. Because with some phone providers, when you hang up, it doesn't actually hang up. So you dial the new number thinking you are making a new call, but you are actually on the same call and they just play ringing sound effects for you, and have a new person "answer".


This only affects landlines, right? A smartphone tells you who you're connected to.


I'm not at all surprised. And at the end of the day I have no doubt that my social security number, driver's license number, bank account number, etc. are for sale on the black market.

But at least as of today I'm pretty sure those databases are still quite expensive as that information isn't as easily available. And even if it was more readily available, it's still advantageous for criminals to maximize their payout by being selective about the accounts they rob. Who knows what information is useful to them when determining risk and benefit. Credit card numbers are a dime a dozen, but the numbers of high net worth individuals (not me, obviously!) which see less scrutiny for high-dollar purchases are worth a heck of a lot more than the ones you dump after fraudulently charging $10. If I owned a Tesla and somebody called me up to ask questions about my Tesla, I'd neither confirm nor deny; I'd politely ask them their purpose, and if it seemed legit I'd call them back on a published line. (Obviously just telling them you'd call back is all they'd need to know to profitably classify your identity, so it's very much a judgment call.)

My rule is that if I haven't previously published the information, I'm not going to give it up easily unless I initiate the communication, even if I know the information is already public, such as birth date, place of birth, etc.

My goal is just to minimize my exposure to fraud; I can never eliminate it. And choosing not to divulge personal information unless I initiate the contact, whether by phone, e-mail, or whatever, is a relatively simple, convenient, and rote countermeasure I've employed for almost a decade now.

I don't care about the technical details, except insofar as they help me very roughly gauge relative risk. Criminals are specialists in their area and I'm not about to pretend I can keep up with the state of the art. At the same time, fraud enterprises, like porn, tend to be at the vanguard of technology. So if something is going mainstream, it's a sure bet it's been underground for a while.


Can you provide more details? I haven't heard of this one and hope to educate myself better if it really is possible.


This is a potential concern with traditional analog POTS. There isn't an explicit signal to end the call on the analog side. Some switches take longer than others to detect when your phone goes on-hook and end the call. If yours takes a long time, then on the plus side, you can hang up one phone, walk to the other side of your house and pick up the other to continue a call, but on the minus side, an attacker could play a dial tone at you and trick you.

This can potentially happen with connections between switches too, but modern (as in 1980s-modern) out-of-band signaling on trunks generally prevents this sort of thing.


Shortened URLs or extremely long ones are scary.


Microsoft took almost a year to fix a bug I reported. A year after my initial disclosure, they finally issued an emergency patch for Exchange and gave me a bounty.


What's particularly terrible about problems like this is that an organization with absolutely no relationship with Comodo is vulnerable. The CA system is so horribly broken.


Here is a graph of CAs. Who do you trust? https://notary.icsi.berkeley.edu/trust-tree/

See also Comodo vs Letsencrypt. https://letsencrypt.org/2016/06/23/defending-our-brand.html


Funny to see the giant swirl that is DFN with all its individual children.

(DFN is the "Deutsche Forschungsnetz" ("German research network"), and it gives out sub-CAs to the individual member institutions. Which apparently is an uncommon practice)


I talked to the CA guys at my university once and they don't have access to the keys. They do identity verification and send the documents onwards to some other CA folks at DFN for the actual signing. Makes the whole thing rather pointless imho.


Of course, this reminds me of the ANSSI fiasco in late 2013, where one of the intermediates was used for MITM. From https://bugzilla.mozilla.org/show_bug.cgi?id=693450#c29 : "I received email from the CA representative that said that the decision to include this root IGC/A 4096 SHA2 is discontinued (reason is the complexity of the operation and the associated costs)."


The certificates are signed via a SOAP API; the individual institutions do not have the keys for the sub-CAs.

The DFN will change this practice while migrating to a new root certificate from Deutsche Telekom by 2019, but it's giving the staff at my university major headaches: it's far easier to tell members of the university to only trust (e.g., enter their password on) sites whose certificate is signed by a CA with the same name as the university.


Seems like (almost) every single German university is part of the DFN and has its own CA. I worked for a hospital for a while that was also part of it. You'd think the point would be to cut bureaucracy and make it easier to obtain certificates, but no: our boss had to appear in person at one of the offices. (Yes, the hospital has several physical offices for its own CA.)


The graph doesn't seem to include Let's Encrypt.


It seems to be based on data from 2012, or at least older than 2015. (The blog post presenting it is from 2012, and there are certs listed that expired in 2014.)


What do the dot sizes and red color indicate?


red = roots not signed by other CA certificates

size = number of certificates signed by it (not sure from what dataset)


To somewhat mitigate email-based attacks like this, I configured my email client to show the text version instead of the HTML version when both are provided. It's pretty clear to me that not many people do this... many major websites send me emails where the text version is empty, truncated, contains completely different content, or is a message like "Your email client is misconfigured, it should be showing you the html version".


I have mutt configured to open HTML email in lynx and dump the text. Lynx is a terminal browser; it does not display images and does not execute JavaScript. Prior to configuring mutt this way, I frequently encountered the kind of bad content you describe, but now I get the content and can still feel safe. I'm still vulnerable to HTML parsing bugs in lynx, but I don't think the risk of anyone targeting lynx is all that big.

To read and send mail, I ssh to my mail server and use mutt there. For the most part it works great, except when I receive links I need to visit longer than about 70 characters because then I have to copy the URL in parts rather than all at once due to the plus signs inserted by mutt to indicate that the line continues.
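For reference, a common way to get this mutt-plus-lynx setup is a pair of config entries like the following (the paths are illustrative; `auto_view`, `mailcap_path`, and `copiousoutput` are the standard knobs, but check your own mutt and lynx builds):

```
# in ~/.muttrc: automatically render text/html parts via mailcap
set mailcap_path = ~/.mailcap
auto_view text/html
alternative_order text/plain text/html

# in ~/.mailcap: dump HTML to plain text with lynx (no images, no JS)
text/html; lynx -dump -force_html %s; copiousoutput
```

With `alternative_order`, mutt still prefers the text/plain part of multipart/alternative mail when one exists, and only falls back to lynx-rendered HTML otherwise.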


I'm a mutt user, too. I think

  set markers=no

in your ~/.muttrc will get rid of the plus signs.


That removed the plus signs. Thanks! :)


I've been configuring my clients as text-only for years and the number of image-only emails that need reading at all is essentially zero.


Incompetence, from top to bottom. If you are in a position to deny them money you owe it to the internet to divest asap.


I did exactly that - went with Let's Encrypt for my latest site and ditched them for good.


> Peeking at the above raw email we notice that the HTML is not properly being escaped.

While I don't believe the author is mistaken that he successfully injected HTML into the email, the snippet quoted in the article is properly escaped: no escaping is needed, because that part of the email is not HTML:

  Content-Type: text/plain; charset=UTF-8
  Content-Transfer-Encoding: 8bit

  [ snip ]
  Subject:
  <h1>Injection Test</h1>
  <pre>This order was placed by</pre>
This is correct for plain text as <h1> has no special meaning. That said, this is a multipart/alternative email:

  Content-Type: multipart/alternative; boundary="(AlternativeBoundary)"
"multipart" meaning the email contains multiple parts, and "alternative" indicating that they are alternate representations of the same content. We're looking at the plain-text representation when we should be looking at the HTML representation.

(These exist so that email clients that don't support HTML, or are configured to ignore it, can fall back on something.)
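A minimal sketch of this structure, using Python's stdlib email package (the subject and body below are illustrative, not the article's exact message):

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Build a multipart/alternative message: two renderings of the same content.
msg = MIMEMultipart("alternative")
msg["Subject"] = "Order confirmation"

# Fallback part: HTML tags in text/plain are inert, shown literally.
msg.attach(MIMEText("<h1>Injection Test</h1>\n", "plain"))
# Rendered part: an HTML-capable client parses this one instead.
msg.attach(MIMEText("<h1>Injection Test</h1>\n", "html"))

raw = msg.as_string()
# raw now contains a 'Content-Type: multipart/alternative' header with a
# boundary, followed by a text/plain part and a text/html part.
```

A client that only shows the text/plain part makes the payload look harmless, which is exactly the confusion in the quoted snippet.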


Oops, you're right, I snipped the wrong part of the actual email I received. There's an HTML block below the text/plain block :) nice catch! I'll update it shortly.


This is so bad. I used to think a while back that SSL pinning was over the top. It looks like we as an industry need to move to SSL pinning wholesale, asap.


The problem with pinning, as I understand it, is that it prevents me from seeing traffic from my own computer (via an HTTPS proxy). Pinning is fine as long as I can turn it off, but I don't want to completely lose the ability to audit the traffic coming off my computer, phone, etc. Related is the recent supposed change in Android removing the ability to add a trusted root certificate, which also breaks the ability to audit.


If you are using Firefox you can log all TLS Master Secrets to a file so you can later on decrypt the recorded TLS (i.e. HTTPS) sessions. Wireshark supports the file format out of the box.

https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NS...
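The mechanism behind this is the SSLKEYLOGFILE environment variable honored by NSS; a quick sketch (the log path is just an example):

```shell
# NSS-based browsers (Firefox) log TLS (pre-)master secrets to the file
# named by SSLKEYLOGFILE, if it is set when the browser starts.
export SSLKEYLOGFILE="$HOME/tls-keys.log"
# firefox &   # launch from this shell so the browser inherits the variable

# Then point Wireshark at the same file:
#   Preferences -> Protocols -> SSL (TLS in newer versions)
#   -> (Pre)-Master-Secret log filename
echo "key log: $SSLKEYLOGFILE"
```

This only decrypts sessions captured while the variable was set, and anyone who can read the file can decrypt them too, so treat it as sensitive.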


For better and for worse, installing a private root in most browsers causes pinning to be disabled (for certs signed by that root?).


From what I can tell from my former employer's HTTPS MITM setup, websites with certificate pinning turned on had to be exempted from the TLS proxy; otherwise browsers would throw pinning errors.


"Chrome does not perform pin validation when the certificate chain chains up to a private trust anchor. A key result of this policy is that private trust anchors can be used to proxy (or MITM) connections, even to pinned sites. “Data loss prevention” appliances, firewalls, content filters, and malware can use this feature to defeat the protections of key pinning."

https://www.chromium.org/Home/chromium-security/security-faq...


It's not hard to disable this in my experience. I can break cert pinning in less than 2 mins on Android apps and perhaps 20 on iOS (replace the key in strings).

Web is even easier, just run chrome without pinning enabled.


A very obvious mitigation for this and similar issues seems to be forbidding HTML in domain validation emails. I don't see a reason not to do this, and I fully expect that Comodo is not the only CA with such issues.


Another day, another CA problem.

I'm almost desensitized to it at this point. Despite the fact that these problems are potentially Web-breaking.


Wow, this attack was XSS 101.

And it could have been mitigated with anti-XSS 101: rejecting all form posts containing angle-brackets.


This is insufficient to prevent XSS, or DMI (dangling markup injection): the de facto anti-XSS defense is contextual (generally HTML) encoding, and it is the only proper mitigation. Blacklisting specific characters such as angle brackets is not safe, and will end in tears.


Yes, blacklisting is insufficient in general. But it covers many cases, including the one here. Contextual encoding must be done each and every place you emit user input into HTML, and it's easy to screw this up. Character blacklisting provides an extra layer of protection.

Unless you're intending for your user to post HTML, or math inequalities, or diff patches - there's no reason to allow angle brackets on a form post.
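For contrast, contextual encoding at the point of output is a one-liner in most languages; a minimal Python sketch (the function and template are hypothetical):

```python
import html

def render_comment(user_input: str) -> str:
    # Escape at the point where untrusted data is emitted into an HTML
    # text context; quote=True also covers attribute-value contexts.
    return "<p>{}</p>".format(html.escape(user_input, quote=True))

print(render_comment("<h1>Injection Test</h1>"))
# The brackets are emitted as &lt; and &gt;, so the browser displays
# the payload as text instead of parsing it as markup.
```

Note this only handles HTML text and quoted-attribute contexts; data emitted into URLs, JavaScript, or CSS needs encoding appropriate to that context.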


> Yes, blacklisting is insufficient in general. But it covers many cases, including the one here. Contextual encoding must be done each and every place you emit user input into HTML, and it's easy to screw this up.

Blacklists are only acceptable in addition to contextual encoding wherever you emit user-controlled data, and even then whitelisting acceptable characters would be a much stronger protection. Either way, contextual encoding whenever you emit user data is the only real protection, and even that can be screwed up, so a strong Content-Security-Policy should be your fallback.

> Unless you're intending for your user to post HTML, or math inequalities, or diff patches - there's no reason to allow angle brackets on a form post.

Form posts are not the only vector for XSS, any HTTP request can be potentially exploited to perform XSS (and any part of an HTTP request), doesn't matter if it's a Form, an AJAX request, an HTTP header value, or a GET parameter, they're all potential attack vectors.
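The Content-Security-Policy fallback mentioned above is just a response header; an illustrative (not prescriptive) policy:

```
Content-Security-Policy: default-src 'self'; script-src 'self'; object-src 'none'; base-uri 'none'
```

Even if an injection slips past output encoding, a policy like this blocks inline scripts and scripts loaded from foreign origins, which defangs most XSS payloads.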


> But it covers many cases, including the one here.

In my experience, it doesn't even cover cases such as this one, but it certainly makes developers confident they don't need to apply contextual escaping.


Why is blacklisting not safe, assuming you contextually blacklist?


There are a huge number of contextual corner cases, this cheat sheet lists just a few:

https://www.owasp.org/index.php/XSS_Filter_Evasion_Cheat_She...


I don't understand what that page is trying to tell me. What is the "filter" that <body onload=alert()> evades?



Programming like that leads to websites where using single quotes in a form gets a mysterious error. (They were trying to stop SQL injection in the most user-unfriendly way possible.)


It's a stretch to say that an attack based on lcamtuf's "Postcards From A Post-XSS World" is "XSS 101". It's "XSS 201" at least.

And "rejecting all form POSTs containing angle brackets" is definitely not the XSS 101 prevention mechanism! Don't do that.


Um, no.


Why does the title specifically say wildcard SSL certificates?


I think the author is using "wildcard" in the general sense of "any value you fancy" rather than meaning a "wildcard certificate".


[flagged]


Not free. But an attempt to counter LetsEncrypt [0].

This [1] is what Comodo attempted to do a while back.

[0] https://letsencrypt.org/

[1] https://letsencrypt.org/2016/06/23/defending-our-brand.html


It's not really free. It's a trial certificate, for lack of a better term.



