GitHub Pages generated a TLS cert for my own domain (securem.eu)
120 points by suixo on April 18, 2018 | 71 comments



This is a feature, not a bug.

You asked them to host your site. They secured it.

Github is the designated endpoint for your site traffic, so they cannot be 'rogue'. You explicitly granted them control over that endpoint, and their securing that traffic does not diminish your security in any way.

It's free hosting, version controlled, and now with a free TLS security upgrade. That's actually pretty awesome.


I mean, it's kind of overstated to call it rogue. A new feature of GitHub hosting, sure. It's a pretty common practice for hosts and CDNs to do it automatically. cPanel does it, Cloudflare does it (by themselves adding up to 10-20+% of all certificates currently trusted), and an immeasurable number of SaaSes and blogging/ecommerce platforms do it. I saw one user freak out when FastMail started doing it too for domains pointed to their static hosting.

From a Web PKI perspective I feel it's fine. DV is DV after all.

I do always create CAA records for my own domains though, even if it's just:

    issue ";"
    issuewild ";"
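
Spelled out in zone-file form (with example.com standing in for the real domain), that works out to roughly:

    example.com.  IN  CAA  0 issue ";"
    example.com.  IN  CAA  0 issuewild ";"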


To me it felt rogue since it had been generated without me knowing or expecting it, whereas I expect CloudFlare to do it. This is not an official feature of GitHub... But I understand the word may be too strong.

For CAA I would love to, but my registrar still doesn't allow me to create this kind of record :/


I've banged on about this before on HN, but SSL certificates do not mean what many (if not most) people here seem to think they do. They do not mean that you are communicating directly with a person, or even that some person approved the communication, or the same for a company. It may sound silly when I say it, because obviously we aren't actually getting webpages from a person, but it's important to internalize.

They mean something much more like "you are communicating with a machine authorized to respond on this domain by the owner of that domain". There is no obligation that the machine in question belongs to the domain owner. So when you delegated the (sub-)domain to GitHub, you also delegated the ability to generate at least low-class SSL certs, which verify that the delegation is correct and authorized and that the HTTPS connection is legitimate.

What's important is that nobody can get that authorization without your delegation. And even with Let's Encrypt, you should find you can't just stroll out and get a certificate for any domain you choose. At some point you have to have control of the domain itself to get a cert.
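
With Let's Encrypt's HTTP-01 challenge, for instance, the proof of control boils down to serving a CA-chosen token from whatever host the name actually points at (the token and account-key thumbprint are per-request values):

    GET http://example.com/.well-known/acme-challenge/<token>
    -> must answer with <token>.<account-key-thumbprint>

Only a machine the domain owner has pointed the name at can answer that, which is exactly the delegation being verified.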

(This is also why I have no problem with anything Cloudflare does with a certificate. There is no reason that they can't be shared with an authorized delegate by the domain owner. What matters is that CloudFlare can't do it without authorization, and if the owner wishes to revoke that delegation they have a clear path and CloudFlare can't do anything about it.[1] Cert delegation happens all the time, though; everyone running their HTTPS website off a hosted VM image is delegating the actual HTTPS-ing to the VM host, for instance.)

The tricky bit here is that you did not fully understand what you were authorizing when you delegated the domain to GitHub. No criticism intended, this is complicated business. It somehow needs to be fixed but heck if I know how.

[1]: In this case, note that CloudFlare may have a valid cert for your domain for a while after you leave, but when people check DNS to find where your domain is, they'll connect to you rather than CloudFlare. This is not a CloudFlare-specific issue, it would apply to GitHub here or any other delegate. The fundamental gap here is that domain delegation has no temporal component and SSL certificates do; an impedance mismatch is inevitable. In theory you ought to be able to revoke their certificate but that's a shipping container loaded with cans of worms.


Thanks for the detailed post. The word "rogue" seems a bit too strong, as I totally understand how GitHub generated the cert thanks to Let's Encrypt. The surprising bit is that when granting them the right to handle all internet traffic for the given domain (back in 2014), I wasn't expecting them to use it to generate certificates.

Then Let's Encrypt was released to the public (yay!), and today I am happy that GitHub generated this cert. However, I was surprised to see it was generated "behind my back", without any kind of notice and no public documentation of the feature.


> The surprising bit is that when granting them the right to handle all internet traffic for the given domain (back in 2014), I wasn't expecting them to use it to generate certificates.

I hate to flog a dead horse, but considering you were specifically pointing the domain at them for them to host HTTP, them then securing that really shouldn't be surprising. If they'd started running other services on it (eg email) then I'd start being surprised. An HTTP host hosting HTTPS, though...

More granular DNS records would be interesting for the future. The idea of being able to say "this host resolves to that IP but ONLY for web traffic and nothing else" (an "ahttp" record) intrigues me.


This already exists: SRV records
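
For reference, an SRV record looks roughly like this (all names here are placeholders):

    _http._tcp.example.com.  IN  SRV  0 5 80 webhost.example.net.

(priority 0, weight 5, port 80, target host). The catch, as noted below, is that browsers don't actually consult it for HTTP/HTTPS.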


Of course it does; I'm an idiot. Though it does seem like very few applications (I can't find a single mainstream browser) actually support using them. I do wonder why not.


Kerberos and AFS use it! There are a few others that can, but it's client-dependent. It would be nice if more things supported it out of the box.


Great explanation. As someone getting into automated customer SSL cert setup, I've found a lot of people have this misconception that it's some super-secret sacred identity, when really it's more about machine-to-machine trust, IMHO.


This is true except for the case of extended validation certificates. These should actually be validated as sanctioned by the registering entity, and the entity itself should also be verified.


Correct; that is what I mean by "low-class SSL certs", but thank you for using the correct term (which escaped my mind at the time). If GitHub got an extended validation cert, we'd have a real problem on our hands.


To be more precise, it would be a problem if GitHub got an Extended Validation certificate with the organization field containing the name of the domain owner rather than GitHub itself.

It would in fact be perfectly fine for a CA to issue an EV certificate containing "GitHub, Inc." and a domain that GitHub does not actually own, but which they control because someone pointed their A or CNAME record to GitHub Pages. The Extended Validation Guidelines[1] do not require domain ownership for the organization requesting the certificate; it's enough to control it in a way that allows you to complete one of the blessed validation mechanisms defined in the Baseline Requirements.

[1]: https://cabforum-travis-artifacts.s3-us-west-2.amazonaws.com...


Nothing is rogue when you point your DNS to someone else's host. Because you pointed a domain you owned at GitHub, it was then possible for GitHub to obtain a certificate for your domain. Sure, maybe something was missed in the terms of use, but this is something everyone needs to understand. Pointing your domain at someone else's server allows them to get a certificate for that domain. That's how domain validated (DV) certs work, it's how they've always worked.


Cloudflare is interesting because they do it even if you don't and have never used their proxy service, just their DNS hosting.

I was surprised when they issued certificates for my domains (as well as injecting a tonne of broad CAA records into my zone). You have to disable Universal SSL from the bottom of the Crypto tab. So, on second thought, I sympathize with you.


> Cloudflare is interesting because they do it even if you don't and have never used their proxy service, just their DNS hosting.

Ah, so that's why they made one for HN. Thanks!


Their DNS hosting != Their public DNS server, announced at the start of this month.


CAA wouldn't help in most cases!

If you have an IP based delegation (A or AAAA record) you're probably okay, but if you have CNAME delegation you're beholden to the named entity. I've commented on this before on HN when CloudFlare did the same thing to me: https://news.ycombinator.com/item?id=16579486


> To me it felt rogue since it had been generated without me knowing nor expecting it

That's understandable. Fastmail do the same, i.e. acquiring certs for their customers' domains without asking or informing, with a view to moving them to HTTPS.

The general opinion here seems to be in favour of this practice. So we can look forward to a future where your domain may publish on the web only with the permission of a CA.


> Fastmail do the same, i.e. acquiring certs for their customers' domains without asking or informing, with a view to moving them to HTTPS.

Not sure about the down-voters, but I was just about to post a 'citation needed' comment dismissing this as crazy talk, and I was surprised to discover it is actually true: https://www.fastmail.com/help/files/secure-website.html


When I started using CloudFlare for DNS only, without any of their extras, they also generated a cert for my domain, and it's impossible to disable this behavior through the UI (it requires contacting support). I did not expect that: I just wanted DNS, with no traffic through them, and I got a cert with my domain mixed in with some strangers'.


If you don't want this, you can prevent it by setting CAA DNS records on your own domain. How this works is described here: https://ma.ttias.be/caa-checking-becomes-mandatory-ssltls-ce...

You can validate if they've been configured correctly here: https://dnsspy.io/labs/caa-validator
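
Or straight from a terminal, assuming your resolver and your dig build know the CAA record type (example.com is a placeholder):

    $ dig +short example.com CAA

The answer lists which CAs, if any, may issue for that name; an empty answer means no CAA restrictions are in place.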

The article is pretty strongly worded for something that isn't all that bad. Yes, they issued a certificate, but you've sort-of given them permission to do so by hosting your content with them. If they own/control the server, they can get their certs validated.

It's a pretty good example of why you'd want something like Certificate Transparency even on HTTP-only domains, to know _when_ someone issues a certificate without you knowing about it. I use the Oh Dear! app for that feature: https://ohdearapp.com/


If you recommend your own (especially paid) services, please mention that they are yours.


A completely honest question: what's wrong with recommending a product without mentioning whether we're affiliated with it or not?


It's advertising misleadingly pretending to be not-advertising. If you have a potential motive apart from "I genuinely believe this is the best recommendation I can give you" for making a recommendation, you probably should disclose that. Especially in a community like HN, where talking about your own stuff is somewhere between accepted and encouraged, I can't think of good reasons not to do it.


Thanks for this insight! Now I see the difference, and where bias could play a role when recommending a product we're affiliated with. In fact, I've only seen this widely practised in the HN community.


> you can prevent this by setting CAA DNS records on your own domain

Well this is interesting. I already have a CAA DNS record on my root domain, but of course it's also set to 'letsencrypt.org', since that is what I use on my root domain. Although I don't guess it matters, since it's on the root and not the subdomain.

Edit: Actually, it looks like a CAA record on the root domain will also limit subdomains. So, although I already had a CAA record set up, this new GitHub feature should work as expected when it rolls out to my account, without any changes, since I was already using Let's Encrypt.


If your DNS host supports it (Route53 does), you can set a wildcard CAA record with no valid issuers that will do what you want.

    Bare -> LE delegation
    WWW  -> explicit LE delegation
    *    -> no delegations, and will override "bare" since resolution walks up the domain tree.
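
In zone-file terms (with example.com standing in for the real domain), that sketch might look like:

    example.com.      IN  CAA  0 issue "letsencrypt.org"
    www.example.com.  IN  CAA  0 issue "letsencrypt.org"
    *.example.com.    IN  CAA  0 issue ";"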


I mention CAA DNS records at the end of the post, but unfortunately the last time I checked my registrar did not offer the possibility of creating these records... :(

Oh Dear! looks really interesting (though I won't pay for monitoring my personal blog).

I am not sure I fully understand your HTTP-only remark, since how the communication is made (HTTP-only, HTTPS, IMAP, etc.) is not related to how the certificate is generated (which is what CT covers).


CertSpotter is a free service and open source project for monitoring CT logs: https://sslmate.com/certspotter/

(I'm not affiliated)


This is the intended behaviour and something GitHub Pages users have long been asking for: https://github.com/isaacs/github/issues/156

It's using Let's Encrypt under the hood, and only generating a cert for the custom CNAME pointing at the GitHub page.
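
If you want to check whether the rollout has reached your site, the served cert's issuer is a quick tell (swap in your own custom domain for www.example.com):

    $ echo | openssl s_client -servername www.example.com -connect www.example.com:443 2>/dev/null | openssl x509 -noout -issuer

If it has, the issuer line should mention Let's Encrypt.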


I really don't see the problem. Once you set your cname record to GitHub, you've essentially yielded all control to them.

If you don't like that, don't set a cname record.


From GitHub themselves:

"GitHub Pages sites have been issued SSL certificates from Let's Encrypt, enabling HTTPS for your custom domain. This isn't officially supported yet and it's not possible for you to enable and enforce it on your sites at this time."


Could you please provide a link to this page? I wasn't able to find anything like this on the docs (https://help.github.com/articles/securing-your-github-pages-... and related)

EDIT: found the "official statement" here: https://gist.github.com/coolaj86/e07d42f5961c68fc1fc8#gistco...


What if you trust them at the time, but then move your domain over to different hosting. Is it possible to revoke the previous certificate, or could your old host theoretically keep hold of the old cert and use it in a MitM attack against you?

Fortunately LE are moving towards shorter and shorter validity periods for certs, which at least limits your risk somewhat.


Certificate revocation only really works in theory. Fortunately Let's Encrypt certificates are rather short-lived.


There was some discussion recently in this issue that GitHub was rolling this out to some accounts:

https://github.com/isaacs/github/issues/156


Thank you! This confirms the gradual-release thesis, although I am surprised that no communication was made by GitHub.


You gave GitHub the right to use your domain to serve content (including over HTTPS) when you pointed your domain at GitHub servers. This is not a problem.


There is no huge problem, just a question about how this happened, since the UI doesn't allow it and the documentation states this is not possible.


OP here. Thanks to all your rich comments, I have updated the post with the final conclusion:

GitHub is gradually (and silently) deploying HTTPS to custom-domain websites hosted on GitHub Pages, using DV certificates from Let's Encrypt.


This would be good, preferable, and the right thing to do, except it should come with notification and an opt-out.

The author has the right to be annoyed that this was done without notification (though I would say that, despite how it was done, there was no harm, no foul here). I also eagerly await this change for my own GitHub Pages-hosted custom-domain sites.


I didn't realize they were doing this, but sure enough a domain I host on GitHub Pages now responds on HTTPS with a Let's Encrypt cert. Cool.

They are not redirecting port 80 traffic though, at least not yet.


> TL;DR: This blog is hosted on GitHub Pages,

In my book that's not rogue. If you don't trust them to serve HTTPS, why are you trusting them to serve HTTP? Feels a bit like outrage for the sake of it.


If you want a practical problem, what about revocation? OP's trust in GitHub hosting is revocable at any time by changing the CNAME, but the generated cert will still be valid for some time (and can be used e.g. to MITM people).


Any CA can issue any certificate for anyone. This is where the Certificate Transparency log comes into play (that little backdoor in the browsers that sends the hosts you visit to Google and friends).

Imagine you are the host of a domain and you receive an HTTPS request.

What are your options?

A) Drop the request? Fall back to HTTP and the user gets MITM'd.

B) Serve a self-signed certificate.

C) Serve a certificate trusted by a well-known authority.

D) MITM yourself with CloudFlare? Put CloudFlare in front, and CloudFlare will proxy the traffic to GitHub in plain HTTP.

Now talking about risks:

   $ openssl s_client -servername blog.securem.eu -connect blog.securem.eu:443 | openssl x509 -noout -dates

   notBefore=Apr 15 15:48:38 2018 GMT
   notAfter=Jul 14 15:48:38 2018 GMT

https://letsencrypt.org/2015/11/09/why-90-days.html

The certificates are valid only for 90 days.

This looks like inventing a problem. If you decided to give control of part of your domain to GitHub, then yes, they will be able to serve content on your behalf. That's normal, and logical.


I don't think it's a major problem, but it does violate the Principle of least astonishment, which I think we (developers) should strive to avoid.


I think they should just add a line in the UI instead of hiding the info deep in tickets or issues, but that's a mere communication issue. Overall, it's not surprising, and actually good for the user.

On letsencrypt.org:

     In Progress:
     These large providers are currently rolling out support for Let’s Encrypt for custom domains.
     You may or may not be able to enable support in their control panel,
     or you might notice certificates have recently been issued for your domains hosted with these services.

     Blogger
     GitHub Pages
https://github.com/isaacs/github/issues/156#issuecomment-366...

https://community.letsencrypt.org/t/web-hosting-who-support-...

and so on.


> I don't think it's a major problem, but it does violate the Principle of least astonishment,

The principle of least astonishment should tell you that in 2018 HTTPS is becoming the default and hosting web pages with automated HTTPS should be expected.

What you should be astonished about is that it took Github so long to support HTTPS everywhere.


They already supported HTTPS, just not for custom domains.


GitHub can and should ask Let's Encrypt to revoke the certificates; the API lets them do that by proving it's their certificate. There's a suitable revocation reason code like "obsolete", but I can't remember if you can set the reason code in ACME.

I also can't remember whether there's an API for legitimate owners to revoke a cert issued to someone else that's no longer OK. Let's Encrypt does have to be able to do that, but if there's no API it might be very manual.


> I also can't remember whether there's an API for legitimate owners to revoke a cert issued to someone else that's no longer OK. Let's Encrypt does have to be able to do that, but if there's no API it might be very manual.

You can do that by going through the usual challenge process in ACME and obtaining an authorization object for all names on a certificate. Any ACME account that is authorized to issue certificates for these names can also request revocation of existing certificates (even if they are owned by different ACME accounts).

(You're a bit screwed if the certificate you want to revoke also contains names from other users. This is a good argument for a "one name per certificate" or "only the names of one user per certificate" policy for such implementations.)


At least with Let's Encrypt, you can revoke a certificate issued by a different ACME account, as long as the ACME account you are revoking from has a valid authorization for all of the DNS identifiers on the certificate being revoked.

Of course, this is useless if the certificates were issued under a different CA, so your point is still valid. Prevention is better :) !


> ACME account you are revoking from has a valid authorization for all of the DNS identifiers on the certificate being revoked

Does this mean that if GitHub did what CloudFlare does and batched multiple domains they serve into the same cert you wouldn't be able to revoke it?

I guess with a 90-day expiry it's not that big of a deal...
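
Out of curiosity, you can check which names the served cert actually bundles (example.com is a placeholder; the grep is a crude filter and the output format varies between OpenSSL versions):

    $ echo | openssl s_client -servername example.com -connect example.com:443 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'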


Yeah, that's a good point too.


Actually this was exactly what happened and surprised me. I was moving my blog out of GitHub Pages to a self-hosted solution, and it's only after switching DNS and generating the cert for my own server that I noticed that GH Pages was ALSO serving the old version of the blog over HTTPS.

I didn't know about the LE revocation mechanism at the time.


Somewhat on topic: I find it interesting that Cloudflare is issuing certificates for HN, despite HN serving a Comodo cert.

https://crt.sh/?id=182943715


They may be using Cloudflare in some regions only, or as a standby to take over in case of a DDoS on the original host.


Another post in this thread answered it: HN uses Cloudflare's DNS, and they create a cert even if you just use the DNS.


HN doesn't use Cloudflare DNS for news.ycombinator.com, but is actually served by Cloudflare.


It's kind of GitHub to only make the SSL cert valid for ~90 days.

https://www.sslshopper.com/ssl-checker.html#hostname=blog.se...

> The certificate will expire in 87 days.

Good reminder for everyone to check their cert expirations!


That's not kindness, that's how Let's Encrypt works. And if you still have certificate expiration reminders in your calendar, you're doing it wrong.


That’s like saying “if you’re not using <ESx feature released three months ago>, you’re doing it wrong”.

The vast majority of websites still use traditional, yearly certs without automation. It may not be perfect, but it's not the worst thing in the world.


> three months

Three years.

In these three years, I've seen at least 5 outages because of expired certs. Maybe if you have only one domain, the ROI is not as clear.


You can use CloudFlare HTTPS + Github Pages = 100% free hosting with SSL


You don't need CloudFlare anymore, as Github are gradually rolling out native https support for Github Pages with custom domains (using LetsEncrypt)


Hasn't been rolled out to ours yet (sqlitebrowser.org).

Wish it was though, as we have an open Issue on GitHub from a user about not using HTTPS. This would mean 1 less open issue. :D


CloudFlare "Flexible HTTPS" is not HTTPS in the traditional sense. The connection between GitHub and CloudFlare is unsecured.


I have pages on Netlify with their one-click SSL, and more than once the certificate has (here my ignorance becomes apparent) stopped working, breaking the site (since https is forced server-side and/or cached by browsers like Safari). To get the site up and running again I've needed to contact support and have them manually issue a new certificate.

Maybe this is way easier than handling things on my own, but it seems like an Achilles heel of fully automated SSL.


Sounds like they just have a poor auto-TLS infrastructure. A good system will (1) try to renew multiple times ahead of expiration and (2) warn humans if the cert is about to expire.
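
A minimal sketch of the "warn humans" half in shell, assuming a 30-day threshold and example.com as a placeholder:

    # -checkend exits non-zero if the cert expires within the given number of seconds
    $ echo | openssl s_client -servername example.com -connect example.com:443 2>/dev/null | openssl x509 -noout -checkend 2592000 || echo "cert for example.com expires within 30 days - tell a human"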


Yup, this is definitely a Quality of Implementation issue, like if you go to a cheap bulk host and every few months your web site is just "down" due to some idiot error they made.

If it's cheap or free, well, hard to complain.



