I don't understand why you made the CA private key user-readable.
On my system, modifying the trust anchors requires root privileges.
mkcert undoes this boundary by opening a "shortcut" where any code on the machine can mint trusted certificates for any domain (like internet banking).
Why not make the CA private key only readable as root, and the issuance of new leaf certificates a privileged operation?
I would also expect that if/when you add ACME, the CA key material would be locked away under a different user from the one running the desktop environment (DE).
Maybe not every platform you support actually has this boundary, but I would expect that mkcert doesn't remove the boundary on platforms where it does exist.
That is a fair argument, but it is already possible to get the behavior you want: the root key is generated with permissions 0400, so if your first run of mkcert is as root, all following invocations will also require root.
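Roughly, and assuming you pin CAROOT so root and your normal user look at the same directory (mkcert honors the CAROOT environment variable), a sketch of that workflow:

```sh
# Sketch only: pin CAROOT so root and the normal user share one CA directory.
export CAROOT=/usr/local/share/mkcert
sudo --preserve-env=CAROOT mkcert -install      # first run as root: key created 0400, owned by root
sudo --preserve-env=CAROOT mkcert example.test  # issuing leaf certs still works as root
mkcert example.test                             # as a normal user this should now fail: key unreadable
ls -l "$CAROOT"                                 # expect rootCA-key.pem as -r-------- root
```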
My reason for not making it a first class feature (with its complexity cost) is that in the vast majority of cases, an attacker with the ability to read local files can also directly attack the browser. (At least in mkcert’s use case, which is development not deployment.)
Also, I don’t really see how to maintain that boundary once ACME starts giving certificates to anything that can connect to a local port (without sending headers that suggest it’s a browser being tricked).
It's similar to the approach used by Telerik Fiddler: you only have to be admin once to install the CA, and it autogenerates certificates as needed for the HTTPS MITM proxy. However, its CA is labelled "DO NOT TRUST", so it's easy to notice in the certificate info window.
I don't like the idea of the root requirement. Sometimes you are logged into a system maintained by someone else, and you want to do things as a normal user.
There's already a root requirement so that `mkcert -install` can deploy to the trust store. The problem is that after this point, the trust store is totally undermined, because the new CA's private key sits around unprotected.
I feel like this is a similar pitfall to how if you add a user to the `docker` group for convenience, you (perhaps unknowingly) gave that user root access to the host.
With this minor change, mkcert still retains its full function and convenience. Just type your password once in a blue moon when you need a certificate for a new fake real domain.
Do you really want a valid certificate for localhost?
I can get a valid certificate for development by simply getting a valid certificate for localhost.example.com on my server through Let's Encrypt and then making localhost.example.com resolve to 127.0.0.1 in my /etc/hosts file.
Some code can behave slightly differently on localhost than on localhost.example.com, for example in deciding whether to keep cookies on a third level domain, so tests are more reliable that way.
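A sketch of that setup, assuming you control example.com's DNS and use certbot with a manual DNS challenge (the hostname is just an example):

```sh
# On a machine that can answer the DNS challenge for example.com:
certbot certonly --manual --preferred-challenges dns -d localhost.example.com
# On the dev box, point the name at loopback:
echo "127.0.0.1 localhost.example.com" | sudo tee -a /etc/hosts
```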
If you don't care about HTTPS on localhost but still want a fully qualified domain name you can simplify this approach even further.
Let's say you have an app running on localhost:8000 on your dev box.
If you go to lvh.me:8000, it will work without having to install or modify anything on your dev box or modify your DNS records for another domain. If your app had an "example" subdomain, you could even go to example.lvh.me:8000.
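A quick way to sanity-check the trick (assuming the public lvh.me wildcard record still resolves to loopback):

```sh
dig +short lvh.me example.lvh.me   # both should come back as 127.0.0.1
curl http://example.lvh.me:8000/   # reaches the app listening on localhost:8000
```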
Yes, this is what I do. In fact, I have *.localhost.example.com resolve to 127.0.0.1 in DNS. "localhost" may be a special case in browsers.
I'm wondering if anyone would be aware of any issues if the private keys were exposed? I'm going to assume that if one can MITM my localhost, it's already game over.
I was wondering the same thing, but the only realistic attack I can think of is if you happen to make localhost.example.com resolve to example.com and don't check the host name server side. And then the end user needs to not look too closely at the address bar.
So perhaps not do this if your domain is large_bank.com. But if you are aware the keys are exposed you could simply change your DNS to make it resolve somewhere else.
Of course, another option is to use localhost.junk_domain.com for testing so it doesn't matter if you lose the keys.
My slight concern is that most of the pre-built binaries (the "pre-built binaries" link goes to https://github.com/FiloSottile/mkcert/releases) are not signed. There are only a few releases that are marked Verified. Since this messes with the local trust store, it would be good to have things signed.
The snark is directed at this sentence:
> If you are doing TLS certificates right in production, you are using Let's Encrypt via the ACME protocol.
I agree that Let's Encrypt is great, but I dislike the implication that, if you're not using Let's Encrypt, you're not doing TLS right. Or maybe the text was supposed to be read as…
> If you are doing Let's Encrypt TLS certificates right in production, you are using the ACME protocol.
Current desktop OSes don’t offer fine grained permissions, so there is no reason to hold software with specific features to a higher standard of integrity. Still, I recommend you install anything you can, mkcert included, from your package manager, as shown in the README.
As for the snark, I stand by my provocation: I believe it’s best practice to automate TLS certificate renewal, and for 95% of deployments that means ACME, which for 95% of cases currently (but not necessarily) means Let’s Encrypt.
It's still desirable to have the "release" version of source code tarballs signed, so maintainers who are making packages for a system distribution can check the integrity and get some level of confidence. But fair enough, at least GitHub has HTTPS. And well, it uses AWS for files, so a signature is still useful.
From time to time I encounter projects where upstream doesn't support HTTPS and none of the tarballs have signatures. There's a lot of psychological pressure as a packager when dealing with projects like this: it's essentially impossible to verify the integrity of the project, and users may get pwned because of my package. My workaround is to (1) download the packages from different servers and compare the hashes, and (2) check the hashes recorded in other Linux distributions' repositories, to make sure at least my own network wasn't subject to a MITM attack...
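Something like the following, with placeholder mirror URLs, is what that cross-checking boils down to:

```sh
# Fetch the same release from two unrelated networks/mirrors and compare.
curl -L https://mirror-a.example.org/project-1.2.3.tar.gz -o project-a.tar.gz
curl -L https://mirror-b.example.net/project-1.2.3.tar.gz -o project-b.tar.gz
sha256sum project-a.tar.gz project-b.tar.gz
# Then compare against the checksum another distribution recorded for the
# same upstream tarball (e.g. in its packaging repository).
```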
There have been cases where official download servers have been seeded with bad payloads; I think it is usually resolved within a day or a week. So letting some time pass between downloading and comparing those hashes is probably a good factor as well.
Automatic software installation is really a great way to get backdoors into systems.
> Current desktop OSes don’t offer fine grained permissions
Please correct me if I'm wrong, but my impression was that Windows offers several certificate stores on several security levels (AD-domain, machine, user), each one able to separately set ACLs for the certificate container, the individual certificate and the private key.
Just because Linux and macOS don't support fine-grained permissions doesn't mean "current desktop OSes" don't. In fact, the dominant desktop OS out there does this just fine and has for decades.
It's the niche hipster OSes used by developers and security experts (like Linux and macOS) which lack this fine-grained control (and I'm saying this as a Linux user).
No need to say that what the guy asks for doesn't exist when it clearly does in the OS with the clear majority market share.
Or... Again... Am I missing something crucial or really obvious here?
> Windows offers several certificate stores on several security levels (AD-domain, machine, user)
macOS has different certificate stores and different keychains too. The OP's tool specifically opts to use the "admin" cert store in the system keychain, rather than the user trust domain and user keychain (the default for the `security add-trusted-cert` command).
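From memory of the security(1) man page (so treat this as a sketch rather than gospel), the two variants look like:

```sh
# Admin trust domain in the system keychain (what mkcert -install does; needs sudo):
sudo security add-trusted-cert -d -r trustRoot \
  -k /Library/Keychains/System.keychain rootCA.pem
# Per-user trust setting instead (no sudo; only trusted for this user):
security add-trusted-cert -r trustRoot \
  -k ~/Library/Keychains/login.keychain-db rootCA.pem
```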
Honestly if you take Windows users out of the equation, this whole thing could just be a posix shell script, which would also solve the "verified release binaries" issue, IMO.
Windows doesn't have fine-grained permissions for adding to or changing certificate stores, though. When you run "mkcert -install" you'll get a generic prompt for mkcert requiring admin permissions, not a prompt for it changing certificate stores.
I believe the point is that any software asking for admin could fiddle with your certificate stores, so there's no sense in asking for a higher standard of integrity from software that tells you it will do so.
> Windows doesn't have fine-grained permissions for adding to or changing certificate stores, though.
From quick testing, in a non-elevated prompt I can edit my local user's trust store, but not the machine-wide one. In an elevated prompt, I'm allowed to edit both. Seems like there are different permissions for different stores to me. (I haven't tested whether any browser consults the user store; if they don't, it's not very relevant for this specific use case. If I remember right, which browser wants certificates presented in which store was kind of a hassle.)
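For reference, the test was roughly this with certutil from a PowerShell prompt (option placement from memory, so double-check the exact syntax):

```sh
certutil -user -addstore Root rootCA.pem   # current-user Root store: works from a normal prompt
certutil -addstore Root rootCA.pem         # machine-wide Root store: only works elevated
certutil -user -delstore Root "mkcert"     # remove the test entry again (match by CN)
```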
> You cannot install public ACM certificates directly on your website or application. You must install your certificate by using one of the services integrated with ACM and ACM PCA.
With AWS being the biggest cloud vendor, it means that most people could probably use ACM and still do things right (it's probably even better than LE, as it's harder to mess up). So your 95% above is probably extremely inaccurate :)
Requiring some manual steps for certificate renewal is actually standard practice in many places. The renewal process offers a chance to recognize when you have sites or services that are no longer needed and should be shut down, whether it's to save money, reduce potential security risks, etc. That's not to say that it needs to be completely manual; it could be as simple as an email being sent to a group and someone approving the renewal. Everything after that could be automated.
APIs to handle obtaining and deploying certificates weren't new with ACME. What's new in ACME is the part that lets you automate proof of control of a name. There's a fun thread on m.d.s.policy where I'm explaining this and Peter Gutmann (who does some work in this area) starts out asserting that this feature is so unimportant and stupid that you could easily do it with other existing protocols; when that argument is destroyed he switches to claiming nobody ever wants to do such a thing, and eventually he retreats to the idea that the Web PKI is an insignificant application anyway, so it doesn't matter.
The funniest part is that this all happened not when Let's Encrypt was an untried experiment but months into deployment when it was already a huge success.
It was the equivalent of that moment when an elderly relative informs you that they can't see these new fangled computers ever amounting to much.
I was pleasantly surprised to discover a few other CAs support the ACME protocol; however, it's either a free cert without support for SANs or certificates that require manual approval via a web control panel.
I’m not against paying for a cert but don’t make it much harder than LetsEncrypt to use it.
So if I'm using AWS certificate thingy I'm doing it wrong? Should I feel dumb, too? How dumb should I let myself feel before changing it to what the blog post says?
Let's Encrypt has dramatically helped increase the number of websites secured with SSL. They're important, but not even close to being the sole certificate authority.
Besides, their stack is open and available. If a monopoly ever became a concern in the future, other CAs could spin up using the same software. But that doesn't seem like something worth being concerned about right now.
In fact, there have been some HN posts lately talking about having a backup CA, since LE is not the only free CA that allows automation of cert creation.
Hopefully everyone has documented all the systems that auto-generate certs and what certs they request, and put that in a manifest so that they can have a contingency plan in the event that LE is knocked offline or shut down for whatever reason. It can certainly happen.
127.255.255.255 is the broadcast address of the /8. It doesn't describe the number of bits in the network prefix. You need a netmask or CIDR notation for that. For IPv6 you need CIDR, so I suggest learning it.
CIDR is understandable for people who know basic networking. If someone doesn't understand what 127/8 means (which is totally OK if someone doesn't) I'd argue they shouldn't learn about the topic we discuss (valid HTTPS certs for localhost); they should learn something as basic as CIDR first. CIDR isn't difficult to understand. Basics first. You can't learn to speak English if you don't know how to open and close your mouth...
Yes, there is. Either you mention an IP address with a netmask, such as 127.0.0.0 255.0.0.0, or you use CIDR. Mentioning a broadcast address to describe an address space is wrong because it doesn't say how many bits belong to the network (the netmask and CIDR do that for IPv4).
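For anyone following along, the same loopback range written three ways (the point being that only the first two say how many bits are network):

```sh
# 127.0.0.0/8                   CIDR: the first 8 bits are the network prefix
# 127.0.0.0 255.0.0.0           the equivalent address + netmask form
# 127.0.0.0 - 127.255.255.255   the addresses that prefix covers; the last one
#                               is the broadcast address, not a prefix length
```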
Well this is timely. I spent several hours today trying to get local https working. I needed it because iOS Safari will only let you make getUserMedia (WebRTC) requests for the webcam over secure connections. I needed access to the webcam because Safari WebRTC won't give you access to other browsers on your LAN for data channels (which have nothing to do with your webcam) unless the user gives permission to use the webcam[0]. I needed access to LAN data channels because I'm working on a multiplayer game that I'd like to work on the major browsers. I had given up and finally decided to just use my VPS with Let's Encrypt for testing iOS, but maybe I'll give this a try. Worst case it doesn't work. Now I just need to figure out how to convince my users that taking a selfie is a reasonable requirement to play my game.
Same principle, though less well maintained and somewhat iffier, especially the server bits. You may want to (or have to) set up your own localtunnel server[0], though an advantage is that you can do that.
FWIW for my use case (testing github API calls & webhooks feedback), ngrok with a free account was more reliable, and interacted better with little snitch.
Are you sure you can't do getUserMedia on localhost? On Chrome at least it overrides localhost to be considered a secure URL, so service workers etc function fine over HTTP.
A private CA is a good route if you plan on setting up a long-standing, internal-only test server and devices. OTOH, if you just want to access https://localhost on your PC, things like "chrome://flags/#allow-insecure-localhost" exist and are easier to manage.
You might find using mkcert to be even easier than generating self-signed certificates :) and when it's time to test with a real hostname, it's just a matter of adding a line to /etc/hosts and rerunning mkcert!
I've always worked on low-level C++, so I never needed to understand how HTTPS certificates work... until I needed to test some code for WebRTC applications.
This tool was really helpful. When you just want to test your app and don't care at all about all those security restrictions imposed by the browser, mkcert helps, so thank you a lot for this.
At the beginning it was a bit confusing because some knowledge about how all this stuff works is assumed. For complete newcomers like me: I ended up asking for help, and the answer includes exact steps that one can follow to generate and install a certificate, so it might be helpful for someone reading this:
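(The exact steps aren't reproduced here, but the basic flow, per the mkcert README, is roughly the following; the hostname is just an example and the output file names may differ slightly.)

```sh
mkcert -install                         # create a local CA and add it to the trust stores
mkcert myapp.test localhost 127.0.0.1   # issue a leaf cert + key for these names
# Point your dev server at the generated ./myapp.test+2.pem and
# ./myapp.test+2-key.pem files, and add "127.0.0.1 myapp.test" to /etc/hosts.
```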
For local web dev/testing, most tools generate temporary test certificates automatically, or it's easy to get them, so the only real issue is production use, e.g. web apps communicating with local services.
The article links to a Let's Encrypt post (https://letsencrypt.org/docs/certificates-for-localhost/) which mentions that modern browsers consider "http://127.0.0.1:8000/" to be "potentially trustworthy". Several years ago it was simply impossible to communicate from HTTPS to local HTTP. Now most browsers (excluding Safari) support communicating with local HTTP from HTTPS pages. And there is hostile opposition from a Safari developer to supporting this, even though it is now a standard. (https://bugs.webkit.org/show_bug.cgi?id=171934)
Distributing an unconstrained CA root certificate and asking users to install it is a terrible idea. Even generating one locally is dangerous if the private key is easily exportable. If one still needs to support local HTTPS, it's better to limit the scope as much as possible, e.g. issue a certificate only for a sub-subdomain and apply name constraints, as @tedunangst mentioned. It's easy to do this with the XCA tool:
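The XCA steps themselves aren't shown here, but as a rough command-line equivalent (an openssl sketch, not the XCA workflow), the name-constrained CA part looks something like:

```sh
# Create a CA whose issuing scope is restricted by a critical name constraint.
cat > ca.cnf <<'EOF'
[req]
distinguished_name = dn
x509_extensions    = v3_ca
prompt             = no
[dn]
CN = Constrained Local Dev CA
[v3_ca]
basicConstraints = critical,CA:TRUE
keyUsage         = critical,keyCertSign,cRLSign
nameConstraints  = critical,permitted;DNS:.localhost.example.com
EOF
openssl req -x509 -new -nodes -newkey rsa:2048 -days 365 \
  -keyout ca.key -out ca.pem -config ca.cnf
```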
Cool. I wish there was an option to specify which trust stores to install to, for example only to Firefox's. The current default is to install in system (requires sudo), Chrome (which I don't use), and Firefox trust stores.
This is great, will definitely check this out. I just struggled for hours following multiple blog posts and howtos to setup https locally, because I was trying to test Facebook login, which only allows HTTPS and can't be tested otherwise.
Definitely a great product if it works as advertised!
While the tool being showcased here is great, there's really no substitute for doing it yourself and getting your hands dirty. Too many people (not saying you) use these kinds of tools as a crutch instead of learning the basics themselves.
Indeed, it's good to learn how those things work (I did, while going through a similar process to the one in your article), but a tool like this is definitely good for saving time.
One could make a similar comment about using cURL vs building your own HTTP request, to learn the basics themselves :)
In my experience, you don't want HTTPS for localhost, you want HTTPS for a server in the local network so that you can also test on other devices than the development machine.
A convenient solution in case you can control DNS at the local network level is to just get an ordinary certificate for a real domain (for example dev.mydomain.com) and have it resolve to that local IP.
Using your own CA is possible too, of course, but getting it installed on all the devices is a nuisance.
Using the DNS challenge with LE, you can create a wildcard cert that is valid for *.domain.tld. Then you need a simple local DNS server like Pi-hole to resolve any local domains to your local reverse proxy serving local sites with the wildcard cert, and you're done. You only need an internet connection for the user's browser to get the little green lock and when you generate the wildcard cert itself.
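As a sketch (domain and resolver IP are placeholders; a DNS-provider plugin can automate the TXT record instead of --manual):

```sh
# Get the wildcard cert via the DNS-01 challenge:
certbot certonly --manual --preferred-challenges dns \
  -d 'domain.tld' -d '*.domain.tld'
# Then have the local resolver send everything under domain.tld to the
# reverse proxy, e.g. a dnsmasq/Pi-hole line like:
#   address=/domain.tld/192.168.1.10
```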
If you do it this way, you have to copy your actual server’s private key to your local machine (or even machines if you’re using several), though. That possibly increases the chances of it getting compromised.
I’d prefer getting a separate certificate for local.domain.tld instead.
Routing to an IP in the local network obviously doesn't work outside that local network.
Re-routing the domain to the local machine (by editing /etc/hosts) should work just the same. A certificate from a trusted CA can be validated without an internet connection, they are part of the OS/Browser distribution.
I've been using https://www.tinycert.org/ for years, which basically lets you create your own certificate authority and issue certs (which obviously aren't trusted by everyone but can be trusted by you/your team). It's ideal for generating SSL certs for ephemeral apps, e.g. review apps on Heroku, since it can all be done using an API.
Am I wrong to think this opens a gaping security hole?
Adding a new root CA means anyone who gets their hands on the keys (which are openly available to the user of mkcert) gets to completely obliterate your ‘trust’ store and MITM any of your secure connections.
AFAIK attackers with local access already have lots of ways to exploit your system.
However, for added security you could run mkcert on another computer that is not connected to a network. Then you just copy the root cert and the leaf cert to your dev machine, but leave the root cert’s private key offline.
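A sketch of that split, with placeholder names (the key point being that rootCA-key.pem never leaves the offline machine):

```sh
# On the offline machine:
mkcert -install                        # creates the CA under "$(mkcert -CAROOT)"
mkcert myapp.test localhost 127.0.0.1  # leaf cert + key for the dev box
ls "$(mkcert -CAROOT)"                 # rootCA.pem (public) and rootCA-key.pem (keep here!)
# Copy to removable media for the dev box: the leaf pair plus ONLY rootCA.pem.
cp myapp.test+2.pem myapp.test+2-key.pem "$(mkcert -CAROOT)/rootCA.pem" /media/usb/
# On the dev box, add rootCA.pem to the trust store (by hand, or by placing it
# in the dev box's CAROOT and running "mkcert -install" there).
```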
That is a risk, but not an uncommon one. There are a dozen other workflows that rely on users installing CAs, maybe not quite without root, but installed all the same, so this isn't a new idea.
Also, it seems to me that virtually any way to get those keys involves hostile code running on your machine, meaning you're already toast.
I do also wish it didn't install certs as non-root, but as pointed out in another comment, there is a workaround for that.
Last time I set up a certificate for localhost, I was unable to get *.localhost to work because wildcard "TLD" certificates are rejected by the browser.
I wonder if browsers will allow this to work for localhost?
I love this tool a lot. It's super helpful. I can usually get away without HTTPS locally, but some tools really require it.
For example, I implemented a Google OAuth flow recently, and it has redirect URLs you have to pre-define. To simplify the code and config, I wanted to make HTTPS the default and only option, instead of checking the environment (dev/prod) to decide whether to enable HTTPS. This simplifies the code and still allows me to test HTTPS with it.
My solution was to use AWS and route 443 to 80 in a load balancer. That way there IS NO MIXED content: it is all HTTPS. (And certificates are free on AWS!)
If you don't like AWS, can't you just use nginx as your load balancer and do the same thing?
Why should a website care about what port traffic is coming in on, shouldn't the TLS happen IN FRONT of your website so the website can be agnostic?
mkcert is just making it easier to generate & install self-signed certificates... Nothing that couldn't be done before and using nginx as you describe is pretty standard practice. I don't really see why you went for the AWS solution.
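For anyone who wants the nginx version, a minimal TLS-terminating proxy in front of a plain-HTTP app looks roughly like this (paths, names, and the upstream port are placeholders):

```sh
sudo tee /etc/nginx/conf.d/dev-tls.conf >/dev/null <<'EOF'
server {
    listen 443 ssl;
    server_name myapp.test;

    ssl_certificate     /etc/nginx/certs/myapp.test.pem;
    ssl_certificate_key /etc/nginx/certs/myapp.test-key.pem;

    location / {
        proxy_pass http://127.0.0.1:8000;   # the app itself stays plain HTTP
    }
}
EOF
sudo nginx -t && sudo nginx -s reload
```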
Ah, I crammed a bunch of threads into my terse answer. We use AWS because Elastic Beanstalk works better for our environment than Heroku. Plus, with S3 integration (and Route 53), we found it doesn't make sense to maintain the OS ourselves, and it pays to keep everything on one cheap platform (cheap because it can be scaled up/down very quickly). Using the LB to send everything to :80 on a VPC means our devs don't need to dink with TLS on their remote laptops.
So the tool can generate a CA cert and automate its installation. In my experience it can be painful to teach other developers to generate and install their own, especially if the Java keystore is involved, so this could be very useful for large teams.
I'm almost 99% certain that everyone who's this concerned already has a domain (or they can get a free domain from something like .tk, since it really doesn't matter for pure local development).
So I feel like the following workflow is simpler no?
1. Use something like local.mydomain.com as your local dev domain. (set the DNS in Cloudflare / Netlify etc. to 127.0.0.1)
2. Use Let's Encrypt to generate certs for that domain (roughly as in the sketch below).
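A sketch of step 2, assuming certbot with the Cloudflare DNS plugin installed (the plugin and domain are just examples; since the A record points at 127.0.0.1, an HTTP challenge can't be used, so it has to be a DNS one):

```sh
certbot certonly --dns-cloudflare \
  --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
  -d local.mydomain.com
```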
Am I going about this the wrong way? (or is there something super insecure that I've missed?)
For the DNS part, I honestly think a hosts file entry is more flexible, as you can support environments using VMs/containers etc. with a guest that has a DHCP address.
The security issue comes in when you ship the private key. If you are following best practices, won't the private key be different for each domain, or managed in a better way?
So, now you're going to give each member of your team a way to authorise valid certificates for your domain? Great, I don't want to imagine what your HR/security vetting process will be after the first abuse of that power.
That's neat! I'm writing a web framework for Python that has some built-in features to make testing on localhost easier: you can pass it the file name to an existing certificate or it will generate one itself. The disadvantage to the no-setup required autogeneration is that you need to add a security exception on first use and every time the root expires. `mkcert` could definitely help with not training web developers to perform unsafe actions.
I solved this problem by requesting a TLS certificate from my company's Windows domain CA for my machine's hostnames on the internal network (machinename and machinename.company.com - both added to the same cert via Subject Alternative Name). Then I mapped those domains to 127.0.0.1 in /etc/hosts so it works even when I'm not on the internal network.
From there it's simple to host the front-end app I work on using a node.js script.
I guess I am not the only one with a `genssl.sh` lying around that grew big over the years: using openssl, generating a CA and cert requests, then signing and properly setting `subjectAltName`, etc. :-D
I like mkcert since it also injects the root CA into the trust store - was always too lazy to do this programmatically.
Also note that this doesn't use an intermediate CA; the root CA signs everything. And of course: NEVER use this anywhere close to production. (As you can see, even I only used it in my sandbox env.)
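For comparison, the core of such a script, heavily condensed (CA files and hostnames are placeholders):

```sh
# Key + CSR for the leaf:
openssl req -new -nodes -newkey rsa:2048 \
  -keyout myapp.test.key -out myapp.test.csr -subj "/CN=myapp.test"
# Sign it with the existing local CA and set the SANs
# (bash process substitution; use a temp file with plain sh):
openssl x509 -req -in myapp.test.csr \
  -CA ca.pem -CAkey ca.key -CAcreateserial -days 365 \
  -out myapp.test.crt \
  -extfile <(printf "subjectAltName=DNS:myapp.test,DNS:localhost,IP:127.0.0.1")
```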
That's what I use, Caddy webserver with Gandi DNS (also used Route53 in the past) handling the ACME challenge and a `*.l.mydomain.tld` pointing to 127.0.0.1.
Adding reverse proxies for different local services becomes a piece of cake, but mkcert allows you to use `localhost` directly if needed. Personally I never have and like not having to make any changes to the trust store.
As long as you run "mkcert -install" where the client/browser runs, you can use the certificates it generates in whatever deployment you like. If instead you want to run it 100% inside Docker, but your browser is outside, you'll have to manually install the root from "mkcert -CAROOT".
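Concretely, the copy-out step might look like this (the container name and the in-container CAROOT path are assumptions):

```sh
# Pull just the public root cert out of the container:
docker cp devcontainer:/root/.local/share/mkcert/rootCA.pem .
# Install it in the host's trust store; pointing CAROOT at the copied file's
# directory and running "mkcert -install" on the host should work, or add
# rootCA.pem to the browser/OS store by hand.
CAROOT="$PWD" mkcert -install
```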
Fan of your work since your Heartbleed scanner!