Show HN: Mkcert – Valid HTTPS certificates for localhost (filippo.io)
839 points by FiloSottile 72 days ago | 115 comments



I don't understand why you made the CA private key user-readable.

On my system, modifying the trust anchors requires root privileges.

mkcert undoes this boundary by opening a "shortcut" where any code on the machine can mint trusted certificates for any domain (like internet banking).

Why not make the CA private key only readable as root, and the issuance of new leaf certificates a privileged operation?

I would also expect that if/when you add ACME, the CA key material is locked away under a different user from the one in the DE.

Maybe not every platform you support actually has this boundary, but I would expect that mkcert doesn't remove the boundary on platforms where it does exist.

Fan of your work since your Heartbleed scanner!


That is a fair argument, but it is already possible to get the behavior you want: the root key is generated with permissions 0400, so if your first run of mkcert is with root, all following invocations will also require root.
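
A sketch of that workflow (the exact CAROOT location depends on the platform and on how sudo is configured):

    # first run as root: the CA key under "mkcert -CAROOT" is created 0400, owned by root
    sudo mkcert -install

    # from then on, issuing any new leaf certificate needs root too,
    # since mkcert has to read the root key to sign it
    sudo mkcert myapp.test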

My reason for not making it a first class feature (with its complexity cost) is that in the vast majority of cases, an attacker with the ability to read local files can also directly attack the browser. (At least in mkcert’s use case, which is development not deployment.)

Also, I don’t really see how to maintain that boundary once ACME gives certificates to anything that can connect to a local port (without sending headers suggesting it’s a tricked browser).


How about instead adding an alert when not running it as root?


It's similar to the approach used by Telerik Fiddler: you only have to be admin once to install the CA, and it autogenerates certificates as needed for the HTTPS MITM proxy. However, its CA is labelled "DO NOT TRUST", so it's easy to notice in the certificate info window.


I don't like the idea of the root requirement. Sometimes you are logged into a system maintained by someone else, and you want to do things as a normal user.


There's already a root requirement so that `mkcert -install` can deploy to the trust store. The problem is that after this point, the trust store is totally undermined, because the new CA's private key sits around unprotected.

I feel like this is a similar pitfall to how, if you add a user to the `docker` group for convenience, you have (perhaps unknowingly) given that user root access to the host.

With this minor change, mkcert still retains its full function and convenience. Just type your password once in a blue moon when you need a certificate for a new fake real domain.


I don't think `mkcert -install` requires root in all cases. The NSS trust store is stored in ~/.pki/nssdb/ and can be written to without root.
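
For illustration, that per-user store can be managed with NSS's certutil (packaged as libnss3-tools on Debian-ish systems; the nickname and file name here are placeholders):

    # add a root CA to the per-user NSS database (no root needed)
    certutil -d sql:$HOME/.pki/nssdb -A -t "C,," -n "dev root CA" -i rootCA.pem

    # list what is trusted there
    certutil -d sql:$HOME/.pki/nssdb -L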


Can you create/use a local CA as non-root too?


Is there no passphrase on these private keys? Especially the local CA?


Do you really want a valid certificate for localhost?

I can get a valid certificate for development by simply getting a valid certificate for localhost.example.com on my server through let's encrypt and then making localhost.example.com resolve to 127.0.0.1 in my /etc/hosts file.

Some code can behave slightly differently on localhost than on localhost.example.com, for example in deciding whether to keep cookies on a third-level domain, so tests are more reliable that way.
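
A minimal sketch of that setup, assuming you control example.com and obtained the certificate on the server with certbot's default layout:

    # on the dev machine: point the real hostname at loopback
    echo "127.0.0.1 localhost.example.com" | sudo tee -a /etc/hosts

    # copy the cert and key down from the server for the local dev server to use
    scp server:/etc/letsencrypt/live/localhost.example.com/fullchain.pem certs/
    scp server:/etc/letsencrypt/live/localhost.example.com/privkey.pem certs/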


If you don't care about HTTPS on localhost but still want a fully qualified domain name you can simplify this approach even further.

Let's say you have an app running on localhost:8000 on your dev box.

If you go to lvh.me:8000, it will work without having to install or modify anything on your dev box or modify your DNS records for another domain. If your app had an "example" subdomain you could even go to example.lvh.me:8000.
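
For example (assuming the service keeps resolving the way described here):

    dig +short lvh.me          # -> 127.0.0.1
    dig +short example.lvh.me  # -> 127.0.0.1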

I wrote about lvh and other useful free services that help developing apps without needing to install a DNS server or mess with SSL certificates at https://nickjanetakis.com/blog/ngrok-lvhme-nipio-a-trilogy-f....


Regarding lvh, see also:

    localho.st


> resolve to 127.0.0.1

I wish people would stop using my IP address!


You can also use 127.0.0.1.xip.io for that with no modifications to hosts.


Yes, this is what I do. In fact, I have *.localhost.example.com resolve to 127.0.0.1 in DNS. "localhost" may be a special case in browsers.

I'm wondering if anyone is aware of any issues if the private keys were exposed? I'm going to assume that if someone can MITM my localhost, it's already game over.


I was wondering the same thing, but the only realistic attack I can think of is if you happen to make localhost.example.com resolve to example.com and don't check the hostname server-side. And then the end user needs to not look too closely at the address bar.

So perhaps don't do this if your domain is large_bank.com. But if you are aware the keys are exposed you could simply change your DNS to make it resolve somewhere else.

Of course, another option is to use localhost.junk_domain.com for testing so it doesn't matter if you lose the keys.


People tend to forget the good ol' trusty hosts file :) Thanks for noting it. It seems to be the practical solution, really.


What about 127.0.1.0? Loopback is a subnet, not a specific address from it.


I love this simple solution.


I have one slight concern, and one snark.

The slight concern is that most of the pre-built binaries (the "pre-built binaries" link goes to https://github.com/FiloSottile/mkcert/releases) are not signed. Only a few releases are marked Verified. Since this messes with the local trust store, it would be good to have things signed.

The snark is directed at this sentence:

> If you are doing TLS certificates right in production, you are using Let's Encrypt via the ACME protocol.

I agree that Let's Encrypt is great, but I dislike the implication that, if you're not using Let's Encrypt, you're not doing TLS right. Or maybe the text was supposed to be read as…

> If you are doing Let's Encrypt TLS certificates right in production, you are using the ACME protocol.


Current desktop OSes don’t offer fine grained permissions, so there is no reason to hold software with specific features to a higher standard of integrity. Still, I recommend you install anything you can, mkcert included, from your package manager, as shown in the README.

As for the snark, I stand by my provocation: I believe it’s best practice to automate TLS certificate renewal, and for 95% of deployments that means ACME, which for 95% of cases currently (but not necessarily) means Let’s Encrypt.


It's still desirable to have the "release" version of source code tarballs signed, so maintainers who are making packages for a system distribution can check the integrity and get some level of confidence. But fair enough, at least GitHub has HTTPS. And well, it uses AWS for files, so a signature is still useful.

From time to time, I encounter projects where the upstream doesn't support HTTPS and none of the tarballs have signatures. There's a lot of psychological pressure as a packager when dealing with projects like this: it's totally impossible to verify the integrity of the project, and users may get pwned because of my package. My solution is (1) download the packages from different servers and compare the hashes, (2) check the hashes against the repositories of other Linux distributions, to make sure at least my own network was not subjected to a MITM attack...


There have been cases where official download servers have been seeded with bad payloads; I think it is usually resolved within a day or a week. So leaving some time between downloading and comparing those hashes is probably a good factor as well.

Automatic software installation is really a great way to get backdoors into systems.


> Current desktop OSes don’t offer fine grained permissions

Please correct me if I'm wrong, but my impression was that Windows offers several certificate stores on several security levels (AD-domain, machine, user), each one able to separately set ACLs for the certificate container, the individual certificate and the private key.

Just because Linux and OSX don't support fine-grained permissions doesn't mean "current desktop OSes" don't. In fact, the dominant desktop OS out there does this just fine and has for decades.

It's the niche hipster OSes used by developers and security experts (like Linux and Mac OS) which lack this fine-grained control (and I'm saying this as a Linux user).

No need to say that what the guy asks for doesn't exist, when it clearly does in the OS with the clear majority market-share.

Or... Again... Am I missing something crucial or really obvious here?


> Windows offers several certificate stores on several security levels (AD-domain, machine, user)

macOS has different certificate stores and different keychains too. The OP's tool specifically opts to use the "admin" cert store in the system keychain, not user/user (the default for the `security add-trusted-cert` command).

Honestly, if you take Windows users out of the equation, this whole thing could just be a POSIX shell script, which would also solve the "verified release binaries" issue, IMO.
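
For comparison, a rough sketch with macOS's `security` tool (keychain paths differ between macOS versions; the cert file name is a placeholder):

    # per-user trust, no admin rights needed
    security add-trusted-cert -r trustRoot -k ~/Library/Keychains/login.keychain-db rootCA.pem

    # machine-wide admin trust, which is what mkcert opts into
    sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain rootCA.pem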


Windows doesn't have fine-grained permissions for adding to or changing certificate stores, though. When you run "mkcert -install" you'll get a generic prompt for mkcert requiring admin permissions, not a prompt for it changing certificate stores.

I believe the point is that any software asking for admin could fiddle with your certificate stores, so there's no sense in asking for a higher standard of integrity from software that tells you it will do so.


> Windows doesn't have fine-grained permissions for adding to or changing certificate stores, though.

From quick testing, in a non-elevated prompt I can edit my local user's trust store, but not the machine-wide one. In an elevated prompt, I'm allowed to edit both. Seems like there are different permissions for different stores to me. (I haven't tested whether any browser considers the user store; if they don't, it's not very relevant for this specific use case. If I remember right, which browser wants certificates presented in which store was kind of a hassle.)
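
That quick test looks roughly like this with Windows' built-in certutil (not the NSS tool of the same name; the file name is a placeholder):

    rem current-user Root store; works from a normal prompt
    certutil -user -addstore Root rootCA.cer

    rem machine-wide Root store; needs an elevated prompt
    certutil -addstore Root rootCA.cer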


AWS ACM is great, and they handle auto renewal for you as well. LetsEncrypt is fantastic, but there are other options.


AWS ACM is not for general use like LetsEncrypt.

>>You cannot install public ACM certificates directly on your website or application. You must install your certificate by using one of the services integrated with ACM and ACM PCA.

https://docs.aws.amazon.com/acm/latest/userguide/acm-overvie...


Anything with automated renewal is great.


With AWS being the biggest cloud vendor, most people could probably use ACM and still do things right (it's probably even better than LE, as it's harder to mess up). So your 95% above is probably extremely inaccurate :)


Requiring some manual steps for certificate renewal is actually standard practice in many places. The renewal process offers a chance to recognize when you have sites or services that are no longer needed and should be shut down, whether it's to save money, reduce potential security risks, etc. That's not to say that it needs to be completely manual; it could be as simple as an email being sent to a group and someone approving the renewal. Everything after that could be automated.


I'm in the middle of automating certificate renewals for my company and I really haven't thought about anything except Let's Encrypt.

In fact I was surprised to see that various providers offer APIs to deploy paid certs.


APIs to handle obtaining and deploying certificates weren't new with ACME. What's new in ACME is the stuff that lets you automate proof of control of a name. There's a fun thread on m.d.s.policy where I'm explaining this and Peter Gutmann (who does some work in this area) starts out asserting that this feature is so unimportant and stupid that you could easily do it with other existing protocols; when that argument was destroyed he switched to claiming nobody ever wants to do such a thing, and eventually he retreated to the idea that the Web PKI is an insignificant application anyway and so it doesn't matter.

The funniest part is that this all happened not when Let's Encrypt was an untried experiment but months into deployment when it was already a huge success.

It was the equivalent of that moment when an elderly relative informs you that they can't see these newfangled computers ever amounting to much.


I was pleasantly surprised to discover a few other CAs support the ACME protocol; however, it's either a free cert without support for SANs or certificates that require manual approval via a web control panel.

I’m not against paying for a cert but don’t make it much harder than LetsEncrypt to use it.


So if I'm using AWS certificate thingy I'm doing it wrong? Should I feel dumb, too? How dumb should I let myself feel before changing it to what the blog post says?


:/


> I agree that Let's Encrypt is great

A single company managing the security for the entire internet? That doesn't sound great.


Let's Encrypt has dramatically helped increase the number of websites that are secured with SSL. They're important, but not even close to being the sole certificate authority.

Besides: their stack is open and available. If monopoly ever becomes a concern in the future, other CAs could spin up using the same software. But that doesn't seem like something worth being concerned about right now.


It has, to the point that HTTP is now a problem. Imagine that LE vanishes: what happens then?

62% of sites use letsencrypt [0], up from 35% in Nov 2017 [1]. Monopolies are bad, be they Microsoft or Google, and that's a worrying trend.

[0] https://censys.io/certificates/report?q=tags%3Atrusted&field... [1] https://news.ycombinator.com/item?id=15744359


In fact, there have been some HN posts lately talking about having a backup CA, since LE is not the only free CA that allows automation of cert creation.

Hopefully everyone has documented all the systems that auto-generate certs and what certs they request, and put that in a manifest so that they have a contingency plan in the event that LE is knocked offline or shut down for whatever reason. It can certainly happen.


It's worth noting that UAs can treat localhost, 127.0.0.1, etc. as secure even when they're visited over plain HTTP:

https://w3c.github.io/webappsec-secure-contexts/#potentially...

Chrome (in the post's screenshots) follows this; I'm not sure about other UAs.


It has been broken in Firefox since version 62: https://bugzilla.mozilla.org/show_bug.cgi?id=903966


That's an old, long closed bug. I think the bug you want is https://bugzilla.mozilla.org/show_bug.cgi?id=1488740


It's not, the comment is mistaken, CORS is unrelated.

But you need to use 127.0.0.1, not `localhost`


This does not work if you want to use a hostname other than "localhost" (e.g. for running a VirtualHost) pointing to your local machine.


Open up a world of fun by running your local services, like proxies, on 127.1.2.3, 127.2.3.4, etc., with the matching hostnames in /etc/hosts.

You have 127.255.255.255 to play with!
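
For example (the hostnames and addresses below are made up):

    # give each local service its own loopback address and name
    printf '127.1.2.3 proxy.test\n127.2.3.4 api.test\n' | sudo tee -a /etc/hosts

    # two services can now share port 8080 without conflicts
    python3 -m http.server 8080 --bind 127.1.2.3 &
    python3 -m http.server 8080 --bind 127.2.3.4 &
    curl http://proxy.test:8080/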


You mean 127/8


There's nothing wrong with the previously used notation.


127/8 may only be understandable to those who already know that the whole 127.255.255.255 range is reserved for localhost use.

Sometimes being understandable is more important than using the most precise and shortest notation.


127.255.255.255 is a broadcast address in the /8. It doesn't describe the number of bits in the network; you need a netmask or CIDR notation for that. For IPv6 you need CIDR, so I suggest learning it.

CIDR is understandable for people who know basic networking. If someone doesn't understand what 127/8 means (which is totally OK if someone doesn't) I'd argue they shouldn't learn about the topic we discuss (valid HTTPS certs for localhost); they should learn something as basic as CIDR first. CIDR isn't difficult to understand. Basics first. You can't learn to speak English if you don't know how to open and close your mouth...


Yes, there is. Either you mention an IP address with a netmask such as 127.0.0.0 255.0.0.0 or you use CIDR. Mentioning a broadcast address to describe an address space is wrong because it doesn't describe how many bits (netmask and CIDR do that for IPv4).


Well this is timely. I spent several hours today trying to get local https working. I needed it because iOS Safari will only let you make getUserMedia (WebRTC) requests for the webcam over secure connections. I needed access to the webcam because Safari WebRTC won't give you access to other browsers on your LAN for data channels (which have nothing to do with your webcam) unless the user gives permission to use the webcam[0]. I needed access to LAN data channels because I'm working on a multiplayer game that I'd like to work on the major browsers. I had given up and finally decided to just use my VPS with Let's Encrypt for testing iOS, but maybe I'll give this a try. Worst case it doesn't work. Now I just need to figure out how to convince my users that taking a selfie is a reasonable requirement to play my game.

[0] https://github.com/webrtc/samples/issues/1123


I have used localtunnel over https for this: https://localtunnel.github.io/www/


Nice, is this similar to ngrok?


Same principle, though less well maintained and somewhat iffier, especially the server bits; you may want to (or have to) set up your own localtunnel server[0], though an advantage is that you can do that.

FWIW for my use case (testing github API calls & webhooks feedback), ngrok with a free account was more reliable, and interacted better with little snitch.

[0] https://github.com/localtunnel/localtunnel/issues/261#issuec...


Are you sure you can't do getUserMedia on localhost? On Chrome at least it overrides localhost to be considered a secure URL, so service workers etc function fine over HTTP.


It's on iOS. Not sure about desktop Safari


A private CA is a good route if you plan on setting up a long-standing, internal-only test server and devices. OTOH if you just want to access https://localhost on your PC things like "chrome://flags/#allow-insecure-localhost" exist and are easier to manage.


>OTOH if you just want to access https://localhost on your PC things like "chrome://flags/#allow-insecure-localhost" exist and are easier to manage.

is there anything similar for firefox?


You might find using mkcert to be even easier than generating self-signed certificates :) and when it's time to test with a real hostname, it's just a matter of adding a line to /etc/hosts and rerunning mkcert!
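
Concretely, that two-step flow is something like (the hostname is just an example):

    echo "127.0.0.1 myapp.example.com" | sudo tee -a /etc/hosts
    mkcert myapp.example.com localhost 127.0.0.1 ::1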


Do you know how to do it in Safari?


For local web dev/testing, most tools generate temporary test certificates automatically, or it's easy to get them, so the only real issue is production use, e.g. for communicating from web apps to local services.

The article links to a Let's Encrypt post (https://letsencrypt.org/docs/certificates-for-localhost/) where they mention that modern browsers consider "http://127.0.0.1:8000/" to be "potentially trustworthy". Several years ago it was just impossible to communicate from https to local http. Now most browsers (excluding Safari) support communicating with local http from https pages. And there is hostile opposition from a Safari developer to supporting this, even though it is now a standard. (https://bugs.webkit.org/show_bug.cgi?id=171934)

Distributing an unconstrained CA root certificate and asking users to install it is a terrible idea. Even generating one locally is dangerous if the private key is easily exportable. If one still needs to support local https, it's better to limit the scope as much as possible, e.g. to issue a certificate only to a sub-sub-domain and apply name constraints, as @tedunangst mentioned. It's easy to do this with the XCA tool (see also the openssl sketch after the links below):

* XCA: https://github.com/chris2511/xca/

* Name Constraint Wiki: https://wiki.mozilla.org/CA:NameConstraints

* How to create CA with NC using XCA: https://security.stackexchange.com/questions/31376/can-i-res...

* An example of how valid/invalid certificates look on Windows: https://github.com/DataSpreads/CA

(Name constraints are also not currently supported on macOS.)
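
For those who'd rather script it than use a GUI, a name-constrained root can be sketched with openssl roughly like this (the constrained domain is an example, and this is not what mkcert itself does):

    # ca.cnf
    [ req ]
    distinguished_name = dn
    [ dn ]
    [ v3_ca ]
    basicConstraints = critical, CA:TRUE
    keyUsage = critical, keyCertSign, cRLSign
    nameConstraints = critical, permitted;DNS:.localhost.example.com

    # self-signed root that can only vouch for names under localhost.example.com
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -subj "/CN=dev root CA (name constrained)" \
      -config ca.cnf -extensions v3_ca \
      -keyout rootCA-key.pem -out rootCA.pem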


Cool. I wish there were an option to specify which trust stores to install to, for example only Firefox's. The current default is to install into the system (requires sudo), Chrome (which I don't use), and Firefox trust stores.


In my experience, you don't want HTTPS for localhost, you want HTTPS for a server in the local network so that you can also test on other devices than the development machine.

A convenient solution in case you can control DNS at the local network level is to just get an ordinary certificate for a real domain (for example dev.mydomain.com) and have it resolve to that local IP.

Using your own CA is possible too, of course, but getting it installed on all the devices is a nuisance.


Using the DNS challenge with LE, you can create a wildcard cert that is valid for *.domain.tld. Then you just need a simple local DNS server like Pi-hole to resolve any local domains to your local reverse proxy, which serves the local sites with the wildcard cert, and you're done. You only need an internet connection in the user's browser to get the little green lock, and when you generate the wildcard cert itself.
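
The wildcard step is roughly (the domain is a placeholder; wildcard issuance requires the DNS-01 challenge):

    certbot certonly --manual --preferred-challenges dns -d '*.home.example.com'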


If you do it this way, you have to copy your actual server’s private key to your local machine (or even machines if you’re using several), though. That possibly increases the chances of it getting compromised.

I’d prefer getting a separate certificate for local.domain.tld instead.

Please correct me if I have a misunderstanding.


Wouldn't this need an internet connection to check the authenticity of the certificate?

How do you make it work if you are working offline, say in a plane ?


Routing to an IP in the local network obviously doesn't work outside that local network.

Re-routing the domain to the local machine (by editing /etc/hosts) should work just the same. A certificate from a trusted CA can be validated without an internet connection, they are part of the OS/Browser distribution.


I've been using https://www.tinycert.org/ for years, which basically lets you create your own certificate authority and issue certs (which obviously aren't trusted by everyone but can be trusted by you/your team). It's ideal for generating SSL certs for ephemeral apps, e.g. review apps on Heroku, since it can all be done using an API.


Am I wrong to think this opens a gaping security hole?

Adding a new root CA means anyone who gets their hands on the keys (which are openly available to the user of mkcert) gets to completely obliterate your ‘trust’ store and MITM any of your secure connections.


AFAIK attackers with local access already have lots of ways to exploit your system.

However, for added security you could run mkcert on another computer that is not connected to a network. Then you just copy the root cert and the leaf cert to your dev machine, but leave the root cert’s private key offline.


That is a risk, but not an uncommon one. There are a dozen other workflows that rely on users installing CAs (maybe not quite without root, but installed all the same), so this isn't a new idea.

Also, it seems to me that virtually any way to get those keys involves hostile code running on your machine, meaning you're already toast.

I do also wish it didn't install certs as non-root, but as pointed out in another comment, there is a workaround for that.


I've always worked on low-level C++, so I never needed to understand how HTTPS certificates work... until I needed to test some code for WebRTC applications.

This tool was really helpful. When you just want to test your app and don't care at all about all those security restrictions imposed by the browser, mkcert helps, so thank you a lot for this.

At the beginning it was a bit confusing, because some knowledge about how all this stuff works is assumed. As a complete stranger to it, I ended up asking for help; the linked issue includes some exact steps one can follow to generate and install a certificate, so it might be helpful for someone reading this:

https://github.com/FiloSottile/mkcert/issues/60


This is great, will definitely check this out. I just struggled for hours following multiple blog posts and howtos to set up HTTPS locally, because I was trying to test Facebook login, which only allows HTTPS and can't be tested otherwise.

Definitely a great product if it works as advertised!


While the tool being showcased here is great, there's really no substitute for doing it yourself and getting your hands dirty. Too many people (not saying you) use these kinds of tools as a crutch instead of learning the basics themselves.

This is a great guide on how to set up SSL on localhost https://deliciousbrains.com/ssl-certificate-authority-for-lo...


Indeed, it's good to learn how those things work (I did, while going through a process similar to your article's), but a tool like this is definitely good for saving time.

One could make a similar comment about using cURL vs building your own HTTP requests, to learn the basics :)


Last time I set up a certificate for localhost, I was unable to get *.localhost to work because wildcard "tld" certificates are rejected by the browser.

I wonder if browsers will allow this to work for localhost?

Here's the script I used to do this last time, just a couple of openssl commands: https://gist.github.com/xPaw/840f378f3fed64806b46211c2287b52...


Correct, it's not honoured by Chrome.


I love this tool a lot. It's super helpful. I can usually get away without it, but some tools really require it.

For example, I implemented the Google OAuth flow recently, and it has a redirect URL that you have to pre-define. To simplify the code and config, I want to make HTTPS the default and only option, instead of checking the env (dev/prod) to decide whether to enable HTTPS. This simplifies the code and still allows me to test HTTPS with it.


I needed this three years ago!

My solution was to use AWS and route 443 to 80 in a load balancer. That way there IS NO MIXED content: it is all HTTPS. (And certificates are free on AWS!)

If you don't like AWS, can't you just use nginx as your load balancer and do the same thing?

Why should a website care about what port traffic is coming in on? Shouldn't the TLS happen IN FRONT of your website so the website can be agnostic?


mkcert is just making it easier to generate & install self-signed certificates... Nothing that couldn't be done before, and using nginx as you describe is pretty standard practice. I don't really see why you went for the AWS solution.
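
As a sketch, TLS termination in front of a plain-HTTP app with nginx looks roughly like this (names, port and paths are placeholders):

    server {
        listen 443 ssl;
        server_name dev.example.com;

        ssl_certificate     /etc/ssl/dev.example.com.pem;
        ssl_certificate_key /etc/ssl/dev.example.com-key.pem;

        # the app itself only ever speaks plain HTTP
        location / {
            proxy_pass http://127.0.0.1:8000;
        }
    }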


Ah, I crammed a bunch of threads into my terse answer. We use AWS because Elastic Beanstalk works better for our environment than Heroku. Plus, with S3 integration (and Route 53), we found it doesn't make sense to maintain the OS ourselves, and we keep everything on one cheap platform (cheap because it can be scaled up/down very quickly). Using the LB to send everything to :80 on a VPC means our devs don't need to dink with TLS on their remote laptops.


So the tool can generate a CA cert and automate its installation. In my experience it can be painful to teach other developers to generate and install their own, especially if the Java keystore is involved, so this could be very useful for large teams.


I solved this problem by requesting a TLS certificate from my company's Windows domain CA for my machine's hostnames on the internal network (machinename and machinename.company.com - both added to the same cert via Subject Alternative Name). Then I mapped those domains to 127.0.0.1 in /etc/hosts so it works even when I'm not on the internal network.

From there it's simple to host the front-end app I work on using a node.js script.


I'm almost 99% certain that everyone who's this concerned already has a domain (or they can get a free domain from something like .tk, as it really does not matter since this is for purely local development).

So I feel like the following workflow is simpler, no?

1. Use something like local.mydomain.com as your local dev domain. (set the DNS in Cloudflare / Netlify etc. to 127.0.0.1)

2. Use Let's encrypt to generate certs for that domain.

Am I going about this the wrong way? (or is there something super insecure that I've missed?)


For the cert part, LetsEncrypt specifically recommends against that: https://letsencrypt.org/docs/certificates-for-localhost/

For the DNS part, I honestly think a hosts file entry is more flexible, as you can support environments using VMs/containers etc. with a guest that has a DHCP address.


The security issue comes in when you ship the private key. If you are following best practices, won't the private key be different for each domain / managed in a better way?


So, now you're going to give each member of your team a way to authorise valid certificates for your domain? Great, I don't want to imagine what your HR/security vetting process will be after the first abuse of that power.


I had not thought about that...great points!

This is way simpler in that case!


That's neat! I'm writing a web framework for Python that has some built-in features to make testing on localhost easier: you can pass it the file name of an existing certificate, or it will generate one itself. The disadvantage of the no-setup-required autogeneration is that you need to add a security exception on first use and every time the root expires. `mkcert` could definitely help with not training web developers to perform unsafe actions.


It would be nice if the root could be generated with name constraints so losing it isn't quite so catastrophic.


Cool stuff, thanks.

I guess I am not the only one with a `genssl.sh` lying around that grew big over the years: using openssl, generating a CA and cert requests, then signing and properly setting `subjectAltName`, etc. :-D

I like mkcert since it also injects the root CA into the trust store; I was always too lazy to do this programmatically.


If you have a commented version of this script (actually even an uncommented one), I would love to be able to read it.


Quick&Dirty (just replaced company details):

https://gist.github.com/kmARC/458a2afbd55635eef3b5f911359ce5...

Also note that this doesn't have an intermediate CA; the root CA signs everything. And of course: NEVER use this anywhere close to production. (As you can see, even I only used it in my sandbox env.)


Thanks!


What's wrong with Let's Encrypt and the DNS challenge? Then you get a globally recognised cert.


Nothing, but that's not what this is for at all.


You cannot use them for localhost


Seems a lot easier to register your own domain (useful for so many other things), point localhost.mydomain.com to 127.0.0.1 and run:

    sudo apt-get -yy install --no-install-recommends certbot
    mkdir -p le
    certbot --work-dir ./le --logs-dir le --config-dir le -d localhost.mydomain.com --manual --preferred-challenges dns certonly
Follow the instructions for adding the TXT record and you have a valid cert in minutes.


Yes you can. You control the DNS, just point that subdomain to 127.0.0.1.


That's what I use: the Caddy webserver with Gandi DNS (I also used Route 53 in the past) handling the ACME challenge, and a `*.l.mydomain.tld` pointing to 127.0.0.1.

Adding reverse proxies for different local services becomes a piece of cake, but mkcert allows you to use `localhost` directly if needed. Personally I never have and like not having to make any changes to the trust store.


Why do you need a reverse proxy? The DNS challenge works without https.


Does anyone know if there's a publicly available domain that does RFC1918 resolutions? Such as

  192-168-86-30.localresolve.com has A record 192.168.86.30
  10-10-10-1.localresolve.com has A record 10.10.10.1
and so on.


Might be close:

http://lvh.me is a free service that resolves itself along with all subdomains to localhost.

Edit: http://nip.io/ looks like it does exactly what you asked.


YES! Thank you. As an indie PWA developer this is exactly what I've been looking for.


Will it work with the server inside a Docker container and my browser on bare metal?


As long as you run "mkcert -install" where the client/browser runs, you can use the certificates it generates in whatever deployment you like. If instead you want to run it 100% inside Docker, but your browser is outside, you'll have to manually install the root from "mkcert -CAROOT".
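
One possible arrangement (the image name is a placeholder for something that has mkcert available; mkcert reads the CAROOT environment variable):

    # generate the CA and a leaf cert inside the container, keeping the CAROOT on a shared volume
    docker run --rm -v "$PWD/caroot:/caroot" -v "$PWD:/work" -w /work \
      -e CAROOT=/caroot my-dev-image mkcert myapp.test

    # then manually trust ./caroot/rootCA.pem in the host browser / OS trust store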


I've forever wondered if there was a way to do this. I was just trying to come up with a good solution for trusting my self-signed certs on my machines.


Can this be used for hosts-hacked domains? I tend to use a hosts hack where www is replaced with dev on my localhost, like "dev.somedomain.xx".


If you need to keep a database of certs, check out XCA.


I'm doing something like this in bash for hosts where I don't own the certificate for the domain, using my own root CA and adding it to the Java keystore.

https://pastebin.com/ChNiHbYL


Awesome! This looks like it'll solve a real pain point. Thanks for sharing :)


Awesome. Thanks a lot.


What is the benefit of having a certificate on localhost? I don't really see the point if you are on your local machine.



