
Certificates for localhost - colinprince
https://letsencrypt.org/docs/certificates-for-localhost/
======
zaroth
I’ve always liked the concept of a localhost’d web app talking back to a
localhost web server. It seems like a great way to get the cross-platform ease
of use of developing the UI without having to do _everything_ in browser, so
you can optimize the heavy lifting and don’t end up with an Electron app
pulling 8 GB of RAM and 100% of 16 cores.

But I could never quite satisfy myself that the localhost server could
adequately be secured against outside network requests being routed to it, or,
as TFA mentions, inside network requests being routed away from it to an
outsider!

This article helped enumerate some of the difficulties of securing such a
service. Things like a memory-safe parser, checking origins, etc.

I wonder: is there a definitive guide someone has set up, or even better a
sample Golang (or similar) localhost server, which demonstrates the dozen-odd
layers of checks, protections, and magical incantations necessary to make such
a server “secure”, in the sense that a localhost UI is able to make requests
to it and receive sensitive data, while it stays safe from external attackers
trying to spoof the same requests?

~~~
mholt
> _I’ve always liked the concept of a localhost’d web app talking back to a
> localhost web server._

We're doing exactly this prime-time with Relica:
[https://relicabackup.com](https://relicabackup.com) (sorry, not much on the
landing page yet, but we have emailed out some info about the UI already [1]).
That technique will allow us to distribute backup software that works the same
for macOS, Linux, BSD, and Windows, right away; screenshot: [2]. And it's very
lightweight.

An added benefit of this approach: we were able to take its REST API and,
after writing a small custom Go package, we instantly gained an elegant CLI
[3] so it has a headless mode too! With all the functionality of the GUI [4].

I haven't seen much consumer software that does this, and I'm not sure why, so
I feel like we're taking a bit of a risk, but I think (hope) it will pay off;
the benefits have already started becoming clear and they're definitely
appealing.

[1]: [https://mailchi.mp/2b5e7f57e400/a-brief-introduction-to-
reli...](https://mailchi.mp/2b5e7f57e400/a-brief-introduction-to-relica)

[2]:
[https://twitter.com/relicabackup/status/1005105584260067329](https://twitter.com/relicabackup/status/1005105584260067329)

[3]:
[https://twitter.com/relicabackup/status/1006204516344086528](https://twitter.com/relicabackup/status/1006204516344086528)

[4]:
[https://twitter.com/relicabackup/status/1006206423821254656](https://twitter.com/relicabackup/status/1006206423821254656)

~~~
zaroth
Thank you, yes, I think architecturally there are great advantages to
splitting an app into client/server even when it's designed primarily to be
accessed over localhost.

Obviously the “server” API is extremely sensitive and I think you have to
assume it is effectively exposed to the outside world, even with a 127.0.0.1
binding and a firewall.

I guess if you make localhost users literally log in and establish a session
then you could consider yourself safe. But it’s a weird experience logging
into a local application. So whatever you are doing to authenticate the
request as local, it has to be unspoofable.

I just don’t think I trust the HTTP headers enough!

~~~
mholt
Yeah, we don't trust just anything that comes in on that socket. For example,
we implement standard CSRF mitigations like checking Origin/Referer headers.
We also don't use DNS at all in the local frontend/backend interactions and
require the Origin to be exactly "127.0.0.1" (or the IPv6 equivalent) which is
what we bind to.

(Edit: I just looked it up again and I'm 99% sure that web pages can't
override the Origin header, especially when making cross-origin requests.)

~~~
orf
> I just looked it up again and I'm 99% sure that web pages can't override the
> Origin header, especially when making cross-origin requests

What about other, potentially unprivileged software running on your machine
that can?

~~~
mholt
If you've got rogue software running on your machine, all bets are off.

~~~
Ajedi32
Not necessarily. It could be running in a sandbox, or as an unprivileged
user, and accessing your app's API over localhost could allow for privilege
escalation.

~~~
mholt
I see your point. But in my book, that's still rogue software. And that's a
terrible sandbox. :)

~~~
orf
I see your point, but you're saying:

> Yeah, we don't trust just anything that comes in on that socket

Well, you are, you're trusting it came from a browser so the Origin header is
correct, right?

------
BillinghamJ
The Plex approach to this kind of problem is pretty interesting:
[https://blog.filippo.io/how-plex-is-doing-https-for-all-
its-...](https://blog.filippo.io/how-plex-is-doing-https-for-all-its-users/)

Unfortunately I haven't seen it being done elsewhere. It'd be nice if
LetsEncrypt or similar could provide this for more generic everyday use.

~~~
randomfool
As discussed in crbug.com/378566, Chrome currently allows connecting to unsafe
WebSockets on localhost. So just use a WebSocket to communicate from your
HTTPS hosted page to your local server.

And yes, you definitely should whitelist access based on the origin header.

~~~
chillydawg
I tried this and Chrome complains about mixed content and forces you to allow
this behaviour and then reload the page before you can get it working. If
that's acceptable, then sure, but for actual use it's not really any good.

~~~
randomfool
Did you connect to ws://localhost:<port>/?

As mentioned in the bug above, this approach is currently recommended by
Chrome and in use by a number of large applications/sites.

It's completely transparent and is being used in production today.

------
xg15
What I don't understand is this bit:

> _Fortunately, modern browsers consider “http://127.0.0.1:8000/” to be a
> “potentially trustworthy” URL. [...] WebSockets don’t get this treatment for
> either name._

It's good they at least added an exception for (verifiable) localhost access -
but then why is the exception _only_ given for HTTP? There seems to be a
deliberate restriction that websockets are excluded.

I find this kind of strange and frustrating, as the WebSocket wire protocol
actually contains _more_ protections against accessing vulnerable services
than HTTP does. So without this exception, you have _no_ way at all to
connect to a local service via WebSockets.

I've found this ticket for Chrome[1] where apparently the rationale is that
they want to move people to their own IPC messaging mechanism. However, this
is Chrome-only and requires you to register a Chrome extension.

So if I get this right, this still leaves no standard way to have asynchronous
communication with a local process.

[1]
[https://bugs.chromium.org/p/chromium/issues/detail?id=418482](https://bugs.chromium.org/p/chromium/issues/detail?id=418482)

~~~
cjm
Chrome permits access to ws://127.0.0.1 from https apps. I believe it's also
making its way to Firefox, which is currently http only. Other browsers
(IE/Edge/Safari) don't even follow this exception for http yet.

~~~
xg15
Interesting. My impression from the above bug was that they explicitly decided
against keeping it open. If they reversed their position, that would be good
news.

Could you give a link where this is discussed? I see crbug.com/378566
mentioned in another comment, but there doesn't seem to be a decision in
there, apart from general acknowledgement that the use-case exists.

------
jongalloway2
If you're using ASP.NET Core, this is built into the most recent 2.1 release -
local dev cert as well as HTTPS redirection middleware and HSTS in
development.

[https://blogs.msdn.microsoft.com/webdev/2018/02/27/asp-
net-c...](https://blogs.msdn.microsoft.com/webdev/2018/02/27/asp-net-
core-2-1-https-improvements/) [https://docs.microsoft.com/en-
us/aspnet/core/security/enforc...](https://docs.microsoft.com/en-
us/aspnet/core/security/enforcing-ssl?view=aspnetcore-2.1&tabs=visual-studio)

As the letsencrypt article points out, you want to start building and testing
with HTTPS as early as possible, so this is all wired up as part of creating a
new project with ASP.NET Core 2.1.

[disclaimer: on the .NET team]

~~~
ryanjshaw
Came here to write this, very impressed! Saved me so much time. It might help
others to point out that you need the ASP.NET Core 2.1.300 SDK -- 2.1.200 is
not sufficient, and it took me a while to realise this. Perhaps I'm veering
off topic, but why no TypeScript in the 2.1 React/Redux templates?

------
rootusrootus
I was hoping for a good answer, rather than "this is hard and it will just get
worse." We have an old app that my team is modernizing with this exact
situation. Uses websockets now, but that's an historical thing and all the web
apps were non-secure so it worked okay. Now everyone wants SSL turned on, and
this puts the websocket method in jeopardy. Somebody before me decided we
should switch all the inter-app communication to an external XMPP server. Ugh.

~~~
recursive
What's wrong with wss://?

~~~
rootusrootus
As a practical matter you cannot implement a localhost secure websocket.
Essentially all of the methods for communicating between a web app and a
native app are some kind of ugly hack and none are supported well enough that
you can feel confident they'll stick around for any length of time in all the
major browsers.

------
judge2020
Relevant discussion regarding Activision/Blizzard's use of the domain
"localbattle.net" (pointing to 127.0.0.1) for localhost communication with the
agent:

Reddit:
[https://www.reddit.com/r/heroesofthestorm/comments/7lb8vq/he...](https://www.reddit.com/r/heroesofthestorm/comments/7lb8vq/hey_blizzard_whats_the_deal_with_this_sneaky_root/)

Blizzard response:
[https://us.battle.net/forums/en/bnet/topic/20760626838](https://us.battle.net/forums/en/bnet/topic/20760626838)

~~~
rnotaro
It reminds me of that time when I accidentally went to a domain that had its A
record set to 127.0.0.1. I was extremely confused about why a random domain
had a copy of my project.

------
beagle3
Everyone here and TFA talk about 127.0.0.1 (a.k.a. localhost), but the entire
127.0.0.0/8 subnet is routed to the local machine - does anyone here know how
browsers treat the other addresses?

Do they compare to 127.0.0.1? to 127.0.0.0/8? Do they consult the routing
table?

------
thatha7777
I suspect that the default reading of this text means "add your localhost.crt
to the system's locally trusted roots" (or worse, "import the root certificate
[to your locally trusted roots]"). Although there are methods to "install
localhost.crt in your list of locally trusted roots" so that it's only visible
to your application, I'm concerned about the naive reading of the post.

I would be very concerned to find that a random application, like Spotify, is
installing root certificates on my machine, as that would allow them to MITM
any connection that doesn't have some kind of key pinning.

Although it's not exactly the same case (the driver's shenanigans had no
legitimate use, while Spotify does have one), an audio driver caught
installing a CA root certificate was reported as a CVE vulnerability:
[https://www.kb.cert.org/vuls/id/446847](https://www.kb.cert.org/vuls/id/446847)

~~~
vbezhenar
Blizzard application installs root certificate:
[https://news.ycombinator.com/item?id=15982161](https://news.ycombinator.com/item?id=15982161)

I can say that I wasn't even aware of it; it installs absolutely silently.
So if you care about this, check your certificate store from time to time,
because some random application may install a certificate at any time!

~~~
0xCMP
It's not a root certificate in the CA sense. It can't sign new certificates
and is only for the browser to communicate with the https server running on
the same computer.

The key isn't even distributed anywhere. It's generated locally and then
marked trusted so the browsers don't show warnings.

Checking the list of certificates from time to time is a good idea, although I
wonder if anyone really does that very often without some automated help. But
in this case, Blizzard was doing something that was not only fine (using a
self-signed cert to secure local traffic) but the absolutely correct way to do
it, as additionally explained by Let's Encrypt themselves.

edit: I guess the one thing is: If you can find the certificate's private key
you can serve your own server at that domain and launch an attack using a
trusted certificate without needing admin permission to add your own malicious
certificate.

~~~
vbezhenar
I agree, but to me their reasoning was weak and I would opt out of this
feature, because it's not even relevant for me. I don't like it when someone
messes with my trusted certificate store. And they installed that certificate
absolutely silently; I learned about it from Hacker News.

------
Grue3
It's an absolute joke how many hoops one needs to jump through to do this very
basic thing. Hell, this command line is longer than the code required to start
a web server in some programming languages.

~~~
tialaramex
To be fair, the complicated openssl invocation is largely because OpenSSL is
crap. Let's break it down:

"openssl req -x509"

This is a dodge because we don't actually want to write a CSR and then sign
the cert, we're going to skip all that so we're using a sub-feature of a
different sub-feature. Fine.

"-out localhost.crt -keyout localhost.key"

There surely must be nicer ways to set this, but it's not so objectionable...

"-newkey rsa:2048"

This is arguably boiler-plate, the configuration file can set this default,
but there's a good chance "your" config file was pasted in by an OS vendor ten
years ago and says e.g. 1024-bit RSA, or worse.

"-nodes -sha256"

Now we're getting into the nonsense. We don't want DES encryption. Nobody
wants DES encryption, and if we did want DES encryption we could specify the
passphrase for it, which we didn't, so this needn't be here. SHA-256 has been
the reasonable baseline choice here for years and so likewise we shouldn't
need to specify.

"-subj '/CN=localhost'"

This is pointlessly arcane and shouldn't be needed, but it's only partly
OpenSSL's fault. This abuse of X.509 Common Name was obsolete on the Internet
last century, and it's annoying that people were still coming to terms with
that in the last few years, so certs which lack a Common Name often don't
interoperate; thus it's easier to put it in anyway.

"-extensions ...."

This part, which involves a multi-line sub-shell and other shenanigans, is
completely out of hand. This is the shortest, least crazy way to ask for the
certificate practically everybody running OpenSSL actually wants, and yet
instead of being the default or offered with some easy to understand command
line parameter, even in brand new versions of OpenSSL it's done only by this
arcane hack.

All Web PKI certificates this century are supposed to use SANs (Subject
Alternative Names), so you'd expect an openssl feature named say, "-san" which
adds one such name, and OK, this being OpenSSL maybe you'd need to manually
write "DNS:localhost" rather than it being smart enough to figure it out if
you write "localhost". That'd be stupid, but par for OpenSSL. But no, you have
to manually specify how SAN extensions even work in the X.509 certificate, as
if it has never seen one before, even though every valid cert for the Web PKI
has one. It's like if Firefox made you type not just the HTTP port default of
80 into URLs, but actually made you specify that you want to use TCP/IP in
case maybe you're using Novell IPX or something.

~~~
j4cob
FYI, OpenSSL has a new flag that makes the "-extensions" part somewhat
simpler, but most people won't have it in their copy yet:
[https://github.com/openssl/openssl/pull/4986](https://github.com/openssl/openssl/pull/4986)

~~~
kawsper
Interesting. Is it normal that there are no tests for a feature like that, or
does testing happen somewhere else?

------
tootie
I'm having a hard time understanding the use case here. Using a domain name
for loopback IP and generating a cert will work fine for internal use. They're
saying it's a security hole because you may need to distribute that private
key to users. What exactly is that scenario? Shipping an app with a built-in
web server? Not sure I've ever seen that done. And could you not solve it with
certificate pinning?

~~~
kilburn
> What exactly is that scenario? Shipping an app with a built-in web server?

I've had this need a couple times. My scenario: web-based application that
needs to use some usb-gadget (NFC reader for example).

You cannot access the device directly because there's no (standard) Browser <>
NFC Reader API implemented in the browsers. You need native code to access
that device. Yet your entire application is web-based, and it would be fine if
only you could interface with that pesky gadget.

In the past, people used NPAPI plugins for that, but those days are gone
(browser plugins are much more sandboxed now, so they just don't have those
capabilities anymore).

Solution? You ship a small companion native app. That native app is supposed
to interface your web application with the usb device. How do you achieve
that? Simple, the native app exposes an http/websocket-based API at
"localhost:someport" on the one side, and talks to the device using native
drivers on the other.

If you don't use https in your web-app, that is it. Your web-app can now
communicate with your native app (by making requests to "localhost:someport"),
and through your native app communicate with the device. Problem solved...
right?

Wrong, because your web application should be using https. For security
reasons, the browser will _not_ allow connections from an https-page to a non-
https (or secured web sockets) server. Thus, your native app's exposed API
must use https/wss too. And here is where the article's ordeal begins.

> And could you not solve it with certificate pinning?

Pinning a certificate is not secure because using the same certificate in all
installations is not secure: notice that your native app must have the
certificate and the corresponding private key (because it must serve requests
under that "localhost:port"). At this point, any of your users could just grab
that cert/key pair (from their local installation) and use it to man-in-the-
middle your other clients (the cert is valid for them too!).

The problem is exactly the same if you obtain a publicly-recognized
certificate and distribute it with your native app.

The secure solution is, as the article says, to generate a certificate
_specific for this user_ during the installation of your native app, and
adding that certificate to the user's certificate store. This way the
certificate is only valid for that user. Your native app can use this user-
specific certificate to serve https/wss requests, and the user's browser will
connect to that without warnings because the certificate is trusted. However,
if user A extracts the certificate and key from her installation and tries to
use them to man-in-the-middle another user B, it won't work, because the
certificate B trusts is his own installation-time-generated one, not A's
certificate, which is different.

~~~
AgentME
>Wrong, because your web application should be using https. For security
reasons, the browser will _not_ allow connections from an https-page to a non-
https (or secured web sockets) server. Thus, your native app's exposed API
must use https/wss too. And here is where the article's ordeal begins.

The article also says that the website could make requests to
[http://127.0.0.1:portnumber/](http://127.0.0.1:portnumber/) and browsers will
allow that even from HTTPS websites.

~~~
kilburn
It works for http requests, but not for websockets (the article also mentions
that). If you want decent bidirectional communication (no polling, i.e., the
kind you'd want to get reads from an NFC reader) that's not a good option.

Also, I'm willing to bet browsers will block that at some point too...

------
wildpeaks
Good timing, I was working on a short tutorial on how to set up HTTPS for
localhost, including how to get a green lock even in Chrome 58+:
[https://gist.github.com/cecilemuller/9492b848eb8fe46d462abeb...](https://gist.github.com/cecilemuller/9492b848eb8fe46d462abeb26656c4f8)

------
zapdrive
Easiest way is to get a certificate for a subdomain of a domain you own, e.g.
dev.example.com, and then point dev.example.com to 127.0.0.1 in your hosts
file.
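For the hosts-file half of that, the entry would look something like this (the name mirrors the example above):

```
# /etc/hosts -- point the dev name at loopback
127.0.0.1    dev.example.com
```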

~~~
philip1209
This sounds like a bad idea. You don't want private keys to a production
subdomain being handed around teams. For instance, let's say you have
dev.mybank.com. Somebody could trivially poison a DNS cache for a local system
to redirect to their server, have a valid SSL key on the company domain, and
implement a very real-looking phishing website for the company.

Another problem - controlling a subdomain could be used to steal login cookies
from the main website. This is why GitHub moved GitHub Pages to a separate
domain: [https://blog.github.com/2013-04-09-yummy-cookies-across-
doma...](https://blog.github.com/2013-04-09-yummy-cookies-across-domains/)

~~~
nikanj
A domain you own is not the same thing as a production domain.

Our corp has corptech.com and a few similar ones for this purpose. A generic
.com costs about nothing, so no point in running anything non-production on
your primary domain.

------
aequitas
I've been using Terraform
([https://www.terraform.io/docs/providers/tls/index.html](https://www.terraform.io/docs/providers/tls/index.html))
lately for local CAs like my homelab. It's nice when you want to keep
everything in configuration management. Example of a self-signed CA:
[https://gitlab.com/failmap/ca/blob/master/ca.tf](https://gitlab.com/failmap/ca/blob/master/ca.tf)

------
vbezhenar
There's a government CA in Kazakhstan issuing certificates for people and for
some government websites. They have software for people, so their website can
talk to USB tokens. This website connects to that software via secure
websockets at 127.0.0.1. And they bundle the private key for 127.0.0.1, issued
by that CA, inside that application. Is it bad? I guess there's no point in
reporting it to them, because they are both the CA and the developers. It's
not a browser CA; it's some
kind of "private" CA (users must import their certificate as a trusted root to
work with their website and software).

~~~
skim_milk
I worked for a place that did something similar: they were running a server on
their local machines listening on
[https://localhost.company.com:someport](https://localhost.company.com:someport)
(resolving to 127.0.0.1) so their javascript frontend hosted at example.com
could communicate with their local machine. It was set up so the server would
only respond to requests originating from company.com. They distributed the
private key for the certificate localhost.company.com which was trusted by all
browsers.

What kind of risk is there to having the private key to localhost.company.com?

~~~
vbezhenar
Well, CAs forbid that kind of usage, so if they found out, they'd revoke that
certificate; that would be the major concern for me.

Other than that, the obvious attack is to extract the private key from your
application, launch a fake server, and forge DNS responses for some poor guy
(for example, if he's using some untrusted WiFi), so that his requests would
be redirected to that fake server instead of the localhost application.

------
nailer
Browsers are solving this by making localhost trusted over http (so webcam,
notifications, and other privileged features work), but here's a more
specific guide to getting localhost https working on macOS - using Keychain
and a single command to export the created cert into PEM for your local
webserver:

[https://certsimple.com/blog/localhost-ssl-
fix](https://certsimple.com/blog/localhost-ssl-fix)

------
justinclift
> Traffic sent to 127.0.0.1 is guaranteed not to leave your machine. ...

Isn't that a widely held, but incorrect, assumption?

e.g. people with reasonable knowledge of IPv4 on *nix can still route
127.0.0.1 traffic out through an external interface?

From memory, people used to do that when attempting to bypass various
firewall/filter rules on other hosts for a locally attached network.

Maybe things have tightened up/changed in the last few years?

~~~
phlo
No, 127.0.0.1 should never appear on any network, and no network device should
ever route it.

The earliest documentation I was able to find is in RFC 1122 [1] from 1989,
but according to RFC 6890 [2], the principle dates back to 1981.

[1]
[https://tools.ietf.org/html/rfc1122#section-3.2.1.3](https://tools.ietf.org/html/rfc1122#section-3.2.1.3)

[2] [https://tools.ietf.org/html/rfc6890](https://tools.ietf.org/html/rfc6890)
(table 4)

~~~
justinclift
Ahhh yeah. But that's how most OSes set things up by default, in order to meet
the required specs (bugs and implementation hiccups aside).

Once the OS is up and running, manipulation of the routing tables at least
_used_ to make this possible on Linux and Solaris. Not sure about FreeBSD, but
that's just from memory fuzziness on my part. :)

~~~
yebyen
> Traffic sent to 127.0.0.1 is guaranteed not to leave your machine

This is definitely false, even without touching any routing tables. Any
unprivileged user can start an SSH tunnel listening on any localhost port
above 1024, sending traffic out to wherever.

~~~
j4cob
The implicit threat model here is "no one outside your machine can do
something to you to make 127.0.0.1 traffic route elsewhere." It's true that
software running on your machine can make copies of things and send them
elsewhere, but that's not the point of the sentence you quoted.

------
endzeit
I posted an RFC about a potential service/solution for this 3 days ago on the
LE community boards:

[https://community.letsencrypt.org/t/rfc-a-way-to-use-
valid-h...](https://community.letsencrypt.org/t/rfc-a-way-to-use-valid-https-
with-local-hosts/64596/13)

The idea is basically to offer a free subdomain service (ssl.fun) in
conjunction with solving the DNS-01 challenge.

This would automate the existing practice of using e.g. a public
local.domain.com A record pointing to 127.0.0.1.

The difference between this approach and previous attempts is that the
private key would not be compromised, as the client issues it directly.
Happy to receive feedback on this idea :-) As this would run into the
20-certs-per-domain LE limit quickly, it needs some blessing before I can
offer this service publicly.

------
metafunctor
The openssl command in the article is great, but I might add "-days 3650" or
something so I don't need to re-generate the certificate every month.

Also, I wouldn't use the host name "localhost", but something like
"mydomain.test", and update /etc/hosts accordingly.

------
456hdsaq234g
I've found many of these, and reported them to the vendors. Sometimes they are
happy (swag!!!), sometimes they are not. This is a great article that explains
the issue and the workarounds.

Your friend is `strings blah | grep "PRIVATE KEY"` - run it over your fav bins
today!

~~~
schoen
Note that you can also report it to the issuing CA if the vendor doesn't take
action, and the CA will revoke the certificate.

------
homakov
If you're desperate, just serve a local sdk from
[http://localhost/sdk.html](http://localhost/sdk.html) which will talk to your
local app via a postMessage proxy. You may even open this proxy with
window.open under an https:// app.

------
j4cob
Folks may also be interested to read this useful summary of issues around
locally installed roots: [https://github.com/njh/dymo-root-ca-security-
risk/blob/maste...](https://github.com/njh/dymo-root-ca-security-
risk/blob/master/README.md#how-could-this-be-done-better)

------
captn3m0
I run my local apache server with a wildcard certificate for *.captnemo.in.
Works perfectly. I forward `$port.in.captnemo.in` -> localhost:$port via
Apache for some common ports and can access all my local servers easily over
HTTPS.

I can expose some specific service by changing the DNS to my local WLAN ip
address (have a script for this that updates the DNS entry in cloudflare).

------
ctcherry
[https://github.com/ctcherry/tlself](https://github.com/ctcherry/tlself)

I created this proxy to help with this: it dynamically creates certs from a
self-signed, locally-trusted CA. It only targets macOS for now. It's not
perfect, but it has been working great for me.

------
cedricziel
We're using LetsEncrypt with local domains.

We have a domain for internal usage only, where we can modify TXT records.
Through this, and a little help from acme.sh and dnsmasq, every workstation
can have unlimited valid certificates for local projects.

~~~
chillydawg
Are you using a fake domain or a real one? If fake, I'd be interested in how
that works.

~~~
cedricziel
Real domain, just no A record.

For our projects, we create domains like {project}.{workstation}.company.net

~~~
mdtusz
Any chance there's a write-up or docs on this somewhere?

------
Myrmornis
Is it time to start using a better acronym than either XHR or AJAX? Is there a
modern accurate alternative for an HTTP request made by a browser that is not
a request for a page reload?

~~~
dfcowell
I believe the term you're looking for is a fetch (from the Fetch API)? Though
I don't know if that's also commonly used to refer to a standard page reload.

------
michaelmior
> This subtly changes how browsers handle cookie storage.

Good article, although it would have been nice to have an explanation of this
final point.

------
meagher
Wrote about how you can do this with a multi-container Docker setup:
[https://meagher.co/blog/2018/05/21/certificates-for-
localhos...](https://meagher.co/blog/2018/05/21/certificates-for-localhost/)

------
tempodox
The command shown doesn't work on macOS. All I get is a usage text for
openssl(1).

