
Chrome to force .dev domains to HTTPS via preloaded HSTS - Mojah
https://ma.ttias.be/chrome-force-dev-domains-https-via-preloaded-hsts/
======
noinsight
.test is an official IANA reserved special-use domain name that will never be
delegated out. Use it. Problem solved.

I don't know why people thought they could start using random TLDs on their
own; there was always the risk they could be delegated officially.

[https://www.iana.org/assignments/special-use-domain-
names/sp...](https://www.iana.org/assignments/special-use-domain-
names/special-use-domain-names.xhtml)

~~~
adrian17
> I don't know why people thought they could start using random TLD's on their
> own, there was always the risk they could be delegated officially.

If a company has its own internal network with its own DNS, does it still need
to conform to ICANN's name assignments? I thought it doesn't...?

~~~
CydeWeys
You're free to do whatever you want on your local network. If you have
collisions with resources on the global Internet, however, and your local
network is connected to the Internet, then you're setting yourself up for
problems. Hence why it's a best practice not to do that.

~~~
xg15
Except for things like this issue, where you'd be affected when using Chrome
or Firefox - even if your local network were completely disconnected from the
internet.

~~~
CydeWeys
Chrome and Firefox are primarily designed for use on the World Wide Web. If
you're going to use a locally modified network you may need to use locally
modified browsers as well, just as you'll need your locally configured DNS.

~~~
adrian17
> If you're going to use a locally modified network

Except (correct me if I'm wrong): this is not a "modified network"; it can be
another disconnected network that's just as "correct" in the
standards-conformance aspect as the Internet. It's more like Chrome is
"modified" to support one network better than others - or rather, to possibly
break on other networks.

In fact, it seems to me like the very idea of an HSTS preload list isn't
friendly to UAs working on multiple separate networks.

But yeah, you're right that this is what they are designed for after all. (I'm
also probably slightly biased, since some of our test environments use .dev
domain in our LAN.)

~~~
CydeWeys
Chrome is primarily an Internet browser though, not a random network browser.
It wouldn't be a good idea to prioritize random network browsing at the
expense of useful security features to secure browsing on the Internet.

------
CydeWeys
Hey everyone. I'm the Tech Lead of Google Registry and I'm the one behind this
addition (and likely future ones) to the HSTS preload list. I might be able to
answer some questions people have.

But to pre-emptively answer the most likely question: We see HTTPS everywhere
as being fundamental to improving security of the Web. Ideally all websites
everywhere would use HTTPS, but there's decades of inertia of that not being
the case. HSTS is one tool to help nudge things towards an HTTPS everywhere
future, which has really only become possible in the last few years thanks to
the likes of Let's Encrypt.

~~~
panic
One of the greatest things about the Web is how easy it is to write HTTP
clients and servers. I see why HTTPS everywhere would be helpful, but I also
think it would be a shame to lose this simplicity. Has there been any thought
put toward simpler alternatives to HTTP+TLS?

~~~
CydeWeys
That's way outside my area of expertise. I don't think HTTPS is that bad
anymore, not now that certificates are easy to obtain. What are your biggest
pain points?

More broadly, I think it's worth incurring some inconvenience for the sake of
security. Go too far in the other direction and you end up like Equifax.

~~~
hughw
I think he's just wistfully hoping for a return to simpler times, when it was
easy to construct your own HTTP server, just using sockets APIs.
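For what it's worth, that simplicity hasn't gone away for plain HTTP: a toy server really is just a few lines against the sockets API. A minimal sketch in Python (one request, no TLS, nowhere near production-ready):

```python
import socket

def make_server(host="127.0.0.1", port=0):
    """Create a listening TCP socket; port=0 lets the OS pick a free port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    return srv

def handle_one(srv):
    """Accept one connection, read the request, send a fixed HTTP reply."""
    conn, _addr = srv.accept()
    with conn:
        conn.recv(65536)  # read (and ignore) the request
        body = b"hello\n"
        conn.sendall(
            b"HTTP/1.1 200 OK\r\n"
            b"Content-Type: text/plain\r\n"
            b"Content-Length: " + str(len(body)).encode() + b"\r\n"
            b"Connection: close\r\n"
            b"\r\n" + body
        )
```

Calling `handle_one(make_server(port=8000))` and fetching http://127.0.0.1:8000/ works with any client. It's TLS, not HTTP itself, that makes the from-scratch version impractical.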

~~~
CydeWeys
Security's complicated, is the real answer :/

But you can't exactly just skip it (witness Equifax).

------
hartror
Use .localhost pointing at 127.0.0.1 for local development. It is reserved for
this purpose and, unlike .test, obvious to everyone.

For reference your options are:

    
    
    .test
    .example
    .invalid
    .localhost
    

[https://tools.ietf.org/html/rfc2606](https://tools.ietf.org/html/rfc2606)

Only .localhost fits the purpose of most people's usage of .dev.
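One practical wrinkle: /etc/hosts can't express wildcards, so every name under .localhost (or .test) would need its own entry. A local resolver fixes that; for example, a one-line dnsmasq rule (a sketch, assuming you run dnsmasq locally and point your system resolver at it):

```conf
# dnsmasq.conf: answer 127.0.0.1 for localhost and every *.localhost name
address=/localhost/127.0.0.1
```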

~~~
tomjen3
So what if I want to test a website under development from my phone? .test is
more correct.

~~~
chairmanwow
This sounds like a really useful thing to do. How do you set something like
this up in a dev environment?

~~~
TwelveNights
If you use Chrome and have an Android device which also has Chrome installed,
you could check out this tutorial:
[https://developers.google.com/web/tools/chrome-
devtools/remo...](https://developers.google.com/web/tools/chrome-
devtools/remote-debugging/)

------
hannob
Just commented over at Mattias' blog; I'll copy-paste it here:

First of all I think this is generally a good move. If people use random TLDs
for testing then that’s just bad practice and should be considered broken
anyway.

But second I think using local host names should be considered a bad practice
anyway, whether it’s reserved names like .test or arbitrary ones like .dev.
The reason is that you can’t get valid certificates for such domains. This has
caused countless instances where people disable certificate validation for
test code and then ship that code in production. Alternatively you can have a
local CA and ship their root on your test systems, but that’s messy and
complicated.

Instead best practice IMHO is to have domains like bla.testing.example.com
(where example.com is one of your normal domains) and get valid certificates
for it. (In the case where you don’t want to expose your local hostnames you
can also use bla-testing.example.com and get a wildcard cert for
*.example.com. Although ultimately I’d say you just shouldn’t consider
hostnames to be a secret.)

~~~
lewisl9029
I've been wanting to experiment with something like this for my own dev
environments for a while, and have been eagerly awaiting wildcard cert support
from Let's Encrypt.

One thing I'm not quite sure about is whether this means we need to use the
same wildcard cert for both dev and prod? I don't suppose the cert would be
considered valid by the browser otherwise?

If that's the case, I'm wondering if there are any best practices around
securely distributing valid production certificates to dev machines across a
team and keeping them up-to-date with Let's Encrypt's auto renewing mechanism?
Ideally in a way that's transparent to each individual developer? I'm guessing
committing them directly into a repo is probably a bad idea, especially for
open source projects.

~~~
blfr
You can have more than one certificate for the same names but both would need
to be valid. If the testing one leaks, someone can impersonate your production
service.

~~~
ryanbrunner
To go further on this though: wildcards usually only allow one "level" of
wildcarding. So if you had a wildcard cert for *.internal.domain.com, no one
could use it to impersonate www.domain.com (which is good - you should
consider a cert that every developer has on their machine untrustworthy).
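That single-level rule is mechanical enough to sketch. A simplified version of RFC 6125-style wildcard matching (real validators also handle IDNA, partial-label wildcards, and other corner cases; the hostnames are the ones from this thread):

```python
def wildcard_matches(pattern: str, hostname: str) -> bool:
    """Return True if a certificate name like '*.internal.example.com'
    covers hostname. The '*' may only replace the single leftmost label."""
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if p_labels[0] != "*":
        return p_labels == h_labels  # no wildcard: exact match only
    # the wildcard covers exactly one label, so label counts must agree
    return len(p_labels) == len(h_labels) and p_labels[1:] == h_labels[1:]

# the scenario from the thread:
assert wildcard_matches("*.internal.domain.com", "www.internal.domain.com")
assert not wildcard_matches("*.internal.domain.com", "www.domain.com")
assert not wildcard_matches("*.domain.com", "a.b.domain.com")  # one level only
```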

~~~
lewisl9029
Thanks for the clarifications. This is all starting to make a lot more sense
now.

Though I'm still curious how people usually distribute a cert like that
internally and update it to keep it in sync with Let's Encrypt automatic
renewal mechanism?

As far as I understand, Let's Encrypt requires a public facing web server on
the matching domain to renew certificates, so we'd have to actually set up a
server solely for the purpose of certificate renewal on a 2-levels deep
subdomain, expose it to the public internet, and then propagate the updated
certs from that server into every dev machine every time a renewal is
triggered?

It sounds like there's little security risk with this approach as long as we
use a wildcard cert at least 2-levels deep as you've described, as we don't
have to trust this cert for real production traffic at the root domain. But
I'm still wondering if there's some tooling I could adopt to streamline this
process a bit? Or should I just bite the bullet and script it all myself?

~~~
UnrealIncident
You can issue/renew via DNS. I have a bunch of valid certs for domains that
only resolve internally using this method. I believe the plan for wildcard
certs is to only support DNS-01 challenges.
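For background on how the DNS-01 challenge works (per RFC 8555, the ACME spec): the CA hands you a token, and you publish a TXT record at `_acme-challenge.<domain>` whose value is the unpadded base64url SHA-256 of the key authorization (the token joined with your account key's JWK thumbprint). A sketch; the token and thumbprint below are made-up placeholder values:

```python
import base64
import hashlib

def dns01_txt_value(token: str, account_key_thumbprint: str) -> str:
    """TXT record value for an ACME DNS-01 challenge:
    base64url(SHA-256(token || '.' || thumbprint)), without '=' padding."""
    key_authorization = f"{token}.{account_key_thumbprint}"
    digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# hypothetical values -- a real client gets these from the ACME server
token = "evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA"
thumbprint = "9jg46WB3rR_AHD-EBXdN7cBkH1WOu0tA3M9fm21mqTI"
print("_acme-challenge TXT =", dns01_txt_value(token, thumbprint))
```

Once the CA sees that record in DNS, it issues the cert; no public-facing web server on the domain is needed.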

~~~
lewisl9029
Awesome. Renewing via DNS sounds like exactly what I need. Thanks!

------
captainmuon
They should do the opposite. There should be a .insecure domain where browsers
accept HTTP or HTTPS with wrong or no certs, and pretend it is HTTPS with all
consequences (e.g. loading of HTTPS third party resources). I wouldn't put it
on the open net, but rather let people set it up internally for testing.

------
tscs37
Just as a note: .dev is not _yet_ an official TLD; its status is "proposed",
which means that Google is basically the highest priority on the waiting
list.

.foo is delegated and thus a full TLD, yes.

On the other hand, you should not be using .localhost if the target is not
running on your loopback interface; resolving localhost to anything but
loopback is considered harmful.

I find .test or .intranet to be more useful for such installations: they are
either designated as "cannot be a TLD" or very, very unlikely to become a
TLD, respectively.

~~~
wut42
I think you are wrong:

* DEV is listed on the "delegated strings" page of ICANN[0]

* `dig dev. ns +short` returns a couple of nameservers, the one of Charleston Road Registry: Google's registry.

[0]: [https://newgtlds.icann.org/en/program-status/delegated-
strin...](https://newgtlds.icann.org/en/program-status/delegated-strings)

~~~
tscs37
[https://icannwiki.org/.dev](https://icannwiki.org/.dev)

The ICANN Wiki specifically lists the .dev domain as proposed, not delegated.
Which means they can also just kill it and Google won't have it.

~~~
wereHamster
Who do you want to trust more: the DNS root servers, or a wiki (which might be
outdated, because "This page was last modified on 1 August 2014, at 10:04")?
Is the wiki even run by ICANN, given it's on a separate domain, or by a
separate entity or volunteers?

~~~
tscs37
Then I might be wrong, yes.

------
andrewaylett
For my development needs, I try to either:

* Publish mDNS records to give myself extra `.local` names, or

* Get a wildcard published in the organisation's internal DNS

If you can't do either of those, _please_ use `.test` as your test TLD, as
it's explicitly set aside for that purpose so you know you're never going to
collide with anyone.

[https://tools.ietf.org/html/rfc2606#page-2](https://tools.ietf.org/html/rfc2606#page-2)

~~~
the_mitsuhiko
I would recommend against funky things on .local. It’s not a great idea
either. I recommend just using a subdomain on a TLD you control.

~~~
jwilk
What's funky about .local? It's an officially-registered special-use domain:

[https://tools.ietf.org/html/rfc6762](https://tools.ietf.org/html/rfc6762)

~~~
okket
The .local TLD is reserved for mDNS requests, as the RFC states?

------
xg15
The article mentions as workaround:

 _That means your local development machine needs to:

- Be able to serve HTTPS

- Have self-signed certificates in place to handle that

- You'll have to click through the annoying insecure-site window every time

Such fun._

Part of HSTS is the requirement that certificate warnings become unskippable.
So the above wouldn't work - you'll need an actual CA-signed certificate that
is accepted by the browser, otherwise, you won't be able to access the site.

~~~
wongarsu
Or don't use .dev and use .test or .localhost instead. Those were reserved by
rfc2606 for such purposes nearly two decades ago.

------
hobarrera
This is perfect and great. I'd love to see gradually (yes, GRADUALLY, without
breaking anything!) all TLDs do this.

".localhost" has existed and been popular for local development for MANY
years. I've no idea why somebody would use `.dev`, but now that it's a
registered TLD, using it locally is just asking for trouble.

Also, you can just use 127.0.0.1, 127.0.0.2, 127.0.0.3, etc.

~~~
scott_karana
You can't show your app build to other people on the same LAN if you use
.localhost.

(Well _technically_ you can, but it would be confusing ;)

------
Kipters
Another option is not using Chrome as the main dev browser. Firefox replaces
it just fine.

~~~
onion2k
Web developers test their code in all the browsers their audience use.
Switching to a different browser isn't a solution. This isn't about what we
use for personal web use.

~~~
Kipters
This is why I said _main_. Most webdevs don't continuously test in each
browser they target; they usually just use Chrome and then check if things
break in other browsers. And some don't even check at all.

~~~
onion2k
It makes me a little sad that there are still devs who haven't discovered the
magic of [https://www.browsersync.io/](https://www.browsersync.io/)

------
0x0
With all the hacks that people have put in place for using .dev locally, who
in their right mind would want to even register and use a .dev domain? :P

~~~
lucb1e
As a developer, I never used .dev for development and I don't know anyone who
does. The number of developers around the world is a small percentage of all
users, and not all of them even have this redirect set up. And those who do
probably use /etc/hosts, so your specific host needs to be in there (since the
hosts file does, unfortunately, not do wildcards)... yeah, I think people are
not going to have any second thoughts about registering .dev domains.

~~~
evolve2k
Basecamp's open source project Pow ([http://pow.cx](http://pow.cx))
specifically configures Rails projects to be automatically available at
projectname.dev.

From the homepage "That’s it! Your app will be up and running at
[http://myapp.dev/](http://myapp.dev/). See the user’s manual for more
information."

I'm sure Basecamp won't love this change, and I'd guess many Rails devs won't
like it either.

~~~
spiralganglion
Let's hope this issue gets some traction:
[https://github.com/basecamp/pow/issues/545](https://github.com/basecamp/pow/issues/545)

It's a bit unnerving that the project hasn't seen an update since 2014, and
has 100 open issues.

------
ComputerGuru
I’ve never used .dev - but going back five or six years, we set up a .dev
subdomain of our domain and use that exclusively for development.

dev.ourdomain.net is a web-accessible server on our local network, configured
as the DNS server for that subdomain, and it is our internal CA, trusted to
issue the certs we use for development.

------
donatj
We have always used local.{site}.com as a subdomain rather than a TLD. It
makes CORS rules simpler, and we actually have a real DNS record pointing to
127.0.0.1 so we don't have to bother with the hosts file.

------
noway421
With .test thrown around a lot, would there be any complementary support from
browser vendors for that TLD to be specifically a development TLD? localhost
is recognised as one by Chrome, for example: it's the only domain where the
HTML5 geolocation API works without HTTPS, and the "your passwords are
transferred via plain text" warning is not displayed. To help the shift to
.test, Google might alter its heuristics to recognise .test as a common TLD
used for development.

~~~
jwilk
From
[https://tools.ietf.org/html/rfc6761#section-6.2](https://tools.ietf.org/html/rfc6761#section-6.2)
:

 _Application software SHOULD NOT recognize test names as special, and SHOULD
use test names as they would other domain names._

------
bpicolo
The main issue here is how much of a PITA it is to work with HTTPS locally
(totally true that .dev is the wrong thing to use for dev boxes here).
Self-signing certs and forcing /etc/resolver/ configs is only half of it.
Then you run into trouble with mobile emulators, proxying, etc.

We have an automated setup of it for devs, but it's out of necessity rather
than anything else. It's a pain to deal with.

------
onion2k
I don't really see this as a problem. In fact, I wish Chrome would do that for
_every_ gTLD, but obviously that's not going to happen any time soon. Secure
by default would be great.

The real issue (for me at least) is that it's far too much of a pain to run an
SSL secured site locally. It can be done, but doesn't work well across teams
given you need to register your certificates locally. Being able to serve a
site from a Vagrant box or a Docker container over https in a way that a
browser will accept (or even just pretend to accept) would be immensely
helpful. I'm sure web developers and browser vendors are trying to resolve the
problem already, but it can't come soon enough in my opinion.

------
apatheticonion
*.localhost is a cool idea. It would be even cooler if browsers treated
self-signed certificates as valid there, or did some magic and pretended the
site had an SSL certificate.

------
ramses0

Sorry for top-leveling a grand-child comment, but reading between the lines,
this is the attack vector:

> And for the last question: Again, there are no .dev domain names. There
> never have been. It's never been available for registration. The
> recommendation for a long time has been to only use either (a) domain names
> that you actually own, or (b) domain names that are reserved for testing
> and are guaranteed never to exist a la RFC 2606. Using domain names for
> testing that don't yet exist but could in the future is a huge security
> hole that you must fix now. Do it now while the domain names still fail to
> resolve. Once they resolve, and you don't own them, then your security
> situation gets a lot worse.

Google is concerned with nation-state attacks. This means they have to assume
ninja-assassin-scuba-divers have tapped all their cables underground. They're
also concerned about ninja-assassin-usb-stick-droppers, and all kinds of
other use cases.

What they're doing is:

1) Requiring `*.dev` to match PRE-LOADED HSTS certs. This allows Google to
"safely" boot up a computer from scratch. Just so long as
"clone-a-computer-from-scratch.dev" matches the public/private handshake for
HSTS/HTTPS, then Google knows that no MITM, no nation-state DNS takeover,
etc. is possible.

So long as the VERY FIRST CONTACT WITH THE INTERNET is a `*.dev` domain, then
that computer can be "as secure as possibly known".

2) Forcing people to bounce "off" of invalid TLDs as a network administration
method.

Remember, Google is concerned about nation states. Remember WannaCry? How it
was disabled by some random researcher registering xyz-abc-123.com?

That attack costs $15. Now imagine a nation-state intentionally registering a
gTLD of `*.haha-now-your-company-infra-is-pwnd` which they somehow glean is
the gTLD your developers use for local development / testing / intranet
portal.

If you could spoof IBM's intranet by doing something like
"http://www.welcome.ibm" or "https://www.welcome.ibm" (because `*.ibm` wasn't
cert-pinned...), then you could trivially cause `*.ibm` to resolve to some
sort of spoofed site to collect passwords. Or what if they're catching `mysql
-uroot -pxyz staging.product.ibm`? Whoops.

Or... perhaps another gTLD we'll see Google register is `*.go`, or maybe
their internal builds of Chrome already do cert-pinning on that. (Reason is
I've seen/heard they allow 'http://go/my-internal-shortlink'... I know that
other tech companies have had similar setups.)

Same attack vector. You control the DNS, you control ALL responses. And when
somebody types www.microsoft.com, it may be _impossible_ to know if that
"Down for Maintenance" banner is real or fake if their DNS is controlled by
somebody who really is your enemy.

~~~
sowbug
FYI your comment is unreadable:
[https://pasteboard.co/GKOcbiR.png](https://pasteboard.co/GKOcbiR.png)

There is something about how you've formatted it that requires unreasonable
side-to-side scrolling.

~~~
ramses0
Unfortunately n.y.c can fund billion dollar startups, but can't let you put
STAR DOT EYE BEE EMM in a discussion about domain names. ¯\\_(ツ)_/¯

------
frik
I test with HTTP, locally.

This forcing of opinionated things gets on my nerves. How about developing the
browser and letting the masses decide what they use? Amazon was 100% HTTP for
20 years (except the single login page) - it worked very well.

~~~
waibelp
Same here. Time to check out firefox :)

~~~
pfg
Mozilla uses Chrome's HSTS preload list. (Microsoft too.)

~~~
SquareWheel
I've never quite understood the preload list. Is its only job to save the
initial 301 redirect from http > https on first load?

Also, wouldn't bundling (tens of) thousands of domains start to add up, and
slow down first page load for regular browser use?

I'm sure I must be missing something, because this doesn't seem very logical
to me.

~~~
daenney
In a way yes, it's trying to avoid the initial redirect. But not in order to
save you from the potential latency of the initial redirect:

> These sites do not depend on the issuing of the HSTS response header to
> enforce the policy; instead, the browser is already aware that the host
> requires the use of SSL/TLS before any connection or communication even
> takes place. This removes the opportunity an attacker has to intercept and
> tamper with redirects that take place over HTTP.

Read this: [https://scotthelme.co.uk/hsts-
preloading/](https://scotthelme.co.uk/hsts-preloading/)

> Also, wouldn't bundling (tens of) thousands of domains start to add up, and
> slow down first page load for regular browser use?

Why would it? Checking a data structure for whether the domain the user
requested should be loaded over HTTPS can be done in a perfectly efficient
way. A hash table would give you O(1) lookup times on average, and there are
other things you can use to mitigate the worst-case lookup of O(n).
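To make that concrete, here's a toy version of the lookup (the domain entries are invented; the real list is compiled into the browser binary, and entries carry an includeSubDomains flag, which is why the check walks parent domains):

```python
# toy HSTS preload lookup -- entries map domain -> includeSubDomains flag
PRELOAD = {
    "example.dev": True,      # hypothetical entry covering all subdomains
    "hstspreload.org": False, # hypothetical entry for the exact host only
}

def force_https(host: str) -> bool:
    """Check the host and each parent domain against the preload table."""
    labels = host.lower().split(".")
    for i in range(len(labels)):
        entry = ".".join(labels[i:])
        if entry in PRELOAD:  # O(1) average per probe
            if i == 0 or PRELOAD[entry]:  # exact hit, or flag covers subdomains
                return True
    return False

assert force_https("foo.bar.example.dev")      # via includeSubDomains
assert force_https("hstspreload.org")          # exact match
assert not force_https("sub.hstspreload.org")  # entry lacks the flag
```

The total work is bounded by the number of labels in the hostname, not the size of the list, which is why the list can keep growing without slowing down page loads.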

~~~
SquareWheel
Good answer, thanks.

I was hoping the article would cover the scaling aspect a bit more. I guess
it's just meant to be a mid-step towards browsers defaulting to HTTPS at some
unknown point in the future.

~~~
pfg
The maintainer of the HSTS preload list wrote a detailed report on the current
state of HSTS preloading last year, covering the list size aspect[1].

[1]:
[https://docs.google.com/document/d/1LqpwT2aAekrWPtLui5GYdHSG...](https://docs.google.com/document/d/1LqpwT2aAekrWPtLui5GYdHSGlZNMNRYmPR14NXMRsQ4/edit#)

------
kuschku
The most annoying part here is that Google isn’t even using .dev as a public
TLD – they purely use it for internal testing, and all registered .dev
domains resolve to 127.x.x.x addresses.

.dev should have been entirely reserved, or made available publicly.
Registering a TLD just for your own internal testing, and forcing everyone to
switch away, is the most user-unfriendly move you can make.

~~~
CydeWeys
We're not using it for internal testing.

And specifically, the wildcard DNS entry for 127.0.53.53 is for ICANN's
Controlled Interruption process. See here:
[https://www.icann.org/resources/pages/name-collision-ro-
faqs...](https://www.icann.org/resources/pages/name-collision-ro-
faqs-2014-08-01-en)

~~~
kuschku
So, why is this not documented with the TLD, or in any of the informational
material?

(And why is Google in the TLD market at all? Google’s already far too large
as a company – impossible to democratically control; any further growth of
Google should be immediately and forcefully stopped.)

