
To www or not www - jacobwg
https://www.netlify.com/blog/2017/02/28/to-www-or-not-www/?utm_content=buffer67bee&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer
======
arihant
I understand the article walks through the technical aspects. However, having
seen a lot of non-savvy users use the Internet, I am firmly with WWW. Probably
because all major "established" companies use www, people associate it with
strength. I have learned it to be important in much the same way that a .com
extension is.

If a site did not have www, most people assumed it was probably made by kids
who do not have www yet. See, most people do not understand that www is not a
domain like .com which you have to buy. So for the average Joe consumer, it
signals strength. For an enterprise customer, it probably does too. So unless
your product is for savvy users or zen-like designers, I'd stick with www.

A lot of people think that the naked domain is cleaner. It actually is not,
since the average mind is conditioned to read www.x.com, and then you get
[https://x.com](https://x.com). It's cleaner in the sense that a face is
cleaner without a nose.

~~~
edmccard
>having seen a lot of non-savvy users use Internet, I am firmly with WWW

As an anecdotal counterpoint: at our family Christmas, I mentioned a few
different URLs, spelling out "www" each time; one of my nieces said "why do
you keep saying 'www'?", and one of the other kids said "that's how old people
find websites." After some discussion, it turned out that nearly everyone over
30 habitually wrote URLs with "www" at the front, and everyone under 18 always
omitted it.

~~~
karlshea
At least we've mostly gotten away from "h-t-t-p-colon-slash-slash..."

~~~
tomatsu
I always thought it was odd when they go with the whole "http://www." voodoo
ritual, but then skip the final slash after the domain.

~~~
derefr
AFAIK the lack of a path part is allowed in an HTTP URL per RFC 2616.

    
    
           http_URL = "http:" "//" host [ ":" port ] [ abs_path [ "?" query ]]
    

> If the abs_path is not present in the URL, it MUST be given as "/" when used
> as a Request-URI for a resource (section 5.1.2).

Browsers may differ on whether they auto-fix scheme-less URIs, but they're all
required to fix path-less URIs. So it's much "safer" to use them.
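
That fix-up is easy to sketch with Python's standard library (a rough
illustration of the RFC's rule, not actual browser code; the URLs are
placeholders):

```python
# Sketch of RFC 2616's rule: when abs_path is absent from an http URL,
# the client must use "/" as the Request-URI.
from urllib.parse import urlsplit

def request_path(url):
    """Return the abs_path (plus query) to put in the request line."""
    parts = urlsplit(url)
    path = parts.path or "/"   # path-less URL -> "/"
    if parts.query:
        path += "?" + parts.query
    return path

print(request_path("http://www.example.com"))        # /
print(request_path("http://www.example.com/a?b=1"))  # /a?b=1
```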

------
zimbatm
After having tried both I'm very much in the WWW camp. Even though the naked
domain looks nicer, it's just not worth the hassle.

> End users save an extra DNS lookup

Most intermediate resolvers will return both the CNAME and the A record in one
response anyway.

Another issue with naked domains is that all the cookies are automatically
sent to subdomains as well. It's just another hassle to worry about when
trying to keep the CDN clean, or when wondering why a session only works in
specific cases.

~~~
tradersam
So you're "very much" in the WWW camp but you don't even use it on your own
site. Curious.

~~~
mod
Sometimes you have to make the mistake to know it was one. I wouldn't trust
his opinion if he had never tried non-www.

~~~
zimbatm
Yes basically this is the main reason. I could change it but I have better
things to do.

Also, my www recommendation is for commercial websites, which tend to have
multiple subdomains, more traffic with HA requirements, and a more complex
setup than a static website.

------
peterwwillis
Reasons to use www:

* Cookies for the root domain get sent to all subdomains, so a subdomain for static content still gets flooded with cookies, slowing down requests. Subdomains will also get cookies you may not want them getting, complicating site design. You can end up sending dozens of kilobytes of cookies with each request because of the www-less cookies. The way around this is buying a whole new domain name just for static content, then duplicating SSL and all the other requirements for that new domain. Or hoping RFC 2965/6265 won't break anything using your site.

* There is a security boost from the same-origin policy not allowing a subdomain to hijack cookies for the root domain ("forums.foobar.com" could be made to set a cookie that "foobar.com" interprets, which can be used to hijack user sessions; this would not happen on www). This problem affected GitHub and they had to implement complicated workarounds.

* It is easier and more flexible to configure a round robin of frontend hosts with a CNAME (on www) than by A records on the root domain. If your cloud hosting provider's IP changes, they can change their hostname records without needing to modify _your_ DNS records - less work for you and them. And if you think a single static anycast address could never have a routing problem, think again.

* Google will (or did in the past) ding you for duplicate content. The same content on foobar.com/ and www.foobar.com/ will appear as duplicate. Providing the content only on www separates it from other content and makes it easier to search subdomain-specific content. (This won't happen if one of them is 301 redirected to the other, however)

Reasons not to use www:

* "It looks cleaner."

People, you can 301 redirect your www-less site to www, gain all the
advantages of using www, and the only "hassle" will be how the address bar
looks.
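
That redirect can be a one-liner in most servers; a minimal sketch in nginx
syntax (foobar.com is a placeholder, and the SSL listener is omitted for
brevity):

```nginx
# Send the naked domain permanently to www, preserving path and query.
server {
    listen 80;
    server_name foobar.com;
    return 301 http://www.foobar.com$request_uri;
}
```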

~~~
z3t4
You need A records for DNS round robin.

[http://serverfault.com/questions/574072/can-we-have-multiple-cnames-for-a-single-name](http://serverfault.com/questions/574072/can-we-have-multiple-cnames-for-a-single-name)
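
A minimal sketch of round-robin A records in zone-file syntax (names and
addresses are placeholders):

```dns
; Three A records on the same name; resolvers rotate the order,
; spreading clients across the three frontends.
www.example.com.  300  IN  A  192.0.2.10
www.example.com.  300  IN  A  192.0.2.11
www.example.com.  300  IN  A  192.0.2.12
```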

------
teddyh
> _If you want to be able to receive email on your domain, you’ll need to set
> MX records at the apex domain. With a CNAME, no other records can be set._

> _Want to validate your domain for webmaster tools? Or for the DNS validation
> required for some domain validated SSL certificates? Now you have to add a
> TXT record to the apex domain. If you already have a CNAME, again, that’s
> not allowed._

It’s actually worse than that. _All_ domains have, for technical DNS reasons,
both a SOA record and at least one NS record in them at the “apex” domain.
This would conflict with an apex CNAME record. Therefore, you can’t have a
CNAME on an apex domain, even if it would otherwise be empty.

(There is a technical, and very theoretical, way around this limitation: The
administrators of the top-level .com domain could, for example, add a CNAME
record _directly into the top-level domain zone_. This would be valid,
technically, but good luck convincing the various parties involved to do
this.)
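
A sketch of the conflict in zone-file syntax (all names here are
placeholders):

```dns
; Every zone apex already carries a SOA and at least one NS record:
example.com.      3600  IN  SOA  ns1.example.com. hostmaster.example.com. (
                                  2017022801 7200 3600 1209600 3600 )
example.com.      3600  IN  NS   ns1.example.com.

; This would be invalid: a CNAME must be the only record at its name.
; example.com.    3600  IN  CNAME  host.cdn-provider.example.net.

; A CNAME at www is fine, since www carries no SOA/NS of its own:
www.example.com.  3600  IN  CNAME  host.cdn-provider.example.net.
```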

------
sytse
Don't make the mistake I made: hosting different content on www.gitlab.com
(a static site) than on GitLab.com (the application). People expect them to be
the same. We ended up moving the static site to about.GitLab.com

~~~
warent
Oh my

I always have my www.* domains alias to the site without the www by default.
While you assumed people will almost always think they're different, I'm
assuming people will almost always think they're the same.

I wonder how many people get frustrated by being redirected from the www to
its naked counterpart before spamming refresh and leaving in defeated
frustration... Uh oh

~~~
sytse
Yeah, big mistake on my part. I don't understand why you're mentioning a
redirect, that was not something we were doing.

------
tscs37
The main problem, in my opinion, is that CNAME is broken for the root domain,
but that is something that can hardly be fixed in such an ancient protocol
without some pain.

What Cloudflare and DNSimple are doing is the right thing. I hope that CNAME
flattening or ALIAS records become some kind of standard.

~~~
p49k
Could you explain more about what Cloudflare/DNSimple are doing to workaround
this, from a technical standpoint?

~~~
tscs37
It's explained in the article, but the TL;DR is that CF and DNSimple are
simply pretending that a CNAME on the root domain is the corresponding A or
AAAA record instead.

It breaks geographical CDNs a bit, but it works somewhat.
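
Roughly, the idea looks like this (record names and addresses are
hypothetical; ALIAS is a non-standard record type):

```dns
; What the zone owner configures (ALIAS / flattened CNAME):
example.com.  IN  ALIAS  host.cdn-provider.example.net.

; What the authoritative server actually answers: it resolves the
; target itself and synthesizes ordinary A/AAAA records at the apex.
example.com.  300  IN  A     192.0.2.10
example.com.  300  IN  AAAA  2001:db8::10
```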

------
mugsie
This is why I wish that, as part of HTTP 2, they had allowed the use of SRV
records and gotten it built into browsers / clients etc.

SRV records are far superior: they're a priority-and-weight-ordered list of
hosts for a protocol, which could really cut down on load balancing
complexity.

~~~
detaro
I don't think there is a reason it would have to be tied to HTTP 2, and also
not much to gain by explicitly including it. Proposals for using SRV records
for HTTP have been around a long time, seems like there have been some open
questions and not all that much interest.

The Mozilla Bug is old enough that it is a Mozilla bug (Firefox didn't exist
when it was filed):
[https://bugzilla.mozilla.org/show_bug.cgi?id=14328](https://bugzilla.mozilla.org/show_bug.cgi?id=14328)

Chromium's bug is from 2009:
[https://bugs.chromium.org/p/chromium/issues/detail?id=22423](https://bugs.chromium.org/p/chromium/issues/detail?id=22423)
(which has some interesting comments regarding DNS fallback behavior and the
latency penalties incurred)

~~~
teddyh
The HTTP 2 standard _must_ include provisions for SRV records to be used,
since that is part of how clients should follow a URL. Additionally, the SRV
specification itself says that a protocol specification must state that SRV
records should be used before any client of that protocol takes it upon
itself to use SRV records.
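
For reference, this is the sort of record that would be involved; a
hypothetical zone-file sketch, since SRV lookup for HTTP was never
standardized (all names are placeholders):

```dns
; _service._proto.name    TTL  cls  SRV  prio  weight  port  target
_http._tcp.example.com.   300  IN   SRV  10    60      80    www1.example.com.
_http._tcp.example.com.   300  IN   SRV  10    40      80    www2.example.com.
_http._tcp.example.com.   300  IN   SRV  20    100     8080  backup.example.com.
```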

~~~
detaro
The most explicit reference to name resolution I know of in any of the HTTP
standards is RFC 7230 Section 2.7.1
([https://tools.ietf.org/html/rfc7230#section-2.7.1](https://tools.ietf.org/html/rfc7230#section-2.7.1)),
which is still quite vague:

[...] _If host is a registered name, the registered name is an indirect
identifier for use with a name resolution service, such as DNS, to find an
address for that origin server._

[...]

 _When an "http" URI is used within a context that calls for access to the
indicated resource, a client MAY attempt access by resolving the host to an IP
address, establishing a TCP connection to that address on the indicated port,
and sending an HTTP request message (Section 3) containing the URI's
identifying data (Section 5) to the server._

I don't think that excludes SRV-based name resolution. Some sort of
standardization of course would be helpful, even if just for reference, but
that could in my mind be an independent document recommending to use SRV
instead for HTTP, without any detail about the version (since HTTP 2 has no
property that makes it more or less fit for use with SRV records than 1.1).
Adding something to HTTP 2 that may well never see any use, just because,
seems worse.

------
alfredxing
I'm surprised that the article doesn't mention anycast, which is more or less
the "correct" way of using a CDN on an apex domain, since for the user's
purposes it's just a static IP address.

I find anycast to be convenient even for subdomains, since it isn't affected
by things like DNS caching (although things like edns-client-subnet
apparently help with that).

I'm actually currently looking for a CDN for my website. I don't like www
(just personal preference), so anycast is pretty important to me, but there
don't seem to be a lot of providers offering anycast for a decent price. The
closest I've seen is Google's Cloud CDN, which, out of all the CDNs I've tried
(a lot), is one of the best, but for a small site like mine I tend to get more
cache misses than hits (simply due to eviction).

Maybe I'll write up a blog post about this issue :)

------
ef4
It's odd to hear a CDN complaining about this limitation when it has already
been solved for well over a decade by other leading CDNs.

Akamai can serve your apex domain from their edge servers. They do it by
giving different answers for the A record to different users, based on where
each user is coming from. All that's required is that you use them as your NS.

~~~
bobfunk
If you read the start of the article you'll see we do that as well. This only
applies to people who don't use Netlify for DNS.

------
ne01
At SunSed, we use Google's HTTP(S) Load Balancer, which allows us to load
balance our entire infrastructure via a single IP.

Our users don't need to worry about CNAME vs A records; they can do whatever
they want with the IP. Since we don't need to change this IP, there is no
benefit to using a CNAME.

On top of that, the SSL handshake for HTTPS happens at Google's front ends,
which reduces the load on our servers. Also, we can send traffic to different
sets of VMs based on the URL! How cool is that?

I really think that Google's HTTP Load Balancer is the hidden gem of Google
Cloud.

~~~
Drdrdrq
> On top of that SSL handshake for HTTPS happens at Google front ends which
> reduces the load on our servers. Also we can send traffic to different sets
> of VMs based on the URL! How cool is that?

Very cool - let's just hope Google is better than Cloudflare at keeping the
contents of random memory hidden.

~~~
ne01
Do you operate your own CDN?

If not, then with basically any CDN you need to trust them with your SSL
certificates, unless you serve your content over HTTP.

Unless you don't use a CDN at all!

~~~
im3w1l
You could give them a certificate that is only valid for cdn.example.com, no?

~~~
ne01
Yes. But it's best to serve your entire website (including the HTML pages) via
a CDN to reduce latency.

------
mattcoles
Am I reading this wrong, or does this only apply to people who are Netlify
customers?

~~~
tyingq
It applies to any service where pointing your domain at them is done by
publishing a CNAME record.

But, that's not the only way to do that sort of thing. Firebase, for example,
allows you to use A records pointing at their IP addresses.

Cloudflare and WordPress.com allow you to make them the authoritative server
for all your records, then they provide an edit interface.

Netlify doesn't mention these as good options, probably because they don't
have them to offer.

Edit: Apparently they do offer these options, but have their own reasons for
preferring the CNAME approach

~~~
bobfunk
Author here. We do actually offer all of these options.

We offer a public IP address for A records pointing to our main load
balancer. This will send all traffic to a single origin instead of serving
your HTML pages out of our global CDN.

We also offer DNS hosting for pro plans and up. When you move your DNS to
Netlify, the caveat about naked domains doesn't apply (as mentioned in the
first paragraph), since we hook the domain record straight into our global
traffic director.

For enterprise customers we also offer an anycasted IP address that lets you
use our CDN with a normal A record, but we still recommend either using our
DNS hosting or a www domain since the DNS based traffic direction is faster at
responding to localized issues and offers more precise traffic distribution.

~~~
jsjohnst
Wouldn't a simpler (for the end customer, not for you) solution be to use
anycast on an IP address (or a block of them) and then let folks always use A
records as intended? That solves the ANAME non-local caching issue and also
handles people using DNS servers that aren't near them.

~~~
bobfunk
We do run an anycast CDN network, but there's a lot of limitations on BGP
routing compared to CDN based traffic direction.

We can only route BGP requests to hardware we control, whereas we can add PoPs
in all the major cloud providers on our DNS-based network. We can then use
tools like Cedexis or Dyn's internet intelligence to identify where the
different cloud providers have the best networking and peering agreements and
piggyback on that + their DDoS mitigation. This means we get a combination of
the best that AWS, Google Cloud, Rackspace, DO, etc., have to offer in that
respect.

On the DNS-based traffic director we can also make very quick traffic
decisions (20s TTL, instant changes), whereas on our BGP-routed anycast IP we
have to be more conservative and force 10-minute intervals between any up/down
changes for a PoP.

~~~
kyledrake
I did GeoDNS + Unicast IPs for a while. I had a really rough time making it
work, and we ended up building our own anycast network
([https://status.neocities.org](https://status.neocities.org))

Aside from the root domain issues (and fewer options for market-priced
bandwidth), "GeoDNS + Cloud" pushes your traffic into someone else's ASN,
which means complaints end up being sent to them, and your hosting is
effectively governed not just by one, but by two different ToSes.

This isn't a big deal for a couple thousand sites (unless they're huge), but
once you start getting into the hundreds of thousands, you'll see a
significant spike in issues (phishing, malware, spam, DMCA, legal threats,
etc.) that get sent to whoever owns that IP address. After getting too many of
these complaints, those other providers can decide you're just not worth the
effort and boot you off their servers.

Crazy hypothesis? Sounds like it would be, but it happens:
[https://twitter.com/surge_sh/status/685164708861624325](https://twitter.com/surge_sh/status/685164708861624325).
DO did the same thing to us when we tried to use them for part of our CDN
early on. After that, I tried three other cloud services that either did the
same thing or threatened to do the same thing (to say nothing about the
ridiculously overpriced bandwidth).

The choice we were left with: get our own AS, or die. Mind you, this was over
fewer than 30 abuse reports per month, not thousands. Most of these providers
are designed for a single company or a WordPress blog; they're not designed
(and not really equipped) for use as infrastructure for a web hosting provider
with hundreds of thousands (or millions) of customers.

Building out the anycast CDN was a "drinking from the firehose" experience and
had some upfront costs I would have rather not paid, but it solved this
existential problem for us permanently, and probably saved our life. From
experience, I do think you'll have to do this eventually (or at least do
GeoDNS + unicast with your own IPs and AS).

~~~
zaroth
Have you written up your experience with building out the anycast CDN? That
would be extremely interesting!

~~~
jsjohnst
I'd be interested in reading that too

------
nkkollaw
This is an ad... Why is it on the front page?

The article brings absolutely no value.

~~~
adventured
104 comments say that you're wrong. There is a years-long discussion about www
vs non-www, and this is a continuation of it. It served the purpose of
sparking the conversation; that was its value.

------
Navarr
This could be solved by a new record, of course, but how many years exactly
would that take? So many companies would have to jump on board.

Thinking a record like `DELEGATE <comma delimited list of record types>
<priority> <name server>` or _something_.

~~~
Old_Thrashbarg
> So many companies would have to jump on board.

Noob question: which companies would have to jump on board to get a new record
up and running? Could it not just be one company, like DNSimple, that adopts
it first?

~~~
janywer
It would require extensive support from browser vendors, so if Google got
behind a proposal like that, it could probably be pulled off.

Most servers would likely use both protocols for quite a long time before one
could be discarded.

~~~
Avalyst
I feel like the biggest problem would be all the ISPs' DNS servers. ISPs are
notorious for breaking all kinds of stuff, and this would probably be just
another thing they break.

------
kerouanton
Adding www doesn't make any sense for URL shorteners, for example. The same
goes for media like Twitter where characters are counted and "precious":
using www. adds 4 characters to the message (in theory, since URL shorteners
are there to help with that).

Another detail I've noticed since the wide adoption of browsers with a single
combined URL/search field: most people don't even care about the exact URL.
They just enter the name they believe the website has, and let the search
engine do the job if it's mistyped or nonexistent. (That leads to phishing
attacks.)

~~~
DrScump
Do (informed) people even _use_ URL shorteners anymore, given that they've
become a malware vector?

~~~
kerouanton
I agree, but unfortunately most major corporations/websites do shorten urls...

------
gumby
Why use CNAME at all? You can put the same IP address into as many A records
as floats your boat. Bonus: saves a round trip to the DNS server.

~~~
jrochkind1
Because it lets different organizations/organizational units control different
parts of the resolution. For example, you don't want to give Heroku control of
your whole DNS (and they don't want to be in the DNS business), but you do
want to let Heroku change the actual network IP addresses that handle your app
on their own; you don't even want to have to know what they are.

CNAMEs are what make "the cloud" work.
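
A sketch of that split in zone-file syntax (all names and addresses are
placeholders):

```dns
; Your zone: you control only the indirection.
www.example.com.  300  IN  CNAME  your-app.provider-lb.example.net.

; The provider's zone: they can repoint the target to new IPs at any
; time without touching your records.
your-app.provider-lb.example.net.  60  IN  A  192.0.2.42
```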

------
wheelerwj
This is slightly off topic, but can anyone elaborate a little on why/where/how
Netlify differs from Heroku? It's a little more expensive and you can't host
your back end, so I'm a little confused about the value provided.

~~~
cardosof
I find it perfect to host static pages generated with hugo.

~~~
wheelerwj
hugo looks pretty cool, thanks!

------
JohnTHaller
It very much depends on the age of your target market. I'd say there's a
cutoff around age 30 where people simply omit the www. when talking about
addresses and assume everything is just whatever.com.

------
22goodman
Short answer: don't www. Long answer: do www.

------
proyb2
At least Facebook doesn't use WWW.

~~~
Hurtak
They do

~~~
proyb2
I tested earlier and it showed facebook.com, but now it shows
www.facebook.com. They must have changed something after this discussion.

------
shmerl
www is a relic of the early Internet. There's really no point to it today.

------
esotericsean
www is dead.

~~~
ojm
Long live www.

------
rgj
A few months ago, we built
[https://www.forcewww.com/](https://www.forcewww.com/) to make our lives, and
those of our customers and everyone else, easier.

