
Why use www? - wfunction
http://www.yes-www.org/why-use-www/
======
feelix
This reasoning is all based on valid, but technical, issues for the hosting
side. A general rule for any customer-facing business is to put the customer
first. I could list 200 different reasons why you should make the customer
register an account with their email address before they can purchase
something from you. However, if you put the customer first, in many cases it is
easier for them if they don't have to do that. Having the www before the
domain name adds unnecessary visual clutter and, from the customer's point of
view, an unnecessary redirect before they can get to your site. A lot of sites
use a minimalist style everywhere, and it's great. Having the www there for
technical reasons is putting the user second in those cases.

~~~
forgotpasswd3x
I think that www vs. no-www, as a matter of "putting the customer first" is so
INCREDIBLY insignificant, compared to the thousands of other decisions that go
into a product, that it's ridiculous we're even having this conversation. This
is looking for optimization in the wrong places at its finest.

~~~
lkrubner
Do you think a user should have to write:

[http://www.apple.com:80/](http://www.apple.com:80/)

or:

[http://www.apple.com/](http://www.apple.com/)

Ever since the first web browsers, way back in the early 1990s, it has been
commonplace to leave out the port number. The web browser adds it
automatically.

Similar logic would lead us to leave off the "http". And similar logic would
lead us to leave off the "www". The trend has been to simplify the URL as much
as possible.

~~~
ldjb
Nobody is suggesting the user should be forced to type in the protocol, the
subdomain, or the port number. If the user types in:

apple.com

It should lead to where the user wants to go.

However, there are good reasons for using the www subdomain as the _canonical_
URL, and it is also worth noting that some users will habitually type in www
anyway.

If you don't want to include the subdomain in marketing material, then there's
nothing stopping you from leaving it out, just as there's nothing stopping you
from leaving out the protocol.

~~~
dalore
And even just typing:

apple

Should lead to where they want to go.

~~~
rootkea
Which leads to 127.0.53.53 on Iceweasel 38.5.0
[https://www.icann.org/namecollision](https://www.icann.org/namecollision)

------
Rezo
Remember that HTTP vs HTTPS adds yet another dimension you have to take into
account. This bit me the other day.

I run a site at example.com (I prefer a naked domain, for no technical reasons
whatsoever), with a CNAME record for the www subdomain. But I only want to
serve the site over HTTPS. So [http://example.com](http://example.com)
redirects to [https://example.com](https://example.com), as does
[http://www.example.com](http://www.example.com). Simple enough, right?

However, I started receiving some spurious reports that the Google Account
login option wasn't working on the site, which was quite puzzling at first.
Turns out, some users were manually entering
[https://www.example.com](https://www.example.com) as the address (it's not
indexed or linked to anywhere in this form that I could find), which was being
handled by the Nginx default_server directive on port 443, causing the site
itself to appear to work just fine at
[https://www.example.com](https://www.example.com) as well. But the Google
OAuth service checks the authorized origins for any client side requests, saw
www.example.com and was expecting example.com so simply failed silently. Doh!

TLDR summary: If you redirect to HTTPS by default, check that all 4 options
([www, naked] * [http, https]) work correctly, and that all redirect to just
ONE canonical name to keep things sane. And make sure the 3 redirects preserve
any request URI parts after the domain as well.
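The four-combination check above can be sketched as a small canonicalization
function. This is a minimal sketch, not Rezo's actual setup; `example.com` as
the canonical naked host is an assumption, swap in your own:

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_redirect(url: str) -> str:
    """Collapse any of the four scheme/host combinations onto one
    canonical HTTPS origin, preserving path, query and fragment."""
    parts = urlsplit(url)
    host = parts.netloc.lower()
    if host.startswith("www."):
        host = host[len("www."):]          # strip the www subdomain
    # Force HTTPS and keep the rest of the request URI intact.
    return urlunsplit(("https", host, parts.path, parts.query, parts.fragment))

# All four entry points should land on the same canonical URL:
for origin in ("http://example.com", "http://www.example.com",
               "https://example.com", "https://www.example.com"):
    assert canonical_redirect(origin + "/login?next=/home") == \
        "https://example.com/login?next=/home"
```

The same rule would serve equally well for a www-canonical site; the point is
that exactly one of the four origins survives the redirects.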

~~~
vasquez
> And make sure the 3 redirects preserve any request URI parts after the
> domain as well.

I intentionally break such requests by dropping anything beyond the host name.

If someone is sending data in the open, I don't want their clients to keep
working thanks to built-in support for redirects.

------
cantrevealname
The OP could have avoided a lot of confusion if he had begun his article like
this:

> _www. is not deprecated for webmasters (but users don't need to type it)_

> _This page is intended for webmasters who are looking for information about
> whether or not to use www in their canonical web site URLs; however, the
> website can still be advertised without the www and the user never needs to
> type it._

Even in this HackerNews discussion with technically knowledgeable people I see
a lot of discussion stemming from this misunderstanding of what the OP is
trying to say. (Example: "I should take the toll by typing www every time?
life is too short.")

~~~
erikb
Yeah, this would have helped a lot to avoid discussion. I suppose the author
of the website expects mostly webmasters to come and read what he writes. But
there are a lot of other people here as well. So it's probably not the author
who should adapt his text (why should he even know we exist?); rather, the
title of the link on HN itself should make it clear.

------
ludwigvan
I'm perplexed by the cookie claim. I have a naked domain (foo.com) and a
static domain with the same domain name (static.foo.com), and the cookies of
foo.com are not being sent to static.foo.com if the path is configured as /.
(I can see in the dev tools' network tab that the cookies are not sent. The
cookies for Google Analytics, which set the domain to .foo.com as opposed to
foo.com, are being sent to the static subdomain, though.)

Could someone enlighten me on this? Seems like the article might be spreading
misinformation.

Edit: Seems like the issue is "host only cookies" are just sent to foo.com,
not to static.foo.com

A cookie, unless the domain is explicitly set, is host-only. So you will see
that if you set up a naked domain, cookies you set for authentication will
probably not be sent to subdomains by default.

The third-party cookies on your application, like Google Analytics, on the
other hand, have to have a domain name specified and are not host-only, so you
will see your Google Analytics cookie being sent to the static subdomain.

So, this statement from the article seems to be wrong:

"If you use the naked domain, the cookies get sent to all subdomains (by
recent browsers that implement RFC 6265), slowing down access to static
content, and possibly causing caching to not work properly."

It should be

"If you use the naked domain, the cookies which are not host-only and have
domain set get sent to all subdomains (by recent browsers that implement RFC
6265), slowing down access to static content, and possibly causing caching to
not work properly."
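The host-only distinction above can be sketched as a rough domain-matching
function. This is an illustration of the RFC 6265 behavior described in the
comment, not a complete implementation; the hostnames are hypothetical:

```python
def cookie_sent_to(request_host: str, cookie_domain: str, host_only: bool) -> bool:
    """Rough RFC 6265 domain match: a host-only cookie (no Domain
    attribute) goes only to the exact host; a cookie set with a Domain
    attribute also goes to subdomains of that domain."""
    if host_only:
        return request_host == cookie_domain
    return (request_host == cookie_domain
            or request_host.endswith("." + cookie_domain))

# Auth cookie set on the naked domain with no Domain attribute (host-only):
assert cookie_sent_to("foo.com", "foo.com", host_only=True)
assert not cookie_sent_to("static.foo.com", "foo.com", host_only=True)
# Analytics cookie set with Domain=.foo.com (not host-only):
assert cookie_sent_to("static.foo.com", "foo.com", host_only=False)
```

This matches the observation: the host-only session cookie never reaches
static.foo.com, while the Domain-scoped analytics cookie does.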

------
marcosdumay
Well, the obligatory counterpart:

[http://no-www.org/](http://no-www.org/)

~~~
EugeneOZ
"Technical part" is not as convincing.

P.S., About cookies: Don't use cookies, LocalStorage is much better.

~~~
tiglionabbit
It doesn't accomplish the same thing. Cookies are added to every web request,
while you have to supply your localstorage values manually if you want to use
them. At the least, this requires JavaScript. It's also impossible to do for
regular clicks on links and when loading images and scripts. Sometimes you
want to authenticate those too.

~~~
EugeneOZ
Ok, I'll rephrase: only use cookies when you want to send them with requests
to static files. But don't forget: you will not be able to use a CDN in that
case.

~~~
vidarh
Nothing stops you from using a CDN with cookies as long as your app generates
proper caching directives and the CDN obeys them. The only reason this would
ever be a problem is if your app/server does not correctly mark pages that may
contain private/user-specific content accordingly.

Further, a common method to reduce the risk of this is to place purely static
public assets and user-specific private data on different domains.

~~~
EugeneOZ
And who will do the authentication process? The CDN? And different domains can
be an issue too.

> Further, a common method to reduce the risk of this is to place purely
> static public assets and user-specific private data on different domains.

In other words, "don't use a CDN when you need cookies". Fine.

~~~
manigandham
A CDN is just a reverse proxy with caching. The "proxy" part means anything,
including cookies, can be transferred from the end-user to your origin server.

There is no issue using CDNs with cookies.

~~~
EugeneOZ
No, CDN means content delivery network, and it will not wait for your server's
response on each request; otherwise it would kill the whole point of a CDN's
existence.

~~~
manigandham
Yes, that's what the letters CDN stand for and is just a marketing term.

Technically, they are reverse proxies with a focus on caching, however that is
not all they do. You can use them for various other features like security and
front-end optimization and they work fine with cookies.

It's common to use a CDN for the entire site - caching static files at the
edge while sending page requests to the origin server with all the cookies,
especially important as many sites are now dynamic and customized to the
individual. There is no issue in using a CDN to proxy all requests.

~~~
EugeneOZ
Maybe you haven't noticed, but we were talking about authorized access to
static files using cookies. And I was telling you there's no point in using a
CDN if EVERY request will be sent to the main server (it will work even
slower). Now you are trying to explain to me that a CDN can send SOME requests
to the main server (for dynamic pages) while delivering static files from the
point nearest the user, without authentication by cookies.

~~~
manigandham
I don't see where in this thread that became the topic; rather, it's been
about cookies in CDNs. You still seem to think that CDNs are only used for
caching, when that is just one of their features. You can also use them for
security, for example, without using any caching at all.

If you need to authorize every single request (even to a static file), which
means that every single request is unique, then this is obviously not a good
use case for an edge-server cache. You can still cache things in the browser
with cache headers and continue using the CDN to proxy the full request, with
cookies, to the origin. This doesn't add much latency and can sometimes
decrease it, because the CDN will keep faster connections open to the origin.

However, most CDNs today also offer their own access controls either with
cookies or url tokens so you can do authorization at the CDN edge instead of
the origin.

And yes, you can use the CDN to cache static files and just proxy requests to
pages. Usually the private information that requires authentication is in the
webpage, and static files like JavaScript, CSS and images don't need
protection.

------
deathanatos
If SRV[1] records had been a thing sooner, we could have had our cake and
eaten it, too. SRV records encode the protocol into the DNS entry. If you
wanted the HTTP server for example.com, for example, you'd look up the SRV
record for _http._tcp.example.com. You get back the IP _and port_ of the host
to connect to.

If you had a hosting provider, you could CNAME _http._tcp.example.com to your
hosting provider. Naked domains + CNAME works as expected.

(And you can weight records, assign priorities…)

[1]:
[https://en.wikipedia.org/wiki/SRV_record](https://en.wikipedia.org/wiki/SRV_record)
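For illustration, a hypothetical zone fragment for the HTTP service might look
like this (the SRV data fields are priority, weight, port, target; all names
and numbers here are invented):

```
; clients wanting the HTTP service for example.com look up the SRV name
; instead of resolving example.com directly
_http._tcp.example.com.  3600 IN SRV 10 5 8080 web1.example.com.
_http._tcp.example.com.  3600 IN SRV 20 5 8080 backup.example.com.
web1.example.com.        3600 IN A   192.0.2.10
```

A client would try the lowest-priority target first (web1, priority 10) and
fall back to backup only if it fails, using the weights to balance among
targets of equal priority.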

~~~
drdaeman
No. The main issue here is cookie control, or, to be exact, the complete lack
of it.

You can't tell user-agents "the cookie I set must be valid for example.org and
websocket.example.org but no others" (the example is crude and non-scalable;
real-world semantics of this would have to be different), which leads to all
sorts of problems: heavier static-media requests, mixed cookie state if you
use staging.example.org for a pre-production environment, inability to provide
third parties with subdomains for their UGC, etc. All of this can be solved,
but none of it is really convenient.

If there were a way to have good control over cookie scoping, a lot of hacks
would be gone and the www/non-www distinction would be purely cosmetic in most
cases. Granted, not all; SRV records are still a good idea.

------
r3bl
I'm using a naked domain. The biggest reason why is that my domain name is as
minimalistic as it can get (six characters long, seven if you count the dot),
and I sure as hell love telling people to just type in my alias and add .me at
the end.

Although I did not know about these issues, I have to say that the source
really didn't give me strong enough arguments to convince me to go through the
hassle of making the switch.

With that being said, I opened up a couple of links from the source and I will
look through them and see if they'll change my mind.

~~~
akcreek
301 redirect the naked domain to the www and you can keep telling people the
same thing - they will be redirected to the www version seamlessly.

~~~
clessg
You can do that, but the redirect will affect initial load times. Might be
worth it, might not be.

------
bobfunk
We also strongly discourage users from using naked domains, unless they have a
DNS host that supports ALIAS records or CNAME flattening.

I wrote a post with all the details around why it's best to use www and why
naked domains can be really bad for performance and uptime:

[https://www.netlify.com/blog/2016/01/12/ddos-attacks-and-dns...](https://www.netlify.com/blog/2016/01/12/ddos-attacks-and-dns-records)

~~~
tamana
Your post is grey, light-weight text, which are two reasons people won't read
it.

~~~
megablast
And the background isn't even white. Come on!

~~~
switch007
And "font-weight: 300", naturally. Argh!

------
chrisblackwell
> You should use www because today you have a small web site, and tomorrow you
> want a big web site. Really big.

So Twitter, Pocket, Github, Trello...are all doing it wrong?

I really don't think this matters much, and I think that the non-www version
makes a web address so much more readable.

~~~
ymse
It only matters if you intend (or require) to CNAME your main site off to some
other DNS name. If you're serving your entire site from S3 for example, you
can't just alias yoursite.com to your-bucket.s3.amazonaws.com, but you can
CNAME www.yoursite.com and have a small server sitting on yoursite.com sending
a 301 redirect for every request.
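That small redirect server can be sketched in a few lines. A hypothetical
setup, not a description of any particular site; `www.yoursite.com` is a
placeholder for your canonical host:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

WWW_HOST = "www.yoursite.com"   # hypothetical canonical host

def redirect_location(path: str) -> str:
    """Target URL for the 301: same path and query, canonical www host."""
    return "http://" + WWW_HOST + path

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Permanent redirect so clients and crawlers learn the canonical name.
        self.send_response(301)
        self.send_header("Location", redirect_location(self.path))
        self.end_headers()

def run(port: int = 8080) -> None:
    """Listen on the naked domain's address and bounce everything to www."""
    HTTPServer(("", port), RedirectHandler).serve_forever()

assert redirect_location("/about?x=1") == "http://www.yoursite.com/about?x=1"
```

In practice most people delegate this to their web server or DNS provider
rather than running a separate process, but the logic is this simple.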

Github and Twitter are large enough to not care. And if you're using something
like Cloudflare, they can just take over your IP address with BGP, no DNS
trickery needed.

I've worked as a sysadmin for more than 10 years now, and never realized that
it's not possible to CNAME the domain root. It's good to keep in mind, but in
most cases there are other workarounds.

This is also why "naked domains" set up a whole other domain for static files
rather than "static.yoursite.com", to avoid the "top-level" cookie
(megacookie?) being sent with every request.

~~~
developer2
By the way, even the CNAME on root domain is no longer a concern if you are
already using AWS. Route 53 has supported CNAME on root domain for quite some
time now: [https://aws.amazon.com/blogs/aws/root-domain-website-hosting...](https://aws.amazon.com/blogs/aws/root-domain-website-hosting-for-amazon-s3/)

~~~
treve
It's not a true CNAME, but rather a sort of server-side translator to make it
behave as a CNAME while really just returning A/AAAA records. It's more like
the 'ALIAS' record.

------
jv22222
As PG says, when you're starting out, do things that don't scale.

My advice is don't over optimize _anything_ until you actually need it.

In the early stages a naked domain is a branding decision that looks nicer
than old school www, to my eyes, at least.

You can avoid the cookie issue by not using any subdomains and sending all
your calls to sub-URIs on a single host, such as yoursite.com/api/ and
yoursite.com/blog/

If you're running your own infrastructure on AWS, for example, you can start
your scaling efforts simply with load balancers and multiple instances all
pointing to your naked domain. That's going to get you pretty far until you're
so big that you need geo scaling & distribution.

Then, if you need geo scaling, such as Akamai or other geo load-balancing
solutions, you can start to redirect your traffic away from the naked domain
to www or whatever.

------
mythz
I frequently use naked domains as they "read back" better on every site I use
them on; it removes visual cruft and means customers have fewer characters to
recall.

Other websites I frequent that use naked domains include:

      - stackoverflow.com
      - github.com
      - twitter.com
      - stripe.com

~~~
erikb
I'm surprised that they really don't redirect. It would be interesting to see
how they deal with the problems mentioned. As always, there are probably
different solutions to the same problem.

~~~
speakeron
They do redirect, but it's actually to go from the www to the naked domain.
This always leaves the naked domain in the URL bar and is, to me, a clean
look.

------
tantalor
See also these directions for redirecting naked domain to www,

[http://www.yes-www.org/redirection/](http://www.yes-www.org/redirection/)

Unfortunately, some hosting providers make this very difficult or impossible.
I struggled to get this working on a Google Site hosted at a Google Domain
using their DNS tools; it seems Google Domains doesn't have a basic "redirect
to www" feature. Eventually I gave up and used Dreamhost's nameservers
instead; they offer this feature (and its free; you don't need to pay for
hosting).

~~~
Sami_Lehtinen
wwwizer - [http://wwwizer.com/naked-domain-redirect](http://wwwizer.com/naked-domain-redirect)

------
kmeisthax
Or, just use a DNS nameserver that can emulate an apex CNAME, if you are that
concerned about letting a third-party renumber their servers at the drop of a
hat. I know CloudFlare can do this and it's a feature that standalone DNS
nameservers should support.

------
spinningarrow
> The technical reasons to use www primarily apply to the largest web sites
> which receive millions (or more) of page views per day

How do GitHub and Twitter for example deal with this? Do they have to go
through a lot of hoops in order to not use 'www'?

~~~
ef4
> Do they have to go through a lot of hoops in order to not use 'www'?

No, they don't. The claim that apex domains can't be used with CDNs is badly
out of date.

Even a free-tier Cloudflare account supports CNAME flattening, which solves
the problem just fine.

[https://support.cloudflare.com/hc/en-us/articles/200169056-C...](https://support.cloudflare.com/hc/en-us/articles/200169056-CNAME-Flattening-RFC-compliant-support-for-CNAME-at-the-root)

------
nailer
Previously:
[https://news.ycombinator.com/item?id=7961415](https://news.ycombinator.com/item?id=7961415)

------
christianbryant
While the original post was not specific to InfoSec, it does raise the
question of whether there are any security implications to using or not using
"www" in your domain names. Does a naked domain pose any risk? I can't find
any suggestion to this effect in the discussions on the topic so far, but it's
a good question to ask in case those of us who do prefer naked domains are
missing something.

~~~
robszumski
As mentioned on the site, there are some stipulations about cookies that are
fairly important if you're doing something sophisticated.

------
Spooky23
Missing reason: provides an anchor for weird domains.

Imagine being confronted with something.nyc, or something.cool, or
something.repair. WTF is it?

That www gives you some context.

~~~
djhn
Why not just [http://](http://)?

~~~
djsumdog
Exactly! [http://](http://) makes a lot more sense, and people are less likely
to tack a .com on the end

------
Nutmog
That's good news. I chose no-www because www seemed redundant and have been
wondering if it's a problem. Turns out no, unless it's very big, which I don't
expect my site to be (niche market, not a web app). It can even be changed
later by redirecting no-www to www.

Is this limitation a problem in the design of the domain name system, or
something quite natural and necessary?

~~~
yrro
I can't for the life of me figure out why web browsers don't yet do a query
for an SRV record named _http._tcp.example.com when the user browses to
example.com.

~~~
merbpoll
This came up again during http2. The browser vendors just aren't willing to
take on the extra latency at the start of the request.

~~~
zrm
Where is the extra latency? You can do DNS queries for example.com and
_http._tcp.example.com concurrently so that you get both answers in the time
of one round trip to the DNS server.

If there actually is a SRV record and the target isn't locally cached then you
would need to do another query, but that is _faster_ than establishing an HTTP
session with example.com, getting the 301 redirect and then having to do the
query anyway.

~~~
merbpoll
They didn't want to wait for multiple queries to finish. If you're interested
in a detailed rationale you can take a look through the mailing list archives
(though it'll be frustrating reading).

~~~
spc476
And that's because the major DNS servers don't bother with answering more than
one question per query. Nothing in the DNS specification limits the number of
questions to one. The only down side might be that the response to multiple
questions might not fit in the standard UDP DNS packet.

~~~
merbpoll
DNS servers can't answer more than one question per query because NXDOMAIN is
signalled in the message header and so when QDCOUNT is greater than one the
response becomes ambiguous.
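The ambiguity is easy to see from the wire format: the DNS header (RFC 1035
section 4.1.1) carries a single RCODE for the entire message. A short sketch
of the header layout (the transaction ID is arbitrary):

```python
import struct

def dns_response_header(txn_id: int, rcode: int, qdcount: int) -> bytes:
    """DNS header per RFC 1035 section 4.1.1: ID, flags, QDCOUNT, ANCOUNT,
    NSCOUNT, ARCOUNT. RCODE (NXDOMAIN = 3) lives in the low 4 bits of the
    single 16-bit flags word."""
    flags = 0x8000 | (rcode & 0x000F)   # QR=1 (response), RCODE in low nibble
    return struct.pack("!HHHHHH", txn_id, flags, qdcount, 0, 0, 0)

# A response to two questions still has exactly one status field, so it
# cannot express "name 1 exists, name 2 is NXDOMAIN".
hdr = dns_response_header(0x1234, rcode=3, qdcount=2)
(flags,) = struct.unpack("!H", hdr[2:4])
assert flags & 0x000F == 3    # one RCODE covers both questions
```

So with QDCOUNT > 1, a single NXDOMAIN would ambiguously apply to the whole
message, which is why resolvers send one question per query in practice.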

------
lkbm
> If you are using www, then this is no problem; your site’s cookies won’t be
> sent to the static subdomain (unless you explicitly set them up to do so).
> If you use the naked domain, the cookies get sent to all subdomains (by
> recent browsers that implement RFC 6265)

From my understanding, pretty much all browsers DON'T send google.com cookies
to subdomains -- they only send .google.com cookies to all subdomains.

This seems backed up by their cited RFC 6265[1]:

> Unless the cookie's attributes indicate otherwise, the cookie is returned
> only to the origin server (and not, for example, to any subdomains)

Am I just confused?

[1]
[http://tools.ietf.org/html/rfc6265#section-4.1.2](http://tools.ietf.org/html/rfc6265#section-4.1.2)

------
excitom
I dislike www simply because it is the world's worst acronym, having three
times as many syllables as the words it replaces.

~~~
njharman
dub dub dub ?

------
Palomides
you don't need to do a mod_rewrite thing, btw, just do:

    <VirtualHost *:80>
        ServerName whatever.com
        Redirect permanent / http://www.whatever.com/
    </VirtualHost>

~~~
perlgeek
That either doesn't redirect "deep" links, or it redirects them to the start
page.

~~~
walod
Got any proof of a case where this happens? I'm using it to redirect to
[https://](https://) and it redirects deep links to the right deep links, and
nothing goes to the home page

------
frogpelt
www bothers me for mainly one reason. It is a pain to say.

It is one of the few abbreviations that takes longer and more effort to say
than the unabbreviated words themselves: world wide web.

~~~
mgkimsal
And... few people actually pronounce it out.

I've lost track over the years of how many times I've heard major media
outlets give addresses as "double-you double-you double-you ourdomain dot
com". Typing what they actually say will get you nothing, unless they were
sharp enough to also get wwwourdomain.com

------
dzuc
some DNS services are providing "ANAME"
[http://www.dnsmadeeasy.com/services/anamerecords/](http://www.dnsmadeeasy.com/services/anamerecords/)
for instance

~~~
brandur
Exactly! It's a little questionable that the linked website doesn't mention
the existence of these types of records considering how widespread they are
these days. Every DNS host I've used in the last five years has supported
something like "ANAME". For example:

* CloudFlare's "CNAME Flattening": [https://blog.cloudflare.com/introducing-cname-flattening-rfc...](https://blog.cloudflare.com/introducing-cname-flattening-rfc-compliant-cnames-at-a-domains-root/)

* DNSimple's ALIAS record: [https://support.dnsimple.com/articles/alias-record/](https://support.dnsimple.com/articles/alias-record/)

* Route 53's alias records: [http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/res...](http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-choosing-alias-non-alias.html) (although these need to go to an ELB, CloudFront distribution, S3 bucket, or Elastic Beanstalk environment)

~~~
merbpoll
Those are non-standard proprietary extensions that aren't universally
available. Omitting them is not questionable.

------
endemic
What a bizarre advocacy site. If you need the subdomain, you'll use it.
Otherwise, why make such an impassioned plea for 'www'?

------
chrisweekly
Use a CNAME. Here's another good explanation:
[https://www.netlify.com/blog/2016/01/12/ddos-attacks-and-dns...](https://www.netlify.com/blog/2016/01/12/ddos-attacks-and-dns-records)

------
braum
brand awareness and recall is king for small niche companies. we removed the
www to bring our brand, a unique but simple domain, into focus. IMHO, if you
have a trade name and/or trademarked domain name and wish to get more organic
traffic, using a naked domain is the best choice.

------
Smudge
yes-www is clearly the more practical solution based on current technology and
its limitations. And no-www ([http://no-www.org/](http://no-www.org/)) is
clearly the more aesthetic solution.

Either is perfectly valid, and depending on who you ask you'll get reasonable
arguments in favor of both. That said, neither choice will ultimately make any
difference. Period.

Even if your site becomes the next Google, the minor hurdles of a no-www
domain vs the technical advantages of a yes-www domain will not make an ounce
of difference. Anyone who tells you otherwise is straight up fooling
themselves.

------
Sidnicious
I found this article hard to read. It seems to boil down to three points:

1\. Some hosts might want you to use a CNAME record to send them your web
traffic, and CNAME records are undesirable for apex domain names because they
can't coexist with other records you might need (like MX).

2\. Cookies set on the apex domain will be sent to subdomains.

3\. Older browsers may not let the apex domain read cookies set by subdomains.

Is that right?

(CloudFlare deals with (1) by just hosting your DNS, and I don't have enough
experience with (2) and (3) to have strong opinions on them.)

~~~
merbpoll
> 1\. Some hosts might want you to use a CNAME record to send them your web
> traffic, and CNAME records are undesirable for apex domain names because
> they can't coexist with other records you might need (like MX).

Nitpick: With the exception of RRSIGs, a CNAME record must be the only record
at a given owner name. Since the apex of a zone has a SOA record, a CNAME
cannot exist there.
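Concretely, a hypothetical zone illustrating the conflict (all names here are
invented):

```
; the apex always carries SOA and NS records, so a CNAME cannot live there
example.com.      3600 IN SOA   ns1.example.com. admin.example.com. (
                                2016010101 7200 900 1209600 300 )
example.com.      3600 IN NS    ns1.example.com.
example.com.      3600 IN CNAME host.provider.example.   ; illegal alongside SOA/NS
www.example.com.  3600 IN CNAME host.provider.example.   ; fine: nothing else here
```

This is why CNAMEs work at www.example.com but not at the zone apex, and why
providers invented ALIAS/ANAME/flattening records to fake it.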

------
mattiemass
Welp, stupidly I dropped the www with zero research because I saw others do
it. Love the explanation of why keeping the www is useful.

~~~
brianwawok
I did too. Then I wanted to host a webpage out of google cloud drive. Oops,
doesn't support naked domains. Back to www in the future.

------
jpswade
Interesting that I responded to this in 2007 and it still rings true today!

[http://wade.be/yes-www/](http://wade.be/yes-www/)

Somewhat ironically, in 2016 I've since moved to no www on my none business
critical blog.

~~~
theoh
Pedantic correction: that should be "non business-critical".

------
zhte415
Many, many websites fail to display anything if www is not prepended to the
URL, particularly those of small businesses. If anyone is looking for a
business-development niche selling basic consulting to small and medium-sized
companies, it is this.

------
downtide
I've always hated the www prefix. But understand the technical gains of using
one. If you are a domain owner, subdomains give you quite a bit of
flexibility. You can always use a different prefix than www.

------
jacobsenscott
Having run a website on an apex domain for over 5 years I can tell you all the
mentioned issues are easy to overcome.

But if you are a hobbyist or a novice at running a website (most startups),
then this is good advice.

------
z3t4
... Or you could just let your CDN control DNS.

I think www is a relic from the days where you only had one host per server
...

Or do you want to slap www in front of everything, like
www.news.ycombinator.com !?

------
patsplat
Don't get why blocking cookies to static.domain.com is a desired feature,
especially when sharing cookies between app.domain.com and login.domain.com
would be desirable.

------
arca_vorago
With DNS and CAs as broken as they are, I'm skipping this and wondering
whether to just go back to using IP addresses. (I am also the resident
contrarian, so...)

------
Scirra_Tom
The "having to buy different domain names" to server static content seems like
a really minor downside. Cookie point is a good point though.

------
alexshye
Empirically, www seems to have won -- a large fraction of the top consumer
sites online have picked it. One could make UI/UX-type arguments for no-www,
but sites like Facebook and Pinterest have some of the best UI/UX people in
the world, and have still picked www.

Why is this the case? What are the top reasons for www? Is it the cookie
thing? I've heard that www lets you play DNS tricks also, but haven't seen
more details on this.

I'd love to hear more about this choice from people who understand the
decisions at top Internet companies.

~~~
darylteo
I'm a convert from non-www to www. Just avoids security implications re:
cookies and no need to buy secondary domain for static assets. A simple
redirect mitigates the "ux" argument that the domain looks less nice to type
or read. Simple implementation with more pros than cons.

Should add it's not a hard and fast rule, and depends on the use case. But the
default question for me nowadays is "why no-www" and not "why www".

------
hartator
Your users won't save 4 chars...

------
tomphoolery
"www." is just 4 more characters I have to put on the flyer. no-www til I
die!!

------
vonklaus
I still don't think this makes sense. You can just get a static IP and point
it at a load balancer.

How would this help if your site fails? You can CNAME undercarriage services
like api and blog.

What do you gain from www?

------
pbhjpbhj
Yes, but what about the trailing slash ...

------
robot
whichever the reasons, I should take the toll by typing www every time? life
is too short.

~~~
mastazi
No, you shouldn't; the article suggests server-side redirection so you don't
have to type www in front of the URL. Many of the highest-traffic websites
already do that, Twitter being a notable exception.

------
patsplat
www is a relic of a different time, when networks were client-server first and
http was an afterthought. Now it is expected that the root domain at minimum
bring up a single page description of the org.

Add the www cname with a redirect to the root, start phasing www out.

------
tamana
I am glad to have some rando explaining how Google's domain names are wrong.

------
hasenj
Two reasons are listed:

\- DNS records

\- Cookies

Both of these things seem to have been designed with "www" in mind.

So the reason boils down to: original specs of various technologies are
optimized around the assumption that your site will have a www (or more
accurately: a subdomain rather than a naked domain).

I find "www" aesthetically unpleasant. If one must use a subdomain, how about
"web" instead? It's still three characters, but much cleaner.

~~~
brianwawok
"web" would also satisfy all of the requirements, at the expense of not
following convention.

