Should you use www or not in your domain? (2017) (bjornjohansen.no)
332 points by amingilani 71 days ago | 171 comments


" One of the reasons why you need www or some other subdomain has to do with a quirk of DNS and the CNAME record.

Suppose for the purposes of this example that you are running a big site and contract out hosting to a CDN (Content Distribution Network) such as Akamai. What you typically do is set up the DNS record for your site as a CNAME to some akamai.com address. This gives the CDN the opportunity to supply an IP address that is close to the browser (in geographic or network terms). If you used an A record on your site, then you would not be able to offer this flexibility.

The quirk of the DNS is that if you have a CNAME record for a host name, you cannot have any other records for that same host. However, your apex domain example.com must have NS and SOA records. Therefore, you cannot also add a CNAME record for example.com.

The use of www.example.com gives you the opportunity to use a CNAME for www that points to your CDN, while leaving the required NS and SOA records on example.com. The example.com record will usually also have an A record to point to a host that will redirect to www.example.com using an HTTP redirect."
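The rule the quote describes can be sketched in a few lines. The toy zone below is made up, but the constraint is the real RFC 1034 one: a CNAME must be the only record at its name, and the apex always carries SOA and NS records.

```python
# Toy version of the RFC 1034 constraint: a CNAME must be the only record
# at its name, and the zone apex always carries SOA and NS records, so a
# CNAME can never go there. The zone content is made up.
def can_add_cname(zone, name):
    return len(zone.get(name, [])) == 0  # any existing record rules a CNAME out

zone = {
    "example.com.": [("SOA", "ns1.example.com. hostmaster ..."),
                     ("NS", "ns1.example.com.")],
}

print(can_add_cname(zone, "example.com."))      # False: apex has SOA/NS
print(can_add_cname(zone, "www.example.com."))  # True: nothing else at www
```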

A lot of DNS providers these days will give you a pseudo-cname on apex... basically having the dns resolver do a lookup of another dns name and return that as an A record for the apex.

CloudFlare calls this CNAME flattening, right? [0][1] Personally, I always enjoy engineering solutions that mean we're not stuck with old decisions forever. I chose the non-www as a teenager, and I'm glad 10+ years later I could add email to my domain no problem.

[0] https://blog.cloudflare.com/introducing-cname-flattening-rfc...

[1] similar discussion, 2014: https://news.ycombinator.com/item?id=7293512
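Roughly, a flattening resolver chases the CNAME chain itself and answers the client with a plain A record at the apex. All names and addresses in this sketch are made up for illustration.

```python
# Rough sketch of what CNAME flattening / ALIAS records do: the
# authoritative server follows the chain itself and hands the client a
# plain A record at the apex. All names/addresses are made up.
RECORDS = {
    ("example.com.", "ALIAS"): "cdn.example-cdn.net.",
    ("cdn.example-cdn.net.", "CNAME"): "edge1.example-cdn.net.",
    ("edge1.example-cdn.net.", "A"): "192.0.2.10",
}

def flatten(name, max_depth=8):
    """Follow ALIAS/CNAME links until an A record is found."""
    for _ in range(max_depth):  # depth cap guards against loops
        if (name, "A") in RECORDS:
            return RECORDS[(name, "A")]
        for rtype in ("ALIAS", "CNAME"):
            if (name, rtype) in RECORDS:
                name = RECORDS[(name, rtype)]
                break
        else:
            return None  # dead end: no record of any useful type
    return None

print(flatten("example.com."))  # 192.0.2.10 -- the apex answer is just an A record
```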

Except that the IP that your DNS provider resolves your apex to may be on the other side of the planet.

Fine if all you are doing is a 302 to the www variant, but otherwise no.

This is where the EDNS Client Subnet (ECS) extension comes in handy. It allows the resolver to pass along the /24 the user's IP address is in.

With an extra caching key, this can even be cached.

See https://developers.google.com/speed/public-dns/docs/ecs
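The prefix truncation ECS performs is simple; here is a sketch with Python's stdlib ipaddress module. /24 is the typical IPv4 scope, not a mandate, and the client address is a documentation-range placeholder.

```python
# Sketch of the ECS truncation: the resolver forwards only a prefix of
# the client's address (typically /24 for IPv4), which can also serve as
# an extra cache key. The client IP below is a placeholder.
import ipaddress

def ecs_prefix(client_ip, prefix_len=24):
    """Truncate a client address to the network a resolver would forward."""
    net = ipaddress.ip_network(f"{client_ip}/{prefix_len}", strict=False)
    return str(net)

print(ecs_prefix("203.0.113.57"))  # 203.0.113.0/24
```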

And that also only helps if your DNS provider and the client's DNS servers also pass along that information correctly.

> "the IP that your DNS provider resolves your apex to may be on the other side of the planet"

Anycast addresses this issue, right?[1] Cloudflare uses Anycast for their IP addresses.[2]

[1] https://en.wikipedia.org/wiki/Anycast

[2] https://www.cloudflare.com/learning/cdn/glossary/anycast-net...

Only if your CDN uses Anycast. Not all of them do.

Yes. AWS Route 53 can do this for root or non-root records. They call these "ALIAS" records.

Those only work for AWS services though; Cloudflare CNAME flattening works with any endpoint, since Cloudflare's DNS servers resolve the target themselves.

I wish there was a standard way to do the same thing. Route 53 is nice when I can use it, but it causes me pain on a regular basis because not all the domains I deal with are on Route 53.

> The quirk of the DNS is that if you have a CNAME record for a host name, you cannot have any other records for that same host. However, your top level domain example.com usually must have an NS and SOA record. Therefore, you cannot also add a CNAME record for example.com.

I discovered this when using a CNAME for a root-level domain and then wondering why I had spotty mail delivery. Turns out, quite a few mail systems and/or DNS resolvers handle this fine - but there are still quite a lot that don't.

Wouldn't anycast be a solution? Then the CDN can provide the same IP to all users, but the network layer ensures that the IP is one close to the user.

Expensive, but yes, anycast solves it.

Curious how expensive. Obviously I realize I have to colocate or provision at least two servers, but beyond that...

- do I own the IP address(es), and BGP-route them to the machines in question?

- can I use any provider (who is willing to do the required configuration)? As a specific example, could I run anycast between two boxes purchased through Hetzner auction? (Translation: considering that I'm going fishing around in the auctions as opposed to other options, would I even be listened to? heh)

- who am I actually paying, and for what? (besides power, bandwidth, and possibly the server itself)

- ...how does anycast actually work _within the context of using it for hosting_? :/ https://en.wikipedia.org/wiki/Anycast is... distinctly not contextually-scoped to my intentions.


Source: had to navigate the shitty position of trying to CNAME to a CDN and have that CDN's DNS infra replicate our e.g. MX records.

The reasons given for using www are honestly very weak, just edge cases really.

As far as cookies are concerned, keeping them on the origin ensures they get passed to all subdomains, which is usually a benefit, as opposed to a problem -- which you'll discover when you need to restrict API requests on a subdomain only to logged-in users, for example. And as long as you're keeping your cookie payload small, like a session ID or two, there's zero worry about a performance hit.

And as far as a CNAME needing to point to another domain instead of an IP, has that ever been an issue for anyone? Genuinely curious. I'd never even heard of that until now.

Honestly, simpler is better and "www." is an unnecessary vestige from another era. I dropped it from my sites starting a couple years ago. (Obviously with a redirect in case anyone ever types it.)

Sending session tokens to every subdomain, and also potentially leaking other sensitive data, is a pretty strong case in my opinion.

Of course, this isn't something that has a concrete right and wrong answer. It's more a question of how much you trust your subdomains. Personally, I think sharing that information should be a conscious choice when setting the cookies and not something that you let happen by default.

I agree. It doesn’t exactly seem like edge cases to not want to propagate front end cookies to internal services. IDK, seems like an issue with cookies just as much as www or no-www.

CNAME is absolutely an issue, especially when using managed hosting. Many hosts require that you CNAME since they move around IPs. (e.g. Heroku) so unless you put a provider in front of them that integrates with your DNS you need www.

You can add AWS, Azure, and GCP to this list. Most people don’t want to lay out the money for a static IP, and why should they?

Not true for App Engine: we have a fixed set of IP addresses to be used in A/AAAA record.

Disclaimer: I work on App Engine.

This FAQ on App Engine specifically says you guys can’t map a static IP to a site:


That is something different. If you want to run your app on a naked domain, they give you a set of A/AAAA records you can use.

Search in that same document for "naked domain".

you should read before you write.

> I'd like to map my app to a naked domain (such as http://example.com).


> App Engine does not currently provide a way to map static IP addresses to an application

the latter is that you own an IP and want to map it against your appengine.

the former is example.com -> appengine site.

(not related to google)

We solved this by having cloudflare manage our dns and they offer something like "cname to ip".

You sidestepped it, rather than really solved it: you got rid of the CNAME issue by trading it for a proxy availability issue. It works--of course it works, it's literally why cloudflare exists--but now your service availability is directly tied to cloudflare's availability. And cloudflare glitches often enough that plenty of people run into their 5xx pages on a daily basis across the web.

So you traded an intermittent problem (only relevant when an IP change occurs) for 24/7 guaranteed service outages for a (probably very) low percentage of your intended demographic.

Cloudflare provides CNAME flattening without enabling Cloudflare CDN on a domain, so you only have to be dependent on their DNS infrastructure, not their CDN/proxying infrastructure.

"Now your service availability is tied to your DNS service's availability" is always true.

Would you like me to edit it so it says "Now your service availability is tied to not just the global DNS system that literally anything that wants to exist on the web with not just an IP but a named host relies on, but a secondary routing entity that sits on top of standard DNS and does fancy things"?

Because I'm pretty sure we all know what I meant when I pointed out using Cloudflare means gaining a convenience at the expense of being at the mercy of third party outages.

For pretty much any DNS service you use.

It could just be a stopgap in the meantime while they write the true solution? Just a thought.

Cookies can be scoped to a parent domain. So www.example.com can scope its cookies to example.com and have them accessible to api.example.com.
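That's the RFC 6265 domain-match rule; a minimal sketch (hostnames are made up):

```python
# Minimal sketch of the RFC 6265 domain-match rule: a cookie set by
# www.example.com with Domain=example.com is sent to every *.example.com
# host; without the Domain attribute it stays host-only.
def domain_match(request_host, cookie_domain):
    return request_host == cookie_domain or request_host.endswith("." + cookie_domain)

print(domain_match("api.example.com", "example.com"))      # True: shared with the parent scope
print(domain_match("api.example.com", "www.example.com"))  # False: host-only cookie on www
```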

Edge cases are good reasons.

In some cases a single edge case can be the reason to make a decision.

If your attitude is “working in 99% of cases is good enough”, that means over a long enough period of use, 100% of your users will encounter an edge failure.

The way to even have a shot at giving 99% of your users a good experience is to constantly let edge cases force your hand.

Only if edge cases are uniformly distributed, which is rarely the case.

No, they needn't be uniform.

If you plan to use a CDN in front of your site at the apex, it severely constrains your options. You either need to pick a CDN that only uses anycast, so you can plop in a static A record, or you need to delegate your domain to their nameservers so they can send different resolvers different records. Those aren't necessarily bad options, but it leaves out a lot of CDNs that either don't do anycast or do a mix of anycast and unicast, and it leaves out the possibility of using geodns to use one CDN in one region and another in other regions.

I'm a bit scale focused though. If your site is never going to need a CDN, you can of course pick aesthetics over serviceability. But if this is something you hope to one day get big, you might as well make an easy decision to prepare for it at the start. You will want to support any inbound links you get for as long of forever as possible, so it's nice if they are to the right hostname to start with.

If you run a high traffic website with geographic redundancy, being able to quickly switch your DNS is essential.

After a few years at a registrar, my experience is it comes up quite often enough. It's rare that the person is concerned enough to change to a DNS provider that allows alias/ANAME records, but people do ask about it.

Being able to have cookies restricted to a subdomain is kinda one of my favourite things about subdomains. I just like the idea of requests being minimal, and when you have a lot of static public assets, you can put them on another subdomain without cookies. That no longer works if you have a lot of cookies on the origin, since those get sent always.
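Back-of-envelope for why that matters, with entirely made-up cookie sizes and request counts:

```python
# Back-of-envelope for the static-asset point: every cookie scoped to
# example.com rides along on each request to static.example.com too.
# Names and sizes below are entirely made up.
cookies = {"session": 32, "csrf": 32, "ab_test": 16, "analytics": 64}

# Each cookie contributes roughly name + "=" + value + "; " to the header.
header_overhead = sum(len(name) + 1 + size + 2 for name, size in cookies.items())
requests_per_page = 40  # hypothetical number of asset requests per page view

print(header_overhead, "bytes of Cookie header per request")
print(header_overhead * requests_per_page, "extra upload bytes per page view")
```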

What difference does it make if it is written "www" or not? "www" is a subdomain like any other and works exactly the same.

CNAMEs can’t point to an IP, not sure what you even mean by that.

Netlify explains the issue in pretty good detail. https://www.netlify.com/blog/2017/02/28/to-www-or-not-www/

What do you mean explains it? CNAMEs can’t point to IPs. This article doesn’t say otherwise. Also this article says you should use “www”.

The post you replied to also said a CNAME can't point to an IP:

"as far as a CNAME needing to point to another domain instead of an IP"

So, it's unclear to me who you're arguing with. I thought a post explaining why using a CNAME on an "apex domain" is potentially problematic might be helpful. Guess not.

To me, that sentence reads as that it's possible for a CNAME to point to an IP, but that someone has a need to point it to a domain. Which makes the sentence incorrect, since it assumes something impossible, and explains hacknat's urge to correct it.

Let's take a look what the top sites according to Alexa do:


    Google: www
    Youtube: www
    Facebook: www
    Baidu: www
    Wikipedia: www
I see a trend here...

Would be interesting to go through the whole list and get a broader statistic.

Let's look at some of the newer/hipper sites on the list:

    Reddit: www
    Instagram: www
    Netflix: www
    Twitch: www
    Spotify: www
So all 10 out of 10 major sites I visited redirected me from the non-www to the www version.

The list of sites you're referring to are mostly older sites, such that they come from an era on the Web when www was overwhelmingly the status quo. What would be more interesting is for someone to compile a top 30 or such list of new & popular sites from the last five years.

The ages of the sites on your list, in years:

Netflix 21; Google 20; Baidu 18; Wikipedia 17; Facebook 14; Reddit: 13; YouTube 13; Spotify 12.

Only Twitch and Instagram are less than a decade old (barely).

Netflix isn't new or hip, they have all-gray hair and are at heightened risk of breaking a hip in Web age terms.

Some popular sites without www:

Twitter, DuckDuckGo, Stripe, Imgur, TechCrunch, Bandcamp, Medium, Stackoverflow, Square, Github, OfferUp, Poshmark, Giphy, Lifehacker, Gfycat, 22 Words, Go.com, Discord, 24/7 Sports, Nextdoor, Mental Floss, Vimeo, thechive, Comicbook.com, Patch, Mega.nz, Food52, HBO Go, Phys.org, Gizmodo, Pitchfork, Padlet, Definition.org, Shmoop

What the heck does age have to do with anything? Do you think that Google and Facebook don't know how to redirect from www to the root domain because they're too old to figure it out or something?

This just further shows how it's a weird cultural issue for some people.

> What the heck does age have to do with anything?

A lot.

If your site is 15 or 20 years old, you're from an era where consumers universally expected www to preface a domain. 20 years ago consumer ignorance was extremely high as it pertained to how the Web worked and how addresses worked, any variations from what was normal would be a poor choice. It was also technically more annoying 20 years ago to try to get by without www, companies like Cloudflare have helped make that far more trivial as an issue. Once you've become very successful such that tens or hundreds of millions of people are using your service and you've firmly established your address (eg as www.facebook.com), the downside risk is greater than the upside potential to bother switching from www to non-www at that point. If you're Facebook and you've acquired a sweet 2 billion person social monopoly, there's zero potential value in messing with switching from www.fb.com to fb.com.

Further, if you've built up an elaborate service over many years using www and attaching cookies and subdomains to countless services, there can be a plethora of super annoying problems (and or very serious security risks) to deal with in switching to non-www for your core. So once again, for someone like Facebook or Google, the value proposition to switching a gigantic service over, is more of a nightmare. If you're just starting out with none of that legacy, such things are not a problem, you build from the ground up for non-www.

If you're Twitter, Imgur, Github or Stripe and you want the identity benefit of having no www in the front - it produces a better url for advertising, a shorter url for function (valuable historically for eg Twitter and Imgur), and stronger visual branding (less cruft around your name) - then you avoid www from the early days.

More recent companies that redirect from the root to www: Airbnb, Uber, Lyft, WeWork, SpaceX, DiDi, Coinbase, Blue Apron, Stitch Fix, The Information, Flexport.

Gitlab goes one better and redirects the root domain to about.gitlab.com.

How about Bird scooters, founded 2017? Yup, redirects to www.

Let's look at some recent Y Combinator grads [1]. It's about half and half whether they redirect to www.

[1] https://techcrunch.com/2018/08/22/the-top-10-startups-from-y...

Whatever factor you think age plays in the www subdomain decision, it doesn't seem to be reflected by real companies.

Twitter is a notable exception: www.twitter.com redirects to non-www.

Wikipedia’s `www` version is just a page asking you which language you want: en.wikipedia.org, fr.wikipedia.org, etc.

And how many subdomains does each of those examples have?

Google: a lot, most of their services have one. However, I believe they use an OAuth (-ish) service rather than session IDs to manage authentication, so cookies aren't really an issue.

YouTube: no major subdomains as far as I know. Facebook: aside from developer resources (which require you to log on with Facebook), none really.

Baidu: not familiar with it, so no idea.

Wikipedia: one for every language. Furthermore, the Wikipedia of each language seems to be completely separate, both in content and accounts.

Reddit: a lot. Aside from the obvious api.reddit.com, users may use <whatever>.reddit.com and the subreddit's CSS may use this info to change the looks of the subreddit.

Instagram: no idea, but I believe the main interface is simply on instagram.com.

Netflix: just netflix.com.

Twitch: as far as I know just the main domain, no subdomains.

Spotify: a few services, like open.spotify.com and play.spotify.com, but they all require you log on separately.

It's pretty mixed, with some websites having a lot of subdomains, but not all of them requiring or using shared cookies.

> YouTube: no major subdomains as far as I know.

YouTube has:

- gaming

- tv

- music

- kids

- artists

Maybe more, but those are the ones linked on YouTube proper.

What is your point? Those subdomains are probably almost all applications or APIs and not websites.

At least Google has plenty of subdomains that are websites. mail.google.com, sites.google.com, drive.google.com, images.google.com, maps.google.com...

ycombinator.com also redirects to www.ycombinator.com even though that host doesn't have any content.

I wonder how much that has to do with Chrome and Firefox (and probably other browsers?) prepending www (and https).

edit: I guess both of mine have some plugin doing this?

That wouldn't work.

If I served my site through the apex domain, I'd redirect www to the apex — standard practice. With that in mind, if a browser kept forcing me to go to www when it hit the apex, and my site kept forcing the apex when it saw a hit to the www, you'd have an endless loop and the user would never be able to see the site.
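The loop is easy to see if you simulate both rules; both rewrite rules below are hypothetical.

```python
# Simulating the scenario above: a browser rule that forces www and a
# site rule that strips it. Both rewrite rules here are hypothetical.
def next_url(url):
    if url.startswith("https://www."):
        return "https://" + url[len("https://www."):]  # site strips www
    return "https://www." + url[len("https://"):]      # browser re-adds www

url, seen = "https://example.com/", set()
while url not in seen:  # stop once we revisit a URL: that's the loop
    seen.add(url)
    url = next_url(url)

print("redirect loop detected at", url)  # the user never sees a page
```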

In the case of Google, not at all:

    $ curl -v https://google.com
    < HTTP/2 301
    < location: https://www.google.com/

Just an FYI: in this case you might want to use `curl -i` which will display just headers + body (switch `-v` displays lots of other information too).

    $ curl -i https://google.com
    HTTP/1.1 301 Moved Permanently
    Location: https://www.google.com/
EDIT: interesting that it responds with HTTP/2 in your case and HTTP/1.1 in mine, would need to look up why the responses are different.

Not at all. A browser would never do this. It would break a lot of sites!

Actually, Firefox does prepend the www subdomain but only if it fails to resolve a domain. When I mistype a website (exmaple.com) I often end up at www.exmaple.com if I don't interrupt Firefox.

Chrome is moving towards the opposite though, hiding the www subdomain in the browser bar.

I wrote about this over 10 years ago and it mostly still holds true today...

See: https://wade.be/yes-www/

I still think the best argument for keeping www is from Heroku:

"Root domains are aesthetically pleasing, but the nature of DNS prevents them from being a robust solution for web apps. Root domains don't allow CNAMEs, which requires hardcoding IP addresses, which in turn prevents flexibility on updates to IPs which may need to change over time to handle new load or divert denial-of-service attacks."

See: https://web.archive.org/web/20110628072339/http://status.her...

Like most things in technology, if Amazon, Facebook and Google are all doing it, it's probably for good reason...

I don't think Amazon Facebook and Google would have any technical trouble using an alias...

"www" is all about conforming to expectations

These days almost all dns providers support dns-server-resolved CNAMEs (ANAMEs as they are known). I don’t think this is really much of an issue anymore.

AWS Route 53 doesn't support this for external CNAMEs, just aliases for amazon resources.

There's a good reason for wage collusion??!

Well, yes. "Good" as in "it works well".

But not Twitter?

> which requires hardcoding IP addresses

That statement couldn't be more wrong. It's absolutely possible to programmatically update DNS records. Set the TTL low (which you would be anyway for the CNAME records) and change them as you see fit.
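As a hedged sketch, this is roughly the change batch you would hand to Route 53's change_resource_record_sets (e.g. via boto3). The zone name and IP are placeholders, and the actual API call is intentionally omitted.

```python
# Hedged sketch of programmatic DNS updates: build the ChangeBatch you
# would pass to Route 53's change_resource_record_sets via boto3. Name
# and IP are placeholders; the API call itself is omitted.
def a_record_upsert(name, ip, ttl=60):
    """Low TTL means resolvers re-query quickly after the IP changes."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": name,
                "Type": "A",
                "TTL": ttl,
                "ResourceRecords": [{"Value": ip}],
            },
        }]
    }

batch = a_record_upsert("example.com.", "192.0.2.10")
print(batch["Changes"][0]["ResourceRecordSet"]["TTL"])
```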

He gives two reasons to default to www:

- Cookies are passed down to subdomains / unnecessary cookies hurt performance / cookies may be read by third parties

- DNS origin can’t be a CNAME (must be an A-type record, that points to a static IP address)

He is using the non-www like everyone else, of course. We always list out the practical reasons that www is better, just before we choose non-www.

I can appreciate the pragmatic aspect of choosing www however I don't really like the 'oh well guess we'll just use www forever' attitude either. Seems like changing the specs to remove these restrictions is what we should really be talking about?

When your site grows large and you move it to a hosted service, or want to point it to a Web Application Firewall or a DDoS mitigator, you might want to use a CNAME-type record to point the hostname to another flexible hostname that the vendor manages depending on your traffic and needs.

Now, if your website is hosted at the origin (“example.com”), you can’t do that. But there is no issue with the “www” hostname being a CNAME record. So if you want any scaling flexibility, now or in the future, you should go with the www hostname from the beginning.

Granted, his blog was knocked offline by HN. But would a CNAME have saved their Wordpress site?

FWIW, https://ycombinator.com/ redirects to https://www.ycombinator.com, as does Reddit.

A domain is a marketing tool as much as anything else on your web page. If customers see a "www." in the address bar, it slightly dilutes the visibility of your brand in the main domain. I believe this is more important than any technical issue listed.

Does it dilute or position your brand? www for most gives the initial impression of web/technical/site to the average person. Removing www dilutes the association to those concepts. Using the www in print removes the need to include http:// otherwise domain.io might be confusing.

Using www probably helps for the newer gTLDs, as the average non-technical user may not realise that something like ".productions" or ".hockey" is a valid domain.

In the past I've heard of non-technical users adding ".com" to the end of one of these gTLDs, which obviously takes them to a completely different website. I wonder whether anybody has used this as a form of pseudo-typosquatting or phishing?

Anyone who paid $200k for their gTLD probably already bought the .com for the same name, or at least it's owned by someone bigger than a typosquatter.

Good point actually, I might look into this and see.

Yes, I have this concern with the new deluge of TLDs. Now something.anything could be a link but also means it might not be. www.something.anything would at least make that clear if your TLD is not easily recognizable. Even recognizable brands give me double-takes when they use their TLD like a .google address doesn't look like an address at all.

Yeh I always wonder if it's a FTP, SMB, DAVS, IRC, NNTP, etc. www makes it obvious it's a world wide web :P

For what it's worth, Apple, Microsoft, Amazon, and Google, all disagree. They all redirect to www.

That is good evidence, but being the largest websites, they have thousands of subdomains on those domains, so they're kinda technically forced to use www.

Customers do not look in the address bar, so it has no brand impact.

When putting your domain in print, you should omit the www if you can. Just catch the root domain request and redirect to www if you want it.

A slight branding dilution, in an area that very few users look at or pay attention to, and that represents maybe 2% of the screen real estate in the smallest font while a user is on your site, outweighs the benefit of being able to use a regular CNAME for increased flexibility, and of scoping your cookies to your actual website, leaving subdomains available for purposes that would not be viable security-wise on the root domain?

Sorry but such a blanket statement is naive and arrogant—you've got to look at individual use cases before making a call on this.

Should you use your turn signal in the turn lane when it's already implied? If yes, what about a two lane when you are in the outside lane? Did you know when you turn you are supposed to go into the closest lane and then change lanes again as opposed to driving through the close lane?

My point is web developers will do whatever they want. Technically you should use www for your website. But if your domain is known for being a website the www is implied and you could make an argument for leaving it out. But no one cares. This isn't linux where you can force users to follow obscure rules to get your software to work. You simply just won't get that traffic. And that, is justice!

So we all know the answer already. Redirect https:// to https://www or vice versa depending upon on your inclination.

You should always use your turn signal even when it's already implied, because that implication is not obvious from every vantage point near an intersection to people who are potentially crossing your path (and who may already be moving towards your path at speed).

Also, it's nice to know that the person is in fact meaning to turn, rather than just being in the wrong lane.

When dozens of 3,000 lb death boxes are travelling at high speeds in close proximity to each other, and in close proximity to slow moving, talking creatures made out of meat, it's best to communicate as clearly as possible.

What about an on ramp? There's no signal and no intersection. Just a road diverging from another.

But you signal when you merge.

Sure but when you merge you're taking an action. You're changing lanes. Otherwise if you're in a turn lane/on ramp you're just following the lane. For you the lane is going straight.

Pedestrians don't necessarily know that right lane implies turning right, so yes, you should use your turn signal.

As we move towards using more not-COMs and “dot whatevers”, right now it’s actually better to use a www since many non-technical people may not know that alphabet.xyz is a domain name. So in any offline marketing I’d definitely add www to the beginning of a not-com.

For SEO purposes, if traffic is just clicking from one site to another, like from a search engine, the url doesn’t matter. But it does pay to be consistent, though. The main issue can be duplicate content isssues, so you should choose one version. Same goes for all 4 versions, though, if you add http vs https into the mix. Choose one.
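A sketch of that normalization, assuming https + www is the chosen canonical form; the apex name is a placeholder and the choice itself is arbitrary.

```python
# Sketch of collapsing the four variants (http/https x www/non-www) to a
# single canonical URL. Choosing https + www here is arbitrary, and the
# apex name is a placeholder.
from urllib.parse import urlsplit, urlunsplit

def canonicalize(url, apex="example.com"):
    parts = urlsplit(url)
    host = parts.netloc.lower()
    if host == apex:                       # bare apex -> www
        host = "www." + apex
    # Always force https; drop any fragment.
    return urlunsplit(("https", host, parts.path or "/", parts.query, ""))

print(canonicalize("http://example.com/page?q=1"))
print(canonicalize("http://WWW.example.com"))
```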

I can see www not being needed to tell the non-techies that a not-com is a domain name or web address as they get more used to not-coms. I see it taking another 5 years until we get there.

Alphabet actually uses 'abc.xyz', which looks even less like a domain name. Using 'www' would probably ruin that domain's aesthetic though.

I doubt that Alphabet cares if anyone goes to that website, though. I don't think it has been updated since the day it launched, has it?

All their public services default to subdomains, including www.

Although technically Google Search should probably load at search.google.com if it were going to be consistent with the URL convention of all their other services.

On the other hand, Alphabet's target demographic is likely not to be misled by the lack of “www”. I'd be impressed if too many people outside of tech & finance actually knew what Alphabet is all about. In any case, their domain name certainly serves its purposes :)

> Conclusion: Go with www

Am I missing something or is OP using a non-www domain? I'm using Chrome fyi (mentioned because I know safari messes with the address bar)

Yeah, in the comment section he does state[1], that you should stick with the one you have decided to pick in the first place.

[1]: https://bjornjohansen.no/www-or-not#comment-510

If you are not using or planning to use sub-domains, the only issue is that you cannot use CNAMEs. This is nearly a non-issue as many DNS providers offer ALIAS/ANAME/ACNAME records that essentially provide CNAMEs on apex-domains.

Chrome messes with the address bar as well.

Safari doesn't mess with my address bar. Chrome on the other hand felt it necessary to remove any instance of www or m anywhere in the domain of the URL before it was rolled back due to backlash. www.m.www.example.www.example.com would show up as example.example.com in Chrome before the rollback.

You know part of that was a bug, right?

And Safari defaults to not even showing 90% of the URL, from what I can find.

OP here, minor nitpick: I'm not the author, I just found something interesting and posted it.

That said, I agree. The author does jump to conclusions, but that's pretty much why I posted it. It was the first search result and I decided an HN debate would help me find a better answer.

yep he defaults without www.

His non-WWW site is broken. If you manually append the WWW, the site works.

Guess he is Stallman-serious about practicing what he preaches.

Also this is a stupid clickbait non-issue.

No mention of iframes and cross-domain JavaScript for some reason. One potential attack vector, if you allow user-defined content on subdomains, is that they can embed an iframe of your primary domain and use JavaScript to read its contents and execute code in it. Which they cannot do if the subdomains are different, unless the page inside the frame explicitly sets "document.domain".

Of course you can get a similar effect by using the "X-Frame-Options" header but if you are doing something like allowing user-defined content it is best to have layers and do different subdomains AND X-Frame-Options.

Basically if you have an application that allows users to host Javascript, either use a completely different domain or make sure your root domain doesn't host any meaningful content or have cookies that are used for security or privacy.
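One minimal way to express that layering, assuming user content is split onto its own (hypothetical) host; the header names and values are standard HTTP, the host split is the assumption:

```python
# Sketch of the layered defense suggested above, assuming user content is
# served from a separate, hypothetical host. Header names/values are
# standard HTTP; the split into two hosts is the assumption.
def security_headers(host):
    if host == "usercontent.example.net":  # hypothetical user-content domain
        # Sandbox user pages: scripts run, but in an opaque origin.
        return {"Content-Security-Policy": "sandbox allow-scripts"}
    # Primary site refuses to be framed anywhere, including by user pages.
    return {"X-Frame-Options": "DENY"}

print(security_headers("www.example.com"))
print(security_headers("usercontent.example.net"))
```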

The biggest reason to not use the apex domain is that a CNAME on the apex will render any other records at that name invalid, and ANAMEs suck. They're great reasons.

However, Cloudflare already offers something called CNAME flattening for apex domains, and other providers offer ALIAS/ANAME-style records that work like a CNAME at the apex without the problems a literal CNAME causes there.

Granted, not all DNS providers support this, but if they do, is there anything else wrong with using the apex domain? Isn't the cookie problem a solved problem?

I'm using wwww just to fool everyone.

Seems like everyone is focusing on reasons why you should use www and why those reasons aren't important. Any reasons why you shouldn't use www, besides 3 fewer letters in the domain name?

The no-CNAME-at-root issue is long overdue for a fix, but it looks like potential solutions are still in the discussion phases

My vote is for just letting CNAMEs work at the root, apparently a lot of DNS software already lets you get away with it: https://mailarchive.ietf.org/arch/msg/dnsop/awmoLxtbQtQhSt9K...

Yes, because when giving a URL to an elderly or non-technical person, I find they often get confused if it doesn't start with www.

Lots of people are saying "Root domains don't allow CNAMEs" and other such things, but let's be real. Every DNS provider under the sun supports CNAME flattening.

For my own domains, I have a large number of subdomains and a less-important root page. Because of that, I redirect www -> non-www. It makes me think "this is the root."

www is an atavism that doesn't make any sense today: the majority of web documents are generated dynamically, a URL is meant to identify a document rather than locate it physically, and public DNS names and IP addresses rarely have a 1-to-1 relation to a physical web server. Nevertheless, many people still enter web addresses with www. Whenever somebody requests a URL with www, it should just redirect to the version without www.

I wrote about using a CNAME on your root a few years ago; it covers MX (email) records and the problems a root CNAME can cause. https://joshstrange.com/why-its-a-bad-idea-to-put-a-cname-re...

So it looks like Chrome's decision to hide the stable URL prefix is a good idea now.


The technical reasons really make me lean towards using www.

The article makes it sound like you have to choose between two options:

1. users see example.com in the address bar

2. hosting on www.example.com

However you can get the best of both worlds. If you go to amazon.com, you will see amazon.com in the address bar. Yet your browser makes requests only to www.amazon.com; the content is hosted on www.amazon.com.

Does anyone know how they pulled that off?

I think this is just a Chrome UI thing (which a lot of people complained about): https://www.digitaltrends.com/computing/chrome-69-will-displ...

For me, the "www" shows up in Chrome 70 and Firefox. Chrome 72 hides the www again.

This was changed in a recent version of Chrome, I believe.

See this post on Super User: https://superuser.com/questions/1356867/chrome-69-hiding-www...

Don't advertise your domain with "www", BUT handle it if a user enters it that way. Another advantage of "www" is that word processors and text editors can auto-recognize a URL if it starts with "www".

It's very, very weird that you approach the problem from the userland (all of you) rather than from the administration angle. No, it's not a marketing or web-designer problem. First of all it's an administration problem. WWW implies the protocol. FTP implies the protocol. MAIL implies the protocol, and so on. Like it or not, this is the main purpose of the convention, because back in the day HTTP was not the most famous protocol; other protocols, FTP for example, were very famous and important too. Even today, when we see "www." in an FQDN we automatically understand that http:// is implied. There is also a computer-science reason: using WWW is a matter of tidiness and the right theoretical approach. When a DNS issue comes up, no marketing guy or designer will be there to help you, you poor administrator.

So please leave this decision to the admins and the way they established it. With WWW!

It's very very weird that you approach the problem from the userland (all of you) rather than from the administration angle.

Is that really weird at all? "you take the pain so your customers have things more pleasant" is a common idea.

None of your customers care if you are doing "the right theoretical approach to the usage of www" if your competitor isn't. They'll simply go to your competitor, who does the thing they want.

e.g. "CNAMES at the root of DNS" which is mentioned in this post; you can find any number of greybeards lecturing you on how wrong it is, but when you face your customers and they say "Amazon Route 53 DNS lets us do it", what does it matter how wrong it is? Explaining to them how Amazon has built something custom and non-standard, won't make them happier. They'll just use Amazon.

Are we talking about a small shitty startup with a CEO who knows everything, or a big serious company like Google where scientists debate what is right or wrong?

There are no "scientists" at Google debating whether things are right or wrong. As with all companies, it is the product and business leaders of the company that make this decision, and they make it based on what attracts the most users.

Some people argue it’s only the users that matter. Some, only the admins. It’s a trade off. There’s some level of customer impact that makes the work worth it, and some that doesn’t. Different people arrive at different conclusions.

In honor of bikeshedding, once a year someone writes an article on the www/non-www topic.

The truth is: 1. No one cares. 2. Choose one of the two and stick to it. 3. Maintain a redirect from the second option to the chosen one.


I'm sure they do and I'm also sure you've heard it discussed ad-nauseam. But it was fairly new for me :)

I'm grappling with this question at the moment.

The problem is that Let's Encrypt doesn't support wildcard certs, so having a single cert for the origin and allowing connections on "www." is not possible. This is a problem because a request on "https://www." will be rejected completely rather than redirected to the origin (or vice versa). In other words, I have to choose one, and the other one won't work at all and can't be redirected (for https, but I'm auto-redirecting from http to https as well, so for everything). Obviously, the marketing gains from not having a "www" outweigh any other considerations at this point, so no "www".

As I understand it, anyway. I could be wrong. I hope I'm wrong, and have just misunderstood how this all hangs together.

You can create a certificate with "example.com" as the subject and "www.example.com" as subject alternative name (SAN).

Here is an example certbot command:

  certbot certonly -n --agree-tos -m example@example.com --webroot -w /var/www/example.com -d 'example.com,www.example.com'
The argument to the `-d` option defines all the subject alternative names.

Yea, this is what I do. I wrote a script to automate the process too:


They do, and it's fabulous, but *.example.com certs from LE do not cover the root domain (example.com), so if you do a wildcard cert, you must also include example.com itself.

unless I'm missing something :)

You can have multiple names in the cert, including the apex.

Here is how I do it using acme.sh:

    acme.sh --issue --dns --force --yes-I-know-dns-manual-mode-enough-go-ahead-please -d "${domain}" -d "*.${domain}" > /dev/shm/.le."${domain}".txt

I didn't find docs specific to wildcard + non-wildcard, but LE certs can use multiple subject alternative names. So whatever tool you use, use SAN to request both *.example.com and example.com in one cert.

I imagine you have to set up your stuff to respond to the DNS + web-based challenge/response.

You can have example.com and *.example.com on the same certificate.


You can have multiple domains for a certificate. I usually do something like:

    certbot -d example.com www.example.com [other flags here]
That’s from memory, it might be another -d per domain.

You can have certs with Subject Alternative Name (SAN). So one single cert covers "example.com" and "www.example.com"

I've been using wildcards for all my domains on Lets Encrypt for some time now.

>The problem is that Let's Encrypt doesn't support wildcard certs.

They do. Since March 2018.

thanks folks :) I had misunderstood, and am rewriting chunks of server code today :)

With www. Operationally: easy DNS handling, and a wildcard SSL cert will cover any future subdomain. SEO-wise: Google prefers it. And if I'm not mistaken there's an RFC for it. I can look later.

Sorry for the typos. Operational wise

There is an RFC for using www

You should use 'www' so you can say "sextuple-u".

> Conclusion: Go with www

But the site hosting the article does not use www...

This is such an unimportant question. We use www at work and redirect non-www to www. We could just as easily change that if we needed to. But it doesn't really matter either way.

What do the URL shortener services go with?

Why isn't he using www for his site?

If only HTTP used SRV records instead.

That would be great, but it does not appear to be included in HTTP/3. At least, Google does not agree it is needed, so it likely won't be. It can only be added in a major protocol version.

The bigger question for him is: should you use a CDN to handle your HN DDoSes?

The writer is a typical case of "do as I say, not as I do".

No. Chrome and Safari have removed it from the URL bar; www has no meaning anymore.

Yes of course www. We should resist all attempts to deprecate URLs so the entire web will be hidden behind Google's search box. SEO is a plague.

http://example.com is a valid URL.

The article suggests using WWW for DNS compatibility, but...

Everything it mentions in favor of WWW can be solved by a combination of a static IP and a CDN. My site, for example, has a static IP and is served over HTTPS. Certificate resolution for HTTPS goes through CloudFlare, which also serves as a CDN for everything retrieved from the domain. CloudFlare provides firewall and security options as part of its CDN capabilities.

Because I am already solving all the DNS and application concerns mentioned by the article, I choose the saner approach and simply drop the default subdomain.


I also detest cookies. They can store virtually nothing at 4 KB per domain, and the cookie API is horrible. Instead I use localStorage for all storage concerns and explicitly share anything that needs to be shared via XHR, which currently is nothing.

Cookies and local storage are two different things. Use local storage if you want to store data on the client. Use cookies if you want to share data with the server.

And regarding Cloudflare: Terminating your TLS session on a third-party provider's server also has its downsides.

Not everyone can get a static IP for their site and if everyone did we would be out of IPs.

It's done using a CDN. E.g. CloudFlare has an IP your website is on, but many other websites share that same IP.

We are out of IPs regardless, at least IPv4 addresses. We are nowhere close to running low on IPv6 addresses.

Everyone can get a static IPv6 IP.

Good luck typing it.

> ... and the cookie API is horrible.

What don't you like about it?

It is arguably worse than the Date API.


Compare all that madness to the localStorage API:

    localStorage.myName = "string up to 5mb";
Attempting to delete a cookie makes me want to abandon coding. Removing from localStorage is as easy as:

    delete localStorage.myName;

The client-side API for cookies is irrelevant since it's disabled when the httponly flag is set on a cookie. And all cookies should have httponly set to true for security reasons.

You should only be using server-side APIs for cookie management. Many good ones exist.

Reference: https://www.owasp.org/index.php/HttpOnly

> You should only be using server-side APIs for cookie management.

No. I will use any string I want of any size I want. Cookies are a relic of an archaic age for the web.

Cookies only continue to be used because many server-based web applications haven't figured out a modern way of managing sessions.

Storing arbitrary data in a cookie isn't a secure best-practice anyway. You shouldn't store actual data in cookies, just a meaningless token (aka session-id) that is used to look up session data on the server side. If you insist on using local storage, please be aware that it's unsuitable for storage of any sensitive data for the same security reasons that the httponly cookie flag exists.

Don't let your API elegance preferences be the only factor in your architecture design choices. Please consider security too as you may be putting your users at risk. Design choices in this area are more nuanced than you seem to be aware of.
