Hacker News
Chrome 69: “www.” subdomain missing from URL (chromium.org)
1572 points by gouggoug on Sept 6, 2018 | 876 comments

Considering a subdomain "trivial" is ridiculous... there's a difference between "www.example.com" and "example.com". Not only can they serve different sites, they can even have different DNS records!

It seems that "m." is also considered a trivial subdomain. So when a user clicks a link to an "m.facebook.com" URI, they'll be confused about why FB looks different when the browser reports it's on "facebook.com".
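Chrome's exact rules aren't spelled out in this thread, but the complaint above can be sketched as a lossy display transform (a hypothetical reimplementation for illustration, not Chrome's actual code):

```python
# Hypothetical sketch of "trivial subdomain" elision: strip leading "www."
# and "m." labels before display. The point of the complaint: the mapping
# is lossy, so distinct hosts can render identically in the address bar.
TRIVIAL_LABELS = {"www", "m"}

def elide_trivial_subdomains(host: str) -> str:
    labels = host.split(".")
    while len(labels) > 2 and labels[0] in TRIVIAL_LABELS:
        labels.pop(0)
    return ".".join(labels)

# Distinct hosts, identical display:
assert elide_trivial_subdomains("m.facebook.com") == "facebook.com"
assert elide_trivial_subdomains("www.example.com") == "example.com"
assert elide_trivial_subdomains("facebook.com") == "facebook.com"
```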

I sincerely hope Firefox doesn't follow suit.

This is certainly subverting the domain name system. I can't see the value or gain in security by this.

(If you want to put the focus on the domain, then display the host part with less contrast, i.e. grey, but don't hide any potentially vital information. Otherwise, put out an RFC defining "www" as a substitute for "*", or as a zero-value atom, in order to guarantee consistent behavior.)

Edit: There are also legal concerns with catch-all domains in some countries. Blurring the lines certainly doesn't help.

A proposal for better security with domain names:

The domain name system has been around for decades and it's a clever and proven system. It can – and should – be taught in school; knowledge of it, while not difficult to obtain, is arguably essential in our times. Additional ambiguity here is probably not what we want.

Arguably, the most serious problems arise from mixed alphabets in Unicode domains and look-alike characters/glyphs. This could be addressed by a) going back to codepages (Unicode subranges) and defining a valid subset for each range, and b) enforcing a domain name (hostname and domain) to be in a single codepage. Clients should derive the codepage from the Unicode range and generate a codepage identifier, which may be displayed as a badge identifying the respective range. And, of course, any mixed domains should be regarded as illegal and invalid. (We may even want to make this codepage identifier a mandatory particle of any URI, preceding the hostname.)
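A minimal sketch of the single-codepage rule proposed above, with a few hardcoded Unicode block ranges standing in for real script data (a full implementation would use the Unicode Script property, as in UTS #39):

```python
# Toy "one script per domain" check. The ranges below are illustrative
# Unicode blocks, not complete script definitions.
RANGES = {
    "latin":    [(0x0041, 0x005A), (0x0061, 0x007A)],
    "greek":    [(0x0370, 0x03FF)],
    "cyrillic": [(0x0400, 0x04FF)],
}

def scripts_of(domain: str) -> set:
    found = set()
    for ch in domain:
        for name, ranges in RANGES.items():
            if any(lo <= ord(ch) <= hi for lo, hi in ranges):
                found.add(name)
    return found

def is_single_script(domain: str) -> bool:
    return len(scripts_of(domain)) <= 1

assert is_single_script("example.com")
assert not is_single_script("еxample.com")  # leading letter is Cyrillic U+0435
```

The set returned by `scripts_of` could also drive the badge idea: one identifier per valid range, displayed next to the hostname.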

> Arguably, the most serious problems arise from mixed alphabets in Unicode domains and look-alike characters/glyphs.

No way. The most serious problem is that hostnames don't enforce any binding to a real-world identity that users can understand (nobody inspects certs), and that the most trustworthy component of a hostname is the second-to-last label (right before ".com"). Humans tend to look at the front of the URL, making "www.bank.evil.com" a mind-bogglingly effective phishing technique.

Homoglyphs are almost always a sign of bad behavior and can just be banned to a large degree. The fact that "foo.com" or "foo.evil.com" are not necessarily owned by company foo is much worse.

Regarding the parsing of URLs, this is a common but mostly unfounded argument: take, for example, names in most Western countries, or postal addresses (street-zip-city-country). Most of our most important identifiers work this way.

Regarding the lacking binding of identity: on the other hand, this has been one of the most important features of the web from the very beginning. Also, there is no way to set up a system which will attribute a name to a single person in a readable and intuitive way. (E.g., personal names fail to do so.) Arguably, this should be left to (optional) extensions.

I'd argue the knowledge required to parse a URI safely can be conveyed in a couple of minutes. Why not enforce this knowledge? Why not have a URL-parsing note on the start screen of any browser? Why dumb down the system and introduce ambiguity – and with it even more insecurity – instead of educating users? URL parsing is a vital skill, which can be acquired in less time than memorizing a basic addition table. Why do we still try to teach addition, if we can't teach URLs?

Except this is already broken by multi-part TLDs.

foo.com is owned by "foo"
foo.evil.com is owned by "evil"
foo.co.uk is NOT owned by "co"
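The point about multi-part TLDs can be sketched with a toy registrable-domain lookup; the tiny hardcoded suffix set below stands in for the real Public Suffix List (publicsuffix.org):

```python
# The "owner" label is the one just left of the *public suffix*, and a
# suffix can span multiple labels. Illustrative suffix set only.
PUBLIC_SUFFIXES = {"com", "uk", "co.uk"}

def registrable_domain(host: str) -> str:
    labels = host.split(".")
    # Scanning left to right finds the longest matching suffix first;
    # keep exactly one label to its left.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in PUBLIC_SUFFIXES:
            return ".".join(labels[max(i - 1, 0):])
    return host

assert registrable_domain("foo.com") == "foo.com"        # owned by "foo"
assert registrable_domain("foo.evil.com") == "evil.com"  # owned by "evil"
assert registrable_domain("foo.co.uk") == "foo.co.uk"    # NOT owned by "co"
```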

The best security change we could make, imo, is rewriting domains so that they look like com.evil.bank.www/now/urls/go/from/most/specific/to/least
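The proposed reversed notation is easy to prototype (purely illustrative; no browser or standard does this):

```python
# Flip the host labels so the whole URL reads most-significant-first,
# matching the path's left-to-right order.
def reverse_notation(url: str) -> str:
    host, _, path = url.partition("/")
    reversed_host = ".".join(reversed(host.split(".")))
    return reversed_host + ("/" + path if path else "")

assert reverse_notation("www.bank.evil.com/now/urls/go") == \
       "com.evil.bank.www/now/urls/go"
```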

You're in good company. Tim Berners-Lee said something similar when reflecting on what he would do differently if given the chance:

> Looking back on 15 years or so of development of the Web, is there anything you would do differently given the chance?

> "I would have skipped on the double slash – there's no need for it. Also I would have put the domain name in the reverse order – in order of size, so, for example, the BCS address would read: http:/uk.org.bcs/members. The last two terms of this example could both be servers if necessary."

From http://www.impactlab.net/2006/03/25/interview-with-tim-berne...

Yes. Think how you have to read "mobile.tasty.noodles.com/italian/fettuccine" to determine where it goes.

First start at the slash and work your way left: "com -> noodles -> tasty -> mobile" - then jump back to the slash and work your way right: "italian -> fettuccine".

This is counterintuitive and I doubt most users understand it. "com.noodles.tasty.mobile/italian/fettuccine" makes more sense to me.

Also, I think TLDs like "com" and "edu" and now "io" and "cool", etc, are misguided. I wish we had "country.language" as the only TLDs. For instance, "us.en.apple.www/mac". I see several advantages.

One, if "us.en.apple" and "uk.en.apple" were different entities, it would make legal sense, whereas "apple.com" and "apple.cool" being different entities makes no sense. Two, a user would likely notice if they ventured outside their usual TLD(s), and be less surprised by the different entity. Three, these TLDs could have rules about allowed characters; e.g., only ASCII in "us.en". This would make homoglyph attacks much more difficult.

I'm not so convinced. There would still be "uk.co.bbc" and "com.bbc" pointing to the same body behind them, and any kind of confusion arising from this, like, "is 'ug.co.bbc' the same?" The most important part is teaching users that the identity isn't just "bbc" with some extra decoration. Also, we have the reverse example in software packaging (com/example/disruptiveLibrary) and it isn't fool-proof either (especially if you only know of "disruptiveLibrary" and not about its origin).

Years ago, before the WWW, some parts of the world indeed ordered them that way.

* https://en.wikipedia.org/wiki/JANET_NRS

I don't see how you could enforce a hostname binding to some real-world identity. Hostnames really need a non-ambiguous mapping from a name to a computer (more or less), but real-world entities don't have that without really cumbersome identifiers. Many natural persons share a name, so how do we decide who gets a hostname based on that? The same is true of corporate persons – many of them share names. Even if there were a way to disambiguate these things, it seems unlikely that the entity in charge of it would also want to run a public registry – so how do you make that work?

Absolutely. People don't have a single coherent model of identity in the real world. It's hard to glue certs to a pile of sand. Did I buy lunch from "Tim Horton's", from "The TDL Group Corp." or from some random company with an address for a name? The answer is yes to all three, despite only buying one lunch.

On a higher level, all those considerations are about a single question: Is the Web about communication (then it's probably OK as it is), or about a viable business platform with an entry-level as low as it can be?

Who's without interest may throw the first stone, er, browser extension.

Both techniques are deceptively effective; I know the following might be only anecdotally relevant, but it's the most recent case of a successful phishing attack I know of:

Recently a friend of mine didn't see the lower dot on the 'e' in a URL [0], and promptly ended up inadvertently broadcasting messages to everyone on her WhatsApp contacts list.

[0] www [dot] hẹb [dot] com/coupon/

How about displaying an identicon, rendered from the domain, in the address bar? People might soon learn what the icons of their important sites look like and would easily detect if somebody is trying to phish their bank account.
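A minimal identicon sketch along these lines, assuming SHA-256 as the hash and a mirrored bit grid as the image (the details here are invented for illustration, not a real scheme):

```python
import hashlib

def identicon(domain: str, size: int = 5):
    """Map a domain to a deterministic (color, bit-grid) pair (odd sizes)."""
    digest = hashlib.sha256(domain.lower().encode()).digest()
    color = "#{:02x}{:02x}{:02x}".format(*digest[:3])
    half = size // 2 + 1
    grid = []
    for row in range(size):
        # One digest byte per row; mirror the bits for left-right symmetry,
        # which is what makes identicon patterns memorable.
        bits = [(digest[3 + row] >> col) & 1 for col in range(half)]
        grid.append(bits[:-1] + bits[::-1])
    return color, grid

# Same domain, same icon; a look-alike domain gets a different icon.
assert identicon("facebook.com") == identicon("facebook.com")
assert identicon("facebook.com") != identicon("faceb00k.com")
```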

The space of easily visually distinguishable images has a certain size. Let's assume there's a deterministic, pseudorandom mapping from domains to images. For a given domain, how many plausible impostor domains are there? What's the chance that there's at least one impostor domain that happens to get the same image?

If you have 1000 distinct images, but a given domain has 5 letters that could each be replaced with any of 3 visually identical Unicode characters, then, well, the chances are very high that there exists a plausible impostor domain with the same image. I don't think this is a very workable approach.
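The argument above can be made concrete with a quick back-of-the-envelope calculation (assuming a uniform, pseudorandom mapping from domains to images):

```python
def impostor_collision_prob(n_images: int, n_impostors: int) -> float:
    """P(at least one impostor domain maps to the target's image)."""
    return 1 - (1 - 1 / n_images) ** n_impostors

# 5 letters, each with 3 look-alikes -> 4**5 - 1 = 1023 impostor spellings.
p = impostor_collision_prob(1000, 4 ** 5 - 1)
assert 0.60 < p < 0.68   # roughly 64%: a collision is more likely than not
```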

Yes, I admit it's hard to get something like this secure. It's in the same problem space as hash functions and might require some research.

FWIW, OpenSSH already does this and calls it "randomart".

Right, didn't think of this.

We might even call it "favicon", for the fun of it… :-)

Any fake-Facebook website can copy Facebook's favicon, so that wouldn't add any security at all.

An identicon is a hash value represented as an icon. "facebook.com", for instance, may hash to a red image with a yellow line through it. While you wouldn't remember the icon initially, over time you would – or at least your subconscious would. If you ever visited a fake Facebook, you'd immediately notice that something was wrong if the icon suddenly was green with a blue dot in it, for instance.

Not sure if serious, but no. Anyone can copy a favicon; the point of an identicon is that it's generated from the domain name, so subverting it would require an attacker to find a hash collision with a visually similar domain.

Sorry, I mistook "that is rendered from the domain" for "rendered from a resource from the domain".

However, teach users to read domain names! If users do not grasp the general concept, e.g., if the supposed identity is just "example" (possibly with some decoration considered insignificant) and not "example.com", how are they supposed to survive? Domains have been around for more than a quarter of a century, the Internet is actually part of our lives… There is no excuse, and there is no sense in pretending that there was no harm in not understanding the basics. That said, there are real ambiguities that have to be addressed.

Yes, it would be nice if every child would learn these basics in school.

Which would be called computer literacy. I find it both awesome and terrifying that people can successfully do jobs that require working on a computer and still be computer illiterate. Awesome in the sense that it illustrates how good computer interfaces actually are, and terrifying in the sense that in any profession with heavy machinery, a person whose solution is "adjust switches and dials until something happens" would be told to immediately vacate the place for safety reasons.

Yes, learning to be good consumers is way more important than basic skills such as maths, reading, writing and general critical thinking /s

We do teach kids not to talk to strangers in the street and consider it quite important, I think. What's so much different about teaching them how not to get robbed on-line? It's not about being a good consumer, but about minding your own feet.

Yeah, cause moving from a text domain with no collisions possible to some sort of collision-prone visual system, to ensure users are able to understand the domain they are viewing, seems like a great idea.

FFS, if users can't see that somedomainname.com is different from somedomanname.com, how does a randomized image based on a hash of the domain name solve this?

I'm not saying it's a good idea. I don't think it is. But it's not just a favicon.

How is the identicon designed such that it's difficult to spoof?

I know of at least one site which uses a user-selected image on the login screen to thwart phishing attempts. Because it's user-selected, it's memorable – I think more so than a password, for example. It would also be hard for a scammer to spoof, because they don't know the image the user selected when they created the account.

Unfortunately this would probably be less notable and thus memorable if everyone did it.

I dedicated a blog post to this idea: https://vorba.ch/2018/url-security-identicons.html

Here is the discussion on HN: https://news.ycombinator.com/item?id=17947467

>enforcing a domain name (hostname and domain) to be in a single codepage

This is essentially what is already implemented in most browsers. You can't mix characters from different scripts in a domain name, except for special cases (e.g. Japanese and Latin are frequently used together and have little potential for confusion).

However, these are just "anti-phishing" heuristics. We really should enforce a rule on this.

Do you have any examples of school class materials which teach stuff like this? I'd be really interested to read through them.

The reason why Google is doing this is because they are slowly trying to do away with URLs, as direct traffic is probably their greatest untapped segment.

Google is trying to get users to go through their doorway pages, which is exactly the kind of thing for which they penalize publishers.

Pay attention to when you enter direct addresses, let's say from a device/media subscription authorization page. The autosuggestion feature will often recommend Google searches, disguised as URLs, instead of helping you complete the very obvious URL.

If they help you get to the site directly, the opportunity to acquire your page views diminishes.

These behaviors are hostile toward users. I'd like to see what's further in their playbook to deprecate the URL as we know it.

Compare yesterday's Ars Technica piece, https://arstechnica.com/gadgets/2018/09/google-wants-to-get-...

As a comment reads there, do they want to reintroduce AOL keywords?

Edit: May we expect a non-standard subdomain "google-remote", which is more of a protocol-extension and will be also hidden?

They already reintroduced AOL keywords in 2011 with their Direct Connect "feature" for Google+, https://googleblog.blogspot.com/2011/11/google-pages-connect..., so one could go straight to Pepsi's Google+ page with +Pepsi.

They killed it off in 2014.

"Their complexity makes them a security hazard."


Notably, what is the common answer to a system regarded as too complex to be handled at a general level, such that it is considered a common risk? Authority (read: a trusted man in the middle).

Referencing the AMP URL controversy seems somewhat relevant in this context.

This needs to be a top-level comment.

> they are slowly trying to do away with URLs

They might be changing how they want to display them, but "do away with" is unsupported by the article:

> But this will mean big changes in how and when Chrome displays URLs. We want to challenge how URLs should be displayed and question it as we’re figuring out the right way to convey identity.


Did you read the whole article?

"The focus right now, they say, is on identifying all the ways people use URLs to try to find an alternative that will enhance security and identity integrity on the web while also adding convenience for everyday tasks like sharing links on mobile devices."

My statement is clearly supported by the article. They paint a rosy picture of it, because this is a submarine piece, but they are definitely making moves against the url.

> My statement is clearly supported by the article.

You're ignoring a direct quote in favor of a Wired reporter paraphrase (one which mentions sharing links, no less). They cite an earlier effort, which was a display change. This issue is for a display change. None of this points to "trying to do away with URLs".

>"None of this points to 'trying to do away with URLs'"

Except for that "trying to identify an alternative" part. But let's ignore that, because doing so makes you comfortable.

Sorry, what? Could you expand on this? What do you mean by doing away with URLs?

From the linked article:

"I don’t know what this will look like, because it’s an active discussion in the team right now," says Parisa Tabriz, director of engineering at Chrome. "But I do know that whatever we propose is going to be controversial. That’s one of the challenges with a really old and open and sprawling platform. Change will be controversial whatever form it takes. But it’s important we do something, because everyone is unsatisfied by URLs. They kind of suck."


She says it's important that they do something! GTFOH! Hands off our Internet!

The problem here is that they view Chrome as their platform. They have too much market share, à la IE6. Instead of following and helping to shape standards, they are considering hijacking the project. Argh!!!!!

They seem to view the Internet as their platform, given the way they like to bully the tech sphere.

Their dominance has become problematic when they entertain concepts like this seriously. They are really growing into the monolith that we all feared.

+1. Why do they need to change anything? Of course it's going to be "controversial"!! What happened to RFCs?

Hiding the URL scheme was the first step down this path of utter stupidity, and I vividly remember the hostility and hubris of the Chrome team at the time.

We still have Firefox, but many times they just blindly follow suit.

Many users never use the url bar. They just 'Google' for websites they want to access and follow the results.

It is worse than that for some users. I've seen actual users who type/paste real URLs into Google's search box in order to get to a site. They actually had no idea that the bar at the top of the browser that said "google" (since they/someone set their default homepage to Google) was a place where they could delete "google.com" and type/paste the URL they wanted to visit instead, to actually get to the site they wanted.

You seem shocked at this, with word usage like "actual users", "real URLs" and "actually had no idea".

But how are we to expect users to know any better until general technology literacy improves?

Many people can't tell you the difference between a modem, router, OS, browser, or website.

I remember years ago sitting down with my elderly grandmother trying to show her how to use a desktop...

We are too close to our work so everything is familiar and easy.

Even the concept of moving the mouse on a table to represent moving the mouse cursor on the screen is something we take for granted.

Tell someone who's never used a mouse before to double click something to open it. You have to start way back earlier at the concept of which physical button on the mouse to use.

This turned more into a general rant about how we overestimate regular users, but it's been on my mind for a while.

> "Many people can't tell you the difference between a modem, router, OS, browser, or website."

They don't care, nor should they. How many people know how many spark plugs are in their car?

You're correct. We, the more tech-literate, take too much for granted; and most experiences and learning curves are too far over the head of the "average" user.

It's not them. It's us.

It's not about knowing how many spark plugs are in their car. It's more about buying a car that comes with a custom power adapter plugged into the cigarette lighter, never realizing that you can plug your own accessories into the cigarette lighter instead of buying your phone charger or GPS from the car company, and then not caring when they just take away the cigarette lighter and replace it with their own custom port.

Since we know all analogies break down under close inspection, I'm pushing the idea that the best analogy is actually a brief description of the event/idea itself.

So in this case:

Not displaying www. in the address bar is actually a whole lot like not displaying www. in the address bar.

And if anyone doesn't understand why that is a bad idea, maybe we should explain it to them, which might require using admittedly imperfect analogies that they can nonetheless understand.

I think the comparison to spark plugs is misleading when we talk about URLs and security.

It's more like looking in the mirror before changing lanes. It's something you need to check in order to stay safe.

Mirrors, like URLs, are just an implementation detail. But since currently driving works with mirrors, you have to learn how to use them.

The benefit is obvious in that instance. There is a very direct connection between checking your mirrors and not hitting a car as you merge or similar.

Where is the cause and effect for a URL or SSL cert? There is no learning experience.

Furthermore, as some have claimed, and I've personally witnessed, for some users URLs literally don't exist. Just type whatever site you want into the Google box and hope you get lucky.

I think the spark plugs example is an excellent one. People used to require an extensive knowledge of how cars worked in order to have a prayer of using them effectively. Now they don't, because we realized none of that knowledge is necessary if you design the system correctly.

We have enough historical context to realize that things like parsing URLs by eye is unsafe for the general population, and always will be. The solution is to engineer that need out of existence.

You might want to consider that manufacturers have added blind spot detectors to cars as people are bad at changing lanes safely, even with all the training in the world.

When did you have to know how spark plugs work to drive a car? And isn't this why car mechanics exist? On the other hand, you had to learn at some point what an RPM gauge is... And we still have it in cars even though you could say you don't really need it.

My car does not have an RPM gauge; instead it has two arrows that suggest when to gear up or down (it is a manual).

One could say the interface was dumbed down to the minimum.

My brand new one has a lot of gauges... so I'd say my point is still valid. And I find them extremely useful, because you can make better use of fuel if you know what they mean.

Do you seriously not know how many spark plugs are in your car? It's the same as the number of cylinders. How could you not know that?

They absolutely should care. They should be aware that when they store things in "the cloud" they are not stored on their device and are visible to third parties. They should understand what encryption is and how to use it. "I don't know what I'm doing, and I didn't get the result I wanted, but it's not my fault it's the machine" is not an acceptable statement, whether we're talking about cars or computers.

I don't even know how many cylinders I have!* Why should I care? Put key in. Press gas down. Car goes forward. Works for me.

"How do you not know that?"

Why would I need to know this? Why do I need to know what a cylinder is to drive? Is this even a logical question with electric cars now?

You are arguing what should be vs. what is.

* Well, I don't currently drive but I couldn't tell you with 100% accuracy the number of cylinders my last car had.

I guess, the simile isn't entirely on the same level. You may not know how many cylinders there are in your car, like you may not know the number of cores in the CPU of your computer. They are both essentially hidden.

But you do know how many pedals there are in the car, and probably how many switches there are for the lights, and that the wiper has different speed settings, etc. You even manage to control these few elements, because they are the user-facing elements you're dealing with – the interface. There's no need to unify the pedals into a single one and have the car decide whether it means accelerate, brake, or clutch. Doing so would alienate you from the very task of driving, from what it means and what risks are involved. Taking these few controls away from you in favor of an ambiguous I-know-it-all-so-you-shouldn't-care interface of ultimate convenience would probably not increase the security of operations.

On the other hand, we may expect you, as a driver, to know that there is an engine, that this is why the car moves, that it needs gas/petrol in order to run, that deceleration is proportional to speed, etc.

Why is it so different with anything involving a computer? Is it, because we're telling them so?

Computers are magic to the majority.

I brought up in a previous reply that mirrors – and now lights, pedals, and other controls – are directly user-facing and must be interacted with in order to get anything done. Even knowing there is an engine that might need engine-y things like water and oil.

But where is the requirement that a user knows about URLs in order to use the web?

Way back when, we had AOL keywords. Now we have Google and apps and other tools that make URLs unnecessary.

My grandmother, whom I mentioned before: she browses solely through bookmarks and via Google results. That a URL exists is not only an implementation detail but completely unneeded and unused in her case.

Then something like an SSL cert? Where everything will seem to work just fine without it? I don't even want to imagine trying to explain that to my grandmother before sending her off to her decades-old AOL mail inbox.

Only recently, with Chrome displaying "Not Secure", have I even noticed any concern or interest among non-technical friends and acquaintances.

But why is it that computers are that magical? Computers have now been around for nearly 70 years. It's a technology about as old as airplanes were in 1980. (If we include digital accounting machines with storage, they have been around since before the first flight of the Wright brothers; they are even older than any living person.) Computers are also the means by which many, if not most, earn their living on a daily basis. If we consider users generally unfit to grasp even the basics, why is anyone still admitted to their kitchen? (There are really dangerous, pointy objects there, which may cause real-life harm, and, if you have a gas oven, you may even blow up the house or the entire block. How could ordinary people tell a knife from a dish, and how could we assume that they would know where they put them? Isn't it possible that someone just wanted a glass of water from the tap and blew up the house instead?)

Also, I consider some of this very US-centric. In many parts of the world, AOL wasn't a big thing. In many languages, people are used to the fact that important parts of a sentence come at the very end, e.g., the verb, at least in some tenses. Moreover, most important identifiers go from the minor, less significant part to the bigger, most significant one. Why can't we tell users that domains work just like their postal address? (As in "street-city-country". And there are even funny ones, like "street-city-state-country", and even funnier ones, like "c/o", meaning it's not the usual addressee. Why are people able to deal with this?) If you're living in a Western country, even your own name probably works like this. Why this "oh, it's magic, don't care"?

I'd say, it is mostly, because we encourage them not to care. Because we say, "Yes, that's really difficult", where we ought to say, "No, it's really simple and you ought to know." The user is still the person in charge. Pampering and flattering the person in charge into incompetence isn't apt to end well.

I'd say there's a chance to convey simple things, like: the cloud is not on your local machine; or how a URL is principally constructed; or that a file is saved only when you save a file.

Edit: Returning to the obligatory-car-simile, when I did my driver's test, I had to know the intrinsics of an engine, of the braking mechanism, of the steering. I was tested for knowledge of ad-hoc technical repair. It was assumed reasonable for a driver to grasp, to memorize the details, to minutely describe them, and it was even mandatory to do so in order to obtain a license. However, it was less important to drive a car then (you could do well without this in most occupations) than it is to operate a computer nowadays.

Edit 2: And, to level up a bit, how come academics are able to correctly cite a book and page, but are unable to parse a URL – and are even flattered for the latter?

Without prejudice to the rest of your points, computers are very unlike most inventions. Computation is extremely powerful; our only working definition of what it even is relies on an intuition, called the Church-Turing thesis, that essentially says computers are doing categorically the same thing we are, but doesn't purport to explain why that's so. It looks observably true, and that's the best we have.

So, it's entirely unfair to suppose that since people got used to having tap water and so we are surprised if a person can't operate a tap, therefore they should be used to the entire complexity of computation by now.

You definitely _should not_ count machines that aren't actually computers ("digital accounting machines with storage"), since those aren't Turing-complete; they're just another trivial machine, like a calculator. Instead, compare the other working example we have of full-blown computation: humans. Why aren't people somehow used to everything about people yet? People have been around a long time too. Why isn't everyone prepared for every idiosyncratic or even nonsensical behaviour from other people – they've surely had long enough, right?

> Is it, because we're telling them so?

Anti-intellectualism runs deep in our society.

Some engines have two per cylinder :)

I find this interesting – the parallels up to this point. My intent is not to poke fun at anyone, but just to look back at the conversation we just had.

We're talking about users not understanding the technology they use daily.

jwalton, in trying to give an example with spark plugs, allowed a more knowledgeable user or practitioner, mirimir, to give a more technically-correct description.

It seems to echo the main problem we are discussing in which users of a technology are not the same as those who design or know the nitty-gritty details of that technology.

Assumptions learned from day to day use in that technology (all cylinders have one plug, the google box is the only box I need) can so easily be proven incorrect when speaking to an actual expert in that field.

Well, I was just being pedantic, I suppose.

But it's arguably not such a great example, because details of engine design are generally trivial for drivers. Maybe a better example is the low oil pressure indicator. Maybe most people don't know what that actually means, but not having one can lead to severe engine damage. Years ago, I had a car with an oil radiator, and the oil line failed. So I knew to stop immediately.

Ohhhh holy cow I never realized Google might want to encourage this. Thanks!

I do that occasionally, and I especially suggest non-technical users do exactly this. I can mistype a URL; Google will correct me if the site is well-known. Otherwise I'm risking going to a phishing website.

I just finished helping out a friend who did exactly this, clicked on an ad on the results page thinking it was Google's top result, and was redirected to an ESTA scam site where they lost a bunch of money.

What's easier to tell apart for nontechnical users? URL bar from Google search field or ads from Google results?

> There are also legal concerns with catch-all domains in some countries.

Wow, really? Could you expand a little? I tried to search but all I got was catch-all mail addresses and no legal issues. Thanks!

Here in Austria, we had a rather problematic court ruling regarding this. Following this, and per common recommendations, catch-all domains were mostly disabled; at the least, you run them at your own risk.

What it was about: say there was a review or best-price-search site (here, "service.at"), using a catch-all and mapping subdomain requests to product searches. So "acme.service.at" would be remapped to, say, "service.at/search?q=acme". Now Acme sued, claiming that anything on the web containing the name "acme" ought to point to their site, including the subdomain "acme.service.at", since they were the owner of the name "Acme". To almost everybody's surprise, the court decided that this was true, according to naming rights, and that a subdomain containing this name, even if just implemented virtually by a catch-all mechanism, was an infringement. This also implies that "acme.example.at", which is included in the set of "*.example.at" and mapped to the very same content as just "example.at", is a possible infringement. – Strange, but this is how it is. And, yes, it's particularly about search engines, like Google.

(I really don't remember the particulars, since this was some years ago by now, but we may assume that the results returned by the service weren't exactly favorable, and that the particular search enjoyed a higher PageRank than the vendor's own site, or at least a rank that brought it up near the vendor's site in search results.)

Wow, that's particularly deranged if "service.at/search?q=acme" is considered acceptable (and if that's not, how could any search work?).

Thanks for the explanation. I honestly would not have imagined anything like that.

IANAL, and I'm just speculating here, but the ruling could have been framed as "copyright laws apply to domain names on the internet" (i.e. you can't use the name of a brand you don't own in a domain name), and acme.service.at is a domain name, but service.at/search?q=acme is not.

That would be trademark, not copyright, by the way.

I think this is exactly right. All the letters of the name are important and cannot be left off. If people want to equate www.domain to domain they can put a 301 redirect on the www address but the browser has no business making assumptions about what the owner of the name space thinks are equivalent.

Once upon a time, back in the mid-1990s when it was a major WWW browser, Netscape Navigator assumed it could wrap domain names in URLs in "www." and ".com".

Interestingly, in Firefox, you can type a word into the URL bar and hit ctrl+enter to add "www" and "com". Shift+enter adds "www" and "net".

Oh come on.

"www." was used as a way of delineating what was a web address. Hence the fashion of putting that there so people knew you had to do it in the browser. Before then people used to also put the "http://" on there, and the combination of the two on vehicles/signs was ridiculous.

We're now in a web world. People know what a URL is. "domain.com" isn't ambiguous; it's obvious to man, beast or child that you type it into the browser. Most decent websites redirect between "www." and the bare domain to whichever is the canonical version; the bare one should be canonical, tbf.

The 'm.' is ridiculous too and ruins shareability. If the link was the bare domain, and the frontend does any switch that's needed, we'd all be better off.

There is an actual (small) reason for the existence of "www" nowadays. You cannot have a CNAME record for the domain apex (example.com). Many dns providers implement a workaround by resolving the CNAME record into A/AAAA records when queried.


You can but it prevents you from having other records there, which includes things like an MX record. Just a nitpick, as in practice that prevents most people from being able to use a CNAME on the apex.
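A zone-file sketch of the constraint (all names here are invented placeholders): the apex has to carry SOA and NS records (and typically MX), and a CNAME cannot coexist with other records at the same name, so only "www" is free to be a CNAME.

```zone
; Illustrative BIND-style zone fragment -- every name is a placeholder.
; The apex (@) must hold SOA/NS/MX records, and a CNAME cannot coexist
; with other records at the same name, so the apex gets an A record
; while "www" can be a CNAME pointing at a hosting provider.
$ORIGIN example.com.
@       IN  SOA   ns1.example.com. hostmaster.example.com. (
                  2018090601 7200 3600 1209600 3600 )
@       IN  NS    ns1.example.com.
@       IN  MX    10 mail.example.com.
@       IN  A     192.0.2.10
www     IN  CNAME web-host.provider.example.net.
```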

I upvoted you because you're correct - it's a failing of the system.

My original point still stands though - we used to use 'http://' and 'http://www.' as a signal that this was a web address. I cannot believe this will still stand in 5 years time. The default is now the domain name, not the phone number.

Something that I was struggling with just today. Of all the days to update Chrome... At first I thought my redirect was broken.

Another good point, I forgot about that - thank you.

"www" was not a marketing trick, it was legitimately a different domain, by convention. General users never understood it, so companies started to have to add it to match their weird expectations.

To associate a base domain with a company identity happens to be true MOST of the time, but isn't actually guaranteed. Plus, foo.example.com follows different security rules than bar.example.com (CORS, certs, etc.).

The problem here is that the precise domain has a technical meaning... but consumers are using it for a different meaning. One that is also useful BUT NOT THE SAME.

Pretending the url matches this new meaning (and altering the display to match) serves both groups poorly.

There was never a requirement for www. to be anything other than the bare domain for most people. It became useful because it was synonymous with being a web thing right back when people hardly knew what the web was. This was serendipity, which turned out not to be serendipitous when people had to write it on signs / read it out on an advert etc.

I see no reason now to associate www. with the web version of your service. If I receive a request on port 80 or 443 for the bare domain, what's a better option than service the 99% of people who want a webpage?

You are splitting hairs on this one.

> It became useful because it was synonymous with being a web thing right back when people hardly knew what the web was

You're missing some history here (or we're talking past one another). Back then (source: lived through it), subdomains for particular protocols (www.example.com, ftp.example.com, gopher.example.com, mail.example.com) were pretty common, though not a requirement at all. Almost all the users were technical, so this helped users AND admins. Plus, machines were FAR less powerful back then, so anything exposed to the "public" probably didn't want to handle multiple purposes anyway.

Then non-technical users came in, saw "www.example.com" being used many places, and assumed it was part of the system. New domains either created a "www" subdomain or lost traffic (until browsers started trying to compensate). Note that what we're discussing is a switch in behavior. Prior to what the article is discussing, a browser would try the domain as typed, and if it failed would try prepending "www" AND ADD IT.

> I see no reason now to associate www. with the web version of your service

First, you still have people that type the "www" automatically because they never learned that was technically incorrect.

Second, what if you're reselling subdomains? The concept of "base domain == identity" is relatively recent and possibly temporary.

Third, what if you don't HAVE a single "web version of your service"?

The internet (and the web) has succeeded (granted, half by accident) by providing loose rules so practices can evolve inside those rules. If we start encoding the current practices in the rules, the rules no longer handle evolution well (or possibly at all).

I'm splitting hairs because hairs sometimes matter.

Sure, perhaps there is no reason to associate www. But the issue here is a browser showing an incorrect url.

Message 3[1] in the linked discussion has a great counter-example: "How will you distinguish http://www.pool.ntp.org vs http://pool.ntp.org ?

One takes you to the website about the project, the other goes to a random ntp server."

I do totally agree about m., but it's not Google's place to dictate that, rather it's a decision for each entity to make for themselves.

[1] https://bugs.chromium.org/p/chromium/issues/detail?id=881410...

Easy answer. If you end up where you wanted to go, you are in the right place. If not, google it. This is how the vast vast majority of users behave.

What about amp. prefix? You know that's the whole point of this, right? They're going to hide the fact that you're viewing the entire web through amp.

Did you look at the bug report? It's rife with valid examples of why this behavior is wrong on Chrome's part. For example, when "www" isn't the first subdomain, Chrome still elides it.

This thread contains many examples of professional technologists who don't know what a URL is. You, for example, don't seem to know what a URL is.

I guarantee you, most non-tech people don't know what a URL is. People know what links are, to the extent that they can click/tap on them to get to some thing, or copy them and share/email them. That's not the same as knowing what they are.

You're making a dangerous assumption about what people know, including yourself.

> "domain.com" isn't ambiguous

But that's because it's .com. Now, there are too many gTLDs, and companies will build their brand around their use of .io, .me, .cs, .es, etc. Just the other day I saw a link that caught my eye to studio.zeldman, and I had to take a moment to hover over the link to see if that was some new branded gTLD.

Nitpicking: .io, .me, and .es are ccTLD (respectively British Indian Ocean Territory, Montenegro, and Spain) and have been around for at least a decade. .cs was a ccTLD for Czechoslovakia.

Ahem. "www" comes from the time when you had to have a dedicated machine, or at least a dedicated network interface, for each service. Hence you had an FTP server creatively named "ftp", and your WWW service ran on a host surprisingly named "www". A concept similar to well-known addresses.

That time has never existed. Some did separate things that way, many did not.

That time absolutely existed. Was common. Source: lived through it.

No, it didn't. Some separated services that way, but it was never any more necessary than it is now. Source: used to run an ISP through the early days of the web, and collocated services on the same host all the time.

The commenter I replied to claimed that you 'had to' have a separate host or interface for each service; that is flat out false.

When people split it, it was over capacity or manageability concerns, but often we also set up separate hostnames for different services just because it was what people expected; often it pointed to the same hosts.

yep. or ftp (or mail/smtp/whatever) was a single host separate from the web servers, and www was a CNAME to a virtual ip/load balancer.

Still separated physically.

That time wasn't even that long ago.

The point was that this was always a choice - there has never been a point where it was required. My first ISP back in '93 ran mail, web, ftp and shell accounts on a single pc. So did the ISP I cofounded in 95. It isn't and never has been a technical limitation, but a choice down to what worked for you. Especially as address rewriting firewalls also existed back then, so multiple services pointing to the same external IP in no way implied they had to be the same physical host.

For us (early regional ISP, mid-'90s), a lack of separate per-service hostnames caused significant scaling fragility.

In the initial rollout, all services were served from a single physical host with just one listening IP, which the bare 'example.net' resolved to. (Was this naive of us? You bet.) Other service hostnames (www., smtp., etc) were all just either CNAMEs to that hostname, or A records to that IP.

When our SMTP usage started to exceed the capacity of that single host, we tried to move 'smtp.example.net' to a different host. This is when we discovered that many users were configured to use 'example.net' for SMTP instead. We had to update all of those users' configs before we could turn down SMTP on the original host. (We couldn't afford big-iron load balancers, and they were less common then - we just used DNS round-robin for load distribution).

At that point, we realized that customers were using bare "example.net" for everything - homepage, SMTP, POP3, IMAP, FTP, DNS, shell access - you name it. It was easy to remember - and it worked. So it was hard-coded everywhere - FTP scripts, non-dynamic DNS settings, etc. And this was looong before email clients had automatic configuration detection, so that was all hard-coded, too.

So we had to painfully track down all the users who were still hitting 'example.net' for SMTP, and help them update their configs before we could turn down SMTP on the original ancient host. The other services had to go through a similar painful transition.

We concluded that the only way to prevent this from happening again was to make sure that the bare hostname never offered any services at all - except for a single HTTP service whose sole purpose was to redirect 'example.net' to 'www.example.net'.

From then on, each new vISP domain had the same non-overlapping service namespace ... so that the otherwise inevitable configuration drift would be impossible.

Later, with the rise of things like email autoconfiguration, load balancers, and POP/IMAP multiplexors (like 'smunge'), we had more options. But at the time, avoiding services on 'example.net' was the only way to go (for us). Having a bare 'example.com' as the sole hostname in the browser bar was a sign of brokenness. :)

I wasn't claiming it was a technical limitation or a requirement, just that the time where this happened certainly did exist. Choice or not, the time existed. That was my point. Fair enough.

I disagree about the 'm.domain' convention; it's good and useful. I like the ability to retrieve a mobile site on my desktop, and vice versa. Sometimes I'll be on a site that's difficult to read on mobile, and speculatively try the 'm.domain' - often it will work. When the site itself tries to autodetect what device I'm using, it often makes a poor decision that is not subject to appeal.

On the downside, the m subdomain makes for terrible social media posting.

E.g. a commenter using Wikipedia's mobile site posts a link to said page and desktop viewers are unexpectedly taken to the mobile, not desktop, page.

And Google's decision makes it even worse by making the incorrect display incomprehensible.

This whole comment is two-faced.

People know what a URL is. But this issue demonstrates misunderstandings of URLs, as "www.x.y" is not necessarily an official "x.y" page.

You call "www" == "web address" and "m." annoying fashions (agreed there!), but those fashions have literally zero impact on the security characteristics, implying yet again that people do not understand URLs.


No. This comment is a perfect example of why this is not safe to do. It's throwing open the door to abuse.

What makes you think that http requests are the only thing domains are used for? Mentioned elsewhere in this thread, Active Directory requires that the A record for the bare domain be pointed to the PDC.

If you're serving email from example.com, for reputation reasons you should have the bare domain's A record pointed at your primary MX.

I understand that here on HN we're focused around web-based companies, but for every other corporation, there is a plethora of other services served out of a domain -- of which web/www traffic is maybe 10%, if not less. Everything from email to voip to directory services to vpns to crazy internal apps all rely on the corp's domain/domains, and you definitely should not be pointing your bare domain at your web server (which, chances are, is some contractor-built page living on GoDaddy completely outside your own infrastructure).

In a typical company, you'd have some server serving example.com doing some or all of the above. It would then be running a light http server which accepts requests on 80/443 and permanent-redirects them to www.example.com.

This is why www matters.
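The light redirect server described above can be sketched in a few lines. Hostnames here are placeholders, not anyone's real setup, and a real deployment would listen on 80/443 behind proper TLS:

```python
# Minimal sketch of a bare-domain redirect server: answer every request
# to example.com with a 301 pointing at the same path on www.example.com.
from http.server import BaseHTTPRequestHandler, HTTPServer

WWW_HOST = "www.example.com"  # assumed canonical web host (placeholder)

def redirect_location(path: str, www_host: str = WWW_HOST) -> str:
    """Build the Location header for a permanent redirect to the www host."""
    return f"https://{www_host}{path}"

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(301)  # permanent redirect
        self.send_header("Location", redirect_location(self.path))
        self.end_headers()

# To actually serve (blocks forever):
# HTTPServer(("", 80), RedirectHandler).serve_forever()
```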

How about older people? Can your parents explain the difference? Mine can't and I can assure you most of them are the same.

Try this:

Grandma: I'm typing domain.com into my browser on [random device] and it doesn't work.

Nerd grandchild: well that's because it doesn't exist.

Grandma: But it works on my other computer.

Nerd grandchild: that's because the browser tries lots of domains like www.domain, domain.org and so on when you enter domain.

Grandma: yes, I noticed that. When I enter www.domain.com, it automatically corrects it to domain.com. So that must be the right one, surely?

Nerd grandchild: Nope. domain.com is the correct domain. It's just trying not to confuse you.

Grandma: ?

Good point - but even my mum (65 and not very good with phones or computers) will just bash the bare domain into her browser. If it's got "www." she'll use that, if not she won't. She still knows it's a web thing; the www. for her and everybody else is superfluous.

Things change. We've had something like 30-odd years of URLs. The people who can't deal with this are vanishingly small, and those that can't are likely not your target market; or they're the sort that'll just consider Facebook to be the web.

I'm not disagreeing with your point btw that some people can't deal with this - all I disagree is the extent.

> We've had something like 30-odd years of URLs.

24 from the RFC, 26 from the discussion that led to it, per Wikipedia.

Good point. I was on Janet / ARPANet before the web. Point conceded!

I'd bet good money this change was put in by people younger than the web.

That may be true, and while I wholeheartedly disagree with this change, those that are younger than the web are the next shepherds of the web.

We're going to see more and more changes that the "old folks of the internet" are going to hate. Some, or even many, of these changes will actually be good changes. We shouldn't prejudice on age.

Again, to be clear, I think this particular change is horribly broken.

Older people care more about consistency than "usability". They can successfully complete long tasks, maybe with several retries, but only if those tasks are consistent: inputting a long text somewhere, dialing a 15-20 digit phone number, etc. But they usually can't deal with unpredictable situations: where the computer will "intelligently" help them or fill in parts of the input, where the same action works differently on different devices, or where they need to know in advance how the system will behave.

PS: Consider how you would guide an older person over the phone while they access the Citibank website (for which www.citibank is a different website), while Chrome "intelligently" hides an essential part of the address.

Agreed - my experience exactly. A very recent example: my father uses Skype to talk with family. They refreshed the UI, he got the new version installed, and that was it. I had to answer multiple questions about what this and that button does. "The same, dad, it just looks a little bit different." What I got as an answer was: "It's placed somewhere else. It is so confusing. Couldn't they just leave everything where it's always been..." Thankfully I have remote access to his desktop, so I can guide him around in situations like this.

And I am quite sure removing 'www' from the address bar's domains is just as confusing.

m. is where you often receive a feature degraded, app-walled, or sign-in walled website that isn't present on the full-featured website.

I want to know that I'm receiving a degraded version of a website and that maybe dropping the m. will restore it.

It's also where you receive a lightweight, fast, to the point version.

Admittedly, it's not just "m", but if I _must_ use Facebook, the only bearable version is mbasic.facebook.com

It’s certainly not true in the case of Reddit. Their mobile site is horrible by any measure, way slower than the old desktop site.

But they want you to use the app anyway, so they don't put effort into the web version.

m. is where you find a lightweight, no-bullshit version of a site that doesn't load excessive images or Flash or JS before required.

I want to know I'm on a faster version of a website.

(Seriously, mbasic.facebook.com allows chat, whereas m.facebook.com reminds you to install the Facebook app.)

You're the second person recently to lament mobile sites on HN. I think they're great. Responsive isn't there yet.

Some of them are great. Some of them do incomprehensible shit like loading text in chunks while scrolling, making it impossible to scroll to the end of an article without waiting 5-10 seconds for it to appear.

(This might be a result of my using a content blocker to block mobile ads, but the fact that it’s even possible infuriates me as a user. I mean, it’s text! Just show me the text!)

I agree with you 100% that responsive design isn’t there yet.

I'm in the "basic usually better" camp too, but this doesn't matter, as we're all in agreement here! We want to know whether we're on the "better" or "worse" version of the site, for whatever each of us mean by "better" or "worse".

Some sites look absolutely terrible on a wide monitor when they are designed for a narrow mobile screen.

I must say that m.twitter.com is a better desktop website than twitter.com itself.

Most people don't care about the location bar.

Every time they want to use Facebook, they type 'facebook' or maybe 'facebook login' into the location/search bar.

And to get to their gmail account they might type their own email address in the location bar.

Ehh, this might have been true 10+ years ago, but most people are more savvy than that today. I think you are describing something that is less and less common (which may explain why Google is trying to encourage people to keep doing it).

You'd be really surprised. There are still people that do exactly what the grandparent comment relates, today, in 2018.

"Less common" isn't "doesn't exist".

Yes, but I suspect it is far more common (I have no evidence to supply however) than the grandparent comment appears to imply.

We (the HN crowd) can easily get caught in a 'bubble' where, because we know the details, and those with whom we typically associate also know the details, we extrapolate those observations to conclude that "most people" know the details.

But until one's been in a situation of providing support or training for a diverse user group, one does not see just how little technical knowledge the "average joe" (a set of which we the HN crowd are very much not a member of) has of these things. The "average joe"'s level of technical knowledge is astonishingly low compared to the HN crowd's level of the same.

Yahoo Search still shows a second search box under the first search result if you search for Google, to capture users that search for Google then type in their actual search term, even when they're already on a search engine.

They may not make up a huge proportion of users, but they still make up a huge number of people in actual terms.

Google knows best, DNS standards be damned. Practice makes standard. /s

What part of the DNS standard governs how URLs are displayed, exactly?

>there's a difference between "www.example.com" and "example.com"

Can you link to a site where these two are different?

Many orgs do this.

For example, with Active Directory, the DNS A record for your foo.com domain must resolve to your domain controllers. Your www.foo.com will resolve to a separate non-domain controller web server.

I think a lot of the commenters here are thinking solely in terms of commercial web services such as twitter.com and such, but there's so much more to the wider landscape.

Thinking about it that way gives me conflicted feelings. Much as I hate what Google has done here I also feel like any organization stupid enough to use their public domain name for their Active Directory domain name deserves every little pain they receive for it.

You lack the compassion that comes with experience.

My $dayjob has our AD root domain the same as our public root domain. Because we implemented AD in the year 2000, and this was Microsoft’s recommendation for domain naming way back then.

And if you use Exchange, you can’t rename your AD domain, you have to rebuild your forest and migrate piecemeal. So we’re stuck with it.

The practice of using Corp.example.com did not evolve until many years after Windows 2000 and Exchange 2000 were in the wild.

So we run http redirectors on each of our domain controllers to send traffic to www.

This one is kind of a "religious" topic for me, I guess. I'm sorry that it is, but it makes me exceedingly defensive.

I trained on Active Directory (AD) with a group of veteran sysadmins in 1999. I don't have access to the "Microsoft Official Curriculum" book from my class in '99 (long-since thrown away), but I have a distinct memory of a lively conversation in class re: the pitfalls of using a public domain name as an AD domain name (or, worse yet, a Forest Root domain name). It was very evident to our group of veteran sysadmins that using a public domain name in AD would create silly make-work scenarios (like installing IIS on every DC just to redirect visitors to "www.example.com" -- just as you describe, albeit IIS didn't natively support sending redirects at the time).

I'd go further and suggest that anybody with a modicum of familiarity with DNS knows having multiple roots-of-authority for a single domain name is a bad idea. Microsoft not supporting split-horizon in their DNS server (like BIND does with 'views') compounded the difficulties with such a scenario in an all-Windows environment.

I certainly wouldn't argue that Microsoft has given exclusively good recommendations for AD domain names in the past (evidence ".local" in Windows Small Business Server), but I am reasonably certain that their documentation always suggested that using a subdomain of a public domain name was a supported and workable option.

I started deploying AD in 2000. I've deployed roughly 50 forests in different enterprises, and I've never used a public domain name as an AD domain name. I've domain-renamed all my subsequently-acquired Customers for whom it was an option (which it was, so long as they had not yet installed Exchange 2007), and have been rebuilding the Forests of Customers who made the wrong decision in the past, where it makes economical sense.

Microsoft has provided mechanisms for split-horizon DNS service since Server 2003. Views are not the only way of providing split-horizon DNS service.

* http://jdebp.info./FGA/dns-split-horizon.html#SeparateConten...
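For reference, BIND's 'views' feature mentioned earlier looks roughly like this (an illustrative named.conf fragment with invented networks and file paths):

```
// Split-horizon via BIND views (sketch): internal resolvers see one
// copy of example.com, the public internet sees another.
view "internal" {
    match-clients { 10.0.0.0/8; 192.168.0.0/16; };
    zone "example.com" {
        type master;
        file "zones/internal/example.com.db";
    };
};
view "external" {
    match-clients { any; };
    zone "example.com" {
        type master;
        file "zones/external/example.com.db";
    };
};
```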

Windows 2000 didn't support stub zones, however. At the time that Active Directory was new there wasn't a good way to do split-horizon DNS with the Windows DNS server.

As an aside: I really enjoy your writing about using SRV lookups. It makes me sad that SRV records aren't being used as much as they could / should be.

I don’t know anything about AD, so this might be a stupid question: can you not just run a web server on the same host as the AD server or port forward all HTTP traffic to a different server?

A domain controller on the internal network might not be the right place to run a copy of the public-facing content HTTP server (which might be in a datacentre, or even managed and run by an outside party, and might not be served by IIS). Then there are considerations of firewalling rules, browser rules, anti-virus rules, and even DNS rules for machines on the internal network that access a public WWW site that DNS lookups map into non-public IP addresses. (To prevent certain forms of external attacks, system administrators have taken in recent years to preventing this very scenario from working by filtering DNS results.)

* http://jdebp.eu./FGA/dns-split-horizon-common-server-names.h...

* http://jdebp.eu./FGA/dns-ms-dcs-overwrite-domain-name.html

* http://jdebp.eu./FGA/dns-use-domain-names-that-you-own.html

From the two comments above, it sounds like yes: some people who named their AD the same as their root DNS zone now have to run HTTP redirectors.

And the other comment mentioned that this was a known issue 20 years ago because the old versions of IIS did not support redirecting.

We beat this to death on Serverfault.com 9 years ago, so I'll spare all the rehashing here: https://serverfault.com/questions/76715/windows-active-direc...

Having a disjoint DNS namespace (and the needless make-work that it creates) is the issue, more than running HTTP servers on all your DCs to do redirects. There is absolutely no practical advantage to running an Active Directory domain with a public DNS name. It's all downside. It has always been all downside, and anybody who had any experience with DNS could see that all the way back in the beta and RC releases of the product in 1999 and 2000.

From one of the comments there:

http://www.pool.ntp.org vs http://pool.ntp.org

One takes you to the website about the project, the other goes to a random ntp server.

OK, which one of you hooligans runs this NTP server[1] that plays some loud obnoxious dubstep track?

[1]: https://i.imgur.com/cEukhNu.jpg

Those go to the same place for me

Not me.

http://www.pool.ntp.org/ redirects me to https://www.ntppool.org/en/.

http://pool.ntp.org/ takes me to an "It works!" default Apache 2 page for an Ubuntu installation. As the comment in the issue describes, http://pool.ntp.org/ takes you to a random ntp server.

If you want another example, try google.com using Google's own DNS:

  PS U:\> nslookup -
  Default Server:  google-public-dns-a.google.com
  > google.com
  Server:  google-public-dns-a.google.com
  Non-authoritative answer:
  Name:    google.com
  Addresses:  2607:f8b0:4009:810::200e
  > www.google.com
  Server:  google-public-dns-a.google.com
  Non-authoritative answer:
  Name:    forcesafesearch.google.com
  Aliases:  www.google.com

Even if you ultimately end up at the same site through redirects, you're clearly not going to the same site initially.

>http://pool.ntp.org/ takes me to an "It works!" default Apache 2 page for an Ubuntu installation. As the comment in the issue describes, http://pool.ntp.org/ takes you to a random ntp server.

Either way, the ask was for a difference in www.example.com vs example.com. Not a difference in www.pool.example.com vs pool.example.com. In the latter case, the different subdomains will still be shown (AFAIK).

>Even if you ultimately end up at the same site through redirects, you're clearly not going to the same site initially.

Which is nothing that an end user is going to care about and doesn't provide an example to the asked question.

>In the latter case, the different subdomains will still be shown (AFAIK).

http://www.pool.example.com displays as http://pool.example.com

Here's a gif: https://vgy.me/61I0DA.gif

For fun I'm going to set up a www.www.www.www.www.www.www.www.www record.

http://www.www.www.www.www.www.www.www.www.www.example.com shows as example.com

E: I'll add it to my certs later but I did it: https://www.www.www.www.www.www.www.www.www.www.www.www.aish...

E2: http://www.example.www.example.org shows up as example.example.org - this is fun.

Re: E2 (http://www.example.www.example.org === example.example.org)

I just found the same thing. How exactly is this a feature? What an insane decision.

That is absolutely insane, and someone should be fired and shamed for this. I didn't like trimming even a leading www., but trimming any www. label anywhere in the hostname is just dumb behaviour.

How would I differentiate between loadbalancer1.www.intranet and loadbalancer1.intranet? THOSE ARE NOT THE SAME.
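A plausible reconstruction of the elision rule that would produce these displays (my guess, not Chrome's actual code) is simply dropping every "www" and "m" label wherever it appears:

```python
def elide_trivial_labels(host: str) -> str:
    """Hypothetical sketch of the reported behavior: strip every 'www'
    and 'm' label from the hostname, not just a leading 'www.'."""
    return ".".join(label for label in host.split(".")
                    if label not in ("www", "m"))

# Reproduces the surprising displays reported above:
#   www.example.www.example.org -> example.example.org
#   loadbalancer1.www.intranet  -> loadbalancer1.intranet
```

Under that rule, loadbalancer1.www.intranet and loadbalancer1.intranet would indeed display identically.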

Wow. You could do some pretty amazing spoofing with the www.com domain, then.

Some small subset of pool servers run an HTTP server that redirects you to www. Not all of them. You just got lucky.

That's exactly right. www.pool.ntp.org is the project site. pool.ntp.org is for getting an NTP server. Which one you get will depend on your location and random chance. That server will run NTP, but what it happens to run on port 80, if anything, is up to the operator of the server.

I must be lucky too, as I got the same result from both.

They definitely do not for me (iOS).

See the issue.

http://www.pool.ntp.org/ http://pool.ntp.org/

https://www.citibank.com.sg/ https://citibank.com.sg/

Plus, this actually removes any www part of the domain.

So subdomain.www.example.com shows as subdomain.example.com

Why even open that can of worms?

A) Consider any sharing platform where unrelated parties coexist with distinct subdomains under a common root domain (e.g., Blogspot, Tumblr, etc.). While "www" is probably a reserved name and mostly not of practical concern, "m" may be a practical issue.

B) Consider subdomains for test-purpose like "www.test.www.example.com" (now displayed as "test.example.com", which is actually not even the root of the specific subdomain).

C) Users are left unsure whether they are on the full-featured site or a reduced mobile site when "m" is hidden.

D) I may actually want to have a service-agnostic default host at the root and subdomains for dedicated servers (like "www", "ftp", "mail", "stun", "voip", etc.). Maybe this one just returns a short text message by design if accessed on port 80. Not every domain is just about the WWW. (Edit: While we may assume that such a server would forward in practice, this may be assuming too much.)

>> there's a difference between "www.example.com" and "example.com"

> Can you link to a site where these two are different?

There are third-level domains where anyone can register "www.{TLD}". E.g., .com.kg, .net.kg, .org.kg. Look at www.com.kg. It's also available as www.www.com.kg. Or www.org.kg, which is in fact www.www.org.kg. If you display just the last part (com.kg, org.kg), does that mean you're viewing the root website? Nope, it doesn't. It means that Chrome is fucked up.

Someone mentioned www.citibank.com.sg vs citibank.com.sg in the issue.

One of my school's websites: I can't remember which it was, and this was before I understood what the difference is, but www worked much better than without, iirc.

This also applies to m.*, so literally any web-app with a mobile version.

Consider the different types of records you need to add for those examples if your web host is Heroku or some other cloud provider:
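For instance, something like the following (the hostnames and the Heroku DNS target are hypothetical, and `ALIAS` is a provider-specific extension rather than a standard record type):

```zone
; "www" can point at the provider with an ordinary CNAME:
www.example.com.  300  IN  CNAME  example-app.herokudns.com.

; ...but the zone apex must not hold a CNAME (RFC 1034), so the bare
; domain needs either plain A records or a provider-specific
; ALIAS/ANAME record that is flattened to A records behind the scenes:
example.com.      300  IN  ALIAS  example-app.herokudns.com.
```

That asymmetry is exactly why "www or no www" trips people up during DNS setup.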


I don't remember the site offhand, but I was going to one recently where example.com didn't even work, it was some weird error page -- you had to use www.example.com. If it comes to me, I'll post it.

I've seen this behaviour, and the reverse. Can't remember examples, but it does happen.

This is what Chrome's update is trying to fix. Developers are confused when setting up DNS about whether they should have www, not have www, or only have www...

Not really fixing it, though, because they just strip the www part from the name. If the developer does not set up www.domain.com and the user goes there, Chrome will not "fix" anything.

I haven't tested it, but it will most likely show up as domain.com in the address bar and result in an error shown to the customer.

If Chrome wants to strip www because it's essentially the same as domain.com, they can submit an RFC and not just decide for everyone. Honestly, I hope they keep making stupid decisions like this so people move to Firefox and we get more competition.

> If the developer does not setup www.domain.com and the user goes there chrome will not “fix” anything

Yup, that's on the developers. Hopefully this fix will make it easier to set up DNS with just one domain instead of two. Props to Chrome.

Read the source link. A concrete example (Citibank) is given.

www.pool.ntp.org pool.ntp.org

for ages, my former high school's website did not respond to requests that omitted the www. subdomain :/

Many companies have their marketing site at www. and their app at, say, app. E.g., https://www.netlify.com/ vs https://app.netlify.com/

That's www vs app, not www vs lack-of.

Ah. Thanks for clarifying that.

app. subdomains are not hidden

Can't trivial subdomains just be colored gray instead of being done away with? Also, maybe color the .com and similar blue.

> It seems that "m." is also considered a trivial subdomain. So when a user clicks a link to a "m.facebook.com" uri, they'll be confused why FB looks different when the browser reports it's on "facebook.com".

Will they? I find it very unlikely that many users would even check the URL in the first place, let alone understand that m.foo and foo route to different places.

Hmm... that surely wasn't a technical decision, and that's where the problem starts.

This is going to break a lot of sites.

Lots of people saying this is for the benefit of non-technical users.

For me, this is a minor inconvenience, precisely because I'm technically capable/interested enough to handle the inconsistency.

But this kind of stuff (and I am speaking somewhat generally here) tends to frustrate me, precisely when I'm trying to educate or deal with a non-technical user in some capacity where it happens to matter. I can't just tell them, "that is the address of the page, and that will always lead to the exact same place if you type it fully and correctly, and that's that". Instead I have to get my head around what if they're using browser X or operating system Y, I have to ask on the phone first, "hang on, tell me what you see on your screen", I have to say to the lady who's eagerly sat in front of me with pen hovered above paper waiting for me to dictate how to do a thing in straightforward steps, "well it depends, first you have to check this thing, and if it's like this then you can do this but it might also be like that in which case it's a bit different, let me explain" - and this is usually the point at which the non-technical user gets tired and throws the book at me.

In short, I think consistency of information and process is usually much more understandable and useful to users of any level, than the dumb 'simplification' of this half-baked information-hiding.

Yeah, I think that consistency is greatly undervalued.

Grandma has no problem with technical details being shown; she just ignores them. She knows that clicking the button on the top left will go to the webmail and that she needs to click the big red button in order to write a new email. Change anything and she will get lost, click everywhere, and usually find the solution, but sometimes make a mess.

There are also security implications. I told her to be aware of any change, because it may imply a phishing attempt or some malware. But how is that going to work if legitimate software always changes? You are basically training them to stop thinking about what happens, which is terrible, since thinking is the only thing that can protect them; they don't have the technical intuition most of us have.

This! Consistency.

Browser start screens with a large search box in the center haven't helped either. Some users see no difference between the location field and this search box; some have even unlearned it. Arguably, it facilitates ignorance of the location, the significance of URLs and how they work. Reading a URL isn't witchcraft; it's just about three simple things. But dumbing things down towards convenience at the expense of consistency will not empower users.

(Surprisingly, ordinary people have been able to manually dial a phone or to parse a street address without the help of a map service in the past. It can't be that bad.)
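Those three simple things (scheme, host, path) can even be shown mechanically; here's a quick sketch with Python's standard library (the URL is made up):

```python
from urllib.parse import urlsplit

# The three parts that matter when reading a URL:
# the scheme (how), the host (who), and the path (what on that host).
parts = urlsplit("https://www.example.com/mail/inbox?msg=42")
print(parts.scheme)    # https
print(parts.hostname)  # www.example.com
print(parts.path)      # /mail/inbox
```

Everything a browser hides is still right there in the string.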

I wholeheartedly agree with this. There are two sides of the debate now. One side says that the machines should be clever enough to guess the user's intention and work around their mistakes, while the other side says that the machines should stay dumb and square. The question is who handles the complexity of this world, and in my opinion it should often be left to the human, not to the machines.

The worst outcome is when there's an ambiguity involved and the smart system takes precedence, but occasionally happens to take the wrong route – and suppresses any feedback for the user to intervene or even notice. Which is pretty much where this has brought us. I guess collateral damage has become a matter of everyday life.

Most comments assume that this is for solving user confusion, or security, or building a better URL scheme, et al.

It's not, that is all smokescreen.

As ivs wrote[1], they are going to hide the amp subdomain, so you don't know if you're looking at AMP or the actual destination. And then suddenly the whole world funnels through AMP.

And for that reason, it won't be reversed until people call them out on what they are actually trying to do.

[1]: https://news.ycombinator.com/item?id=17928939

This should be the top comment. After this change, we are just one step away from using the browser's address bar only as a Google search box, and Google as the entire internet's gatekeeper. Google doesn't make money when you type the URL into your browser's address bar – it makes money when you don't.

AMP pages are served through google.com, though? It's one of the big problems with them.

Not always. Sometimes Google results have taken me to websites like "amp.reddit.com" on mobile.

That makes sense. And not just AMP but they want to train users to NOT pay attention to domain/subdomains, leading to more room for other exploits.

Yeah... no. That's just baseless FUD.

They _are_ indeed planning to get rid of AMP cache URLs, but they'll be doing it through open W3C standards anyone can use, not through special-casing their own domains: https://amphtml.wordpress.com/2018/05/08/a-first-look-at-usi...

No, this Chrome update is about hiding the "amp." subdomain from the original URL. What Google wants to achieve is to make it impossible for the average user to tell when the entire website is being served from the Google Cache.

Google cache links aren't served from `amp.yoursite.com`, they're served from `cdn.ampproject.org`.

If you're visiting `amp.yoursite.com`, then the site _isn't_ being served from the Google cache.

Also, "this Chrome update is about hiding the "amp." subdomain on the original site from the viewer" is patently false, since this update _doesn't_ hide `amp.`; only `m.` and `www.`.

> Google cache links aren't served from `amp.yoursite.com`

That's not where things are going, according to your own source from the previous comment:

> Our approach uses one component of the emerging Web Packaging technologies—technologies that also support a range of other use cases. This component allows a publisher to sign an HTTP exchange (a request/response pair), which then allows a caching server to do the work of actually delivering that exchange to a browser. When the browser loads this “Signed Exchange”, it can show the original publisher’s web origin in the browser address bar because it can prove similar integrity and authenticity properties as a regular HTTPS connection.

So, the content will be served from Google Cache with the original publisher's URL in the address bar.

> this update _doesn't_ hide `amp.`; only `m.` and `www.`

It's Google who decides what it wants to add to its browser's list of "trivial subdomains", and when. Especially once websites with "amp." subdomains become common.

Yes, once the Web Package Standard is finalized and implemented then AMP pages will indeed use the normal `amp.` URLs.

But at that point, what would be your concern with hiding `amp.`? That's no worse than hiding `m.`; it's just another subdomain which serves a different version of the same content. Heck, sites could serve their amp pages on `m.` domains if they wanted to; the actual subdomain they decide to use is irrelevant.

Seeing "amp." in the URL meant that it's not the "full version" of the site. Google wants to remove that separation for the end user, so that all publishers would serve their content through the Google Cache. And that's a big concern to me, since it means the entire web will be served from a single company's database.

> Seeing "amp." in the URL meant that it's not a "full version" of the site.

Yes, but once again that's no different from `m.`.

> And that's a big concern to me, since it means, the entire web will be served from a single company's database.

Are we talking about before or after the Web Package Standard is implemented here?

If before, then your concerns about the URL aren't applicable because `amp.` links aren't served from the Google cache (only `cdn.ampproject.org` links). If after, then the content isn't "served from a single company's database" anymore; it's served using a decentralized and open standard for cross-origin server push.

> If after, then the content isn't "served from a single company's database" anymore; it's served using a decentralized and open standard for cross-origin server push.

Does this mean that Google will no longer rank those who implement AMP and serve through the Google Cache higher than those who don't?

Yes. https://amphtml.wordpress.com/2018/03/08/standardizing-lesso...

> Based on what we learned from AMP, we now feel ready to take the next step and work to support more instant-loading content not based on AMP technology in areas of Google Search designed for this, like the Top Stories carousel. This content will need to follow a set of future web standards and meet a set of objective performance and user experience criteria to be eligible.

Furthermore, once the Web Package Standard is finalized, the "Google Cache" won't exist anymore, at least not in the same way it does now.

The Web Package Standard allows any web page which supports origin signed responses to be served via cross-origin server push from any server that supports HTTP/2. So Google will probably still cache and push pages via their own infrastructure when you visit those pages from your Google search results, but the actual content being served will be fully controlled by the original publisher and behave exactly as if your browser received the page directly from the publisher's server.

> So Google will probably still cache and push pages via their own infrastructure when you visit those pages from your Google search results

And that's what I mean by saying that the entire web will be served from a single company's database, a company which already controls the browser and the search. You will be able to browse the web without ever leaving Google's servers, and Google will be able to track your every interaction on the web.

This doesn't increase Google's ability to track you at all. If you click a link on a Google search results page they already know you visited that site; them serving the initial page load via a cross-origin server push changes nothing.

It also doesn't give them any more control over the web, since the page contents are still strictly controlled by the original publisher (and that's cryptographically enforced).

So again, what's your actual concern?

Right now, Google only knows the first page I visit from its search results. After this update, Google will be able to follow me across the entire web, because it will be the one who serves it to me. How is that not a concern?

Are you seriously claiming that the largest ad company in the world is interested in decentralizing the web? The blog article you linked to yourself says that the goal of this entire initiative is to increase the usage of AMP by "displaying better AMP URLs".

> After this update, Google will be able to follow me across the entire web, because it will the one who serves it to me.

That's not how it works. Only the initial page is loaded over cross-origin server push. After you actually navigate to that page you're no longer on Google's site (which is why the URL bar is able to show the domain of the site you just navigated to instead of still showing google.com), so obviously they don't have any enhanced ability to monitor what you do after that point.

> Are you seriously claiming that the largest ad company in the world is interested in decentralizing the web?

The general web is already decentralized. This is about decentralizing AMP. And yes, decentralizing AMP is exactly what Google is doing here.

> the goal of this entire initiative is to increase the usage of AMP by "displaying better AMP URLs"

Yes, and they're accomplishing that by pursuing the development of open W3C standards which can be used by anyone. Just like how offline storage on the web started as a feature enabled by [a proprietary plugin developed by Google (Google Gears)][1] until Google pursued the development of open standards to replace it: https://www.w3.org/TR/service-workers-1/ (Check out who the editors are on that draft.)

Google's been following this pattern for over a decade now. They start with a proprietary initiative, then use the lessons learned from that effort to develop open web standards that improve the web for everyone. (I can give maybe a dozen more examples if you still don't believe me.) There's no reason to think AMP will be any different in this regard, especially since Google has already made their intentions on this matter clear.

[1]: https://support.google.com/code/answer/69197?hl=en

Another very real problem is not being able to share the real url rather than an amp link.

Is ampproject Google's website? On amp.google.com I can find the original url for sharing purposes, whereas on ampproject.org urls I can't.

This and many other changes over a course of a short period of time have caused me to go to Firefox exclusively now. I heard Firefox is going to stop third party cookie tracking altogether. Why not give Google the big finger and use a different browser? Vote with your cold hard actions if you feel so strongly about something.

I switched to Firefox a year ago. It's a little slower, but I'm a lot happier.

I've been trying to de-Google as much as is reasonable. I moved to Fastmail as well. Still using an Android, but I would switch if a reasonable alternative that wasn't an iPhone came up. I'm not paranoid or a privacy nut, I just think Google is too involved in my life.

Maybe try an Android derivative? One tailored for privacy? I've heard of LineageOS, it's marketed as privacy-friendly.

Same here. Have you found any viable alternative to the Google Calendar? I'm at the point where I'm thinking about hosting a calendar project from GitHub myself.

Nextcloud, whether self-hosted or otherwise, works great! It's just WebDAV. You can get calendar, contacts, task, and note syncing, and it can even host your documents for reference management software like Zotero.

The calendar of Fastmail works. It's not great, but it does the job

If you own a Samsung phone, the calendar app is good. I wonder if you can install those Samsung apps (which for some are just forks of unmaintained AOSP apps) on a regular Android if you somehow get the apk.

I was in exactly the same camp as you a year ago. Then I played with a hand-me-down iPhone 6s and couldn't believe how much more pleasant it was to use iOS than to use Android (Nougat at the time). Having owned an iPhone 3G and 5, my memories were of a restrictive OS and a dumb Siri but both have really has come along since. I made the switch and can't imagine going back to Android now.

Are people still considering smaller, local ISPs for email? Or are there even enough of those to consider?

Upvoted from Firefox. Only reason I use Chrome nowadays is when apps launch it directly (whereupon I strongly consider uninstalling them) or when work requires it (... which is utterly ridiculous, and very likely why our web rendering performance and consistency is utter trash).

I'm now the same way, and it pains me to see "optimized for Google Chrome"


Hangouts and GotoMeeting are the only things I open in Chrome. I'm also completely sold on Tree Style Tabs, and I don't think I could now live without...

I would love to, but Firefox just feels clunkier. Not sure what it is, but the scrolling doesn't feel native to me (macOS, Magic Trackpad and Logitech mouse).

You can turn off that scroll behavior.

I wonder if it has anything to do with the HiDPI issue:


(there's a couple of hits when I search for 'scrolling' in that thread)

Faster than Chrome for me on MacOS

Been using Firefox on my desktop and it's amazing. For some reason the mobile version is incredibly slow to load pages, though.

Firefox is a better browser due to tree style tabs. But it is noticeably slower.

Having to use both for supporting complex web apps, I can't really agree that FF is noticeably slower. Chrome does seem to have less silly bugs though, like quick searching doesn't find multi-select box text in FF. Works great in Chrome.

Regardless of FF's little quirks, I use it almost exclusively for personal stuff. I would rather deal with those types of things than the mentality Chrome brings to table.

I've been using Edge on Windows for at least a year now and I'm quite happy. Now that it supports plugins, I haven't fired up another browser for months now.

Because the battery drainage with Firefox is unacceptable.

I use Firefox as my "at home" / private browser. However, for work I unfortunately feel forced to continue using Chrome. First, I just really prefer the Chrome devtools, and I can't seem to find an equivalent replacement for the "manage people"/multi-user built-in function that Chrome offers. I really wish Firefox had something similar...

Multi-Account Containers do that part for me, and I find them much nicer than Chrome's similar functionality: https://addons.mozilla.org/en-GB/firefox/addon/multi-account...

For bonus fun, also install Temporary Containers: https://addons.mozilla.org/en-GB/firefox/addon/temporary-con...

Firefox has both profiles (for actual different users) and containers for isolating stuff (i.e. a sub-profile) for a single user.

The containers have a great UI/UX. I'm looking at the profile stuff now (after having not in years) and it seems counterintuitive and clunky



Anyway, "forced" is a strong word when you simply mean "prefer". The Firefox dev tools are in the same league as the Chrome ones, imo.

The containers are great, thanks for sharing; will definitely use those at home! You're right, I should have emphasized the "feel" part of my comment, since the dev tools especially are just a matter of preference. However, I stand by my point that the profiles are definitely not on par with Chrome's, since there doesn't seem to be a way to have multiple profiles open at once.

I'm sure there are other ways to do this, including specifying a profile when you initially launch the browser, but you can enter about:profiles in the address bar to see a UI for managing profiles. One of the options is to launch a profile in a new browser instance.

If you just need a "personal" profile and a "work" profile, what I do as a workaround is to use normal firefox for personal and firefox developer edition for work. They are completely sandboxed from each other.

Firefox does similar things, though. They hide the URL scheme by default, and subdomains are displayed in a more subtle colour than the rest of the domain.

Firefox hides the scheme IFF it is `http://`. It doesn't hide `https://`. Also, the subdomain AND the path are slightly toned down. The net effect is precisely what Google tries to do and Apple has been doing (namely, only showing the second-level domain), without actually hiding any information.

You should really give the Brave browser from (www.)brave.com a try.

Yes. I've been using Brave exclusively for more than a year and I'm not going back.

I'm still undecided on which search engine to use though.

Have you tried Searx? Meta search engine, open source, multiple instances (domains) to choose from. Once configured to your particular needs, searx can prove very powerful.

Safari already hides "www.". In fact it hides everything except the root-level domain, e.g. "https://www.google.com/about/" shows just "[lock] google.com".

Firefox and Opera show the full domain but gray out everything in the entire URL except the root-level domain, so "www." is gray.

Just saying, de-emphasizing and hiding parts of the URL is clearly a trend. This isn't just a Google thing.

De-emphasizing is fine, hiding altogether is not -- for both protocols and subdomains.

But Chrome does exactly that. If you put focus in the URL bar in Chrome 69, it shows the full URL including the protocol: amazon.de is displayed, but on focus it's https://www.amazon.de

I thought that's how it worked too, but it does not. One click shows the entire URL and selects it, still without "www.". Another click will show "www.".

Regardless of how you feel about the change, it does indeed hide 'www.' to the point where a power user could easily be fooled that it was the naked domain.

Edit: Here's a demo of how it works: https://www.useloom.com/share/f7d71b95d75b4c4582bb38cdc84326...

This is actually the main reason I cannot use Safari. It always boggled my mind that they made this decision.

For power users, they never look at the url unless they want information from it, in which case the `www` is valuable.

For low tech users, it can lead to straight up incomprehensible issues, like sites not rendering properly (think of a `m.*`).

The UI gains are so small, that part of the screen is never really looked at, but needs to be there, and typically has tons of horizontal room... I don't get it

Low-tech users don’t often understand that a difference could possibly exist between “m.” and “www.” at all.

However, if it shows the TLD, they can confirm it says "google.com". Imagine they're visiting a Paypal phishing link, to a domain like:

  www.paypal.com.www.com

The most important thing to show the user is "www.com", because they're expecting "paypal.com". All the rest is nonessential for protecting users from bad actor sites.

Looking at the bug report, Chrome would actually show "www.paypal.com.www.com" as "paypal.com.com". At least Safari does the wrong thing the right way.

Personally, I always want to see the full URL. It's fine if part of the domain, the scheme, etc. are grayed out to emphasize the second- and top-level domains, but don't omit elements that are necessary to fully identify the resource just because the lowest common denominator may think that phishing.com/paypal.com is PayPal.

Yep, I verified that bug as well, apparently they never planned for "www" being somewhere other than at the front of the domain name. Sounds like they already know, woot!

> Low-tech users don’t often understand that a difference could possibly exist between “m.” and “www.” at all.

They should. Children probably have difficulty with '6' vs. '9,' but they need to learn in order to use our number system. Likewise, users of the Internet need to learn the domain name system. Could there be better name systems? Sure. There could be better number systems, too, but this is what we have for now.

What difference is indicated by "news." rather than "www."?

Um, that they can be different websites?

The general public does not perceive that difference, likely as a direct result of dot-com inventiveness with respect to domain names. Thanks to the stupidity of “m.” (WAP is dead) and “amp.” (WAP lives!) and the cuteness of “baredoma.in” (Silicon Valley represent) and the insanity of “www1034.www” (here’s looking at you, HP), we have spent the last decade on the web directly teaching non-tech users that what used to matter (“www”) no longer means anything at all, and they’ve listened.

This is not a feature. Make users understand this, don't hide it: make the main domain glow green, wash out the rest, anything, but this trend of hiding complexity will only lead to severe undereducation on the topic, and eventually it will reach professionals as well, who also won't understand what they should.

Reducing the displayed value from { "is_secure" YES/NO, "http/https" ARGH/WHAT, "full URL" GIBBERISH } to { "is_secure" YES/NO, "domain" AOL KEYWORD } improves my chances of defending against a phishing attack someday, as well as those of non-tech users.

Reducing information density is a critical component of automobile safety measures. Dashboards in cars just prior to the "screens everywhere" era have been boiled down to the essence of what's necessary for a human being to operate a vehicle safely and without putting others at risk: One bright line showing speed, one bright line showing engine speed, one bright line showing fuel remaining, and a few multicolored status icons; and then, a central info display where any logic more complex than "push to show next value" requires parking the car.


You can still see the full URL by focusing the address bar with either a click or ⌘-L.

I think it makes sense for the default display to show the most security-relevant information (TLD, SLD, and presence + validity of the certificate), while deferring the full display (incl. spurious or malicious information that might be in the full URL, e.g. https://example.com/www/paypal/com/login) to a user request (click or shortcut).

That said, Chrome 69's decision to hide /all/ instances of www in the domain is unconscionably bad.

> this is actually the main reason I cannot use Safari.

Then I have good news for you! If you go to Safari's preferences and select the Advanced tab, there's a checkbox called "Show full website address" that disables this behavior and shows the full URL in the search bar.

Unfortunately this is not in Safari for iOS.

Safari on iOS barely has enough room to show the domain, let alone the full URL. Tapping on the URL bar will present the full URL in an editable/scrollable text field.

You know it’s a setting, right?

Hiding file extensions in Windows is a setting, too.

Funny how life repeats itself.

Safari does hide www, but it's a default setting that can be changed by checking [0]

  Preferences > Advanced > Show full website address
[0] https://imgur.com/a/3VMo5zH

EDIT: Formatting. Add missing article. Change linked image.

There is quite a bit of confusion about how it actually works; nobody seems to have had a look at it. Chrome simply hides the subdomain when the URL bar is not in edit mode; these parts are still accessible/editable, and the HTTP host behaves (and is recovered from history) as before. Copying the URL works as expected.

It is still confusing for tech people, because we often need awareness where we are.

>Chrome simply hides the subdomain if the url bar is not in edit mode

Not so. You have to click the Omnibox twice.[1] Clicking on the Omnibox once puts you in a completely new state, "edit mode [with corrupted URL]," and clicking on the Omnibox again puts you in "edit mode [with correct URL]."

[1] https://bugs.chromium.org/p/chromium/issues/detail?id=881410...

It seems like it would be confusing for anyone. Especially since it doesn't just remove the lowest-level domain if it's www, but any part that is www. So "www.paypal.www.com" would be displayed as "paypal.com". If that isn't great for phishing, I don't know what is.

It's a setting. Settings / Advanced / Show full website address to show the www.

The UX of moving what I'm looking at instantly under a click is very unpleasant too; the www. and http and even the "secure"/"not secure" banners cause shifts of like 200px.

Lots of Google's UI is getting (or has) things shifting instantly under the pointer, and it's quite annoying. The new Gmail design's quick-tools are often in the way, unknown until you actually click. Hell, calling on my phone shifts the speaker and keypad buttons over instantly when someone picks up, often placing the person I just called on hold if they answer too fast.

Wasn't it a pretty well-covered UX rule not to move shit around on users? Up there with "don't use modes" and the like.

> The UX of moving what I'm looking at instantly under click is very unpleasant too,

Could you expand on this? What are you referring to?

I don't know if it's the same thing, but I'll bite: it's absolutely maddening to edit or select part of a URL.

Click in the address bar and the entire address is selected, then you click on any part of it to either select a part or to place your cursor in order to add to it, after which Chrome appears to first shift the entire URL to the right in order to show the protocol, then it places the cursor within the shifted URL under your pointer. This causes (attempted) selections to be established from some other place in the URL than the user intended.

Yes, it's this, and it's an absolutely infuriating change.


This is about Google having a fundamental weakness in product management and UX, giving me the equivalent of a Windows Registry setting to change is not helping, practical as it may be.

It's weird too, because Chrome pioneered the UX of not reflowing the tab bar when closing lots of tabs, instead keeping each close X under the mouse until you move away. So it's like they understood this once and have since forgotten.

Have you used the latest Chrome with the redesign? The tab's X no longer has any hover effects, or doesn't seem to.

After the last few days of wondering about this, I see now that hovering over the X makes it ~5% darker.

Oh yeah, I looked at the X on the current tab and didn't agree. The hover effect on a background tab's close X is basically indistinguishable.

ugh!!!!! this is killing me

Fixed in a week (v69.0.3497.92)! Nice one Chrome

Just go try to change the Hacker News URL right now from .com to .co.uk and you'll see.

Those who are saying this change is to make things "easier" for certain non-literate users may be correct, but they should consider whether such a justification is desirable at all from a moral perspective.

No doubt all of us have at some point been forced to learn things that we did not find particularly useful or pleasant to learn at the time, but then later experienced a great "satisfaction of knowledge" when faced with a situation in which that knowledge became advantageous or even essential, and then proceeded to use it to better ourselves.

Imagine a world in which none of that learning took place; one in which you never have to think, everything you see and do automatically satisfies you and keeps you in a blissful state of ignorance. Who makes the decisions in that world; or rather, who can make those decisions? Who is in charge of your life? Not you.

Gradually reducing the motivation to learn, by making things "easy" and hiding/obfuscating anything that could be used as a starting point for more learning, makes for a population that won't think, won't learn, won't question or rebel. It makes them docile and easy to control.

Making statements like "ordinary users will never learn" is one thing, but explicitly making decisions to ensure that status quo is a horrible trend. It's quite a genius plan, and certainly used by organisations other than Google, but thoroughly disturbing.

I've said a few times before in the past: "knowledge is power --- they don't want you to have too much."

Some of my comments from 4 years ago when Chrome hiding URLs was just an "experiment":


Onward to Idiocracy we go!


I'm ok with hiding "www.", but it also hides "m." which is sometimes very confusing (I once opened a m.facebook.com link and was very puzzled why it uses the mobile site when the URL bar just shows "facebook.com").

What you may be surprised to learn is that Chrome isn't just stripping "www." from the beginning of the subdomain. "subdomain.www.domain.com" displays as "subdomain.domain.com"

I just downloaded canary and tried it and you are absolutely right.

about.www.github.io shows as about.github.io

I'm on board with this change in general, but this is absolutely something that needs to be fixed.

Not only is that just annoying and wrong, but it could be dangerous in some situations.

Also, it doesn't just stop at one removal.

shows as

Interestingly though it doesn't remove it if it's the TLD, or the actual domain (so stuff.www.whatever shows as stuff.www.whatever).

I'm guessing this is implemented as s/www\.//
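It's probably not a bare regex, for what it's worth. The behaviour reported elsewhere in this thread (the registrable domain is kept, any other "www" label is stripped) can be approximated in a few lines. A hedged sketch, naively treating the last two labels as the registrable domain (Chromium actually consults the Public Suffix List, so multi-part suffixes like .co.nz would behave differently):

```python
def elide_trivial_www(host: str) -> str:
    """Drop every 'www' label above a (naively computed) registrable domain."""
    labels = host.split(".")
    if len(labels) <= 2:
        # Nothing above the registrable domain, so nothing to strip.
        return host
    subdomain, registrable = labels[:-2], labels[-2:]
    kept = [label for label in subdomain if label != "www"]
    return ".".join(kept + registrable)
```

This reproduces the examples people are posting: "subdomain.www.domain.com" becomes "subdomain.domain.com", "www.www.com" becomes "www.com", while "stuff.www.com" is left alone.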

Isn't this a security risk? What if someone malicious takes control of the www.com domain?

The www.com might not be easy to get, but there is probably a www name at another important TLD that can be purchased. And if you own that domain you can easily get SSL certificates too.

Sounds like Chrome could now be a phisher's best friend…

It appears that it only does it for anything subdomainish -- that is, not the first part after the TLD. I tested it against .nz which has a silly mix of .{co,govt,school}.nz second-level domains and directly registered example.nz domains and it always displays at least one "registered" bit.

Which is almost worse because it seems like people have put thought into this.

This entire change is a security risk - there is no guarantee that any subdomain has the same owner as the primary, and that's not even getting into subdomain hijacking.

Let's get into subdomain hijacking a bit. Imagine taking control of www.www.www.google.com .

This is amazing, what a freak show.

Today is a great day to own www.com

Luckily it doesn't work with TLDs or the "domain" part (or whatever the name for the "google" in "google.com" is).

So stuff.www.com shows as stuff.www.com

But www.www.com shows as www.com

Should file this as a bug. If they seriously missed this that is just amazing.

lol, this one is a plain bug if true, and hints at a really rash change, like, not reviewed at all.

It's looking likely that it will have to be reverted, and a loud slap in the face delivered, as it should be.

No, this is simply a bad implementation. As has already been mentioned, for internet services that also have websites, the www. subdomain makes total sense.

> How will you distinguish http://www.pool.ntp.org vs http://pool.ntp.org ?

In the above case pool.ntp.org is a decades old time service, while www.pool.ntp.org is a website describing the service.

http://www.pool.ntp.org redirects to https://www.ntppool.org/en/, as does https://ntppool.org/en/, so I guess that's a non-issue.

Do you get something at http://pool.ntp.org ? My browser times out, as does telnetting to port 80.

ntp is UDP so telnet isn't going to work. Try this:

$ ntpdate -d pool.ntp.org
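For the curious: NTP is a 48-byte UDP packet exchange on port 123, which is why telnet gets you nowhere. A rough sketch of a minimal SNTP-style query in Python (illustrative, not a robust client; the field offsets follow the standard NTP packet layout):

```python
import socket
import struct

# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01).
NTP_EPOCH_OFFSET = 2208988800


def ntp_transmit_seconds(response: bytes) -> int:
    """Extract the transmit timestamp (seconds field, bytes 40-43) as Unix time."""
    secs = struct.unpack("!I", response[40:44])[0]
    return secs - NTP_EPOCH_OFFSET


def query_ntp(server: str = "pool.ntp.org") -> int:
    """Send a minimal client request (LI=0, VN=3, Mode=3) and return Unix time."""
    request = b"\x1b" + 47 * b"\x00"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(5)
        s.sendto(request, (server, 123))
        response, _ = s.recvfrom(48)
    return ntp_transmit_seconds(response)
```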

Thanks. The URL's protocol element threw me for a loop.

That website looks unexpectedly modern.

Facebook aside, don't the majority of sites show you the mobile version by looking at headers? Isn't that the whole reason the "Show desktop version" feature even exists in Chrome, to send the desktop header? Very few sites actually use www. vs m.

Clearly, in those cases, people aren't "confused" by the fact that they are seeing a mobile version on www., so other than the fact that you're used to Facebook specifically working this way, wouldn't you just send the Desktop header whenever you get mobile and want desktop?

Just speaking for myself, but often people will link to mobile Wikipedia pages, or mobile versions of other sites, and I'll look at the URL first to see if I can change it to a desktop version.

The fact that you have to even do that shows that the m. pattern is bad. The same url should be shareable without having to worry about such things. It should show as mobile on mobile and desktop on desktop, unless you specifically tell it not to.

> Isn't that the whole reason the "Show desktop version" feature even exists in Chrome, to send the desktop header?

This is indeed what it does; I wish it did more!

Specifically, when visiting responsive pages on a phone where the mobile-viewport-size layout is just 100% broken, I’d love if “Request Desktop Site” actually set the viewport to be that of a desktop browser, and then set a low CSS/viewport zoom level to compensate. I want the dual of what happens when I set the “simulate a phone of X size” option in Chrome’s inspector!

I'm pretty sure Chrome's "request desktop site" option does change the viewport size. Or if it's doing something else, it has the same effect.

Have you tried it in the last couple months? It seems like a fairly recent change in behaviour.

I have a bookmarklet to specifically add/edit the <meta> tag on the page to change the viewport width to 1200px.

Works in many cases, although there still are sites that break with this. I have seen sites that use the value of window.innerWidth at load and never bother listening for changes in the width. I have seen sites that use the presence of an onTouchMove event to determine whether to use a mobile layout.

Modern websites use the screen width to show you mobile vs desktop view. But before CSS had good support for responsive web design everyone had to create a separate website for mobile and put it on m.website

I don't get what you mean, I'm talking about opening an "m.facebook.com" link on my desktop computer Chrome. So not sure why "Show desktop version" is relevant.

>Very few sites actually use www. vs m.

Well, facebook does that.

It's not about hiding www. or m. subdomains. It's about hiding the amp. subdomain, and Google is really invested in turning the web into a collection of AMP pages on their own servers.

How many sites even use this `m.` convention? On the .ru part of the internet there are approximately zero such sites.

Looks like they're only going to be hiding `m.` on mobile. On desktop it will still show by default.

Looks like this is intentional. To change it back go to chrome://flags/#omnibox-ui-hide-steady-state-url-scheme-and-subdomains and disable the setting.

Additionally, if this flag ever goes away, the "kFormatUrlOmitTrivialSubdomains" is the internal flag for this, it seems[1], though its description says it's "Not in kFormatUrlOmitDefaults"[2].

Back when they removed the "http:" off of URLs, I used to use a hex editor to turn the kFormatUrlOmitHTTP bit flag off every time I got a new build, so I'd get the URL formatting I wanted, but eventually lost the mental wherewithal to continue the hack every week.

[1] https://github.com/chromium/chromium/blob/3d41e77125f3de8d72...

[2] https://github.com/chromium/chromium/blob/78aae16be65e409075...

I have wanted to figure out for ages how to compute the location of these types of flags/vars in binary files.

Incidentally I want to figure out how to do this on Linux.

I presume I need debug symbol files, which I can download easily.

How would I do this?
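One low-tech approach that needs no debug symbols: find a distinctive byte sequence near the value you want to patch (an adjacent string constant works well), then poke a byte at a known offset from it. A sketch of the search step (the pattern and offset are yours to discover in a disassembler; with symbols, `nm` or `objdump -t` plus the ELF section headers map a virtual address to a file offset):

```python
import pathlib


def find_pattern(path, pattern: bytes) -> list:
    """Return every byte offset at which `pattern` occurs in the file."""
    data = pathlib.Path(path).read_bytes()
    offsets = []
    i = data.find(pattern)
    while i != -1:
        offsets.append(i)
        i = data.find(pattern, i + 1)
    return offsets
```

You'd want exactly one hit before patching anything; multiple hits mean your pattern isn't distinctive enough.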

> but eventually lost the mental wherewithal to continue the hack every week.

That's when you automate it as part of your "set up my environment exactly the way I like it" scripts ;-)

Thanks! This worked great for me and it brought back the https:// part as well.

Until a few releases down the line and it is decided for you that the flag should be removed.

This is the problem. Better to just switch to Firefox now and be done with it. Hopefully it'll send a message.

Until Firefox leadership decide to make the same change "because that's what Chrome does". Sadly, over the history of Firefox (and before that, Mozilla/Seamonkey) the leadership there has always been WAY too obsessed with following IE and/or Chrome rather than just building the best browser and taking some chances.

Seriously, trawl through Bugzilla sometime and look how many bugs are closed with the justification being some variation of "That's how IE does it" or "IE doesn't support that", etc. And then substitute "Chrome" for "IE" later in history once Chrome took over the universe.

Luckily we still have Vivaldi, Otter, Falkon etc.

Hopefully an upside to user tracking means using this flag is kind of voting on the behaviour. If they're listening.

I hope people start moving from Chrome to Firefox. The standards they're pushing are harming the web in many ways.

If only Firefox's JS handling didn't melt my laptop.

Yeah, I switched from Firefox to Epiphany (also called GNOME Web[0]) and I have the best of both worlds: a WebKit browser that's more feature-complete than luakit/surf/etc., with Firefox sync integration and no Google.

It's like an open source Safari clone and it works beautifully.

[0]: https://wiki.gnome.org/Apps/Web

Then turn it off.

I actually like to use websites that were made in the last decade, so instead of doing that I just use Chrome.

I use Firefox at work and at home for serious stuff. The only reason I don't use Firefox 100% of the time is the Android app, which is a lag fest.

Mozilla needs to put more resources on Android.

I'm in the opposite boat from you.

Firefox on Android is nice, especially thanks to being able to install an ad blocker.

Firefox on my desktop literally brings my entire machine to its knees, and destroys performance in all other apps that are open.

Pulling that off while not even getting close to maxing out the CPU is rather impressive. :/

Same here. FB chokes FF every other time I open it. My CPU and RAM consumption spikes. I have to close FF and reopen. Sometimes that doesn't work either.

It's annoying and frustrating.

Are you using modern Firefox (v57+)?

Auto-updates are being applied, so whatever latest is, it just updated today in fact.

The only plugin I have installed is uBlock Origin.

Whatever lag FF has is negated 10-fold over Chrome if you install uBlock Origin since there's no more tracking or ads. I can't even imagine going back to mobile Chrome or Safari, I'd rather go back to a basic phone.

Have you tried Firefox Focus?

Firefox focus isn't firefox, though, it's a private-browsing-only browser, and it's built on an entirely different engine (chromium, iirc). There's no tabs, history, saved logins, or anything else. It's far from being a full-featured browser. Want to disable javascript or images for a while because you're on data? Too bad, you can't.

I'm using Firefox Klar (Firefox Focus for Germany): I have tabs (you can't open an empty tab, but you can open links in a new tab), the session history is enough for my needs, and I can disable JS under settings (and also web fonts, but not images (that's a pity)).

For logins I'm using Keepass Android Offline, Firefox Focus/Klar can use it as autofill service (neither normal FF nor Chrome can do this).

So for me FF Klar/Focus is the best Android browser at the moment, it superseded the normal FF (with ad blocker) on my phone.

Edited: bad autocorrects from Gboard.

You're right that Focus is not a full fledged browser, though it is actively being revamped to use GeckoView instead of Chromium. I think the change is in beta right now.

Mozilla is also starting to put more resources into Android (GeckoView, etc). Hopefully we'll see some exciting things in this realm soon.

Focus is so crippled, for no conceivable benefit (as a user), that I find it unusable. A nice concept, but it can't even switch tabs without causing a full page reload, which is like reverting back to the pre-tabbed-browsing stone-age.

And, as mentioned, it's not even Firefox. It's just a cache-less webkit wrapper.

It begins. I saw this this morning, and thought it was something that was coming in the next few versions, not right now.


WTF? People get angry when you just move their cheese without notice.

>People get angry when you just move their cheese without notice.

Sorry, unrelated, but ahah, what is this saying? I love it and have never heard it before. I guess it's from some sort of book? https://hbswk.hbs.edu/item/cheese-moving-effecting-change-ra...

What they're talking about in that article is more dramatic than this.

Even if you believe this is in theory a good idea, in practice it's clear that Chrome has implemented this extremely badly.

As Comment 5 on that issue points out:

> This does appear to be inconsistent/improperly implemented. Why is www hidden twice if the domain is "www.www.2ld.tld"? [...] If the root zone is a 301 to the "www" version, removing "www" from the omnibox would be acceptable since the server indicated the root zone isn't intended for use. This isn't the behavior, though.

> If example.com returns a 403 status, and www.example.com returns a 404 status, the www version is still hidden from the user. The www and the root are very obviously different pages and serve different purposes, so I believe there should be some logic regarding whether or not www should be hidden.

It's not very difficult to come up with a simple algorithm that checks HTTP standard responses and implements this in a sensible way: it seems Chrome's developers haven't even stopped to think about how this should be done properly though.

This is not so much a new policy issue as a buggy implementation issue.

Even dumber, from another comment-

> Another case I ran into:

> "subdomain.www.domain.com" displays as "subdomain.domain.com".

And people like github.com/m can now host something and it'll look as if `github.io` hosted it.

This is the third story today about Google's handling of, and intentions towards, web addresses and webpages in general. The theme is that Google wants, or may want, to push a new set of Google-oriented web standards. This one seems the least likely of the three to be strategically related, however.

The other two: Google may/may not be pushing the open web standard towards their own AMP pages, instead:


Google wants to get rid of URLs but doesn't know what to replace it with:


I stumbled upon this bug too, and here's why this is not okay to me:

For a couple of hours, I thought Citibank Singapore's website was down.

If one tries accessing citibank.com.sg, there's no redirect to the www. subdomain. (That's still the case, if anyone wants to try.)

If Chrome didn't hide the www., I would have been able to tell from Chrome's search/address bar that the various banking services that I've been accessing were all on the www. subdomain.

While that is shoddy implementation on Citibank's part, hiding the www. most definitely didn't help with the troubleshooting process.

It's not a shoddy implementation on Citibank's part. Google are just fucking the web over so we all have to fit their structure.

This is standard incumbent behaviour; they are raising the drawbridge, inch by inch.

I think the downvotes here are uncalled-for. This is correct. Google is leveraging a dominant market position to make seemingly frivolous changes that will ultimately benefit them commercially.

Come on, you know where this is going: They are going to hide amp subdomain, so you don't know if you're looking at AMP or the actual destination. And then suddenly the whole world funnels through AMP.

> Come on, you know where this is going: They are going to hide amp subdomain, so you don't know if you're looking at AMP or the actual destination. And then suddenly the whole world funnels through AMP.

That's probably the reason for this utterly bizarre change.

Isn't it because URLs are the number 1 vector for phishing, though? Wouldn't removing URLs altogether and moving towards full identity checks be better for the web? And relying on Citibank Singapore to hire the right people to fix their website... never going to happen.

> wouldn't removing urls altogether and move towards full identity checks be better for the web?

Hey we'd be open to it, but we'd really prefer to have an RFC to read...

Instead, we got a quiet auto-update, without patch notes.

Good bye, open web. It was fun while it lasted.

Hopefully we'll see some antitrust action from the EU over this if they do drop the URL bar.

This is idiotic and harmful. We already lost information about the protocol, because somebody believed it is "too complex" for users. Now we're losing other parts of the URL. It's making a joke of the SSL/TLS padlock, too — what exactly is the padlock supposed to tell me? It used to signify that a "known authority" certified that I'm connected to whatever I see in the URL bar. But now that browsers take liberties with modifying the URL bar as they see fit, it becomes increasingly meaningless.

You can still click and see the whole URL. This is just making it easier for the average user to see the most important thing to them, which is the domain name.

It's not like they're just changing stuff randomly. The TLS padlock change has been going on for a while now, and not without reason. As we get to a point where almost everything is served over TLS it doesn't make sense to tell the user every time. It makes more sense to only notify them of the exceptional situation where we're on an insecure connection.

The certificate authority system is terrible, but it's what we have for now. There's been some advances to help make it better though. CT for example, ensures that if anyone starts making fake certs we can all see it at least.

I suspect (and hope) that browsers will slowly transition to using trusted spotters to verify certificates in addition to (and eventually instead of) authorities. If you remember a few years back when moxie marlinspike made that promising, but underspecified cert verification system that relied on the user supplying a list of trusted verifiers and the browser basically goes to each of them asking "I see cert 88:A4:etc" for domain "google.com", do you see the same thing? The idea was to make it really hard to MITM someone since you'd also have to MITM every verifier the browser asked. Not impossible, but probably harder than getting a fake cert under our current system.

Clicking doesn't reveal the URL. You have to click and use the left arrow key, at which point the protocol and www prefix appear.

Just like when Chrome hid the protocol part of the URL, you can even click in the URL bar and copy it without seeing the protocol (or, now, www prefix) at all. I think it results in a confusing experience when you paste it. (Ordinary users will say: "I copied example.com but I pasted https://www.example.com … why??)

Or double-click - basically get into the editing mode of the URL bar.

What other mode would I be trying to access?

> "This is just making it easier for the average user to see the most important thing to them"

This is exactly the wrong way. The domain name system is simple and easy to learn, partly because it is without ambiguity. It has been an essential part of our lives for several decades by now, and users should be expected to undergo the effort of spending 5 minutes, once in their lifetime, looking into how it works. (Arguably, parsing a URL is an important and essential skill nowadays, like basic arithmetic.) Obscuring it and introducing ambiguity doesn't just fail to help; it is a real hindrance to understanding.

> You can still click and see the whole URL. This is just making it easier for the average user to see the most important thing to them, which is the domain name.

Thanks for adding a step when that user's most important task is telling those of us supporting them the actual URL they went to. As other folks in this thread point out[1], it isn't as simple as just a click.

1) https://news.ycombinator.com/item?id=17928598

> This is just making it easier for the average user to see the most important thing to them, which is the domain name. It's not like they're just changing stuff randomly.

Can you link to the user study or general cost/benefit analysis or something else saying it's not random? I'm having a hard time concluding that the cost of removing parts of a domain name only in some cases is outweighed by the benefit of removing a few characters from the user's address bar.

99.9% of users have no idea what any of the words you just said mean. The change was made for them, not for you (the .1%)

>The change was made for them, not for you

Except, those same users also don't care about things in the address bar. So the change hurts the group of users that actually do care.

They didn't care so far because it was so confusing. The hope is that by showing something user-relevant (the name of the website and the security level), it will become more useful for the average user.

Why should a user see: https://www.wikipedia.org/wiki/Canada?utm=asdioasd&arg=j210d... when all they care about is "Wikipedia.org/wiki/Canada"?

What if the user sees "Wik1pedia.org/wiki/Canada"? Or "аррӏе.com"?

Hiding random parts of the address isn't going to make browsing the web better. The main purpose of URLs is for hyperlinks, not as a highly intuitive user interface. Users who don't know how URLs work don't care what is up there. They only care that what they are looking at is what they expect, and a way to get to where they want to go. And that's a complicated problem.

The URL bar hiding thing isn't for users, it's for Google to push Google search. That's why they attempted to remove the URL bar entirely four years ago and replace it with a search bar.
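For what it's worth, even automated homoglyph checks are subtle. Here's a crude mixed-script detector using only the Python stdlib, approximating each character's script via the first word of its Unicode name (real browsers use Unicode script properties and confusables data, so this is only an illustration):

```python
import unicodedata


def is_mixed_script(label: str) -> bool:
    """Crude check: do the letters in a DNS label come from more than one script?

    Approximates the script by the first word of each character's Unicode name
    (e.g. 'LATIN SMALL LETTER A' -> 'LATIN'). Non-letters are ignored.
    """
    scripts = {unicodedata.name(c).split()[0] for c in label if c.isalpha()}
    return len(scripts) > 1
```

This flags "pаypal" (with a Cyrillic "а") but not "аррӏе", because the latter is entirely Cyrillic: a single script, which is exactly why mixed-script detection alone doesn't solve the problem.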

Don’t Firefox and Safari also have a single bar for searches?

Yet neither company runs a search engine. I don’t suppose it’s possible this is just better for most users?

Firefox will allow you to search via the address bar, but it still has a separate search bar to the right by default.

Also, if you go into about:config and turn off keyword.enabled, the address bar will no longer search.

It's very useful if you don't want to Google internal/client URLs just because you accidentally copied a space at the start or the hostname doesn't resolve in your current environment, etc.

It still exists, but the default changed.

Now you can go to menu -> Customize and drag it where you want.

That has nothing to do with my argument. Just because people can't detect homoglyphs doesn't mean we should keep overloaded URLs and shouldn't strive to make them more user-friendly.

There's still value to be gained from having easily readable urls.

Users don't see that.

Expecting users to use the URL bar to detect phishing via homoglyphs is insanity.

Right. So messing with the URL bar is pointless. If people actually don't like it, get rid of it, but provide some other means to establish authenticity of what you're looking at.





do not have to be the same website and there are cases where it isn't the same site!

Which site you visit does matter, and not just for internet banking. If things become that easy when we hide them, maybe we should hide half of the traffic lights as well.

Those examples don't prove anything. First off, the http ones would appear different, they would have no lock. And 99.999999% of websites don't have a different www. and non-www. page. Showing www. for that 1 in a billion site, for that 1 in a million user, is insane.

Because arguments aren't always useless like in your example. Might as well just do away with the whole URL bar and just have a green checkmark if Chrome thinks it's the site you want.

I mean, why should a user see "Wikipedia.org/wiki/Canada" when all they care about is "This is the Wikipedia Page for Canada"?

I know you’re being sarcastic, but if chrome could, with perfect accuracy, indicate if this was “the site you want”, why not do away with the url?

Mind you, I’m not suggesting to do away with linking, as some rando suggested this implies. (While chrome doesn’t show the protocol prefix, it still copies the prefix when you copy the url, so imagine a similar ui.) But for most users, wouldn’t a ui that shows “server identity” in some more user-coherent way be what they want?

In particular, do subdomains help or hurt phishing detection?

You're making a hypothetical based on "with perfect accuracy"... but the much simpler change here (www vs. no www) is not done "with perfect accuracy", as clearly outlined by a bunch of comments in this thread.

So you're saying we should show all users www., just because of that one site in a billion that serves a different non-www. page, and that one user in a million who would even notice the difference? Chrome is a browser for mainstream people. The feature you're asking for is for power users and extreme edge cases.

If we could, I'd be all for it. Have an option that says "show URL bar or not" and by default hide the URL bar, optionally show the whole thing. Especially on space-constrained devices like cell phones where every pixel counts. Just show the page's title.

I think we're a long way away from that ideal, though, and some web pages may not be designed with this ideal in mind.

Because there is a huge difference between those two:


If the browser absolutely 100% of the time knew that difference and showed "Wikipedia FR > Canada", wouldn't it be much simpler for the average user?

The browser could even show specialized UI such as "FR" as a clickable dropdown menu to allow users to switch languages. Chrome already does this for searching a single website through the address bar (type domain.com TAB)

Basically, these changes are not designed for you. You are not representative of the average Web user.

No, it's really important to retain all those dots and slashes. This is not sarcasm. I'm being completely serious when I say this. It's really easy notation, and the differentiation of context for dots, slashes, question marks and hashtags is really useful.

  Wikipedia FR > Canada
I look at that angle bracket with the white space, and I get chills. And I'm not even drawing attention to that oh-so-glaring omission of the /wiki/ context. Truly horrifying.

Repeat. This is not sarcasm.

Dark roads ahead, friends.

Hiding the URL would be a terrible idea, no matter how much "simpler" it would be for the average user: it would either only be enabled for a handful of websites chosen by Google (which would mean having an inconsistent UI) or create a lot of security issues (what if someone creates a website and manages to also display "Wikipedia FR" with a similar layout?).

Maybe it could show "Canada - Wikipedia" like the page authors intended as a title. Maybe if the page authors want to have links between languages, they could code that in a standard markup language themselves.

These aren't decisions for a useragent to make, and there aren't enough browsers out there that people have a reasonable choice.

Bastardising the url isn't a solution to anything, it's a step towards something that Google, not users, want (in making AMP "trivial").

A useragent is literally a user's agent, it is supposed to help its user browse their target website. It is not supposed to help the target website show things to the user.

I wonder if there is any research or evidence that users actually find “www.” confusing enough to go through the trouble of removing it.

the only thing you've changed is that now those 99.9% of users can't even find the information they need to ask the .1% for help

great work

All they actually care about is "Wikipedia - Canada". Which is right there on the page.

According to Apple, they care about "wikipedia.org", only.

I like how HN cuts off the end of that url.

It matters because deep-linking is possible on the web.

Hiding it just seems like a trivial UI matter that makes things slightly more obnoxious when you do care.

I'd be surprised if only showing the domain vs domain+path made any difference on phishing results.

I don't think these little tricks do much for the user. For example, browsers now highlight https websites with green in the url bar and show a little lock icon. But how is that actionable information for the user? To what extent does that mean you can trust that website, and how does the average person interpret it? Phishing websites use https, too.

I would avoid stealing any pages from Safari's UI. That browser doesn't even show favicons on browser tabs to let you quickly distinguish them.

You're missing jwr's point. He's arguing that this is harmful for users, especially the ones who don't know what the words mean.

If I were solving this, I'd instead push to eliminate "www" altogether, not sweep it under the rug. It was useful circa 1996, when users might plausibly be using something other than the WWW with a browser. But it has become entirely vestigial.

Why not drop .com then as well? Most sites are on .com domains after all.

It is exactly the same issue.

A domain is a domain. Google is arbitrarily dictating your CNAME from the user's perspective.

What if you don't serve your site off of mysite.com? Is Google going to automatically try again at www.mysite.com?

What if you have distinct content at both domains?

This decision is stupid.

It's not the same issue at all, in that domains with different suffixes are controlled by different people while foo.com and www.foo.com are controlled by the same people.

If you have an example of somebody who needs to serve different web content for foo.com and www.foo.com, I look forward to seeing it. But I've never seen one, and when I've seen it happen accidentally it's due to idiocy.

> while foo.com and www.foo.com are controlled by the same people

Sometimes. Far from always.

In some environments, `www` may be under an entirely different administrative domain, with lesser authority than the top-level domain, which delegates web services to the `www` group by creating a DNS record and/or adding an HTTP(S) redirect on the parent domain.

Having some string values arbitrarily considered trivial is dangerous.

See: lbl.gov and www.lbl.gov resolve to different addresses.

The root domain points to hosts at the lab. The subdomain has been delegated off to Google.

This may be true for LBL, but it's not necessarily so. They don't serve different content, and I don't see anything running on lbl.gov that couldn't be handled by a redirect.

These string values are already considered equivalent, which is why Chrome is making this change, and why every reasonable site has one redirect to the other.

Isn't that simply due to internal politics? Whoever owns foo.com might delegate www.foo.com to someone else, but they ultimately control both.

It's due to different groups controlling different parts of the infrastructure, allowing for separation of privilege -- and is the whole reason www even existed to begin with.

Often these separate groups aren't part of the same organization. They're a different organization or contractor paid to maintain a web presence.

Yes, but those are decisions made by whoever controls foo.com. This may not be a good decision by Google, but I don't think they should be held responsible for a decision that was made by whoever controls foo.com.

This is completely backwards, because Google just made this decision _now_!

Either way, www.example.com and example.com can differ in terms of IP address, underlying hardware, actual website content, and probably other things I don’t know. They are different URLs. It seems problematic to assume they are the same.

Not at all. People already assume they are the same. And they have for 20 years. Nobody reasonable serves up different content on the two URLs. Anybody clueful redirects one to the other. The only reason they're separate is that a) the web wasn't dominant when it was introduced, and b) technology of the time made it hard to manage traffic in ways we can now.

> Nobody reasonable serves up different content on the two URLs.

The third and ninth comments in the linked bug present real world examples of this behavior.

The 9th comment is explicitly described as "bad results"; it's about somebody who doesn't have a redirect. So that for me is in the "unreasonable" category.

The 3rd is about pool.ntp.org, which is a random ntp server, and which shouldn't be serving up web content. They did happen to pick www.pool.ntp.org as the URL for the docs on the NTP Pool Project, but if "www" never was a thing, they would have happily picked something else. E.g. poolproject.ntp.org or ntp.org/pool/ would have been fine.

I realize it is an important distinction, I'm glad you do as well.

Just as ftp.mysite.com is not mysite.com, and mysite.com is not mysite.io, and http://mysite.com is not https://mysite.com. You get the point.

They are all different and important in my opinion. Any argument that hiding the "www." part makes it easier for the user is equally applicable (and wrong) to ".com"

You can keep repeating your point, but if you want to convince me, you'll have to actually address my demonstration that the two are in fact not equal in practice.

From the bug report that started this thread:

http://www.ntppool.org is not http://pool.ntp.org


https://citibank.com.sg is not https://www.citibank.com.sg


https://m.tumblr.com/ is not https://www.tumblr.com/

Yet Google makes them all appear to be the same.

There are lots of other odd filtering behaviors in the issue if you want to check out the comments

For example, should:

www.www.www.subdomain.www.www.www.domain.com show as subdomain.domain.com

How is that right?

How does making those two destinations appear to be the same thing make the user "safer" under any stretch of the imagination?
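The collapsing described above can be sketched in a few lines. This is a naive illustration of the reported behavior, not Chrome's actual code; the hostnames come from the bug report, and "www" and "m" are the labels this thread says Chrome 69 treats as trivial:

```python
TRIVIAL = {"www", "m"}  # labels reportedly elided by Chrome 69

def display_host(host: str) -> str:
    # Naive sketch of the reported elision: drop "trivial" labels
    # anywhere in the hostname before displaying it.
    return ".".join(label for label in host.split(".") if label not in TRIVIAL)

print(display_host("www.www.www.subdomain.www.www.www.domain.com"))
# -> subdomain.domain.com
print(display_host("m.tumblr.com"))
# -> tumblr.com
```

Two distinct hostnames (m.tumblr.com and tumblr.com) end up rendered identically, which is the crux of the complaint.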

I'm not arguing for Chrome's implementation. I'm saying we should do the more useful but harder thing of just not using "www" as a thing in browser URLs. They have correctly identified it as redundant, but instead tried to fix it by being too clever.

As I mention elsewhere, the first two are bad examples. (In fact, ntppool.org and www.ntppool.org are the same thing.) The third is a hack from the era where responsive design, browser sniffing, and polyfills didn't exist. It should probably die too, but doesn't have to here. The m.tumblr.com name is distinct from tumblr.com and is of the form I think is better. Note that they didn't use www.m.tumblr.com.

> domains with different suffixes are controlled by different people while foo.com and www.foo.com are controlled by the same people.

This happens fairly often in universities and some other organizations that can have convoluted structures.

Yes, why not drop URLs altogether then? Why display such "technical" things for the plebes?

Sure, "I'm feeling lucky!" should suffice /s

> Most sites are on .com domains after all.

Most sites in English, and even that is doubtful.

There's a full world out there of people and businesses using ccTLD like .fr, .nl, .co.uk, .hr, .rs, .es, .jp...

With "only" 46.5% of all domains being ".com" I guess you are technically correct. When the next highest TDL (.org) rings in at only 5.1%, I think we can agree the the overwhelming majority of sites are based on .com.

Of the TLDs you mention, the top one is .jp at less than 2%. If you remove the qualifier under the .uk TLD you get an additional 2%. The rest don't make the chart.


It's actually very useful for isolating access to the root of the domain. Say for instance you use a third-party SaaS/CMS to host your website and have other services on other subdomains. If it's hosted on the root of the domain, it has more power than if it's on a subdomain.

> If I were solving this, I'd instead push to eliminate "www" altogether, not sweep it under the rug. It was useful circa 1996, when users might plausibly be using something other than the WWW with a browser.

No, it was useful when it was less common to abstract machine identity from domain names into a many-to-many relationship, so “www” was the most specific domain name element for the server being accessed. (And a system that needed more than one server might have a homepage on “www”, and various subsites and apps on “www1”, “www2”, etc.)

OTOH, there may be places that still allocate servers that way for simplicity.

As I recall, there are some reasonable dns-related reasons that one might prefer www (or any subdomain) to the bare name.

The modern solution to these issues is having an SRV record on the appropriate protocol subdomain, which AFAIK (and somewhat surprisingly) is honored by most modern browsers.

Do you have more information about this?


Browsers don't use SRV records for HTTP. With them one wouldn't need www subdomain CNAME hacks.
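For reference, if HTTP clients did consult SRV records, a zone entry delegating web service to a named host might look like this (a hypothetical example with placeholder names; as noted above, browsers do not actually do this for HTTP):

```
; Hypothetical: direct HTTPS clients for example.com to the host
; www.example.com on port 443 (format: priority weight port target).
_https._tcp.example.com. 3600 IN SRV 10 5 443 www.example.com.
```

With such support, the apex could stay free of web-server address records entirely, and the www CNAME convention would be unnecessary.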

How can you eliminate "www" altogether? It's just a subdomain like any other now

Well, might as well drop the entire stuff after domain.com/{dump all this out} (the file path) since non-techy people don't really care about it. All they care about is clicking links and navigating...

/end sarcasm

Maybe the address bar UI will next hide query string parameters because they are an implementation detail? So Google News would be displayed as "news.google.com" instead of "news.google.com/?hl=en-US&gl=US&ceid=US:en".
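Every one of those URL components carries information a browser could choose to hide. A quick sketch using Python's standard library, with the Google News URL from the comment above:

```python
from urllib.parse import urlsplit

# Split the example URL into the parts a browser could elide from display.
u = urlsplit("https://news.google.com/?hl=en-US&gl=US&ceid=US:en")
print(u.scheme)   # https
print(u.netloc)   # news.google.com
print(u.query)    # hl=en-US&gl=US&ceid=US:en
```

Hiding the query string would drop the locale parameters that actually determine which edition of the page you see.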

Doesn't Safari on macos do this?

I'm a Firefox user so I don't typically use Safari, but I just did try it right now and I guess they do!

There thankfully is a setting to show the full url, but yeah, Safari on macOS does that by default. The change occurred around Lion (give or take a major version) if memory serves me well.

From the business point of view that would make perfect sense. The user won't be able to remember the URL or even second-guess it, so they'd need the "help" of Google, because everyone knows that "Google is your friend".

TBH, so long as there always remains the option to show the full URL, I'd be totally fine with completely hiding it by default. Safari all but does that right now.

99.9% of users don't know either way. The change neither improves nor harms their experience, it merely obfuscates what's actually going on under the guise of "user friendly."

After 20 years, the number isn't 99.9%. But:

Should chrome redirect all DuckDuckGo queries to Google.com because 99.9% use Google anyways?

Yes, and it should also pretend that you searched on DDG by imitating all interfaces and developer console records.

This excuse is used far too often to justify why things have to suck.

This makes me think that developers should start moving away from Chrome. Firefox, for instance, still supplies the full URL.

Citation needed.

Did you ask anyone? Did you do any research?

Presumably your question is for the Chrome team, and I'm gonna assume the answer is: yes, of course they did research. They did research for the padlock, etc too.

Anytime the "average user" or numbers like "99.9% of users" are mentioned, red lights should go off. These kinds of claims are condescending, often untrue, and rarely based on facts.

So they can be hacked in serene bliss. A stupid decision.

Hacked how?

Let's dumb down the internet even more because non technical users feel confused. Maybe we can actually remove the url field and just let the isp decide where they should go? They could offer a choice of 10 popular sites, like TV channels.. :)

Actually Google's long term plan is to do away with the URL completely, they just haven't figured out how yet.


We're always going to need to see URLs. They're not going away. They're just talking about a better default view to show the user relevant information about where they are, which would be a good thing.

Being generous, maybe half of users can look at a URL and immediately identify the domain they are on. That's terrible, and goes a long way to explaining why phishing attacks are so prevalent.

I'm actually pretty excited to see what they come up with. If it's even marginally better than what we have now I'd be all for it. I'm guessing they'll end up showing some combination of the domain and page title by default (which might incentivize more sites to FIX THEIR GODDAM TITLE SCHEMES!).

> We're always going to need to see URLs.

I can see the future now: User enters his target (mybank) into the Google search bar on his Google Android device, the device opens Google Chrome with the Google AMP page from mybank already loaded. The user never has to worry about URLs or where exactly he is entering his banking information and login criteria. Google makes everything a clean and seamless experience and the user never has to leave its warm embrace.

Add eyeball tracking into this mix, and we can "allow" users to "experience" unskippable ads.

In all seriousness, I have no doubt this is due largely to Google's frustration at Ad Blockers. If there is no URL, and you're in the Google Garden protocol, there is no way to block ads, or at least no way to NOT download them.

AOL keywords all over again. It's all about building a moat.

Great idea! Hey, since these are non-technical users, why don't we just eliminate those pesky hard-to-use computers and just put the whole thing inside their TV? Y'know, kinda like 23 years ago... https://en.wikipedia.org/wiki/MSN_TV

I also wouldn't put it past the average consumer to prefer 'installing' a website/webapp into their browser as they do an app..

99.9% of motor vehicle users have no use for airbags. We still keep them for the .1%.

This is a very bad analogy. Anyone in a car crash potentially benefits from airbags without knowing anything about them (or even if they exist at all).

The 99.9% of people who don't even know the difference between www and non-www will never directly benefit from seeing www, ever.

> The 99.9% of people who don't even know the difference between www and non-www will never directly benefit from seeing www, ever.

You don't need to know the difference to be able to read the URL off, potentially to someone who does.

It's not impossible (though it's not a good idea) for “example.com” and “www.example.com” to both host web content, and whether or not they know or care about the meaning of the domain name, someone accessing one should, in the event they have a problem, be able to read off which one they are accessing to the person trying to help them resolve it.

We must avoid friction between two types of user, 99.9%er and 0.1%er. We could have separate browsers aimed at each.

One browser should be dead simple, secure, and streamlined, aimed at the 99.9%s. Maybe it could be named after a metal.

Another browser, for the 0.1%s, should include technical arcana on screen and have more mutability, perhaps even at the cost of some performance and security. This one could be named after some kind of canid.

unless they need help and get one of those .1 percent to help them out

Airbags have utility for 100% of motor vehicle users.

Well technically that includes planes and motocross also.

There are lots of airbag clothes for motorcyclists now. Apparently they work quite well.

and bicycles!

> It's making a joke of the SSL/TLS padlock, too — what exactly is the padlock supposed to tell me

That's why they're getting rid of it.


The team that implemented this change also talks about how women confuse the padlock with an icon of a purse...


It's a sign that the Chrome team is too large and should be assigned more meaningful work.

Google wants to destroy the URL so the only way to find something will be via Google... They also want to tie your identity to each webpage that you author via a certificate so that all governments can clamp down on fake information or have the opportunity to in the future.

Given the adoption rate of SSL, I imagine the padlock itself will become useless even without Chrome's changes. Does it mean anything if almost every website has it?

The clearly announced intent of at least Mozilla and Google (and I'd assume Apple and Microsoft but I don't pay as much attention) is to focus on highlighting the insecure state because that has much better security UX. Labeling one site you visit today "Not Secure" stands out. With luck it might be enough that you don't type in your credit card details.

That's why Chrome is moving away from showing the padlock to displaying "Not secure" for sites that aren't secure. The padlock will be going away entirely; secure is the default state.

ideally it means no passive snooping

The padlock was already meaningless.

No, the padlock means that you are likely connected to the website that the URL bar shows you. This is useful and should not be discarded because of condescending ideas about "average users". It also has the advantage of being easy to explain.

Some people assign additional meaning to the padlock, which should not be done. It doesn't mean you are talking to your bank, it only means that you are talking to the website shown in the URL bar and that reasonable (simple) checks were performed to make sure that is the case.

I'd suggest we invent something better before we start breaking it.

It started being meaningless thanks to Let's Encrypt. Before it meant you had to show your ID and banking info to a "reputable" corporation for them to make a cert for you. Yes I know I know, not always the case, but...

LE means that the mantra "if it's https then it's a secure and reputable website" is now outdated.

> Before it meant you had to show your ID and banking info to a "reputable" corporation for them to make a cert for you.

No it didn't. Let's Encrypt made free certificates easier to get, but Let's Encrypt doesn't do less verification than some other CAs/some of their products.

> Before it meant you had to show your ID and banking info to a "reputable" corporation for them to make a cert for you.

No, it didn't. DV certs never meant that (EV certs did and still do, but LE doesn't offer EV and EV isn't and never was necessary for the padlock.)

are you that Comodo guy?

How do you verify the authority owns the content? For instance: AMP urls serve content from a different authority than the one that produces the content.

Google wants to introduce Web Packaging to solve this: https://youtu.be/pr5cIRruBsc?t=543

Yikes. So long first party serving.

Yo, this is... going too far, c'mon now.

While sure, www seems odd now, it's still a subdomain and we're inching into territory of obscuring things that matter for small gains in end-user perception that aren't _that_ impactful.

Yeah this is weird. Which users are bothered by the leading www enough to justify messing with semantics?

My bet is this decision is driven somewhere by marketing morons who want 'www.google' (presumably a domain that will at some point exist, the TLD already does) to render as simply 'google'

Technically there's no reason why a TLD can't have an A record. If they own "google." they can point it to whatever address and serve whatever they want.

Until recently, http://ai did exactly this. They also had MX records, so n@ai was a valid email address.

That link works fine for me. It doesn't for you?

Dotless domains are prohibited for new gTLDs:


There are even examples of TLDs that do resolve.

See: http://to. (it redirects to an advertising/malware site for me, so be warned)

Building on that point, everyone I ever meet who's not technical is www-by-default, because they were trained that way...

Yeah, even radio ads.

don't forget the http://

Or even better, http:\\. I think non-techies think that saying "backslash" makes it more correct...

Ah, the ol' h-t-t-p-colon-slash-slash-slash-dot-dot-org


I've heard people say "backslash" when they insist on reading out the whole URL with protocol, which I'm pretty sure is the wrong slash, I honestly don't know, but I've never understood why people felt the need to say it at all. Do they type the protocol into the address bar when they visit sites?

I'm guessing it's related to mobile. Limited real estate means removing www gives you 3 more characters on the end of the url.

Pedantic hat on: 4, because "www." includes the dot.

Proportional fonts, so the '.' doesn't get you a full character.

Are you saying each w is netting more than a full character?

No, they're saying that `.` gets less than a full character.



Pretty similar to 4 average characters, really.

My guess is they’re patiently trying to train users to prepare them for a post-URL era. I don’t remember where, but I recall hearing that Google was trying to replace the standard. Sort of like how Apple has had to retrain people to not think so much in terms of files but in terms of apps.

Just a guess though.

This is not only obscuring. This is plain breaking tons of stuff for people.

Can I ask what's broken? I understand it's displayed incorrectly in the worst cases and obscurely in most, but broken?

1. Org hosts physical web server at www.example.org.

2. Google directs user to www.example.org.

3. User sees url as example.org and notes it down.

4. User needs to visit example.org again, but for some reason it doesn't work.

5. User goes to coworker who shows him that example.org does in fact work (hidden www).

6. Endless confusion ensues.

This is a bad UX decision on Google's part (on top of it breaking published standards).

This is going to be a problem whether or not Chrome changes www.example.org to example.org. There is a _very_ non-trivial chance the person was going to write down example.org anyway.

But why add to the confusion by hiding important information?

If in most users' minds "www.example.com" is the same as "example.com", then "example.com" is less confusing, because they have probably never heard of the word "subdomain".

Have "most users" been measured?

Perhaps they should be taught.

Otherwise let's move on to making nuclear reactors less confusing.

My admittedly anecdotal evidence suggests that they do not. From my other comment: I worked as tech support for a large org (300+ users) most of my career and have dealt with most types of users. I've only seen them interact with the address bar in one of two ways: Explorer shortcuts on the desktop/browser bookmarks (few) or sticky notes on the monitor or keyboard (many).

If you count only the people who require tech support then of course you're only going to see the people who require tech support. But they're not the only users.

Developers and techies are users too and they're much less likely to call tech support in general.

This harms them far more than it helps the people who need help.

And this move was made under the guise of improving ux for normal users and not users who know how subdomains work. What's the point you're trying to make?

As a user who's harmed by this, how do I disable it?

My parents write the exact url for many bill paying sites. I would bet a lot people do the same.

Well, I understood it did not only display differently, but make the actual request to the stripped URL.

I don't know.

Nowhere in the bug report is that stated, nor is that the case, it seems

Why does www matter?

Same exact reason that api.domain.com or beta.domain.com matters - you're on a subdomain, not the root domain. That the internet and world at large made www the "kind of" root domain in many cases is an unfortunate thing, but I've not seen a modern server configuration that doesn't handle this case by default.

But when using HTTP(S), you nearly always expect the "www." domain and the root domain to host the same content. It's very, very rare that this isn't the case. Chrome appears to be hiding "m.", too, which is unfortunate (as would api. or beta. being hidden), but "www." is such boilerplate for HTTP at this point that I don't think it matters whether it's displayed or not.

I think it matters, quite a bit actually. My expectation as a user is that the URL I see in the bar is the URL of the site I'm visiting. If it's not accurate, then why show it at all?

Because the second-level domain is far more important than the subdomain. www.example.com and example.com usually have the same contents, but are always controlled by the same group.

It's not true that the second level and third level are always controlled by the same group (most of .uk for example). You can put an SOA record anywhere.

Are they not respecting the public suffix list?

But the entity that controls the second level domain always controls all of its subdomains, not just www. Historically www has been a special case, but that's not a requirement and that's on the decline.

It seems like the confusion this change causes negates any benefit it could have.

Exactly; wish I had added that to my original point.

>If it's not accurate, then why show it at all?

Don't give them any ideas. They've been moving towards this path for a long time. Full paths being hidden from URLs could be up next.

> It's very, very rare that this isn't the case

Those tend to be the cases when I care very much.

If it doesn't matter, why even remove it?

Because it's added visual noise that 99% of end users don't care about and don't need to care about.

This is just going in a circle. If they don't care about it then it makes no difference. A few characters in an address bar that they pay no attention to is not significant noise. Meanwhile the people that do care have less information.

They don't need to care about checking whether it's there or not or seeing it at all. But in a general sense, users do care about reduced clutter and aesthetics, and this is a way of improving aesthetics and creating a more consistent URL bar. It's a tiny bit more consistency and a tiny bit less clutter, but it's not nothing.

The users who truly care can just click the URL bar. (Someone said you also have to press the left arrow, which if true seems like a bad decision. I would agree the full URL should always be visible whenever the cursor focus is in the URL bar.)

Quoted from a comment in the linked site:

> How will you distinguish http://www.pool.ntp.org vs http://pool.ntp.org ?

> One takes you to the website about the project, the other goes to a random ntp server.

This is so obvious, it's confusing that it's even a question. Hiding www doesn't make things simpler, it just hides complexity.

I currently have a test domain for junk: example.com is broken but www.example.com works fine.

I come across enough sites where one or the other is broken that I'd call it important.

If the browser uses some technique to detect that www.domain.com is functionally identical to domain.com for a given domain, then I don't see a serious problem with this. But if they are short of that certainty, they're obscuring a critical part of the URL, and harming usability (e.g., if I want to jot down a site's URL for later use, I might get something unexpected).

That's not the case. I run a server that does not respond to "www." If I enter "www.myserver.com" into the address bar, I get a DNS lookup failure, but the address bar is now showing "myserver.com." That's damn confusing, and this is an idiotic default on Chrome's part.

Likewise, suppose someone notes that their bank only works through www.bank.com. But the URL bar says bank.com. If they try to type in bank.com, it doesn't work.

But hey, now we don't have to see www which has been around forever and is a surprise to no one!

Why would you want to do that?

That seems like an edge case worth submitting a separate bug for.

Think public suffixes! The list is long, so this is definitely not an edge case.

You can write a small script to look up all PSL domains and check whether the www and apex domains have the same DNS records.

Our implementation (we control 15 or so) does not have a www subdomain for the domains. Anybody can register the www subdomains.

It's not an edge case, it's a specific customization required to support Chrome browsers only.

Funny enough, I visited a site for some task or other just yesterday for which I just typed the base domain, and got ... nothing. It still required the "www." prefix -- no redirect, "ANAME" DNS-side hacks, VIPs or anything in place to make the base domain a reachable URL. This new Chrome change, if it's truly doing naive subdomain hiding, would be a really bad UX for sites like that.

From one of the comments there:

http://www.pool.ntp.org vs http://pool.ntp.org ?

One takes you to the website about the project, the other goes to a random ntp server.

Well, browsers already support a better form of this feature. On the server you set up a redirect to your preferred domain: you redirect https://www.example.com to https://example.com, or vice versa.
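For example, a minimal nginx sketch of such a canonical redirect (assuming nginx, with example.com as a placeholder; TLS certificate directives omitted):

```
# Send any request for the bare apex to the canonical www host,
# preserving the path and query string.
server {
    listen 80;
    listen 443 ssl;
    server_name example.com;
    return 301 https://www.example.com$request_uri;
}
```

A permanent 301 also lets search engines and bookmarks converge on the one canonical host, which is the behavior Chrome's elision merely pretends exists.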

> If the browser uses some technique to detect that www.domain.com is functionally identical to domain.com for a given domain, then I don't see a serious problem with this.

Say, for example, if the canonical URL doesn't have a "www" in it?

If you click into the address bar to copy/paste, the full URL will come back.

"If you click into the address bar to copy/paste, the full URL will come back."

Which is also terrible behavior. Unwanted characters hidden into your paste buffer is at best unexpected and capricious behavior and at worst a source of serious, possibly catastrophic consequences (depending on what, and where, you are pasting).

How soon until a doctored up paste buffer contains, by design, a newline character ? I'm sure there must be some use-case that (appears to) call for this ...

Thank you for validating that I'm not the only person who hates Chrome's horrible clipboard behavior. When I highlight something and copy it to my clipboard I expect exactly what I've highlighted to be on my clipboard-- not some editorialized version of it.

On a Mac it requires two clicks.

First click gives you a suggested list of URLs and related searches.

Second click gives you the full domain name.

Lest anyone think that Chrome is being innovative here, this is Safari's default behavior for when the URL bar isn't focused. When you click on the URL bar, the subdomain, protocol, and path all appear.

Which drives me crazy, because my users send me screenshots in support requests. Now I have to spend even more time explaining to them how to click/copy/paste the URL. It was a terrible UX choice when Safari did it, and it's a terrible UX choice now that Chrome has done it.

Fortunately, it's trivial to toggle this setting in the Safari preferences--and then the full URL will appear by default.

A much saner behavior is to hide the URL entirely until focused, not to show a secretly mangled version.

Safari shows only the domain name (minus "www."), so Google News is shown as "news.google.com" until you click to see the full URL "https://news.google.com/?hl=en-US&gl=US&ceid=US:en".

Suddenly "www.com"'s value has skyrocketed in the eyes of scammers. How about:

* login.<target_site>.www.com -> login.<target_site>.com

* members.<target_site>.www.com -> members.<target_site>.com

Even some carefully chosen <target_site>.www.com's will now be valuable:

* login.www.<target_site>.www.com -> login.<target_site>.com

What a stupid idea...

That's just a bug which I'm sure will be fixed in the next release.

For this to actually help scammers (after the bug is fixed) they'd need to own www.example.com but not example.com, which is unlikely to say the least.

Yes, it is a bug, but until it's fixed it's a potential attack vector.

Dear Google, just do syntax highlighting. Make the subdomain gray. You can even color https: green and http: red. But don't hide them. I really think syntax highlighting will accomplish the same thing for less technical users.

Grey is usually understood as greyed out: not selectable, non-functional and therefore counter-intuitive as well.

Why does this matter? Users don't care, and it's easier to remember/understand that all websites are just "x.com" rather than sometimes being "www.x.com". If you have some server/troubleshooting/network/dev problem with it, the missing info should be moved to developer tools.

This is just removing data that is useless and confusing to 99.9% of users - whats the problem?

Because they are on www. and not *.

What happens when you copy and paste that URL?

Now every single website that wants to support Chrome needs to ensure that https://foo.com is always redirected to https://www.foo.com, or at least works as if it's www. It doesn't matter that most websites already do this, it's not standard, and represents Google breaking standards because they are big enough to do their own thing.

It's just one of the 1000s of papercuts that google is inflicting to keep users from switching web browser.

When you copy and paste, it has the whole URL - just like when you copy and paste now it will include http(s)://

What if you visually copy and paste? Parent is right.

One more reason to set up a redirect from or to www.

Many MANY legacy sites serve different information at those two domains.

And this is wrong.

Maybe you have a niche target and aren't looking for mass adoption. It's not "wrong", it's just not normally the way things are done.

the "www." still gets copied when you copy the full address bar just like the "http(s)://" before 69.

i’m a user too

This reminds me of Windows hiding file extensions by default. So annoying... now you don't know what you're dealing with, it's harder to modify, etc.

Seriously? This is such a bad change, I hope they revert this patch in the next update. There's a world of difference between the two and this is going to cause an ecosystem nightmare.

> This is a dumb change. No part of a domain should be considered "trivial". As an ISP, we often have to go to great lengths to teach users that "www.domain.com" and "domain.com" are two different domains...

Which ISPs even teach their users anything these days? Why the heck do we want to go back to that?

Time for a modicum of historical perspective.

If you care about usability this is clearly an improvement. This is part of a long-running industry trend -- Safari does this too -- to improve the usability of the Internet and technology in general.

At EVERY STEP in that journey there has been whining and griping from the more technically advanced folks (like all of us on this forum). They zero in on the negatives, the tradeoffs that come with simplification. They don't see the positives because they're technically advanced enough and don't benefit from simplification (or so they think).

Then, after the griping whines (sic) down, they realize the world didn't end and the downsides really weren't that bad, and we move forward towards a better, simpler, more usable web.

Like, who actually runs separate HTTP servers on example.com and www.example.com anyway? Everyone is hyperventilating over contrived "the principle of it all" examples. Bottom line, Apple & Google are putting usability above technical pedantry. That's the right priority for mass market technology products.

Two prominent examples noted in the bug: https://citibank.com.sg and https://www.citibank.com.sg are different, http://www.pool.ntp.org and http://pool.ntp.org are different.

Like, big companies run separate servers on the two domains.

That's beside the fact that if you have www.example.www.example.com, it rewrites to example.example.com.

The first citibank url doesn’t load for me, so it is site vs not site rather than two different sites, and both those ntp urls are redirecting me to https://www.ntppool.org/en/ - perhaps there’s less substance than meets the eye to those complaints.

No, actually, there's a really good complaint here -- if you're hiding "www", then the two Citibank URLs look the same when they are actually not in Chrome, and the user will be confused when typing in the URL that they've visited many times before and not actually being able to visit it because now Chrome obfuscates the "www" part.

user1 - https://citibank.com.sg doesn't work for me

user2 - it's fine, here is a screenshot of it working (while showing "beautified" https://www.citibank.com.sg)

how is that not confusing?

I think it's more on the website owners to fix their sites, users expect domain.com to be the same/auto-redirected to www.domain.com or vice versa. I think it's a good thing what Chrome is doing, it will push website owners to correctly set up their domain redirects and in the end, lessen end-user confusion.

Coming back to this a couple of weeks later, they appear to have changed their site, and https://citibank.com.sg now redirects to https://www.citibank.com.sg/

No, users expect to type what they see in the address bar, and get the same result every time.

Can we even call them address bars anymore?

And blaming website owners for working within the bounds of published standards is ridiculous and you know it.

If I asked any non-technical person under the age of 20 the difference between www.google.com and google.com, they probably couldn't tell me. Users expect www.example.com to equal example.com. If a website isn't redirecting one to the other, they are doing something incredibly anti-user, and it is a good thing that google/apple are forcing them to do it differently.

This is not about users knowing the difference between www and no www on a site that redirects one to the other, but about users who don't know the difference on a site that doesn't. See my comment https://news.ycombinator.com/item?id=17930243 for an illustration of one of the issues that may occur.

> Users expect www.example.com to equal example.com.

No, they don't. They expect that the address they write down works when they type it back in the address bar, though.

> If a website isn't redirecting one to the other, they, are doing something that is incredibly anti-user and it is a good thing that google/apple are forcing them to do it different.

You keep claiming this without any sort of evidence or backing. This may be anecdotal, but I worked in tech support for a large org (300+ users) for most of my career and have dealt with most types of users. I've only seen them interact with the address bar in one of two ways: explorer shortcuts on the desktop/browser bookmarks (few) or sticky notes on the monitor or keyboard (many). I'd be willing to bet my next paycheck that not even 5% of them are aware that google redirects to www.

At least for those users, none of them will benefit from chrome's mishandling of www. Many of them will suffer from it. I'd be willing to stand corrected with a proper study though.

Also, subdomains aren't only used to make http urls pretty, they are intended to be used to refer to actual physical hardware hosts that belong to a given domain. All published standards I know of are written with this in mind. None of them mandate that www be synonymous to the parent domain, not even those responsible for web technologies. Organizations that follow standards are not at fault for following standards.

If having two different hosts on www and the parent domain was truly "incredibly anti-user" (a dubious claim), let google introduce an RFC at the relevant standards bodies and have it go through proper scrutiny first.

> If I asked any non-technical person under the age of 20 ...

Which is why technical specifications aren't written or maintained by non-technical people.

I think this is a fair comment but Chrome has been doing this a lot lately. They'll make a change and developers must scramble to fix their websites.

It's the same with autocomplete earlier this year. One day Google decides to ignore autocomplete="off" and all hell breaks loose.

Interesting to note, they have reverted this change. Google now respects autocomplete="off" in some scenarios (i.e. when autofill is not triggered via name attribute).

Note: autocomplete !== autofill

If this is the worst example anyone can come up with, debugging a misconfigured site while relying exclusively on screenshots of beautified URLs, then I think it proves my point.

There will always be tradeoffs in advancing usability. This is objectively a small one. The problem is the unstated lack of appreciation for the value of usability improvements, because it's usually a more technically sophisticated person criticizing it who's comfortable with the way things have been. If you care about usability, that is an immensely net positive gain.

How is having to "just know" that you have to type www to get that page to load, despite it being presented without www a step forward for usability for technically unsophisticated people? It just seems confusing to me. I get that it looks "cleaner" but I am having a hard time figuring out how it makes anyone's life easier, or how it actually makes the web more usable.

Also noted in the bug:

m.tumblr.com IS NOT a mobile variant of tumblr.com

I kinda wished I had that blog now so I could put a proof of concept up to show why this is a very bad idea from a phishing perspective.

Just because a major bank and government organization use an unusual domain configuration, it doesn't mean that it is good practice.

And just because it's not a good practice doesn't mean that making both www and non www version look the same to the user and having it "just randomly not load" is a step forward for usability.

but showing an incorrect url is a bad practice

So citibank and NTP will need to change their domain structure. That's okay: those are confusing structures.

This is a tradeoff -- continuing to support already-confusing differences is not worth the loss of the ease-of-use gain referenced in the grandparent.

It seems like the Internet is moving more to a "centralized" design where certain actors have decided "well, here's how we're going to do things now, deal with it".

The golden age of the Internet is already dead, guys. We're unfortunately over the hump.


This change goes far beyond merely hiding the "www." prefix. I'm not making up these examples:

1. "m.tumblr.com " and "tumblr.com " are BOTH displayed as "tumblr.com" even though they're literally different sites.

2. "www.example.www.example.com" displays as "example.example.com" which means all "www" subdomains, whether leading or not are being stripped out.

3. In the extreme case, "www.m.www.m.example.com" shows up as "example.com" which is pretty misleading.
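For illustration, the reported elision behaves as if every "www" or "m" label is dropped, wherever it appears; a rough Python sketch of that (buggy) logic, which is my approximation of the observed behavior and not Chromium's actual code:

```python
def elide_trivial_labels(host):
    """Approximate the display elision reported in Chrome 69: drop every
    label equal to "www" or "m", not just a leading one.  This is a sketch
    of the *reported* behavior, not Chromium's implementation."""
    labels = host.split(".")
    return ".".join(label for label in labels if label not in ("www", "m"))
```

This reproduces all three examples above, including the extreme "www.m.www.m.example.com" collapsing to "example.com".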

Usually, the Chrome team is very thoughtful about decisions that impact security. I'm surprised this was released in such a half-baked state. I hope this is not an indication of how Google's plan to "kill the URL" will work out.

If true, that's seriously concerning from a "detecting visual-similarity domain hijacking" security perspective.

FWIW the Chromium team agrees that 2 and 3 are bugs and will fix them.[1]

1 is by design though. Remember that they are only partially hidden; when you focus on the address bar it reveals the true address.

[1] https://bugs.chromium.org/p/chromium/issues/detail?id=881410...

My understanding is that it's not when you focus it but when you attempt to edit it, either via double clicking or by moving the cursor.

These types of usability steps disempower the user from having control. It doesn't surface what's actually happening and tries to "fix mistakes" with blunt overly-generalized presumptions.

This isn't the first time it's happened.

Take the https-everywhere changes. If you explicitly type something like "http://example.com.:80/" but the browser knows about an SSL cert for the domain, it will attempt to shuttle you off to https, failing to do the SSL handshake because of course it's port 80. Adding the protocol and the port isn't enough of a hint that I know what I'm trying to do.

What's worse, this is a domain-wide setting. If you have say, local.example.com, the browser will try to protect you again.

You have to go to an obscure preference and make the browser forget about this on a domain-by-domain basis, which will reset the next time it gets a wildcard SSL cert for the domain.

This thing is in the same vein; it's the new Ctrl-Q of the browser world: a feature that, due to some dogmatic ideology, doesn't have a "knock off the bullshit" setting.

Somehow, irreversibly child-proofing software by making it not do what you tell it and hiding important details has become fashionable. It's nonsense and needs to stop.

You begin by criticizing those who would over-generalize, and then proceed to over-generalize by complaining about some SSL stuff. Again this is "the principle of it all" argumentation that doesn't cite any actual concrete real world issues with the change at hand.

I haven't used it yet so I can't cite any actual concrete real world problems it solves either.

Are you claiming an attitude of hiding important technical details and creating dysfunctional smart systems in the name of usability doesn't exist? Or that these are unrelated things and the narrative arc isn't there?

There's numerous other examples, such as the gtk3 file-chooser dialog which hid a number of important controls in the name of usability.

What about mobile vendors, to make their devices friendly, remove a number of android customizations such as the ability to disable ambient display?

What about sites like reddit who removed a number of features such as the ability to edit a comment on their mobile site as a UX improvement?

Or what about the laptop vendors who remove keys such as Escape?

It appears that all technologies are slowly tending towards an aspirational goal of being designed for illiterate toddlers.

Luckily I know where this comes from.

Illiterate toddlers, as it turns out, use lots of software on tablets and smartphones. Looking at usage metrics and presuming that everyone is a competent adult, efforts at user-error reduction tend towards humans with diapers and pacifiers as they make the most errors and any demographics statistics of those errors will false positive as their parents, since the infants don't have their own devices.

Why do you think bright colors have better click-through rates for ads, or using cartoon mascots or smiling men and women of child-rearing age increases inbound traffic and is seen as an important branding strategy for user engagement? Could it be toddlers tapping?

Maybe that solves the problem of why these boosting effects only appear on mobile devices since 3-year olds can't operate a mouse.

These trends give rise to design rules and "insights" which have a contagion effect on all software, just like the soft keyboard on the iphone led to the removal of physical keyboards from effectively all smartphones. Until every complex tool is dumbed down enough to be mastered by those who haven't learned primary colors or geometric shapes yet, this insanity will continue.

Look at Youtube's recent redesign moves for example. They recommend the video you watched yesterday again and do so for weeks. The only people that want that are under the age of 6 who watch videos on loops. I'm confident they have strong empirical data mixed with the failure to properly segment users to back this decision up. Toddlers are controlling the direction of software and it needs to stop.

What exactly is the usability improvement from hiding part of the domain name? Maybe we should be hiding ".com" because that's trivial too? Better yet, why show "google" at all if from the page it's clear you're on Google? Might as well just fullscreen the content pane and be done with it.

> What exactly is the usability improvement from hiding part of the domain name?

Quite simply the www subdomain is confusing and unnecessary. See comment from the ISP admin I cited re: user training.

> Maybe we should be hiding ".com" because that's trivial too? Better yet, why show "google" at all if from the page it's clear you're on Google?

Those aren't really serious counterexamples. ".com" is obviously not trivial; there are plenty of TLD variants in use. It is approximately 0.0000000001% as common to run separate HTTP servers on example.com and www.example.com. And clearly different domains can spoof one another's content so that's not a way to be "clear" you're on google.com, whereas this is not generally an issue with subdomains.

Again your reaction is just sort of knee-jerk exaggerated resistance to change, not actual real world problem cases.

> Quite simply the www subdomain is confusing and unnecessary.

Sometimes it's unnecessary. How is it confusing? Millions of non-sophisticated users became sophisticated users typing it, millions more type it every day. It doesn't seem prima facie more confusing than a pronoun or other oft-repeated article. Consider the beginning of my last sentence in this paragraph -- would you consider the "It" confusing, even though it's not strictly necessary?

> Again your reaction is just sort of knee-jerk exaggerated resistance to change

Perhaps your reaction is knee-jerk teleology of change as progress?

As a suggestion: maybe spend less time characterizing the approach of people that you disagree with on this topic, and more time articulating actual arguments ("the www subdomain is confusing and unnecessary" counts, even though it's arguably not particularly strong), unless you'd eventually prefer it when people make the discussion partly about the shortcomings of your approach, which are far more glaring than you've clearly spent time considering.

What is the usability improvement of www?

That example.com and www.example.com can go to different places? Why would you want to force anyone to run web traffic on their bare domain?

mail goes to mail.example.com

voip goes to voip.example.com

vpn goes to vpn.example.com

Why should www be any different?

WWW was absolutely revolutionary.

> They don't see the positives because they're technically advanced enough and don't benefit from simplification (or so they think).

I'm noticing despite the invocation of vague concepts of progress and usability... you haven't articulated any particular case for how this represents either. No model for why it's simpler or more usable.

"Safari does this too" or imprecise aspersions about the supposed "whining and griping from the more technically advanced folks" doesn't really cut it.

I could guess that what you mean is "oh, if you can omit something and yet it's implicitly understood, obviously that's a simplification," though that's obviously a guess. If I've misunderstood, well, there's counterexample #1, to get a little meta. If I've managed to guess correctly, the broad topic of language and notation is a fairly rich well as to the potential for ambiguity or outright miscommunicated meaning when things are moved from consistent and explicitly denoted syntax to implicit syntax. Or examples for when the implicitly understood isn't particularly burdensome to use or even require in some cases.

"the world didn't end" .... lots of bad ideas that make things marginally worse don't end the world.

"the principle of it all" Again, this is a pretty vague and imprecise charge. Do you understand what the particular principles people are registering their objection on? If so, why not name them and respond?

> I'm noticing despite the invocation of vague concepts of progress and usability... you haven't articulated any particular case for how this represents either.

I literally began my comment by citing this:

‘As an ISP, we often have to go to great lengths to teach users that "www.domain.com" and "domain.com" are two different domains...’

It takes only a very small amount of thought and empathy with the average user to understand how the extraneous www prefix can be confusing. It can lead to failures like thinking you have to use it with every website. It enables fraud by making “wwwexample.com” look more normal. Etc.

Your comment is a textbook example of “the principle of it all” argumentation. You cite no concrete examples of problems caused by hiding www from the UI. Not that we should expect none, but the right conversation to have is whether the usability benefits outweigh them. Not technical pedantry or “vague and imprecise” warnings about what may come if we let this pass.

> I literally began my comment by citing this:

That comment seems to be a rebuttal to the point you're attempting to make. It'd seem more apt to say you brought it in to interrogate it, rather than to say you "cited" it, and beyond rhetorical frustration ("why would you do this?"), it's not clear to me that you engaged it at all.

> It takes only a very small amount of thought and empathy with the average user to understand how the extraneous www prefix can be confusing.

It takes only a very small amount of thought and empathy with the average speaker to understand how the article at the beginning of this sentence is functionally extraneous (and is even optional in informal speech), and yet isn't particularly burdensome to use.

Perhaps the thing you're claiming is prima facie obvious with "only a very small amount of thought and empathy" is instead an unexamined assumption on your part and reflects assumptions about the average user that you have no particular claim to over anyone else in this thread.

> It can lead to failures like thinking you have to use it with every website.

"Failure" is a curious term here. The overwhelmingly common "failure" of someone adding it is comparable to the "failure" of forgoing a contraction for its full expansion. Or the article example I used above.

It's technically possible, I suppose, that a www|m.domain.tld record will simply not exist. That's a reflection of the reality that www|m.domain.tld and domain.tld don't actually resolve to the same server, and pretending they do breaks DNS. And not only is it a good bet that the failure we're worried about is more common than the failure you're worried about, the sensible way to address the potential failure case you're concerned about would be to allow an implicit redirect reflected in the URL to take place only if the www record does not exist. That'd be the user agent being helpful instead of making assumptions that break DNS.
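That suggested user-agent behavior could be sketched roughly like this in Python (the injectable `resolver` parameter is just for illustration; real browsers resolve hosts through much richer machinery):

```python
import socket

def resolve_with_fallback(host, resolver=socket.gethostbyname):
    """Sketch of the suggested behavior: honor the host the user typed,
    and only fall back to the bare domain when the "www." name genuinely
    has no DNS record.  The returned host is what the UI would display."""
    try:
        return host, resolver(host)
    except OSError:
        if host.startswith("www."):
            # The typed "www" record doesn't exist: try the bare domain
            # and reflect the fallback in the displayed host.
            bare = host[len("www."):]
            return bare, resolver(bare)
        raise
```

With a stub resolver where only "example.com" exists, `resolve_with_fallback("www.example.com", stub)` falls back to `("example.com", <ip>)`, while a host that resolves on its own is left untouched.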

> It enables fraud by making “wwwexample.com” look more normal.

This is half a worthwhile point. But only half a point because unless one goes whole hog in eliminating subdomains entirely, you can't really take out example.com.internet2.ru, and even if you did, there's also example-internet2.com, so this is part of a class of problems prefix elimination can't solve, which is a sign that maybe it's not worth it if there are tradeoffs (and there are).

> Your comment is a textbook example of “the principle of it all” argumentation.

You keep using this phrase. It sounds like what you mean is "the people I disagree with don't really have reasons they're just attached to some convention that doesn't matter because reasons." If there's a more precise meaning, try rephrasing.

> You cite no concrete examples of problems caused by hiding www from the UI.

Since your comment didn't contain any clear criticisms of the recent state of things, it seemed best to see if I could elicit those first.

Also, the most prominent concrete problem examples of how this breaks DNS weren't exactly hiding if you read the linked issue.

On a deeper level, though -- and this is an answer to your interrogation of the comment you brought into the thread -- this also handicaps people's ability to actually learn by observation how domains, subdomains, and other aspects of URLs work. Presumably your natural response to this would be to appeal to the tastes of the average user and say they don't care about such things, that it's only a concern for technically advanced users, and "don't make the user think." Spotting you the accuracy of that model of an average user (which I've yet to see a comprehensive case for), perhaps some of those things are true, and yet, as a combined package, the conclusions it leads to are often wrong. Why?

"Don't make me think" is a starting place for good UX. Not the end. The next principle for really great software might be best articulated as "make affordances for optionally advancing use." Learning how domains and subdomains work isn't required for anyone to use the web -- it never has been, because people have always been offered hyperlinks and search boxes from starting points. But the URL bar offers a real affordance for starting to unpack details of how URLs work (including domains and subdomains). An "average user" may never care to start with, but this isn't advanced tech; it's accessible to anyone who can learn how to parse the parts of a physical address, not by nerdy study but simply by incidental observation... all while not requiring people who may tune it out entirely to make any more effort than they might with the controversial change under discussion. Casting subdomain (or path) details of that language into an implicit and ambiguous new convention makes it less likely they'll pick it up. So even assuming this change can be made w/o breaking DNS -- which doesn't appear to have been addressed -- it's removing an affordance into a simple and relevant (if linguistic) tool for navigation/orientation.

So there's your harms to consider.

Cities and states are superfluous in physically mailed addresses these days if all you want is for your letter to arrive. Sometimes it's convenient to simply give zip codes in some exchanges of address info (or to collapse the whole thing into a bar code). Would you suggest that it's harmful or confusing to continue to have cities and states as an allowable convention in addresses? If some postal/shipping service mandated the convention of leaving out cities/states, could you see that there's a credible case for harm, though it's redundant information?

That's a comparable situation with www.

And beyond the marginal utility of space savings of ~3-4 character widths on a small screen, there's no argument I've seen for a benefit in removing it that stands up to scrutiny.

Are you ok, man? Peering through the pseudo-intellectual smokescreen the only thing approaching a concrete example of a problem is the claim that this UI change "breaks DNS", which is trivially false. No real world examples are cited unfortunately.

And your analogy betrays fundamentally confused thinking. Omitting "www." would be more comparable to omitting "1st floor" for one story building addresses than omitting the city or state. The only address this would "break" is a building with different tenants operating out of "123 ABC St, 1st floor" and "123 ABC St", which is a misconfigured building if you could even find an example of that.

> the claim that this UI change "breaks DNS", which is trivially false. No real world examples are cited unfortunately.

As mentioned in the comment you're replying to here, at least half a dozen such claims with examples are in the comments on the issue/ticket that started this thread, the same one that you even pulled a quote from. They've also been invoked throughout the entire HN discussion. I'm not sure if you missed them, or if you're implying that examples such as singapore banks or m.tumblr.com are simply made up.

> Omitting "www." would be more comparable to omitting "1st floor" for one story building addresses than omitting the city or state.

A building/floor analogy has its own issues, but it deals in enough of the same concepts as city/state/zip that it's serviceable if you prefer it as an avenue.

"1st floor" would indeed be extraneous (though correct) on single floor buildings, so sure, many people might choose to omit it from an address scheme on buildings with single floors. Plausible and not a problem in that case. And of course, people can add it for multi-story buildings where it's not extraneous. Finally, they can even add it on single story buildings if for some reason they're in the habit of using the convention, or if they don't know whether a building has multiple floors but know they want the first, and it's still technically correct and locatable in either case. And that strikes me as a reasonably apt analogy for the state of things before the change under discussion. Not a bad state of affairs really, unless someone wants to make a case that adding "1st floor" when uncertain represents a burden.

Now suppose one or more of the postal/shipping services mandate that "1st floor" is a trivial expression, and will therefore be hidden on all envelopes. Does that seem like a good idea? When an address is displayed for buildings that actually have more than one floor, who will know whether the 1st floor is implied? How will they know the floor wasn't accidentally omitted instead? Does the existence of these questions -- vs the question of whether to add 1st floor or not in the previous state of things -- and any answers there may be really constitute an experience improvement for readers/writers of addresses?

> Peering through the pseudo-intellectual smokescreen

You know, that might be the sort of thing a keen intellect that's cutting through mumbo-jumbo would say, or it might be the sort of thing that someone who's not confident that their engagement with / responses to the arguments in play speak for themselves. Seems like a bit of a gamble about how it'd come off.

That a misconfigured Singaporean banking site is the worst example in the world anyone can come up with is perfect evidence that the apoplectic reactions are unwarranted, like so many over the history of changes like this in our industry. And my comments are restricted to the "www" prefix.

> How will they know the floor wasn't accidentally omitted instead?

This is nonsensical. This www change does not hide subdomains that meaningfully differentiate among "floors". It only concerns the entirely redundant "www" ("'1st floor' for a one story building"). Accidental omission of "www" is a total non-issue... in the real world, where we live.

> That a misconfigured Singaporian banking site is the worst example in the world anyone can come up with is perfect evidence that the apoplectic reactions are unwarranted

The "worst" example? I don't think I turned in a ranking. You asked for concrete examples of a problem. It is one. It's part of an unknown but decidedly non-zero number of examples where the www subdomain meaningfully differentiates hosts in the real world, where we live.

If there's a specific reason this example or others aren't worth considering, that's a bit of goalpost motion, but a more clearly articulated case can be worth it.

> my comments are restricted to the "www" prefix.

I'm glad your comments are. The change under discussion does not appear to be. Per comment #16 under the ticket:

"the domain m.tumblr.com is shown as tumblr.com."

Apparently the policy of identifying some subdomains as "trivial" is not limited to www.

Sort of raises the question -- once a player like Google decides it can designate a subdomain as trivial over its common (but not universal!) redundancy, what guarantees they'll stop at www?

> It only concerns the entirely redundant "www"

Commonly redundant is critically distinct from universally redundant.

And allowing domain holders the possibility of treating them as redundant is a distinct situation from unilaterally imposing it.

It's about as smart as hiding filename extensions, and the results will probably be similar. Increased confusion and more security risks.

I agree with the first sentence. MacOS has hidden extensions for... ever? And MacOS is generally considered the easiest to use and most secure desktop OS. So there you go.

> And MacOS is generally considered the easiest to use and most secure desktop OS.

Yup, that's why most non-technical users and also system administrators prefer it! (Oh wait, that's Windows and Linux)

> If you care about usability this is clearly an improvement. This is part of a long-running industry trend -- Safari does this too -- to improve the usability of the Internet and technology in general.

I am genuinely interested in knowing if this is a usability improvement. Citing "Apple does it too" is not convincing to me.

It seems to me if URLs are meaningless to you, hiding part of them is meaningless to you so why do it? It seems just as plausible that it's only unusual subdomains that confuse users ("Is foobaz.example.com the same entity as example.com?!"), so hiding common ones might only make uncommon ones more confusing.

The behavior of Safari is just as bad and very confusing. Want to copy the URL you see? Nope, you copied https as well.

What you see is not what you get seems counter-intuitive imho.

The comment you are responding to (mine) literally cites this answer:

"As an ISP, we often have to go to great lengths to teach users that 'www.domain.com' and 'domain.com' are two different domains"

It takes only a little bit of thought and empathy with the common user to imagine how this could be confusing. Are you supposed to type in www with every site? Why did it not work when I typed it in? Etc... It's just confusing and unnecessary.

They're not changing whether you have to type it in. They're making it look like you didn't type it in when you did. This adds to the confusion you're complaining about, as the URL bar will look the same when you type in domain.com and it works as it does when you type in www.domain.com and it doesn't work.

Welp I responded to the wrong comment. Sorry for the noise!

> If you care about usability this is clearly an improvement.

This is a false dichotomy. There are tons of ways to improve the implementation without just hacking parts of the URL out of the "omnibar."

You could split the omnibar and/or allow it to be resized so both pieces of information can be shown. You can use differential color highlighting and/or text formatting to convey which parts google considers "trusted." You could implement an "expert mode" button somewhere that turns all of these "improvements" off. You could add an extra field that shows security-critical information not just about the URL but about the resulting connection(s) to the host.

This is hardly exhaustive, which is also a good description for Google's effort on this one.

I appreciate your effort but I have to say, none of those would remotely constitute usability improvements for the average user. Your proposals all involve increasing the complexity of the information presented to the user. This has been the root of bad UI for ages. The harder design decision, and what our industry has slowly gotten better at, is how to reduce information.

If you care about usability this is clearly an improvement.

The cost/benefit doesn't work out. It's a usability improvement, but it comes with a huge cost, where many, many sites have the fundamental function of a URL -- that of an address/specifier across the internet -- basically broken. The person who implemented this change either didn't work out the cost, or decided they didn't care.

Everyone is hyperventilating over contrived "the principle of it all" examples.

How would you feel if primary keys in your database started to change their semantics? How would you feel if your phone started to change the telephone number it dialed? I guess we're kind of alright with the message app editing our messages as we type them. Now we're supposed to adapt to our tools, not the other way around, and insisting on tools doing what we say is "pedantry."

> How would you feel if your phone started to change the telephone number it dialed?

If they replaced the country code with a flag, that would be an improvement. If they replaced the carrier number with the logo of the company that's also not bad. These are UI changes and the one you proposed about the phone numbers are great :)

> If they replaced the country code with a flag, that would be an improvement. If they replaced the carrier number with the logo of the company that's also not bad. These are UI changes and the one you proposed about the phone numbers are great :)

If I were to propose a change, it would be 1-to-1. What happened in Chrome 69 isn't one to one. It's a surjection, which broke lots of URLs. That's the difference between a great UI change and an inconsiderate, idiotic one.
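To make the surjection concrete, here's a minimal sketch (a hypothetical `display_host` helper, not Chrome's actual code) of how stripping "trivial" subdomains maps two distinct hosts to one displayed string:

```python
from urllib.parse import urlsplit

def display_host(url):
    """Hide 'trivial' subdomains the way Chrome 69 does (simplified)."""
    host = urlsplit(url).hostname
    for prefix in ("www.", "m."):
        if host.startswith(prefix):
            host = host[len(prefix):]
    return host

# Two different hosts collapse to one display string -- not 1-to-1:
print(display_host("https://m.tumblr.com/"))  # -> tumblr.com
print(display_host("https://tumblr.com/"))    # -> tumblr.com
```

Since the mapping is many-to-one, there's no way to recover the original host from what the omnibar shows.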

You are overestimating the cost. This is going to help kill the www in the long run.

Just how much is www costing the world? I think compromising the function of URLs is going to be much more than whether or not there's a www in one.

Again you are shamelessly arguing "the principle of it all" without actual real-world examples of problems. Cite some real problems and we can have a more productive discussion. It doesn't help to falsely equate hiding "www." with corrupting phone numbers.

Read the linked bug report. Concrete examples were given. A super glaring one to me is that:

m.tumblr.com IS NOT a mobile version of tumblr.com

If I tried, I could likely come up with a ton more, but so could you if you tried.

> Cite some real problems and we can have a more productive discussion.

Already cited elsewhere.

> It doesn't help to falsely equate hiding "www." with corrupting phone numbers.

It's a valid comparison. Both are specifiers. It's the same sort of shenanigans with poorly planned "abbreviated" phone extension dialing and outside number prefixes in my company's office causing wrong numbers and accidental 911 dialing.

What are the improvements of hiding part of the URL?

>Like, who actually runs separate HTTP servers on example.com and www.example.com anyway?

My university's site doesn't work with example.com but does work with www.example.com. I can see how someone could waste a lot of time trying to solve a nonexistent network issue because typing example.com doesn't work, even though they had seen it working before.

No this is Google (and others) attempting to fix confusion caused by developers making systems that are hard to understand for users. Nothing is stopping developers from redirecting or setting up DNS to make the two domains go to the same place.
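For reference, making the two hostnames land in the same place is a short block in most servers. A minimal nginx sketch (illustrative only, with `example.com` standing in for a real domain):

```nginx
# Redirect www.example.com permanently to the apex domain.
server {
    listen 80;
    server_name www.example.com;
    return 301 $scheme://example.com$request_uri;
}

server {
    listen 80;
    server_name example.com;
    # ... actual site configuration ...
}
```

On the DNS side, pointing `www` at the same host (e.g. via a CNAME) makes both names resolve identically, though a redirect is still needed if you want one canonical address.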

I personally feel like any negative user experience should be addressed by the developers of the website, not the browser.

Meanwhile I'm sure there are people out there who want to maintain a distinction between www.x.y and just x.y.

Exactly, thank you! I think first and foremost the browser should be clear and easy to use for the general user, not an easy-to-debug frontend for devs; if we do that, the browser will be a mess. Devs know where to look to see what the real address is if they get reports of their site not working, and can then fix the issue on their end. It's not like the issue is something you can't change on the server end.

The people who run the sites should do all they can to make the experience as hassle-free as possible. This means no www.domain.com and domain.com going to different places. 99% of users will expect to end up on the same site regardless of whether they add the www or not; to them it's the same as adding the https or not, it should "just work" regardless of what they put in front of the URL. (Of course there will always be some edge cases, but I feel it's worth it for a better end-user experience.)

IDK, just my 2c.

Not sure why you are getting downvotes, but I agree with you here. I think this change is going to benefit both technical and non-technical users. How many websites make the mistake of not handling www and non-www the same? Or of not having a valid certificate for both domains? It's not just a pain for the non-techies.

How is this a usability improvement? What are the positives I'm not seeing?

> Like, who actually runs separate HTTP servers on example.com and www.example.com anyway? Everyone is hyperventilating over contrived "the principle of it all" examples.

I do. Over and over again. Having been doing that for 2 decades, and with a limited budget, I require things like SNI[1] to work. But of course, I could just triple the tech budget...

EDIT: And wrt "Time for a modicum of historical perspective." - please give it to me. I am keen to learn what I did wrong all the time.

[1] https://en.wikipedia.org/wiki/Server_Name_Indication

And how does this break SNI? It's just a UI change.

I agree. I can't remember the last time myself or anybody else actually typed "www." when going to a website.

However, Chrome should handle this intelligently, like it does with local hosts vs. internet hosts. If you manually type "http://" into the address bar (e.g. "http://my-local-domain"), it will not do the usual DNS search and will honor any locally configured domains you may have on your network.

This doesn't seem to be the case now with this 'www' concern, though. With slenk's example of http://www.pool.ntp.org and http://pool.ntp.org, the only way to access the second link properly is to click the link. Typing it in the address bar loads the same website as the first link, so it appears Chrome is automatically adding 'www' or somehow making the request differently, rendering the second link's page inaccessible via the address bar.

Could you have not used a similar argument for AOL Keywords?

Safari's URL masking is doing a terrible disservice to society by making people ignorant. Make the Web more usable by educating people, not by hiding basic information from everyone.

The thing is, there are people who care about usability for whom this is not "clearly" an improvement. If it was clear, we wouldn't be having this gigantic discussion...

Yes, this point is important. We need to get the web/internet as simple as changing channels on a television. That is what the masses are capable of.

I don't want anything about the Internet to be like television, and while I know you're trying to make an analogous statement I think the real threats to freedom on the Internet are coming from entrenched interests who would take what you say literally.

I am being literal. The internet is not about freedom. It's about making money and influencing thought and action.

Realize that the majority (and I'm willing to say probably over 95%) of people today use the internet on their phone. They have a few "apps" which they switch through by swiping left or right (changing channels). They use Facebook, Amazon, their chosen politically aligned news site, Google search, and a few other things. Probably not much more than the 6 channels of TV I had as a kid.

And the Google searches are rarely if ever for anything beyond the first few links or the sponsored content. Really, page 2 and beyond might as well not exist. The users don't go there. Google could just return results from major organizations and users would be fine with it.

I lived through PC era, from start to finish. It took me a long time, but eventually I realized that 99% of the population was never going to understand how computers work. Just the concept of a file and a folder is beyond most people (that's why phones don't have them). Once I gave up on expecting people to learn about computers, computing became much easier. Phones, tablets, and so on, for the masses, the desktop for me. And now I didn't have to go fix peoples screwed up computers - they don't have them anymore.

The importance of a domain name, much less a subdomain, is now irrelevant. A search result, one of the first few, is what matters. You will never get users to understand a URL.

At one time the internet might have been for us hackers, but that time has passed. It's now the domain of the masses.

How does hiding part of the domain make the web any easier to use?

If anything, it's time to declare that it's a bit insane to expect lay people to read right-to-left and left-to-right concurrently and understand all the delimiting punctuation correctly, along with Unicode. It's time to restructure URL rendering to be right-to-left and combine delimiters. Dashes, periods, slashes, subdomains, and hundreds of TLDs have made it too confusing.

https://www.google.com./ should become either https/com/google/www/ OR secure/com/google/www/ OR secure.com.google.www. Browsers can then just always bold the third word?
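A rough sketch of the proposed reordering (purely illustrative; `reordered` is a hypothetical helper that ignores the path component):

```python
from urllib.parse import urlsplit

def reordered(url):
    """Render a URL with the scheme first and domain labels reversed,
    e.g. https://www.google.com./ -> secure/com/google/www/"""
    parts = urlsplit(url)
    labels = parts.hostname.rstrip(".").split(".")
    scheme = "secure" if parts.scheme == "https" else parts.scheme
    return "/".join([scheme] + list(reversed(labels))) + "/"

print(reordered("https://www.google.com./"))  # -> secure/com/google/www/
```

The "third word" (here `google`) is then always the registrable domain for two-label TLD-plus-name hosts, which is what the bolding suggestion relies on.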

I think re-ordering like that would actually benefit technical users more, but it's probably too late to make such a change. I don't think we'd need that change for browsers to put the parts they consider most relevant in bold. That logic is simple enough right now.

Really, we are talking about rendering, right? Not actually changing the protocol itself? (:lock: = lock unicode/emoji = secure HTTP)


I am a strong proponent of the URL being part of the user interface. I should be able to manipulate what I request of a service by modifying the URL. The text in the URL should mean something.

Making the rendering that different from the actual URL would be very confusing, especially if it varied from program to program or within programs. Links would look one way on websites and then have a completely different form in the url bar after clicking them.

That particular rendering choice would also have the downside of being the same for both https://www.google.com/maps/hawaii and https://google.com/www/maps/hawaii

Yeah you might have to double punctuate after the subdomain, BUT it still makes it clear what domain you are on.


And it's not like the browser couldn't render it both ways, either on click or hover.

And how would the client distinguish the part that goes to DNS from the HTTP path?
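With today's syntax, that split is unambiguous and any URL library makes it for you; a quick Python illustration:

```python
from urllib.parse import urlsplit

parts = urlsplit("https://www.google.com/maps/hawaii")
print(parts.hostname)  # www.google.com  -> resolved via DNS
print(parts.path)      # /maps/hawaii    -> sent in the HTTP request
```

Under the all-slashes proposal, `secure/com/google/www/maps/hawaii` gives the client no way to know where the hostname ends and the path begins.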

Why even bother with URL language? It should just be 'Google Search' in the bar and you can see the URL if you want.

I have many websites which I don't let Google index; how would my users access them?

If you're not in Google's index, then it's likely not in Google's interest for you to use Google's product to visit your site, since the company prefers that you use its search rather than type URLs directly or follow links from other sites; hence it wants to kill the URL: https://www.wired.com/story/google-wants-to-kill-the-url/