Google adds experimental setting to hide full URLs in Chrome 85 address bar (androidpolice.com)
869 points by vezycash on June 14, 2020 | 698 comments

While this might be useful for a casual user, hidden URLs are a huge problem for web developers. Asking a client for a screenshot is no longer enough; now I'll have to provide additional instructions on how to copy and paste the full URL when reporting issues.

Not to mention all the possible problems with misconfigured servers, where the www and www-less domains lead to the same website but some script refuses to work on one of them. While that was easy to spot with just a glance at the URL bar, now one has to click around or open devtools.

We are probably at a crossroads where developers need a separate, non-dumbed-down version of the browser.

I'm already annoyed by the hassle it has become to copy a substring of the URL into the clipboard, due to the schemes being hidden. Nothing has been won by hiding http(s):// as well as the www subdomain.

I had a frustrating experience with this only yesterday.

I was trying to search for the word "aquarium", but Chrome kept filling in "https://aquarium.org". I would delete the ".org", leaving only the word "aquarium" in the search bar, which was exactly what I was trying to search for.

Of course "https://" was hidden, so I was actually submitting "https://aquarium", which was not a valid domain, and it took many frustrated clicks and enters to actually google the word that was shown. Absolutely infuriating, as the true state of the search bar was hidden.

Even when the scheme is visible I have problems copying a URL substring. Chrome always wants to select the ENTIRE URL instead of just the subpath I'm double-clicking... endlessly frustrating >:(

Them acting like this is some kind of user experience thing is the most insulting part.

Most users' experience isn't the same as the developer experience.

But it is.

The URL bar is a disaster for end users. It's full of random junk that users can't read so they stop trying, which means they can then be tricked by phishing websites hosted on any domain at all. Research shows about 25% of users don't look at the URL bar at all even when typing in passwords, they navigate purely by sight, so it's impossible to stop them being phished. The human cost of the resulting hacking is significant.

The fact that the software industry has routinely prioritised historical inertia, the needs of web developers, and all kinds of other trivialities over the security of billions of people is embarrassing. I'm glad to see the Chrome team finally get a grip on this and make the URL bar actually useful for people.

> The URL bar is a disaster for end users. [...] 25% of users don't look at the URL bar at all even when typing in passwords

"Side view mirrors are a disaster for drivers. 25% of drivers don't even check them before making a turn." [I'll stop the metaphor here, as I think my point was clear]

This change does exactly nothing to improve security. As for usability, it just puts one more layer of paint over the underlying "complexity" - and we've seen before how well that works (see basically every part of Windows 10 for examples).

As someone who has worked on the front line of the fight against phishing and account takeover in the past, I can assure you and others that you're dead wrong. Making this change was a recommendation I made to the Chrome team years ago because the number of people who would reliably type in their username and password to a site hosted on hacked web servers (supershop.co.hk/account_login.php etc) was just so high. And when those accounts got hacked, scamming and sometimes even extortion would follow.

Your side view mirror metaphor is unfortunately not clear at all. The side view mirror is simple and performs its function correctly as designed. It can't really be improved without totally replacing it with something else like a camera. Now of course not everyone will use the URL bar even if it's redesigned to work correctly. But right now the bar is practically designed to look as intimidating and useless as possible.

Perhaps you're so used to parsing URLs in your head you don't realise it, but URLs are a baroque and absurd design that nobody without training could properly figure out. It's basically random bits of webapp memory and protocols splatted onto the screen in a large variety of different encodings. In a desktop app dumping RAM straight onto the screen would be considered a severe bug. On the web it's tolerated for no good reason beyond history.

To give just one example that has regularly confused people in the past: URLs are read left to right except for the domain name (the important part) which is read right to left. You don't stop reading a domain name at .com, you stop reading it at the third slash or possibly a colon, but that form is rare.
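The confusion is easy to demonstrate with a standard URL parser. A sketch (the hostname `accounts.google.com.evil.example` is an invented phishing-style example, and the "last two labels" rule below is a simplification, since real registrable-domain logic needs the Public Suffix List):

```python
from urllib.parse import urlsplit

# A phishing-style URL: everything before the registrable domain is
# attacker-controlled, yet read left to right it looks like Google.
url = "http://accounts.google.com.evil.example/login?next=/mail"

parts = urlsplit(url)
print(parts.scheme)    # http
print(parts.hostname)  # accounts.google.com.evil.example
print(parts.path)      # /login

# The domain is read right to left: the registrable part is the LAST
# two labels, not the first ones the eye lands on. (Real code would
# consult the Public Suffix List; this is the naive approximation.)
labels = parts.hostname.split(".")
registrable = ".".join(labels[-2:])
print(registrable)     # evil.example
```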

As someone who has had to teach grumpy old high school teachers how to not fall for phishing and mitm attacks, I really can't see the problem here.

The way I used to teach was very simple and very effective: there are 3 parts to a URL - the first part tells you if the connection is secure, the second part tells you who you're connected to and the third part tells you where on that site you are. The first part needs to be httpS, the second part needs to be the site you're expecting and the third you can ignore. They're even shaded differently to make it easier to read. "If you're going to Google and the black part ends with anything but google.com, call IT" made sense to even the oldest and most reluctant people I've had to deal with. The problem was actually getting them to check every time and not forget.

It seems to me that this change will not help people without training, change nothing for people with training, and make sharing links even more confusing for everyone.

Are you saying someone is less likely to get phished on "supershop.co.hk" than on "http://supershop.co.hk/account_login.php", even where the http:// part is replaced with a red padlock and /... is grayed out?

I see only one real solution to phishing: don't let users type passwords manually. WebAuthN and password managers both automatically read the domain and won't try to authenticate on a domain that isn't a perfect match. I've had more success with that than any other anti-phishing measure I've tried deploying (history-based domain trust, explicit trust on first use popup, detecting unicode gaps and domains in credential fields...).
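A minimal sketch of why that works: a password manager fills credentials only on an exact origin match, so a lookalike host never receives them. The function and hosts below are hypothetical; real managers are slightly more permissive (e.g. PSL-based subdomain matching), but the core idea is an exact string comparison that humans never have to perform by eye:

```python
from urllib.parse import urlsplit

def should_autofill(stored_origin: str, current_url: str) -> bool:
    """Fill credentials only if scheme and host match exactly."""
    stored = urlsplit(stored_origin)
    current = urlsplit(current_url)
    return (current.scheme == "https"
            and stored.scheme == current.scheme
            and stored.hostname == current.hostname)

# Exact match: credentials are filled.
assert should_autofill("https://example.com", "https://example.com/login")
# A lookalike domain fails the comparison, so nothing is filled in.
assert not should_autofill("https://example.com",
                           "https://example.com.evil.test/login")
# A downgraded (http) connection also fails.
assert not should_autofill("https://example.com", "http://example.com/login")
```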

Sure, absolutely. People understand domain names, they're found on billboards, adverts, business cards, all over the place. And it's a simple text match. Does the bar say "google.com" or "google.co.uk"? Yes? Then you're on Google. So when it's simple people get used to checking and can be reasonably told they're expected to do it.

The greying out and replacement of padlocks etc, the anti-phishing training, it's all just working around a historical design problem in browsers. There's no need for it to exist. Notably, mobile apps don't have this problem.

> Nothing has been won

Google could make an AMP-only web experience without dissent.

hide the URL bar

javascript -> webasm

hiding all this benefits data collection and advertising. Seems obvious to me.

It is frustrating isn’t it

>Probably we are at the crossroads where developers need separate, not dumbed-down version of browser.

Yes, we are.

Funny enough, Firefox has a 'Developer edition', but that's just the Beta build with some features turned on by default.


Firefox's still an option.

The quality of Firefox on Windows leaves much to be desired.

Due to slow loading, I'm considering whatever they call the Microsoft browser today.

Microsoft doesn't have its own browser today; it's just repackaged Chromium, supporting the same Blink monopoly.

I am honestly surprised Firefox on Windows is slow for you. I don't personally use Windows, but many on here say it works well there since v57.

I have tried to fix this slow loading problem. Happening on 2 separate gaming laptops with SSDs.

Maybe I don't know what to Google.

Try to DuckDuckGo "why is Firefox slow on Google sites" - or just try to guess the answer...

Blink isn’t a monopoly... yet.

For what it's worth I don't experience slow loading with Firefox on Windows, or any notable slowdown in any way.

I definitely do. FF being perceivably slower than Chrome continues to keep me off it.

I only notice this on certain Google sites.

Start Firefox in the profile selector and try using a fresh new profile, or start in safe mode. Some old, forgotten configuration or addon is sometimes the cause. For example, the privacy-enhancing addon ClearURLs disables ETag functionality by default, so all sites using ETags for caching won't be cached. Big loss.

Is this still true in Firefox >= 57?

Not in my experience.

Not in my experience.

Not for serious web development (ditto for Safari): the developer tools are atrocious compared to Chrome's, the battery life is way worse with FF, and while Chrome is a notorious CPU and RAM hog, FF is worse.

That combined with both FF and especially Safari being way behind in terms of standards adoption/development really sucks at the moment. Chrome desperately needs competition.

In my experience FF is not behind Chrome with new features. It's usually on par, or even better. Sure, Safari is annoying. But the developer tools in Firefox are alright. I even find them better when it comes to debugging CSS.

It's odd that you mention how Chrome needs competition but also decry lack of features in other browsers. That is one of the ways they are attempting to monopolize the space.

The standards are meant to be agreed upon by several parties, including both Google and Mozilla. Google implements new features in order to control the way they work, instead of allowing input by all the parties involved. It's always faster for one company to just do whatever they want than for a group of organizations to come to an agreement on what to do. The slower way results in better and more equitable implementation for everyone though.

Why do you say so with such certainty, though? I find it nearly impossible that you never saw someone claiming the opposite. There's even such a response to your own comment so clearly this experience is not universal.

Google does have amazing access to data on how 80-90% of users are using the Internet; for many, Google is the entry point to their Internet experience. Maybe their data is telling them that the URL bar is basically unused?

That's probably because most advanced users turn off all telemetry when possible. And http://localhost is probably ignored in statistics anyway.

I mostly leave telemetry on in products I care about specifically so that my advanced use cases get logged.

That's admirable, but you are a minority within a minority.

Honestly, if most "advanced" users turn off the features that Google uses to gather data to improve UX, it's strong signal UX isn't important enough to "advanced" users for Google to optimize for it.

It doesn't matter if .1% of users turn off their telemetry, their use case wasn't going to be optimized for either way. In fact the Google employees themselves are part of that .1%, they don't need the data to tell them what's important to advanced users.

What? If most "advanced" users turn off telemetry then they want a terrible product?

Then they're valuing other things more than a UX that caters to them.

Yeah, like their privacy? Since when did getting a good product that respects your privacy become an oxymoron?

Automatic metrics are only one tool in a toolbox that includes focus testing and design aesthetic.

But if a whole subset of users exclude themselves from that tool, they're going to get the UX that's only as good as the other tools in the toolbox are capable of building.

You know software with good UX used to exist before telemetry became a thing?

Definitely, and it continues to exist after as well.

But telemetry gives web developers an extremely simple and convenient tool to know what users are actually doing without even inconveniencing the users with explicit questions. I've done web development with a good telemetry set built into a page, and it is extremely informative regarding how users actually use the tool, as opposed to how the UX designers have predicted flow through the tool will be.

To give a concrete example, a user might tell you that configuring permissions is "hard," and sitting with them during over-the-shoulder testing (which is expensive) might tell you a little bit about why. But without even asking the user, page telemetry can tell you that they keep jumping from the permissions configuration page to the page listing all of the resource names, because that's what's slowing them down: the UI didn't give them enough information to configure the resources, because we assumed they knew what the resources were named.

For a browser, anonymized usage stats can tell you whether most users keep all their bookmarks flat at the top of the bookmark bar or deeply nested in multiple subfolders, and that's usually valuable for deciding whether you want to emphasize a flat bar or folder management in the design.

If most power users disable automatic anonymous telemetry and also use deeply nested folders, no one should be surprised if deeply nested folder support doesn't get better.

Yeah, invading people's privacy always makes things easier for everyone else, doesn't it? Doesn't mean people who care about it don't want or deserve quality...

"Deserve" is complicated. To a first approximation that ignores a lot of details... What have they done to "deserve" it? They didn't buy it. They aren't making the process of figuring out what they want particularly easy.

People who don't show up to vote also "deserve" a good government by virtue of being people who have to live in a governed society. It's harder to make one for them if the system for selecting leaders is missing their input, regardless of what they deserve.

Popping out of the government analogy and back to software, power users are also in a position where they are more capable of adjusting their experience to suit their needs. All things being equal, a company with finite resources to develop software should dedicate those resources to assisting the non-power users more often than power users.

While you argue your case, that one has to vocalise if one wants something, well, you are still ignoring the basic want of not having your privacy violated and the fact that you can vocalise something willfully, without it being spied away from you. I'm also extremely suspicious of the suggestion that this is something only power users would want.

You can certainly vocalize something willfully. But the people who don't have to do any vocalization at all and are generating megabytes to gigabytes of data on how the application is used by their mere use of it are going to always have a default stronger voice than people who bother to show up on message boards to voice specific concerns.

I actually agree that if you are willing to ignore privacy concerns and a potentially large part of your userbase, then you can simply send megabytes to gigabytes of telemetry and pretend that is the best you could have done and that you have the best data. I'm simply saying that's not a good idea.

a) It's not a large part of the user base who switches off telemetry and they have the telemetry to know that

b) for being "not a good idea", it's pretty much industry standard now for everything from business software to video games.

> a) It's not a large part of the user base who switches off telemetry and they have the telemetry to know that

So you're claiming that it is typical for software with telemetry support to ignore your choice and still send telemetry about you turning off telemetry? That sounds wrong, but I cannot say I investigated this deeply.

> b) for being "not a good idea", it's pretty much industry standard now for everything from business software to video games.

As I understood the discussion, we were in fact discussing whether this is a good idea and whether it makes sense, so I think it's fair game to comment on it. As for it being an industry standard, that sounds like an overgeneralization. It is certainly not typical of software I use.

> So you're claiming that it is typical for software with telemetry support to ignore your choice and still send telemetry about you turning off telemetry? That sounds wrong, but I cannot say I investigated this deeply.

No; I'm saying missing data leaves holes that can be measured. They know, for example, how many people have downloaded Chrome and how many daily Chrome users they get at google.com (because Chrome will still send a valid UA string if it has telemetry turned off). They can estimate how many users have telemetry turned off from those signals to a pretty decent degree of accuracy; certainly enough to know whether telemetry is telling them about 90% of users or 30%.
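The estimate itself is simple arithmetic; a sketch with invented numbers:

```python
# Hypothetical counts: active users inferred from UA strings at a
# high-traffic property vs. users actually sending telemetry pings.
daily_active_users = 1_000_000       # e.g. counted from requests to google.com
telemetry_reporting_users = 930_000  # counted from telemetry pings

opt_out_fraction = 1 - telemetry_reporting_users / daily_active_users
print(f"{opt_out_fraction:.0%}")  # 7%
```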

For (b), I'm curious what software you use. It's pretty standard in games, online apps, and business software. It's absent in a lot of open-source (mostly because a lot of open-source lacks a centralized vendor who would be willing to pay the cost to collect and interpret that data to improve the software).

Is Chrome's telemetry so invasive that it reports about all URLs visited? Otherwise I don't see how daily Chrome visitors on google.com would be helpful in this estimate.

I avoid online apps, I don't play a lot of games (and if I do, they're not big titles which are likely to have telemetry) and yes, I primarily use FOSS.

> (mostly because a lot of open-source lacks a centralized vendor who would be willing to pay the cost to collect and interpret that data to improve the software).

This is almost surely an element of it, but I think a respect for privacy and a general distaste for telemetry among FOSS users are more important.

But they don't have that signal...

Missing data leaves its own wake. Google has numbers to extrapolate how many turn off usage reporting. They lack automated signal in how the users use the tools.

I do this as well. As an end-user, I actually find some telemetry useful to diagnose things like:

* Apple Watch battery cycle count (not viewable in any UI but is viewable in telemetry logs)

* Clues about why a particular app recently crashed

That would be nice. They should share that data to help others understand the decision they are making. Or at least they could reference the data in their decision making.

There is a trick, at least for Chrome before 85: if you install the Google-made extension "Suspicious Site Reporter", it will show the full URL including the protocol (which you can't even do with flags, so it tampers with something internal, which means they didn't have to hide it at all).

Asking a user to install "suspicious site reporter" in order to send a bug report is going to throw up a few problems.

I highly doubt a non-default extension has extra permissions that aren't available to regular extensions...

Can anyone reproduce this?

The extension ID is literally hard-coded in the scheme-hiding code. https://source.chromium.org/chromium/chromium/src/+/master:c...

The fact that a proprietary extension can get preferential features illustrates that Chromium is open source in name only.

Hardly. If you look at firefox's source, you will find several extensions that are hard-coded for special handling. This is not new.

True, but the distance between Firefox source and Firefox is config+compile.

You cannot compile Chrome. I've heard that Chromium can be compiled and run, but I've never actually seen it, or heard of anyone using it professionally.

I'm not sure what your point is... You thought it was weird that an extension would get preferential treatment, and I pointed out that this is true for Firefox also.

My point is that you are correct.

There are other factors at play which mitigate the preferential treatment, but it's definitely there in Firefox as well.

I looked into it several months ago and it looks like that extension is whitelisted to do it. If you try to repack the crx file and install it, the address bar doesn't get changed.

Developers, at least the experienced ones, never left Firefox. For the new developers who got hooked on Chrome over the last 10-15 years, it's time to move to a developer-friendly browser.

At this point IE becomes more useful.

This is just simply inaccurate. I have 25 years of web dev experience and left Firefox because Chrome's dev tools were far, far superior to those of Firefox.

Blink is the most common browser engine. Firefox has some nice developer tools, but if you don't test in Blink throughout the day then you're just asking for problems.

Blink will remain the most common if that's all devs continue to optimize for. Not a way to change anything for the better.

And it's what devs will continue to optimize for while it continues to be most common.

The goal of most web developers is to make pages users can use, not get mired down in the never-ending browser wars.

It's the most common because devs optimize for it. That's a Catch 22 you can't break out of if you continue to optimize for it, an infinite loop.

Yes. That is network effect.

As the smaller vendor, it's incumbent upon Mozilla to break it. Expecting individual devs to do it collectively when it really isn't in their selfish interests is waiting for a unicorn to appear.

> As the smaller vendor, it's incumbent upon Mozilla to break it.

That's kind of impossible to do for a smaller vendor without wider developer cooperation.

I remember Mozilla only started to breach IE's dominance once devs were so sick of IE that they installed Firefox on their mom's computer despite tons of sites being made for IE.

It's possible for something like it to happen again with Chrome, but less likely since Google's a lot smarter and not too lazy to implement latest tech, so it will sure take longer without some activism and evangelism from web devs.

> expecting individual devs to do it collectively when it really isn't in their selfish interests is waiting for a unicorn to appear

Selfishness is a lot more complicated than people give it credit for. A lot of 'selfless' acts could alternatively be described as selfish, in that they make one feel good. Free software developers already do a lot of work for the wider community, where it would probably be a lot easier to just use the proprietary, already feature-rich counterpart than to develop a libre alternative. But the movement understands that long-term, having as much free software as possible is what will in the end help preserve general-purpose computing in the sea of silos. It takes some discipline, sure, but long-term it's actually in one's selfish self-interest.

But that's the thing, if Google is responsive enough to implementing new technologies and improving their browser, there's no reason for most web devs to advocate for an alternative browser. A (high-quality) monoculture is actually much much easier on most web devs, because it minimizes the number of browsers they have to support for quirks.

> A (high-quality) monoculture is actually much much easier on most web devs, because it minimizes the number of browsers they have to support for quirks.

Short-term, sure. Long-term it opens devs to Google's whims and makes the "open" web barely more open than Apple's AppStore.

But that's short-term vs long-term thinking and I can't deny most would prioritize the short-term. Here's hoping there's still enough idealists, even among web devs to avoid that fate and bring about for the web what GNU did for UNIX in the 80/90s.

GNU did great things for UNIX. It hasn't really demonstrated much utility in the user experience improvement space. The flow there, in general, appears to be that the big, closed source commercial interests devise new approaches for user interface operation and the open source community copies the ones that work.

The only space I can name off the top of my head where open-source architectures have outstripped Windows and Mac in UX is virtual desktops.

If you're talking about the casual computer experience, KDE's still way more customizable than any of the commercial desktop environments out there.

Of course proprietary software has more funding to hire designers and such, but in terms of actual functionality, I'd contest your claim.

If you're talking developer user experience, it's not even a race. The FLOSS ecosystem has a landslide lead here. In fact the whole point of WSL is to try to keep devs on the Windows platform by bringing that experience to Windows more directly.

> If you're talking about the casual computer experience, KDE's still way more customizable than any of the commercial desktop environments out there

Customizability is orthogonal to out-of-the-box UX, the original axis of comparison here. In fact, the two are often at odds.

Same as saying a benevolent tyrant is the best government. Viewed from a certain angle it could arguably be true, yet none of us would trade democracy for it, because we understand that quality and efficiency are not the only factors; they need to be weighed against other ethical and social ones.

The history of biological evolution shows that monocultures invariably fail catastrophically. Diversity is the main way to guard against unpredictable events of the future. Software is not exempt from these general rules I'd presume.

I'd be conservative extrapolating from lessons of government and biology to software engineering principles. Software doesn't change via random mutation and natural selection pressure; most open source projects are benevolent dictatorships of some flavor or other.

Sorry to interrupt the party here but I feel compelled to point out that Firefox's is actually the third most common engine after Blink and WebKit. The web is not and will not be a monoculture as long as the iPhone exists. There's already two ~trillion dollar juggernauts involved.

Blink is a fork of WebKit though. Firefox is the last big non-WebKit browser afaik.

Perhaps Mozilla should never have made breaking changes that pushed people away? The UI change that killed my extensions made me look elsewhere. Chrome's much faster js engine sealed the deal.

I've looked into switching back to Firefox, but what I've found is that they don't let me use my own extensions. I would have to use a beta version of Firefox, submit extensions that only I use to Mozilla for approval, or reinstall my extensions every time I close Firefox. None of these seem like good options to me.

If you're relying on bug reports to find why a page is broken you're in for a bad time because 99% of the time the user isn't going to report anything. They're just going to think your page is broken and stop using it. Use a telemetry and error reporting service like Sentry or Rollbar. These services can strip sensitive data on the client before it gets logged.

Not every bug results in an error being thrown or any other signal that you could automatically detect. From my experience most of them are way more subtle.

While parent’s point is for devs, any kind of support situation falls into the same issues.

Grand-parents/friends/org users not being sure to be on the right site after a redesign, not seeing amazon in the right language, etc. There’s countless of questions that can be solved faster by looking at the URL.

Sentry only capture software exceptions. It doesn't tell you a page is rendering badly in the user's browser.

You can send your own events to Sentry. It's not just for exceptions.

How do you make an event for "rendering looks weird", or "text is unreadable", or...?

Have a "report issue" button which leverages Sentry's (or your own, or some other service's) metadata collection and sends a report to you.
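A hedged sketch of what such a button might collect. The field names here are made up; with the real Sentry SDK you would attach this kind of context when calling `capture_message` or `capture_event`:

```python
import json

def build_issue_report(url: str, user_agent: str, description: str,
                       viewport: tuple[int, int]) -> str:
    """Assemble the context a 'report issue' button would send.

    The point is that the full URL, which the user can no longer see or
    easily copy, is captured automatically instead of being asked for.
    """
    report = {
        "description": description,
        "page_url": url,          # full URL, scheme and all
        "user_agent": user_agent,
        "viewport": {"width": viewport[0], "height": viewport[1]},
    }
    return json.dumps(report)

payload = build_issue_report(
    "https://www.example.com/checkout?step=2",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "text overlaps the buy button",
    (1366, 768),
)
```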

My QA team sends me URLs and screenshots sometimes. Often the first thing I look for on a screenshot is the URL.

How can I tell you don't work in IT? Almost all companies including mine have ONE site to choose from to do any one thing. If it doesn't work, they either ask their colleagues (which only works if it's not the first time someone is using this around them) or create a ticket for IT.

I've been a web developer for about 25 years. I can assure you if something doesn't work the users usually won't raise a ticket. They'll work around the problem. On the occasions when they do raise a ticket it'll usually contain minimal information. Having a telemetry system to correlate it against gives you some information to use to debug the problem. That's helpful.

Understanding that users have more important things to do than spend time on bug reports is an important lesson to learn. If you can gather data without relying on someone whose job is to worry about other things then you will make everyone's life easier.

I actually wouldn't mind if the URL bar was replaced with a breadcrumb bar on some sites, like news and forums. Imagine something like

Example.com > Worldnews > 2020 > 06 > 14 > Big aquatic monster spotted outside of Tokyo


forum.example.com > Sport > Football > Spain > Real Madrid

It could then work like in Explorer in Windows 10, where you can press one of the breadcrumb separators and see a menu with siblings, or go straight to all news this month. It could use some manifest file in a standard format on the server for the directory information.

Of course, this should never replace the URL completely; you should always be able to get to it easily. But URLs aren't necessarily always the best solution for navigation. We tokenize code and apply different colors, mouse-over pop-ups, and links, so why should the URL bar be a long raw text string when it really contains structured data?

This Google nonsense of hiding everything except the domain is not a good solution IMO; it doesn't solve a problem, and it makes navigation harder, not easier.
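The breadcrumb idea above can be sketched in a few lines. This derives labels purely from path segments; a real implementation would let the site supply labels and valid intermediate links (e.g. via a manifest), as discussed below:

```python
from urllib.parse import urlsplit

def breadcrumbs(url: str):
    """Turn a URL path into (label, link) breadcrumb pairs."""
    parts = urlsplit(url)
    crumbs = [(parts.hostname, f"{parts.scheme}://{parts.netloc}/")]
    path = ""
    for segment in parts.path.strip("/").split("/"):
        if not segment:
            continue
        path += "/" + segment
        crumbs.append((segment, f"{parts.scheme}://{parts.netloc}{path}"))
    return crumbs

for label, link in breadcrumbs("https://example.com/worldnews/2020/06/14"):
    print(label, "->", link)
# example.com -> https://example.com/
# worldnews -> https://example.com/worldnews
# ...
```

Whether an intermediate link like example.com/worldnews/2020/06 actually resolves is exactly the objection raised in the replies, which is why the site, not the browser, would need to declare the valid levels.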

I really dislike any attempt to modify strings like this. I find it invariably causes problems in edge cases. What if a site handles slashes differently to how Google expects? Where do GET arguments go? What if I want to modify the URL? Breadcrumbs are great when each part is navigable, but does example.com/worldnews/2020/06 actually lead anywhere, or is it an invalid address for the site? I have absolutely no interest in Google being allowed to dictate what should and should not be a valid address.

Probably worse than the change itself, though, is the tendency of anyone who makes such a change to start playing fast and loose with actually representing the underlying address. You mention Windows 10's address bar - it's one of the worst offenders. My Windows Explorer is currently sitting in my downloads folder, which is at "C:\Users\Wyatt\Downloads". The address bar reads "This PC > Downloads". When I click on the address bar to edit the address, it changes to just "Downloads". What part of all of this is in any way useful to me or the likely action I'm trying to take when I click on the address bar?

"This PC > Downloads" may point to the same directory as "C:\Users\Wyatt\Downloads", but Explorer may also handle or display differently or with different options. I've had various issues with this, such as not being able to copy the full actual path from the address bar, a sub-folder in one of these "This PC" folders or libraries showing no columns with no option to show them, and sometimes being indistinguishable from the Public folder. The full path matters in Explorer, Finder, and browsers, and should never be hidden without an easy visible way to show the full path or have it always show.

In Windows Explorer, if you click to the right of the breadcrumbs, you will get a text input with the full path to the current directory. If a solution for URLs were to attempt to switch to breadcrumbs (seems like it should be site-configurable via a meta tag or something), then a similar click to the right of the breadcrumbs could expose the underlying URL.

If you click Downloads, it won't give you the C:\Users\User\Downloads path, it'll just give "Downloads".

> I really dislike any attempt to modify strings like this. I find it invariably causes problems in edge cases. What if a site handles slashes differently to how Google expects?

I think it would have to be some standard format that websites use, not just string manipulation in the browsers. And certainly not some Google dictated feature! For the same reason, each part would have to be navigable on these sites, to work as I described. There's various possible solutions, like meta tags or some manifest like breadcrumbs.jsonld mentioned in another comment.

The fact that Windows Explorer doesn't show the full URL in special folders is a separate issue, I only mentioned it for the breadcrumbs example.

If you're talking about something other than manipulating the URL, I don't understand what problem you're solving. Sites which believe that breadcrumbs would be helpful for navigation already have breadcrumbs, I see no reason to force it on everyone else.

But I disagree with you that not showing the full address in Windows Explorer is a separate issue. In my experience loss of edge-case functionality is a core aspect of changing interfaces. Maybe in another world the address would be preserved, and my use case would still work. But someone else's unusual use would not.

Instead of manipulating the URL it would replace it, and instead of each site doing it their way it would be handled by the browser in the browser UI. Sites can implement their own back button too, doesn't mean that's where it belongs.

Think about how PowerShell uses objects instead of text to chain commands together. The address isn't just text, it's structured data; why not treat it as such and make it more useful?

Your arguments are definitely salient.

However, I think there is something to this idea - a breadcrumb style approach by default in Chrome would encourage developers to use paths in more standard ways that refer to resources, not heavy parameter coupling. As you noted, there are technical barriers to implementing this solution, which might encourage some other good things - servers providing resource discovery so that the browser can understand valid paths when visiting a site.

I address that in my point: Google deciding what paths developers should use is precisely what I don't want. I'll decide what resources should be discoverable on my site, not Google. I'll decide what paths should be valid, not Google.

Google has too much power to dictate standards already, and has been quite happy to use that power for their own sake, rather than the good of the user. I'm not interested in giving them any more.

My original point was not about Google specifically, it was about a new feature in browsers in general. I absolutely agree that Google have too much power already.

And like I wrote in my first post, the resource discoverability could be handled by the site itself via some manifest file in a standard format, like robots.txt. It wouldn't be dictated by anybody else.

I find the way Explorer in Windows 10 handles this behavior to be annoying and inconvenient. It finds ways to change paths into new canonical locations, for example browse to C:\Users\Yourname and instead of giving you breadcrumbs like Local Disk > Users > Yourname, it simply shows "Yourname" as a special home folder. When you click back in the address bar, there are no breadcrumbs anymore, it's erased your trail.

Attempts to make things simpler by hiding the truth about where you really are in navigation seems like a way to make the web less discoverable except by Google. If you're on a web site you can usually learn more about its structure based on URL format. This makes that more difficult.

But there's nothing stopping websites from offering that without browser support. It can just show that at the top of the page. Everything it provides is under the authority of the website. The URL needs to be provided by the browser because it's not entirely under the authority of the website, but that's not the case for a breadcrumb bar.

Yes, imagine if the browser vendors decided to _improve the usability of the URL bar_ instead of trying to remove it...

The only difference between a raw URL and something like

  Example.com > Worldnews > 2020 > 06 > 14 > Big aquatic monster spotted outside Tokyo
  forum.example.com > Sport > Football > Spain > Real Madrid

is a little bit of reformatting and upcasing and linkifying (or otherwise making selectable) the individual path segments of the URL.

And probably some clever logic to deal with the randomforum.php?fid=12345&tpcid=984.3&page=5 goop that is still all-too-common... :/
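The mechanical part of that reformatting really is small. A minimal sketch (the example URL and the label-prettifying rules here are made up purely for illustration):

```python
from urllib.parse import urlsplit, unquote

def breadcrumbs(url):
    """Turn a URL into (label, href) breadcrumb pairs, one per path segment."""
    parts = urlsplit(url)
    crumbs = [(parts.netloc, f"{parts.scheme}://{parts.netloc}/")]
    path = ""
    for segment in parts.path.strip("/").split("/"):
        if not segment:
            continue
        path += "/" + segment
        # Prettify the label: decode %XX escapes, replace dashes, capitalize.
        label = unquote(segment).replace("-", " ").capitalize()
        crumbs.append((label, f"{parts.scheme}://{parts.netloc}{path}"))
    return crumbs

for label, href in breadcrumbs("https://example.com/worldnews/2020/06/14/big-aquatic-monster"):
    print(f"{label} -> {href}")
```

As the parent points out, the hard part isn't this string-splitting; it's that nothing guarantees each intermediate href actually resolves to a page.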

>And probably some clever logic to deal with the randomforum.php?fid=12345&tpcid=984.3&page=5 goop that is still all-too-common... :/

You say that as though websites like that are random small sites. HN has that kind of a URL, so do YouTube and Google.

Hacker News doesn't have breadcrumbs either. The concept of a directory hierarchy inherently doesn't fit.

You could try to map the parent ==> child relationship of every individual post URL, which might be cool, but think about how long the URLs would get.

For sites with breadcrumbs though, the URL absolutely should follow the crumbs (and I've argued for such at my company).

> I actually wouldn't mind if the URL bar was replaced with a breadcrumb bar on some sites ...

Which sites?

Anyway, almost everyone else would mind. Especially if there was no option to revert to normal behaviour.

> It could then work like in Explorer in Windows 10, ...

That sounds like the worst of both worlds. If people want Explorer in Windows 10 behaviour - can't they just run Explorer in Windows 10?

If people want Chrome as it was yesterday, they've basically got no option now.

> But URLs aren't necessarily always the best solution for navigation.

The Chromium devs demonstrated their lack of interest in being able to navigate via URL / location bar a half decade ago when they changed the default on all operating systems to be single-click in location bar to 'select the whole address'.

I'm beginning to think they are not our friends.

Note this is talking about Explorer, more recently named File Explorer, rather than Internet Explorer, the browser.

Argh, of course. My mistake.

I typically run up breadcrumbkiller as part of any Microsoft Windows desktop build, so I rarely see that configuration for long.

For myself (and I'd wager most people), I want to clear the URL and go to a totally different URL much more often than I want to manually manipulate the URL I'm currently on, so I like the change in default. Many casual users probably didn't even know a quick way to select the whole URL when it wasn't the default.

The option they have is Firefox

There were several extensions that transformed the location bar like you propose: they simply turned the domain, path and query segments into clickable breadcrumb buttons. It was a delight to use, and I've missed them in Firefox since the Quantum leap stopped them from working.


By default they worked much like the aforementioned Windows Explorer, where the focused state with keyboard input turns the bar into a "raw" text field.

On some sites, this could be done using breadcrumbs.jsonld:


Sadly not used everywhere, but maybe browser support would encourage its usage by site owners.
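For reference, a schema.org BreadcrumbList expressed as JSON-LD looks roughly like the sketch below (the names and URLs are placeholders, not from any real site):

```json
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    { "@type": "ListItem", "position": 1, "name": "Worldnews",
      "item": "https://example.com/worldnews" },
    { "@type": "ListItem", "position": 2, "name": "2020",
      "item": "https://example.com/worldnews/2020" }
  ]
}
```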

I’ve implemented this on my site. The pain in the ass is that Google will only sometimes show you the breadcrumbs, so it’s very difficult to tell if you encoded it correctly.

Very few URLs are perfectly hierarchical in a way that would work with this scheme. For example, look at the URL of this page you're reading now.

I think that's a great example of the benefits though. HN could continue to use their fairly opaque URLs in the background, but instead show something like

news.ycombinator.com > 2020 > 06 > 14 > Google hides full addresses in URL bar on Chrome 85

This makes it easy to not only see where you are, but also quickly click on a part of the address to go to that hierarchy, or a sibling like yesterdays posts. It makes sense that a forum like this would have a way too see all posts from a day, month, or year.

Of course, most if not all users here are comfortable with URLs so they're probably not the ones that would benefit the most. But I think most common users, the ones who Google everything instead of typing in an address, would use the breadcrumb bar while today they probably see the URL as some weird text string they have little interest in or understanding of.

While we often create a mental map between some sort of logical hierarchy and the segments of the URL, this doesn't have to be the case. A specific domain should be the authority on this, not the general-purpose tool used to access it.

There was an extension in Firefox that used to do exactly this. Seems like it did not survive the Mozilla War Against Addons, unfortunately.

At chrome://flags there is one called #omnibox-context-menu-show-full-urls, which I have turned on.

This enables you to right-click on the address bar and turn on the option "Always show full URLs". It always shows the full URL including the protocol, but I suspect they will remove this flag at some point.

I don't think they will remove it soon, since it was just added¹ after a lot of complaints about the default behavior².

Now how does this new flag interact? Has anyone enabled both to see?

1: https://bugs.chromium.org/p/chromium/issues/detail?id=106157...

2: https://bugs.chromium.org/p/chromium/issues/detail?id=883038...

A heads up for those with lots of tabs open: This requires a browser restart, and setting it once seems to set it for all profiles.

If you happen to close your browser and lose your tabs, use the reopen closed tab menu option; it'll bring back all the closed tabs (even if there were multiple).

Even safer is to use an extension like Session Buddy, to explicitly save tabs and windows, including exporting to files.

I can't find this option on Chrome on Linux. I had to get an extension to show the full URL, but it only works for 'https://' URLs, not for 'http://'.

This drives me crazy when debugging. Whenever I copy-paste IP addresses from the browser address bar into my console, I have to manually delete the `http://` at the front. I work on a P2P project so this is an extremely common situation for me.

Are you running version 83 or later? I think they introduced the flag in that one.

Another solution for Windows and macOS users (no Linux, sadly) is to use Edge Chromium, which shows full URLs by default, if, like me, you prefer to donate your data to Microsoft rather than Google :)

Oh wow, http:// and https:// are back! I've been waiting years for that option.

Thanks. I enabled that flag last week and thought it didn't work. I didn't know there was a second step. Much better now.

Wow, thank you so much! My nervous system already feels better with this working.

Seriously, I keep looking at the address bar to make sure the URL is still there and I'm not dreaming.

Absolutely true, but well-meaning advice like "just use an adblocker" and "why not use a VPN" somehow doesn't quite cut it for me. Defaults matter.

Yes, they always remove such flags later; it doesn't matter that it's there right now.

This is exactly what I'm afraid of. There used to be chrome://flags/#omnibox-ui-hide-steady-state-url-scheme-and-subdomains (when I google how to make Chrome show the full URL bar, this is the recommended answer), but it's been gone entirely for several Chrome versions. I even toggled a flag to undo flag deprecations in Chrome 78 to get this back, but that didn't work very long; I think this flag has been totally dead since Chrome 80 or so.

I personally don't care much what the default is for the normal user, but I want to be able to have my full urls.

Which is ridiculous, because there’s thousands of these flags for things they don’t have an agenda to see gone.

In fairness, the OP is about another such flag, so the same argument would apply to that one.

> At chrome://flags ...

Is there also a #upgrade-to-firefox-immediately flag ?

OK, I just don’t get it anymore. I mean I’m a happy Firefox user, so it’s not like this personally impacts me, but how in the heck is seemingly nobody acknowledging that this has been the behavior in Safari for a long time now? This seems to be a recurring pattern.

I've got a couple of guesses. Apple is somehow regarded as being a pro-user company, not having plans of taking over the web. Also, Apple users are accustomed to UI changes that result in visual simplicity. Nevermind that their actions result in patronizing the user just the same as Google does, in their case those actions are more likely to be perceived as innocuous.

Cmd+f Safari, I'm equally surprised no one but you mentions it.

However: I believe Apple's motives are aligned with their users and they want their browser to be as safe and as easy to understand/use as possible. Their primary intention is to sell their shiny expensive hardware.

With Google it's more controversial, because who knows what's the plan. Combined with AMP there is a reason to be wary.

Ofc, one can make bad decisions based on good motives.

The other day, our 7 year old told me that [ is 5B and ] is 5D. I was quite impressed that he knew this, and I asked him how he knew it. He told me it was from reading the address bar in Roblox. Needlessly hiding technical details from kids is going to limit their learning.

This, exactly. People learn not only when they are forced to, but naturally from observing their environment too. The more opportunities to "spontaneously learn" you take away from them, the less they will learn.

Here's a comment I made from several years ago when Chrome tried to do before what it's trying again now (it's not the first time): https://news.ycombinator.com/item?id=7678729

Maybe this next point is starting to go into the realm of conspiracy theory, but I see far too much evidence of it every day: companies are doing this because they don't want users to learn. They want to keep users naive, docile, and compliant, and thus easier to "herd" for their purposes. They don't want people knowing the truth behind how things work; they would rather "developers" (and only those who explicitly chose to be one --- probably for monetary reasons) learn from their officially sanctioned documentation (which does not tell the whole truth), and not think or discover for themselves.

(I've memorised most of printable ASCII because I did a lot of Asm programming decades ago, so I instantly understood what you mean.)

Not sure how much actual conspiracy is in there, but I've definitely noticed that the gap between "consumer software" (highly optimized for ease of use, but also highly limited and designed with a specific intention for how users should interact with it) and "professional software" (powerful and flexible, but only usable after extensive training, often command-line only) is widening instead of closing.

There are also definitely conscious design decisions about how "cryptic" a particular feature should appear to users. I remember several Bugzilla threads with discussion of whether a config option should be exposed as an "ordinary" field in the settings or only as an option in about:config, so that normal users won't find it.

Coffee hasn't kicked in yet, took me a while to figure out you were saying '[' url encodes to 5B and ']' url encodes to 5D

I read that sentence 10 times. Wow.

This is how I learned them too (though about:… in IE5).

Is that a meaningful piece of learning, though? I’ve been doing webdev since the 1990s and still look up character codes if I need them.

I think the value is not in memorizing such trivia; for a 7-year-old it might be discovering the pattern of data encoding and its why and how. It opens all sorts of paths of discovery in future for understanding software.

You'd be surprised how often veteran developers fail to grasp intermediate Unicode concepts (surrogate pairs, for instance) probably since they skipped over (or was not curious enough about) implementation details of such abstractions.
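As a concrete illustration of the kind of implementation detail meant here, a short sketch (Python chosen arbitrarily) of how a code point above U+FFFF splits into a UTF-16 surrogate pair:

```python
import struct

def surrogate_pair(cp):
    """Compute the UTF-16 surrogate pair for a code point above U+FFFF."""
    assert cp > 0xFFFF
    offset = cp - 0x10000
    high = 0xD800 + (offset >> 10)   # top 10 bits of the offset
    low = 0xDC00 + (offset & 0x3FF)  # bottom 10 bits of the offset
    return high, low

cp = ord("😀")                       # U+1F600
high, low = surrogate_pair(cp)
print(f"U+{cp:X} -> {high:04X} {low:04X}")

# The same pair falls out of a real UTF-16 encode:
assert struct.pack(">HH", high, low) == "😀".encode("utf-16-be")
```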

Sure, but it's the curiosity about how things work under the hood that matters. If he sees [ being replaced with %5B, he'll ask why it does that. And that leads to learning.
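And the "why" is small enough to demonstrate directly. A quick sketch with Python's standard library (the set of sample characters is arbitrary):

```python
from urllib.parse import quote, unquote

# Characters outside the small "unreserved" set are escaped as '%' plus
# the hex value of their byte -- which is why '[' shows up as %5B.
for ch in "[] /?#":
    print(repr(ch), "->", quote(ch, safe=""))

assert quote("[", safe="") == "%5B"
assert quote("]", safe="") == "%5D"
assert unquote("%5B%5D") == "[]"
```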

Are you gatekeeping a 7 year old? Learning about character encoding from first principles is an awesome accomplishment.

I’m saying learning character encoding doesn’t support this:

> Needlessly hiding technical details from kids is going to limit their learning.

I watch kids learning circuitry via redstone in Minecraft on iOS and Xbox - walled gardens, yet impressive learning nonetheless.

I agree kids can learn useful stuff in Minecraft, no doubt.

But when I was his age, all I had was MS-DOS 3.3. And I had to CD around to various directories, DIR *.EXE to remember the names of executables, etc. It was an environment that exposed more technical details, and kids who are predisposed to learn technical details learn a lot just by using it. Windows 10, doesn't promote the learning of technical details to anywhere near the same extent.

(I try to make up for it a bit. I introduced him to DOSBox.)

Wouldn’t it be better to be able to devote limited learning time to more useful things than locating oddly named executables, though?

It is a mistake to conflate “it was harder for me” with “I learned more”.

My kids can program more complicated stuff in Minecraft than I could at their age. Part of that is having a tool that’s fun and abstracts away the boring bits.

> Part of that is having a tool that’s fun and abstracts away the boring bits.

What is "boring" varies from person to person.

I know, when our son plays Minecraft Java Edition, he likes to play it with the debug screen (F3) on.

He doesn't understand what most of the details on that screen mean, although he is learning a few. (He was asking me to explain what X, Y and Z coordinates were.) But, even if he doesn't understand most of it, he still likes it, and probably sooner or later he'll ask me more questions about it.

When he told me this, I thought he was talking about ASCII, but I had to actually double-check with "man ascii", because 5B/5D sounded familiar but I wasn't 100% sure he was right.

But the point is not that he memorises the ASCII table. The value is that he learns that computers internally represent letters/punctuation as numbers. The underlying concept is what's important, and the learning of specific values is mainly useful as a way of learning and reinforcing that underlying concept.

Yep! Kids learn by observing.

Firefox became quite fast again after Quantum. For those of us who never "bought into" the whole Chrome ecosystem, there's always been adequate alternatives. Will check out: https://www.palemoon.org/

They have quite an interesting conversation on github: https://old.reddit.com/r/linux/comments/7w61aw/pale_moon_rem...

I'd stay clear of that project and use mainstream Firefox instead. And afaik they still don't support WebExtensions.

I currently run firefox developer edition - gives me access to custom extensions.

Suppose there never was a URL you could share.

Suppose you always had to tell people to 'Google it'

Suppose 'I feel lucky' was always the default, and the result was sold to the highest bidder.

Safari has been this way since 2014. I've never seen any pushback on Apple doing it over the past six years.

It's genuinely a benefit for the vast, vast majority of users, where the only important piece of information really is the domain name, to check which site you're actually on. And for more info, you can just click. Copying the URL becomes no more difficult.

The URL path beyond the domain is as useful to most people as an IP address, in other words not at all -- it's just noise. And displaying noise is bad UX. Pretty much only website developers and administrators and SEO people care about the full URL. Granted, there are a lot of those people here on HN, so I understand the pushback, but we're not most users.

But at the end of the day, I don't understand why people seem totally fine with Safari doing this, but not Google?

As long as you see the full url when you hover/click on the bar, I am all for it as well.

I find some of the reactions to this ridiculously hyperbolic. "Biggest attack on the web in years"? Seriously?

I get it, Google is a gigantic monster that does not necessarily act in its users best interests, but that does not mean we need to bring the pitchfork each time they launch an app update.

If recent history with AMP has shown anything, it is that yes, we need to bring our pitchforks every time.

And also precisely because of AMP, this might be a very dangerous step towards blurring the lines between original and AMP pages.

On that note, I noticed recently that Google search result links (on Firefox?) get rewritten. That is, you see the actual page URL when you hover over the link, but it's changed to their own redirect URL as soon as you click it.

I'm sure they've always been tracking these search result clicks, but I think this is a somewhat new behavior, and I find it highly deceiving.

Chrome sends back your click 'behind the scenes', whereas Firefox does not, so Google forces you to click through their link so they can track your activity (and if you hit the back button, you also jump through their redirect)

uBlock Origin can block this behavior. Here's a posting with links to more resources on hyperlink-auditing:


The people who use Safari are exactly the demographic that this change targets. Chrome users include most developers, who are the ones to complain.

I’m a developer. I use safari/WebKit for 95% of my browsing. You can enable the full address among a bunch of other excellent developer settings and move on with life with a browser that works great.

Developers are <1% of users - anything outside of dev tools is not changed with them in mind.

I’m in favor of full URL, but frankly didn’t notice until just now that I haven’t enabled Safari’s Show full website address preference.

I strongly disagree. Ordinary users I interact with either understand the basic concept of the URL or understand it after an initial explanation. It becomes empowering to them in ways I often do not anticipate.

If I recall there was some grumbling, but Apple being Apple, they do what they want.

But why can't this be a toggle or user setting like in Safari? Why is it a one-true-Google-way of doing things when clearly there are users who want to keep it (even if it's just web developers)?

How do you know it won't be a toggle?

Right now it is a setting to enable in Chrome.

What makes you think that once it becomes default, the switch won't remain to be able to turn it off?

Chrome is built by developers. Presumably, they pay attention to what developers need from it. Which is why their debugging tools overall are so amazing.

Perhaps you should withhold criticism of what you assume they'll do until they, you know, actually do it.

Reading URLs is actually really hard - even for experts. This video covers the problems well: https://www.youtube.com/watch?v=0-wB1VY3Nrc

This is bad for web security, since the registerable domain is the part you have to trust, but it's surprisingly difficult to figure out that part.

However, I feel a bit uneasy about this since URLs are important and tell you where you are on a website. I prefer Firefox's approach, which emphasises the registerable domain in the URL bar and fades out the rest, making it easier to spot the important bit. However, it's still quite subtle; it could do with a clearer distinction.


The video points out things like: how do you spot an eTLD? There's .com, but what about .co.uk? .github.io? Do you know all the exceptions? There's basically a database of them and you just have to know them to correctly interpret the security origin of the domain.
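To make the problem concrete, here's a toy sketch. The tiny hardcoded suffix set stands in for the real public suffix list, which has thousands of entries; don't use anything like this for actual security decisions:

```python
# A toy registerable-domain finder. The real list lives at
# publicsuffix.org -- the point is that you can't compute the
# trust boundary just by counting dots.
SUFFIXES = {"com", "org", "io", "co.uk", "github.io"}

def registerable_domain(host):
    labels = host.lower().split(".")
    # Scan from the longest candidate suffix down, then keep one more label.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in SUFFIXES:
            return ".".join(labels[max(i - 1, 0):])
    return host

print(registerable_domain("news.bbc.co.uk"))   # bbc.co.uk
print(registerable_domain("user.github.io"))   # user.github.io
```

Note how the two results differ structurally: "co.uk" swallows two labels, while "github.io" means every user subdomain is its own security origin.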

The way we use DNS (reversed) does make URLs kind of confusing for specificity, like:


I am going to go against Hanlon's razor here, but doesn't the slow push away from URLs benefit Google?

A few years later, instead of typing news.ycombinator.com, you would need to search for "hacker news", scroll through the ads and then click on the link.

So it could be a slow transition to inserting a sort of interstitial ad into your browsing.

I'm against this. But: it's 2020 and still a huge number of people I deal with every day type the name of our product into their search bar and then login at the first site returned. Most don't know the difference between a browser and a specific website. It's all just a big Smush to them apparently.

Remember when ReadWriteWeb wrote an article about a new Facebook login feature and users who usually Googled facebook login and pressed the first result got all confused and angry? I don't think the average user today is any more knowledgeable about URLs.


Services that cater to the very lowest tier of users should also factor in the risk of that if they want to reap the rewards too. And it's a great thing about an open platform such as the Internet.

I'm also against this in theory, but in practice I don't care much. We shall see.

It's $current_year is never a good excuse. $people not knowing stuff should not mean that everyone should get dumber to match $people.

URLs are confusing though. I am a veteran URL user and I learned something about them from this thread. Hiding them isn't necessarily better but many replies here seem to be denying that they are imperfect.

Imperfect and stable+standardized is preferable to unstable+unstandardized, in my opinion.

I agree. What I'm saying is clearly if you want to defeat this then a totally different approach needs to turn up because 20 years of mainstream internet use has resulted in zero user education.

I doubt that. You mention in your parent post that people don't know the difference between a browser and a specific site. I think most people do. At least if they use more than one site.

What I meant was that in a tech support context if you ask people what browser they're using they will often say something like "I went to Product Name" or "I'm on Product Name". Then ask them what actual address they visited or again ask them what browser they are using and they will say something like "I went to the Internet".

I am not calling anybody dumb. I'm saying they don't care and don't know there is any reason to care.

I still doubt that. People know how to enter URLs. They choose not to because it's oftentimes easier just to search for where you want to go. Google Chrome hasn't gotten this market dominance by people not caring. It has to be installed actively.

Even so, showing the URL bar changes nothing for them, so why hide it?

It feels like a long time ago people were talking of computing in context of educating and empowering users rather than accessing commercial services.

> $people not knowing stuff should not mean that everyone should get dumber to match $people.

Exactly. We should push on the opposite direction (educate people and make the concepts clearer).

This, along with the inability to disable the async DNS feature in the latest desktop versions of Chrome (thus making pihole/adguard irrelevant), makes me accelerate the change to another browser.

I hate to be the person who's like "you're holding it wrong" but your usage of DNS is incorrect according to the RFCs. All configured DNS servers are assumed to serve the same content. The idea of every DNS request "trying" the first server, timing out, and then the next, and the next is a calcified implementation detail.

A DNS client looking at the list of servers, and marking the speed and reachability of each server, is the most basic optimization. It makes no sense for clients to add n seconds to every request for every unreachable DNS server.

The async DNS feature uses Chrome's internal DNS client, which behaves differently from glibc, so pihole appears not to work. Chrome is not injecting its own DNS servers into the mix or whitelisting anything; it always uses your system's DNS servers, it just queries them all in parallel, which it is allowed (and encouraged) to do by the RFC.

Make sure all your configured DNS servers are pihole and everything will work.
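On Linux, for instance, that means a resolv.conf along these lines (the address is a placeholder for wherever your Pi-hole actually lives):

```
# /etc/resolv.conf -- list only the Pi-hole, so even a parallel
# resolver has nothing else to race to
nameserver 192.168.1.2
```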

Also: links from external applications not opening in incognito mode when that is the last (or the only) Chrome window with focus. It still drives me nuts.

I'm curious - why do you want to disable it?

Because I want to use my own DNS server and block ads at the DNS level rather than the browser level. With this move, Google has effectively whitelisted AdSense/AdWords to not be blocked regardless of the network settings of the device.

This is confusing.


Seems that the problem is not async itself, but that Chrome ignores the system DNS settings and uses Google's own DNS servers instead.

That is a good point, I might have been pointing at the wrong issue in Chrome. I have only seen this behavior happen since 2-3 days ago and all my research pointed me to async dns being the culprit. I am really eager to find out if this can be disabled in any way, but my Chrome time has come to an end with recent developments.

Seems like a bug. Some environments cannot reach external DNS servers, so it would break resolution in general. This happened before and was fixed: https://bugs.chromium.org/p/chromium/issues/detail?id=265970 I couldn't find any report for the current issue though - maybe you should start one.

Chrome doesn't respect DNS settings anyway. I have in my resolv.conf:

    search my.home
And entering the hostname of my server just googles it, instead of trying a lookup first and googling only afterwards (or never; I don't see why the browser should contact its owner just because I mistyped the URL).

This point merits an explanation. The file /etc/resolv.conf is a configuration file for the `dns` NSS module used by the glibc resolver.

Google's async DNS feature uses Chrome's own internal DNS resolver, which doesn't call getaddrinfo(). It would be incorrect for Chrome to parse this file and attempt to "respect" your settings, because NSS is a series of black-box, system-specific modules. If you removed the dns module from /etc/nsswitch.conf, then resolv.conf wouldn't even enter the mix on your system, and Chrome would do the wrong thing. If the dns module behaved differently on your system and /etc/resolv.conf was actually /etc/resolv.json or /etc/resolver.conf, then Chrome would again do the wrong thing.

When resolving a name, applications have two choices: either look up the name with glibc, sending the request through the NSS gauntlet of black-box modules and taking whatever it returns, or perform the DNS request themselves and ignore everything on the system. Any sort of hybrid approach would be more confusing.
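The first of those two paths is easy to see from a high-level language. Python's socket.getaddrinfo, for example, delegates to the platform resolver (glibc plus NSS on Linux), which is exactly the route that consults /etc/nsswitch.conf, /etc/hosts and /etc/resolv.conf:

```python
import socket

# Ask the platform resolver; on Linux this goes through glibc's NSS
# machinery, so /etc/hosts and resolv.conf both apply. An internal
# resolver like Chrome's skips all of this and speaks DNS directly.
infos = socket.getaddrinfo("localhost", None, family=socket.AF_INET)
addresses = sorted({info[4][0] for info in infos})
print(addresses)   # "localhost" is typically served straight from /etc/hosts
```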

Hmm, so does this mean /etc/hosts will no longer work either, etc.? That's handled by the same glibc function too.

Why not just use a syntax highlighting approach on the address bar? Protocol one color, domain another, slashes one color, query params another, etc.

It's a pretty obvious solution, especially to any programmer.

I'm having a hard time thinking of a situation where you have information, some more important and some less, where the correct solution is to delete the less important information. It still has importance!

Using color to convey information makes it very difficult to remain accessible while still working with themes and being aesthetically pleasing at the same time.

What you described is more like highlighting the function signature in one color and the entire body in another. Syntax highlighting for the URL would be more like the domain/subdomain in one color, query fields emphasised in one color and params in another, with colors potentially varying based on their type/significance.

That might help make the URL more readable, but again it doesn't really help if the parts of the path/query string relevant to trust aren't immediately apparent.

Because UX/UI designers and art-inclined people would lose their minds.

Or a breadcrumb trail, as someone else said.

I'd personally prefer it if the URL wasn't hidden, but after first-hand experience of non-tech-savvy family members trying to decipher the query part, and some accusing someone of trying to hack them, I'm for the change. The whole query string philosophy is such an outdated hack. Today it's just thousands of tracking queries.

What I hate is how poorly they worded the warning for websites that use http instead of https. It says "connection not secure", which makes people think there is a hacker somewhere hacking their connection. What they should have done, and must correct, is make the wording "this website is not following safety guidelines". I'm tired of explaining.

Maybe it should say "connection not secured". It would be much more factual.

> trying to decipher the query part and some accusing someone of trying to hack them I'm for the change

At least with URLs visible they have the opportunity to learn what a query string is, and that no, nobody is hacking anyone.

However with these stupid changes there will no longer be an opportunity to learn even if you wanted to.

If you dumb down users, they become dumber.

People shouldn't have to educate themselves on the implementation details of the browser or the internet, just as they don't educate themselves about the details of their car. We have many other complex systems with simple end-user goals, and people don't have to care about the details. Most American drivers don't even have a gear shift.

What is the conclusion of your hypothesis, though, if users become unable to find and validate their services (e.g. online banking) and distinguish them from phishing sites or otherwise hacked pages? Knowledge is power. So we need to be careful about making people impoverished or too reliant on centrally commanded portals, or prepare to face the consequences.

Query string, an outdated hack?

Look, pilgrim, that's the standard for you right there. It's called RFC 3986.

3.4. Query

   The query component contains non-hierarchical data that, along with
   data in the path component (Section 3.3), serves to identify a
   resource within the scope of the URI's scheme and naming authority
   (if any).

Today they are used for tracking strings more than anything else, and that's also the reason why they are hiding them. People didn't complain when they were used as search queries.
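And precisely because the query string is standardized, tracking parameters are easy to strip mechanically. A sketch (the blocklist here is illustrative; real-world lists are much longer):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Illustrative blocklist of common tracking parameters.
TRACKING = {"utm_source", "utm_medium", "utm_campaign", "fbclid", "gclid"}

def strip_tracking(url: str) -> str:
    """Return the URL with known tracking query parameters removed."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING]
    return urlunsplit(parts._replace(query=urlencode(kept)))
```

Which suggests the browser could clean or de-emphasize tracking noise without hiding the query string wholesale.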

Wonder how long it’ll be before it shows the proxied URL on AMP pages...

I think they're already trying to force that at the network level instead of the browser level using signed exchanges.

Signed exchanges are quite neat actually, they do not seem to depend on AMP at all. You could even use them to get arbitrary static resources hosted via IPFS in a seamless way.

For sure, just as AMP is also a neat technology that can be used by companies other than google.

But my understanding is that they intend to use signed exchanges specifically for their amp URLs, finally finishing their efforts of forcing people to go to google.com without them ever having realized it.

Ffs. I’m going back to gopher at some point.

Which is soon to be usurped by Google because it starts with “Go.”

Gopher over SNA/OSI is worse than WWW/TCP/IP in terms of ability to publish UGC.

That's a feature

Google Network Control Program?

Or just switch to Firefox...

Firefox is going down the toilet as well. Lots of me-too-isms are appearing.

Really, my point was in jest. I think we need to trash the entire WWW and start again with something content-focused, with a hard same-origin policy, and far, far lighter than what we have. I tried browsing the web on a dual-core Celeron N3010 recently and it was unusable on all mainstream browsers.

Wouldn't surprise me. Users are the product; what else is there to expect?

Yes, walling the garden has been the goal all along.

A couple of Google Chrome devs talk about the issues surrounding the readability of URLs, their security implications, and possible solutions in an episode of their podcast[0]. I think they make a compelling argument for hiding most of the URL, in part to prevent phishing; however, I do think they should allow this behaviour to be toggled via a flag.

[0] https://youtu.be/0-wB1VY3Nrc

Do you know if this information is somewhere more accessible than a 20 minute video?

Hiding the https and www is already frustrating enough, and this change would make Chrome barely usable for my purposes.

The claimed purpose is basically just to prevent phishing.

They explain a number of reasons why it is difficult for people to extract from a URL the part which is relevant to security, i.e. the bit that affects who has authority over the page and how your cookies will be separated by the browser. The cookie sharing actually has some rules I didn't know about as a non-web-developer but experienced URL user. They show how every browser is already going some way towards this, but they all have some problems; for example, Safari shows the full domain, not just the important part.

Looks like this will be great for reflected XSS attacks. Even advanced users will not be able to notice there's something weird going on outside of the domain name part of the URL. Perfect!

Basically any page on a website with this vulnerability will be usable to show a fake login page, and the user won't even notice they're not on /login but on some weird path + ?_sort=somejavascript

Not that it's that hard to clean up the URL via the History API after you get access to the page via XSS at the moment, but there's still a short period of time where the full URL is shown in such a case, which may provoke suspicion.

Stick "?jsessionid=<random 80 character string>" in front of the XSS payload and no one will ever look.
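The reflected-XSS pattern being discussed can be sketched as follows (a deliberately vulnerable renderer, purely illustrative — the parameter name `_sort` is taken from the example above):

```python
import html

def render_results(sort_param: str, escape: bool = True) -> str:
    """Echo a query parameter (e.g. ?_sort=...) back into the page.
    Escaping is the difference between a search page and a phishing vector."""
    value = html.escape(sort_param) if escape else sort_param
    return f"<p>Results sorted by: {value}</p>"
```

With escaping off, a crafted `?_sort=<script>...</script>` link is reflected verbatim and runs in the victim's page — while a URL-hiding address bar shows only the trusted-looking domain.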

Their goal is full AMP dominance. Just look at these evil guys' faces. It's clear enough that they're going to pass their frustrations onto you, no matter what.

Conspiracy theory: This change is dictated by the Google AMP team that wants to take over the world without us knowing

> Conspiracy theory: This change is dictated by the Google AMP team that wants to take over the world without us knowing

I was just about to write this but I don't necessarily think it's that far off.

With signed exchanges, AMP pages have the ability to hide the fact you're accessing content through Google [1]. In 2016 Google wrote about testing 'mobile-first indexing' because more people are using mobile devices than desktop browsers [2].

[1] https://developers.google.com/search/docs/guides/about-amp#a... [2] https://webmasters.googleblog.com/2016/11/mobile-first-index...

If Google can control the URL narrative (keeping users from bouncing off AMP pages), it's just one more way for them to be the man in the middle.

I wonder if they’ll eventually hide the URL path from extensions (for security) and serve ads off google.com. Even serving ads from somewhere under google.com/amp would probably cause problems for ad blockers. Or maybe extensions see the rewritten URL only, so CanSignHttpExchanges is a way of turning third-party trackers and ads into first-party ones.

Also nice to see DigiCert helping them out, but I’m not surprised with how DigiCert’s product lineup isn’t much more than a test of how much of a sucker you are.

I disagree with the decision strongly, but I'm a developer and probably a "power user". A casual user might not even know what a URL is.

Do you know there's a staggering number of users who type "google.com" into Google?

> A casual user might not even know what a URL is.

And this will add a few extra hoops for them to jump through before they learn, so that they'll never have to leave the reassuring embrace of Google's ad trackers. How convenient. :)

I mean... not to be contrarian, but does the average user need to know what a URL is?

Do I need to know an address to drive my car somewhere?

My knowledge that a place exists and I want to go there is sufficient to get me there, without having the physical address memorized.

As a power user I obviously navigate via URLs far more than the average user, but I am not convinced that, say, a 50-year-old nurse using my web software needs to ever touch a URL even a single time, or that knowing what it is would benefit her user experience.

Maybe a slightly better analogy: with the URL, if you know your address, you can go straight there. With a car/driving, it would be like instant teleportation. Not knowing the URL means using Google search, and not knowing the street address means driving around and seeing a bunch of billboards. Removing the URL bar is like removing the ability to teleport so you can make sure people see the billboards.

That's implying there's an "after they learn" - even without those hoops.

Knowledge does not guarantee action, but there will be no action without knowledge.

It's not unimaginable to me that there will be no knowledge anyway, regardless of whether the URL is shown or not.

Hiding the URL won't make people learn what it is. And tinkering with the URL to see what can be accomplished is how I learned a lot of things when the WWW was starting.

Are we promoting idiocracy now? If someone doesn't know what a URL is, they should find out, or live with not knowing.

A large part of the reason for this confusion might be Google's long-standing effort to blur the line between search and plain URL-based navigation, starting with integrating search into the location field in Chrome. Firefox and Safari[1] used to separate them, which makes the concepts clearer and avoids sending URLs and (local history) searches to the search provider unintentionally.

[1] http://toastytech.com/guis/osx14safari2.png

I remember this, together with less wasted vertical space, being the magical thing that made me switch to Chrome back in the day. It was (and still is) so simple! I always had the search field taking up precious space in Firefox but never really used it, because it was a different mental route; like, this is smart, but why am I not using it?

I think it comes down to having to consciously decide to search before starting to type, instead of just starting to type. If you couldn't remember the URL, you could just misspell it and search, and it works; for a more specific page, throw in another word and you get the correct page as a search result essentially every time.

Looking at that issue of consciously deciding whether to search from another angle, if I haven't decided yet, why would I want to inform Google of what I'm typing? Perhaps it's a private address of a private server with private information in the query string.

In any case, if I do decide to search, the search field is just a ctrl+k away, so the additional convenience of combining the fields never seemed that great to me. (But for Google, of course, it's a very convenient property of this design that everything the user types happens to end up being sent to Google.)

No, sorry, I just know of people that type "google" into bing.com ;)
