I would have no trouble with hiding "https://" in the address bar, as long as they show it for other protocols (including http). It might help move us to a world with HTTPS everywhere even faster.
I still prefer Firefox: protocol, subdomains and path are greyed out but still clearly legible. This way I can eyeball "on which site am I?" quickly (and read google-secure-payments.google.via.net as via.net for example) and still have access to the full URL in 0 clicks.
I have the opposite complaint: when I highlight a portion of the URL in Chrome, it prepends the http:// anyway. Prepending http:// would only make sense if I had the whole thing selected.
Yes, on Windows. You do have to select the first part of the URL, but I often just want the host and port, i.e. localhost:8080 instead of http://localhost:8080
This is so minor but it slows me down working every day. If I didn't select it, don't copy it. If I clicked in the center on a specific spot, drop the cursor right there instead of selecting a section.
Interesting. What OS and Firefox build are you on? Firefox always copies the full URI for me. (Windows, Firefox 69 - and I'm pretty sure I get the same behavior on my Linux machine at home, but I'm at work right now so can't confirm.)
This reminds me of how Windows frustratingly hides file extensions by default.
This sounds low-upside/high-downside to me, but it's the sort of thing with simple arguments-for (easier for clueless users! Cleaner!), and nuanced arguments-against (eliding rarely-useful details in special cases causes ambiguity, and can lead to confusion in some cases, particularly for clueless users).
There was an article recently about "hostile architecture" and this is similarly a "hostile software design" to prevent users from doing something the developers don't want them to do.
It's hostile for power users, but it prevents grandpa from renaming 'IMG_144.jpg' to 'Idaho_grandkids', hammering enter on the 'are you sure you want this' nag screen, and then wondering what broke his picture.
On Macs before 2001, that used to work! We didn't use to cram pieces of file metadata into file names. File type was stored in its own slot. The hidden-ness of a file was stored in its own slot.
I find it amazing that despite the industry's trend in recent years away from string-typing and towards static-typing, metadata-in-file-name just won't die.
I'm all for removing old ways of doing things that were bad, when we have a better way to do them. What we have here, with type-in-extension (and visibility-in-prefix), is a legacy system for which there is so far no workable substitute (on any system, much less all systems). A person can't be fully computer literate and not know what extensions are.
UTI has some nice features, but it's still based on extensions (which are keys into the database with the information you really want). This weird halfway point, where every user has to be a "power user" of the filesystem in order to not break their own files, will live on until either types are made primary again, or until we're all reduced to using standalone "apps" with app-specific storage and we can't share any data except as the app author decided to allow.
Apple used to be the innovator in file metadata. They stopped when Steve came back because proper metadata made it more difficult to share files with Windows and Unix. And we're still stuck with crap metadata to this day.
Given the implementations I've seen of file metadata outside of the extension, I'm okay with this: every time an OS (especially Windows) tries to strip away the extension, it ends up switching the file type from being a description of "what am I" to "what can open me". I really dislike "what can open me", especially with a lot of common office work file formats. Knowing the difference between .rtf, .txt, .doc and .docx is important because of the special attributes of those files. Additionally, I frequently run into issues with character encoding, and that should be an even easier problem to solve, so... my outlook is not very optimistic.
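The extension-as-type lookup described above is visible in any MIME database; Python's stdlib mimetypes module is one such database and makes the point concrete. Only the name is consulted, never the bytes, so "renaming" a file changes its apparent type (the file names here are illustrative):

```python
import mimetypes

def guessed_type(name: str):
    """Look up a MIME type from the file name alone; the contents are never read."""
    return mimetypes.guess_type(name)[0]

# Renaming the "file" changes the guessed type, even though no bytes changed.
print(guessed_type("photo.jpeg"))  # image/jpeg
print(guessed_type("photo.txt"))   # text/plain
print(guessed_type("photo"))       # None -- no extension, no type
```

This is exactly the "key into the database" behavior the comment describes: strip the extension and the system has no idea what the file is.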
Agreed. Apple's pre-OSX metadata system was two pieces: "What am I" and "Who created me." This was an amazingly flexible system that just worked and was the best of both worlds.
That's fine; not every task that is possible must be directly supported by the OS. Third-party apps to fiddle with non-primary-workflow data like that are OK.
It's not the picture that broke, and it's not the user that is dumb; it's the system that is broken. Dot extensions are a concept that should never have seen the light of the 21st century. Or at least they should be a last resort for the system to guess the file format, for those formats that don't have magic numbers in their headers or for lesser-known formats.
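For common formats, sniffing magic numbers instead of trusting the name takes only a few lines. A minimal sketch (the signatures for JPEG, PNG, and PDF are the well-known ones; the function name and structure are illustrative):

```python
# Leading-byte signatures ("magic numbers") for a few common formats.
SIGNATURES = {
    b"\xff\xd8\xff": "jpeg",      # JPEG/JFIF
    b"\x89PNG\r\n\x1a\n": "png",  # PNG
    b"%PDF-": "pdf",              # PDF
}

def sniff(data: bytes):
    """Guess a format from the leading bytes; None if no signature matches."""
    for magic, name in SIGNATURES.items():
        if data.startswith(magic):
            return name
    return None

# Grandpa's renamed photo still sniffs as a JPEG, whatever the file name says.
print(sniff(b"\xff\xd8\xff\xe0" + b"...rest of file..."))  # jpeg
```

This is what tools like the Unix file command do at much greater scale, and it's why the "renamed photo breaks" failure mode is a system-design choice rather than a necessity.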
Or more generally the header should contain the metadata of the file: date modified, filetype, comments, icon, etc, rather than spreading that between the filesystem (with different filesystems having inconsistent dates), the OS, etc.
I disagree: locality metadata such as ownership, modified date, comments, even permissions, should not be part of the file. It breaks things like repeatable builds, version control, and anything where the contents of the file are considered static.
Shadow files (._*) and directory clutter (.DS_Store/desktop.ini) are one solution, but they're ugly and frustrating; a separate area in the filestore (like a resource fork, but designed to be ephemeral) is a much saner solution.
I don't think it breaks anything. If the world had gone this way, all the functions you use to read a file would read the data from the first byte after the header.
In that situation the metadata is no longer part of the 'file' then, and would be lost by a 3rd party copy function unless extra API functions are used to copy the metadata.
At this point, it might as well be a "resource fork" or whatever you want to call your secondary file contents at the fs layer shrug
I like the design principle "it should be easy to do easy things and possible to do hard things". Showing the extension, but requiring terminal use to change it, would be a decent compromise.
Files had a type code and a creator code. The type code told applications whether they could open the file. The creator code told the Finder which application to open when the file was double-clicked.
It was impossible for the user to change either (in the stock OS).
It worked well enough and ensured the example 'grandma' problem above could not occur.
The Finder maintained a 'desktop database' which, I believe, was used by the operating system to determine which Applications were able to open files given their type. This was updated automagically, so given a floppy disk with an application on it, and a file that could be opened by that application, the user could insert the disk, double-click the file and the application would be launched to open the file--even if the application was previously unknown to the machine in question.
Power users could use ResEdit or some other tool to change such attributes of a file.
He's mis-stating it slightly. You couldn't fiddle with a file's creator code, but there wasn't much reason to.
The type codes were editable, in the sense you could choose the default app you wanted to open files of that type, which is really what you'd want to modify most of the time.
I don't remember this being the case. I don't think it was possible to change either a file's type or creator code without a tool such as ResEdit that wasn't included in the OS.
BTW, I think you're referring to the 'creator' code, not the 'type' code. The 'creator' code determined which application would open a file when it was opened. And it was per-file; I don't remember any mention of a system-wide 'default application' or anything like that.
This was actually rather nice in practice. It meant a JPEG image created by e.g., GraphicsConverter would be opened by GraphicsConverter when double-clicked; whereas one saved by a web browser would be opened in the web browser. But either could be dragged into either application in order to open the web-browser-saved image with GraphicsConverter.
Wait a tic - so the creator code forced the opening program to always be the same for a given file? There was no way to declare a preference that `.csv` files should now be opened with TextEdit or some such?
Does Mac still have resource forks? Either way, I'm sure it still has decent magic number decoding. Who (except family members in windows?) would care about the missing extensions?
The only real reason i care is for vim file detection (in Linux). How sad is that? :'(
Resource forks are still supported. At least they were with HFS+. They might be gone in APFS. In any case with each release Apple makes it harder to access them.
APFS still supports resource forks. (They're not as baked-in as HFS/HFS+, though.)
APFS has good extended attribute support. Whereas linux extended attributes might be limited to 64k, APFS extended attributes can be significantly bigger (I just created a 32MB one).
The xnu kernel has some ugly hacks (pre-dating APFS) which make the "com.apple.ResourceFork" available as a file ("/..namedfork/rsrc").
On HFS/HFS+, file/..namedfork/rsrc will always exist (and usually be empty). On APFS, file/..namedfork/rsrc will generate an ENOENT error if the resource fork/attribute doesn't exist.
Oh macOS is worse. It shows or hides extensions based on whether the user chose to include it in the save box or not. Command-line apps' output will always have the extension included.
Yeah, I have to google it every time I want to do this. It should be easy to find. Like maybe right click, show hidden files. Could be in the "View" menu. There's several obvious places it could/should be, but it's not. It's horrible ux.
https://github.com/sneak/osximage - I encourage you to snag the user setup scripts from here. The complete NBI building is out of date as it's fallen out of favor with current releases, but the configuration scripts can all still just be batch-run on a fresh install to make OS X sane.
And, there is a setting to tell it to not do that, but even then, for some files it is hidden anyways. But, it is possible to use regedit to change it so that the extensions are not hidden even for those cases.
On windows? I've never experienced/noticed this, what extensions are still hidden? Hidden files is a separate setting from hiding extensions (which is good), but I've never noticed a file that wouldn't display the extension after changing the setting. I can't remember specifically for any of the earlier versions, but this has been my experience on 7 & 10.
Google must really hate URLs. My search results recently stopped showing the full path of the URL, just the domain name. It was a huge pain because I was looking for an item at Ikea and couldn't tell if a result went to their American site or to their UK, Saudi Arabian, Qatari, etc. site (apparently the same item can have small differences in different countries— I almost bought the wrong lightbulbs because the UK version of my lamp uses a different size bulb).
Honestly, I suspect the motivation is either the Chrome team is too big and people are bored and looking for changes to make, or someone is angling for a promotion and wants to make a very visible change so they can show "impact" when they put together their promotion packet.
Google sees folks falling for google-secure-payments.google.via.net type URLs. They see all the gaming people do with URLs in search results (including abusing the names of other brands like Amazon).
There are lots of developers who like distinguishing these subtle topics. I can tell you in a larger enterprise deployment most of these changes will be welcomed (yes, people do still click on bogus URLs believe it or not - FAR more often than I would expect). Or think that the url amazon.lowprice.com is an amazon website in a search result.
Folks claiming google is hiding the owner of the website forget that the actual owner of the website is reflected by the END of the domain name, not the earlier parts. In some shared hosting situations this is confusing, but in the end whoever controls the end actually is in control.
Before the change: www.google.com
After the change: google.com
Before the change: google-secure-payments.google.via.net
After the change: google-secure-payments.google.via.net
Seems to me like the same trick is available.
Now do it like Firefox:
Grey: www
Black: google.com
Grey: google-secure-payments.google
Black: via.net
Warning: Check the black portion of the URL and make sure it's the right one!
I'm wondering how the particular address bar change helps alleviate this though. As I've seen it implemented, the only subdomain they'll "elide" is www, so if it is google-secure-payments.google.via.net it will still show as such in the address bar, not as via.net. So I don't understand their reasoning in how this will help from a security standpoint.
It seems to me that a decent scheme to make the domain-owner more visible would be to separate out the domain and display it separately from the entire URL, the way that Safari does.
I'm not a fan of Safari's changes, but at least its easy for me to see how they could improve security -- it's a decision with real benefits, and the only question is whether they outweigh the downsides.
But it's difficult for me to imagine a phishing scenario where hiding `www` will make things better. The only scenario I can think of (where `www` redirects to a different site) is made worse by hiding it from the URL.
via.net -> google -> secure-payments / whatever/whatever.html
I mean, I'd hate it, but at least it puts the public suffix up front and centre^Wleft. And it doesn't hide any part of the URL from the user, it just mangles it (but in a way that arguably enhances the user's ability to understand WTF they're looking at).
> Folks claiming google is hiding the owner of the website forget that the actual owner of the website is reflected by the END of the domain name, not the earlier parts.
Perhaps this distinction could be made more clear by switching to reverse domain name notation.
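Reverse domain name notation is just the hostname's labels flipped, Java-package style, so the registry-controlled suffix comes first. A trivial sketch (the function name is illustrative):

```python
def reverse_notation(host: str) -> str:
    """Reorder a hostname's labels so the controlling end leads."""
    return ".".join(reversed(host.split(".")))

print(reverse_notation("google-secure-payments.google.via.net"))
# net.via.google.google-secure-payments
```

In this form the phishing example reads net.via.google.google-secure-payments, which puts the actual owner where a left-to-right reader looks first.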
URLs represent and identify content; controlling them means you control content. In the short term these changes mean little, but in the long term this will benefit Google immensely. Google just loves slippery slopes.
Identity and payment are extremely important to any ordered system of social interaction. More than content itself, controlling them helps one control everything else.
I am trying hard to avoid believing conspiracy theories about Google.
Maybe a legal requirement for internet standards compliance makes sense?
The focus of this change is to improve the identification of content. This, coupled with payment, has been the subject of lots of fraud where users are misled about the owner of a website.
google-payments-secure.via.net is not a Google payment website despite the "google" in the domain name. The key part that signifies the ownership / identity of the person hosting the site is the END of the domain, the via.net part. That is the owner of the web property, not whatever appears before it.
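A crude way to see "the END of the domain" is to take the last two labels. This is only a sketch: real software must consult the Public Suffix List, since suffixes like co.uk span two labels, so the function below is deliberately naive and the hostnames are the thread's examples:

```python
def naive_owner(host: str) -> str:
    """Rough guess at the registrable domain: the last two labels.
    Wrong for multi-label public suffixes such as co.uk; browsers
    use the Public Suffix List instead."""
    return ".".join(host.split(".")[-2:])

print(naive_owner("google-payments-secure.via.net"))  # via.net
print(naive_owner("www.google.com"))                  # google.com
```

Even this naive version makes the point: everything left of the registrable domain is chosen by whoever controls that domain, so it carries no identity information.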
That's what Google uses as an argument too, and I am all too familiar with the problem.
The problem here is that Google is in control of the solution and the solution is designed by them without consulting everyone else and in a way that would be advantageous to their long term dominance and prosperity.
I don't trust google and they certainly don't have my consent to shape the way I and the society I live in interact with each other.
I will say it a million times if needed. I do not trust Google. Period. They ask forgiveness instead of permission, and they love slippery slopes and bait-and-switch psychology tricks to get their way.
There are other ways to do this that do not hide parts of the URL.
Educating people should not mean hiding information, but instead presenting information in a way that is more understandable. I think the Chrome team is making URLs harder to understand.
Legally requiring internet standards compliance would help in some ways, but imagine the hurdle that creates for future startups who have something different and maybe better in mind.
Well, claiming that a product is compliant when it isn't is false advertising, so that is how it should be enforced, rather than requiring compliance even from things that aren't meant to comply. They already require warning labels on some products, so maybe a warning label should be required here too.
They can propose the change as an internet standard like everyone else before letting users interact with that product? Hurdle? Yes, but the cost-benefit analysis seems obvious here.
I'm certainly not a fan of the new URI scheme, but it is worth pointing out that Safari has already made the same change. Furthermore I'm not convinced said change is a net-negative for the average consumer.
Safari added the setting "Show full website address", which does just that. I wouldn't have a problem if Chrome followed suit and defaulted to the new scheme, but gave us the option to show the full URI.
I was reading your comment thinking, "What? No!" and then kept reading. I apparently checked "Show full website address" when the change was first made.
The next change will be that it will trick users into thinking they are on a real site like example.com when they will instead be on Google.com/example.com.
But chrome will remove google.com just like http/s.
Signed HTTP exchanges. It was actually the main complaint about AMP because it caused confusion among users. Technically it makes no difference whether the content is hosted by the author or any AMP CDN.
The question is more a philosophical one. If it’s wrong that the URL doesn’t point to where files are “actually” stored. Or rather that the package delivery and package creator aren’t the same party. In my mind this hasn’t been the case for a long time now anyway.
I would argue the issue is not technical at all, and this should not be part of the discussion.
The issue is deceit and fraud. A browser should _never_ ever under any circumstances be able to display one company's domain differently than all others. Especially when it's their own domain.
Google.com, or any of Alphabet's domains, should not get special preference or treatment. And they certainly should not be trimmed out of an address for the benefit of Alphabet/Google to the detriment of all others.
To place this in context -- Safari already does this and even more, hiding the path after the domain as well.
Firefox does something in the middle where it makes the "https://www." and path lighter gray, while the domain name is black.
I think this is just about ease of use and displaying the most relevant information. Regular users think of it as "google.com", not as "https://www.google.com". And the full URL is still there whenever you click through to select or copy.
This is "mobile-first" mentality in a malignant retroactive form, where it is taking away functionality and behavior that is already established in the desktop environment to better match user expectations set in the mobile space.
Mobile browsers often elide the protocol and trivial hostnames from domains in order to economize on precious screen space of phones. Desktop browsers do not face the same constraints. A desktop browser can play to the strengths of desktop computing—taking a "desktop-first" point of view—and display the full URL with the abundant space available. See Firefox's display of the full URL with a highlighted color for the domain. Alternatively, they can be made subservient to mobile and adopt conventions established from constraints that do not exist in the desktop space.
Mobile-first is frequently damaging to new desktop software projects, but in my experience it's atypical for mobile limitations to be back-ported to established desktop software.
It's just so bizarre how strongly the Google product people continue to insist that this change is beneficial to users when so many users themselves simultaneously insist that it's not. Combined with the fact that their explanation is dubious at best (i.e. www is not technically a "special case"), I find it very hard to believe they do not have additional, confidential reasoning for making this change.
"The Chrome team values the simplicity, usability, and security of UI surfaces."
This change is the complete opposite of simple. Simple is showing the real URL. Complex is trying to remember in what cases Chrome hides part of the URL and trying to guess what website you're viewing.
If the bar was called the "http headers bar" then simple would be showing the full http headers, but it is called address bar, so the simple thing to do is to show the address.
I work for well-known tech company [redacted] and visiting our website with the www omitted does not work from within our network. But if you've just updated Chrome, it would now be unclear whether you typed the site name right.
My previous employer [redacted] had its marketing site on www and actual SaaS application product on www-less. Again a case where mis-typing would be made more confusing by Chrome.
They're not good practice on the part of the website operators, for sure, but they are real examples of the root and www domains being different. In my experience there are also lots of old or government sites that just plain don't work without the www.
And when the person phones up tech support “ma'am, does it show the www. in front of the website?” “no” “sorry you need to retype” “it still doesn't show www.” is going to happen.
Or someone will write down the URL from the screen, and when someone else types it in, it won't work.
Oh, right, and there's the same issue with sites where https:// has a different site to http://!
This is extremely annoying for a project I'm working on that involves subdomains. If I set nginx to redirect `example.com` to `www.example.com` I want to verify it in the browser.
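For reference, the redirect described there is a short server block. A sketch, assuming nginx with plain HTTP; example.com is the commenter's placeholder, and a real deployment would also handle the HTTPS listener:

```nginx
server {
    listen 80;
    server_name example.com;
    # Send the bare domain to the www host, preserving the path and query.
    return 301 http://www.example.com$request_uri;
}
```

The frustration is that once this is in place, a browser that elides www gives you no visual confirmation the redirect actually fired.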
My Chrome hasn't updated yet, but my understanding of this change was that you could display the full unmolested URL by simply clicking in the address bar. Is that not the case?
Can someone help me understand why this is a big deal? Safari has had this change for a while and I haven't felt like I'm missing anything. Where's the slippery slope? Does it have to do with AMP?
It has nothing to do with AMP. There are a lot of reasons why, but one example: two different sites can technically be hosted on the www. and the non-www versions, if the site isn't set up to redirect one to the other. So in certain scenarios this could pose a security risk: if the browser pretends that www and non-www are the same site when they're not, it can make it easier for someone who hijacks one or the other to get away with it. Now, that's fairly extreme and esoteric. But there are actually times in development when you use subdomains, which this makes more difficult, and there are other subdomains they're considering trivial which, in fact, aren't. This is a really good discussion of the topic and why this change simply ignores so many technical issues, best practices, and plain realities: https://bugs.chromium.org/p/chromium/issues/detail?id=881410
Here's a problem: people often share screenshots with the address bar in the screenshot.
If they share it from Safari with default options, the address is mostly useless; it's just the domain (and www may be elided), but it doesn't even look like a URL, so whatever.
If they share it from Chrome with these new options, if it's like the last time Chrome released this, it looks like the full URL, but it's not.
This is great and all until you (like me) need to be able to differentiate www.example.com from example.com and https://example.com versus http://example.com. My website doesn't forward example.com to www.example.com (due to errors on my part that I don't know how to fix; my email is in my profile and if you can help I'd greatly appreciate it).
I don't know if there are any scenarios in which a properly-configured website (mine isn't) needs to differentiate example.com from www.example.com. But I liked the ability to do so easily.
>need to be able to differentiate ... https://example.com versus http://example.com
this requirement is handled by the "not secure" badging, which is much more obvious than requiring users to pick out the "s" in the middle of a string.
Based on your certificate, are you using Cloudflare? If so, this shows how to handle these redirects: https://www.bybe.net/cloudflare-enforce-ssl-redirect-http-ht... I use Github pages+Cloudflare to host my page and that's how I do it, free and quite resilient.
HTTP headers are hidden. That's great and all until you need to examine them for some development task.
iframe URLs are hidden. That's great and all until you need to verify that your iframe is loading the correct content.
Yet people don't complain about these. They are happy using the dev tools to access this information that is sometimes useful for developers and almost entirely useless for ordinary users.
What a slippery slope. I imagine a future where the entire URL itself has disappeared and we live in some sort of Google-controlled walled-garden environment, like what they're trying to do with AMP[0]. Some sites only work with WWW prefixed, as the apex DNS record is misconfigured and points to nothing. I've even seen some sites point to '0.0.0.0' but have a CNAME record for WWW, and I could then view the site.
I introduced him to a new tool I built for internal development, and wanted him to access the locally running instance of our codebase.
Took us some time to figure out that chrome was trying to connect to https:// instead of http://, which wasn't enabled. I think it said something in the error message about that, but who reads these anyways.
This behavior will train users to believe that the www in domains isn't important, when it actually serves a very important purpose.
You can't cname example.org, which makes it very hard to use a CDN to serve it unless the CDN provides anycast ips or you delegate DNS to the CDN.
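In zone-file terms (all names illustrative), the restriction looks like this: a CNAME cannot coexist with the SOA/NS records that must live at the zone apex (per RFC 1034), so only a subdomain such as www can point at a CDN hostname:

```text
; Illegal: a CNAME at the apex would coexist with the mandatory SOA/NS records.
example.org.      300  IN  CNAME  cdn.example-cdn.net.

; Legal: a CNAME on a subdomain is fine.
www.example.org.  300  IN  CNAME  cdn.example-cdn.net.
```

This is why some providers offer proprietary ALIAS/ANAME records or anycast A records for the apex: they work around a rule that standard DNS does not let you break.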
If they're intent on making the address bar useless, they may as well go whole hog, like Apple does in desktop Safari -- the address bar shows the domain only, until you click.
If the URL isn't important enough to display, then they shouldn't display it. They can display just the domain. They shouldn't display almost the URL with an important piece removed.
Edit to add: if you type in an almost url from the screen or a screenshot, it's likely to not take you to the same page, and you'll be confused as to why. If you type in the domain only and go to the home page, that's not that confusing.
Google's business model ultimately revolves around deception. They need their assets to not know or care where they really are, or who they really are talking to, or who is reviewing their words and movements.
I had to laugh when they started parading their fake-human avatar "to call and lie to people".
But really, their big value item is language engineering.
"As a ____, I want to get users to our []-controlled version of our site, without making the user aware they are on our []-controlled version of our site."
So in this sense there are points for fulfilling it. It's still the correct domain, so this is quite a niche user story.
I absolutely do not want it to hide any URI schemes or subdomains at all. (I am able to change these settings in Firefox, at least. I could also disable Unicode rendering for the domain name, but to disable Unicode rendering for the filename I had to write an extension.)
By default, Firefox's address bar hides the http:// scheme but shows the https:// scheme (to emphasize that the connection is secure). Firefox users that want to see the http:// scheme can set the "browser.urlbar.trimURLs" about:config pref to false.
I find this really annoying. We have an internal SPA with buggy routing where it'll render for both http and https but the requests made by the client fail because they mirror the protocol when hitting the backend.
It's a pretty trivial issue that has sat around for a while. While I run Firefox, I would normally distinguish what version of the site a user is on by the green lock.
While http does show a "Not secure" segment next to the URL, everything just shows as black and white in dark mode, making it harder to distinguish at a glance.
First "bug" raised this morning about naked domain not redirecting to www...
SEO departments in my experience still insist that the www subdomain is necessary (i.e. they have no idea, so better not to touch it), yet Google, instead of publicly coming out and saying that it's pointless, goes as far as hiding it. I don't get the point.
If I have one webpage with different languages and want my visitors to recognize it from the URL, what are the options?
Previously I could have www.example.com/en/ and www.example.com/fr/ etc
If subdomains are not possible, is there no hope that visitors using Chrome would see it?
We all know why they really did it: to hide the AMP page prefix in the future, to silo all internet (=Googlenet) users on their servers without raising much suspicion.
Google wants nobody to know URLs exist, so that everyone is forced to search Google for everything, even sites they know, and there visit fake AMP sites purporting to be from a server they are not, all while Google tracks every single keystroke you make.
This is just another tiny step in that overall plan, and it is 100% evil.
Don’t waste your time trying to talk the Chrome-team into reason. You are not their customer, nor their employer. They will not listen.
If you don’t like what Google is doing, use other products. Firefox, DDG, iPhones etc.
Firefox and Safari already made similar changes. Safari made the exact same change, and Firefox hides http:// already, but not https://, chrome is just using a lock icon to represent the https:// instead of the actual string.
From the company whose mobile line of business started with a music player that hid file extensions. I can still remember when I first started running into people who didn't know that the songs on their iPod were files. Certainly builds a "moat."
This is so pointless. Why? To make the URLs prettier for the average user who they assume is too stupid to function or something?
Meanwhile most URLs around the web have UUIDs or other garbage appended to the end for the sake of tracking, which I doubt they intend to do anything about any time soon, so what even is the point of hiding the protocol and a single subdomain? Just leave the URL alone.
Simple UX: don't move things around more than necessary. I honestly don't understand why people think this second click is unreasonable; the power-user shortcut (ctrl-l) already expands immediately anyway.
It's unreasonable because it violates user expectations. One click already makes the field editable and selects the text, having a second click change the text has no precedent whatsoever. It's also confusing because this means that clicking to position the insertion point suddenly moves the text out from under where you clicked (thankfully it does still place the insertion point at the right location, but that location is no longer where you're looking and pointing).
There's also just no reason for this. It's unnecessary overhead to showing the full URL and there's no benefit.
The point is that the most common thing people do when clicking the URL is typing something else in the search bar, and the second most is copying the URL. In both cases, having the URL change is a problem, because this includes people who don't know what the http:// means or what a subdomain is, and don't immediately understand when a URL is the same as another. Your extra click saves my mother, and millions like her, much more confusion than the click costs you.
You probably mostly know technically fluent people, and think ‘my mother’ is a euphemism, but it's not. If we can get to a world where she can see an unfakeable padlock and read ‘facebook.com’, rather than remember whether it was ‘https:’ or ‘https.’ in the URL bar, and which parts of the string of letters to skip, we've made her life less hostile.
The corresponding problem you're complaining about is incredibly trivial in comparison. If you're actually editing the URL, you see the whole URL. If you need to know these implementation details, they're vastly easier to find than, say, HTTP headers.
If you type something in the bar it doesn't matter what it shows.
If you copy the URL, the fact that what you copy is literally different than what you're looking at is confusing. You copied "example.com", so why does the clipboard now contain "http://www.example.com/"?
> If we can get to a world where she can see an unfakeable padlock and read ‘facebook.com’, rather than remember whether it was ‘https:’ or ‘https.’ in the URL bar, and which parts of the string of letters to skip, we've made her life less hostile.
Whether the URL bar shows "https" or a padlock has nothing at all to do with the unnecessary confusion inherent in Chrome requiring a second click on the URL bar to reveal the full URL.
Nothing in your comment even begins to explain the benefit of presenting a fully-editable text field that allegedly shows the URL, except it's still hiding information that is only revealed once you try to perform another text editing action. If someone looks at the URL bar and sees "example.com", and they click on it and it immediately turns into "https://www.example.com", that doesn't harm them in any way. In fact, that's exactly how mobile browsers such as iOS Safari have worked for years, and nobody's complained. Chrome is deliberately doing something different from all precedent, and there's no obvious reason for this beyond their desire to be unique.
I gave a straightforward argument: it keeps these complexities away from people who don't understand them as much as reasonably possible, while still leaving them within trivial reach when they are necessary. If you don't like the argument, whatever, but acting like Chrome is making this a double click instead of a single click because of some kind of differentiation strategy or ego or whatever is just silly.
Off topic: there seems to be a disconnect between the Chrome devs and users. Another instance was the automatic Chrome sign-in incident. I've lost trust in the Chrome team and have switched to Firefox full time, and honestly there's nothing that I miss. The Firefox devs seem to better understand their users and frequently blog about changes that positively impact them. I have a lot more faith in Firefox, even though it's not perfect (the Mr. Robot incident and others).
Have they though? How many of their users understand what "www" or "https" mean? For those that have a vague idea, how many ever look?
I don't like the change either, for a variety of reasons, but I don't think I'm their average user either. For the average user, seeing the domain and nothing else likely improves security.
This isn't about users; it's about a scam to trick people into thinking that something is being served from the original site when it's actually served by Google (i.e., AMP).
It's not always 1-3 characters. google-payments.sbc.net for example. For the third time, I'm not arguing in favor of Google's implementation. What I'm saying is that this has nothing to do with Google being out of touch with users as the GP suggests.
> It's not always 1-3 characters. google-payments.sbc.net for example.
It will consider google-payments trivial and hide it? I may have misunderstood something; the article clearly mentions only "www" and "m". If anything, this made me read more, and it seems they no longer hide "m" (which makes it much better, because now the only possible confusion is between www and no www, which should be quite rare).
> What I'm saying is that this has nothing to do with Google being out of touch with users as the GP suggests.
You said that:
> seeing the domain and nothing else likely improves security
Sorry, but you are arguing that it will improve security. I'm asking you to prove that it does.
I'm not arguing about whether it's in touch with their users or not; that's meaningless here. Security is not an aesthetic choice.
>Sorry but you are arguing that it will improve security. I'm asking you to prove that it does improve security
My opinion on that bit isn't cemented in yet (why I used "likely"), but people are fooled by real-sounding domain names. They don't know what TLS is and they don't know what the prefix is, but they do know the difference between "google.com" and "avs.net".
> They don't know what TLS is and they don't know what the prefix is, but they do know the difference between "google.com" and "avs.net".
Sure, but hiding www won't make google.com or avs.net any more obvious.
This is an example that I wrote in another comment.
Before the change: www.google.com
After the change: google.com
Before the change: google-secure-payments.google.via.net
After the change: google-secure-payments.google.via.net
In the past, I guess the second URL would have included www at the beginning.
The dangerous one didn't change... the not dangerous one did change but doesn't matter really. They are just as similar.
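To make the asymmetry concrete, here's a minimal sketch in Python of the kind of display elision being discussed. `TRIVIAL_LABELS` is my own assumption about the rule (per the article, only "www" is stripped now that "m" is no longer hidden); the point is that a deceptive subdomain chain passes through unchanged:

```python
# Hypothetical sketch of leading-label elision in a browser's URL display.
# Only a leading "www." is stripped; anything else is shown as-is.
from urllib.parse import urlsplit

TRIVIAL_LABELS = ("www.",)  # assumption: "m." is no longer elided

def display_host(url):
    host = urlsplit(url).hostname
    for label in TRIVIAL_LABELS:
        if host.startswith(label):
            return host[len(label):]
    return host

print(display_host("https://www.google.com/"))
# -> google.com
print(display_host("https://google-secure-payments.google.via.net/"))
# -> google-secure-payments.google.via.net (unchanged)
```

The benign URL gets shorter while the dangerous one is displayed verbatim, which is exactly the "they are just as similar" point above.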
I think the chrome team just considers its users to be your average corporate america employee, and no longer considers developers or tech literate people to be their main market.
I doubt that. The number of times they've completely broken Chrome as an intranet browser in the last few years (TLS handling, self-signed cert handling, and general settings-window reshuffling) shows they aren't catering to that market very well either, especially when they add a workaround and then remove it in a very short window, which for better or worse is often much shorter than most large organizations' change-control windows. Firefox generally keeps most workarounds working for as long as I've needed them, but Chrome's timelines seem arbitrary, and the UI changes make keeping Chrome configuration guides up to date a constant churn.
There are people whose entire workflow is constantly bypassing self-signed certificate/browser warnings, and the interface to undo an override keeps changing as well. The method to get the certificate details of the site you are connecting to (which helps with self-signed soup) has also been changing constantly over the last 5 years in Chrome, but in browsers like Firefox it has basically stayed the same. E.g., Chrome 56 https://www.ssl2buy.com/wiki/how-to-view-ssl-certificate-det... has a totally different procedure from Chrome 75, where it is back in the site details drop-down (where it was before Chrome 56).
Really it's any case where you navigate to a site and get the Chrome error page for a TLS-related reason. Many people who administer enterprise applications are not technical, so they don't even know this sort of thing is coming. They get other people to do the technical/software updates and are generally just there to keep the system alive and get value from it, but Chrome doesn't clearly explain what happened, so they go to IE/Firefox and it works fine. For most people this is the limit of their troubleshooting, and they have no recourse. Then, on top of that, the procedure or documentation they used last time (often written by a technical resource they may no longer have) no longer works, and they are stuck. It's a very frustrating experience for a lot of people and I wish Chrome handled it better.
I think the issue the poster above is referring to is things like Chrome deprecating ancient certificate features that enterprise solutions still happened to use by default 15 years after deprecation.
(One such issue was certificates with a common name and no subject alternative name.)
Yes, the common name with no SAN was one of those problems. It didn't help that 90% of all tutorials for setting up a self-signed CA only set a CN, and sometimes you had dependent internal systems. To the people who end up servicing tickets, this just appears as 'Chrome doesn't work anymore', and having to tell many people 'Chrome won't work anymore until a larger business process resolves, and there's nothing we can do about it' really sucks. Also, to an end user who might be a nontechnical administrator of an enterprise application, there was no indication it was going to become a problem; they just show up to work one day and can't work.
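For anyone stuck regenerating internal certs, here's a rough sketch of the fix those CN-only tutorials miss (hostname and filenames are placeholders; `-addext` needs OpenSSL 1.1.1+): issue the self-signed cert with a subjectAltName, since Chrome ignores the CN now.

```shell
# Self-signed cert with a SAN, which Chrome requires (the CN alone is ignored).
# "intranet.example" is a placeholder for your internal hostname.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout server.key -out server.crt -days 365 \
  -subj "/CN=intranet.example" \
  -addext "subjectAltName=DNS:intranet.example"
```

On older OpenSSL without `-addext`, the same SAN can be supplied via a config file's `subjectAltName` extension instead.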
That issue bit me in the ass, but my biggest complaint about it was the horrible error message that Chrome gave making it impossible to figure out what the problem was.
Brave (https://brave.com/download/) is a good alternative too. It doesn't have the terrible UX (IMHO) that Firefox has. But it is built with Chromium (TMK).
I use Brave as my daily driver, but this trickled down into Brave as well this morning. I was confused for a minute what I had done with a subdomain on one of my sites, and then got very frustrated when I realized the Chromium team had put this back in place. A few #omnibox... tweaks later and I have it back, but it is certainly annoying.
This is why forking Chromium is not a solution. Unless a team is committed to maintaining that fork, eventually they'll be forced to merge in any of the changes that Google wants to push.
The only way a Chromium fork works is if the team is willing to stop merging after they fork and take over development themselves.
Tried to pay off my contract & upgrade my phone on O2 UK's (mobile network operator) website.
Card transactions failed multiple times because of a blocked popup.
Ended up having to wait a week for the £600 of pending transactions to be released. (Rang up Monzo, my bank, and they told me the money held in the pending state would be released the next week, which was true!)