Google Takes Its First Steps Toward Killing the URL (wired.com)
417 points by glassworm 83 days ago | 257 comments



I’m against Chrome taking away the URL. While they make points about security and dumb users, I don’t believe they have altruistic motives.

An issue with AMP is the terrible URLs, because they are hosted by Google. Once Chrome no longer has URLs, then Chrome gets to decide what name to show you in the address bar - potentially a name that has very little to do with the actual location of the document. Maybe I’m being a bit extreme or pessimistic, but I do not believe this is a good change for the web, and I don’t think Google can be trusted to be the stewards of the internet.


They don't say anything about "dumb users", and I wish the conversation around this didn't even use that phrase. Hardhats aren't just for clumsy people, seatbelts aren't just for bad drivers, and memory protection isn't just for careless programmers. There's no shame in engineering a situation to maximize safety for the people who will use it.

There's a whole lot of smart people who like static typing for the safety benefits it brings to their programming. Web browsers are the front lines: where one click on a stringly-typed reference can cause unknown third-party code to run on your computer. You are protected only by your human ability to spot the visual difference between two strings, one of which is constructed by someone trying their hardest to trick you. If you advocate for type safety within software, you should also advocate for a better system than URIs.


> Hardhats aren't just for clumsy people, seatbelts aren't just for bad drivers, and memory protection isn't just for careless programmers. There's no shame in engineering a situation to maximize safety for the people who will use it.

There's an inherent bias in the choice of analogy. There are many potential metaphors that aren't as favorable as hardhats or seat belts. How about a leash or blinders?

> There's a whole lot of smart people who like static typing for the safety benefits it brings to their programming. Web browsers are the front lines: where one click on a stringly-typed reference can cause unknown third-party code to run on your computer.

URIs have a rather well defined type and you can easily detect a malformed URI. The problem Google seems to intend to address, and the way they intend to address it, has nothing to do with the type of the URI but ultimately how its value should be conveyed and heuristic restrictions on what forms the values may take. This leads to situations where valid URIs are treated as invalid, based on arbitrary heuristics, not a tightening of the type of value a URI encodes.

The more apparent problem, from your description, seems to be that websites can at all use your computer to execute arbitrary third-party code. Addressing this by vaguely limiting what URIs you may access is a weird way not to take the bull by the horns. Somehow hiding information that is part of the URI, in order to make it more clear what resource the URI represents, is just ass-backwards.


> URIs have a rather well defined type and you can easily detect a malformed URI.

The average internet user is unlikely to be able to detect a malicious URL. Do you think the average person can tell which of these URLs is legitimate, and owned by Example inc.?

- example.com/profiles/al

- example.com.profiles.al

- examp1e.com/profiles/al

- example.co/profiles/al

etc etc.

Safari already only shows the hostname to help with visual identification - this doesn't help with different-but-similar hosts, but it does help regular users to see what website they're on, if they are unfamiliar with the protocol/host/path structure of URLs. Which they shouldn't have to be.


I disagree that what Safari is doing is good for the user. Safari used to show https://stripe.ian.sh/ as "Stripe, Inc". By hiding the URL, it significantly increased the phishing potential of the website. Feel free to visit the above false Stripe site; it's not malicious and has a great write-up of the issue. In this case they are putting a lot of blame on EV certificates - which I agree cause more harm than good - but Safari's decision to cover up the URL made the issue significantly worse.


that is unrelated to the "hide the path" functionality - as you pointed out, it's because they displayed the EV name instead of the URL. That's a separate, and much more harmful, UI choice - because EV certificates are poor evidence of association.


> The average internet user is unlikely to be able to detect a malicious URL. Do you think the average person can tell which of these URLs is legitimate, and owned by Example inc.?

I think you are missing my point of that statement, which is that this is not a problem with the type of the URI, and that there is no basis for the idea that "If you advocate for type safety within software, you should also advocate for a better system than URIs."

Either way, to answer your question, no one can tell which of those URLs are legitimate without Example Inc. first communicating their official address to them.

> Safari already only shows the hostname to help with visual identification - this doesn't help with different-but-similar hosts, but it does help regular users to see what website they're on, if they are unfamiliar with the protocol/host/path structure of URLs. Which they shouldn't have to be.

I'm against the idea that a user shouldn't need to know what they are doing on such a fundamental level. This is an attitude that people tend to have towards software and computers in general that doesn't really exist for other useful-but-dangerous technology with mass appeal like cars. It promotes magical thinking which I think may leave the users even less aware of the risks than they already are. Risks that don't somehow stop existing because you hide trivial information from the user. If properly educating people in using these systems is not an option, maybe letting them touch the hot stove isn't such a bad thing.

When you try to water down the information embedded in a URI for the dumbest user, you invariably hide or even misrepresent it. Safari only displaying host names is a great example of this, but another favorite of mine is how Chrome displays "Secure" in the address bar to indicate HTTPS with a verified certificate. In reality, there is of course only a very limited sense in which anything I do at that address is secure. A sense which the user that this was watered down for most likely won't recognize, instead being instilled with a false sense of security. By all means, color code the different parts of the URI, add tool tips or whatever, but don't hide what's actually there from someone who has every reason to care.

When some user on example.com starts impersonating Al, how does Safari hiding everything but the domain help the user differentiate "example.com/profiles/al" from "example.com/profiles/fakeal"?


I'm not trying to be BOFH here... but it's not unreasonable to expect web users to have the same basic knowledge of a URL that they have for a telephone number. Most (perhaps nearly all?) telephone users in the US know that in 123-456-7890, the "123" is an area code, "456" is an exchange, and "7890" is the line number. URLs are not that different. A similar knowledge of URLs would serve users well.


It's not really about the structure of the URL; it's that there's not a good way for someone to know what URL goes with an IRL identity.

What's the URL for Valve's Steam?

* steam.com

* valve.com

* steamgames.net

* store.steampowdered.com

* steampowered.com

If you guessed none of them except for steampowered.com, congrats!

What's the URL for American Eagle, the clothing store?

* americaneagle.com

* americaneagleoutfitters.com

* aeo.com

* ae.com

* aerie.com

If you guessed all of them except americaneagle.com, congrats!


You're right, my phone number comment really didn't add up after I read your response. I can't think of a good solution for your example, even something as well known as Steam or American Eagle falls apart when the average person is presented with multiple choices. Short of spending loads of $$$ buying every domain similar to your company name, I don't see any good solutions. Sad state of affairs :(


This is an abuse issue at the DNS level, not a problem suitable for end user cosmetic witchery in the browser. Fix DNS abuse.


It would, but the web is a mass-market project - while it would be nice if everyone understood how to read a URL, we should cater to the lowest (within reason) common denominator - especially when it comes to security.

The threat model for phone numbers is considerably different, not least due to the link-based nature of the web and email. The URL takes on the role of both the number and the caller ID (if caller ID didn't suck) - you should be able to be confident that you're talking to who you think you're talking to.


> There's an inherent bias in the choice of analogy. There are many potential metaphors that aren't as favorable as hardhats or seat belts. How about a leash or blinders?

A leash and blinders are put on an animal which is not free, to control it. But nobody is proposing removing your freedom to use any web browser you want. In what way is that metaphor applicable?

> The problem Google seems to intend to address, and the way they intend to address it, has nothing to do with the type of the URI but ultimately how its value should be conveyed and heuristic restrictions on what forms the values may take.

I suppose it depends on how one defines "type". I'm not just thinking of the "java.lang.String" level, but the broader level of anything that can be checked by a compiler, without evaluating it.

Consider the basic problem of navigating a link. We get a big stream of bytes from the network. It's pretty easy for the computer to identify URIs in it, by the syntax of HTML and CSS and URIs themselves. It's not an easy problem for humans -- I wouldn't trust myself to always accurately identify URIs in an arbitrary buffer! It's hard to tell what's a valid URI, or where (say) the URI ends and a color or some raw text begins. That's a type problem, and humans are bad at it.

This project sounds like the next level beyond that. My computer can already parse the stream and analyze it to find the URI, and automatically paste it in my URL bar when I click near it. Nifty. But "URI" is a richly structured type (just look at the URI class in your favorite programming language), and the browser can do far more with it, even just at the UX level, than simply treating it as an opaque string.

> The more apparent problem, from your description, seems to be that websites can at all use your computer to execute arbitrary third-party code.

No, sorry if that was misleading. I was not presenting it as the problem that this team at Google aims to solve, but as an example of the security issues at stake. Just as seatbelts aren't the only safety feature keeping my face intact on the highway, a sandbox shouldn't be the only safety feature keeping my disk intact on the internet.

What's the difference between viewing the source of some malicious code, and running that malicious code? Only the type system: it's in a SCRIPT tag, or a PRE tag. What's the difference between seeing a malicious link, and following that malicious link? Pretty much the same thing.


> A leash and blinders are put on an animal which is not free, to control it. But nobody is proposing removing your freedom to use any web browser you want. In what way is that metaphor applicable?

As I said, there is an inherent bias in the choice of analogy. The ones I present are just as vague and useless as yours and present an opposite bias.

> I suppose it depends on how one defines "type". I'm not just thinking of the "java.lang.String" level, but the broader level of anything that can be checked by a compiler, without evaluating it.

How did I give you the impression that this is the level I was addressing it on? URIs have a much more restrictive type than simply a sequence of characters.

> Consider the basic problem of navigating a link. We get a big stream of bytes from the network. It's pretty easy for the computer to identify URIs in it, by the syntax of HTML and CSS and URIs themselves. It's not an easy problem for humans -- I wouldn't trust myself to always accurately identify URIs in an arbitrary buffer! It's hard to tell what's a valid URI, or where (say) the URI ends and a color or some raw text begins. That's a type problem, and humans are bad at it.

It's an easy job for the browser simply not to accept malformed URIs. Unfortunately, browsers like Chromium deliberately accept entirely malformed URIs and even interpret valid URIs the wrong way. IMO that would be a good place to start looking if you had a genuine interest in improving security.

> This project sounds like the next level beyond that. My computer can already parse the stream and analyze it to find the URI, and automatically paste it in my URL bar when I click near it. Nifty. But "URI" is a richly structured type (just look at the URI class in your favorite programming language), and the browser can do far more with it, even just at the UX level, than simply treating it as an opaque string.

Yes, because it is well defined what a URI consists of, this is easy. Chromium already color-highlights the different parts of the URI.

> What's the difference between viewing the source of some malicious code, and running that malicious code? Only the type system: it's in a SCRIPT tag, or a PRE tag. What's the difference between seeing a malicious link, and following that malicious link? Pretty much the same thing.

So what is Google doing to address this that has anything to do with the type of URIs? Absolutely nothing.


Or advocate for disabling the ability for your web browser to run 3rd party code just by clicking a link.

Google is not trustworthy because between Chrome and Search they have too much of a stake in the eventual outcome of anything that could replace URLs. Any system that eventually does replace URIs should be able to tell me at a glance: is it http, ftp or a local file? Is there a TLS cert? What is the domain of the server I am accessing? Approximately where in the site directory am I?

I simply don’t trust browsers or sites which try to obscure what is actually useful information because some UX guru told them it was unnecessary. If anything, I want URLs with even more information. Which version of “http” is in use would be a nice start at this point.


Feels like every point you mention can be designed in a user friendly way. I’m a UX designer and have some ideas for this.


Please write them down and publish somewhere; we need more proposals that empower users to be out there to see and discuss.


Good luck to you, and I mean that sincerely. I think making something “better” than the URL while maintaining its power and information is a difficult project. If you are willing to take on the challenge, more power to you.


I agree with you, I want a User Agent to empower me as I browse the web and not hide things from me. Even if the majority of users don't understand URLs (which I dispute) - it's certainly not hurting or impeding them.

In reference to running 3rd party code, knowing the URL is even more important as it decides what is considered 3rd party or not. When I'm on an AMP website, Google is the 1st party and the content provider is now considered the 3rd party. I don't agree with allowing Google to be considered the 1st party in that instance.


Different resources can be loaded with different versions of http, so, I don't know how you can say which version of http is in use. But, even if you could, I struggle to see what useful decision that would enable anyone to make outside of niche cases.


> If you advocate for type safety within software, you should also advocate for a better system than URIs.

Well, maybe. Before I advocate for a better system than URIs, I'd like to be convinced that this is a problem that can be solved in a satisfactory way and that the solution doesn't involve ceding control over Internet names to some unaccountable party.


> They don't say anything about "dumb users", and I wish the conversation around this didn't even use that phrase. Hardhats aren't just for clumsy people, seatbelts aren't just for bad drivers, and memory protection isn't just for careless programmers

There are frequently tradeoffs between safety and flexibility/performance/value, and the proper tradeoff to be made can depend on the sophistication of the user. So are you just upset that a derisive word was used, or are you arguing that in this particular case the proper trade-off is independent of user sophistication?


Both! It's condescending because it's simultaneously suggesting that:

- there's a class of users who are simply dumb

- the speaker does not consider themselves to be in this class

- one can avoid this hazard by being sufficiently smart

none of which I agree with, and none of which I see evidence for here.

Across every industry I've seen, when new safety devices are invented, old-timers brush it off as unnecessary. (Survival bias: I didn't die!) When they retire, the next generation grows up using the new safety devices, and sees no problem with it. If anything, using the safety device is a signal that you're doing something dangerous! Professional drivers wear more seatbelts, not none. The tradeoff you speak of is usually backwards from what you claim.

This one is shaping up exactly the same. People who grew up with URLs are complaining that Google is trying to "hide" them, even though that's not what anyone on this project said. The generation being born today will wonder why anyone ever used a networked computer by clicking on links with zero assistance in determining whether it was at all legitimate.


The fact that people tend to resist change (true) is logically independent of the fact that safety features carry real costs that may not be worth the benefit (also true). Finding examples of unjustified complaints is not a good argument that most complaints are unjustified.

People get used to anything. Their failure to realize what they are missing is not a good measure of how much they are missing. Indeed, the people who are most able to assess the cost of the new compared to the old are exactly the old-timers who have experienced both, not the newcomers who haven't.


You can avoid this issue through significant domain experience. Knowledge != smart. I find your assertions regarding assurance and networked computing disturbingly naive. If this 'fix' was simply altruism and a crying need, it would be apparent to more of an audience than Google. This seems more like sheep's clothing and misdirection from Google - as usual.


You are actually protected by the browser's sandbox... This analogy does not address the fact that removing the URL adds another party that can trick you, and that party is Google.


Hiding the URL has nothing to do with seatbelts on a car.


That's how analogies usually go


It's not a good one. The seatbelt is analogous to say, double checking if you really want to download thriller.mp3.exe.

Hiding the URL is like blocking out the windshield and telling the user to trust the GPS.


Another way to look at it is if you are in an accident, a seatbelt will 99.99% of the time stop you from being ejected from the car. A hard hat will prevent your skull from fracturing when hit in the head. Goggles keep foreign matter out of your eye. The SafeBrowsing blacklist prevents Chrome from automatically loading malicious payloads on unwary users. So in what way does hiding the URL measurably stop the dangerous action? It doesn't. A less hyperbolic parallel would be how cars have changed from showing gauges for engine performance to having only a single "check engine" light for any problem detected by the ECM.

That said, I think this change is inevitable. My experience with non-technical users is that the URL is irrelevant. Most do not know the difference between typing an address and typing a search phrase.


My brother is still alive because he crashed sideways into a tree.

Wearing a belt would have suffocated / decapitated him, because the car seat moved to the front.


I like that analogy :)


And: seatbelts have negligible downsides, and are very hard to use for any purpose other than accident restraint.

The analogy to hiding the URL would be something like a seatbelt that only unlatched when the car stopped in neighborhoods it considered "safe".


That's how bad analogies go.

Seatbelts in Europe are required by law; in Brasil it's possible that there are no seatbelts because it's cheaper.

Seatbelts make it safer regardless of your driving skill. As an experienced developer, I want to see the URL. I want to copy the original one.

URLs are useful beyond security. That's why they're one of the SEO variables.

It's a bad analogy


Just to point out, in Brasil seatbelts are also required by law, so it's not possible that "there are no seatbelts because it's cheaper".


The Ford Focus is the safest car in Europe. In another country it isn't; it didn't even have seatbelts. I thought it was Brasil, sorry.

Source: a cousin who has a garage and sells Fords.


I was looking past the headline to the actual content of the article, which was much less click-bait-y: "rework how browsers convey what website you're looking at, so that you don't have to contend with increasingly long and unintelligible URLs".

The article never once says "hide the URL". We have no idea what form their solution will take (and they probably don't, either, yet) but they make it clear that they know the URL itself does have value and they're not going to just hide it.


I think that Google is a big organization and the fact that the search team thinks that AMP is a good idea is not necessarily evidence that the Chrome team does - or even if they do, that they'd want to let AMP be considered as some origin other than google.com. For instance:

- Google's phishing quiz https://phishingquiz.withgoogle.com has a question where the right answer is that a URL that starts with google.com is actually an AMP page for a URL shortener that sends you to a Google login phishing page.

- The Chrome team's document about displaying URLs https://chromium.googlesource.com/chromium/src/+/master/docs... is at best neutral on the case where "a domain owner is willing to supply content from a third-party within their own address space," calling out AMP as a specific example of this, and pointing out (in the section "A Caveat on Security Sensitive Surfaces") that anything in the renderable area of the webpage is below the "line of death" and untrustworthy.

- Google search once penalized Google Chrome for breaking SEO rules. https://searchengineland.com/google-chrome-page-will-have-pa...

- Also I think I've seen Chrome developers tweet things that are less-than-happy about AMP, but I can't find them any more.

(That said, I do agree that we shouldn't be trusting Google to be stewards of the internet - and that is a huge part of the value Firefox provides, honestly - just that I think in this regard they're unlikely to abuse the trust.)


In a conflict of ideas between the Search team and the Chrome team, I'm pretty sure it's irrelevant what the Chrome team wants.


yup we're seeing product teams getting shit on all over every product space if they're not core. Amazon did the same thing to their UI/Search team with their sponsored results that turned web/app search into a massive clusterfuck of unrelated "Promoted" items


And that is why I buy less and less on Amazon. If I don't have an exact product number, searching on Amazon is a painful waste of time.

So I search on Google or eBay, find a product number, run a search in Amazon. Until a few years ago, that resulted in a cheaper price but now more often than not, Amazon is not the cheapest (even accounting for shipping); and increasingly they are no longer the fastest either.


> Google's phishing quiz https://phishingquiz.withgoogle.com

I don't trust this URL, so I won't click on it/do the quiz. Does that mean I pass?


The quiz shows you a scenario and asks you whether you think it's legitimate or not.

That link is legitimate, so no, you fail ;)


Their decision to host a quiz about that topic on a nonstandard Google domain is just too ironic.


While Google is certainly not monolithic we have previous evidence of really bad ideas being pushed across the entire product line (Google+, for one)


I commend you for taking the complaint seriously, but Google Derangement Syndrome is epidemic on HN. Any conspiracy theory that reaches even Pizzagate levels of plausibility gets upvoted, including dross like the above.


I am of the opinion that the reality behind AMP is much more insidious than security and dumb user mitigation.

I believe that AMP is the mortar that builds the solid connection between users and ALL their activities and views and logins and accounts and handles across their browsing behavior.

Which links you clicked, no matter what site you are on or who you are logged in as. It seems to completely dissolve any anonymity, period.


As always with Google, there are two outcomes. An improvement to some engineering or social issue, and an increase in the amount of data flowing into Google from the rest of the world.

Of course the latter is pure coincidence, but it's always there.


> Of course the latter is pure coincidence

You should have added a /s. It's not a coincidence, it's part of how Google say they are improving their services


I've been cussed out on HN before for saying pretty much exactly this, though. There are people who genuinely drink the Google kool-aid.


Actually, you totally have a point. Consider the case of China and its Great Firewall. If you can control the URL and separate it from the content, then you can control what people see on yet another level, e.g. "we aren't blocking what Wikipedia says about the Tiananmen Square massacre, this is what that URL actually says." tl;dr gaslighting / rewriting history on a mass scale if you can't use the URL to get to a particular server.


Now imagine taking all that data and building the "social credit score" on the sentiment analysis for the consumption habits of all users' actions online...


And adding mandatory, recorded, voice commands for a subset of functionality.


I'm happy this community is coming to its senses. When I voiced these concerns ten months ago the downvotes were in the double digits.

Most of the contention was that people were able to 'self-host' AMP, as if that was going to happen for anything but a few select cases.

> the solid connection between users and ALL their activities [] seems to completely dissolve any anonymity, period.

and wait to hear what cloudflare does!

> We collect End Users’ information when they use our Customers’ websites, web applications, and APIs. This information may include but is not limited to IP addresses, system configuration information, and other information about traffic to and from Customers’ websites


> I'm happy this community is coming to its senses. When I voiced these concerns ten months ago the downvotes were in the double digits.

I noticed that of all the controversial stuff I say on here, it's the stuff that criticizes Google that gets downvoted. The obvious conclusion is that there's a large number of Google employees here who feel, I suppose you can call it "passion."

However it's obvious to me that there's an authoritarian posture that Google is adopting. Surely the removal of the ability to disable video autoplay should have been a big red fucking flag to stop using Chrome if you value your privacy and control over your equipment. You don't need to be Richard Stallman to understand that Google is a core ad-tech company, and they're doing everything possible to facilitate that industry and its dubious profits.


Amen.


To be fair to cloudflare, this is what the website being protected pays them to do. They can't effectively block malicious actors that are 100x more sophisticated than me, without being able to analyze that data.

I understand the risk of "evil" they could pull off with such a huge portion of internet traffic flowing through them, but the "you pay us for this service" model means they don't _need_ to monetize in evil ways. Whether they end up doing that or not remains to be seen.

I'm 100% against AMP though. As noted elsewhere, Google has repeatedly demonstrated they're not to be trusted.


Correct, we do not need to monetize user traffic in evil ways. Our customer is the owner of a website, or someone exposing an API backend for an app, etc. They pay us for the service. They pay us more than it costs us to run the service. If we started monetizing our customers' traffic we'd be out of business in a flash.


well of course we all love cloudflare, it's great, we pay for it at work and I use it on almost all my projects at home.

but while that database might not be monetized by cloudflare, the mere existence of it makes it a high value target for everyone else.

We've got high confidence in CF's engineering, but we're one inch away from having everyone's skeletons in the closet running out at large.

I mean, the USA just had an election whose result was allegedly influenced one way or another by state-sponsored hacking. It doesn't get higher stakes than that.


Exactly. I am not concerned that CF will "monetize" - I am concerned they will share data with government or other parties that users have no knowledge of, and no recourse against.

Select * from population where political-affiliation is far-left AND age >30 AND reddit-user-name contains etc...

etc...


I think a lot of Google people hang out on HN, so there's a downvote brigade for Google criticism. Same with Apple, Tesla, Microsoft, and other large or cult following tech companies.


Eventually they just want to whitelist the "good" sites and make it harder to access small independent sites, further centralizing the web and getting more tools to censor content.

Imagine that they would make it hard to access a small competitor to Gmail or restrict a website that has news they don't like.


Outright restricting would seem unlikely given the antitrust scrutiny they are up against. But I can totally imagine them introducing an extra step under the garb of "protecting" users: a dialog, or a popup coupled with a red banner on the URL, is enough to discourage a significant number of people from visiting the target website.


You don't have to outright restrict. Just adding a little bit of friction will keep 90% of users away and will completely destroy adoption for any independent business.

I really think it's time to drop Chrome and to campaign for others to drop Chrome. I'm sticking with either Safari (on my Mac) or Firefox.


Introducing “trusted content providers”. Default opt in, opt out only if you acknowledge you’ll be exposed to phishing (red scary screen).


Anyone remember "AOL keywords"? You would pay AOL to make your content discoverable. And some keywords were reserved only for AOL use.


The AMP team is working on the exact opposite, which is adopting Web Packaging [1] that fixes the URL issue you are describing.

https://amphtml.wordpress.com/2018/11/13/developer-preview-o...

[1] - https://github.com/WICG/webpackage


As a nice side effect of Google pushing Web Packaging, we get one step closer to having web apps that are signed with an offline key and served with a clear version number.

This would mean you could have at least a TOFU security model, where a web app that you trust can't be replaced (without you knowing) by an insecure version you haven't seen before.

Add some binary transparency [1] logging on top of that, and it might be possible to make browser-based JavaScript crypto almost as secure as the equivalent desktop app.

[1] https://wiki.mozilla.org/Security/Binary_Transparency
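
A minimal sketch of the TOFU idea from above (Python, purely illustrative - the tofu.json store, the per-origin key fingerprint, and the version field are made-up names, not anything Web Packaging actually specifies): trust whatever signing key and version an origin presents the first time, then warn if the key changes or the version rolls back.

    import json, pathlib

    STORE = pathlib.Path("tofu.json")  # hypothetical local trust store

    def check(origin, key_fingerprint, version):
        seen = json.loads(STORE.read_text()) if STORE.exists() else {}
        prev = seen.get(origin)
        if prev is None:
            seen[origin] = {"key": key_fingerprint, "version": version}
            STORE.write_text(json.dumps(seen))
            return "first use: trusting"             # the "trust on first use" step
        if prev["key"] != key_fingerprint:
            return "WARNING: signing key changed"    # do not re-pin silently
        if version < prev["version"]:
            return "WARNING: version rollback"
        prev["version"] = version                    # remember the highest version seen
        STORE.write_text(json.dumps(seen))
        return "ok"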


You could do this quite simply by requesting trusted apps to be identified by Named Information (ni:// or nih://) URIs (see https://tools.ietf.org/html/rfc6920 ) using a digest algorithm of sufficient strength. But the ability to "seamlessly" replace web apps is something that many websites would insist on, I think. Of course ni:// and nih:// can be applied to documents as well. They work on the IPFS model, where you enter some digest of the desired content as your URI (it's actually a URN, not a URL!) and then it's the user agent's job to fetch it from wherever, perhaps in a decentralized way.
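
For anyone curious what such a name looks like, a rough Python sketch of building an RFC 6920 ni URI from a blob of content (sha-256 digest, base64url-encoded with the padding stripped). It's only an illustration of the naming scheme; the RFC also covers authorities, query parameters, and the human-readable nih form.

    import base64, hashlib

    def ni_uri(content, authority=""):
        # ni://<authority>/sha-256;<base64url digest, padding stripped>
        digest = hashlib.sha256(content).digest()
        b64 = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
        return "ni://%s/sha-256;%s" % (authority, b64)

    print(ni_uri(b"Hello World!"))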


From the parent:

> ... then Chrome gets to decide what name to show you in the address bar - potentially a name that has very little to do with the actual location of the document

That doesn't seem to be the opposite of what the parent post is describing, it seems to be an implementation of it.


How so? The web package comes from the real source you see. The only reason Google needed to serve the content from its own domain is because of security and limitation of content delivery. But with this, they can serve you a "package" that's identical to what you get from the source.


Common techniques like relative paths will allow the proxy-cache to see much deeper into a site than the end user is aware. The url bar might say that you're on some site you trust but your traffic may all still be openly readable by Google (or some other proxy). Of course, Google is always going to know about links that users click in its search result list and a huge number of sites blindly run Google (or other third party) scripts anyway but this opens up a new vector for Google to see into your traffic.



Didn't read past the first section since that appears outdated already. The AMP Project is run similarly to Node.js, having adopted an open-governance model. This was first announced here - https://amphtml.wordpress.com/2018/09/18/governance/ - and then went live later that year.

https://amphtml.wordpress.com/2018/11/30/amp-projects-new-go...

In the link above you can see non-Google decision makers that drive strategy and vision of the project.


> AMP Project is run similar to nodejs by adopting an open-governance model.

OK, but the only thing I want out of AMP is for it to not exist. Is there any chance that I can get involved in AMP's open governance with a "stop existing" goal?

It is already quite surprising that AMP has gone in the direction of "We'll accept signed webpackages and publish those so we aren't acting as the origin". But my goal is that even this should not need to exist: websites that genuinely load fast on their own, comparable to downloading the webpackage, should be ranked as high as AMP websites. And it's preferable for websites to do that. So there should be no boost for AMP websites, just a boost for fast pages, and if AMP does anything it should just provide guidelines for how to build fast pages. Examples of fast pages include HN and most things published before 1998.

At the end of the day AMP exists because it's privileged by Google Search, and AMP is privileged by Google Search because Malte Ubl has whatever amount of influence he does within Google and has convinced them that AMP is a good idea (or other people have decided it's a good idea and have put Malte Ubl in charge of making sure it happens, or whatever). No matter how many non-Google people you put on the steering committee you won't change that. You don't have the internal access to change Google's mind about it.

This is like saying that it's okay that I should be happy living in a city that always votes $party because I can get involved in the party. If my personal political views are $opposing_party, that statement is technically true but completely useless.


> OK, but the only thing I want out of AMP is for it to not exist. Is there any chance that I can get involved in AMP's open governance with a "stop existing" goal?

You can choose not to use it. It's just like when you find a project on Git(hub|lab|etc) and it uses a language, tool, or package manager you've never seen before. You either try to work with it or look at other projects.

If you don't want to deal with AMP, you can click the link icon at the top of the page, then click the link so that you actually end up on the webpage you wanted to visit. You can't force everyone to adopt the "amp shouldn't exist" model just as much as you can't force "electron shouldn't exist and everyone should write native apps" on others.


1. I do in fact click the link icon. Half the time it takes me to an AMP version of the website (not the google.com/amp version, but example.com/amp/ or something) instead of the full version.

2. Why is it being framed as me forcing everyone to adopt the "AMP shouldn't exist" model, instead of Google forcing everyone to adopt the "AMP should exist" model?

3. You're talking about me as a consumer. As a publisher, I don't want to use AMP, but I want the favorable SERP placement that comes with using AMP. I think that my website satisfies the actual goal behind AMP, of loading fast. But that isn't enough, and I have to use AMP - and force my visitors to either use AMP or click through AMP (making it slower, and defeating the point of everything). As a publisher I'm actually pretty excited about the webpackage stuff (and it'll be straightforward since I'm using a static site generator), but it's still not the same as being able to run a real website that actually loads quickly.

4. None of this answers my question, which is not "How do I, personally, avoid using AMP" but "Does the AMP open governance model, in which people can allegedly become involved in setting the direction of AMP, allow people the opportunity to make AMP cease to exist"?

5. Google's monopoly power in search results and vertical integration makes everything more complicated. Electron does not have monopoly power on native apps, and nobody is giving an artificial boost to native apps that are written in Electron. Any advantage to Electron is due to Electron's own technical merits.

My bet is that the majority of people on the AMP advisory committee are primarily there because they need to avoid unfavorable placement on the Google SERP and so they're forced to implement AMP and want to make sure they can still render half-decent web pages using AMP, not because they inherently like AMP.


The number one reason I constantly use desktop mode is so that I don't get AMP pages. I've even switched search engines to get rid of them, but still have to sometimes use Google search because it finds what I'm looking for.

>You can't force everyone to adopt the "amp shouldn't exist" model just as much as you can't force "electron shouldn't exist and everyone should write native apps" on others.

Considering that the reason AMP is even used is because Google puts AMP results higher in search results you could argue that Google might be leveraging their market position into using a technology under their control.


> If you don't want to deal with AMP, you can click the link icon at the top of the page, then click the link [...]

Apart from that that is "dealing with AMP," why is it so hard for Google to offer a "no amp in search results" setting? It's not like there is no case or desire for it.

Until they take that simple step, I see no reason to assume that pushing AMP isn't on an ethical par with distributing crapware. Google should know better.


> my goal is that even this should not need to exist: websites that genuinely load fast on their own, comparable to downloading the webpackage, should be ranked as high as AMP websites.

My understanding is this is also Malte's goal, and the goal of the Google Search folks. We need a way that Search can know that a website (a) will perform well and (b) can be preloaded in a privacy preserving manner. Right now only AMP can do this, but with Web Packages people will be able to do this without AMP. Once you can get (a) and (b) without AMP I will be super surprised if Search still prioritizes AMP.

(Disclosure: I work at Google, on making ads AMP so they don't get to run any custom JS. Speaking only for myself, not the company.)


I don’t want web packages, nor AMP, how do I get the ranking bonus and lightning bolt icon with my website (which has Google Pagespeed of 100 and Chrome Lighthouse Pagespeed of over 95)?

That’s the goal. Killing web packages and AMP, and actually ranking websites by its actual speed.

With web packages or AMP, if I navigate from Google Search to Page A, and then from Page A to Page B, Google can see that I went to page B. This is wrong. In an ideal world, Google wouldn’t be able to track anything, but as they are able to, we should limit this. As web packages and AMP lead to more ability for Google to track stuff, they need to be eradicated.


> I don’t want web packages, nor AMP, how do I get the ranking bonus and lightning bolt icon with my website (which has Google Pagespeed of 100 and Chrome Lighthouse Pagespeed of over 95)?

First, those metrics say how well optimized your site is, not how long it takes to load. For example, a tiny site that's text and a single poorly compressed image might load in 500ms but get a low score, while a large site that loads in 5s can still get a perfect score if everything is delivered in a completely optimized way. These are metrics designed for a person who is in a position to optimize a site, but not necessarily in a position to change the way the site looks. When speed is used as a ranking signal [1][2] Google isn't using metrics about optimization level, it's using actual speed.

But ok, metrics etc aside, Google could switch to using loading speed instead of AMP to determine whether a page is eligible for the carousel at the top, and whether to show the bolt icon. But AMP means a page can be preloaded without letting publishers know that they appeared in your results page. You can't just turn on preloading without solving this somehow. AMP is kind of a hacky way to do this, and I'm really looking forward to WebPackages allowing preloading for any site in a clean standard way.

> With web packages or AMP, if I navigate from Google Search to Page A, and then from Page A to Page B, Google can see that I went to page B.

No, web packages don't allow this, what makes you think they do?

(Disclosure: I work at Google on making ads AMP so they don't get to run custom JS. Previously I worked on mod_pagespeed which automatically optimizes pages to load faster and use less bandwidth. Speaking for myself and not the company.)

[1] https://webmasters.googleblog.com/2010/04/using-site-speed-i...

[2] https://webmasters.googleblog.com/2018/01/using-page-speed-i...


AMP isn't "run openly" because everything key to Google's business is non-negotiable, and at the end of the day, Google will do what Google wants. This is what I found out when a few of us tried to talk about AMP4Email, where before threatening us with the code of conduct (for bringing up valid security concerns), Malte Ubl admitted that AMP4Email would be implemented however the Gmail team wanted to implement it, and that no amount of community concerns on the GitHub were going to have any say in the matter.

The reality is the "open governance" AMP spec means nothing, because as a monopoly, the only AMP cache which actually matters is Google's. And its implementation is Google's proprietary business, not part of what they allegedly allow open governance of.


Oh god I'd forgotten about AMP4Email. It's such a blindingly obvious bad idea, and the GitHub response was overwhelmingly negative. They do not care at all.


> AMP Project is run similar to nodejs by adopting an open-governance model

I won't believe this until I see the tech lead and the committee go from being totally opaque to actually answering direct questions.

So far "after months of research" they can't even provide a description of how working groups are selected, which are the criteria, what exactly they govern and many other things described, e.g. here: https://github.com/ampproject/meta/pull/1#pullrequestreview-...

Or, e.g. they are going to have an AMP4Email working group. For a feature that no one asked for, no one wants, and no one was consulted about. How did it get there? Oh, "the Gmail team said they are going to do it". But yeah, the "Approvers" working group will surely have something to say about this. Riiiight.


How's this different from open-source Android with proprietary Play Services, or open Chromium with proprietary Translate?


No, I can see that Google presents it as them backing off.

Who are the actual decision makers? What percentage are google employees?


> a name that has very little to do with the actual location of the document

I'm sorry to tell you that you've already been tricked — not once but thousands of times. You think you're browsing wired.com? Wake up Neo, you're really on Fastly. Every major site uses CDNs. Once AMP uses Web Packaging to show the URL of the original site instead of google.com, it will be... no worse than any other CDN.


It is true. The only way you will see where you are is to observe DNS.

What browser authors like Google are doing is further hiding DNS from the user. Many years ago Firefox was already hiding and highlighting parts of URLs in the address bar to "protect" users. It will only get worse.

I often navigate by first doing an automated lookup using only non-recursive queries, then storing the address permanently until it changes (it rarely does), and finally, after I have the needed address, navigating to the URL. I see every DNS answer, and thus I can see all CDN use.
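
For the curious, a single step of that kind of lookup looks roughly like this with dnspython (a sketch of one step, not the full iterative loop; 198.41.0.4 is a.root-servers.net):

    import dns.flags, dns.message, dns.query

    # Build a query with the "recursion desired" flag cleared, so the server
    # only answers from its own data and otherwise refers you to the next servers.
    query = dns.message.make_query("example.com", "A")
    query.flags &= ~dns.flags.RD
    response = dns.query.udp(query, "198.41.0.4", timeout=5)
    print(response.answer or response.authority)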

Fastly has really emerged over recent years and is very popular among sites appearing on HN.

CDNs are popular but not all sites on HN or elsewhere use them. There are still many, many sites that do not use CDNs or share IP addresses with other sites.


> Every major site uses CDNs. Once AMP uses Web Packaging to show the URL of the original site instead of google.com

I disagree. Although wired.com might be served by Fastly, Fastly is not in the URL and is treated as a 3rd party. There are a lot of browser restrictions on 3rd parties. Having Fastly become a first party would definitely be different in terms of browser restrictions.


It's the same argument about file hierarchies. Apple doesn't think users should even be aware that there is a file hierarchy (iOS). I don't think users are confused at all by files and folders. But somehow Apple seems to think they are.

What I do not understand is that the percentage of people who didn't grow up with a computer is only going to go down, not up. These evolutions I think go in the wrong direction.


> I don't think users are confused at all by files and folders. But somehow Apples seems to think they are.

Agreed. And I am glad that they were forced to reconsider and have introduced a Files app (https://www.imore.com/files-app) on iOS.


It still doesn't seem possible to attach a USB drive, though :(


What does AMP have to do with anything? The fine article is about flagging potential URL homograph attacks. It's a reasonable thing for a browser to do, since those attacks are incredibly hard to spot.

I don't know why Wired clickbaited the headline like this.


They've been running wild with "KILLING THE URL" for some time now. Put it right next to "Google murdering ad blockers" and "Google decapitating hangouts classic".

Is there any unbiased article out there going over the proposals?


WIRED published a (relatively positive!) article last fall entitled "Google Wants to Kill the URL": https://www.wired.com/story/google-wants-to-kill-the-url/ I'm interpreting the title as a reference to that.

And the relevance is that AMP shows its own fake-URL bar, and Google could choose to "kill the URL" by trusting https://www.google.com/amp/ with not-URL overrides. (But, they could do that presently and trust themselves with URL overrides.)


The reasonable thing would be to display the real domain, and the punycode one, in the same bar. Then you can at least detect some homograph attacks. Or possibly use a monospace font for the URL bar, so that 0 with a dash and O are actually distinguishable at first glance.
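
A rough sketch of showing both forms side by side (Python; the lookalike host is only an illustration, with a Cyrillic "а" swapped in for the Latin one):

    # Display the Unicode form next to the punycode form, as suggested above.
    unicode_host = "\u0430pple.com"                   # Cyrillic "а" + "pple.com"
    punycode_host = unicode_host.encode("idna").decode("ascii")
    print(unicode_host, "->", punycode_host)          # аpple.com -> xn--pple-43d.com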


Please distinguish between fonts-for-data and monospace-fonts. E.g. http://input.fontbureau.com/info/#writing has a sample of a not-actually-monospace font with IMHO awesome legibility in the mentioned "matter-of-fact" category of typefaces.

0 with a dash and O without are not at all bound to monospaced fonts.

FYI, I left console editors due to their inability to handle my preferred Input Sans Narrow Light 14pt (16pt on my 110 dpi screen), just so you can understand the pain of monospace.


Of course, but a monospace font is already built into all operating systems, so this could be a quick and dirty fix.


I think the only hope would be pushing more people to drop Chrome and start using different browsers. This would reduce the leverage Google has to bully other browsers to follow them.

This might sound impossible today, but 10 years ago we'd be saying the same thing about IE.


Firefox has become increasingly usable. I've transitioned to it and DuckDuckGo, using Chrome only for the Google business apps. On mobile on Android it gets mad and freez-y if you open too many tabs, and on desktop it gets angry and pause-y if you're behind a proxy, but otherwise it's pretty decent.


Killing or obfuscating the URL is really only good for Google. They don’t want people going directly to any website, they want you arriving at it through a Google property so they can pitch ads to you or at least track you through that funnel.


It's dumb that I actually have to say this.... But chrome isn't actually killing URLs or even taking steps toward killing URLs, despite the clickbait story title.

Instead they're taking steps toward better identifying fraud. But nobody would read the story if that's all it said.


Reminds me of chromium dev discussions about severely limiting ad-blockers, for your safety of course, and only for the users.

Not for Google's massive benefit.


If I'm understanding correctly, it sounds like the plan is to use heuristics and machine learning to guess which URLs look "tricky".

I'm highly skeptical of an approach that involves training users to rely on a black-box ML system. That just makes them ever more dependent on technology they can't possibly understand and puts more power in Google's hands. By being the sole arbiter of what is "tricky," Google gets to blacklist the entire Internet.

It would be better to help users understand the URL. I don't mean expecting users to parse the syntax on sight; I mean finding ways to display or represent it so that the important information is easier to see and fraud is easier to spot.


Along the lines of your last thought (helping users understand the URI, displaying it in an idiot-proof manner), this wouldn't be at all hard if they simply had 4 separate areas for protocol, hostname if any, domain & public suffix combined, and path. e.g.:

    https://www.example.net/foo.html
becomes:

    [https] [www] [example.net] [foo.html]
Then they can colour the public suffix e.g. black and the rest of it light-grey, much like they do already, BUT it's also clear which box you always need to look at to determine the site's identity.

It could go even further and obscure the contents of the first, second, and fourth boxes, until you mouseover or focus it (but all of the boxes should appear light red in background for http, and light green for EV, even if you can't see the text in them), and the last one should be far from the one before it, to avoid e.g.:

    [https] [www.example.net] [example.org] [foo.html]
    [https] [www] [example.org] [www.example.net/foo.html]
(It would be easy to accidentally think you were somewhere at example.net with both of the above, even though you're really somewhere at example.org)

Clicking on any box (or the regular Ctrl+L) could turn it back into one box (for easy URI copying) and defocusing it will revert it again. Power users could set a knob to simply always display the 1 bar they've been looking at for the last 25+ years.

Maybe there could even be a conditional 5th area for the query parameters (GET variables) which isn't even shown by default (without input area focus), who knows.

    [https] [news] [ycombinator.com] [reply] [id=19032043&goto=item%3Fid%3D19031237%2319032043]
Just my wild 4am ideas... probably lots of things wrong with it I can't imagine right now.
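
A rough sketch of the splitting itself, if anyone wants to play with the idea (Python; the registrable-domain step here is a naive "last two labels" stand-in - a real implementation would consult the Public Suffix List):

    from urllib.parse import urlsplit

    def four_boxes(url):
        parts = urlsplit(url)
        labels = (parts.hostname or "").split(".")
        registrable = ".".join(labels[-2:])    # naive stand-in for a PSL lookup
        subdomain = ".".join(labels[:-2])
        return [parts.scheme, subdomain, registrable, parts.path.lstrip("/")]

    print(four_boxes("https://www.example.net/foo.html"))
    # ['https', 'www', 'example.net', 'foo.html']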


I'd personally invert the order of the 2nd and 3rd areas. Yes, it'll look ugly, but it's way easier for users to parse for phishing:

https://example.com.phishing.com -> [https] [phishing.com] [example.com] [foo.html]


You can go Big endian all the way,

[https] [com] [phishing] [com] [example] [foo.html]


Phishing is not the only issue with URLs.


while we're at it we could make the query parameters into a textfield which could expand into a table, for easier editing of values


Ditch the protocol and show a lock or not. My parents don’t know what “https” means.

Or ditch the protocol and not render http at all by default.


It is worse than a blacklist.

A blacklist is easy to understand: as long as we trust Google (lots of us don't), everything would be fine.

With ML, not even Google has a full picture of what's going on.


At this point ML to govern things like autoplay or address bar is just whitewashing for biased data. You feed biased training data into an ML algorithm and now it's unbiased! Select your training data set and other parameters such that you get a result you want and you're good to go. The history of machine learning for tech is a history of obvious biases creeping into training data - whether it's a medical algorithm "learning" that background markings are an indicator of disease instead of classifying the tissue it's meant to, or recidivism risk algorithms going off race & gender to the point that they produce actively bad data, or face detection algorithms thinking that asian people are squinting.

I don't even think that youtube necessarily should get an autoplay prompt on first use, but it's pretty convenient that ML-based approaches like this are used instead of much simpler approaches.

Lots of research is going into creating adversarial data given known ML algorithms, as well. If this address bar ML is running on the client (it'd have to, right?) then it's not hard to do a training run against it to come up with custom tailored URLs/sites to get the ML to classify your attack as good.


While I agree that it is whitewashing biased data, maybe they get good results with new and unseen URLs that try to look like some relevant page using the same tricks as the scam URLs in the corpus.


Really good point.

ML equals diffusion of responsibility.


No, it doesn't. Google is still responsible. The team at G maintaining it is still responsible.

An ML solution is a completeness vs correctness trade-off. ML can make the blacklist virtually infinitely long, whereas a human team would likely burn out (and make more/different mistakes).


And what do you think will happen when they mess up pas.com/signup? Do you think they'll fix that for you? I hope the team at firefox is licking their chops to implement a better and distinguishing alternative.


Until the day a Google domain is accidentally blacklisted. Then suddenly there will be an internal whitelist where they get to decide what goes in.


Does this really sound that alien to people? I seem to recall the internet being controlled and blacklisted from a large portion of the population by AOL. Recall when people used to think that AOL keyword search was the entirety of the internet? This doesn't seem that much different from the old AOL tactics in my opinion.


> Recall when people used to think that AOL keyword search was the entirety of the internet?

I'm not the OP but I personally don't remember any of that because I'm not an American (like a major part of populace on the Internet) and I've never used AOL. And maybe AOL failed in America exactly because it did the things you mention they were doing, i.e. "controlling and blacklisting" a large part of the Internet.


Not alien but just a highly undesirable outcome.

In the 90s while mass AOL CD mailings were going out there was fear that "AOLization" of the internet would happen.

The same incentives for AOL curated and walled garden are present today for Google, Facebook etc.


AOL wasn't the internet. It was a private network that eventually allowed you access to the internet as its popularity increased. Once dialup died they made a broadband client.

If Google is trying to make their own private internet on top of the public internet I'm sure a few antitrust regulators will start asking about their hold on search and ad markets.


Agree. And what fun we'll have when Google's ML system screws up our authentic site's classification. I'm sure they'll jump right up with an apology.


>I'm highly skeptical of an approach that involves training users to rely on a black-box ML system.

Google did it with YouTube; if they do it to Chrome, I don't know if they can handle the developer frustration that will ensue (I'll put a nice red fullscreen "incorrect browser" banner on my website if users visit from Chrome).


It's not ML, according to an update to the article at the very end:

> Correction January 29, 10:30pm: This story originally stated that TrickURI uses machine learning to parse URL samples and test warnings for suspicious URLs. It has been updated to reflect that the tool instead assesses whether software displays URLs accurately and consistently.


My take was that the plan is to come up with a plan. Not the first time I hear of that regarding URIs.


Great example of why we need diverse browser culture, instead of a Chrome monoculture.

When we hand Google a browser monopoly, we hand them de facto authority over everything related to the web.

It will effectively be up to a single for-profit entity with questionable-at-best motives how we see the web in fundamental ways.

That's not to say the work they're doing here is bad. Might be good. And it's a long ways from production. But that's besides the point.


Exactly. Having recently switched from Chrome to Firefox, I'm puzzled by language such as "Google is thinking about killing the URL". Nope, still featured prominently in my address bar, protocol and all, and Google has zero say about whether it stays that way.


I don't see the outrage here. Using heuristics to determine the likelihood of a URL being fake, sounds like a good idea as long as it's weighted against false positives.

That said, I've never understood why browsers do not highlight the hostname separately from the path. Many phishing domains are of the form: google.com.auth.something.else.realistic.looking.tk/fake-path-stuff and are so long that the user just sees google.com and moves on. Something as simple as underlining the hostname or making the path a slightly lighter hue would be a huge usability improvement in being able to stop phished hosts.
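For what it's worth, the hostname/path split itself is trivial with the standard URL API; the hard part is only the visual treatment. A rough sketch (the rendering hint at the end uses made-up class names, not any browser's actual markup):

    // Split a URL into hostname and "everything else" so the hostname could be
    // rendered in a stronger style than the path/query/fragment.
    function splitForDisplay(raw: string): { host: string; rest: string } {
      const u = new URL(raw);
      // u.host keeps the port if present; pathname/search/hash cover the rest.
      return { host: u.host, rest: u.pathname + u.search + u.hash };
    }

    const parts = splitForDisplay(
      "https://google.com.auth.something.else.realistic.looking.tk/fake-path-stuff"
    );
    // parts.host -> "google.com.auth.something.else.realistic.looking.tk"
    // parts.rest -> "/fake-path-stuff"
    // A UI could then render the host prominently and dim the rest, e.g.
    // <span class="host">…</span><span class="rest dim">…</span>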


Firefox does highlight registered domain "looking.tk" in white font and the rest of the URL is colored gray ("google.com.auth.something.else.realistic.looking.tk" and path).


I've long been curious how exactly they determine the "registered domain", and your comment made me finally look for the answer. It looks like they use (and semi-manually maintain) a list of "effective TLDs": https://www.publicsuffix.org/

The maintenance process is described here: https://github.com/publicsuffix/list/wiki/Guidelines
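As a sketch of how that registered-domain lookup can be done outside the browser, the `psl` npm package bundles the same publicsuffix.org data (assuming its `get()` helper, which returns the registrable domain or null):

    // Assumes the `psl` npm package (npm install psl), which bundles the
    // publicsuffix.org list. psl.get() returns the registrable domain or null.
    import psl from "psl"; // with esModuleInterop; otherwise use require("psl")

    console.log(psl.get("google.com.auth.something.else.realistic.looking.tk"));
    // -> "looking.tk"  (the part a browser would emphasize)
    console.log(psl.get("news.ycombinator.com"));
    // -> "ycombinator.com"
    console.log(psl.get("co.uk"));
    // -> null (a bare public suffix, not a registrable domain)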


For me it is black for the domain vs. dark grey for the rest (FF65). I actually never noticed this before, nice!


I think its dependent upon your theme. For example with a dark theme it is like so : https://i.imgur.com/bwsyhjN.png


Oh of course! Much more pronounced there.


Chrome does this as well. news.ycombinator.com is in black and the rest of the URL is more grey.


This isn't sufficient, as it highlights the sub-domain, which is the main trick scammers are using. For example: http://google.com.gmail.inbox.totallynotgoogle.com


Changing the UX for URLs in a browser that has overwhelming market share is going to change how people think about and assess the identity of websites they visit.

Over time, it could make other browsers feel less familiar, old fashioned, and maybe even shady for most people.

It may end up improving security for some (we'll see), but it may also improve the security of Chrome's market share (whether that's the motivation behind the move or not).


The article talks about showing warnings when accessing mistyped URLs, not "killing URLs."

Nothing in the article I've read suggests they're doing anything of the kind. What a bunch of clickbait bs.

If you search this HN comments page, you'll find 3-4 other people claiming the same thing.

This article is an insult to news reporting.


Yes, I was very confused by the title as well. The introduction briefly talks about "changing the way site identity is presented" but after that it talks about just flagging suspicious URLs. Nothing in there about actually "killing" them.


There HAVE to be underlying motives at play here. If Google really cared about "dumb users" – I agree phishing and such attacks do pose a threat to the average net surfer, but surely they can use ML to bolster and enhance the warning or safe browsing system that they already use, and use that as an additional plus point or marketing point. To imply that this is being done merely for the sake of the "user" is laughable.

This is simply a red herring to (eventually) guide user interface behavior towards a system or set of systems that moves towards obscure, non-transparent and centralized control.


Very misleading headline. The article does not say Google wants to kill the URL; it says Google wants to take steps to make sure users can more easily see what domain they're on and make it harder for scammers to spoof legitimate domains.


It wouldn't be good journalism if the idea of the change wasn't all twisted to some conspiracy theory.


This is why I stopped using Chrome years ago. My current Firefox is configured to display the protocol along withe the FQDN, without any munging thankyouverymuch.

Google is self-isolating into a walled garden. Good riddance.


All of these "help the dumb Chrome user" additions to Chrome really need a way to be turned off. All the Chrome team needs to do is have a Developer Options section, which they already do in Android, that allows us advanced users to undo all the asinine changes to the browser over the past 2 years:

- OS-style scrollbars were removed [1]

- Backspace no longer goes back [2]

- Can't click on the "Lock" in URL bar to see certificate info anymore

- Tabs were redesigned, and take up more space

- Extensions are no longer a simple list, but are now gigantic unnecessary "cards"

- "Chrome Web Store" link from Extensions section is now hidden underneath hamburger menu

- Half the themes for the old design are literally broken now

Those are the biggest buggers for me, and Google simply throws up the middle finger <^> to advanced users and expects us to understand these changes are better for everyone. NO they are not! Let us turn these "features" off in Developer Options.

[1][2] There are extensions that can re-enable these top two removals. The rest cannot be changed.


> while URLs may not be going anywhere anytime soon

Nothing in the article talks about killing the URL, just flagging spam URLs like G00GLE.com (with zeros) that might be security risks.
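For a flavor of how such flagging could work in the simplest case, here is a toy "skeleton" comparison in the spirit of Unicode confusables handling. The mapping table and brand list are tiny illustrative stand-ins, not Chrome's actual logic:

    // Map visually confusable characters to a canonical "skeleton" and compare
    // against known brand labels. The table is a tiny illustrative subset.
    const CONFUSABLES: Record<string, string> = {
      "0": "o", "1": "l", "3": "e", "5": "s", "7": "t",
      "а": "a", // Cyrillic а
      "е": "e", // Cyrillic е
      "ο": "o", // Greek omicron
    };

    function skeleton(label: string): string {
      return label
        .toLowerCase()
        .split("")
        .map((ch) => CONFUSABLES[ch] ?? ch)
        .join("");
    }

    const KNOWN_BRANDS = ["google", "paypal", "apple"];

    function looksLikeKnownBrand(domainLabel: string): string | null {
      const s = skeleton(domainLabel);
      const hit = KNOWN_BRANDS.find((b) => b === s && b !== domainLabel.toLowerCase());
      return hit ?? null;
    }

    console.log(looksLikeKnownBrand("G00GLE")); // -> "google" (flag it)
    console.log(looksLikeKnownBrand("google")); // -> null (it is the brand itself)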


>"What we’re really talking about is changing the way site identity is presented," Stark told WIRED. "People should know easily what site they’re on, and they shouldn’t be confused into thinking they’re on another site. It shouldn’t take advanced knowledge of how the internet works to figure that out."

> And while URLs may not be going anywhere anytime soon, Stark emphasizes that there is more in the works on how to get users to focus on important parts of URLs and to refine how Chrome presents them.

I read that as "Google is working on decoupling site identity and navigation from URLs."


Because scammers are exploiting the way we use URLs to the detriment of users. URLs are great as a mnemonic point of entry, but you can easily scam someone off a trusted website and onto your cloned version of it without them even noticing; it's a very slow and very dumb attack where the shady URL itself is the corrupt man-in-the-middle.


I understand the purported rationale for killing URLs. My comment was clarifying where the google spokesperson made a statement I interpret as “we are going to kill URLs.”


Jesus Christ, how do they get these titles? do the people posting even read the damn article?


Instead of posting an empty rant, which breaks the site guidelines, it would be helpful if you'd suggest an accurate and neutral title, preferably using language from the article itself. Then we can change it and everyone's eyes can stop hurting.


The headline appears to be based on the first and last sentences of the intro paragraph. The latter concludes with the description:

> Google's first steps toward more robust website identity

(Headlines, in case anyone hasn't heard, are usually written by an editor rather than the author of the article.)

If the title is to be changed, I suppose that quote from the article's original language may be more accurate and less clickbaity.


That's not neutral enough. If we used that title, we'd get equally many complaints about it being biased, plus new complaints that the moderators are taking a side.


Some other less obvious title options pulled from article quotes:

> flagging sketchy URLs

> changing the way site identity is presented


lost in the discussion is that major brands spend millions buying their own name back from google.

see an ad on tv and then type in "progressive" into the omnibox and click on the first link (paid ad by Progressive Auto Insurance on its own brand) and google gets $10+

type in "progressive.com" and google gets $0


A bit conspiratorial and counter to Hanlon's Razor, but an interesting and reasonable theory. But $10 for one click? How do you know that?


The click price is open info.

A quick Google search on keyword pricing will send you to Keyword Planner, which will show the click price.

If you think it’s fake news just bid on the keyword yourself.

It’s a bit offensive to claim conspiracy when facts don’t align with your understanding of the world.


> If you think it’s fake news just bid on the keyword yourself.

I don't see where I even implied that it was fake news. In fact, I inquired as to where one could find this information.

> It’s a bit offensive to claim conspiracy when facts don’t align with your understanding of the world.

I explicitly called it: "an interesting and reasonable theory" lol. "A bit conspiratorial" does not mean "false" or fake news. And what is my understanding of the world then? I assume you know since you have suggested these facts don't align with it. I disagree, in fact these things align very well with what I consider to be my understanding of the world, but perhaps you can enlighten me.


https://www.wordstream.com/blog/ws/2011/07/18/most-expensive...

if it's your own brand you get quality score 10, so that cuts the price from the approx $54+ that another bidder would have to pay.


Whoa that's crazy! Thanks for the link.


Firefox and Duckduckgo here we come! Recent Google/Chrome changes have me baffled. It's a great time to switch. Vote with what you use folks.


This combo works fine. Also, using DDG as your default search engine gives you more diverse results, because you can also just do g! prior to the search terms to automatically send to search to EvilCorp --- oops, I mean Google --- as needed.


Hard to call these the first steps. The entire point of the omnibar is to blur search and address.


This is to protect dumb users. Can't we train them instead? I'm getting fed up of everything in my life being dumbed down so that idiots don't hurt themselves with it.

Can we put up a "you must be at least this willing to learn stuff in order to use this internet" sign on the information freeway on-ramps?


Google is a company which provides a product, Chrome. It is in their interest to protect those users in a way that they desire. Why would they leave idiots hurting themselves when they can enhance their product by protecting those idiots, which also improves user satisfaction?


People aren't dumb just because they aren't experts on URLs.


You're creating a false dichotomy.

I am defining "dumb" as "incapable of determining what url they are visiting and therefore vulnerable to scams". That's a looong way from any sane definition of "expert".


Am I dumb to not open the can of oil I put in my car and check whether its heavy metal content is too high to be fresh oil? It's just not practical — if Google can mass-check things for me with TrickURI, it will very likely protect me (a professional programmer) against phishing. Let alone non-web professionals who are nonetheless experts in their field.

I'd rather my doctor and car mechanic and grocer get to focus on their areas of expertise than have to learn some baroque rules about links in their email.


I have no idea about the car oil thing. I don't own a car and rent one when I need one. All aspects of the maintenance and care of any car I might be driving I leave to its owners. Which is not that far from what Google are proposing - that someone else (Google) takes responsibility for the machine we're operating and makes sure it's safe.

The problem, of course, is that this trains us to be incapable, and leaves us incapacitated if anything goes wrong. If my rental car breaks down I have no idea what to do except ring the rental company and hope they can send someone to fix it. Likewise, if Google's filter makes a mistake (which it will) then the user has no ability to make any kind of decision on their own. They'll click on the fake bank, lose all their money, and whose responsibility will that be? Google won't pay them back - they just provided a free tool. The bank will want to shift responsibility ("you must have done something unsafe, Google stops all phishing attempts, so you must have told them your login details"). The net result is that while most people will be safer, some people will be in a worse position than they are now.

It doesn't solve any problems for anyone, it just makes us helpless if there is a problem.


No, but they're at least lazily negligent if they spend a significant part of their day online and still haven't learned how URLs work.


Really? What else should anyone have to learn in order to get through daily life? Plumbing, electricity, typography, agriculture, electronics, woodworking, metalworking, law - the list goes on. Do you have a cursory knowledge of all those fields?


> Do you have a cursory knowledge of all those fields?

You don't have a cursory knowledge of everything you use/own and believe it's alright?

Do you know you shouldn't cut your electric wire while having them plugged in?

Do you know you shouldn't put your finger under the knife while you are cutting carrots?

Do you know you shouldn't put metal into your microwave?

You do have cursory knowledge about the tools in your house. You give an absurd list of examples in your comment, but these aren't the tools in your house; they are completely different fields that are certainly required to give you the tools/food you have, but they aren't part of the tool-set you need to live. I fully expect someone who uses a fountain pen to know how to change its ink cartridge, just like I fully expect a plumber to know how to use a wrench safely and a web user to know how to safely navigate the web.

Sure, the tools can be made safer and better, but that doesn't mean removing actual features from them. You wouldn't make the wrench out of rubber because the metal is too hard and can hurt someone; you expect the wrench user to know that, or you teach them if required.


> Do you have a cursory knowledge of all those fields?

Well... yes? Don't you?

Most of that is taught in schools, and I feel one can reasonably expect an adult to have some cursory knowledge about all of the above - enough to at least reason about the basics and to know when to hand a problem off to a professional.


This is dishonest. Things like plumbing, electricity, typography, agriculture, electronics, woodworking, metalworking etc. are the creative acts akin to programming. I did not say they should learn programming.

But you're using a knife daily, so I presume you know what is dangerous about a knife (not how to make one!). Or say you use a credit card: should you not have at least a vague idea of the concept before you use it responsibly? That's all I am asking for.


But if I hire an electrician (or even wire something myself, which I've been known to do) I don't have time to check whether the wires are aluminum instead of copper. Or if their insulation is flame-retardant. I rely on regulations and inspections and even the good will of the manufacturers.


You presumably rely on several indicators that you have learned to look for when doing this type of work. There are several indicators in a URL that suggest the likelihood of it being legit, including proper spelling, valid SSL, a reasonable path given the activity, etc.

You presumably rely on brands, yes? Same with the URL: make sure it is in fact the 'brand' you're looking for. I'm not saying it's 100% foolproof, as buying counterfeits isn't hard either. But at least do the bare minimum to check.


Yep, I do make some kind of checks. Mostly I rely on the label and brand though. And TrickURI is a kind of label verifier.

To make sure I'm not accidentally buying H0me D3p0t brand nomex.


It's dumb users that provide all the value that turned Google into a behemoth. Pretty powerful feedback loop there. The Internet wasn't ruined in a day but it sure happened quick and superorganism-like parasitic corps like Google are for the most part directly responsible.

If you train dumb users to the point of non-dumbness they will install ublock origin. Train them a bit more, they will install umatrix. Then they won't use Chrome. Or Gmail. Or anything Google.


Showing URLs is essential. Google isn't able to determine what sites are safe. You can test it by signing up for a lot of Google Alerts. I get fed many shady URLs in Google Alerts and have clicked on some of them accidentally because the URLs aren't displayed in the emails anymore. If the URLs had been clearly displayed, I wouldn't have clicked on them.

A tool that warns people when something appears wrong with a URL would be useful, but hiding URLs from users would be a terrible idea. It will create a generation of people who don't even have the capability to visually scan a URL to see if it's safe.

Technology shouldn't be dumbed-down for people so much that many people who are capable of learning how it works will never see enough of the details to become curious. I've even met professional web developers who don't understand URLs well due to the current URL trimming in most browsers.


I'm a bit confused on when the URL checking is applied. Is the browser going to maintain a list of "good" domains and validate against that? If so then that'll need a lot of maintenance. Getting yourself on that list would need to go through some process that probably isn't transparent. If your legitimate website is too similar to one on the known "good" list, then it'll also probably be a process to get that resolved too all while you're losing visitors. If you attempt to automate this then there's a new risk that someone gets a malicious site in.


The existing safe browsing system is more like a blacklist and it sounds like this new system will be similar. https://blog.chromium.org/2012/01/all-about-safe-browsing.ht... But if the machine learning model is small enough one could imagine building it into the browser.
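Roughly, the Safe Browsing client keeps only short hash prefixes of blacklisted URL expressions locally and asks the server for full hashes on a prefix hit. A simplified sketch of that flow (not Google's real client code, and it skips URL canonicalization entirely):

    import { createHash } from "crypto";

    // Simplified Safe Browsing-style local check: keep 4-byte hash prefixes
    // locally, and only consult the server when a prefix matches.
    function sha256(data: string): Buffer {
      return createHash("sha256").update(data).digest();
    }

    // Local database of hex-encoded 4-byte prefixes of hashed URL expressions.
    const localPrefixes = new Set<string>([
      sha256("evil.example/login").subarray(0, 4).toString("hex"),
    ]);

    function needsServerLookup(urlExpression: string): boolean {
      const prefix = sha256(urlExpression).subarray(0, 4).toString("hex");
      return localPrefixes.has(prefix);
    }

    console.log(needsServerLookup("evil.example/login"));   // true  -> ask server for full hash
    console.log(needsServerLookup("honest.example/login")); // false -> allow immediately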


Safari already does this to some extent, and as a dev I hate it. I have 2 jsfiddles open in separate windows. The URL area just shows "jsfiddle.net", so there is no way to see if 2 identical-looking fiddles are the same or different, nor which one is the fork and which one is the original.


I might be misunderstanding something (not familiar with jsfiddle) but if your problem is that you can't see the full URL you can change that at Preferences>Advanced>Show Full Website Address


Sure if you know to look for it you can find that setting and fix it, but how do you know which other settings also default to shady dishonest behavior and can you find all of them to fix? Obscuring the URL to present one thing as being the same as another is definitely a shady malware-style practice. Intentionally deceiving the user should not be the standard default for the default browser of one of the top OSes.


Not on Mobile Safari :(


iPad is TOTALLY a valid laptop replacement!


It is for a lot (dare I say, most?) people.


Excluding programmers or content creators - the people that Apple is trying to push away from Macs to iPads.


Solution: the True Sight browser extension, to identify CDNs like AMP [1].

It shows an icon in the address bar to inform you whether the website is in part or wholly being hosted by CDNs (eg Google, Cloudflare, CloudFront, etc).

You can block partial CDNs (eg scripts, images) using NoScript or uMatrix, for full webpage CDNs like AMP you'd have to observe the True Sight warning and navigate away manually at the moment.

For partial CDNs it's also worth noting the Decentraleyes extension, which loads popular resources from its offline source rather than from the CDNs.

[1] https://addons.mozilla.org/en-US/firefox/addon/detect-cloudf...


That's useful, but it's not what this article is about.

This article is about a tool called "TrickURI" which detects misleading names like "G00gle.com"


True Sight seems interesting. I am exploring ways to avoid Google Amp completely ...


Google continuing to undermine the founding principles of the web for that sweet sweet ad revenue, a very minor part of the actual web experience for a dwindling population of ad clickers. Digging a hole straight down, sugarcoated with BS and delusion.


Its all about control. They want to own you.


This is an attempt to seal off one of the main pathways out of interaction with its monopoly. “This will make you safer” is a common refrain of those who seek to maintain control over everything you do.


The actual work doesn't seem to be about that at all; it's about a tool called TrickURI that identifies names that are visually confusing, such as "G00gle.com". The headline is clickbaity.

So if I go register a name "neolefty" that nobody has ever heard of, TrickURI isn't going to object unless there's a fantastic service out there already called "ne0lefty". Which sounds legitimate to me. Kind of trademark-law-y, but consensus is how language works.


Headline is misleading. This is about Google's research into how users are misled by tricky URL's and what to do about it.


The article is about how Google PR people are now openly advocating for removing the concept of a "URL" from web browsing for security reasons. What are you talking about.


Please don't spoil a perfectly fine comment by putting in a swipe at the end.

https://news.ycombinator.com/newsguidelines.html


This is the second comment I've seen where you being a hallway monitor is far more disruptive and annoying than whatever you're responding to. FYI, making it clear that I'm being contradictory is better than a passive-aggressive declarative statement, which is a position I shouldn't have to explain, but since you're apparently so concerned about how other people communicate, there you go.


I'm sorry it's annoying. Unfortunately, you have a history of incivility on this site. Comments like "what are you talking about" have no purpose other than to add hostility, and other comments of yours have been similar (e.g. https://news.ycombinator.com/item?id=19004244 and below). That's not cool, and if you keep it up we're going to have to ban you again. So please just use HN as intended, and then we won't have to annoy you with moderation.

Among other things, that means posting with respect for fellow users, regardless of how wrong you think they are.

https://news.ycombinator.com/newsguidelines.html


I don't see any PR people mentioned in the article, just engineers.


Validate and confirm that what you're looking for is good, yes.

Obfuscate what you're looking at, no.


The only new bit of information I learned from this article is Google's Trickuri project: https://github.com/chromium/trickuri

It contains all sorts of interesting test cases to test how various URIs are displayed in various parts of the UI.


You only need a threat to introduce any kind of "safety measures". This is a trend that is done by governments all over the world as an excuse to enter and take, do surveillance and monitor all kinds of activities. It's no surprise that they always put safety in front of measures designed to align with their marketing agenda. The masses will agree unfortunately and blindly subscribe.

I'm still waiting for them to find a "safety-first" reason to ban adblock from the store.


Do you know what is not confusing at all? And is also fraud proof? AOL keywords.

I always wondered who Google would be, but I never guessed AOL.

This is a terrible idea, but I suppose it is easier to organize the world's information this way.


Came here to post this; they're re-inventing AOL keywords. It was bad then and it'll be bad now. Chrome is user-hostile.


Besides the fact that this article is clickbait, "killing URLs" is actually something that is happening. I'm surprised nobody put this in relation to the fact that Google has a search engine, something much more important to this topic than the fact that it has a web browser. Hiding URLs and making them second-class citizens means nothing alone. The reason it is done is to promote an alternative way of accessing/finding things, with the goal of controlling how people move/navigate (since this is directly translatable to ad money and behavioral data collection). Just think about every single ad-based (eg lock-in based) social network/service/web-app: they want you to navigate through their searchbox and result page, because they want to control what you see and keep you from going away.

So killing URLs isn't much about this particular action from Google or about web browsers; it's actually embodied by the much broader trend towards searchbox-centered UI and "related stuff"-based navigation (instead of an absolute classification system like trees, tag hierarchies, etc). Please f*ck off with your search engines and recommendation systems, just give me some tools to build taxonomies so I can organize myself. I know what I want and I know how I want it classified. IT was meant to organize (and process) databases, so please just stick to that; I want a library, not a bookshop.


Exactly! No need to remember websites anymore, just go back to google, search again and click on some ads while you're there. Sorry, but I don't trust Google or their self-serving proposals at all.


For everyone that says or implies that “Google is out to get us”:

We have had users who thought the small green HTTPS lock next to the URL meant a shopping site (the lock resembles a purse), so to them seeing it meant “safe shopping” by Google.

Users are much less knowledgeable than anyone here can imagine.


Aren't web certificates supposed to solve the identity issue?! It's not the first time Google has tried to kill the URL, though. The reason Google wants to kill the URL is that they want to replace it with a Google search bar, so that instead of typing in a URL you make a Google search. Google pays browsers a lot of money to make the URL bar work that way.


Why not simply show identicons that represent a hash of the domain you're on? https://vorba.ch/2018/url-security-identicons.html
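A minimal sketch of the idea: hash the domain and derive a color plus a small symmetric grid from the hash bytes. This is just to illustrate the concept, not the algorithm from the linked post:

    import { createHash } from "crypto";

    // Hash the domain, take a color from the first bytes, and fill a symmetric
    // 5x5 grid from the remaining bits. Stable for the same domain.
    function identicon(domain: string): { color: string; grid: boolean[][] } {
      const h = createHash("sha256").update(domain.toLowerCase()).digest();
      const color = `#${h.subarray(0, 3).toString("hex")}`;
      const grid: boolean[][] = [];
      for (let row = 0; row < 5; row++) {
        const cells: boolean[] = [];
        for (let col = 0; col < 3; col++) {
          const bit = row * 3 + col;
          cells.push(((h[3 + (bit >> 3)] >> (bit & 7)) & 1) === 1);
        }
        // Mirror the left half so the pattern is symmetric (and memorable).
        grid.push([cells[0], cells[1], cells[2], cells[1], cells[0]]);
      }
      return { color, grid };
    }

    const icon = identicon("news.ycombinator.com");
    console.log(icon.color); // hex color derived from the hash
    console.log(icon.grid.map((r) => r.map((c) => (c ? "#" : " ")).join("")).join("\n"));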


I don't think "simply" and "identicons" belong in the same sentence.


I just watched the talk "don't say 'simply'" the other day [1]. You're right, "simply" is wrong here.

[1]: https://jameshfisher.com/2018/09/13/dont-say-simply-writethe...


This has been a long time coming. Wacky things like 'https://' were never going to fly with the general public. Though tech support for them is going to get a bit harder now.


"Google Takes Last Steps Towards Killing the Open Web by Making Everyone Register With Centralized Certificate Authorities Just to Run an HTML Server."

I sincerely hope I live to see this quintessentially evil corporation go bankrupt.


> the complicated part is developing heuristics that correctly flag malicious sites without dinging legitimate ones.

It's not complicated, it's impossible; Google just doesn't care if they kick other people off their internet.


So, Google would become an authoritative source of all websites if this goal is achieved?

Are they proposing that the DNS standard be "updated" so that every request at runtime must verify that a website is legit?


The TrickURI tool mentioned in the article: https://github.com/chromium/trickuri


>"Killing the url" The title is a little clickbait, I guess. Google only discussed displaying a warning when a url looks phishy. Or did I miss the killing/hiding url part?


They have been doing it for a while. Removing https://, removing the padlock, removing www., hiding the path...


Https still shows up for me, as does the padlock, www is still showing as well (that change was fairly quickly reverted after the feedback they got).


Safari was doing it long before Chrome ever did.


Any large software project is taken over by bureaucrats if all basic issues have been solved by the real programmers.

Chrome introduced garbage like rounded corners, now this.

It's the Gervais principle in action.


Since the problem is with "careless" users when it comes to "noticing" or "reading" a URL (which has a position on the screen that I suppose most users know by now), I think the solution should be designed around UI/UX principles, since one of the purposes of UI/UX (correct me if I am wrong) is to let and help the user "read" and "notice" every bit of information, using every pixel on the screen.

If the goal is to help the user "notice" and "read" URLs, then why not just (for example):

- use another font for URLs, one which mitigates the similarities between characters

- make the URL bar bigger, with a bigger font

- use a small popup (à la Wikipedia) that shows the URL clearly when the user hovers over it. Something like this already happens when you hover over a link in some browsers: an easily missed grey container with a black font appears in the bottom left of the window, and it contains the URL of the link.

But if the goal is to set up "yet another input device" for a database of shady URLs and domain names, well... then you will surely need to integrate it with the UI/UX of the browser that has the greatest usage share.

Or if the goal is to hide one important part of the mechanics of the web (and make software feel even more magic...), then don't show the URL at all.


There is an AMP proposal for certificates to replace DNS for web site identity. In that scenario, Chrome would stop showing URLs and would only show the human-readable identity that signed the page, as validated by a certificate. This would allow signed site pages to be navigated offline or replicated widely or blacklisted.

Demo video & more info: https://news.ycombinator.com/item?id=17923156


I don't really want to drill into what's happening with AMP, but I am skeptical of attempts to move from domains to some concept of real world entities with signing. We tried that with EV certs, but it turned out pretty easy, albeit expensive, for bad guys to get EV certs.


I am skeptical that they can kill the URL. It's like those smart inbox startups that decided to kill email a few years ago but they ended up shutting down and email has become a stronger and more important tool in communication now. Some things don't need to be replaced or killed. If it works, it works. URL is the foundation of the web and killing it would be more difficult than killing the email. Just my 2 cents.


Frankly I'd rather use a browser that blocked all URLs with non-ASCII characters than one which didn't show the plain URL in the address bar.


Maybe I missed something, but I didn't see anything about removing address bars or promoting AMP in there... Then again, it's late here.


Isn't it convenient that Chrome's moves against the URL also make it easier for Google to MITM/proxy the whole world using AMP?


This is a classic example of 'manufacturing' a problem so you can 'solve' it in a self serving way.

At some point technical discussions will have to rise above the current naiveté and various parroted 'laws' and demonstrate an understanding of the real world, corporate and human behavior, instead of lamenting after the horse has bolted.


While I'm no great believer in their particular solution being motivated by pure altruism, it's simply false to suggest that the problem is manufactured.

Telling whether a site is being served by who you think it is is fast becoming a crucial skill, to the point where I explain at least the basics to even the most technically illiterate people I know who use a computer. It's a real problem, in substantial need of a solution.


Why can’t the solution simply be education?

I don’t buy this learned helplessness. Schools need to evolve with the times.

We teach kids calculus in school, surely we can teach them to read a freakin URL.


Why don't all browsers simply visually emphasize the TLD and second level domain (or third level in a case like example.co.uk) in any given URL?

Yes, sometimes it is not easy to recognize the real address in a scam URL like "dropbox.com.scam.ru" but browsers could make absolutely clear what the TLD and SLD are.


The version of Firefox I'm on right now does. Most of the current URL is a light gray, while the "ycombinator.com" portion is white which gives much more contrast against the dark gray background of the URL bar. (I'm using the dark theme, but I'm pretty sure the normal theme does this, too.)


I understand their motivation, but in none of the articles murmuring about them "taking away the URL" have I so far seen any concrete approach for how their replacement could look.

Even if you are willing to sacrifice the "you can write them on a napkin and share them with everyone" feature, it's not clear to me what other identifier would fundamentally solve the identity problem:

Even if you forced every website to get an extended validation certificate from a preselected CA and then based website identity solely on the certificates, what would stop you from registering a misleading company name? (There are precedents for that, btw. Search for "World Conference Center Bonn Scandal" if you want to read some hilarity)

Additionally, as the article mentions itself:

> The big challenge is showing people the parts of URLs that are relevant to their security and online decision-making, while somehow filtering out all the extra components that make URLs hard to read.

I feel the approaches we have seen so far rest on the assumption that the top and second level domains of the hostname are the only "important" parts of an URL and the rest can be hidden. I think this assumption is simply false, even for a vast number of non-technical use-cases: Often, "identity" is not just about the organisation behind an URL but also about the content - e.g., you'd like to know which article of a blog a link leads to.

More importantly, many sites are divided into user profiles, where the identity of a user is given by a subdomain or a path segment. Just knowing you're on "https://facebook.com" doesn't tell you whether you're viewing the actual profile you want to view.

Finally, even the "cruft" is sometimes important, if only for knowing it's there. E.g., I frequently remove tracking/referral arguments before sharing a link - both to make the link easier to remember and to disrupt tracking.
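For illustration, stripping the usual tracking arguments is a few lines with the URL API; the parameter list below is a common subset I chose for the example, not an exhaustive one:

    // Strip common tracking/referral arguments before sharing a link.
    const TRACKING_PARAMS = ["fbclid", "gclid", "mc_eid", "ref"];

    function stripTracking(raw: string): string {
      const u = new URL(raw);
      // Copy the keys first so deleting while iterating is safe.
      for (const key of Array.from(u.searchParams.keys())) {
        if (key.startsWith("utm_") || TRACKING_PARAMS.includes(key)) {
          u.searchParams.delete(key);
        }
      }
      return u.toString();
    }

    console.log(stripTracking("https://example.com/post?id=42&utm_source=news&fbclid=abc"));
    // -> "https://example.com/post?id=42"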

Also, unrelated:

> The Chrome security team has taken on internet-wide security issues before, developing fixes for them in Chrome and then throwing Google's weight around to motivate everyone to adopt the practice

Is that how we imagined internet governance to work? Didn't we have standards bodies like the W3C or the IETF that were supposed to make decisions on that scale?


> Search for "World Conference Center Bonn Scandal" if you want to read some hilarity

FYI: I tried, and the only Google result is this exact HN post.


Ah, apologies. I guess my optimism was too great that this made more than local news.

I can't find an english-language article about the story, so here is a german one:

https://m.dw.com/de/der-bauskandal-um-das-world-conference-c...

To summarize the story:

Bonn used to be the capital of West Germany during the Cold War. When that was over, however, Berlin got reassigned as capital and Bonn went back to being a mostly ordinary small town.

They never quite got over the demotion though, and the city made numerous attempts at staying internationally relevant. One project was to become a UN base of operations for Germany. Apparently for that, it's a requirement that you build an oversized hotel and conference venue.

The city had trouble finding investors for the project, but eventually a Korean company - "SMI Hyundai" - stepped forward.

If you want to believe the official records, then apparently due diligence went out the window at the mention of the name "Hyundai". City officials assumed that they were somehow affiliated with the automaker and were quick to trust them with city-backed loans in the millions.

It turned out they were scammers and not even remotely capable of contributing to the project. In the end, the city suffered damages of several hundred million euros.

The company had never had any relation to the automaker either. It just "happened" to have the term "Hyundai" in its name...


I feel the end game is that all websites will be served to you by Google, like an invisible proxy. And you wouldn't be able to see it. Google search already modifies the result links with google ones, to see which result you clicked. AMP plays to that goal too.


Reminds me a bit of the "URL"/transition component of Doug Crockford's post-web secure distributed app delivery idea.

https://youtu.be/O9AwYiwIvXE?t=2400


This is a worrying development. It seems like this tool would help endangered users but also penalize legitimate sites/services. Not to mention, this is Google shaping the internet to its needs again.

Also, the title is clickbait so now I feel bad for upvoting.


A better idea might be to allow EV certificates to contain a large logo which is displayed next to the URL bar, and ensure that only trademarks can be placed by their holders into these certificates.


Wouldn’t displaying the top level domain name in bold or another colour make identification easier, or am I missing something?


Yeah, folks at Google want their search engine to be the entry point for the web - not the URL.

We need to find (and promote) alternatives to Google (and Facebook). They are making the world a WORSE place.


Google should never have given PageRank rewards for 'meaningful' URLs; they should have given them for meaningful link descriptions.

URLs should be opaque, then we wouldn't have the mess of people trusting them. Also we would have HATEOAS instead of OpenAPI :D


So by obfuscating data from users, Google is salvaging the Internet?


The way they killed all their other useful products?


My wild guess, based on events this week:

The step zero has been partnering with MS to kill Firefox.


can we just kill Google instead?


Please don't post unsubstantive comments here.


I’m sure that’s a common sentiment, and I wouldn’t be too surprised if an incident occurred.

