An issue with AMP is the terrible URLs, because the pages are hosted by Google. Once Chrome no longer has URLs, then Chrome gets to decide what name to show you in the address bar - potentially a name that has very little to do with the actual location of the document. Maybe I'm being a bit extreme or pessimistic, but I do not believe this is a good change for the web, and I don't think Google can be trusted to be the stewards of the internet.
There's a whole lot of smart people who like static typing for the safety benefits it brings to their programming. Web browsers are the front lines: where one click on a stringly-typed reference can cause unknown third-party code to run on your computer. You are protected only by your human ability to spot the visual difference between two strings, one of which is constructed by someone trying their hardest to trick you. If you advocate for type safety within software, you should also advocate for a better system than URIs.
There's an inherent bias in the choice of analogy. There are many potential metaphors that aren't as favorable as hardhats or seat belts. How about a leash or blinders?
> There's a whole lot of smart people who like static typing for the safety benefits it brings to their programming. Web browsers are the front lines: where one click on a stringly-typed reference can cause unknown third-party code to run on your computer.
URIs have a rather well defined type and you can easily detect a malformed URI. The problem Google seems to intend to address, and the way they intend to address it, has nothing to do with the type of the URI but ultimately how its value should be conveyed and heuristic restrictions on what forms the values may take. This leads to situations where valid URIs are treated as invalid, based on arbitrary heuristics, not a tightening of the type of value a URI encodes.
The more apparent problem, from your description, seems to be that websites can at all use your computer to execute arbitrary third-party code. Addressing this by vaguely limiting what URIs you may access is a weird way not to take the bull by the horns. To somehow hide information that is part of the URI to make it clearer what resource the URI represents is just ass-backwards.
The average internet user is unlikely to be able to detect a malicious URL. Do you think the average person can tell which of these URLs is legitimate, and owned by Example inc.?
Safari already only shows the hostname to help with visual identification - this doesn't help with different-but-similar hosts, but it does help regular users to see what website they're on, if they are unfamiliar with the protocol/host/path structure of URLs. Which they shouldn't have to be.
I think you are missing the point of that statement, which is that this is not a problem with the type of the URI, and that there is no basis for the idea that "If you advocate for type safety within software, you should also advocate for a better system than URIs."
Either way, to answer your question, no one can tell which of those URLs are legitimate without Example Inc. first communicating their official address to them.
> Safari already only shows the hostname to help with visual identification - this doesn't help with different-but-similar hosts, but it does help regular users to see what website they're on, if they are unfamiliar with the protocol/host/path structure of URLs. Which they shouldn't have to be.
I'm against the idea that a user shouldn't need to know what they are doing on such a fundamental level. This is an attitude that people tend to have towards software and computers in general that doesn't really exist for other useful-but-dangerous technology with mass appeal like cars. It promotes magical thinking which I think may leave the users even less aware of the risks than they already are. Risks that don't somehow stop existing because you hide trivial information from the user. If properly educating people in using these systems is not an option, maybe letting them touch the hot stove isn't such a bad thing.
When you try to water some information embedded in an URI down for the dumbest user, you invariably hide or even misrepresent information. Safari only displaying host names is a great example of this, but another favorite of mine is how Chrome displays "Secure" in the address bar to indicate HTTPS with a verified certificate. In reality, it is of course only a very limited sense in which anything I do at that address is secure. A sense which the user that this was watered down for most likely won't recognize, instead being instilled with a false sense of security. By all means, color code the different parts of the URI, add tool tips or whatever, but don't hide what's actually there from someone that has every reason to care.
When some user on example.com starts impersonating Al, how does Safari hiding everything but the domain help the user differentiate "example.com/profiles/al" from "example.com/profiles/fakeal"?
What's the URL for Valve's Steam?
If you guessed none of them except for steampowered.com, congrats!
What's the URL for American Eagle, the clothing store?
If you guessed all of them except americaneagle.com, congrats!
The threat model for phone numbers is considerably different, not least due to the link-based nature of the web and email. The URL takes on the role of both the number and the caller ID (if caller ID didn't suck) - you should be able to be confident that you're talking to who you think you're talking to.
A leash and blinders are put on an animal which is not free, to control it. But nobody is proposing removing your freedom to use any web browser you want. In what way is that metaphor applicable?
> The problem Google seems to intend to address, and the way they intend to address it, has nothing to do with the type of the URI but ultimately how its value should be conveyed and heuristic restrictions on what forms the values may take.
I suppose it depends on how one defines "type". I'm not just thinking of the "java.lang.String" level, but the broader level of anything that can be checked by a compiler, without evaluating it.
Consider the basic problem of navigating a link. We get a big stream of bytes from the network. It's pretty easy for the computer to identify URIs in it, by the syntax of HTML and CSS and URIs themselves. It's not an easy problem for humans -- I wouldn't trust myself to always accurately identify URIs in an arbitrary buffer! It's hard to tell what's a valid URI, or where (say) the URI ends and a color or some raw text begins. That's a type problem, and humans are bad at it.
This project sounds like the next level beyond that. My computer can already parse the stream and analyze it to find the URI, and automatically paste it in my URL bar when I click near it. Nifty. But "URI" is a richly structured type (just look at the URI class in your favorite programming language), and the browser can do far more with it, even just at the UX level, than simply treating it as an opaque string.
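To make that concrete, here's a tiny Python sketch (just the standard library, nothing Chrome-specific, and the example URL is made up) showing how much structure a "URI" value already carries beyond an opaque string:

    from urllib.parse import urlsplit

    # Purely illustrative: split one URI into the components a browser can reason about.
    parts = urlsplit("https://user@www.example.net:8443/docs/page?lang=en#intro")
    print(parts.scheme)    # 'https'
    print(parts.username)  # 'user'
    print(parts.hostname)  # 'www.example.net'
    print(parts.port)      # 8443
    print(parts.path)      # '/docs/page'
    print(parts.query)     # 'lang=en'
    print(parts.fragment)  # 'intro'

Once the value is split like that, the UX can treat each component differently instead of rendering one undifferentiated string.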
> The more apparent problem, from your description, seems to be that websites can at all use your computer to execute arbitrary third-party code.
No, sorry if that was misleading. I was not presenting it as the problem that this team at Google aims to solve, but as an example of the security issues at stake. Just as seatbelts aren't the only safety feature keeping my face intact on the highway, a sandbox shouldn't be the only safety feature keeping my disk intact on the internet.
What's the difference between viewing the source of some malicious code, and running that malicious code? Only the type system: it's in a SCRIPT tag, or a PRE tag. What's the difference between seeing a malicious link, and following that malicious link? Pretty much the same thing.
As I said, there is an inherent bias in the choice of analogy. The ones I present are just as vague and useless as yours and present an opposite bias.
> I suppose it depends on how one defines "type". I'm not just thinking of the "java.lang.String" level, but the broader level of anything that can be checked by a compiler, without evaluating it.
How did I give you the impression that this is the level I was addressing it on? URIs have a much more restrictive type than simply a sequence of characters.
> Consider the basic problem of navigating a link. We get a big stream of bytes from the network. It's pretty easy for the computer to identify URIs in it, by the syntax of HTML and CSS and URIs themselves. It's not an easy problem for humans -- I wouldn't trust myself to always accurately identify URIs in an arbitrary buffer! It's hard to tell what's a valid URI, or where (say) the URI ends and a color or some raw text begins. That's a type problem, and humans are bad at it.
It's an easy job for the browser simply not to accept malformed URIs. Unfortunately, browsers like Chromium deliberately accept entirely malformed URIs and even interpret valid URIs the wrong way. IMO that would be a good place to start looking if you had a genuine interest in improving security.
> This project sounds like the next level beyond that. My computer can already parse the stream and analyze it to find the URI, and automatically paste it in my URL bar when I click near it. Nifty. But "URI" is a richly structured type (just look at the URI class in your favorite programming language), and the browser can do far more with it, even just at the UX level, than simply treating it as an opaque string.
Yes, because it is well defined what a URI consists of, this is easy. Chromium already color-highlights the different parts of the URI.
> What's the difference between viewing the source of some malicious code, and running that malicious code? Only the type system: it's in a SCRIPT tag, or a PRE tag. What's the difference between seeing a malicious link, and following that malicious link? Pretty much the same thing.
So what is Google doing to address this that has anything to do with the type of URIs? Absolutely nothing.
Google is not trustworthy because between Chrome and Search they have too much of a stake in the eventual outcome of anything that could replace URLs. Any system that eventually does replace URIs should be able to tell me at a glance: is it http, ftp or a local file? Is there a TLS cert? What is the domain of the server I am accessing? Approximately where in the site directory am I?
I simply don’t trust browsers or sites which try to obscure what is actually useful information because some UX guru told them it was unnecessary. If anything, I want URLs with even more information. Which version of “http” is in use would be a nice start at this point.
In reference to running 3rd party code, knowing the URL is even more important as it decides what is considered 3rd party or not. When I'm on an AMP website - google is the 1st party and the content provider is now considered the 3rd party. I don't agree with allowing google to be considered the 1st party in that instance.
Well, maybe. Before I advocate for a better system than URIs, I'd like to be convinced that this is a problem that can be solved in a satisfactory way and that the solution doesn't involve ceding control over Internet names to some unaccountable party.
There are frequently tradeoffs between safety and flexibility/performance/value, and the proper tradeoff to be made can depend on the sophistication of the user. So are you just upset that a derisive word was used, or are you arguing that in this particular case the proper trade-off is independent of user sophistication?
Using that word implies:
- there's a class of users who are simply dumb
- the speaker does not consider themselves to be in this class
- one can avoid this hazard by being sufficiently smart
none of which I agree with, and none of which I see evidence for here.
Across every industry I've seen, when new safety devices are invented, old-timers brush it off as unnecessary. (Survival bias: I didn't die!) When they retire, the next generation grows up using the new safety devices, and sees no problem with it. If anything, using the safety device is a signal that you're doing something dangerous! Professional drivers wear more seatbelts, not none. The tradeoff you speak of is usually backwards from what you claim.
This one is shaping up exactly the same. People who grew up with URLs are complaining that Google is trying to "hide" them, even though that's not what anyone on this project said. The generation being born today will wonder why anyone ever used a networked computer by clicking on links with zero assistance in determining whether it was at all legitimate.
People get used to anything. Their failure to realize what they are missing is not a good measure of how much they are missing. Indeed, the people who are most able to assess the cost of the new compared to the old are exactly the old timers who have experienced both, not the newcomers who haven't.
Hiding the URL is like blocking out the windshield and telling the user to trust the GPS.
That said, I think this change is inevitable. My experience with non-technical users is that the URL is irrelevant. Most do not know the difference between typing an address and typing a search phrase.
Wearing a belt would have suffocated / decapitated him, because the car seat moved to the front.
The analogy to hiding the URL would be something like a seatbelt that, when the car stops, only unlatches in neighborhoods it considers "safe".
Seatbelts in Europe are required by law; in Brazil it's possible that there are no seatbelts because it's cheaper.
Seatbelts make it safer regardless of your driving skill. As an experienced developer, I want to see the URL. I want to copy the original one.
URLs are useful beyond security. That's why they're one of the SEO variables.
It's a bad analogy.
Source: a cousin who has a garage and sells Fords.
The article never once says "hide the URL". We have no idea what form their solution will take (and they probably don't, either, yet) but they make it clear that they know the URL itself does have value and they're not going to just hide it.
- Google's phishing quiz https://phishingquiz.withgoogle.com has a question where the right answer is that a URL that starts with google.com is actually an AMP page for a URL shortener that sends you to a Google login phishing page.
- The Chrome team's document about displaying URLs https://chromium.googlesource.com/chromium/src/+/master/docs... is at best neutral on the case where "a domain owner is willing to supply content from a third-party within their own address space," calling out AMP as a specific example of this, and pointing out (in the section "A Caveat on Security Sensitive Surfaces") that anything in the renderable area of the webpage is below the "line of death" and untrustworthy.
- Google search once penalized Google Chrome for breaking SEO rules. https://searchengineland.com/google-chrome-page-will-have-pa...
- Also I think I've seen Chrome developers tweet things that are less-than-happy about AMP, but I can't find them any more.
(That said, I do agree that we shouldn't be trusting Google to be stewards of the internet - and that is a huge part of the value Firefox provides, honestly - just that I think in this regard they're unlikely to abuse the trust.)
So I search on Google or eBay, find a product number, run a search in Amazon. Until a few years ago, that resulted in a cheaper price but now more often than not, Amazon is not the cheapest (even accounting for shipping); and increasingly they are no longer the fastest either.
I don't trust this URL, so I won't click on it/do the quiz. Does that mean I pass?
That link is legitimate, so no, you fail ;)
I believe that AMP is the mortar that builds the solid connection between users and ALL their activities and views and logins and accounts and handles across their browsing behavior.
Which links you clicked, no matter what site you are on or who you are logged in as. It seems to completely dissolve any anonymity, period.
Of course the latter is pure coincidence, but it's always there.
You should have added a /s. It's not a coincidence, it's part of how Google says they are improving their services.
Most of the contention was that people were able to 'self-host' AMP, as if that was going to happen for anything but a few selected cases.
> the solid connection between users and ALL their activities [...] seems to completely dissolve any anonymity, period.
and wait to hear what cloudflare does!
> We collect End Users’ information when they use our Customers’ websites, web applications, and APIs. This information may include but is not limited to IP addresses, system configuration information, and other information about traffic to and from Customers’ websites
I noticed that of all the controversial stuff I say on here, it's the stuff that criticizes Google that gets downvoted. The obvious conclusion is that there's a large number of Google employees here who feel, I suppose you can call it "passion."
However it's obvious to me that there's an authoritarian posturing that Google is pursuing. Surely the removal of the ability to disable video autoplay should have been a big red fucking flag to stop using Chrome if you value your privacy and control over your equipment. You don't need to be Richard Stallman to understand that Google is a core ad-tech company, and they're doing everything possible to facilitate that industry and its dubious profits.
I understand the risk of "evil" they could pull off with such a huge portion of internet traffic flowing through them, but the "you pay us for this service" model means they don't _need_ to monetize in evil ways. Whether they end up doing that or not remains to be seen.
I'm 100% against AMP though. As noted elsewhere, Google has repeatedly demonstrated they're not to be trusted.
but while that database might not be monetized by cloudflare, the mere existence of it makes it a high value target for everyone else.
We have all this high confidence in CF's engineering, but we're one inch away from having everyone's skeletons in the closet running out at large.
I mean, the USA just had an election whose result was allegedly influenced one way or another by state-sponsored hacking. It doesn't get higher stakes than that.
Select * from population where political-affiliation is far-left AND age >30 AND reddit-user-name contains etc...
Imagine that they would make it hard to access a small competitor to Gmail or restrict a website that has news they don't like.
I really think it's time to drop Chrome and to campaign for others to drop Chrome. I'm sticking with either Safari (on my Mac) or Firefox.
 - https://github.com/WICG/webpackage
This would mean you could have at least a TOFU security model, where a web app that you trust can't be replaced (without you knowing) by an insecure version you haven't seen before.
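Roughly what I mean by TOFU, as a hedged Python sketch: pin a fingerprint the first time you fetch something and complain if it later changes. The pins.json file and the hashing of raw response bytes are my own simplification; real web packages pin signatures/keys, not content hashes.

    import hashlib, json, pathlib, urllib.request

    PIN_FILE = pathlib.Path("pins.json")  # hypothetical local pin store

    def fetch_with_tofu(url):
        # Trust on first use: remember a fingerprint the first time we see a URL,
        # and refuse (or at least warn) if a later response no longer matches it.
        body = urllib.request.urlopen(url).read()
        digest = hashlib.sha256(body).hexdigest()
        pins = json.loads(PIN_FILE.read_text()) if PIN_FILE.exists() else {}
        if url not in pins:
            pins[url] = digest  # first use: trust and record
            PIN_FILE.write_text(json.dumps(pins))
        elif pins[url] != digest:
            raise RuntimeError(url + " changed since it was first trusted")
        return body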
> ... then Chrome gets to decide what name to show you in the address bar - potentially a name that has very little to do with the actual location of the document
That doesn't seem to be the opposite of what the parent post is describing, it seems to be an implementation of it.
In the link above you can see non-Google decision makers that drive strategy and vision of the project.
OK, but the only thing I want out of AMP is for it to not exist. Is there any chance that I can get involved in AMP's open governance with a "stop existing" goal?
It is already quite surprising that AMP has gone in the direction of "We'll accept signed webpackages and publish those so we aren't acting as the origin". But my goal is that even this should not need to exist: websites that genuinely load fast on their own, comparable to downloading the webpackage, should be ranked as high as AMP websites. And it's preferable for websites to do that. So there should be no boost for AMP websites, just a boost for fast pages, and if AMP does anything it should just provide guidelines for how to build fast pages. Examples of fast pages include HN and most things published before 1998.
At the end of the day AMP exists because it's privileged by Google Search, and AMP is privileged by Google Search because Malte Ubl has whatever amount of influence he does within Google and has convinced them that AMP is a good idea (or other people have decided it's a good idea and have put Malte Ubl in charge of making sure it happens, or whatever). No matter how many non-Google people you put on the steering committee you won't change that. You don't have the internal access to change Google's mind about it.
This is like saying that it's okay that I should be happy living in a city that always votes $party because I can get involved in the party. If my personal political views are $opposing_party, that statement is technically true but completely useless.
You can choose not to use it. It's just like when you find a project on Git(hub|lab|etc) and it uses a language, tool, or package manager you've never seen before. You either try to work with it or look at other projects.
If you don't want to deal with AMP, you can click the link icon at the top of the page, then click the link so that you actually end up on the webpage you wanted to visit. You can't force everyone to adopt the "amp shouldn't exist" model just as much as you can't force "electron shouldn't exist and everyone should write native apps" on others.
2. Why is it being framed as me forcing everyone to adopt the "AMP shouldn't exist" model, instead of Google forcing everyone to adopt the "AMP should exist" model?
3. You're talking about me as a consumer. As a publisher, I don't want to use AMP, but I want the favorable SERP placement that comes with using AMP. I think that my website satisfies the actual goal behind AMP, of loading fast. But that isn't enough, and I have to use AMP - and force my visitors to either use AMP or click through AMP (making it slower, and defeating the point of everything). As a publisher I'm actually pretty excited about the webpackage stuff (and it'll be straightforward since I'm using a static site generator), but it's still not the same as being able to run a real website that actually loads quickly.
4. None of this answers my question, which is not "How do I, personally, avoid using AMP" but "Does the AMP open governance model, in which people can allegedly become involved in setting the direction of AMP, allow people the opportunity to make AMP cease to exist"?
5. Google's monopoly power in search results and vertical integration makes everything more complicated. Electron does not have monopoly power on native apps, and nobody is giving an artificial boost to native apps that are written in Electron. Any advantage to Electron is due to Electron's own technical merits.
My bet is that the majority of people on the AMP advisory committee are primarily there because they need to avoid unfavorable placement on the Google SERP and so they're forced to implement AMP and want to make sure they can still render half-decent web pages using AMP, not because they inherently like AMP.
>You can't force everyone to adopt the "amp shouldn't exist" model just as much as you can't force "electron shouldn't exist and everyone should write native apps" on others.
Considering that the reason AMP is even used is because Google puts AMP results higher in search results you could argue that Google might be leveraging their market position into using a technology under their control.
Apart from the fact that that is "dealing with AMP," why is it so hard for Google to offer a "no AMP in search results" setting? It's not like there is no case or desire for it.
Until they take that simple step, I see no reason to assume that pushing AMP isn't on an ethical par with distributing crapware. Google should know better.
My understanding is this is also Malte's goal, and the goal of the Google Search folks. We need a way that Search can know that a website (a) will perform well and (b) can be preloaded in a privacy preserving manner. Right now only AMP can do this, but with Web Packages people will be able to do this without AMP. Once you can get (a) and (b) without AMP I will be super surprised if Search still prioritizes AMP.
(Disclosure: I work at Google, on making ads AMP so they don't get to run any custom JS. Speaking only for myself, not the company.)
That's the goal. Killing web packages and AMP, and actually ranking websites by their actual speed.
With web packages or AMP, if I navigate from Google Search to Page A, and then from Page A to Page B, Google can see that I went to page B. This is wrong. In an ideal world, Google wouldn’t be able to track anything, but as they are able to, we should limit this. As web packages and AMP lead to more ability for Google to track stuff, they need to be eradicated.
First, those metrics say how well optimized your site is, not how long it takes to load. For example, a tiny site that's text and a single poorly compressed image might load in 500ms but get a low score, while a large site that loads in 5s can still get a perfect score if everything is delivered in a completely optimized way. These are metrics designed for a person who is in a position to optimize a site, but not necessarily in a position to change the way the site looks. When speed is used as a ranking signal, Google isn't using metrics about optimization level, it's using actual speed.
But ok, metrics etc aside, Google could switch to using loading speed instead of AMP to determine whether a page is eligible for the carousel at the top, and whether to show the bolt icon. But AMP means a page can be preloaded without letting publishers know that they appeared in your results page. You can't just turn on preloading without solving this somehow. AMP is kind of a hacky way to do this, and I'm really looking forward to WebPackages allowing preloading for any site in a clean standard way.
> With web packages or AMP, if I navigate from Google Search to Page A, and then from Page A to Page B, Google can see that I went to page B.
No, web packages don't allow this, what makes you think they do?
(Disclosure: I work at Google on making ads AMP so they don't get to run custom JS. Previously I worked on mod_pagespeed which automatically optimizes pages to load faster and use less bandwidth. Speaking for myself and not the company.)
The reality is the "open governance" AMP spec means nothing, because as a monopoly, the only AMP cache which actually matters is Google's. And its implementation is Google's proprietary business, not part of what they allegedly allow open governance of.
I won't believe this until I see the tech lead and the committee go from being totally opaque to actually answering direct questions.
So far, "after months of research", they can't even provide a description of how working groups are selected, what the criteria are, what exactly they govern, and many other things described, e.g. here: https://github.com/ampproject/meta/pull/1#pullrequestreview-...
Or, e.g. they are going to have an AMP4Email working group. For a feature that no one asked for, no one wants, and that was discussed with no one. How did it get there? Oh, "the Gmail team said they are going to do it". But yeah, the "Approvers" working group will surely have something to say about this. Riiiight.
Who are the actual decision makers? What percentage are google employees?
I'm sorry to tell you that you've already been tricked — not once but thousands of times. You think you're browsing wired.com? Wake up Neo, you're really on Fastly. Every major site uses CDNs. Once AMP uses Web Packaging to show the URL of the original site instead of google.com, it will be... no worse than any other CDN.
What the browser authors like Google are doing is further hiding DNS from the user. Many years ago Firefox was already hiding and highlighting parts of urls in the address bars to "protect" users. It will only get worse.
I often navigate by first doing automated lookup using only non-recursive queries, then storing the address permanently until it changes (rarely does), and finally after I have the needed address, navigating to the url. I see every DNS answer, and thus I can see all CDN use.
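For anyone curious, a rough sketch of a single non-recursive query in Python with the third-party dnspython package (9.9.9.9 is only a stand-in server for the example; the real thing walks down from the root servers and follows the referrals in the authority section):

    import dns.flags
    import dns.message
    import dns.query

    # Build an A query with the Recursion Desired bit cleared, so the server
    # answers only from its own data instead of resolving on our behalf.
    query = dns.message.make_query("www.wired.com", "A")
    query.flags &= ~dns.flags.RD
    response = dns.query.udp(query, "9.9.9.9", timeout=2.0)
    for rrset in response.answer:
        print(rrset)  # CNAME chains here are where CDN use tends to show up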
Fastly has really emerged over recent years and is very popular among sites appearing on HN.
CDNs are popular but not all sites on HN or elsewhere use them. There are still many, many sites that do not use CDNs or share IP addresses with other sites.
I disagree. Although wired.com might be served by fastly, fastly is not in the URL scheme and is treated as a 3rd party. There are a lot of browser restrictions on 3rd parties. Having fastly become a first party would definitely be different in terms of browser restrictions
What I do not understand is that the percentage of people who didn't grow up with a computer is only going to go down, not up. These developments, I think, go in the wrong direction.
Agreed. And I am glad that they were forced to reconsider and have introduced a Files app (https://www.imore.com/files-app) on iOS.
I don't know why Wired clickbaited the headline like this.
Is there any unbiased article out there going over the proposals?
And the relevance is that AMP shows its own fake-URL bar, and Google could choose to "kill the URL" by trusting https://www.google.com/amp/ with not-URL overrides. (But, they could do that presently and trust themselves with URL overrides.)
0 with a dash and O without are not at all bound to monospaced fonts.
FYI, I left console editors due to their inability to handle my preferred Input Sans Narrow Light 14pt (16pt on my 110 dpi screen), just so you can understand the pain of monospace.
This might sound impossible today, but 10 years ago we'd be saying the same thing about IE.
Instead they're taking steps toward better identifying fraud. But nobody would read the story if that's all it said.
Not for Google's massive benefit.
I'm highly skeptical of an approach that involves training users to rely on a black-box ML system. That just makes them ever more dependent on technology they can't possibly understand and puts more power in Google's hands. By being the sole arbiter of what is "tricky," Google gets to blacklist the entire Internet.
It would be better to help users understand the URL. I don't mean expecting users to parse the syntax on sight; I mean finding ways to display or represent it so that the important information is easier to see and fraud is easier to spot.
[https] [www] [example.net] [foo.html]
It could go even further and obscure the contents of the first, second, and fourth boxes, until you mouseover or focus it (but all of the boxes should appear light red in background for http, and light green for EV, even if you can't see the text in them), and the last one should be far from the one before it, to avoid e.g.:
[https] [www.example.net] [example.org] [foo.html]
[https] [www] [example.org] [www.example.net/foo.html]
Clicking on any box (or the regular Ctrl+L) could turn it back into one box (for easy URI copying) and defocusing it will revert it again. Power users could set a knob to simply always display the 1 bar they've been looking at for the last 25+ years.
Maybe there could even be a conditional 5th area for the query parameters (GET variables) which isn't even shown by default (without input area focus), who knows.
[https] [news] [ycombinator.com] [reply] [id=19032043&goto=item%3Fid%3D19031237%2319032043]
https://example.com.phishing.com -> [https] [phishing.com] [example.com] [foo.html]
[https] [com] [phishing] [com] [example] [foo.html]
Or ditch the protocol and not render http at all by default.
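A minimal Python sketch of the splitting behind these boxes, using a naive "last two labels" rule for the registrable domain (a real implementation would consult the Public Suffix List; box order follows the first example above):

    from urllib.parse import urlsplit

    def url_boxes(url):
        # Naive split into [scheme] [subdomain] [registrable domain] [path];
        # wrong for suffixes like .co.uk, which is exactly why the PSL exists.
        parts = urlsplit(url)
        labels = parts.hostname.split(".") if parts.hostname else []
        registrable = ".".join(labels[-2:])
        subdomain = ".".join(labels[:-2])
        return [parts.scheme, subdomain, registrable, parts.path.lstrip("/")]

    print(url_boxes("https://www.example.net/foo.html"))
    # ['https', 'www', 'example.net', 'foo.html']
    print(url_boxes("https://example.com.phishing.com/foo.html"))
    # ['https', 'example.com', 'phishing.com', 'foo.html']

The point is that the phishing-style hostname ends up with its registrable domain in its own box, where it is much harder to miss.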
A blacklist is easy to understand: as long as we trust Google (lots of us don't), everything would be fine.
With ML, not even Google has a full picture of what's going on.
I don't even think that youtube necessarily should get an autoplay prompt on first use, but it's pretty convenient that ML-based approaches like this are used instead of much simpler approaches.
Lots of research is going into creating adversarial data given known ML algorithms, as well. If this address bar ML is running on the client (it'd have to, right?) then it's not hard to do a training run against it to come up with custom tailored URLs/sites to get the ML to classify your attack as good.
ML equals diffusion of responsibility.
An ML solution is a completeness vs correctness trade-off. ML can make the blacklist virtually infinitely long, whereas a human team would likely burn out (and make more/different mistakes).
I'm not the OP but I personally don't remember any of that because I'm not an American (like a major part of populace on the Internet) and I've never used AOL. And maybe AOL failed in America exactly because it did the things you mention they were doing, i.e. "controlling and blacklisting" a large part of the Internet.
In the 90s, while mass AOL CD mailings were going out, there was fear that the "AOLization" of the internet would happen.
The same incentives for AOL's curated, walled garden are present today for Google, Facebook, etc.
If Google is trying to make their own private internet on top of the public internet I'm sure a few antitrust regulators will start asking about their hold on search and ad markets.
Google did it with YouTube; if they do it to Chrome I don't know if they can handle the developer frustration that will ensue (I'll put a nice red fullscreen "browser incorrect" banner on my website if users visit from Chrome).
> Correction January 29, 10:30pm: This story originally stated that TrickURI uses machine learning to parse URL samples and test warnings for suspicious URLs. It has been updated to reflect that the tool instead assesses whether software displays URLs accurately and consistently.
When we hand Google a browser monopoly, we hand them de facto authority over everything related to the web.
It will effectively be up to a single for-profit entity with questionable-at-best motives how we see the web in fundamental ways.
That's not to say the work they're doing here is bad. Might be good. And it's a long way from production. But that's beside the point.
That said, I've never understood why browsers do not highlight the hostname separately from the path. Many phishing domains are of the form: google.com.auth.something.else.realistic.looking.tk/fake-path-stuff and are so long that the user just sees google.com and moves on. Something as simple as underlining the hostname or making the path a slightly lighter hue would be a huge usability improvement in being able to stop phished hosts.
The maintenance process is described here: https://github.com/publicsuffix/list/wiki/Guidelines
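For illustration, the registrable-domain split such highlighting needs can be approximated with the third-party tldextract package, which bundles a copy of the Public Suffix List (the package choice is mine; this isn't what any browser actually ships):

    import tldextract

    host = "google.com.auth.something.else.realistic.looking.tk"
    ext = tldextract.extract(host)
    print(ext.registered_domain)  # 'looking.tk' - the part worth emphasizing
    print(ext.subdomain)          # everything before it, which an attacker can freely invent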
Over time, it could make other browsers feel less familiar, old fashioned, and maybe even shady for most people.
It may end up improving security for some (we'll see), but it may also improve the security of Chrome's market share (whether that's the motivation behind the move or not).
Nothing in the article I've read suggests they're doing anything of the kind. What a bunch of clickbait bs.
If you search this HN comments page, you'll find 3-4 other people claiming the same thing.
This article is an insult to news reporting.
This is simply a red herring to (eventually) guide user interface behavior towards a system or set of systems that moves towards obscure, non-transparent and centralized control.
Google is self-isolating into a walled garden. Good riddance.
- OS-style scrollbars were removed 
- Backspace no longer goes back 
- Can't click on the "Lock" in URL bar to see certificate info anymore
- Tabs were redesigned, and take up more space
- Extensions are no longer a simple list, but are now gigantic unnecessary "cards"
- "Chrome Web Store" link from Extensions section is now hidden underneath hamburger menu
- Half the themes for the old design are literally broken now
Those are the biggest buggers for me, and Google simply throws up the middle finger <^> to advanced users and expects us to understand these changes are better for everyone. NO they are not! Let us turn these "features" off in Developer Options.
 There are extensions that can re-enable these top two removals. The rest cannot be changed.
Nothing in the article talks about killing the URL, just flagging spam URLs like G00GLE.com (with zeros) that might be security risks.
> And while URLs may not be going anywhere anytime soon, Stark emphasizes that there is more in the works on how to get users to focus on important parts of URLs and to refine how Chrome presents them.
I read that as "Google is working on decoupling site identity and navigation from URLs."
> Google's first steps toward more robust website identity
(Headlines, in case anyone hasn't heard, are usually written by an editor rather than the author of the article.)
If the title is to be changed, I suppose that quote from the article's original language may be more accurate and less clickbaity.
> flagging sketchy URLs
> changing the way site identity is presented
See an ad on TV and then type "progressive" into the omnibox and click on the first link (a paid ad by Progressive Auto Insurance on its own brand), and Google gets $10+.
Type in "progressive.com" and Google gets $0.
A quick Google search on keyword pricing would send you to Keyword Planner, which will show the click price.
If you think it’s fake news just bid on the keyword yourself.
It’s a bit offensive to claim conspiracy when facts don’t align with your understanding of the world.
I don't see where I even implied that it was fake news. In fact, I inquired as to where one could find this information.
> It’s a bit offensive to claim conspiracy when facts don’t align with your understanding of the world.
I explicitly called it: "an interesting and reasonable theory" lol. "A bit conspiratorial" does not mean "false" or fake news. And what is my understanding of the world then? I assume you know since you have suggested these facts don't align with it. I disagree, in fact these things align very well with what I consider to be my understanding of the world, but perhaps you can enlighten me.
if it's your own brand you get quality score 10, so that cuts the price from the approx $54+ that another bidder would have to pay.
Can we put up a "you must be at least this willing to learn stuff in order to use this internet" sign on the information freeway on-ramps?
I am defining "dumb" as "incapable of determining what url they are visiting and therefore vulnerable to scams". That's a looong way from any sane definition of "expert".
I'd rather my doctor and car mechanic and grocer get to focus on their areas of expertise than have to learn some baroque rules about links in their email.
The problem, of course, is that this trains us to be incapable, and leaves us incapacitated if anything goes wrong. If my rental car breaks down I have no idea what to do except ring the rental company and hope they can send someone to fix it. Likewise, if Google's filter makes a mistake (which it will) then the user has no ability to make any kind of decision on their own. They'll click on the fake bank, lose all their money, and whose responsibility will that be? Google won't pay them back - they just provided a free tool. The bank will want to shift responsibility ("you must have done something unsafe, Google stops all phishing attempts, so you must have told them your login details"). The net result is that while most people will be safer, some people will be in a worse position than they are now.
It doesn't solve any problems for anyone, it just makes us helpless if there is a problem.
You don't have a cursory knowledge of everything you use/own and believe it's alright?
Do you know you shouldn't cut your electric wire while having them plugged in?
Do you know you shouldn't put your finger under the knife while you are cutting carrots?
Do you know you shouldn't put metal into your microwave?
You do have cursory knowledge about the tools in your house. You give an absurd list of examples in your comment, but those aren't the tools in your house; they are completely different fields that are certainly required to give you the tools/food you have, but they aren't part of the tool-set you need to live. I fully expect someone who uses a fountain pen to know how to change its ink cartridge, just like I fully expect a plumber to know how to use a wrench safely and a web user to know how to safely navigate the web.
Sure, the tools can be made safer and better, but that doesn't mean removing actual features from them. You wouldn't make the wrench out of rubber because the metal is too hard and can hurt someone; you expect the wrench user to know that, or you teach them if required.
Well... yes? Don't you?
Most of that is taught in schools, and I feel one can reasonably expect an adult to have some cursory knowledge about all of the above - enough to at least reason about the basics and to know when to hand a problem off to a professional.
But you're using a knife daily, so I presume you know what is dangerous about a knife (not how to make one!), or say you use a credit card. Should you not have at least a vague idea of the concept before you use it responsibly? That's all I am asking for.
You presumably rely on brands, yes? Same with the URL: make sure it is in fact the "brand" you're looking for. I'm not saying it's 100% foolproof, as buying counterfeits isn't hard either. But at least do the bare minimum to check.
To make sure I'm not accidentally buying H0me D3p0t brand nomex.
If you train dumb users to the point of non-dumbness they will install ublock origin. Train them a bit more, they will install umatrix. Then they won't use Chrome. Or Gmail. Or anything Google.
A tool that warns people when something appears wrong with a URL would be useful, but hiding URLs from users would be a terrible idea. It will create a generation of people who don't even have the capability to visually scan a URL to see if it's safe.
Technology shouldn't be dumbed-down for people so much that many people who are capable of learning how it works will never see enough of the details to become curious. I've even met professional web developers who don't understand URLs well due to the current URL trimming in most browsers.
It shows an icon in the address bar to inform you whether the website is in part or wholly being hosted by CDNs (eg Google, Cloudflare, CloudFront, etc).
You can block partial CDNs (eg scripts, images) using NoScript or uMatrix, for full webpage CDNs like AMP you'd have to observe the True Sight warning and navigate away manually at the moment.
For partial CDNs it's also worth noting the Decentraleyes extension, which loads popular resources from its offline source rather than the CDNs.
This article is about a tool called "TrickURI" which detects misleading names like "G00gle.com"
So if I go register a name "neolefty" that nobody has ever heard of, TrickURI isn't going to object unless there's a fantastic service out there already called "ne0lefty". Which sounds legitimate to me. Kind of trademark-law-y, but consensus is how language works.
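For what it's worth, a toy Python version of that kind of lookalike check (my own guess at the idea, not Google's actual algorithm): normalize the usual digit-for-letter substitutions and compare against a list of well-known names.

    # Map common digit-for-letter swaps back to letters, then compare.
    HOMOGLYPHS = str.maketrans({"0": "o", "1": "l", "3": "e", "5": "s"})
    KNOWN_SITES = {"google.com", "steampowered.com"}

    def looks_like_known_site(domain):
        normalized = domain.lower().translate(HOMOGLYPHS)
        return normalized in KNOWN_SITES and domain.lower() not in KNOWN_SITES

    print(looks_like_known_site("g00gle.com"))    # True: resembles google.com
    print(looks_like_known_site("google.com"))    # False: it is the real thing
    print(looks_like_known_site("ne0lefty.com"))  # False: nothing well-known to impersonate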
Among other things, that means posting with respect for fellow users, regardless of how wrong you think they are.
Obfuscate what you're looking at, no.
It contains all sorts of interesting test cases to test how various URIs are displayed in various parts of the UI.
I'm still waiting for them to find a "safety-first" reason to ban adblock from the store.
I always wondered who Google would be, but I never guessed AOL.
This is a terrible idea, but I suppose it is easier to organize the world's information this way.
So killing URLs isn't really about this particular action from Google or about web browsers; it's embodied by the much broader trend towards searchbox-centered UI and "related stuff"-based navigation (instead of an absolute classification system like trees, tag hierarchies, etc.). Please f*ck off with your search engines and recommendation systems; just give me some tools to build taxonomies so I can organize myself. I know what I want and I know how I want it classified. IT was meant to organize (and process) databases, so please just stick to that; I want a library, not a bookshop.
We have had users that thought the small green HTTPS lock next to the URL meant a shopping site (the lock resembles a purse), so seeing it meant "safe shopping" by Google.
Users are much less knowledgeable than anyone here can imagine.
I sincerely hope I live to see this quintessentially evil corporation go bankrupt.
It's not complicated, it's impossible; Google just doesn't care if they kick other people off their internet.
Are they proposing that the DNS standard is "updated" so that every request at runtime must verify that a website is legit?
Chrome introduced garbage like rounded corners, now this.
It's the Gervais principle in action.
If the goal is to help the user "notice" and "read" URLs, then why not just (for example):
- use another font for URLs, one which mitigates the similarities between characters
- make the URL bar bigger, with a bigger font
- use a small popup (à la Wikipedia) that shows the URL clearly when the user hovers over it. Actually, something already happens when you hover over a link in some browsers: an easily missed grey container with a black font appears in the bottom left of the window, containing the URL of the link.
But if the goal is to set up "yet another input device" for a database of shady URLs and domain names, well... you will surely need to integrate it with the UI/UX of the browser which has the greatest usage share.
Or if the goal is to hide one important part of the mechanics of the web (and make software feel even more magic...), then don't show the URL at all.
Demo video & more info:
At some point technical discussions will have to rise above the current naiveté and various parroted "laws" and demonstrate an understanding of the real world and of corporate and human behavior, instead of lamenting after the horse has bolted.
Telling whether a site is being served by who you think it is is fast becoming a crucial skill, to the point where I explain at least the basics to even the most technically illiterate people I know who use a computer. It's a real problem, in substantial need of a solution.
I don’t buy this learned helplessness. Schools need to evolve with the times.
We teach kids calculus in school, surely we can teach them to read a freakin URL.
Yes, sometimes it is not easy to recognize the real address in a scam URL like "dropbox.com.scam.ru" but browsers could make absolutely clear what the TLD and SLD are.
Even if you are willing to sacrifice the "you can write them on a napkin and share them with everyone" feature, it's not clear to me what other identifier would fundamentally solve the identity problem:
Even if you forced every website to get an extended validation certificate from a preselected CA and then based website identity solely on the certificates, what would stop you from registering a misleading company name? (There are precedents for that, btw. Search for "World Conference Center Bonn Scandal" if you want to read some hilarity)
Additionally, as the article mentions itself:
> The big challenge is showing people the parts of URLs that are relevant to their security and online decision-making, while somehow filtering out all the extra components that make URLs hard to read.
I feel the approaches we have seen so far rest on the assumption that the top and second level domains of the hostname are the only "important" parts of an URL and the rest can be hidden. I think this assumption is simply false, even for a vast number of non-technical use-cases: Often, "identity" is not just about the organisation behind an URL but also about the content - e.g., you'd like to know which article of a blog a link leads to.
More importantly, many sites are divided into user profiles, where the identity of a user is given by a subdomain or a path segment. Just knowing you're on "https://facebook.com" doesn't tell you whether you're viewing the actual profile you want to view.
Finally, even the "cruft" is sometimes important, if only for knowing it's there. E.g., I frequently remove tracking/referral arguments before sharing a link - both to make the link easier to remember and to disrupt tracking.
> The Chrome security team has taken on internet-wide security issues before, developing fixes for them in Chrome and then throwing Google's weight around to motivate everyone to adopt the practice
Is that how we imagined internet governance to work? Didn't we have standards bodies like the W3C or the IETF that were supposed to make decisions on that scale?
FYI: I tried, and the only Google result is this exact HN post.
I can't find an english-language article about the story, so here is a german one:
To summarize the story:
Bonn used to be the capital of West Germany during the Cold War. When that was over, however, Berlin got reassigned as capital and Bonn went back to being a mostly ordinary small town.
They never quite got over the demotion though, and the city made numerous attempts at staying internationally relevant. One project was to become a UN base of operations for Germany. Apparently for that, it's a requirement that you build an oversized hotel and conference venue.
The city had trouble finding investors for the project, but eventually a Korean company - "SMI Hyundai" - stepped forward.
If you want to believe the official records, then apparently due diligence went out the window at the mention of the name "Hyundai". City officials assumed that they were somehow affiliated with the automaker and were quick to trust them with city-backed loans in the millions.
It turned out they were scammers and not even remotely capable of contributing to the project. In the end, the city suffered damages of several hundred million euros.
The company had never had any relation to the automaker either. It just "happened" to have the term "Hyundai" in its name...
Also, the title is clickbait so now I feel bad for upvoting.
We need to find (and promote) alternatives to Google (and Facebook). They are making the world a WORSE place.
URLs should be opaque, then we wouldn't have the mess of people trusting them. Also we would have HATEOAS instead of OpenAPI :D
The step zero has been partnering with MS to kill Firefox.