My personal opinion is that it's a very bad change and runs antithetical to Chrome's goals. I hope the data backs that up as well.
But regardless, this change is far from shipping as the new default behavior and the reaction here will certainly have an impact on the feature's future. As mentioned, please feel free to disable it at chrome://flags/#origin-chip-in-omnibox
I hate how Mobile Safari has removed URLs; I find it very disorienting. I'm constantly looking at the URL bar to see if I've navigated to a new page, to see what the page I'm on is named and what its purpose is, and to make sure I went where I clicked and wasn't just redirected somewhere else (sometimes this can be really confusing, for example mobile versions of webpages that dump you out to the main page instead of the mobile version of the page you were hoping for).
While I can see the anti-phishing advantages of emphasizing the domain, hopefully this wouldn't come at the expense of the rest of the URL. Right now chrome grays out the rest of the URL, which is nice, but if you want to be less subtle that's fine too - turn the domain into a button, or draw a box around it or whatever, but please leave the rest of the URL passively visible.
I think a lot of the negative reaction also comes from replacing it with a google search box. Not very classy. There used to be just the URL bar in browsers, then there was the URL bar and the search bar, then chrome simplified it into just the URL bar, which allowed you to search if you prefixed with ?, and now you can search with no prefix. The new change would just make the whole thing a search field. If you want to optimize chrome for people who don't know how to use the internet and won't learn that's google's choice, but don't expect me to use it or recommend it.
I don't even think it would help there. In fact, I think this would help fraudsters. Think about the various scam attempts on Steam, for example.
They direct you to a URL like www.stempowered.com/q?phishlogin=true or something.
Knowing that a correct Steam URL would never contain this sort of thing is the first thing to notice if you were already duped into clicking on a link that led to the above URL.
If the browser then only displays "stempowered.com", it would be way more difficult to notice you are on a phishing site, just because you didn't notice the missing "a". And let's face it: the average consumer/user does not go and verify any certificates.
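To make the point concrete, here's a minimal sketch (using the hypothetical phishing URL from the example above) of what an origin chip would show versus what it would hide:

```python
from urllib.parse import urlparse

# Hypothetical phishing URL from the example above
url = "http://www.stempowered.com/q?phishlogin=true"

parts = urlparse(url)
origin_chip = parts.netloc                # all an origin chip would display
hidden = parts.path + "?" + parts.query   # the suspicious part it would hide

print(origin_chip)  # www.stempowered.com
print(hidden)       # /q?phishlogin=true
```

With the path and query gone, the missing "a" in "stempowered" is all a user has left to notice.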
This is incorrect. Entering a URL into the field produces the exact same behavior as it does with this option disabled. Typing a URL and pressing enter goes directly to that URL. Typing part of a URL that has been previously visited (like "face" => "http://www.facebook.com/") will default to visiting that URL.
However, if I enter "face" into the text field on google.com, then facebook.com is the top result. If I enter a complete URL like "https://news.ycombinator.com/item?id=7677898" or even "news.ycombinator.com/item?id=7677898", it turns it into a link to that page. The only difference between the search field on google.com and the URL field in chrome is whether it knows about my internet history, and maybe that's just because I have web history turned off. Sometimes I get the "Google Search" behavior and sometimes I get the "I'm feeling lucky" behavior.
In cases where the google search gives the wrong response, chrome gives the same wrong response, for example on intranet servers.
So, if it doesn't display the URL, and it behaves exactly like the search field on google.com, and it doesn't correctly navigate to some URLs entered into the field, I don't think you can really consider it a URL field any more.
In other words, it will make the whole thing a search field, with some smart url-friendly behaviors, so that most of the time when you enter a url it will take you to that url.
What? Intranet addresses work perfectly fine in the chrome url bar, unless it 1) has spaces (which aren't technically allowed in urls, and you can type %20 to avoid) or 2) is only letters (which you can avoid by just appending a slash or something).
Also, if the omnibox automatically redirected to sites instead, then yes it would pretty much be identical. But it doesn't and it's not.
Getting back to the earlier question, is the omnibox a URL field or a search field? Well, it's a combination of the two. But it's sort of like a UX version of the ship of Theseus: if you slowly remove all the behaviors of a URL field and replace them with those of a search field, at what point does it become something different? When does it become a search field with URL behaviors instead of a URL field with search behaviors?
If you look at the screenshot in the linked article, the field says "Search Google or type URL" instead of showing the URL. I think that's the watershed moment. Given all the behavioral changes already, subjectively I'd say that's not a URL bar any more. Even if the omnibox behavior is exactly the same as now, it's no longer showing the URL.
I hope it doesn't make its way into the release version of chrome.
In other words: what Chrome has already done for a very long time.
With repeated exposure to URLs more people will learn how URLs work. Hiding URLs means that people will never be able to learn.
It makes me click on the bar to actually see where I'm at in the filepath, please don't take that design mistake and apply it to the web.
Not including a path bar by default doesn't mean they "try to hide the information", it's just a different UI design.
Or, drag a file from Finder to the terminal --> terminal prints the full path to the file.
From a UI perspective, for the user it looks like the "Search Google or Type URL" text area would be searching the Amazon.com site - however, it sounds like it searches Google?
My guess is that this will drive a lot more traffic to Google, and then there's much more opportunity for a website to lose that user, because Google will be able to show other listings - including ads - prior to showing whatever is on your own site, even if it is only your website in the search results (which, as I said before, looks like it won't be the case).
As a developer and website owner, this would strongly turn me off from Google if they continue to try to funnel traffic back to their own website.
It's right to address this behaviour in the interface design. Rather than somehow telling users they're wrong, Google can work with that behaviour. Yes, it has benefits for Google in tracking and profiling user behaviour too.
For many, many users the web entry point is the search engine they use as their homepage; that is "the internet" for them. This was the paradigm that AOL developed, and there are vast swathes of users that cut their teeth on AOL.
It's probably a good hint that browsers could find success at imitating a similar search interface in the new page/new tab UI. Offer a clean page with a text search field, with very fast results that have good context.
Chrome and FF have a "View History" page with search, but it seems to be fixed by date and with no way to sort for relevance. IE doesn't appear to have a history search (I think this is baked into Windows search instead?).
"Don't be evil" gave way to "Invading privacy for fun and profit" long ago.
Google has avoided being obvious about Chrome giving them more revenue up until now, as far as I'm concerned. Whether this will also swing in their favor or not, I can't be sure at the moment.
And with Safari on iOS 7 only showing the domain name, there is some precedent for the approach described in the article.
Clarification: I'm not sold on the "Upstream goes sour? Time to Libre$thing" trend as a good thing in the long term, but I doubt the potential for it to occur has escaped Google's notice.
Is this true? Does Chrome phone-home with details of user settings?
If you are using Canary and NOT reporting back, you don't know how Canary works and shouldn't be using it.
That said, I sincerely hope that this particular anti-pattern's efficacy in converting people from typing in URLs to searching for websites isn't the most important metric.
Let's look at REST. Everybody is using it for HTTP APIs, or to be precise: everybody pretends to use it. Because, as many know, a REST API is only a true REST API if it follows the HATEOAS paradigm. A paradigm which is in fact really cool. But why do we think it's cool? Because Roy Fielding found in his thesis that the (human) web is basically HATEOAS. He says the web is so successful because of that.
But in reality... Hardly any HTTP API uses HATEOAS. In fact many popular APIs hide the HTTP stuff completely from the API consumers. (If everybody were using HATEOAS, we would never have to update the client libs, right?) Something similar goes for normal websites. Most URLs are not human readable; even HN is an example. The URLs are just numbers; there was even a post discussing that recently... Most news websites have even more complicated URLs. They are not made for humans, and are thus, to me, something like memory addresses. The Gmail URLs (the web's most successful mail client) are also very funny looking; I wonder if there are users who manipulate the URLs by hand, or who bookmark their outbox.
I somehow like looking at URLs but am I supposed to edit them as an end user or draw conclusions from their look? (And is Google? ;))
BTW: URLs were interesting in the 90s for identification because only GeoCities and friends had domains. Now everybody owns a domain.
Here is a data point that may be of interest: On YouTube, since links are filtered from comments, many users link to other videos by posting the "tails" of URLs - some with "watch?v=xxxxxxxx", some "?v=xxxxxxxx", and some just post the random-looking video ID part with nothing other than "see video xxxxxxxx". In other words, there's evidence to suggest that a reasonably large portion of the otherwise "computer-illiterate" have at least a basic understanding of how URLs work and will edit them manually to get what they want.
Edit: or to put it another way, there are people who, upon having made extensive use of YouTube (or possibly other sites), have been able to notice the patterns in all of its URLs, and use that knowledge to succinctly name a video without explicitly giving the entire link. They are also implicitly teaching others about this knowledge in the process. This is a perfect example of the kind of learning experience that would be denied to those whose browser hid the path in URLs.
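As a rough illustration of how regular those "tails" actually are, here's a sketch that recovers the video ID from the forms described above (the ID here is made up; the assumption is that YouTube IDs are 11 characters drawn from letters, digits, "-" and "_"):

```python
import re

# The forms users post in comments, per the example above (ID is made up)
tails = [
    "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
    "watch?v=dQw4w9WgXcQ",
    "?v=dQw4w9WgXcQ",
    "dQw4w9WgXcQ",
]

def video_id(tail):
    # Grab the trailing 11-character ID, with or without a "v=" prefix
    m = re.search(r"(?:v=)?([A-Za-z0-9_-]{11})$", tail)
    return m.group(1) if m else None

for t in tails:
    print(video_id(t))  # dQw4w9WgXcQ each time
```

The point being: users who post these tails have effectively reverse-engineered this pattern just by watching the address bar.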
For me this is more evidence that the main argument is broken. YouTube is super successful, but in fact it is really restrictive when it comes to hyperlinking and mashing things together.
Update: just for clarification because of the downvotes, YouTube does not filter YouTube URLs.
(I've basically never participated in YT comment discussions. There's definitely a lot of idiocy, but it's also interesting to just observe and see the sometimes surprising positive things like this that can occur.)
URLs are pretty rigid. If tomorrow HTTP were swapped out for a different protocol, what would happen?
You'd be better off referring to the article posted as Allenpike's article on removing urls (or some such).
Fuzziness feels more natural. I can bookmark a rigid URL, but what if later it moves? I might be better off bookmarking a signature of the article (a very basic form might be Author and Title).
The search engines have a signature of articles, and if you are lucky that signature will be matched somehow against your loose search query. The success of search engines depends upon how well they order and match against your input.
Personally I find the HN way fairly readable. I mean, item no. 7677898 is something I can read and understand, and I note similar systems are used for quite a few things in the real world, like phone numbers, zip codes and passport numbers.
It's stuff like "https://www.google.co.uk/search?q=address+white+house&oq=add..." that I find unreadable.
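To be fair, the unreadable part at least decodes into something readable. A quick sketch, using only the visible fragment of the query string quoted above:

```python
from urllib.parse import parse_qs

# Only the visible fragment of the query string quoted above
qs = "q=address+white+house"

print(parse_qs(qs)["q"][0])  # address white house
```

The query survives intact inside the noise; it's the surrounding tracking parameters that make the whole thing look like line noise.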
People shouldn't be drawing conclusions from the URL (apart from the query string and the domain part, but that latter is a whole other story). The URLs are supposed to be unchanging and not break, and that's strongly incompatible with having them contain human-meaningful information. And in particular (though not exclusively) with using path-segments to communicate a tree structure for your website. Such tree structures are inevitably torn down and replaced over time on most long-active websites (especially those which are the public-facing homepage of a long-lived organisation), and the result of placing them in the URL is inevitably link breakage. Hiding the URL by default is therefore good, as it should help to prevent the user from seeking meaning in the URL or the site owner from placing it there.
And links (with the exceptions noted above) shouldn't contain meaningful information for automated use either: it's Hypertext As The Engine Of Application State, not Link Structure As The Engine...
(Tree-structured site guides are fine and useful means of navigating, by the way; they just don't belong in the URL.)
Around '99 I did not rely on bookmarks, instead I saved interesting articles to my hard drive because links would break so often. Even today the problem remains and I don't even dare to say whether it got better or worse.
Maybe there are smarter concepts than HTTP-style URLs that we are not aware of yet. Might be also interesting regarding privacy, because many people actually do not want static hyperlinks to their personal information that last a million years.
It's great that you vet these changes on Canary and do user testing, but it's troubling that a change like this isn't first extensively vetted from a perspective of 'does this hurt our users? does it compromise their privacy? does it increase the odds that they will get sent to the wrong websites? does it hide important information in some cases?'
I suspect that this UI change is actually going to make people more vulnerable to phishing in cases where the domain is not a guarantee of identity; for example, an XSS on a google-controlled domain (where the full URL would make the attack obvious, but only showing domain hides it), or an attack hosted on a 'user content' domain that uses subdirectories to distinguish between different users/sites.
A more straightforward example is that all my gmail accounts have 'mail.google.com' as the domain in my browser, regardless of whether one of them is an Apps domain (thus security sensitive) and another one happens to belong to a sibling or significant other or something.
This feature just seems intrinsically misguided and poorly considered. I appreciate that your UX team is trying to aggressively improve things, but they seem to be acquiring a long track record of poor decisions.
I've since opted out of Google search as my browser's default.
Oh I don't think that's true at all. They just know that they're big enough that it doesn't matter what a bunch of internet weenies think when most of their audience doesn't know or care enough to understand the UI change, let alone grasp the business interests that drive it.
It doesn't make any sense to hide the protocol when you reveal the URL, and I found myself looking for a "Copy URL" button. Since copying the URL is what I most often try to do when selecting a URL, it would make sense.
It makes browsing feel much calmer, and I'm pretty sure I'll keep it enabled for a while.
I guess this goes to the app-ification of the web.
There's a dangerous slippery slope here. If we're OK with this happening, are we then OK with getting rid of that domain further down the line? What about routing all traffic through Google first so it can check if a URL is "safe" or not. The whole thing strikes me as creating a more locked-down web.
URLs and View-Source are fundamental elements of Tim Berners-Lee's vision. WorldWideWeb, the first web browser, had them front and center. Mosaic moved the URL to the top, where most of us know it to live today. Safari moved the URL box onto the same line as the other navigation controls (which is now more common). Every step of the way, the URL's prominence has been reduced for usability purposes.
But the problem with that approach is that it communicates that the web is "hard" instead of educating users in how to understand it and how to build on it.
And we, as technical people, have not helped much here. Look at the URL up here. Yes, we know that it's Hacker News, but what does the ID mean? It has no semantic meaning to a user (unless you know that this is the 7678580th story on HN and care about that). A nice URL would be something like http://hackernews.com/story/google-experiments-with-URLs
Wordpress actually does this by default, even adding a date scheme to it, which makes the web a better place:
http://site.com/year/month/day/story-title-can-go-here is an easy-to-read URL, and yes, it's a pain to code properly when you're dealing with a dynamic site, but hey, it's our job to make sure we do things that are beautiful for users.
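A minimal sketch of that kind of permalink scheme; the slugify() here is a simplified stand-in for what a real CMS like WordPress actually does:

```python
import re
from datetime import date

def slugify(title):
    # Lowercase, collapse every run of non-alphanumerics into a hyphen
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def permalink(site, published, title):
    # WordPress-style /year/month/day/slug layout
    return "%s/%d/%02d/%02d/%s" % (
        site, published.year, published.month, published.day, slugify(title)
    )

print(permalink("http://site.com", date(2014, 5, 2), "Story Title Can Go Here"))
# http://site.com/2014/05/02/story-title-can-go-here
```

The hard part isn't generating these, it's keeping them stable when titles get edited - which is why WordPress stores the slug separately instead of re-deriving it.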
So maybe this is a wake up call.
I can now see the reasoning behind it - many people I know do not actively use the URL bar except for searching. Even when they want to check their Facebook or favorite website, they just enter "facebook" in the URL bar and use the (Google) search results to get where they want to. For such users, the search box is much more important than the URL itself, so why waste the UI space...
I NEED TO EDIT URLs. I need to copy and paste URLs. It was already annoying enough with its removal of the protocol, because sometimes I make a typo, try to edit it, and it messes up and removes the protocol, forcing me to edit it a third time after it goes and searches for something.
Even as just a user I copy and paste URLs all day long. Into FB, into Twitter, into stackoverflow answers, into HN responses.
I don't even see how this is better for Google. Links make up PageRank, no? Links are what Google uses to be the best search engine. How is making it harder for people to copy and paste URLs good for Google?
I'm sure I'm in the minority as a semi-webdev, but dammit, don't fix what isn't broken. Or at least give those of us with different use cases a way to get shit done without getting in our way. Sure, this may or may not be better for my grandma, but it's not for me. It causes me frustration daily already. This is only going to make it anger-inducing.
I'm using Canary with this option enabled, and all you have to do is click the domain box, then you can freely view, edit, and copy the URL.
All this update does is hide the path portion of the URL. That's it, so IMO, this story is way overblown. Google isn't removing the URL bar, they're just acknowledging the fact that 99% of users don't need to see 99% of the URLs they visit on a daily basis.
Why do Hacker News readers need to see a URL that looks like this?
Why do users looking at Amazon Fire's landing page need to see this?
Why do EBay shoppers need to see this?
They don't. Just trim it down to the domain and call it good.
Oh, and it's worth mentioning that to copy/paste URLs, it's still only one click away, because when you click the box, it auto-selects the entire URL.
Edit: As I review my post, in the context of this story I find it humorous that even Hacker News trims the URLs I pasted because of how obnoxious and unnecessary they are.
While we're talking about things we don't need, let's include this change.
The fact is, for most of the last two decades we've already had a UI where users who don't care to attend to the URL don't have to, and users who care to notice can. What does this add? Nothing. But it does take away some legibility for people who care, and discoverability for people who might learn to.
99% of people using web browsers really get no cues from the path? Cite, please. URLs aren't high tech any more than the address to your house is, and my observation is that even non-developers who are simply experienced browser users pick up cues -- even from barely legible URLs mostly meant to be parsed by machines. You don't have to be a programmer to observe that typing a string in takes you to a page, or that the string changes when the browser loads a new one, and put the correlation together until you start to understand what a URL is without even really thinking about it.
Or at least, you wouldn't have to be a programmer to learn to make that connection based on simple observation skills if we kept the current model. If we move to this new hide-the-URL UI, probably you would (self-fulfilling prophecy!).
And sure, the web has lots of URLs that don't provide a lot of easily parsed cues. In the interest of being a little less selective, though, let's look at a few others:
Do people need to see these things? Nope. Can they derive utility from being able to see them? Yes. And they do.
When you stop by the local coffee shop, do you take note of what its address is?
The users have an idea what URLs are, they just don't care.
That's like clicking on a bookmark. But when you need to tell someone else (who is unfamiliar with the district) where that coffee shop is, or vice-versa, that information becomes really important. Hiding the path and showing only the domain is like telling someone "it's in California".
On the other hand, if the URL is displayed, then there will be many who take note of the fact that it changes whenever they click on a new link or go back/forward, and it makes them mentally associate "that piece of text" with "this page I'm looking at" - they don't even need to know the term URL to do that. It's a bit unfortunate that browsers don't have "Address:" next to it anymore, because that would've made this association so much easier (someone seems to have made the same observation almost ten years ago, although it was FF vs IE: http://cheeaun.com/blog/2004/09/address-label-for-address-ba... ). Having made that association, they can then tell you exactly where a page they're looking at is, and vice-versa.
Postal address? No, but I do know (for example) I'm on Main street, in that tiny alley one block west of 4th Ave, in the building just left of that weird giant sculpture of a bird.
I'm constantly aware of and can describe my relative location, even if I'm not always capable of expressing it using an absolute designation like postal address.
On the web, though, there is no such relativity. One website is not near or far from or above or below another. I cannot conceive of my location, much less express it, in any way but absolute address.
So: since I do not travel to reach places on the web, how do I know where I am? The address bar.
Similarly, ever tried to find your bud's house the first time in a subdivision, esp. at night? Those places all look the same and you drive past 4 times before you figure out one of the house's numbers and deduce it. The pizza guy the other night thanked me for having my number clearly displayed and lit, for the same reason.
Of course once you get into a building, it still remains annoying - I mean it's still not really obvious how the internal room/suite numbering works. That portion of addressing is totally up to the architect, and a lot of times is only intuitive after you know the space you're in (if ever).
Legible URLs are important when you need to decide to follow the link; once you're on the page, it's less important because there's bigger and more legible cues about the content right on the pages themselves then.
I noticed my 10-year-old brother trying to find a song the other day in a peculiar fashion. He simply typed in "youtube" and a genre name. He clicked on the 2nd link of the search results, then clicked on the 3rd item of a sidebar linking to a playlist. He was navigating the web through links provided by Google, like we use the directory system.
In this case, knowing the specific URL would be mind-numbing and utterly useless. However, the distinction between Google and the web is just too blurred for so many people.
However, I think drgath's point is ultimately correct. For some people, there really is no internet besides Facebook. For some no internet without Google. Even crazier: I've met someone who does not know the internet without Siri.
If we hold onto things like the directory structure, then we couldn't have "advancements" in user interface design like iOS. Could we eventually get a web without URLs, like iOS is an operating system without a visible directory structure for end users? I think it would be a big win. The directory structure was replaced by single-purpose apps. What will URLs be replaced by?
The holy grail of companies: full control over 95% of the users by sacrificing the other 5...
You are just dead wrong thinking this is good design. Obscenely profitable? Yes.
Good design is made by serving the extreme 5% while accommodating the 95... Take it from someone who actually majored in product design and usability. The rationale for user interfaces is that the 95% will, at some tasks, be in the 5th percentile, and if you don't serve them, over time you lose them. People think iOS is a hallmark of usability only because the market has so much cannon fodder that the 95th percentile seems infinite. But eventually enough people will be fed up with being unable to send a file from one app to another the way they want, and will move to whatever crap interface at least has a file system that allows them to complete the task.
Oh, I can't upvote you enough. This isn't just true of design; this is how advancements in tech happen in general. I remember when CVS was the dominant version control system and old fogies didn't need this Subversion nonsense. I read an essay in defense of CVS that argued "just keep using cvs and don't worry about it, there's always going to be a minority that needs key features most people don't, let them have svn."
We're now two generations of version control down the road; both SVN and Git ate their predecessor's lunch by catering to the needs of the handful that were unsatisfied. Once the new thing works, most people eventually come along, because the key features turned out to be pretty nice, even if not necessary. That's how software progresses in general. Walled gardens exist to prevent others from making the next product that could eat the current one's lunch. Why else would it be verboten to "duplicate" iOS functionality?
Does Command+L still work? Because one click is too many.
With that one small change, I'd still be happy.
While you're at it, you might uninstall Chrome and install... I don't even remember what the AOL browser was called... But it obviously was superior to any modern browser, with their lowly URLs instead of AOL keywords.
That may solve this particular problem, but then which browser do you use instead? Firefox has had its history of similar changes (although you can still use extensions), Opera post-12 lost a ton of customisability, and IE, although appearing to have the fewest irritating interface changes, also has the most rendering quirks. (Personally I'm less concerned with the rendering quirks than the UI changes, so I tend to use IE most of the time, but I use various versions of all four on different sites - at the moment I have Firefox and Opera open as well.)
In my mind, they're all headed down the same path, just at slightly different rates. The idea that you can choose a different browser is starting to become more and more of an illusion.
There are a few fundamental uses I see, and they're somewhat distinct:
• Reading. For which stripping 99.99966% of Web formatting would be preferred. If I got content-heavy sites in a form similar to what Readability, Pocket, Instapaper, etc., delivers, I'd be much happier. Well, slightly less grumpy. I found it interesting that the Kobo tablet was, for a while, advertising its browser as doing just that (from what I can tell of the revised copy, they're offering built-in Pocket). See: http://www.kobobooks.com/tablets Online forums are another special case.
• Commerce. Here authentication and payment are concerns. Neither are built in to existing HTML standards.
• Applications. Something that's more than just putting words (and images) on a page, or buying stuff. For this I'm actually inclined to think that the Mobile app model might be more appropriate. Say I want ... an email tool, or an interactive mapping tool, or a host monitoring solution, etc. Running this in a separate process space, in a separate windowing context, individually controllable, etc., would be a huge win.
The other element is user state: there are very few cases where I need a specific browser page running at all times (selected apps are the exception). What I do want is to be able to return to the page state I'd last left it at. With very aggressive paging out of state to local storage, and/or simply leaving a marker of "this is where you were at", and being able to recall it as needed, overall performance would vastly improve.
Neither Firefox nor Chrome presently offers this. On Firefox, there's a single process space, such that all tabs become unresponsive when system resources are exhausted. On Chrome, there are multiple subprocesses, which are individually much heavier-weight than Firefox's own tabs, but which can be individually killed. You have to reference them indirectly, however, through a task manager, rather than being able to simply kill the tab you're on at the moment (you can close it, you cannot kill it). In both cases I'm finding myself constantly manually managing resources. I've also found myself abandoning Firefox for Chrome as with the former I've got to kill/restart the whole thing, while Chrome gives more granular control, and Chrome tends to crowd out FF for resources. Both are very far from optimal.
I was trying to think what's the difference between a 'frozen' web page/state and a bookmark. There is a difference, but for some pages a bookmark is enough. I've always thought that tabs are really just a more convenient kind of bookmark, but they are prone to abuse. Restarting Firefox with however many tabs doesn't now reload all tabs like in the past. You lose state, but they are lighter in weight.
A bookmark doesn't retain where you are on a page. Only the page's location.
Bookmarks also don't relate to your current browser session. I usually organize mine by topic. For current task work what I want is a stack or other less-organized list that I can skip back and forth on.
Firefox's session restore was configurable. I'm a few revs back on Debian (24.4.0), so I'm not sure what all's changed recently.
Perhaps listening to a passage of text and typing some notes and reading a web page belong to a given task, and I might want to stash that away and pick it up later. Although I'm probably kidding myself thinking I can multi-task, and manage multiple tasks, sessions. Freezing one or two though might be useful.
If bookmark management were better in browsers, I'm sure people would use them far more.
You'd almost think the browser developers want you to save all state to their proprietary Web-based silos or something.
Want to read? I either fire up the Adobe PDF reader and load something from my history, or I open my email/Dropbox/Trello card and tap an attachment, which opens the PDF reader.
Almost all the interesting ideas in academia come in .pdf format, not .html. It just fits our use case better.
Want to shop? Fire up the Amazon app, I guess, or fire up a web browser and go to Amazon. Who cares if "Amazon" uses standard HTML forms? The user doesn't.
Want to $x? Fire up $app_that_handles[$x].
Suggesting some file format, or perhaps presentation format.
arXiv does not specify a style guide, so you get this weird mix of IEEE/PAMI/single-column/double-column, but that's not really a detriment to its readability, since journals wouldn't usually pick an unreadable style anyway.
I figured the format was likely "academic articles, mostly prepared with LaTeX, published as PDFs", but the commenter was being less than clear, even on reiteration.
Having file-format-specific reading utilities is stupid, awful, and precisely the type of Windows-centric (and to a lesser extent Mac-centric) behavior I absolutely loathe.
Applications centered around tasks however are a bit of a different story, and that's more of what I'm describing.
For reading, Adobe's an absolutely horrible example. Particularly on tablets. Don't make me remember the time I was buried to my waist in a colo cabinet trying to sort out load balancer issues while reading the 300+ page manual on my Android smartphone using the Adobe reader app ... which would reset to the front page each time it got kicked out ... which is precisely what was happening as a recruiter was calling me despite my repeatedly hanging up on her (and having net nil reception regardless). There are some modestly better PDF readers (say, evince), though most fail on the basis of not positioning the text optimally for reading.
Contrast with a stunning exception to the usual rule that online readers are crap: the Internet Archive's book reader. I discuss it briefly here: http://redd.it/1w0n83
The beauty of it? It autocrops the page to the visible content on it. Screenshot: http://i.imgur.com/Reg8KLB.png
You can further maximize the browser (F11) and remove the navigation elements so that _all_ you are seeing is the text you're reading. Page navigation is quick and intuitive. The entire thing is, incredibly, better than any desktop PDF viewer I've encountered.
What I'd really like is something somewhere between Calibre and Zotero: something that will collect a selection of documents, organize and manage them, and spawn viewers (preferably good and useful ones; Calibre on Linux fails massively in this regard) and, if I specify it, render everything with minimal markup.
As for shopping: it's not that I'd fire up the Amazon app, I'd fire up the shopping app. You want a standard, uniformly designed client with solid security, not a mash of individually created apps, each with its own security flaws and excessive permissions.
Splitting shopping from web-browsing would also prevent surveilling users across the Internet from the shopping interface itself.
As for specific application-based tools: some sort of general app framework could be fired up. The main distinction between such an app and the reader would be that a reader app would assume it's valid at any time to dump state to disk and bail, whereas you could configure an app for how you wanted it to behave (I might want a monitor to be up 24/7/365, while a social networking app could shut down if I haven't interacted with it for 15 minutes).
And that same way of doing things would work quite well for a web browser, IMHO.
There are many annoyances I have with Windows, and Windows 8 specifically. That is not one of them.
When did they remove the path?
So that's just some multiple mount point weirdness, not removing the path.
(There are many things I dislike about the UI in Windows 8, but MS has not removed configuration and features quite as aggressively as others.)
It is not clear whether they are dumbing the web down for users' sake or just to push people onto Google search.
Once you obscure URLs, web browsers become _impossible to support_. Of course Google doesn't care, because support isn't part of their lexicon.
It's easier to direct them to it because it always has the same prompt text. "Do you see a box that says 'Search Google or type URL?'? Great, type this into it..."
The thing that's harder is getting them to read the url back to you.
All of those people entering partial URLs in the omnibox, triggering a Google search and releasing CO2 — it's terrible, when they could be using local bookmarks.
I'm not sure what gets sent with regard to spelling helpers. Couldn't that almost act like a keylogger? I wonder how it works. I've turned that off as well.
Edit: Apparently a useful comment offering a solution to the parent's comment is not HN worthy?
The default behavior is generally indicative of how the devs want things to be used.
So no, not an actual solution. A temporary one, yes, but not a long-term one.
Your comment didn't deserve a downvote but please don't assume it to be objectively useful.
On the other hand seeing the URL helps me decide if I'm on the correct webpage. mybank.com vs pretendingtobemybank.com Of course I try not to click those links in the first place but I do look at the URL to check that some link I clicked took me to the right place.
There are also plenty of apps I use where the URL is useful info. For example, JSFiddle: if there's a /N in the URL, I know I need to click "Set As Base" before I'm done. Of course, if that's the only site where I need that, maybe it's not a good argument. I'll have to think about whether that comes up on other sites for me or not. It's also probably a dev-only issue.
I wouldn't mind having two modes, user mode and dev mode. I can certainly accept that non-devs have different needs than devs. My only point is that I'm trying to avoid frustration; I have enough of that in my life. I don't want Chrome to pile on more, and I can see that the current URL bar already causes me frustration often while doing dev work and even while just socializing on the net. At some point that frustration will lead me to find something less frustrating.
I don't hate this change so far (especially as clicking on the domain gives the full URL, although that's perhaps not very discoverable if you don't know about it already).
But would you consider that a tolerable loss, or still too much? (I think I'd say it's still too much, but I am also a web dev and thus a power user.)
I do actually have an irritation: when I'm in Chrome and I do a Ctrl+L and Ctrl+C, and then I go to paste what I think I've copied, I actually have an unexpected http:// at the front of it. Sure, that's what most people want, but it isn't what-you-see-is-what-you-get.
The "senior trying to use a computer" image is interesting in that it seems to imply that the seniors of the future will be just as clueless about how to use the Internet as the ones today, which may unfortunately not be far from the truth.
I predict that eventually browsers will become almost unconfigurable, highly locked-down, and be less controllable by the user than a television. As the article notes, "the URL will [...] that many users will never even realize is clickable." From there, it's not hard to imagine at some point the decision to remove even that "clickableness", on the basis that "no one will bother to", and by that point the frog has been thoroughly cooked. Open-source or not, almost no one will have the will or knowledge (except the few elite) to modify them to make them work as they desire. Users can be more easily "herded" and persuaded, if they have little knowledge of how things work; just keep them consuming and complacent, because knowledge is power, and we don't want them to have too much of that. Appease and mollify them with eye candy and doublespeak. Welcome to the future of corporate control, mindless consumption, and fashionable ignorance.
Sorry for the negativity, but this trend I find really unsettling.
The innards of a modern car are incomprehensible to all but "the few elite", and its interface goes a long way to hide all that complexity. I only have the vaguest idea how it works, and am perfectly happy to outsource its maintenance to professional mechanics, because all I care about is that it works.
This should apply to computers. My family love their iPads and Macs, because they abstract away all the crap they don't care about, in favor of letting them get stuff done. It's a form of reverse snobbery to insist that no, my grandmother actually should care deeply about whether she's searching via DNS or via Google, or that my preschooler needs to understand the difference between HTTP and HTTPS.
No, no, a thousand times no. What happened to cars - the replacement of mechanical, inspectable, (dare I say it) hackable components with electronic black boxes was not a good thing. You used to be able to fix and replace most things in an automobile engine with parts from the local auto shop and a shop manual. No longer. Now you have to spend hundreds or thousands to get the correct electronic doohickeys to talk to the closed source, locked down, DRM'd to hell and back engine control modules. And many of the parts aren't repairable in any meaningful sense - you have to go to the dealership to get a new widget, and if your car is too old or too rare, you're just SOL and you have to buy a new car. If this is the world you want for software, I want no part of it.
It's fine if you want to turn over your vehicle every three years.
'Modern' cars are not fixable; even the specialists tear out their hair in some cases and give up completely.
Classics are rising steeply in value for people who want to own and understand a piece of machinery for its own sake. Much of modern automotive history will disappear into the maw of the crusher because irreplaceable, irreparable parts render those cars useless.
more like eight years, which is not a bad deal.
That really has nothing to do with what is being discussed though. The reliability and efficiency of modern cars are consequences of advances in technology and engineering, not user interface redesign.
In other words, just like modern cars, it's not as necessary to pay attention to the innards for continued operation.
I believe hiding implementation details is part of this trend to reduce complexity (which is good) but also to wrest control away from the user (which is bad).
I did this just last week with a fresh Windows 7 installation because the drivers for the Ethernet adapter had to be manually installed...
I am joining the many people here who prefer having configurable tools. I do not want to configure 'IRQ' on any tool I use, but if it malfunctions, I will go search and find out what IRQ means and how to adjust it to my needs. Worse is being unable to change 'IRQ' when you need to, because the people in charge decided it drives away 95% of customers, and you are in the 5% boat.
Actually, yes, it is relevant.
A large chunk of the reason behind the efficiency in particular of newer cars is the addition of those opaque black boxes: having the engine tune itself directly according to complicated algorithms instead of relying on power-hungry mechanical controls, and so on.
I was commenting on the black boxes, not the software contained within.
There have been two steps forward. Simultaneously released alongside, but not dependent on, one step backward.
So why do people defend the step backward under the guise of the steps forward?
You can have complex, highly configurable software that's still locked down, and you can have simple, abstracted software that's open source. It's not a result of the simplification of UIs.
Potential hyperbole and the fact that Rushkoff was talking about programming more so than general computer knowledge aside, I still think there's a relevant point there. I personally feel that giving up all pretense of needing to know how my computer works would put me at a far greater disadvantage in the coming years than were I to do the same with my car. My car is very useful yes, but I don't use it to view the world, my country and its politics, my culture, my future, my finances, and make decisions based on those views.
Luckily, society (mostly) seems to have recognized that in the case of literacy and essentially forces everyone through it, instead of assuming that people are morons who cannot be taught anything. Unfortunately, the same cannot be said about a lot of recent technological development.
A browser that doesn't show the exact URL of a page is like a car with a built-in GPS that doesn't show the exact location where it's at. After all, who cares about street addresses? The address is occupied by a Starbucks and we're in Mountain View, so let's just show a Starbucks logo surrounded by a shape that vaguely looks like an outline of Mountain View. You want to go someplace else? We'll show you your destination and the series of turns you need to make to get there, but we won't show you anything else on the way.
If you think a preschooler doesn't need to know how URLs work, you are vastly underestimating the curiosity of a typical preschooler. If you show him a bar with a bunch of letters in it, he'll start typing random letters into that box to see what happens. Likewise, if you show him a detailed map of the town, he will want to explore parts that he's never been to so far. Tinkering and exploration are the foundation of every science, including computer science. Therefore, I don't think it's a good idea to discourage tinkering and exploration, whether in a car or in a browser, unless the benefits greatly outweigh the long-term costs.
With cars, the benefits are probably quite large, since complexity is exactly what makes modern cars so safe and efficient. With browsers, I'm not even sure what the benefits are supposed to be, other than the obvious financial benefit to Google. People who don't want to tinker with the URL bar will just ignore it most of the time. Also, we're talking about the desktop browser here. There are plenty of pixels to waste.
Latitude/longitude would be more like IP addresses and HTTP headers. They require some technical knowledge to use and understand, but they're still quite human-readable unlike raw GPS signals or ethernet frames.
(As an aside, most GPS units also display the altitude. I live in a mountainous region, so I often make use of this figure.)
I'd like that to be true, but I think we lost that battle a long time ago. Google results aren't a readable URL; nor are products on Amazon or Ebay or anywhere else I can think of. Newspaper-type URLs are often "fake human-readable"; the URL is something like http://somepaper.com/12345-Local-Man-Found , but in fact http://somepaper.com/12345-Local-Man-Still-Missing will give you exactly the same story. Even HN stories aren't human-readable, just an opaque id number.
But I don't think "fake human-readable" URLs break the analogy with physical addresses. There are many different ways of writing the same address:
987 Some Avenue West, Unit 123, Brooklyn, New York, NY 12345-6789
Unit 123, 987 W. Some Ave., New York 12345
123-987 Some Av W, NYC, NY
And the numbers aren't just opaque identifiers (except for the zip), at least if you're walking down the street: you know that 28 is next to 26, opposite-ish 27, and halfway to 56. There's nothing that corresponds to walking along the street on a website.
With our editing and creative capabilities getting away from us (the keyboard and mouse being the last frontiers), we are ending up in the same two-class system as TV and radio: a class that creates the content, working for the industry's monopolies, and the rest of us only consuming.
That's not how the Web was supposed to be. The idea that we can be both, that we can be independent, create novels, music, and programs, and publish ourselves in pure freedom: that's what we are losing with every move by the tech monopolists of our time.
Usability is one thing; being taught to be just a user or a consumer of something is to go back to the twentieth century, only with a new, powerful medium.
Perhaps even the radio/TV revolution of the twentieth century should have been free back then, in our own hands, so people could create TV and radio stations (on free frequencies, of course), but the older generations missed that train.
It's happening all over again, and it has something to do with the capitalistic tendency toward the formation of big monopolies, their neurosis for controlling their results, and their drive to create loyal consumers for their products. When we accept the label of consumers, we give up our natural right to be human beings.
Technology should create the channels, not be the channels themselves. I think that's the original commenter's point.
He also mentioned that the cars are pretty much impossible to work on by yourself, since you need to have the right diagnostic equipment, as opposed to a car one might have bought 15-20 years ago, where a good manual was all that you needed to get into the thick of it.
While I agree that knowledge of the inner functioning should not be required for using a product, I think that it would be nice if there was some sort of effort made to allow one to poke inside. I am guessing that with Chrome there will be some sort of setting that you can use to undo this change (I use Firefox, and rarely, but occasionally, use the about:config tool).
This concept is something I've been playing out in my mind and that I'm starting to explore in my programming. A simple interface that "doesn't make me think" (me being the user), and a well-tucked-away "Advanced" button that, having given the proper warnings, allows the user to poke around on the inside.
I simply don't understand how this point, which is bloody obvious, is completely lost on so many UI (re)designers. I don't mind if you simplify (and most of the time that's all we need), but what is the point of cutting off all access, for good, for those who wish to tinker?
For example, in the recent Firefox 29 release, the add-on bar has been taken out. They might argue most people don't care about it, and even though I disagree (I spent 20 minutes trying to put FoxyClocks (to display world time) everywhere else and it simply didn't fit), that's fine as long as you put an option in the Preferences to turn it back on (I ended up doing that via installing an extension/add-on). I don't even mind if you turn the option off by default. It is incredibly frustrating to see the designers think that their way is the only way and their use case is the sole case.
Sorry to hijack your point but I suppose you made it well enough to elicit a rant.
If we start considering even those as optional, where does the simplification end? Then why don't we take out the bookmark bar, navigation bar, menu bar, status bar etc and attain supreme simplification by displaying a single text field which should lead to search. Surely the user can search for add-ons from that field and get whatever they want. It'll have the side benefit of helping users attain UI nirvana as well.
Go for it, show bookmarks on the new tab page.
It could certainly stand to be shrunk at the very least.
Yes please, I have my browser configured to no menu bar, saves a nice bit of space.
There's a popup when I hover a URL and otherwise I get to save space.
More on point, the add-on bar was a dumb idea and behaved weirdly. Good riddance. While a built-in real status bar would be nice, an extension to provide one is pretty good too.
So they won't care about the technology stack, as long as that spreadsheet, text document, 3D image, .... can be edited, saved and printed or a game played.
You can call the function of a car "simple" and a computer infinite in scope, but a twelve year old is allowed to use the latter but not the former. This implies the scope (for mischief) is in some sense far narrower with the computer...
I do get what you're saying of course. But I feel like in the past decade or so we've moved past the car analogy :)
So for this specific case, if Chrome wants to hide URLs, go for it; and I'm sure there's a configuration toggle somewhere to turn them back on if you're one of those people who care.
But imagine this happening: average users will become fully IT-illiterate. Growing children will no longer know anything about computers, having grown up in an environment where everything is hidden from them for the sake of simplicity.
What will happen, after our generation(s) all get old, and those grown-up illiterate children take over the job of improving the world's technology?
We might have an engineering shortage in future ( we do already ), but it will be for many factors , not just lack of opportunities to tinker. If it isn't addressed, we will go long periods of time without nice things (think of the relative stagnation of the web from 2000 to 2008).
Being able to attend to URLs offers significant utility to a portion of users, though, and this UI change takes that away.
You and your family don't want to think about URLs? Fine. Nobody's asking you to.
But they might ask you to apply that ostensible concern for other people's use patterns a little more broadly.
Not knowing the innards of such a car has nothing to do with the UI - just the same as the people complaining about the change in UI don't need to know the innards of the browser: the source code.
And no, your preschooler isn't in need of understanding the difference between http/s, but sticking with your analogy, your preschooler also isn't driving. Maybe playing with a toy car instead, but not the full monty.
This makes the same false assumption about the world when applied to computers: The world does not exist of a binary-human type: people who are experts and people who are not.
I own a 28-year old Volkswagen van. It is completely hackable: the only electronics are three relays. But I don't hack it all by myself. I still, gladly, drop it at the local garage to get something fixed. I can stop in nearly any town at the local garage and get stuff replaced, fixed or solved. I've had a waterpump fixed in Germany, my brakes replaced in Sweden, the battery replaced in France and so on.
And that is where the importance of hackability comes into play. Not the fact that /I/, myself can open up a browser or tweak it, but the fact that someone in my proximity can. Instead of having to ship my Macbook-pro to the US to get a fan replaced, my local fixit-guy can open my Thinkpad and replace the fan. Instead of having your computerized and closed-down car towed to the nearest official BMW-garage, I can drop my car at any place where they have a set of screwdrivers and some nuts and bolts and have it fixed.
My smartphone has a "simple" mode that people can activate for their hypothetical "computer illiterate grandmothers". The option to enable it is even presented to the user during initial setup, so people who feel intimidated by their phone can enable it themselves right out of the box. However other users are not forced to use the interface optimized for the computer illiterate.
A very bad analogy.
Auto drivers don't have to be concerned about phishing attempts. Nobody sneaks into your garage and replaces your 2009 Toyota Camry with a near-perfect duplicate that's wired up with snooping and tracking devices in order to steal your identity, bank accounts, logins, etc.
After 20 years of educating the public on what URLs are and how they work, we're going to up and change things around just to appease the "senior citizen / soccer mom" stereotype. Bad idea. How about we design software for the next generation of tech-savvy kids instead of 75-year-old senior citizens who still haven't figured out how to use a computer mouse no matter how many times they've been shown?
Also one last point. The software UI was the abstraction of the hardware. We don't need to further abstract the abstraction.
When forced on us we rejected it, but eventually we walk right into it of our own accord saying "it will be simpler this way."
One of the three reasons why I dropped Ubuntu for Mac OS X was that Ubuntu didn't allow me to configure my mouse speed. They "merged" the speed and acceleration controls of the mouse (which is quite unclever) and also prevented it from going below 1, while I'm usually comfortable at 0.25. It made my tracking devices unusable, and thus made Ubuntu unusable.
>synclient MinSpeed=1.2 AccelFactor=0.25
Make it permanent in /usr/share/X11/xorg.conf.d/50-synaptics.conf
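For what it's worth, a sketch of what that file might contain (the Identifier string is arbitrary, and the exact option names assume the xf86-input-synaptics driver; check synaptics(4) for your setup):

```
Section "InputClass"
    Identifier "touchpad tuning"
    MatchIsTouchpad "on"
    Driver "synaptics"
    # Same values as the synclient one-liner above
    Option "MinSpeed" "1.2"
    Option "AccelFactor" "0.25"
EndSection
```

Unlike the synclient call, this survives a restart of the X server.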
My six-year-old laptop has multi-touch pad emulation on Ubuntu GNOME 14.04... I couldn't be happier.
Ease of use enables the user to perform actions with more impact. Think of it as Python vs C. You need a lot more knowledge to get started with C, but you can customize almost every aspect of your program. With Python, you lose some customizability, but you can do a lot more in a lot less time and understanding. If you need the customization and you have the knowledge, you can also build C extensions for Python (which would correspond to the chrome flags).
I'm not saying I'm for this change or not. Testing it out on real users will decide its fate, I'm quite ambivalent about it. I'm just speaking for ease of use in general.
Practically all developers started as users who got curious about something and wanted to learn. In some ways, the less information a UI exposes to them, the less inclined they will be to ask - because they don't have anything they can particularly ask about. I'm extremely opposed to the default hiding of the URL scheme for this reason: users are far less likely to ask "what's HTTP?" Certainly many won't care, and to them it's "just another part of the website's name", but future developers are (or should be) the ones who do, so it potentially reduces the number of genuinely curious and inquisitive developers. At the same time it conditions them to think that such opaqueness is the norm, the way things should be when they write their own applications, and the vicious cycle repeats.
For instance in politics, good laws become narrowed, twisted or get replaced because politicians have to keep their place in society.
Removal of editable URLs will push some percentage of address-bar-led traffic to search.
If there's no way to enter urls, then all that's left is to search google.
This is terrible. From what I can tell only 6 people have been involved in this so far. Going to do my best to stop it.
As long as there remains a power user toggle to show the full URL, seems like a positive change. Of course, I may be missing some edge case.
Dogfoodable Servo based browser arrives in Q4 this year according to https://github.com/mozilla/servo/wiki/Roadmap. Probably the most important project on the internet right now.
>Update: As of version 36.0.1966.0 this has been removed. Iterate quickly!
The worst part, though, seems to be that it has been removed for a very strange reason. It's not that the URL bar takes up precious space in the browser, since it has been replaced by a search bar that takes up the same space. It's also not that it's disorienting or a distraction to the user - as far as I know, most users have been browsing with URLs present in their browsers since they first laid eyes on a browser. So what's the reasoning behind it? Phishing? Yeah, sure...
It's about obscuring the workings of the web for one reason only: advertising. Anybody who understands how the web works knows that advertising on it is a joke and can not ever work if the users have the tools and insight to trivially circumvent it.
Google et al. have been working very, very hard at obscuring the fabric of the web to stop people from doing that. Everything from killing RSS to gradually turning the browser into a dumb box is a part of that agenda.
Phone numbers suck and are a usability nightmare.
But people are used to them and everyone understands them.
No URL - no decentralized web.
I understand why Google wants to place itself as the key search and directory for the web. I remember AOL Keywords 'go to www.webvan.com, AOL keyword : webvan', the ads would say on the radio.
But the web is just a bunch of content hooked together with URLs. Heck, websites are just a bunch of content hooked together with URLs.
I already hate sites that are not linkable or discoverable due to excessive postback-ing and unbookmarkable deep results.
I will always seek out a browser that displays the URL. There are ways to defeat phishing without obscuring the URL.
Then there are also physical addresses, which have been around since before phones. From the point of view of computer science, they would be considered a horribly "ugly" mess, yet people seem to have no problem dealing with them in their daily lives.
I think we found the answer. Chrome now looks like a half-burnt Firefox, with an emaciated URL in a separate box from what has effectively become a search bar. The same two boxes are there, only their sizes are reversed -- accurately reflecting the respective vendors' priorities.
Expect Google to make more changes along the same line. What, did you really think they were funding Chrome out of the kindness of their heart? Now that Chrome is a leading (if not the leading) browser, it's time to make some money. Google is the new Microsoft. They have the power to change the web as they see fit, but instead of safeguarding the open web, they'll try to replace it with a walled garden. After all, what good is a browser if people use it to visit URLs that don't begin with google.com?
It'd be like taking the street addresses off houses and mailboxes simply because we had GPS now. Bad. Bad.
And I don't even want to start down the evil empire road, but I have to bring up the fact that getting rid of URLs just continues to solidify Google's desire to control user behavior. Click/speak a search phrase, see a list, and go to where we tell you. The internet is just a series of back rooms to Google's front door.
Don't do this, guys. I'm already half out the door with Chrome already. This would push me the rest of the way out and start making me actively tell people that Google is not looking out for their interests. I'm sure many other technical folks feel the same way.
The change is just a way to force users to use the "share me" buttons so Google can benefit by promoting G+ or controlling the flow any way it wants. It's a good way to keep track of user flow and the spread of links, whereas copy-pasting a link into an IM hides that information.
In general this change has nothing to do with bad UI vs. good UI, just pressure toward a more controlled web experience. URLs are good UI: they carry extra info, and the more experienced user can even manipulate them to everyone's benefit.
Users: Okay, you removed the http from the url by default, but can I have an option to disable it?
Chrome: No, just accept it
I like to explicitly know what I'm copy pasting.
It offends my personal sensibilities to make websites with ugly links.
Other than that ... I don't know. I can't decide how I feel about this, but I can't help but think that I really don't care about the URL being accessible. Not like I ever do anything with it.
It's really irritating.
There's some interesting cookie-looking values in those URL parameters too, not sure if there's anything privacy-critical in them but it's still a bit of a concern.
This is like saying the only time I care about electricity is when I need to power something. The fact that there's a universal textual way to refer to and link everything – like via copy and paste – is precisely why the web is so powerful.
If there was a way to tell Chrome "Share this website on Skype to this person" or "Send this to this-or-that IRC channel" I would never care about the URL. The only reason I interact with it is because I want to share the page with someone specific (rather than using a spamming/sharing widget).
But I never manipulate the URL directly, or care about its specific parts.
Hell, if there was a keyboard shortcut for "put reference to current page on copypaste stack" I'd never click the URL at all.
Why are we moving away from powerful and flexible systems that allow us both to consume and to produce? Why is the trend so constantly toward sealed, black-box, consumption-only habits? It makes me so sad to see.
* Web apps get more complex, giving you the ability to work & create on the web, Google docs let you literally stop mid-sentence, switch the device and keep writing your essay on the go
* Tumblr, wikipedia, twitter, - all relatively recent additions to the web and definitely not "consumption exclusive"
I'm not sure where your "consumption exclusive" comes from. I'd call it just "inclusive". The web tries to be for everyone, and that includes people who couldn't care less about a 20-character hex id when they want to write up their great concept for developing rural areas in Romania. Sure, you will always have more consumers than producers in the grand scheme of things. But I'm failing to see any reason to view recent trends as anything close to what you describe.
C-l (lowercase L) to select the URL and then C-c to copy is one way to do this on many systems without clicking anything.
Yes, Android's intent system is neat. But after years of experience with it, it's sadly not a panacea.
go to the top-level domain directly instead of hunting and hoping that they have a link there
go up several directories on an FTP site
go from reddit.com/r/starcraft to reddit.com/r/nba
go from a foo.github.io/bar docs page to github.com/foo/bar instead of hunting, and hoping, for a link
increment the page number on a blog by several instead of clicking next repeatedly
prepend http://www.google.com/url?q= to NYTimes URLs to avoid the paywall
add ?limit=100 to see more comments on a reddit comment page
go to a new topic on Wikipedia
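Most of those tricks are just mechanical edits to URL components, which is exactly what gets harder when the URL is hidden. A rough sketch of two of them using Python's standard urllib.parse (the reddit URLs here are only illustrative):

```python
from urllib.parse import urlparse, urlunparse, parse_qs, urlencode

def add_query_param(url, key, value):
    # Append/overwrite a query parameter, e.g. ?limit=100 on a reddit thread.
    parts = urlparse(url)
    query = parse_qs(parts.query)
    query[key] = [str(value)]
    return urlunparse(parts._replace(query=urlencode(query, doseq=True)))

def swap_last_segment(url, new_segment):
    # Replace the final path segment, e.g. /r/starcraft -> /r/nba.
    parts = urlparse(url)
    head, _, _ = parts.path.rpartition("/")
    return urlunparse(parts._replace(path=head + "/" + new_segment))

print(add_query_param("https://reddit.com/r/nba/comments/abc/x", "limit", 100))
# -> https://reddit.com/r/nba/comments/abc/x?limit=100
print(swap_last_segment("https://reddit.com/r/starcraft", "nba"))
# -> https://reddit.com/r/nba
```

The point isn't that anyone would script this; it's that the URL is a structured, editable value, and power users do these edits in their heads every day.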
This change was akin to Microsoft removing the Start menu.
And it highlights the importance of open source software.
And demonstrates how greed (seriously, how much MORE money and success does google need at this point) betrays quality.
Keep using duckduckgo, everyone!
Well that's nice, but some of us still have (e.g.) parents to help over the phone. Not to mention the address/URL/etc. field is an important part of computer literacy.
Do you not check the url before you give out your credit card details and website passwords?
With this change, all you'd see is "google.com" which totally seems legit for providing your username and password, without the additional form URL.
A URL has a thing called a chip?
Calls that this will 'break the web' are hyperbolic. You can still click hyperlinks. That's what makes the web the web.
URLs are untidy, even for technical users, and hiding them when not being entered won't hurt anyone.
I see URLs the same way. Once I hit a URL just load the page ( analogous to compiling the source code) and show me the content (finished executable). Origin chip does just that - the gritty entrails are hidden, but accessible, if necessary, with a click.
It seems like most of these changes come from some suited marketroid fresh out of a new paradigm meeting, rather than an actual honest-to-god engineer wondering what would make the web better.
The web ate AOL, now AOL closed-model is resurfacing.
An anecdote: my friend's girlfriend was reading an article on mobile Safari and wanted my friend to read it. Instead of sharing the URL, she actually took screenshots and messaged them to him! We both found this fascinating! I keep seeing this sort of behaviour on Twitter and Facebook, where people share screenshots of tweets and posts instead of the URLs.
That's quite convoluted, and makes people look retarded :P
Alternatively, people can do what they like, and if linkable URLs are superior then things will trend that way anyway.
May not be great, but it's not completely gone (which the article seems to indicate).
Still, I can't imagine it will make web dev easier if the URL isn't always plainly visible.
Turn into this:
[amazon.co.uk] WD-Desktop-SATA-Drive-Green > dp > B008YAHW6I
That would encourage cleaner URLs while exposing the whole thing (">" is just "/"). Perhaps an API could be created to tweak them a bit. I'm not entirely sure how to handle GET params though. A box for each parameter? One box with all of them? As for clicking the first box, it has to open up the panel that normally opens if you click on the padlock / paper so it's still one click to configure cookies / see certificates and not two.
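That breadcrumb rendering is basically a split on the URL path. A toy sketch of the proposed display (the format is just what the mockup above suggests, GET params left unhandled as the comment notes):

```python
from urllib.parse import urlparse

def breadcrumb(url):
    # Render "[host] seg1 > seg2 > seg3", as in the proposed mockup.
    # ">" stands in for "/"; query parameters are deliberately ignored here.
    parts = urlparse(url)
    segments = [s for s in parts.path.split("/") if s]
    return "[{}] {}".format(parts.netloc, " > ".join(segments))

print(breadcrumb("http://amazon.co.uk/WD-Desktop-SATA-Drive-Green/dp/B008YAHW6I"))
# -> [amazon.co.uk] WD-Desktop-SATA-Drive-Green > dp > B008YAHW6I
```

Whether each box is clickable, and what clicking the host box reveals, is the UI question the comment raises; the parsing side is trivial.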
Right now I don't like the implementation as the URL is too hidden but I do like the idea; this will make it a bit harder to do phishing. Honestly, the old approach was the right idea too; dim the rest of the URL so it's more prominent what domain you are on.
As an end user I might see the Halifax part of the url at the beginning of the address and feel comfortable entering my credentials. If this was hidden and all I saw was sh.ly then I'd know I was on the wrong website.
You and I might be comfortable seeing that from the address bar right now but I expect 80% of users would struggle to see that.
Google has gone insane with mirror gazing.
It's a really excellent point.
Conversely, you could argue that URLs are valuable to search engines as an indicator of relevance, organization and what can be crawled.
Apps don't merely siloize their content—they break the most powerful way in recent history to decide if it's valuable.
I realize that I have an anti-change bias, so let's look at the pros vs. cons. Pros: people know what site they are on, and there's a little more space in the URL bar. Cons: all of the above. Which do people care about more? I'm betting the latter.
Extra note: IMO, the solution to "cruft" is not "remove it", but rather, "make it readable".
The fundamental idea isn't all that bad. But look, even in the 300px image you still have room to show more:
So while I know I want to enter a new URL or search about 20x as often as I want to edit or copy the current URL, and am therefore fine with needing to click somewhere else to "view or edit" the URL (the default being that the search field becomes blank), there's still no reason I shouldn't be able to glance at the full URL by default.
So I suggest showing the URL in light grey, with a click clearing the field; but perhaps the domain name could be darker, and clicking that part would make the whole field editable/copyable etc.
It's a subtle difference but it could work.
I'm certainly glad they reverted.
I would guess they will allow me to adjust the view to always show the URL, in which case I don't care. It is just one in a super long list of very bad UX decisions by Google, in my opinion. If they actually don't let me see it, I'll just use some browser that does.
I don't quite understand all the concern with "width flicker", when the real problem is that important information is being hidden. Maybe they've always been attempting to deemphasise URLs, and thus are trying to divert attention away from that...
(Comment 37 seems indicative of their general attitude: "we get to decide what you want, and you will like it.")
There's an extension available to fix this, but it just feels terribly absurd that you have to use one to remove something that was deliberately introduced to slow down the browsing experience, on the browser that Google loves to advertise as being the fastest.
If clicking the label reveals the longer version of the URL, and it's still editable, then this could be really huge, I think.
I'd love to try it out.
Users reporting problems LOVE to send me screenshots. (Usually pasted into an MS Word document, yep). That's about the only reliable info I can get from users trying to report something they believe is a bug.
At least when the screenshot includes the location bar, I can see what site they are actually on, and in many cases with some squinting actually recover the URL.
It's gonna make support a lot harder when/if URLs stop appearing here.
I have no idea how relevant this will be to anyone else, and it probably is not a good reason to leave URLs there, but, oh boy, it's gonna be rough. We'll have to actually try to train users in how to find and copy-paste the URL, which we haven't had too much luck doing even when it's in roughly the same place in every single browser; a future where we need different browser-specific instructions, so we first need to ask them what browser they are using, then tell them how to find and send the URL, at which point they've already moved on and no longer care about the 'bug' they found before.... ugh.
I wouldn't mind if all links were the kind that you see in shortening services, though.
In fact, even the naming of a file and the folder structure have never been an issue to teach.
Hiding the address looks like spam: dishonest, cheating by design, because the user doesn't know where they are without digging (clicking/tapping once or more).
Since this information used to be there effortlessly, if it becomes an effort to find, then people are not going to look for it.
One might conclude that the information was therefore not needed in the first place.
But think twice and you might say: information that is effortlessly visible may be needed precisely because of its effortless nature. It's there, and part of the context in which we use things.
For example, it's needed to reassure us about what we're using.
Why not start hiding road signs, then? After all, driverless cars don't really need them, and for everybody else they just distract the driver's attention.
For this reason alone, the change seems like a big usability fail.
The advantages are clear: the user never hits a broken page (theoretically) or finds that content has been relocated.
The disadvantages are also clear: no more bookmarks, no more simple sharing.
Hypermedia APIs attempt to address this problem to a degree with various forms of CURIEs. It could be interesting to see a web based on that (imagine href="hn:threads:buying-the-url").
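A CURIE is just a prefix plus a local part, expanded through a shared prefix map. A minimal sketch of what resolving that hypothetical href might look like (the hn: prefix and its expansion are made up for illustration, not a real scheme):

```python
# Hypothetical prefix map; none of these mappings are standardized.
PREFIXES = {
    "hn": "https://news.ycombinator.com/{0}",
}

def expand_curie(curie):
    # "hn:threads:buying-the-url" -> look up the "hn" prefix,
    # then join the remaining colon-separated parts as a path.
    prefix, _, local = curie.partition(":")
    template = PREFIXES[prefix]
    return template.format(local.replace(":", "/"))

print(expand_curie("hn:threads:buying-the-url"))
# -> https://news.ycombinator.com/threads/buying-the-url
```

Of course, this just moves the addressing problem into the prefix map, which is the usual objection to CURIE-style schemes.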
I'm not. I understand why it's hard to swallow though, because I think a valid criticism of the REST architectural style is that it removes bookmark-ability.
> It's good if you can discover all content through hypermedia, but why should you prevent direct access?!
Well, precisely because direct access implies that out-of-band knowledge is driving the interaction rather than hypermedia. I would refer you to Fielding's discussion of the topic where he notes:
> A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). From that point on, all application state transitions must be driven by client selection of server-provided choices that are present in the received representations or implied by the user’s manipulation of those representations. The transitions may be determined (or limited by) the client’s knowledge of media types and resource communication mechanisms, both of which may be improved on-the-fly (e.g., code-on-demand).
> It's like saying that there should be an index in every book (yes!), and therefore you should not ever tell anyone on which page of the book they can find something relevant (wtf?).
Interacting with a web service/site is not like interacting with a book (or a physical address), because there is no permanence as there is with physical objects. Have you never tried to go to an old bookmark to find that the content has been moved (and you get a nice 404)?
Anyway, I wasn't saying that this is all A Good Thing, just that the change in chrome doesn't seem at all at odds with REST (however, as many have pointed out, it can be at odds with usability)
And that addresses can become dangling is completely beside the point. To solve that, you need a more stable addressing system, not the abandonment of addresses altogether (which you really can't do; if you think you can, you are confused and probably about to create an even less stable addressing system).
Still think my original point stands - that this UI decision isn't so different from entering a service for the first time.
> To solve that, you need a more stable addressing system instead of just not using addresses at all
I think this is generally solved with wishful thinking and redirects.
Most people barely know what a URL is.
See also the time Google removed the + operator from search. Hardly anyone used that operator, and it was used incorrectly most of the time.
It's a shame that Google does not release their test data, but it's obvious that they have a huge amount of user clicking and typing data that they can analyse.
Anyway, I think most people don't care what the URL looks like at all, as long as you can link the page somehow; the recent rise of URL shorteners kind of proves that. Not exactly something I'm a fan of.
Safari in OS X Mavericks has Tweet/Facebook/etc. sharing built in: https://www.apple.com/safari/images/overview_builtin_2x.jpg
I don't see this as a benefit to users; most people I know don't even have a concept of what programming languages are. If I were to say Ruby, they would be thinking of the gem.
What annoys me, though, is that we still use "word", "excel" and PDF. Well, not me, but everyone else seems to do it.
HTML was a good invention; too bad so few people use it.
Maybe hypertext sounds too old, and maybe a sexier name is all it needs to become popular again.
Now the purpose of Chrome seems to be as a data-collection and ad-delivery platform for Google. Admittedly, many folks navigate the web by google search already, but this is a step towards making that the ONLY way, at least if you're using Chrome.
Click the URL: it highlights the whole field. Click again to remove the undesired select-all. Now finally highlight the hostname, copy, paste into your ssh client, and look like an ass for not noticing http:// was added to your clipboard out of freaking nowhere!
It's funny though we did all these efforts to put URLs at the center of our APIs just to see Google hiding them from humans.
I would bet that someone who wanted the URL would click the URL-shortened button.
I don't think the convention can be assumed to be true for any website to the degree needed to implement a feature like you suggest. There would be too many errors when people click on those buttons, and people would blame the browser, not the website.
This goes to show that innovation is only what you can forcefully shove down your users' throats. Years before Chrome or Firefox even existed, Opera had this feature in 1998 by way of custom inline searches (e.g. type "g search this in Hoogle" in the address bar), and later defaulting to Google searches for non-urls. This didn't catch on until Mozilla and Google did it, just like most people don't buy into something until the "right" person tells them it's kosher. Software and hardware are practically fashion; outside of small minority groups that see the value of a product for themselves, you need to convince the majority through nice acts of salesmanship (making something look 'cool', or by simply using the 'take it or leave it' approach) to invest (time/will power/money) in something that will help them.
He was doing research on exactly how to remove URLs from the user's view, without changing the HTTP protocol.
In Portugal we only got to have access to the Web around 1994.
Before that most of us could only afford BBS connections.
Instead at any time we can just call one of the ubiquitous Google-taxis, and ask it to take us to the place we can vaguely describe by some approximate references. When we get there, if it wasn't where we wanted to go, well that was our fault for not being more specific. We should try the Google-taxi again. But it might take a while to be sure it wasn't where we wanted, since... no visible address!
Seriously, this isn't just the stupidest browser-change idea ever. It's a deliberate move to dumb the net down and shift web functionality towards more total control by Google. You do realize Google censors search results, right? So if searching becomes the only way most people know to refer to/find a site, removing it from search engine results is equivalent to removing it from existence.
This isn't about 'UI tidyness' at all, this is about dis-empowerment of users, ensuring that naive web users never become more aware of how it all works, and ultimately about Control.
Personally I use full URLs all the time. I keep lists of article URLs in text files (like these: http://everist.org/archives/links/ ) as well as saving articles because they may disappear. I often explore in sites by direct editing URLs. I demand to see full URLs on mouse hover, before clicking links.
The 'hide/tidy the URL in the address bar' foolishness has been getting worse and worse for some time, and is a pain. Chopping the protocol off, graying out paths, shortening... I refuse to use a browser unless I can configure it to stop messing with the URL. No I don't want it animated, with bits appearing or disappearing depending on what I do. If you're complaining about superfluous visual detail, how is moving and changing the visible URL around all the time not worse than any static URL, no matter how long and machine-like? A static long URL I don't care about is fine, but if it _moves_ it demands attention.
I can't believe the people pushing this actually expect to get away with hiding Universal Resource Locators from web users. Literally, taking down the street signs and expecting people to trust google and other search engines to faithfully perform the task of taking us to places we want to go, without ever trying to _influence_ where we actually end up going.
Just like Google isn't trying to force fundamental and harmful browser functionality changes down our throats. Or coerce us all to joyfully become Google+ users. Or force everyone to use their real names in online forums. Or build Skynet for some reason (ref their ongoing purchases of every AI group they can).
Also, take that "It's OK, the URL is still available, it's just hidden way down in here" assurance and shove it. Same thing as UEFI secure boot - "It's OK, the ability to install some other OS is still there, you just have to thenyzzzt em-thup jksdfh!" How can you be so naive? It's a process, a series of planned steps, and after the nth little harmless step, the capability won't be there at all. Most people won't even remember it ever existed.
All you people applauding this move... you've got to be kidding. Useful idiots perhaps? Or part of the choir.
If this sounds negative, do you understand how negative I think the idea of hiding URLs sounds? I'm having great difficulty refraining from using offensive language. The concept deserves a large serving of it.
The question has nothing to do with what "URL" means, the various ideas about how to make an efficient UI, or even the current knowledge and skill-level (or lack thereof) found in the median user. Those are distractions.
Instead, the only question any of you should be asking is if removing URL visibility serves, in the long run, to educate and empower users, or if it instead removes power from users - even those that do not yet exercise that power.
Often - and especially here on HN - there is a tendency among geeks to avoid the hard political and sociological issues. Unfortunately, some issues are inherently non-technical at their core, and when you try to sidestep those hard questions by limiting attention to technical minutiae, a political or sociological choice is still being made. All too often, that choice is left to those who seek to steal power, turning those who avoid the real question into useful idiots.
Because this is a crowd that enjoys scifi, a quote from the end of Sleeping In Light:
"[Babylon 5] taught us that we had to create the future, or others will do it for us.
It showed us that we have to care for each other – because if we don't, who will?
And that strength sometimes comes from the most unlikely of places.
Mostly, though, I think it gave us hope that there can always be new beginnings,
even for people like us."
On the concerning side, there's nothing Mozilla likes more than copying Chrome.
On the hopeful side though, they have remained strong in the face of combined search+URL fields, so this may be one place they continue to resist.
You're treading on thin ice there, Google.
Not "bad" in the sense of rendering bugs or security holes (read: IE6), but bad in that "experiments" can be "deployed" to users at any time, and a company with the aspergian tendencies of Google has control over that.