Not to mention all the possible problems with misconfigured servers, where the www and www-less domains lead to the same website but some script refuses to work on one of them. While that was easy to spot with just a glance at the URL bar, now one has to do additional clicking or open devtools.
Probably we are at the crossroads where developers need a separate, non-dumbed-down version of the browser.
I was trying to search for the word "aquarium", but Chrome kept filling in "https://aquarium.org". I would delete the ".org" so that only the word "aquarium" was shown in the search bar, which was exactly what I was trying to search for.
Of course "https://" was hidden, so I was actually submitting "https://aquarium", which was not a valid domain, and it took many frustrated clicks and enters to actually google the word that was shown. Absolutely infuriating, as the true state of the search bar was hidden.
The URL bar is a disaster for end users. It's full of random junk that users can't read, so they stop trying, which means they can then be tricked by phishing websites hosted on any domain at all. Research shows about 25% of users don't look at the URL bar at all even when typing in passwords; they navigate purely by sight, so it's impossible to stop them being phished. The human cost of the resulting hacking is significant.
The fact that the software industry has routinely prioritised historical inertia, the needs of web developers, and all kinds of other trivialities over the security of billions of people is embarrassing. I'm glad to see the Chrome team finally get a grip on this and make the URL bar actually useful for people.
"Side view mirrors are a disaster for drivers. 25% of drivers don't even check them before making a turn."
[I'll stop the metaphor here, as I think my point was clear]
This change does exactly nothing to improve security. As for usability, it just puts one more layer of paint over the underlying "complexity" - and we've seen before how well that works (see basically every part of Windows 10 for examples).
Your side view mirror metaphor is unfortunately not clear at all. The side view mirror is simple and performs its function correctly as designed. It can't really be improved without totally replacing it with something else like a camera. Now of course not everyone will use the URL bar even if it's redesigned to work correctly. But right now the bar is practically designed to look as intimidating and useless as possible.
Perhaps you're so used to parsing URLs in your head you don't realise it, but URLs are a baroque and absurd design that nobody without training could properly figure out. It's basically random bits of webapp memory and protocols splatted onto the screen in a large variety of different encodings. In a desktop app dumping RAM straight onto the screen would be considered a severe bug. On the web it's tolerated for no good reason beyond history.
To give just one example that has regularly confused people in the past: URLs are read left to right except for the domain name (the important part) which is read right to left. You don't stop reading a domain name at .com, you stop reading it at the third slash or possibly a colon, but that form is rare.
The way I used to teach was very simple and very effective: there are 3 parts to a URL - the first part tells you if the connection is secure, the second part tells you who you're connected to and the third part tells you where on that site you are. The first part needs to be httpS, the second part needs to be the site you're expecting and the third you can ignore. They're even shaded differently to make it easier to read. "If you're going to Google and the black part ends with anything but google.com, call IT" made sense to even the oldest and most reluctant people I've had to deal with. The problem was actually getting them to check every time and not forget.
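That three-part rule maps directly onto how any URL library splits things. A minimal Python sketch to make the parts concrete (the lookalike hostname is made up for illustration):

    from urllib.parse import urlsplit

    url = "https://accounts.google.com.evil.example/login?next=/mail"
    parts = urlsplit(url)

    print(parts.scheme)    # "https": part 1, is the connection secure?
    print(parts.hostname)  # "accounts.google.com.evil.example": part 2, who you're connected to
    print(parts.path)      # "/login": part 3, where on that site you are (ignorable for trust)

The right-to-left trap shows up in part 2: the registrable part of that hostname is "evil.example", not google.com, which is exactly what the "the black part must end with google.com" rule is meant to catch.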
It seems to me that this change will not help people without training, change nothing for people with training, and make sharing links even more confusing for everyone.
Are you saying someone is less likely to get phished on "supershop.co.hk" than on "http://supershop.co.hk/account_login.php", even where the http:// part is replaced with a red padlock and /... is grayed out?
I see only one real solution to phishing: don't let users type passwords manually. WebAuthn and password managers both automatically read the domain and won't try to authenticate on a domain that isn't a perfect match. I've had more success with that than with any other anti-phishing measure I've tried deploying (history-based domain trust, explicit trust-on-first-use popups, detecting Unicode gaps and domains in credential fields...).
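The kind of check a password manager applies is trivially strict compared to a human skimming the bar. A toy Python sketch of the idea (real WebAuthn actually scopes credentials by RP ID, and managers vary in how strict they are):

    from urllib.parse import urlsplit

    def should_autofill(saved_origin: str, current_url: str) -> bool:
        # Exact comparison of scheme, host, and port; no substring matching,
        # so "accounts.google.com.evil.example" never matches a credential
        # saved for "accounts.google.com".
        saved, current = urlsplit(saved_origin), urlsplit(current_url)
        return (saved.scheme, saved.hostname, saved.port) == \
               (current.scheme, current.hostname, current.port)

The human eye skims; a tuple comparison doesn't.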
The greying out and replacement of padlocks etc, the anti-phishing training, it's all just working around a historical design problem in browsers. There's no need for it to exist. Notably, mobile apps don't have this problem.
Google could make an AMP-only web experience without dissent.
hide the URL bar
hiding all this benefits data collection and advertising. Seems obvious to me.
Yes, we are.
Funny enough, Firefox has a 'Developer edition', but that's just the Beta build with some features turned on by default.
Due to slow loading, I'm considering whatever they call the Microsoft browser today.
I am honestly surprised Firefox on Windows is slow for you. I don't personally use Windows, but many on here say it works well there since v57.
Maybe I don't know what to Google.
That combined with both FF and especially Safari being way behind in terms of standards adoption/development really sucks at the moment. Chrome desperately needs competition.
The standards are meant to be agreed upon by several parties, including both Google and Mozilla. Google implements new features in order to control the way they work, instead of allowing input by all the parties involved. It's always faster for one company to just do whatever they want than for a group of organizations to come to an agreement on what to do. The slower way results in better and more equitable implementation for everyone though.
But if a whole subset of users excludes itself from that tool, they're going to get a UX that's only as good as what the other tools in the toolbox are capable of building.
But telemetry gives web developers an extremely simple and convenient tool to know what users are actually doing without even inconveniencing the users with explicit questions. I've done web development with a good telemetry set built into a page, and it is extremely informative regarding how users actually use the tool, as opposed to how the UX designers have predicted flow through the tool will be.
To give a concrete example, a user might tell you that configuring permissions is "hard," and sitting with them during over-the-shoulder testing (which is expensive) might tell you a little bit about why. But without even asking the user, page telemetry can tell you that they are making a transition jump from the permissions configuration page to the page listing all of the resource names, because that's what's slowing them down: the UI didn't give them enough information to configure the resources, because we assumed they knew what the resources were named.
For a browser, anonymized usage stats can tell you whether most users keep all their bookmarks flat at the top of the bookmark bar or deeply nested in multiple subfolders, and that's usually valuable for deciding whether you want to emphasize a flat bar or folder management in the design.
If most power users disable automatic anonymous telemetry and also use deeply nested folders, no one should be surprised if deeply nested folders don't get better.
People who don't show up to vote also "deserve" a good government by virtue of being people who have to live in a governed society. It's harder to make one for them if the system for selecting leaders is missing their input, regardless of what they deserve.
Popping out of the government analogy and back to software, power users are also in a position where they are more capable of adjusting their experience to suit their needs. All things being equal, a company with finite resources to develop software should dedicate those resources to assisting the non-power users more often than power users.
b) for being "not a good idea", it's pretty much industry standard now for everything from business software to video games.
So you're claiming that it is typical for software with telemetry support to ignore your choice and still send telemetry about you turning off telemetry? That sounds wrong, but I cannot say I investigated this deeply.
> b) for being "not a good idea", it's pretty much industry standard now for everything from business software to video games.
As I understood the discussion, we were in fact discussing whether this is a good idea and whether it makes sense, so I think it's fair game to comment on it. As for it being an industry standard, that sounds like an overgeneralization. It is certainly not typical of software I use.
No; I'm saying missing data leaves holes that can be measured. They know, for example, how many people have downloaded Chrome and how many daily Chrome users they get at google.com (because Chrome will still send a valid UA string even if it has telemetry turned off). They can estimate how many users have telemetry turned off from those signals to a pretty decent degree of accuracy; certainly enough to know whether telemetry is telling them about 90% of users or 30%.
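The back-of-envelope version, with entirely made-up numbers:

    # All figures invented for illustration.
    users_seen_via_ua_strings = 200_000_000  # e.g. daily Chrome UAs hitting google.com
    users_reporting_telemetry = 140_000_000  # daily telemetry pings received
    optout = 1 - users_reporting_telemetry / users_seen_via_ua_strings
    print(f"~{optout:.0%} of users appear to have telemetry off")  # ~30%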
For (b), I'm curious what software you use. It's pretty standard in games, online apps, and business software. It's absent in a lot of open-source (mostly because a lot of open-source lacks a centralized vendor who would be willing to pay the cost to collect and interpret that data to improve the software).
I avoid online apps, I don't play a lot of games (and if I do, they're not big titles which are likely to have telemetry) and yes, I primarily use FOSS.
> (mostly because a lot of open-source lacks a centralized vendor who would be willing to pay the cost to collect and interpret that data to improve the software).
This is almost surely an element of it, but I think a respect for privacy and a general distaste for telemetry among FOSS users are more important.
* Apple Watch battery cycle count (not viewable in any UI but is viewable in telemetry logs)
* Clues about why a particular app recently crashed
Can anyone reproduce this?
You cannot compile Chrome. I've heard that Chromium can be compiled and run, but I've never actually seen it, or heard of anyone using that professionally.
There are other factors at play which mitigate the preferential treatment, but it's definitely there in Firefox as well.
At this point IE becomes more useful.
The goal of most web developers is to make pages users can use, not to get mired down in the never-ending browser wars.
As the smaller vendor, it's incumbent upon Mozilla to break it. Expecting individual devs to do it collectively when it really isn't in their selfish interests is waiting for a unicorn to appear.
That's kind of impossible to do for a smaller vendor without wider developer cooperation.
I remember Mozilla only started to breach IE's dominance once devs were so sick of IE that they installed Firefox on their mom's computer despite tons of sites being made for IE.
It's possible for something like it to happen again with Chrome, but less likely since Google's a lot smarter and not too lazy to implement the latest tech, so it will surely take longer without some activism and evangelism from web devs.
> expecting individual devs to do it collectively when it really isn't in their selfish interests is waiting for a unicorn to appear
Selfishness is a much more complicated thing than people give it credit for. A lot of 'selfless' acts could alternatively be described as selfish in that they make one feel good. Free software developers already do a lot of work for the wider community, where it would probably be a lot easier to just use the proprietary, already feature-rich counterpart than to try to develop a libre alternative. But the movement understands that long-term, having as much free software as possible is what will in the end help preserve general-purpose computing in the sea of silos. It takes some discipline, sure, but long-term it's actually in one's selfish self-interest.
Short-term, sure. Long-term it opens devs to Google's whims and makes the "open" web barely more open than Apple's AppStore.
But that's short-term vs long-term thinking and I can't deny most would prioritize the short-term. Here's hoping there's still enough idealists, even among web devs to avoid that fate and bring about for the web what GNU did for UNIX in the 80/90s.
The only space I can name off the top of my head where the open-source alternatives have outstripped Windows and Mac in UX is virtual desktops.
Of course proprietary software has more funding to hire designers and such, but in terms of actual functionality, I'd contest your claim.
If you're talking developer user experience, it's not even a race. The FLOSS ecosystem has a landslide lead here. In fact the whole point of WSL is to try to keep devs on the Windows platform by bringing that experience to Windows more directly.
Customizability is orthogonal to out-of-the-box UX, the original axis of comparison here. In fact, the two are often at odds.
The history of biological evolution shows that monocultures invariably fail catastrophically. Diversity is the main way to guard against the unpredictable events of the future. Software is not exempt from these general rules, I'd presume.
I've looked into switching back to Firefox, but what I've found is that they don't allow me to use my own extensions. I would have to use a beta version of Firefox, submit all of my extensions that only I use to Mozilla for approval, or reinstall my extensions every time I close Firefox. None of these seem like good options to me.
Grandparents/friends/org users not being sure they're on the right site after a redesign, not seeing Amazon in the right language, etc. There are countless questions that can be solved faster by looking at the URL.
Understanding that users have more important things to do than spend time on bug reports is an important lesson to learn. If you can gather data without relying on someone whose job is to worry about other things then you will make everyone's life easier.
Example.com > Worldnews > 2020 > 06 > 14 > Big aquatic monster spotted outside of Tokyo
forum.example.com > Sport > Football > Spain > Real Madrid
It could then work like in Explorer in Windows 10, where you can press one of the breadcrumb separators and see a menu with siblings, or go straight to all news this month. It could use some manifest file in a standard format on the server for the directory information.
Of course, this should never replace the URL completely; you should always be able to get to it easily. But URLs aren't necessarily always the best solution for navigation. We tokenize code and apply different colors, mouse-over pop-ups, and links, so why should the URL bar be a long raw text string when it really contains structured data?
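As a naive fallback where no site-provided data exists, the crumbs could be derived from the path itself. A rough Python sketch (a real implementation should prefer the manifest I mentioned over string splitting):

    from urllib.parse import urlsplit, unquote

    def breadcrumbs(url: str) -> list[str]:
        parts = urlsplit(url)
        crumbs = [parts.hostname or ""]
        # Each non-empty path segment becomes one clickable crumb.
        crumbs += [unquote(seg) for seg in parts.path.split("/") if seg]
        return crumbs

    print(" > ".join(breadcrumbs(
        "https://example.com/worldnews/2020/06/14/big-aquatic-monster")))
    # example.com > worldnews > 2020 > 06 > 14 > big-aquatic-monster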
This Google nonsense of hiding everything except the domain is not a good solution IMO; it doesn't solve a problem and makes it harder to navigate, not easier.
Probably worse than the change itself, though, is the tendency of anyone who makes such a change to start playing fast and loose with actually representing the underlying address. You mention Windows 10's address bar - it's one of the worst offenders. My Windows Explorer is currently sitting in my downloads folder, which is at "C:\Users\Wyatt\Downloads". The address bar reads "This PC > Downloads". When I click on the address bar to edit the address, it changes to just "Downloads". What part of all of this is in any way useful to me or the likely action I'm trying to take when I click on the address bar?
I think it would have to be some standard format that websites use, not just string manipulation in the browsers. And certainly not some Google dictated feature! For the same reason, each part would have to be navigable on these sites, to work as I described. There's various possible solutions, like meta tags or some manifest like breadcrumbs.jsonld mentioned in another comment.
The fact that Windows Explorer doesn't show the full URL in special folders is a separate issue, I only mentioned it for the breadcrumbs example.
But I disagree with you that not showing the full address in Windows Explorer is a separate issue. In my experience loss of edge-case functionality is a core aspect of changing interfaces. Maybe in another world the address would be preserved, and my use case would still work. But someone else's unusual use would not.
Think about how PowerShell uses objects instead of text to chain together commands. The address isn't just text, it's structured data; why not treat it as such and make it more useful?
However, I think there is something to this idea - a breadcrumb style approach by default in Chrome would encourage developers to use paths in more standard ways that refer to resources, not heavy parameter coupling. As you noted, there are technical barriers to implementing this solution, which might encourage some other good things - servers providing resource discovery so that the browser can understand valid paths when visiting a site.
Google has too much power to dictate standards already, and has been quite happy to use that power for their own sake, rather than the good of the user. I'm not interested in giving them any more.
And like I wrote in my first post, the resource discoverability could be handled by the site itself via some manifest file in a standard format, like robots.txt. It wouldn't be dictated by anybody else.
Attempts to make things simpler by hiding the truth about where you really are in navigation seem like a way to make the web less discoverable except through Google. If you're on a web site you can usually learn more about its structure based on the URL format. This makes that more difficult.
The only difference between
Example.com > Worldnews > 2020 > 06 > 14 > Big aquatic monster spotted outside Tokyo
forum.example.com > Sport > Football > Spain > Real Madrid
And probably some clever logic to deal with the randomforum.php?fid=12345&tpcid=984.3&page=5 goop that is still all-too-common... :/
You say that as though websites like that are random small sites. HN has that kind of a URL, so do YouTube and Google.
You could try to map the parent ==> child relationship of every individual post URL, which might be cool, but think about how long the URLs would get.
For sites with breadcrumbs though, the URL absolutely should follow the crumbs (and I've argued for such at my company).
Anyway, almost everyone else would mind. Especially if there was no option to revert to normal behaviour.
> It could then work like in Explorer in Windows 10, ...
That sounds like the worst of both worlds. If people want Explorer in Windows 10 behaviour - can't they just run Explorer in Windows 10?
If people want Chrome as it was yesterday, they've basically got no option now.
> But URLs aren't necessarily always the best solution for navigation.
The Chromium devs demonstrated their lack of interest in being able to navigate via URL / location bar a half decade ago when they changed the default on all operating systems to be single-click in location bar to 'select the whole address'.
I'm beginning to think they are not our friends.
I typically run up breadcrumbkiller as part of any Microsoft Windows desktop build, so I rarely see that configuration for long.
By default they worked very similarly to the aforementioned Windows Explorer, which in the focused state with keyboard input turns into a "raw" text field.
Sadly it's not used everywhere, but maybe browser support would encourage its usage by site owners.
news.ycombinator.com > 2020 > 06 > 14 > Google hides full addresses in URL bar on Chrome 85
This makes it easy not only to see where you are, but also to quickly click on a part of the address to go to that level of the hierarchy, or to a sibling like yesterday's posts. It makes sense that a forum like this would have a way to see all posts from a day, month, or year.
Of course, most if not all users here are comfortable with URLs so they're probably not the ones that would benefit the most. But I think most common users, the ones who Google everything instead of typing in an address, would use the breadcrumb bar while today they probably see the URL as some weird text string they have little interest in or understanding of.
This enables you to right-click on the address bar and turn on the option "Always show full URLs". It will then always show the full URL including the protocol, but I suspect they will remove this flag at some point.
Now how does this new flag interact? Has anyone enabled both to see?
This drives me crazy when debugging. Whenever I copy-paste IP addresses from the browser address bar into my console, I have to manually delete the `http://` at the front. I work on a P2P project so this is an extremely common situation for me.
Another solution for Windows and macOS users (no Linux, sadly) is to use Edge Chromium, which does it by default, if you prefer to donate your data to Microsoft rather than Google, like me :)
Seriously, I keep looking at the address bar to make sure the URL is still there and I'm not dreaming.
I personally don't care much what the default is for the normal user, but I want to be able to have my full URLs.
Is there also a #upgrade-to-firefox-immediately flag ?
Here's a comment I made from several years ago when Chrome tried to do before what it's trying again now (it's not the first time): https://news.ycombinator.com/item?id=7678729
Maybe this next point is starting to go into the realm of conspiracy theory, but I see far too much evidence of it every day: companies are doing this because they don't want users to learn. They want to keep users naive, docile, and compliant, and thus easier to "herd" for their purposes. They don't want people knowing the truth behind how things work; they would rather "developers" (and only those who explicitly chose to be one --- probably for monetary reasons) learn from their officially sanctioned documentation (which does not tell the whole truth), and not think or discover for themselves.
(I've memorised most of printable ASCII because I did a lot of Asm programming decades ago, so I instantly understood what you mean.)
There are also definitely conscious design decisions about how "cryptic" a particular feature should appear to users. I remember several Bugzilla threads with discussion about whether a config option should be exposed as an "ordinary" field in the settings or only as an option in about:config, so that normal users won't find it.
You'd be surprised how often veteran developers fail to grasp intermediate Unicode concepts (surrogate pairs, for instance), probably because they skipped over (or were not curious enough about) the implementation details of such abstractions.
> Needlessly hiding technical details from kids is going to limit their learning.
I watch kids learning circuitry via redstone in Minecraft on iOS and Xbox - walled gardens, yet impressive learning nonetheless.
But when I was his age, all I had was MS-DOS 3.3. And I had to CD around to various directories, DIR *.EXE to remember the names of executables, etc. It was an environment that exposed more technical details, and kids who were predisposed to learn technical details learned a lot just by using it. Windows 10 doesn't promote the learning of technical details to anywhere near the same extent.
(I try to make up for it a bit. I introduced him to DOSBox.)
It is a mistake to conflate “it was harder for me” with “I learned more”.
My kids can program more complicated stuff in Minecraft than I could at their age. Part of that is having a tool that’s fun and abstracts away the boring bits.
What is "boring" varies from person to person.
I know, when our son plays Minecraft Java Edition, he likes to play it with the debug screen (F3) on.
He doesn't understand what most of the details on that screen mean, although he is learning a few. (He was asking me to explain what X, Y and Z coordinates were.) But, even if he doesn't understand most of it, he still likes it, and probably sooner or later he'll ask me more questions about it.
But the point is not that he memorises the ASCII table. The value is that he learns that computers internally represent letters/punctuation as numbers. The underlying concept is what's important, and the learning of specific values is mainly useful as a way of learning and reinforcing that underlying concept.
However: I believe Apple's motives are aligned with their users and they want their browser to be as safe and as easy to understand/use as possible. Their primary intention is to sell their shiny expensive hardware.
With Google it's more controversial, because who knows what the plan is. Combined with AMP, there is reason to be wary.
Of course, one can make bad decisions based on good motives.
I'd stay clear of that project and use mainstream Firefox instead. And afaik they still don't support WebExtensions.
Suppose you always had to tell people to 'Google it'
Suppose 'I feel lucky' was always the default, and the result was sold to the highest bidder.
It's genuinely a benefit for the vast, vast majority of users, for whom the only important piece of information really is the domain name, to check which site you're actually on. And for more info, you can just click. Copying the URL becomes no more difficult.
The URL path beyond the domain is as useful to most people as an IP address, in other words not at all -- it's just noise. And displaying noise is bad UX. Pretty much only website developers and administrators and SEO people care about the full URL. Granted, there are a lot of those people here on HN, so I understand the pushback, but we're not most users.
But at the end of the day, I don't understand why people seem totally fine with Safari doing this, but not Google.
I find some of the reactions to this ridiculously hyperbolic. "Biggest attack on the web in years"? Seriously?
I get it, Google is a gigantic monster that does not necessarily act in its users' best interests, but that does not mean we need to bring out the pitchforks each time they launch an app update.
And also precisely because of AMP, this might be a very dangerous step towards blurring the lines between original and AMP pages.
I'm sure they've always been tracking these search result clicks, but I think this is a somewhat new behavior, and I find it highly deceptive.
uBlock Origin can block this behavior. Here's a posting with links to more resources on hyperlink auditing
Right now it is a setting to enable in Chrome.
What makes you think that once it becomes default, the switch won't remain to be able to turn it off?
Chrome is built by developers. Presumably, they pay attention to what developers need from it. Which is why their debugging tools overall are so amazing.
Perhaps you should withhold criticism of what you assume they'll do until they, you know, actually do it.
This is bad for web security, since the registerable domain is the part you have to trust, but it's surprisingly difficult to figure out that part.
However I feel a bit uneasy about this since URLs are important and tell you where you are on a website. I prefer Firefox's approach which emphasises the registerable domain in the URL bar and fades out the rest of it, making it easier to spot the important bit. However it's still quite subtle - it could do with being a clearer distinction.
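The tricky part is that the registrable domain can't be computed from the string alone; browsers consult the Public Suffix List for this. A quick sketch using the third-party tldextract package (one PSL-backed library among several; the choice is mine for illustration, not anything browsers actually use):

    import tldextract  # pip install tldextract; bundles a Public Suffix List snapshot

    for url in ("https://example.co.uk/login",
                "https://example.com.evil.org/login"):
        ext = tldextract.extract(url)
        # registered_domain is the part a user actually has to trust.
        print(url, "->", ext.registered_domain)
    # https://example.co.uk/login -> example.co.uk
    # https://example.com.evil.org/login -> evil.org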
A few years later, instead of typing news.ycombinator.com, you would need to search for "hacker news", scroll through the ads, and then click on the link.
So it could be a slow transition to inserting a sort of interstitial ad into your browsing.
I'm also against this in theory, but in practice I don't care much. We shall see.
I am not calling anybody dumb. I'm saying they don't care and don't know there is any reason to care.
Even if so, showing the URL bar changes nothing for them, so why hide it?
Exactly. We should push in the opposite direction (educate people and make the concepts clearer).
A DNS client looking at the list of servers and marking the speed and reachability of each server is the most basic optimization. It makes no sense for clients to add n seconds to every request for every unreachable DNS server.
The async DNS feature uses Chrome's internal DNS client, which behaves differently than glibc, and so Pi-hole appears not to work. Chrome is not injecting its own DNS servers into the mix or whitelisting anything; it always uses your system's DNS servers, it just looks them all up in parallel, which it is allowed (and encouraged) to do by the RFC.
Make sure all your configured DNS servers are Pi-hole and everything will work.
Seems that the problem is not async itself, but that Chrome ignores the system DNS settings and uses Google's own DNS servers instead.
Google's async DNS feature uses Chrome's own internal DNS resolver, which doesn't call getaddrinfo(). It would be incorrect for Chrome to parse this file and attempt to "respect" your settings, because NSS is a series of black-box system-specific modules. If you removed the dns module from /etc/nsswitch.conf then resolv.conf wouldn't even enter the mix on your system, and then Chrome would do the wrong thing. If the dns module behaved differently on your system and /etc/resolv.conf was actually /etc/resolv.json or /etc/resolver.conf then Chrome would again do the wrong thing.
When resolving a name, applications have two choices: either look up the name with glibc, send the request through the NSS gauntlet of black-box modules, and take whatever it returns, or perform the DNS request itself and ignore everything on the system. Any sort of hybrid approach would be more confusing.
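For a feel of what "look them all up in parallel" means in practice, here's a rough asyncio sketch using the third-party dnspython package (my choice for illustration; Chrome's actual resolver is its own C++ code, and error handling is elided):

    import asyncio
    import dns.asyncresolver  # pip install dnspython

    async def resolve_fastest(name: str, servers: list[str]) -> str:
        # Ask every configured server at once; the first answer wins, so a
        # dead server never adds its timeout to the critical path.
        async def ask(server: str) -> str:
            r = dns.asyncresolver.Resolver(configure=False)
            r.nameservers = [server]
            answer = await r.resolve(name, "A")
            return answer[0].to_text()

        tasks = [asyncio.create_task(ask(s)) for s in servers]
        done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
        for t in pending:
            t.cancel()
        return done.pop().result()

    print(asyncio.run(resolve_fastest("example.com", ["192.168.1.2", "1.1.1.1"])))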
What I hate is how poorly they worded the warning for websites that use http instead of https. It says "connection not secure", which makes people think there is a hacker somewhere hacking their connection. What they should have done, and must correct, is make the wording "this website is not following safety guidelines". I'm tired of explaining.
At least back in the day they had the opportunity to learn what a query string is and that no, nobody is hacking anyone.
However with these stupid changes there will no longer be an opportunity to learn even if you wanted to.
Look pilgrim, that's the standard for you right there. It's called RFC 3986.
> The query component contains non-hierarchical data that, along with data in the path component (Section 3.3), serves to identify a resource within the scope of the URI's scheme and naming authority (if any).
But my understanding is that they intend to use signed exchanges specifically for their amp URLs, finally finishing their efforts of forcing people to go to google.com without them ever having realized it.
Really my point was in jest. I think we need to trash the entire www and start again with something content-only focused, with a hard same-origin policy, and far, far lighter than what we have. I tried browsing the web on a dual-core Celeron N3010 recently and it was unusable on all mainstream browsers.
I'm having a hard time thinking of a situation where you have information, some more important and some less, where the correct solution is to delete the less important information. It still has importance!
That might help make the URL more readable, but again it doesn't really help if the parts of the path/query string relevant to trust aren't immediately apparent.
Hiding the https and www is already frustrating enough, and this change would make Chrome barely usable for my purposes.
They explain a number of reasons why it is difficult for people to extract from a URL the part which is relevant to security, i.e. the bit that affects who has authority over the page and how your cookies will be separated by the browser. The cookie sharing actually has some rules I didn't know about as a non-web-developer but experienced URL user. They show how every browser is already going some way towards this, but they all have some problems; for example, Safari shows the full domain, not just the important part.
Not that it's that hard to clean up the URL via the History API after you get access to the page via XSS atm, but there's still a short period of time where the full URL is shown in such a case, which may provoke suspicion.
I was just about to write this but I don't necessarily think it's that far off.
With signed exchanges, AMP pages have the ability to hide the fact you're accessing content through Google. In 2016 Google wrote about testing 'mobile-first indexing' because more people are using mobile devices than desktop browsers.
If Google can control the URL narrative (keeping users from bouncing off AMP pages), it's just one more way for them to be the man in the middle.
Also nice to see DigiCert helping them out, but I’m not surprised with how DigiCert’s product lineup isn’t much more than a test of how much of a sucker you are.
Do you know there's a staggering number of users who type "google.com" into Google?
And this will add a few extra hoops for them to jump through before they learn, so that they'll never have to leave the reassuring embrace of Google's ad trackers. How convenient. :)
Do I need to know an address to drive my car somewhere?
My knowledge that a place exists and I want to go there is sufficient to get me there, without having the physical address memorized.
As a power-user I obviously navigate through URL far more than the average user, but I am not convinced that say a 50 year old nurse using my web software needs to ever touch a URL even a single time, or that it would be beneficial to her user experience to even know what it is.
Are we promoting idiocracy now? If someone doesn't know what it is he/she should find out, or live with not knowing.
I think it is down to having to consciously decide to search before starting to type, instead of just starting to type. If you couldn't remember the URL, you could just misspell it and search, and it worked; for a more specific page, throw in another word and you got the correct page as a search result essentially every time.
In any case, if I do decide to search, the search field is just a ctrl+k away, so the additional convenience of combining the fields never seemed that great to me. (But for Google, of course, it's a very convenient property of this design that everything the user types happens to end up being sent to Google.)