
A closer look at the performance of Google Chrome - mrjbq7
http://aptiverse.com/blog/closer_look_at_chrome/
======
comex
The claim that process isolation is "antiquated" is absurd, since it's what
everyone else is doing except for those, like Firefox, who have not yet
started to do anything at all. I don't know any other projects that do what
NaCl does, so regardless of whether Google _should_ switch to pure NaCl for
sandboxing, if they _did_ , it would be innovation, not scrapping an
antiquated technology.

As for whether they should - although Native Client's software fault isolation
can provide better latency than hardware-based solutions because there's no
need for a context switch, it takes a hit in throughput (7% overhead) because
of the code contortions involved (though being able to trust the executable
code might improve things). There would be significant issues with supporting
JIT in such a model, because JITs typically like to have rwx memory. Multiple
sandboxes in the same process wouldn't work with the current NaCl model on
32-bit platforms. And although the SFI has been well secured, putting user
code in the same address space as the master process might make exploitation
easier - this would be partially mitigated if the address of the user code
could be hidden from it, but doing that would require additional overhead
because addresses could no longer be stored directly on the stack.

~~~
mistercow
The actual phrasing is "antiquated style of process isolation", which leads me
to believe that he's not saying that process isolation is inherently
antiquated, but that the way Chrome does it is.

In that case, it still needs some explanation. Chrome's process allocator is
apparently pretty complicated, so assuming a reader knows enough to just take
it as read that it's "antiquated" is a bit much.

~~~
azakai
> The actual phrasing is "antiquated style of process isolation", which leads
> me to believe that he's not saying that process isolation is inherently
> antiquated, but that the way Chrome does it is.

That was my assumption too. Chrome's IPC was developed in secret and never
made it into WebKit; instead, Apple later developed WebKit2 - in part as a
response to Chrome's IPC - which does IPC for WebKit in a different way. It
sounded like the author was saying that Google's way, which is older than
WebKit2, is inferior.

~~~
bengoodger
I don't see how the fact that Chrome was secret prior to its 9/2/08 launch
matters much here. It's existed publicly for ~4.5 years at
<http://src.chromium.org/viewvc/chrome/trunk/src/> and all of the code is
pretty actively developed [1] & [2]. Chromium has now been public much longer
than it existed privately, and for any particular file in Chromium, the odds
are good that it's substantially different from launch day.

[1] <http://svnsearch.org/svnsearch/repos/CHROMIUM/search> [2]
<http://build.chromium.org>

~~~
azakai
> I don't see how the fact that Chrome was secret prior to its 9/2/08 launch
> matters much here.

It might not, yeah. But I've heard theories that part of the reason it never
made it into WebKit was that it was developed in secret, and that this annoyed
Apple. Together with v8, another secretly-developed project that also did not
replace its parallel in WebKit.

~~~
bengoodger
I won't speculate on why people make the technology decisions they do - and in
this case I'm somewhat distant from the WebKit team.

I am pretty familiar with Chrome's multiprocess webview harness. This is one
case where, approx. 2 years ago, it was incestuously tied to a lot of Chrome-
the-desktop-browser internals - part of the technical debt accumulated in the
sprint to ship something. So I can see why back then it wasn't appetizing as-
is.

A heroic effort by a team of engineers finally managed to separate it in 2012
into "content" (not Chrome, get it?) and set up its own shell, suite of tests,
and public API for use by embedders (one of which is src/chrome, but there are
others now even within Chromium). It's still not as elegant as I think any of
us would like, but it is now usable from a standalone app.

One of these days I need to write a series of posts about some of the "big C++
app design" lessons we've learned as a team over the past few years.

~~~
yuhong
Opera is planning to use this, for example.

~~~
cmwelsh
Yes, but only in the sense that Opera is evolving into a new brand of
Chromium. Opera isn't a relevant factor in the technology discussion anymore.

------
psn
[https://plus.google.com/103382935642834907366/posts/XRekvZgd...](https://plus.google.com/103382935642834907366/posts/XRekvZgdnBb)
is a Chrome developer talking about cache metrics within chrome. He claims the
cache is capped at 320MB. Now the graphs in the article never show the cache
size hitting 320M, so possibly they are both right. ;)

I would prefer it if the article had more information on how to reproduce
their tests. For example, they claim a faulty hashmap implementation. This
seems like it would be possible to benchmark. Instructions on reproducing the
100ms delay between button click and network traffic would be cool too - as
would data on how much worse that makes facebook's latency. Also, is their
cache backed by ssds or spinning disk?

I'm also impressed that the speed of context switches matters on websites.

The jump to claiming that chrome caching is tied to ads is interesting.
Perhaps the author could fill in more details.

~~~
hobohacker
I am said Chromium developer.

Let me respond to this comment from the article: """ This is not the case for
Chrome: the browser keeps all the cached information indefinitely; perhaps
this is driven by some hypothetical assumptions about browsing performance,
and perhaps it simply is driven by the desire to collect more information to
provide you with more relevant ads. Whatever the reason, the outcome is
simple: over time, cache lookups get progressively more expensive; some of
this is unavoidable, and some may be made worse by a faulty hash map
implementation in infinite_cache.cc. """

Chromium (and thereby, Google Chrome) does not cache forever. The author is
clearly misled by the infinite_cache.cc file he referenced. That is our
experiment file, designed to examine a theoretical "infinite" cache's
performance for data gathering purposes. It doesn't actually cache the
resources, but just records the keys (basically, the URL). It only runs on a
small set of user browser sessions (only for users who opt-in to helping make
Google Chrome better and a subset of their browsing sessions).

As my previous Google+ post mentions (thanks to the parent for linking it),
we cap the cache size at 320MB. The author is simply factually incorrect about
the aforementioned claim.

As for cache performance as the cache gets larger, I fully believe that it
gets slower. We have data that backs up this assertion. Of course, larger
caches mean that more gets cached. And there are ways to restructure the
cache implementation to avoid the painful latency on cache misses. While cache
misses are indeed a large percent of resource requests, it is misguided to
analyze the cost of cache misses in isolation. For the opposite argument about
how we should be increasing cache sizes, see Steve Souders' posts:
<http://www.stevesouders.com/blog/2012/10/11/cache-is-king/>,
<http://www.stevesouders.com/blog/2012/03/22/cache-them-if-you-can/>, etc.

The caching issues are far more complicated than described in the original
post. The data is much appreciated, and we have similar data that we're
looking at as we're making our decisions about caching.

~~~
slacka
To set the cache to only 100MB, you can always use the "--disk-cache-
size=104857600" flag.

My issue with chrome is that it eats up a lot more RAM than Firefox. When
doing research, I often have 30-50 tabs open. With Chrome my system runs out
of physical RAM and starts thrashing. With Firefox, the UI becomes
unresponsive due to its single-threaded design.

I wish Chrome would start a Memshrink project like Mozilla did, or that
Mozilla would finish what they started with Electrolysis.

~~~
Cthulhu_
> My issue with chrome is that it eats up a lot more RAM than Firefox. When
> doing research, I often have 30-50 tabs open. With Chrome my system runs out
> of physical RAM and starts thrashing. With Firefox, the UI becomes
> unresponsive due to its single-threaded design.

Alternatively, since you seem to be a power user, you could consider upgrading
your hardware to 8 or 16 GB of memory; it's not that expensive nowadays, and
given your power user status, more memory = faster computer experience =
higher productivity. Or just more tabs.

[old man mode] Back in the day we upgraded our computers instead of blaming
software.

~~~
emn13
There's no need to answer FUD with FUD: Firefox's UI does not become
unresponsive with an increasing number of tabs. Certainly not with just 50,
anyhow; I do that all the time and never notice a slowdown. The tab-closing
animation is less smooth than chrome's however. And while it's certainly true
chrome uses more memory per tab, I can't imagine running into that problem
very easily even on somewhat outdated hardware. A 4GB system should be able to
do 50 tabs normally, and how much more do you need?

~~~
ksec
I am a Firefox user, so this is not coming from a Firefox-bashing POV. I can
open 100-200 tabs in Firefox without problems if I disable Javascript and all
add-ons.

As soon as you have some websites with heavy JS usage, even 50 tabs can slow
down the UI. This is on a quad-core Ivy Bridge with 8GB RAM and an SSD.

The amount of JS on one website now is getting insane. Chrome has a similar
problem as well, as the OP said - just a different one.

So it is not FUD at all.

~~~
emn13
The FUD is that the # of tabs has anything to do with it. Also, these problems
have been getting a _lot_ better with recent releases, which are much better
at avoiding blocking the UI thread. It's not perfect, but it's not something
you see very often either. Let me put it this way: I can't even remember the
last time I had a UI slowdown, and I use FF on various machines with lots of
tabs all the time. (Firebug's still really slow, though.)

So when you say "some heavy JS usage" what exactly do you mean? Certainly not
stuff like google docs/mail/calendar, and they're all heavy on the JS...

------
ComputerGuru
I don't understand why the author comes across as having a serious axe to
grind. What is aptiverse's horse in the race?

Also this: _This is not the case for Chrome: the browser keeps all the cached
information indefinitely; perhaps this is driven by some hypothetical
assumptions about browsing performance, and perhaps it simply is driven by the
desire to collect more information to provide you with more relevant ads._

I don't know about him, but I LOVE the fact that Chrome will show me a link
in purple or whatever if I visited it 2 years ago. Completely, totally,
absolutely love it. Other browsers can be (the last time I checked) configured to
behave similarly. When you browse hundreds of web pages a day, catalog only a
few of them, and then research a topic you once looked up over a year ago
again, it helps to know which pages you've seen and which you haven't.

~~~
vor_
But the drawback, according to the article, is an increasingly expensive
impact on cache lookup over time.

~~~
ComputerGuru
As the author notes, that's mainly due to the (according to him, but also the
only explanation I can think of) lookup table implementation.

Something like this should be done with a hash table for constant-time lookup
regardless of the number of entries (as entries are not being added "in real
time" non-stop, the cost of bucket resizing shouldn't be too great), or with a
trie for the best storage properties when many URLs share the same domain (the
case the author notes). If done right (correct hashing algorithm, good
implementation, decent collision handling for the hash table; or anything
decently performing in terms of space/time for the trie), this shouldn't be a
problem.
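
As a rough sketch of the hash-table approach (illustrative only - the names
and the shape of the metadata here are made up, and this is not Chromium's
actual cache code):

```javascript
// Hypothetical cache index: metadata keyed by URL, with amortized O(1)
// lookups regardless of how many entries have accumulated.
class CacheIndex {
  constructor() {
    this.entries = new Map(); // URL -> metadata (e.g. offset into a cache file)
  }
  put(url, meta) {
    this.entries.set(url, meta);
  }
  lookup(url) {
    return this.entries.get(url); // undefined on a cache miss
  }
}

const index = new CacheIndex();
index.put("http://example.com/a.js", { offset: 0, size: 1024 });
console.log(index.lookup("http://example.com/a.js").size); // 1024
console.log(index.lookup("http://example.com/missing.js")); // undefined
```

A trie keyed on URL segments would instead share common prefixes like
http://example.com/ across entries, trading some lookup speed for space.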

~~~
emn13
It's _always_ going to be a problem; even the best hash tables of this size
will suffer performance problems as they age. As the table increases in size,
less of it will fit in memory/various caches, and more of it will involve
(expensive, potentially numerous) disk seeks.

Once you take caches into account, hash tables are _not_ amortized O(1).
Having said that, those graphs show delays of well over 100 milliseconds, and
that sounds excessive. It's possible the delay is primarily due to a poor
implementation and not so much due to inherent limitations.

~~~
twoodfin
For something like link coloring over a long history, a Bloom filter would
seem to be ideal for reducing the number of true hash table lookups you'd need
per page.
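
For illustration, a toy Bloom filter over visited URLs might look like this
(a sketch with made-up sizing, not any browser's actual implementation):

```javascript
// Toy Bloom filter for "might this URL have been visited?" checks.
// A "no" answer is definitive; a "yes" may be a false positive.
class BloomFilter {
  constructor(bits = 1 << 16, hashes = 3) {
    this.m = bits;
    this.k = hashes;
    this.bitset = new Uint8Array(Math.ceil(bits / 8));
  }
  // FNV-1a style string hash, salted per hash-function index.
  hash(str, seed) {
    let h = (2166136261 ^ seed) >>> 0;
    for (let i = 0; i < str.length; i++) {
      h ^= str.charCodeAt(i);
      h = Math.imul(h, 16777619);
    }
    return (h >>> 0) % this.m;
  }
  add(url) {
    for (let s = 0; s < this.k; s++) {
      const bit = this.hash(url, s);
      this.bitset[bit >> 3] |= 1 << (bit & 7);
    }
  }
  mightContain(url) {
    for (let s = 0; s < this.k; s++) {
      const bit = this.hash(url, s);
      if (!(this.bitset[bit >> 3] & (1 << (bit & 7)))) return false;
    }
    return true;
  }
}

const visited = new BloomFilter();
visited.add("http://example.com/");
console.log(visited.mightContain("http://example.com/")); // true
```

Only a "true" result would require the expensive lookup in the full history
store; every "false" skips it entirely.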

~~~
emn13
Large bloom filters have exactly the same problem. And since they're fixed
size, you'd need a potentially huge bloom filter to avoid huge numbers of
false positives; more likely you'd need to periodically regenerate it based on
the original data.

This is a really tricky optimization because on a positive hit you've
introduced _more_ random I/O! After all, you've got the bloom filter and then
the hash table lookup. False positives are also bad - so you only save
something on true negatives. Is it worth it? Only if you get the tuning just
right.
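
The tuning in question comes down to the textbook false-positive estimate for
a Bloom filter with m bits, k hash functions, and n entries (the sizes below
are made-up examples):

```javascript
// Standard Bloom filter false-positive estimate: p ≈ (1 - e^(-k*n/m))^k
function falsePositiveRate(m, k, n) {
  return Math.pow(1 - Math.exp((-k * n) / m), k);
}

// e.g. 1M bits (128 KB) and 7 hashes over 100k visited URLs:
console.log(falsePositiveRate(1e6, 7, 1e5).toFixed(4)); // "0.0082"
```

At that sizing, roughly 0.8% of never-visited URLs would still trigger the
expensive lookup, and every visited URL triggers it on top of the filter
check itself - which is exactly the tuning question.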

------
dhconnelly
For anyone who wants more background on Chrome's multi-process model:

\- Architecture: <http://dev.chromium.org/developers/design-documents/multi-process-architecture>

\- Models: <http://dev.chromium.org/developers/design-documents/process-models>

\- IPC: <http://dev.chromium.org/developers/design-documents/inter-process-communication>

\- Sandbox: <http://dev.chromium.org/developers/design-documents/sandbox>

~~~
js2
And how it differs from WebKit2 <http://trac.webkit.org/wiki/WebKit2>

------
jgrahamc
_This philosophy is embraced by the developers working on WebKit: in fact, the
code responsible for rendering a typical web page averages just 2.1 effective
C++ statements per function in WebKit, compared to 6.3 for Firefox - and an
estimated count of 7.1 for Internet Explorer._

What is an "effective C++ statement"? That's a really odd measure and I can't
get my head around it.

~~~
taf2
Last I heard IE was closed source ... I'd love to know how he got his hands on
the IE code base to make that measurement... All his posturing about - design
decisions being a business need of google make me wonder whether this was a M$
sponsored article...

~~~
Someone
They say 'estimated'. A way to do that is to look at the disassembly of IE
code, assuming an on average constant number of assembly instructions per C
statement. I don't think that is a bad assumption, but it does ignore
potential differences between compilers.

Also, IE source likely is available through Microsoft's Shared Source
Initiative (<http://www.microsoft.com/en-us/sharedsource/default.aspx>). After
all, Microsoft has vehemently argued that IE is inseparable from the OS. I
think it would require some lenient interpretation of that license to use it
for this purpose, though.

~~~
yuhong
AFAIK, the compiler version used to compile any MS product can be figured out
from the linker version stored in the PE header.

------
veb
This is probably off-topic, but I've started using Safari religiously now.
When Chrome first came out, it was barebones, fast and did the job. This is
what Safari currently feels like, so I'm using it. Fast, stable, and does the
job.

Now Chrome _feels_ like it's just another bloated browser. Which is slow, and
hogs my computer. </opinion>

~~~
k3n
Perhaps Chromium proper would be a good fit for you; it's got the polish of
Chrome UI but lacks all of that crap that Google has started bundling with
Chrome (sync, etc.).

The hardest part is finding the downloads, since they go to great lengths to
prevent anyone from easily obtaining a compiled binary.

~~~
mistercow
>all of that crap that Google has started bundling with Chrome (sync, etc.).

Why do people keep bagging on sync? Personally, I'm a fan of not having to
reïmport my bookmarks and reënter all of my autosaved passwords every time I
reformat or change computers.

~~~
k3n
Because I've never used it and yet I'm hounded to "give it a try" on what
seems like a weekly basis.

~~~
mistercow
OK, yeah I see how that would be annoying.

~~~
k3n
It's kind of an extension of the annoyance from not using G+ and yet having it
crammed down my throat; it's the same tactic just with a little less gusto.

------
stanleydrew
"... and perhaps [Chrome's aggressive caching] simply is driven by the desire
to collect more information to provide you with more relevant ads."

The whole article loses credibility with me because of that one statement.
Surely this guy knows that a local DNS or document or image cache is not being
used to provide the Google mothership with more data to improve Adwords
performance, right?

------
chadaustin
Having tested the interactivity of various WebGL applications on low-to-
medium-end graphics cards, I will say that Chrome does have significantly
higher latency than Firefox. Try this demo:

<http://n3emscripten.appspot.com/instancing.html>

Crank the number of instances up until the frame rate drops (on my Intel HD
Graphics 4000 it's about 20,000) and then drag your mouse. Notice that the
drag latency is significantly worse in Chrome than Firefox.

~~~
fastest963
I can get up to 51k before I start to notice a drop in Chrome (M27) and in
Firefox (R19) I can only get to about 27k. So it might just be that Chrome
isn't optimized for your Intel graphics and is better with dedicated cards?

------
polskibus
The opinion on process isolation is the most controversial statement in the
article, but the other claims are more interesting and easier to verify. If
the findings are confirmed by others, they will hopefully drive the
webkit/chromium ecosystem towards improvement.

------
ybaumes
I am just wondering here: two blog posts from a startup in "stealth" mode. One
about "advanced" CSS tricks, the other criticizing the Chrome browser. Are
those guys about to release a competing web browser? :P Too few cues to know,
but the question still popped into my mind.

It would explain the critical tone of the article. Don't put words in my
mouth: they could be proven right or wrong; that is not my point, and I don't
want to get into the debate about Chrome performance here. What I suggest is:
not knowing the real goals of Aptiverse's business and their interests, I
would step back a little and try to look at the big picture. And avoid a
religious Emacs/vi war.

But well then, I am going to make a bold bet anyway. They could be about to
release a new ground-breaking web browser in the near future, make everyone
switch, and fix the issues they announced in their blog post. You never know
what the future is made of! :-)

------
STRML
When it comes to browser benchmarks, there is a lot of emphasis on JS
performance, page load time, and memory consumption. But one area where I've
seen Chrome absolutely fall apart is scrolling performance. Safari regularly
gives me 3-5x the frame rate when scrolling, to the point where some websites
are nearly unusable on Chrome (say, Reddit with RES) but are butter-smooth on
Safari.

------
ChuckMcM
There was an interesting "aha" moment in the development of the Xerox Star
system when they figured out that all the layers of abstraction meant each
character placed was taking a lot of subroutine calls. Flattening the
architecture resulted in a 10x improvement in performance. It was an amazing
result.

------
linuxhansl
I did my own tests a while ago (with Firefox and Chromium) and concluded
that, for the sites I frequent, Firefox performs much better and uses less
RAM. Every now and then I repeat that experiment... so far, always with the
same outcome. The notion that Chrome is faster is mostly based on benchmarks;
for my browsing behavior that does not translate into real-life performance.

~~~
logn
I had the same experience. For a while when I was using a slow internet
connection, I noticed speed differences a lot. For a few sessions I would try
loading each page on Firefox, Chrome, and Safari, and Firefox was clearly the
fastest.

This is another reason why we should be wary of everything moving to WebKit.
If one day it's just too slow, we're not left with easy options to move away
from it.

------
aristidb
I don't buy the claim that process isolation is superseded by NaCl-style
validation.

~~~
cpleppert
I agree... You can only validate what is presented to you; a hacker won't
kindly hand you a piece of code to run after exploiting a hole and ask you to
verify it.

It's also rather circuitous; browsers already verify what they are executing.

------
dbloom
_"This is because the synchronization needs to occur over a low-throughput,
queue-based IPC subsystem, accompanied by resource-intensive and unpredictably
timed context switches mediated by the operating system. To understant [sic]
the scale of this effect, we looked at the latency of synchronous, cross-
document DOM writes and window.postMessage() roundtrips."_

Web pages running in different Chrome renderer processes can _only_
communicate using postMessage. WebKit's design makes it practically impossible
to access DOM or JS objects across different processes or threads (the only
browser that can do this is IE - top-level browsing contexts have run in
different threads in IE since the beginning).

You can test this hypothesis by creating two same-origin documents in
different processes. In the first window, do window.name="foo"; then in the
other, do window.open("javascript:;",
"foo").document.documentElement.innerHTML="hello"; this will work in every
browser except Chrome.

Chrome actually provides a way for web developers to explicitly allow a
window.open invocation to create a new renderer process (see
<http://code.google.com/p/chromium/issues/detail?id=153363> ). This way, the
author can allow Chrome to use a new process if they don't need access to the
popup beyond postMessage.

So, I have no idea where that peak 800 millisecond DOM access latency came
from, but it's not from IPC across renderer processes. I'd love to see the
benchmark that was used to get that number.

~~~
taeric
My reading has notoriously sucked lately, but it reads like you are
strengthening the blog's point. Is that right?

~~~
dbloom
The circumstance I described that Chrome doesn't support is very rare in real-
world web pages. It's pretty unusual to use window.open to get a JS reference
to an _existing_ window, and not supporting that makes Chrome's renderer
process model much much simpler. So, it's a reasonable trade-off.

TL;DR: Chrome does a good thing. This blog post is written by someone who is
not well informed.

~~~
taeric
It makes it simpler, but does it also make it slower? Seemed that the blog's
point was that communicating between windows is slower in chrome than in other
browsers because of this behavior. You simply further explained why, and
confirmed that it is only in chrome. Right?

Now, I do think there is room for argument that this is a better way. But you
do not seem to be undermining any of the blog's points. Those being that
chrome has a slower process to communicate between windows, and that it is the
only browser that does this. The frequency with which this is needed was not a
point of contention.

~~~
dbloom
The author claims that windows in different processes communicate slowly in
Chrome.

My claim is that windows in different processes _cannot communicate at all_ in
Chrome. Only same-process windows can communicate -- and that refutes the
author's claim that IPC slows down cross-window/frame communication in Chrome.

~~~
taeric
Hmm... interesting. I was honestly under the impression that all tabs (or
windows) were separate processes. Are you saying that if you call window.open
with the same domain, a new process is not started? (Sadly, I don't have
Chrome handy to test this right off.)

~~~
dbloom
That's right -- Chrome uses the same process when there is a JS reference
between the windows.

(In addition, Chrome will sometimes make windows/tabs share a process if there
are a lot of tabs open, to save memory. There is a limit to the total number
of render processes that Chrome will have.)

~~~
taeric
Cool, thanks. I am, not shockingly, curious how this works, now. Is it just a
hinting mechanism, or can the rendering process of a tab/window change on the
fly? What happens when you go to a new url in an opened tab? (I mean these
more as things I'm now interested in. Maybe I'll get off my virtual butt and
check the source. Granted, that source tree is less than casually
approachable.)

~~~
dbloom
Unfortunately, I don't know details that specific (I'm not familiar with the
codebase either). Maybe ask on IRC?
<http://www.chromium.org/contact/-chromium-irc>

------
33a
The only real WTF here is the history issue. The rest of the stuff -
especially the "antiquated" process isolation - seems like a reasonable set of
trade-offs. Switching to a better hash implementation should solve 90% of the
performance problems.

------
xpose2000
This is a fantastic blog post, and I am thankful Alex took the time to write
about it. There is no question someone over at the Chrome team is starting a
conversation about these findings.

The best thing about Chrome is that they move fast. So I suppose the first
step is to get an official response by someone over there....

------
damian2000
I started using Chrome in late 2010 and used it continuously until about 3
months ago, when it started suffering from tabs just freezing every few
minutes of browsing. This behaviour started happening around the same time
on two different machines - one a quad-core desktop, the other a Lenovo
laptop. Since then I've gone back to Firefox, but this article is making me
think I may just need to do a fresh install of Chrome.

------
ww520
While we are at the Chrome performance problems, I have encountered one
recently when running Javascript code in Chrome. I have job progress data
shipped back from the server to the browser asynchronously, which drives the
progress bar and the data are concatenated together for display. Chrome simply
can't handle long string of concatenation. It just hangs. Other browsers have
no problem.
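
A common workaround for this kind of thing (a generic sketch, not a claim
about what Chrome's engine does internally - the helper names here are made
up) is to accumulate chunks in an array and join once, instead of growing one
string with `+=`:

```javascript
// Growing a string with += can degrade toward O(n^2) total copying in some
// engines; collecting chunks and joining once does a single final copy.
function makeProgressLog() {
  const chunks = [];
  return {
    append(chunk) { chunks.push(chunk); }, // O(1) per progress update
    render() { return chunks.join(""); }   // one pass, only when displayed
  };
}

const log = makeProgressLog();
log.append("job 1: 50%\n");
log.append("job 1: 100%\n");
console.log(log.render()); // "job 1: 50%\njob 1: 100%\n"
```

Rendering lazily also avoids rebuilding the whole string on every
asynchronous update, which matters more as the log grows.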

------
zobzu
So uhm, it seems like firefox is faster than chrome after a week of use :p

~~~
ceautery
Unless you clear your cache, in which case, in addition to being more secure,
Chrome is faster.

~~~
LeonidasXIV
Why is Chrome more secure? Did I miss a memo? I was under the impression that
Firefox security issues are fixed in a pretty decent manner.

~~~
spoiler
To be honest, I just trust Chromium (and Chrome) more. I am not even sure
why. It sort of gives me a sense of reliability, security and stability.

I had performance issues with Firefox in the past, though, but I doubt that's
it.

I guess what makes me feel all tingly inside is the GUI, which I love.

~~~
smlsml
How is "gives you a sense" a good reason?

------
benologist
It's interesting that Chrome's caching is so broken. I wasn't aware it
actually existed at all, because I use the real internet that is not next
door to the googleplex, and every time I hit the back button I enjoy a 1-5
second break waiting for the page to be completely reloaded.

------
Revisor
Two days later and this article doesn't exist. So much for link rot.

------
cooldeal
>This is not the case for Chrome: the browser keeps all the cached information
indefinitely; perhaps this is driven by some hypothetical assumptions about
browsing performance, and perhaps it simply is driven by the desire to collect
more information to provide you with more relevant ads

>Some of these issues - such as the "infinite history" or the antiquated style
of process isolation - may be driven by Google's business needs, rather than
the well-being of the Internet as a whole.

How does caching the files indefinitely lead to better ad targeting? Keeping
the history, perhaps, but I don't believe Chrome's web history is used to
target ads when they have a lot of other ways of doing it, like Google cookies
from people logging into Gmail at home and work, third party sites using Ad
Words or Google+ etc. etc.

~~~
Filligree
> How does caching the files indefinitely lead to better ad targeting?

It doesn't, of course. That's pure FUD; Chrome doesn't contribute to ad-
serving in any form other browsers don't.

The differences between Chrome and Chromium aren't that large, people would
notice.

~~~
Cthulhu_
>> How does caching the files indefinitely lead to better ad targeting?

>It doesn't, of course. That's pure FUD; Chrome doesn't contribute to ad-
serving in any form other browsers don't.

Actually, I'd argue that caching files (indefinitely or otherwise) speeds up
the internet for the user; higher speed = more pageviews = more ad impressions
and potential ad clicks.

Google's quest for internet speed is a win-win-win: a win for us since we get
faster internet, a win for them since they get more ad impressions / revenue,
and another win for them in goodwill and a positive reputation.

~~~
ybaumes
I would take your argument (which is perfectly valid to me) further, for the
sake of completeness: it helps them to have a foot in web standardization
and, more importantly, to offer a viable alternative solution, fully
integrated and "in control". Secured. I don't think it is only a matter of
"speed". :-)

Before Chrome they were at the "mercy" of the market leader: IE - with its OS
companion MS-Windows, the first "barrier" to the web, which is Google's main
playground. IE was not really moving the web forward, and was known for a lot
of issues. I remember people reluctant to use their credit cards on the web
because of their unconscious feelings about MS-Windows/MS-IE insecurities.
Things have evolved A LOT since then. Microsoft IE is now much more
respectful of the W3C standards AFAIK, more stable, etc. And as you see, from
that stability a lot of business emerged. I could not have envisaged so many
possibilities if the status quo still held today as in 1998. I would make a
bold statement: thanks to Firefox, Chrome, and hackers, we are now seeing all
those startups...

It was a very important challenge for Google (and it is not finished)
because, as you said, they have an incentive in people using the "open" web
more and more. The more users on the internet, and the more time they spend
on it, the more they watch ads, spend money, and consume. And I still know
people frightened by this "tool" that they don't understand. Viruses, credit
card number theft, etc. "Who are those guys, the Anonymous hackers?" I was
asked not long ago. I was visiting friends who own a PS3 when the PS3 network
was shut down because of an act of piracy, leaving them totally shocked by
their useless video games. Etc, etc. The list of examples is long.

Google understood early that it was in their interest to work on this. Those
topics will take up more and more space in the news in the near future, I
guess. Google won't be able to sort everything out, of course. But they
needed to take further control of their own fortune. Chrome was a step
forward into action.

They are still working on that full "vertical" offer. Chrome was just ONE
part of the full scheme. They've released Android, now they are releasing the
Google Pixel; tomorrow, Google Glass. These must be exciting times at Google,
because the work of so many years is taking form, and I guess it will
translate into an even better future. At least they have shown me that they
envisioned, a long time ago, exactly what the threats and challenges to their
business are, and how to tackle them. Facebook, native GUIs, and any other
kind of "closed" web (as opposed to the open web) are another kind of threat,
but that is another story, I guess... :-)

------
barista
A quote worth noting from the article: "Some of these issues - such as the
"infinite history" or the antiquated style of process isolation - may be
driven by Google's business needs, rather than the well-being of the Internet
as a whole. Until this situation changes, our only recourse is to test a lot -
and always have a backup plan."

~~~
lenazegher
This glosses over how anti-user infinite history is.

The problem of the ever-expanding cache is annoying but easy to deal with -
Ctrl+Shift+Del, select only "cache", then "obliterate since the beginning of
time".

But if you follow this procedure with your browsing history, your (or at
least, my) browsing experience is significantly degraded because all your URL
autocompletes are gone, at least until you re-visit all your regular sites.
You can tell chrome to delete, say, just your browsing history from the last
week, but that doesn't help you when what you want to do is delete all
browsing history _except_ that from the last week (to preserve your
autocompletes).

It's a real PITA.

~~~
wonderzombie
<https://news.ycombinator.com/item?id=5282234>

The cache is not infinite.

------
martinced
I regularly delete Chrome's cache because I somehow "discovered" that
behavior intuitively a long time ago.

That said, it's totally silly to criticize security measures for slowing down
page rendering / navigation a bit.

I'd take a 50% slower Internet _today_ if, in exchange, it were 100% secure.
I know that's not doable and won't happen any time soon.

But those willing to sacrifice security in the name of perf should be shot
dead.

------
waltz
Chrome should get rid of that hideous download bar

