
Chrome has transcended version numbers - AndrewDucker
http://www.codinghorror.com/blog/2011/05/the-infinite-version.html
======
wladimir
That upgrader using binary diffs (Courgette) is impressive: from 10
megabytes down to 78 kilobytes. I wonder why Linux distributions such as
Ubuntu still download entire packages on an upgrade. A lot of upgrade time
and bandwidth could be saved by sending only the differences. And it would
reduce the load on the mirror sites.

Edit: did a bit of looking around and it seems to be planned for Oneiric Ocelot

[https://blueprints.launchpad.net/ubuntu/+spec/foundations-o-debdelta](https://blueprints.launchpad.net/ubuntu/+spec/foundations-o-debdelta)

~~~
nuclear_eclipse
> _I wonder why Linux distributions such as Ubuntu still download the entire
> new packages on an upgrade. A lot of upgrade time and bandwidth could be
> saved by only sending the differences. And it would reduce load on the
> mirror sites._

Speaking as someone who has worked on this problem for my own projects [1], I
think I can answer this.

Fedora/Yum already supports downloading a binary diff between RPM packages to
reduce download size, but this requires keeping a cache of previous RPMs to
run the patch against. There are multiple reasons why you can't rely on
binary diffs against the files actually installed on the system, most notably
that files like /etc/* have more than likely been modified since installation.

But the real problem with binary diffs is that unless you're doing what Google
does to ensure that people stay up to date, the number of binaries you need to
diff against grows very quickly, and there are a lot of edge cases to take
care of.

For example, let's assume some package A has been released as version 1, 2,
and 3. When A has a new release 4, you obviously want to build a diff against
release 3, but then you also most likely need or want to build a diff against
2 and maybe even 1 to take care of people who haven't already upgraded to 3.
And even if you build a diff against _every single version ever released_,
you will still always need to provide a full version of the package as well,
for two cases:

1. New installations, or reinstallations, of the package.

2. When the user has cleared their package cache to save room.

And even beyond that, creating diffs involves a lot more effort and knowledge
on the part of the packaging team because they not only need to _know how_ to
build those diffs, but they also need to keep track of old package versions to
build those diffs against.

The end result is that you trade download bandwidth and time on the part of
the server and end users for a lot of effort, time, and storage space on the
part of the packagers and distro mirrors. For mirrors that are already
encroaching on 50GB for a single release of Ubuntu and/or Fedora, adding a
whole bunch of binary diff packages will most likely grow the repository size
by at least 30-50%, if not more, depending on how many old versions you diff
against.

The question then becomes: does this trade off actually make sense, or does it
present further roadblocks for contribution from packagers and donated
mirrors?

[1]: If you would like to see how I handled this sort of task, I have a Python
library I wrote to handle the client-side updating. I know it's not the whole
puzzle, since it doesn't cover generating the updates, but it might be useful
for someone else. <http://github.com/overwatchmod/combine>

~~~
dspillett
> When A has a new release 4, you obviously want to build a diff against
> release 3, but then you also most likely need or want to build a diff
> against 2 and maybe even 1 to take care of people who haven't already
> upgraded to 3.

You wouldn't need a diff for every combination (1->2, 1->3, 1->4, 2->3, 2->4,
3->4 in the four-version case), just the three diffs 1->2, 2->3 and 3->4.
Then if someone has v2 you send out the diffs for 3 and 4 and have the client
apply both in order to produce the updated package. The saving in space and
in the number of diffs stored grows as the number of versions grows.

This will be a little less efficient in terms of bandwidth on average when
people skip a couple of versions, but it will make little or no difference if
people upgrade in a timely manner (so are only moving in single-version steps
most of the time), and it will save space over storing diffs between all
versions.

As well as the diffs, I would store checksums for each version and have the
client send the checksum of the version it has, just in case, so the server
can send the full package instead of a diff if the client's reference file
seems corrupt.
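
A minimal sketch of that scheme in Python (assuming the third-party bsdiff4
package for the binary patching; PATCHES and CHECKSUMS are hypothetical
stores of sequential diffs and per-version hashes):

    import hashlib

    import bsdiff4  # third-party package: pip install bsdiff4

    # Hypothetical stores: sequential diffs keyed by adjacent version
    # pairs, plus a known-good SHA-256 for every released version.
    PATCHES = {}    # e.g. {(2, 3): b"...", (3, 4): b"..."}
    CHECKSUMS = {}  # e.g. {2: "sha256 hex digest of the v2 package", ...}

    def upgrade(package, have, want):
        # Don't patch a corrupt local copy; fall back to fetching
        # the full package, as described above.
        if hashlib.sha256(package).hexdigest() != CHECKSUMS[have]:
            raise ValueError("local package corrupt; fetch the full package")
        # Apply the sequential diffs in order: have -> have+1 -> ... -> want.
        for v in range(have, want):
            package = bsdiff4.patch(package, PATCHES[(v, v + 1)])
        return package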

Also, if you store diffs in both directions you can serve old versions of
packages (in case people need to roll back due to some unexpected
incompatibility, or they are developers needing to build a test environment
with older library versions) without storing every version completely. This
increases the number of diffs per package, but not nearly as much as storing
one diff between every pair of versions: for 11 versions, 1 diff per change is
10 diffs, diffs in both directions is 20, and diffs between all pairs totals
55 (or 110 for both directions); for 21 versions those numbers are 20, 40,
210, and 420.
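
Those counts fall out of a couple of lines of Python:

    # Sanity check of the diff counts for n released versions.
    def diff_counts(n):
        sequential = n - 1             # one diff per adjacent pair
        both_ways = 2 * sequential     # forward and backward
        all_pairs = n * (n - 1) // 2   # one diff between every pair
        return sequential, both_ways, all_pairs, 2 * all_pairs

    print(diff_counts(11))  # (10, 20, 55, 110)
    print(diff_counts(21))  # (20, 40, 210, 420)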

~~~
Groxx
In my experience, diff + diff + diff = super-long update process. It was done
for many game updates up until several years ago, and an update from the boxed
version through 3 patches could take up to an hour. The install itself,
meanwhile, would take 20 minutes or so at most.

edit: not that I don't think this can be improved, nor that the updating
software they used was any good. Just sayin'.

~~~
dspillett
Often with game updates, patching the compressed multi-asset files directly
would not be efficient (a change early in the file would mean the rest of the
file to the end would need patching, unless they had the foresight to use
something like gzip's rsync-friendly option). To get around that, the patcher
would unpack the compressed resource files, patch them, then recompress.
Depending on how granularly the assets are distributed amongst the installed
files and how many of them were being touched by the patch, this could be a
lot slower than just reading the compressed file from CD to hard drive, which
is all the installer has to do.
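
The effect is easy to demonstrate in a few lines of Python (a toy
illustration, not how any particular game patcher worked):

    import zlib

    old = b"the same asset data, over and over. " * 5000
    new = b"X" + old[1:]  # a one-byte change near the start of the file

    c_old = zlib.compress(old)
    c_new = zlib.compress(new)

    # How far do the two compressed streams agree byte-for-byte?
    prefix = 0
    for a, b in zip(c_old, c_new):
        if a != b:
            break
        prefix += 1

    # Only a few header bytes match; nearly the whole compressed file
    # differs, so a naive binary diff has to reship almost all of it.
    print(len(c_old), prefix)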

Of course this means that if you use the multi-diff method of update
distribution, you need to be careful about your choice of compression
arrangement to avoid the same inefficiency (unless the saving in bandwidth for
the client and the package storage servers is far more important than a bit of
extra time spent on the update client-side).

------
p4bl0
Maybe people are looking at Chrome's version numbering the wrong way. Take GNU
Emacs for instance. At some point the developers realized that their software
would never undergo a change big enough to justify bumping the major version
number, so they ditched it. Now we have Emacs 23 but it's actually Emacs 1.23,
and nobody complains.

I think it's really a non-issue and it's not really worth talking about:
Chrome just doesn't display the '1.' (or '0.' depending on your view point ^^)
in front of its version number :-).

~~~
dkersten
Yeah, it's more a release number. It makes sense to use a single positive
integer and simply increment it every release.

Having said that, though, I quite like Semantic Versioning[1]. The advantage
it has over a single incrementing counter is that you know when API
compatibility changes.

[1] <http://semver.org/>

~~~
chalst
Right. Think of a new version of some software that drops support for a
deprecated part of an API in a minor release. I got screwed by exactly that
happening with a LaTeX package, xy-pic, some years back.

~~~
dkersten
Yeah. Painful. That's the kind of thing semantic versioning aims to solve:
within a major release, all minor and patch releases must maintain backwards
compatibility. Patch releases may not change the API at all; minor releases
may add to it but not remove things[1]. Major releases may break backwards
compatibility by removing or modifying API items. It seems like a reasonable
system to me and would avoid the problem you mentioned.

[1] I guess it can deprecate things, as long as they are still available for
use.
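
A toy illustration of the major/minor rule described above (a hypothetical
helper, not part of any real semver library):

    def compatible(provided, required):
        """True if 'provided' can stand in for 'required' under semver:
        same major version, and not older than what is required."""
        p_major, p_minor, p_patch = map(int, provided.split("."))
        r_major, r_minor, r_patch = map(int, required.split("."))
        return p_major == r_major and (p_minor, p_patch) >= (r_minor, r_patch)

    assert compatible("1.4.2", "1.3.0")      # minor bump: additions only
    assert not compatible("2.0.0", "1.3.0")  # major bump: may break the API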

------
stcredzero
This is exactly the sort of visionary engineering needed to break the field
into the next stage. This isn't just a quantitative difference, it's a
revolutionary qualitative difference!

Our online infrastructure is _broken_ in ways we're dimly aware of, because it
has always been that way. In the same way that people trying to do business
demand network, electric, and roadway infrastructure that once didn't exist,
we will someday demand software infrastructure with features that do not exist
today.

Chief among these will be security features. If Google plays their cards
correctly, they can create an ecosystem that stays ahead of the black-hat
hackers. By correctly incentivizing white-hat hackers, they could expose and
patch security holes fast enough to ruin the economics of the black-hats. This
infrastructure will enable Google to make more money, resulting in a virtuous
cycle.

If the infrastructure can be extended to the server-side, with web app
frameworks that receive security updates with equal rapidity, then Google can
establish a secure, smoothly running "toll road" -- an infrastructure subset
relatively free from problems faced by the rest of the net. That could be
worth billions.

(We'll know this strategy is winning if/when Microsoft starts doing it too.
Once that happens, we'll be in a new era of computing.)

~~~
Splines
> if/when Microsoft starts doing it too

They already do it with Automatic Updates. Turn the update dial to 11 and let
your machine apply them at night. I don't believe they provide binary diffs
for updates, but I suspect that's for logistical reasons rather than
technological ones (e.g., title updates over XBL are surprisingly small).

Of course, MS also hasn't figured out how to update components in-place while
they're being used, so expect your machine to be restarted in the morning. :-/

~~~
tedunangst
Chrome needs to be restarted after updates, too.

~~~
nkassis
I wouldn't put it past Google to be working on fixing that too.

------
masklinn
> Somehow, we have to be able to automatically update software while it is
> running without interrupting the user at all. Not if -- but when -- the
> infinite version arrives, our users probably won't even know.

For what it's worth, this is already available in Erlang (although it was
built in for different reasons, closer to getting the fluidity of web-
application updates in just about any server software): two versions of the
same code can live in parallel in the VM, and there are procedures for
processes to update to "their" new version without having to restart anything
(basically, you switch functions mid-flight, and the next time an updated
function is called the right way, the process just switches to the new code
path).

You need to follow a few procedures and may have to migrate some state, but by
and large it's pretty impressive. And it could certainly be used for client-
side software. The sole issue I'd see would be updating a main GUI window
in-flight (how do you do that without closing and re-opening it?). But I doubt
that part changes _that_ much in e.g. Chrome these days.
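
For comparison, a rough Python analogy of that "switch on the next call"
behavior (a sketch only; handler and code_updated are hypothetical, and
Erlang's real mechanism, which keeps old and new versions of a module loaded
side by side, is far more disciplined than importlib.reload):

    import importlib

    import handler  # hypothetical module whose code the updater may replace

    def serve_forever(requests, code_updated):
        for req in requests:
            if code_updated():  # e.g. the updater rewrote handler.py
                importlib.reload(handler)
            # Calls made through the module object reach the new code;
            # anything already executing the old function runs to completion.
            handler.handle(req)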

~~~
jamii
Following ideas from both Erlang and Android, each GUI screen within the
application would be attached to a single process. When a new screen is
opened, the old process is killed and a new process starts. If new code has
been loaded, the new process will run it.

~~~
masklinn
Screen-wise it's not really a problem: you can just delete and reload the
controls the next time the thing becomes visible. To get back to Chrome, you
could even swap the JS engine and DOM: leave the old one running in existing
tabs and switch new or reloaded tabs to the new one, which is pretty much
seamless.

The issue I have is with the "static" chrome around the moving parts: the
title bar, URL bar, that kind of stuff. There isn't much opportunity to switch
those to a new process, I think. Especially if there are changes to make to
the UI.

~~~
bonzoesc
Fortunately, those don't actually need to be changed very often (no matter
what the Chrome team says). As long as the rendering, JavaScript, and other
parts that touch untrusted data stay up to date, you're probably fine with a
URL bar that hasn't had today's unfashionable components removed.

~~~
masklinn
I did note that in my original comment, but consider this: if the software
never _needs_ to be restarted, you _will_ find somebody who never restarts it.
The result is that sections of the code base may get out of sync, producing
nonsensical behavior. Six months from now, somebody _will_ push code which no
longer works with today's awesomebar (or whatever), and the results may be
minor or may mean significant loss of state and information.

------
omh
There are disadvantages to constant, automatic updates.

I had a call from someone who'd been using Chrome to regularly print a web
page, and one day it just stopped working. The site hadn't changed, but for
whatever reason the latest version of Chrome just didn't render it. And of
course trying to install an older version of Chrome was quite difficult.

(In Google's case they do now have a way to disable the updates, but not all
software is so good about it)

~~~
jinushaun
I experienced this yesterday. My website renders differently in Chrome now
than it did last month, while Firefox and IE still look "correct". I used to
think Chrome's auto-update feature was great, but now I'm not so sure. I can
see why some companies still stick to IE 6 internally. It's stable.

~~~
abarth
Please consider reporting the bug so that we can fix the problem:

<https://bugs.webkit.org/enter_bug.cgi?product=WebKit>

We have a large number of tests, but sometimes we miss things. :(

------
wccrawford
I stopped looking at Chrome's version numbers (unless I have a specific issue
or question about Chrome) back around 9. That's because 9 was the last
development version I used... the features I need are all in the stable
release now. When 10 came out, my 9-dev turned into 10-stable and I stopped
paying attention from there.

At this point, I don't even bother 'updating' (read: closing the browser and
opening it again) for up to a week or two after an update comes out, unless I
need to close my browser for some other reason.

------
qjz
Oh, how I wish I had this issue with Android! I'm currently locked at version
1.6...

~~~
code_duck
No doubt Google would be glad to develop and update Android in the same way...
if we could only get the carriers to step aside.

------
melling
I run the Canary build so I get an update every day.

[http://googlesystem.blogspot.com/2010/07/google-chrome-canary-build.html](http://googlesystem.blogspot.com/2010/07/google-chrome-canary-build.html)

It's impressive how stable the nightly has been.

~~~
chunky1994
Is the Canary build stable enough for everyday use? It seems to have a lot of
impressive features.

~~~
abarth
The Canary build goes out automatically without being looked at by a human, so
there's a very real possibility that it will be unstable. Currently there's
some strange crasher in the PDF viewer, for example.

You can install Canary "side-by-side" with another channel, so you can switch
back to something more stable if Canary goes pear-shaped.

~~~
abraham
And with syncing, everything from extensions to preferences stays consistent
between the two.

------
slackerIII
John Boyd would be proud. Everything else being equal, the team with the
fastest OODA loop usually wins: <http://en.wikipedia.org/wiki/OODA_loop>

------
lmarinho
Apple's App Store, for both Mac and iOS, could learn a thing or two from this.
Their software update experience is awful, sometimes requiring you to
re-download entire multi-gigabyte apps for minimal updates.

~~~
sthulbourn
Not only should Apple do this for iOS apps, they should do it for Xcode. It's
5GB EVERY TIME, it's like a conspiracy...

------
br1
Microsoft actually went to great lengths to build an update mechanism that
doesn't require rebooting. It seems this is not so useful after all, and it's
not being used: <http://jpassing.com/2011/05/01/windows-hotpatching/>

------
kolektiv
There are software systems which do get updated while running, though perhaps
that requires a change in software architecture more than just (very clever)
diff tools. Erlang systems, for instance, can have hot code swapping baked
into them in a predictable way because that requirement is part of the base
system: application life cycle is built into the platform, not on top of it.
Of course, for systems such as telecoms switching, the complexity and cost of
this was worthwhile. For browsers... perhaps not. Cost/benefit analysis is
probably the usual trusted friend here. What would we hope to gain (and how
would we measure it) by letting browsers never restart?

------
ck2
So how do you roll back with Chrome when an update breaks a plugin, for
example?

I guess this is good for non-technical users, but power users are having more
and more control taken away from us.

Personally, I disable all of Chrome's phoning home because it's impolite, it
happens too many times per day, and I have no easy way to verify exactly what
it's sending all those times.

~~~
MikeKusold
It seems as though Google is trying to eliminate this by shipping their own
plugins. Flash has been bundled with Chrome for a while, and a PDF reader has
been shipping for months.

Those two plugins cover 90% of people's plugin needs.

------
Typhon
Would they really stop at Chrome infinity? I'm pretty sure they'd go to
version aleph-one next. And so on.

------
sehugg
That's a great improvement over a generic binary diff. I remember Symantec was
doing something similar for their AV definition updates. In fact, they got
some patents: <http://www.symantec.com/press/2001/n010207b.html>

------
arkitaip
This is slightly offtopic, but WordPress's built-in update feature only works
if you have FTP on your server. If you've disabled FTP for security reasons,
updating becomes a manual process. I wish the WP devs would use patch or some
other CLI-friendly solution.

~~~
griffbrad
If you install the ssh2 PECL extension, the updater will also allow you to
update over SSH/SFTP.

------
evangineer
Made a similar observation yesterday. The only times the Chrome version has
mattered in my recent experience have been with regard to the recent WebGL
security hole and to Native Client.

------
kfool
Here is how I see things:

1. Updates should not have to be applied in sequence.

It is better to produce a binary diff between any two versions and apply only
that one diff. The reason for this isn't efficiency but semantics. Updates not
only fix things, they also break things: they can corrupt application state
(data), both in-memory and on-disk. It can be disastrous to apply an
intermediate update that removes some state, only to realize that a later
version reverses the semantics and needs that state (which was available, but
is now gone).

Preserving backward compatibility is important, which means the ability to
skip some version updates is necessary. To the extent possible, the ability to
reverse updates is important too.
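
As a sketch, the lookup such a scheme implies (names hypothetical):

    # Direct diffs keyed by (from_version, to_version), with no chaining
    # through intermediate versions; reverse pairs allow rollback.
    DIRECT_PATCHES = {}  # e.g. {(1, 4): b"...", (3, 4): b"...", (4, 3): b"..."}

    def plan_update(have, want):
        patch = DIRECT_PATCHES.get((have, want))
        if patch is not None:
            return ("patch", patch)  # one diff, applied exactly once
        return ("full", want)        # no direct diff: ship the full package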

2. The ideal update system should apply updates live, not offline.

With a model that accounts for updating the entire state of an application,
updating live is possible. The reason most updates are not applied live yet is
that the model is not descriptive enough to change the entire state of the
running application.

Notable state that should be updated, but often isn't, includes continuations
and the stack. This is why GUI applications need to be shut down to update.

Scheme's call/cc (call-with-current-continuation) solved making changes to
continuations and stack state decades ago, better than Erlang does: Erlang
cannot force stacks to unwind or continue from arbitrary points.

3. Updates must be produced with source code and programmer input.

Updates should not be produced with binaries as input.

The reason is the need to account for application semantics, which binaries do
not expose in the detail source code does. Although automated, sophisticated
semantic diffing based on control flow can be developed, it is sometimes
inconclusive whether an update will break things.

4. It is necessary for programmers to provide live update guidance.

In the cases where producing provably safe dynamic updates is not possible,
input from the programmer can resolve the conservatism of the safety-
certification process.

Programmers need tools, integrated into the development process, for reasoning
about the semantic safety of their live updates, including tools that help
transform application state between versions.
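
One hypothetical shape for such tooling (a sketch, nothing more): programmers
register a state transformer for each version step, and the updater composes
them, refusing a live update when no safe path has been provided.

    MIGRATIONS = {}  # (from_version, to_version) -> state transformer

    def migration(src, dst):
        def register(fn):
            MIGRATIONS[(src, dst)] = fn
            return fn
        return register

    @migration(3, 4)
    def widen_timestamps(state):
        # Programmer-supplied knowledge: v4 stores timestamps in milliseconds.
        state["last_seen"] = state["last_seen"] * 1000
        return state

    def migrate(state, have, want):
        for v in range(have, want):
            step = MIGRATIONS.get((v, v + 1))
            if step is None:
                raise RuntimeError("no safe live-update path from %d to %d"
                                   % (v, v + 1))
            state = step(state)
        return state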

------
cstrouse
I'm a big fan of their frequent updating even if the version bumps do get out
of hand. Thanks Google for continuous improvements and updates!

------
fendrak
Being a software developer sometimes feels like an especially thankless
position -- if you're doing your job well, users never think of you.

~~~
megablast
Unlike what job? Aircraft mechanic? Pilot? Street cleaner?

------
swah
> But even Google hasn't figured out how to install an update while the
> browser is running.

I don't think it ever displayed that dialog on OS X.

~~~
rudd
The screenshot you're referencing is from OS X! If you don't quit Chrome for a
while, then open "About Google Chrome", that dialog or something very similar
will display.

~~~
swah
Oops, my bad! I do quit Chrome accidentally from time to time (Cmd-Q being a
neighbor of Cmd-W). That's probably when Chrome updates.

------
arapidhs
The updated icon deserves its own version number and release cycle, if it's by
Google. (chuckle)

------
mhb
Making the side tabs acceptable-looking would be worth a real version number.

~~~
tghw
File a bug or feature request:
<http://code.google.com/p/chromium/issues/entry>. Complaining on forums is not
likely to get you very much.

------
orenmazor
Preach on, brotha!

~~~
orenmazor
as an aside: I definitely was agreeing with him. jeez, you guys are difficult.

------
patrickg
Jesus, the 90° rotated "8" that should be an infinity sign is ugly as hell.

Edit: wording

------
tybris
Chrome crashes more than any other browser I've used, but the best thing about
it is that they've made even crashes seamless.

