
History of the browser user-agent string (2008) - jesperht
http://webaim.org/blog/user-agent-string-history/
======
antoncohen
It gets even better...

And Chrome was good, and MSIE wasn't, so webmasters served bad pages to MSIE.
Microsoft was not happy. So they created Edge. Edge was good, but Microsoft
feared webmasters would treat it like MSIE. So Microsoft Edge pretended to be
Chrome to get the good pages.

Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko)
Chrome/39.0.2171.71 Safari/537.36 Edge/12.0

~~~
hermitdev
I always feel like these timelines miss an era: the era where Netscape
stagnated and MS came in with a superior, free, non-standards-compliant
browser that actually pushed the evolution of HTML/JS forward. IE 4.0-IE 6.0
pretty much pioneered features that would become HTML 4. For example:
dynamically modifying the DOM and asynchronous XML requests.

IE pretty much ushered in the era of truly dynamic websites. Granted, IE 6
sucked (eventually), and thus began the era of IE stagnation. MS got the
market share they wanted, then basically sat on their hands for a decade (as
the other browsers started innovating again and W3C got off its ass).

Shit on IE all you want, but there was a forgotten era when it was the
pioneer.

Also, I'm generally finding Edge on Android a better experience than Chrome on
Android after having played with it for a month or so. I still prefer Chrome
vs Edge on my Windows desktop.

Obviously, YMMV, but these are my personal observations and experiences.

~~~
wahern
Netscape Navigator had layers
([https://en.wikipedia.org/wiki/Layer_element](https://en.wikipedia.org/wiki/Layer_element))
long before Internet Explorer made the innerHTML property mutable. And unlike
layers, IE's innerHTML was unusable for dynamic updates because it leaked
memory like a sieve and after dynamically reloading content a few times IE
would grind to a halt.[1]

Layers lost out and innerHTML won the day, but it's a stretch to say IE was
more innovative than Netscape. Arguably innerHTML won because this was the era
where Microsoft was throwing its monopoly weight around to push IE and kill
Netscape.

[1] People were building amazing DHTML games using JavaScript and layers in
Navigator 4. IE 4's innerHTML seemed more powerful but it was just plain
unusable in practice. Using the DHTML games as a model, I built a timesheet
entry and management application for Navigator (layers) and IE (innerHTML)
that never reloaded the page. This was circa early 2001. With Navigator you
could reload timesheets all day long; with IE the application became unusable
after just a few timesheet changes. IE 4 dominated the landscape so long that,
in truth, you couldn't build real dynamic web applications until many years
later. From my perspective, IE _delayed_ the emergence of the modern web.

~~~
wahern
Looking at Wikipedia I see that IE 6 came out in 2001, so I must be
misremembering some context. But I know I left that job not long after 9/11,
and I know that whatever version of IE we had to target was unusable for
serious DHTML work.

~~~
gcb0
Windows only had IE 4 and then 6. Mac had a great version of IE 5 (really).

But IE crashed if you used XMLHttpRequest or innerHTML too much. So despite
them forcing it on the world since IE 4, it only became usable in later IE 6
versions (Windows service pack FTW), a little later than 2001.

------
nabla9
And Alan Kay saw this coming from a mile away and said: "What total stone-age
BS this is. We already did it better at PARC." Instead of sending shitty text
files to rendering engines to each parse their own way, we should send
objects. Every object should have a URL and users should interact with these
objects. And he teamed up with David A. Smith and six others and they made it
happen... and it had 2D objects and it had 3D virtual reality where objects
from different servers interacted, and everybody saw it was cool as hell, but
nothing came of it, because the world is path-dependent and network effects
rule.

[http://wiki.c2.com/?OpenCroquet](http://wiki.c2.com/?OpenCroquet)

[https://en.wikipedia.org/wiki/Croquet_Project](https://en.wikipedia.org/wiki/Croquet_Project)

[https://www.youtube.com/watch?v=XMk9IGwuRmU](https://www.youtube.com/watch?v=XMk9IGwuRmU)

TL;DR: Future was already here, but it could not communicate with the present.

~~~
zamalek
> 3d virtual reality

3D skeuomorphic interfaces were always good _in theory._ I think the problem
with them is the keyboard. The mouse is capable of recording velocity (which
is why it is used for aiming), whereas the keyboard is binary. This
distinction is crucially important for navigating quickly through interfaces.
If I want to switch apps or tabs, I can typically do it with a very quick
flick of my mouse. In 3D I would have to walk over to my new task, which could
take a significant amount of time depending on how far away it is from me.

Nobody wants to explore a 100 story virtual mall when they can just type in
exactly what they are looking for. It could maybe work for Ikea, but the
markets where this could work are so incredibly niche.

The advantage that computers bring is that they don't have to work like the
real world. I am sure that our current approach to VR interfaces (which is 3D
skeuomorphism) is a dead end; there exists a more productive method that works
nothing like our reality.

Regardless, the 2D browser was the better approach. Cool does not equate to
usable.

~~~
stctgion
There are plenty of situations where cool is better than fast. Anything
entertainment-based, maybe even social media. It is hard to match the
immersion and engagement of a 3D interface. Get someone hooked on 3D (e.g. FPS
gaming) and there is no going back.

~~~
deepakkarki
Correct! But then you're actually seeking out coolness or entertainment.
Otherwise you just want to do the job and get on with it, in which case fast >
cool.

Of course there will be exceptions, but generally the rule applies.

------
Wehrdo
This is interesting, because at every step along the way, each actor took the
locally optimal step -- webmasters wanted to serve up working pages to their
users, and new browser vendors wanted their users to get pages with all the
supported features.

Yet, in the end, we end up with a mess for everybody. What could have been
done differently to end up at a good solution? I guess having universal
standards that everyone defined and complied with would have helped, so a
browser could just say "I support HTML 1.3".

~~~
kibwen
_> What could have been done differently to end up at a good solution?_

Something like a standardized API for feature-detection, possibly.

~~~
bzbarsky
That existed: see DOMImplementation.hasFeature.

Turned out, there were cases where browsers returned "true" while their
implementation of the feature did not do what authors wanted. There were
various reasons for this: the feature detection being decoupled from the
feature implementation, bugs in the feature that the detection could not
capture, the detection not being fine-grained enough, etc. And there were
cases where hasFeature returned "false" while the feature was usable, for
similar reasons.

Long story short, at this point the implementation of hasFeature per spec, and
in browsers, is "return true".
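The approach that replaced hasFeature is probing for the API surface directly. A minimal sketch of that pattern (the `supports` helper and the fake `window` object are mine, for illustration):

```javascript
// hasFeature() is now specified to return true unconditionally, so
// feature detection is done by probing for the API itself.
// Generic helper: walk a dotted path from a root object and check
// that every step along the way exists.
function supports(root, path) {
  let obj = root;
  for (const key of path.split(".")) {
    if (obj == null || !(key in Object(obj))) return false;
    obj = obj[key];
  }
  return true;
}

// A plain object standing in for a browser's `window`:
const fakeWindow = {
  document: { querySelector: () => null },
};

console.log(supports(fakeWindow, "document.querySelector")); // true
console.log(supports(fakeWindow, "navigator.bluetooth"));    // false
```

The same caveat bzbarsky raises still applies: presence of a property tells you the API exists, not that its implementation is bug-free.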

------
Maultasche
I remember that in the very early days of Firefox, some websites would refuse
to serve pages to anything that wasn't Internet Explorer. I did not see the
point of that, and I was not amused.

Firefox didn't have a problem displaying those pages, so I had to install a
plugin so that Firefox could pretend to be Internet Explorer so that I could
just see the web page.

I'm glad those days are over.

~~~
clouddrover
> _I'm glad those days are over._

Those days aren't over yet.

Google Earth says "Google Chrome is required to run the new Google Earth" or
"Oh no! Google Earth isn't supported by your browser yet" if you try to use
another browser:

[https://www.google.com/earth/](https://www.google.com/earth/)

~~~
m-p-3
Or trying to access the web-based version of Remote Desktop in something other
than Google Chrome:
[https://remotedesktop.google.com/access](https://remotedesktop.google.com/access)

But switching the user agent isn't enough in other browsers; Google must be
using some fuckery in the background.

~~~
MertsA
Switching the user agent isn't enough because the site isn't blindly relying
on the user agent. It's actually doing feature detection, so if your browser
doesn't support whatever features remote desktop uses, it'll give you an error
message without relying on whitelisting a particular browser. This is exactly
how web applications that need non-universal features should work: sure, look
at the user agent to blacklist known-broken browsers, but use feature
detection for everything you can, so that a new browser that supports the
features will work without anyone needing to modify a whitelist of user
agents.
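That pattern can be sketched like this (the required-feature list below is hypothetical, not what Google's Remote Desktop page actually checks):

```javascript
// Feature detection first, UA sniffing only as a last resort.
// Returns the names of required features the environment lacks;
// an empty list means the app can proceed. The feature list here
// is illustrative, not what any real site checks.
function missingFeatures(win) {
  const required = {
    "WebSocket": (w) => typeof w.WebSocket === "function",
    "RTCPeerConnection": (w) => typeof w.RTCPeerConnection === "function",
    "Pointer Lock": (w) =>
      !!w.document && "exitPointerLock" in Object(w.document),
  };
  return Object.entries(required)
    .filter(([, check]) => !check(win))
    .map(([name]) => name);
}

// A stand-in for a browser that has WebSocket but no WebRTC:
const partialBrowser = {
  WebSocket: function () {},
  document: {},
};
console.log(missingFeatures(partialBrowser));
// -> [ 'RTCPeerConnection', 'Pointer Lock' ]
```

A new browser that implements all the required APIs passes automatically, with no user-agent whitelist to update.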

------
phamilton
Anyone doing anything with user agents should use ua-parser[0]. Don't even
bother trying to do any of this yourself.

If ua-parser doesn't exist in your language, just pull the YAML file out of
ua-core. That defines the regexes you should use and how they translate to
browser versions (and OS versions and devices).

[0] [https://github.com/ua-parser](https://github.com/ua-parser)
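For a sense of what that YAML amounts to, here is a toy version of the mechanism in JavaScript: an ordered list of patterns tried in sequence, first match wins. These are three hand-written patterns, not the real ua-core data, which ships hundreds of carefully ordered regexes:

```javascript
// Toy ua-parser: ordered regex list, first match wins. Ordering
// matters -- Edge must be tested before Chrome, because Edge's UA
// string also contains "Chrome/" (see the Edge UA quoted at the
// top of this thread).
const parsers = [
  { family: "Edge",    regex: /Edge\/(\d+)\.(\d+)/ },
  { family: "Chrome",  regex: /Chrome\/(\d+)\.(\d+)/ },
  { family: "Firefox", regex: /Firefox\/(\d+)\.(\d+)/ },
];

function parseUA(ua) {
  for (const { family, regex } of parsers) {
    const m = ua.match(regex);
    if (m) return { family, major: m[1], minor: m[2] };
  }
  return { family: "Other", major: null, minor: null };
}

const edgeUA =
  "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 " +
  "(KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36 Edge/12.0";

console.log(parseUA(edgeUA));
// -> { family: 'Edge', major: '12', minor: '0' }
```

Keeping that ordering correct across every browser's impersonation games is exactly the maintenance burden ua-core exists to absorb.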

~~~
kccqzy
Are you encouraging people to do browser sniffing accurately? Aren’t we
supposed to discourage such sniffing instead?

~~~
paulddraper
Ideally, you would not care at all. You would simply develop against HTML
4/5/6 and assume the user has a browser that supports that spec.

In reality, browsers have known bugs that last for years, you need to collect
stats to figure out support policies, and you need to reproduce customer bugs.

Example: old versions of Firefox have an RCE vulnerability if you use
third-party JSONP APIs. If you use those APIs but don't block the affected
Firefox versions, your users will be vulnerable.

~~~
phamilton
Exactly. I had to deal with this due to Chrome 45 blocking Flash[0]. While
that might not seem like a reason to target specific browsers, Chrome's
implementation of blocking Flash was an advertiser's worst nightmare. In order
to render the page properly, the page would load with Flash enabled on all
content. It would then pause the Flash runtime on all content not deemed
"important". This had the wonderful effect of giving a video ad enough time to
start playing a few frames and to fire the impression that results in the
advertiser getting billed for showing the ad. This would be flat-out fraud on
our part (a major video ad exchange), so we had to aggressively avoid allowing
Flash ads to buy spots from Chrome 45 and later.

We used ua-parser and everything went very smoothly.

[0]
[https://www.infoq.com/news/2015/08/chrome-45-flash](https://www.infoq.com/news/2015/08/chrome-45-flash)

------
pastelsky
It's quite interesting how user-agent strings have changed and become more
bloated over time.

Other fun facts:

- Chrome on iOS reports its Chrome version (e.g. 64.0.36), with no way to get
the underlying Safari engine version.

- Android WebViews have cycled through three different UA string patterns
(pre-KitKat, KitKat through Marshmallow, and Marshmallow and above).

- Chrome continues to include a "WebKit" version in its UA, even after forking
to Blink. Though since Chrome 27, the WebKit version always says "537.36".

Src: I wrote a library that generates user-agent strings programmatically -
[https://github.com/pastelsky/useragent-generator](https://github.com/pastelsky/useragent-generator)

------
biesnecker
I need to bookmark this for the next time I hear “oh let’s just encode the
params as a string” from a coworker.

~~~
csours
Surely, THIS TIME, it won't balloon out of control! And who will ever need
more than 255 characters of PATH?!

File under: Problems that require a time machine to fix.
[https://blogs.msdn.microsoft.com/oldnewthing/20110131-00/?p=...](https://blogs.msdn.microsoft.com/oldnewthing/20110131-00/?p=11633)

~~~
biesnecker
The road to hell is paved with one-off exceptions that are temporary until we
get a better implementation in place anyway. :-)

------
stagbeetle
Some more fun tidbits:

> ProductSub returns 20030107 for Chrome and Safari, because that's the
> release date for Safari which used an Apple fork of WebKit. Chrome also uses
> this fork. For Firefox, it's 20100101. I don't know why.

> Vendor returns "Google Inc." for Chrome, but undefined for everything else.

> Navigator can tell if your device has a touch screen

> Navigator can tell how many logical cores you have

> appCodeName always returns "Mozilla" and appName always "Netscape"

> Navigator can tell if you're using: Wi-Fi, Ethernet, cellular, Bluetooth, or
> WiMAX

> Navigator knows how much RAM you have

> And the exact plugins you're using. A Firefox user agent won't hide
> 'type':'application/x-google-chrome-pdf'

> Your screen can be shared through navigator -- without your permission

> Languages are set as either `en-US` or `en`, to differentiate between
> American and British users

> Your battery can be acpi'd by Navigator

> File permissions can be read, revealing usernames

And this is just navigator; wait till you see all the fun things you can do
with JavaScript and canvas.
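Most of those tidbits are plain properties on `navigator`. A defensive sketch for reading a few of them (it also runs outside a browser against a stub object, which is why every field is read optionally):

```javascript
// Read a handful of the navigator fields mentioned above, tolerating
// environments (or browsers) where a property is absent.
function navigatorTidbits(nav) {
  const get = (key) => (nav && key in Object(nav) ? nav[key] : undefined);
  return {
    appCodeName: get("appCodeName"),   // always "Mozilla"
    appName: get("appName"),           // always "Netscape"
    vendor: get("vendor"),             // "Google Inc." on Chrome
    productSub: get("productSub"),     // "20030107" / "20100101"
    cores: get("hardwareConcurrency"), // logical CPU cores
    ramGB: get("deviceMemory"),        // approximate RAM, in GB
    languages: get("languages"),       // e.g. ["en-US", "en"]
  };
}

// In a browser console, pass the real object: navigatorTidbits(navigator).
// Elsewhere, a stub works too:
const stub = { appCodeName: "Mozilla", hardwareConcurrency: 8 };
console.log(navigatorTidbits(stub));
```

Each of these fields is one more fingerprinting bit, which is the point of the parent comment.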

~~~
silverwind
> For Firefox, it's 20100101. I don't know why.

At some point in time, that date was Firefox's build date. Then concerns were
raised that the build date allowed sites to track users, so it was frozen at
20100101.

------
smsm42
Should be "Why every _browser_ user agent string"... Non-browser agents
usually don't (and shouldn't) do the Mozilla tricks.

~~~
ogoffart
Actually, we've had to add "Mozilla" to the user agent of one of our programs
because users were complaining about being blocked by some proxy.
~~~
smsm42
I can understand per-browser content switching, but blocking by proxy is
malpractice. It's probably driven by some bot abuse, but it's certainly a very
wrong way to deal with it.

------
samfisher83
I learned from that article what Mozilla means: Mosaic Killer.

------
ohf
> What's your favorite web browser?

Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.13 (KHTML,
like Gecko) Chrome/0.2.149.27 Safari/525.13

~~~
petee
One thing I wish were explained is where the 'U' came from; it first shows up
when Mozilla was born with Gecko.

~~~
Macha
It was about encryption ciphers, back when the US had export restrictions on
key lengths. U = USA = 128-bit, I = International = 40-bit, N = None. Nowadays
the U is another vestigial piece of the UA string.

------
omarforgotpwd
Great history lesson for people my age who might not have known this back
story (I was born in the 90s).

------
kibwen
The OP is from 2010. For those wondering what sort of user-agent a brand-new
browser engine would adopt in this era, see this discussion regarding
inventing a UA for Servo, which involved collecting data from popular sites in
the wild to see how they treat UAs:
[https://github.com/servo/servo/issues/4331](https://github.com/servo/servo/issues/4331)

TL;DR: you can see the end result for each platform here:
[https://github.com/servo/servo/blob/2d3771daab84709a6152c9b5...](https://github.com/servo/servo/blob/2d3771daab84709a6152c9b56c43bad2b280b2ab/components/config/opts.rs#L456),
and it looks like "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:55.0) Servo/1.0
Firefox/55.0"

~~~
crzwdjk
Good to see that building Servo on ARM will return "i686" as the CPU
architecture. Because there are plenty of sites that will just match /arm/ in
the user-agent string and redirect you to the mobile version, regardless of
what your user agent actually is. Which is supremely annoying to those of us
with ARM desktops (a tiny minority, I admit).

------
ersh
My user agent starts with "Wget" :)

~~~
arca_vorago
The ol' Stallman-oroo

------
palanik
Similar story for why every animated GIF has a "Netscape 2.0" application extension.

------
walrus01
Saying "... and used KHTML" glosses over the entire Konqueror project and the
existence of Konqueror long before the first release of Safari. I was using
Konqueror on a KDE 2.0 desktop quite happily for a while.

------
askvictor
The userAgent property has been aptly described as “an ever-growing pack of
lies” by Patrick H. Lauke in W3C discussions. (“or rather, a balancing act of
adding enough legacy keywords that won’t immediately have old UA-sniffing code
falling over, while still trying to convey a little bit of actually useful and
accurate information.”)
[[https://superuser.com/questions/1174028/microsoft-edge-user-...](https://superuser.com/questions/1174028/microsoft-edge-user-agent-string)]

------
biastoact
I once created a similar problem. I built a tracking and split testing system
designed around a list of features activated during a page load. So a single
page load might be described like:

root,signin,bluebutton

Where bluebutton was a design we were testing for our signin page. Of course,
once bluebutton worked and had run for a while, everyone was afraid to change
it in case there was a dependency of some kind. So the Facebook login that
replaced the old signin would look like:

root,signin,bluebutton,fbookredirect

Even though no sign-in page was shown, let alone a bluebutton.

------
Choco31415
For me, the page isn’t loading. Here’s a Google cache of it:

Text-only cache:
[http://webcache.googleusercontent.com/search?q=cache:maxiNwj...](http://webcache.googleusercontent.com/search?q=cache:maxiNwj6M34J:https://webaim.org/blog/user-
agent-string-history/&num=1&client=safari&hl=en&gl=us&strip=1&vwsrc=0)

Edit: The full-version cache is broken for me as well!

~~~
feelin_googley
More cache URLs:

[http://web.archive.org/web/20180306011516/https://webaim.org...](http://web.archive.org/web/20180306011516/https://webaim.org/blog/user-
agent-string-history/)

[https://cc.bingj.com/cache.aspx?q=http://webaim.org/blog/use...](https://cc.bingj.com/cache.aspx?q=http://webaim.org/blog/user-
agent-string-history/&d=4917026296366068&w=IDh6bxIIUwy8ASvaLhADIqZ5inWgsqxQ)

[https://archive.is/mJg8G](https://archive.is/mJg8G)

[https://88h6obas83.execute-api.us-
east-1.amazonaws.com/dev/p...](https://88h6obas83.execute-api.us-
east-1.amazonaws.com/dev/parse_article?source_url=http://webaim.org/blog/user-
agent-string-history/)

The last one returns JSON

------
nasso
Wow. What a mess!

Super interesting read though! :)

------
BillinghamJ
I'm guessing that today, in the age of the modern web, user-agent strings are
no longer so relevant and can basically be set to anything?

~~~
joemi
Some servers serve different pages depending on whether they think a request
is coming from a browser or a bot, based on the user-agent string. Sure, it's
easy enough for a bot to pretend to be anything, but some servers are still
set up to consider the user-agent string.

~~~
tyingq
Still useful, because of stark differences like “Flash works” or “CSS grid
works”.

------
jayflux
Can Mozilla/5.0 be eliminated these days?

~~~
astura
New pages aren't the problem - pages written 20 years ago still exist and
might depend on the Mozilla/5.0 being there to render properly.

------
exikyut
MIRROR: [http://archive.is/u22lH](http://archive.is/u22lH)

------
baleine
What should the user agent be when, 20 years from now, you plug the internet
in through a jack behind your head?

------
qwerty456127
The whole thing should be deprecated altogether.

------
VohuMana
Thanks for the history lesson

------
edwhitesell
Great article. Could we get [2008] added to the title please?

------
grzm
Actual article title: "History of the browser user-agent string"

------
michaelmior
s/start/starts/

------
barce
The title is simply false. I read the article, and it does present interesting
history from the browser wars. However, any cursory glance at web server logs
will show that sometimes the user-agent string is blank, or that it starts
with "MobileSafari" or "UrlTest." The user-agent string is client-generated
and can be anything the client wants.

~~~
khedoros1
Ah. Which browsers ship with those settings?

------
crobertsbmw
I wonder if the author of this text is religious at all...

~~~
skellertor
Judging by his verbiage, such as "In the beginning" and "behold", I would say
yes. I rather enjoyed the tone.

~~~
astura
I thought that was just alluding to the Book of Mozilla
[https://en.wikipedia.org/wiki/The_Book_of_Mozilla](https://en.wikipedia.org/wiki/The_Book_of_Mozilla)

