History of the browser user-agent string (2008) (webaim.org)
558 points by jesperht on March 5, 2018 | hide | past | favorite | 168 comments

It goes one better...

And Chrome was good, and MSIE wasn't, so webmasters served bad pages to MSIE. Microsoft was not happy. So they created Edge. Edge was good, but Microsoft feared webmasters would treat it like MSIE. So Microsoft Edge pretended to be Chrome to get the good pages.

Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36 Edge/12.0
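A quick sketch of why that string is a trap for naive sniffers (hypothetical detection code, not from any real site):

```javascript
// Edge's UA contains Chrome's, Safari's, KHTML's and Gecko's tokens,
// so a naive "is this Chrome?" check matches Edge too.
const ua = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 ' +
           '(KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36 Edge/12.0';

const isEdge = /\bEdge\//.test(ua);                // true
const isChrome = /\bChrome\//.test(ua) && !isEdge; // false: Edge must be excluded first
```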

I always feel like these timelines miss an era. The era where Netscape stagnated and MS came in with a superior, free, non-standards compliant browser that actually pushed the evolution of HTML/JS forward. IE 4.0-IE 6.0 pretty much pioneered features that would become HTML 4. For example: dynamically modifying the DOM and XML async requests.

IE pretty much ushered in the era of truly dynamic websites. Granted, IE 6 sucked (eventually), and thus began the era of IE stagnation. MS got the market share they wanted, then basically sat on their hands for a decade (as the other browsers started innovating again and W3C got off its ass).

Shit on IE all you want, but there was a forgotten era when it was the pioneer.

Also, I'm generally finding Edge on Android a better experience than Chrome on Android after having played with it for a month or so. I still prefer Chrome vs Edge on my Windows desktop.

Obviously, YMMV, but these are my personal observations and experiences.

Netscape Navigator had layers (https://en.wikipedia.org/wiki/Layer_element) long before Internet Explorer made the innerHTML property mutable. And unlike layers, IE's innerHTML was unusable for dynamic updates because it leaked memory like a sieve and after dynamically reloading content a few times IE would grind to a halt.[1]

Layers lost out and innerHTML won the day, but it's a stretch to say IE was more innovative than Netscape. Arguably innerHTML won because this was the era where Microsoft was throwing its monopoly weight around to push IE and kill Netscape.

[1] People were building amazing DHTML games using JavaScript and layers in Navigator 4. IE 4's innerHTML seemed more powerful but it was just plain unusable in practice. Using the DHTML games as a model, I built a timesheet entry and management application for Navigator (layers) and IE (innerHTML) that never reloaded the page. This was circa early 2001. With Navigator you could reload timesheets all day long; with IE the application became unusable after just a few timesheet changes. IE 4 dominated the landscape so long that, in truth, you couldn't build real dynamic web applications until many years later. From my perspective, IE delayed the emergence of the modern web.
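The pattern in question was just assigning markup to innerHTML to redraw part of the page without a reload. A minimal sketch (the `panel` object and `renderTimesheet` function are hypothetical stand-ins so the snippet is self-contained):

```javascript
// Stand-in for a DOM element; in a browser this would be
// document.getElementById('timesheet') or similar.
const panel = { innerHTML: '' };

// Rebuild the timesheet fragment in place with no page reload --
// exactly the kind of repeated update that leaked memory on IE 4.
function renderTimesheet(rows) {
  panel.innerHTML =
    '<table>' + rows.map(r => `<tr><td>${r}</td></tr>`).join('') + '</table>';
}

renderTimesheet(['Mon: 8h', 'Tue: 7.5h']);
```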

EDIT #2: The games I was thinking of were Dan Steinman's. Here's StarThruster: https://web.archive.org/web/20010215013449/http://www.danste...

He had turned his proofs of concepts into a cross-browser (Navigator 4, IE 4) library called DynAPI along with a tutorial: https://web.archive.org/web/20010413015916/http://www.danste...

Looking at Wikipedia I see that IE 6 came out in 2001, so I must be misremembering some context. But I know I left that job not long after 9/11, and I know that whatever version of IE we had to target was unusable for serious DHTML work.

Windows only had IE 4 and then 6. Mac had a great version of IE 5 (really).

But IE crashed if you used XMLHttpRequest or innerHTML too much. So despite Microsoft forcing it on the world since IE 4, it only became usable in later IE 6 versions (Windows service pack FTW), a little later than 2001.

It is better than that: Netscape was actually the browser ignoring standards (and in particular was refusing to implement CSS) and Microsoft with IE came in as what someone from the W3C at the time called the "white knight" with a browser that cared about standards and published their DTDs and really implemented stuff like CSS. They (of course) ran into the typical-MS problem of "we implemented a draft and then they made a major change (to the box model, which ironically later got a configuration flag for and many many people actively prefer the old MS way ;P) and couldn't update very well as we now had existing users", but what I claim really caused them to stagnate was the massive lawsuit against them, which demoralized the IE team in a company that doesn't force people to work on project teams where they don't want to work (the culture in Microsoft was "if you want to build a new product, poach a team worth of people from other, more boring teams").

For more information and a million references, here is a comment I left detailing this history five years ago:


Do you still plan to write that article?

I think that Microsoft stagnated for the same reason that Netscape did: Their codebase was not flexible enough to implement new features well.

It's a big decision to pull the plug on a codebase of that size, so browser developers tend to resist doing it until stagnation happens.

It happened with Netscape, it happened with MSIE, it happened with Opera, it happened with Firefox, can you guess who's next in line?

The plug was pulled on Presto and on Mariner (except for NSPR, SpiderMonkey and NSS parts).

The plug was not pulled on the IE and Firefox code bases. In both cases, the plug was pulled on legacy extensions permitting major changes, but neither Edge nor Quantum are new code bases.

(WebKit deprecated its first embedding API on Mac, obsoleted it on Linux, and replaced it with the WebKit2 API.)

Safari? I honestly don't know.

I think he is referring to Chrome.

> IE pretty much ushered in the era of truly dynamic websites.

I'd call that inferior, not superior: it's what destroyed the web as a document-viewing platform and turned it into the modern privacy-destroying, JavaScript-laden application-deployment platform we have today. Every time I see a blurry image, every time I have to enable scripts to view text & images — I can blame IE.

> Also, I'm generally finding Edge on Android a better experience than Chrome on Android

Firefox on Android is also cool, and free software to boot.

I don't know about Edge, but I think you are spot on about the IE 4, IE 5 era. At least what you write fits very well with what I remember from that time: even outspoken Netscape proponents used IE 5 secretly.

On the other hand, the article is about the UA string and isn't meant to be an accurate browser timeline anyway.

For example, it says:

"And the followers of Linux were much sorrowed, because they had built Konqueror, whose engine was KHTML [..]"

Then cometh Opera and said, “surely we should allow our users to decide which browser we should impersonate, [..]”

If I remember correctly, Opera came before Konqueror and KHTML.

Microsoft has scripted automation of their apps, and exposing apps’ components to programmers, deep in their DNA. VBA, DDE, COM. It does count as innovation in the browser space, and it’s also a natural extension of what they’ve always done.

On Android and Windows I'm finding Firefox better than both. Extensions in Firefox on Android are particularly helpful.

There's an issue with not-always-60fps scrolling in Firefox on Android and the UX is not ideal, but having uBlock Origin and Stylus on Android in my opinion beats that.

I doubt that's the case. MS knew IE was broken and outdated, but so many people relied on its implementation in that state that they couldn't move it forward, and they had to reset before they lost every share of the browser market.

They certainly didn't want Edge to look as broken when people had,

if (browserType === 'IE') { doIEWorkAround(); } else { doNormalThings(); }

So I understand their disguise.

And Alan Kay saw this coming from a mile away and said: "What total stone-age BS this is. We already did it better at PARC." Instead of sending shitty text files to rendering engines that each parse them their own way, we should send objects. Every object should have a URL and users should interact with these objects. And he teamed up with David A. Smith and six others and they made it happen... and it had 2D objects and it had 3D virtual reality where objects from different servers interacted, and everybody saw it was cool as hell, but nothing came of it because the world is path dependent and network effects rule.




TL;DR: Future was already here, but it could not communicate with the present.

> 3d virtual reality

3D skeuomorphic interfaces were always good in theory. I think the problem with them is the keyboard. The mouse is capable of recording velocity (which is why it is used for aiming), whereas the keyboard is binary. This distinction is crucially important for navigating quickly through interfaces. If I want to switch apps or tabs, I can typically do it with a very quick flick of my mouse. In 3D I would have to walk over to my new task which could take a significant amount of time depending on how far away it is from me.

Nobody wants to explore a 100 story virtual mall when they can just type in exactly what they are looking for. It could maybe work for Ikea, but the markets where this could work are so incredibly niche.

The advantage that computers bring is that they don't have to work like the real world. I am sure that our current approach to VR interfaces (which is 3d skeuomorphic) is a dead-end; there exists a more productive method that works nothing like our reality.

Regardless, the 2D browser was the better approach. Cool does not equate to usable.

There are plenty of situations where cool is better than fast. Anything entertainment based, maybe even social media. It is hard to match the immersion and engagement of a 3d interface. Get someone hooked on 3d (e.g. FPS gaming) and there is no going back.

Correct! But then you're actually seeking out coolness or entertainment. Otherwise you just want to do the job and get on with it, in which case fast > cool.

Of course there will be exceptions, but generally the rule applies.

While cool, I'm not sure a 3D interface to a Wiki was ever the future. Thank god.

It was a phase.

There was VRML, Second Life, and Linux had that 3D cube where every side was a virtual screen. Everyone had been reading Snow Crash.

And just like virtual reality was just around the corner, it was also the time of the first digital currency boom: Liberty Reserve, E-gold, DigiCash, Flooz. The more things change, the more they stay the same.

The Compiz Cube was cool as heck and no one will ever convince me otherwise. I used to have it configured to layer the windows in 3D space based on how recently they had focus, had it snow and rain on my desktop to match the actual weather using cron, curl and a few compiz plugins, had hotkeys to change transparency or toggle always-on-top or invert colors when I had eyestrain - gods I miss that era.

Like everyone else, I use macOS now because it's what work supports, and the desktop is nearly unusable and primitive as anything. Every day I rage that I can't adjust transparency or toggle always-on-top however I want.

You are so right. I loved that cube, and there were other customizations that truly worked, but we've traded them all for reliability, a little bit of consistency, and other compelling things. But that era felt promising; it's still a little bewildering that things didn't continue on that path.

Got so many people converted to Linux showing off the Compiz effects when I was 12. Somehow they ran buttery smooth on shitty Pentium III processors.

Funny you should mention it, I just ran into VRML today. In AGI32 (a terrible piece of lighting simulation software) if you hit export in a rendered view that's the default format it offers you.

The more things change the more they stay the same. VRML recast itself in XML when you had to be XML to be taken seriously and now Web3D is mostly the same stuff reborn in the HTML5 era.

VRML described the vertices and edges of 3D objects, but surely it had nowhere near the power of OpenGL or WebGL. With vertex shaders and fragment shaders there is almost no limit to what can be done with OpenGL in terms of imagery you can produce. I never heard that VRML had anything like that.

In fact I would bet that a VRML renderer could be implemented in pure JS + WebGL.

VRML was declarative. It was like HTML for describing 3D scenes instead of documents.

Web3D (WebGL, specifically) is imperative. It lets a program render 3D scenes by telling the system where to put the triangles on each frame.
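For comparison, a minimal VRML97 scene: you declare the shapes and their properties, and the browser (or plugin) decides how to render them, much as HTML declares a document.

```
#VRML V2.0 utf8
Shape {
  appearance Appearance { material Material { diffuseColor 1 0 0 } }
  geometry Box { size 1 1 1 }
}
```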

I like "sending shitty text files" as our (collective) job description.

It looks "cool", in an early-2000s kind of way, but after watching the video it didn't seem to solve any new problems other than maybe enhancing collaboration a little bit. I feel that new platforms/OSes need a killer, differentiating feature that makes them stand out and solves a real problem an order of magnitude better than their predecessors. Also, it always seems that feature has to be "normal"-user facing, not programmer facing. Real users don't care (or even know) if your whole platform is based on shitty text files or addressable object-oriented unicorns.

To put it into modern terms, think of a Croquet world as a peer-to-peer 3-D Slack channel that anybody could host as easily as a web page. It used TeaTime and two-phase commit for low-latency synchronization, making it a distributed, real-time database of sorts. And what it distributed were objects and events, so maybe also think of it like Ethereum contracts, where the members' VMs together constitute the execution engine.

Croquet was and still is far ahead of its time. Only in 50 or 100 years will people realize the potential, at which point Croquet itself will likely have been forgotten completely :(

Copy of the wiki.c2.com article that doesn't require JS:


I think that all the people saying it's cool but impractical are focusing too much on the 3D aspect of Croquet.

The thing you should really think about is that everything is done via objects. So instead of using JSON as a sort of RPC, you can just get all the objects through your browser; instead of screen-sharing your computer, you can just share a sheet of a spreadsheet, a game, or your file system; instead of building these thin clients in the browser, you can just share your software through a system like this.

What does "objects" mean in practice?

JSON is basically objects, but serialized. X can also kind of share a window over the network, but even though it is designed to do so, it fails on many other levels.
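The data-vs-behavior distinction is easy to demonstrate: JSON round-trips an object's state but silently drops its methods (the `cell` object here is a made-up example):

```javascript
// A spreadsheet-cell-like object with both state and behavior
const cell = {
  value: 21,
  double() { return this.value * 2; }
};

const wire = JSON.stringify(cell); // '{"value":21}' -- the method is gone
const restored = JSON.parse(wire);
// restored.value is 21, but restored.double is undefined: the receiver
// needs its own code (a "thin client") to act on the bare data.
```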

Think of an object as something that can draw itself. Say you have a spreadsheet object that can draw itself in some window manager, you update it with method calls, and you want to share it over the network. What is the difference between serializing that object over the wire and drawing it on your screen, versus using some sort of JSON to carry the data and then recreating the spreadsheet with a thin client in your browser?

I mean, the difference is subtle. You could ask: why don't we have the same draw engine everywhere and use JSON to carry the data over the network? You would have to solve syncing, etc. But if you instead use something like Croquet (I'm assuming here, I don't have experience with that system), you are sharing the object, the methods, the full object, not just a text representation of it.

Doesn't this mean executing arbitrary code on another person's computer? You could send a JavaScript object over the network rather than some JSON data and then glue it to the code on the client side too (this is what Google does/did with the search suggestions at some point).

Yes, it's a weakness that they discuss on the wiki.


This is interesting, because at every step along the way, each actor took the locally optimal step -- webmasters wanted to serve up working pages to their users, and new browser vendors wanted their users to get pages with all the supported features.

Yet, in the end, we end up with a mess for everybody. What could have been done differently to end up at a good solution? I guess having universally defined and complied with standards would have helped, so a browser could just say "I support HTML 1.3".

>What could have been done differently to end up at a good solution?

Simple, not having a user agent from the start.

Ideally a URI would always just return the exact same webpage. Except it became necessary to be able to update them which broke this assumption, and eventually the need for some kind of authenticated session spawned all kinds of mechanisms that definitively killed off URIs as Uniform Resource Identifiers.

Perhaps if we were to do it all over we'd have a uniform method for authentication, and maybe even the possibility to refer to past versions of a page. Alas it was not to be.

It's total wishful thinking, but I wish we would take another stab at designing the Web with the benefit of 30 years of hindsight.

Pretty soon we're just going to be executing WebAssembly blobs and that will be that.

You can still do this today by simplifying your code to where you don't have to sniff useragents, etc.

Different user agents are used for desktop vs mobile pages for the same browser, so a single user agent is not a viable solution now.

This might not surprise you, but I think serving the mobile and desktop versions via the same URI is an abomination and needs to stop. Encouraging mobile users to use an app instead is downright sacrilegious.

> I guess having universally defined and complied with standards would have helped, so a browser could just say "I support HTML 1.3".

Probably not; standards on the web that don't lag behind implementation end up like XHTML 2.0.

You made me chuckle.

I feel like webmasters shot themselves in the foot and created this mess. I kind of like just serving the page and letting the market figure it out. Developers would have moved faster to make sure their stuff didn't break, and maybe the browser wars and the dark ages that followed would not have happened.

That's not really a pragmatic approach, though. Users don't care that it's the browser's fault, they'll use the service with the better user experience.

A working reference implementation of a page renderer that defined what "correct" display looked like to go along with the spec.

> What could have been done differently to end up at a good solution?

Something like a standardized API for feature-detection, possibly.

That existed: see DOMImplementation.hasFeature.

Turned out, there were cases where browsers returned "true" while their implementation of the feature did not do what the authors wanted. There were various reasons for this: the feature detection being decoupled from the feature implementation, bugs in the feature that could not be captured by the implementation, the detection not being fine-grained enough, etc. And there were cases where hasFeature returned "false" while the feature was usable, for similar reasons.

Long story short, at this point the implementation of hasFeature per spec, and in browsers, is "return true".

The client hints draft is somewhat helping here (10 years later): http://httpwg.org/http-extensions/client-hints.html

At first I thought "feature detection" as in Sobel, LoG, etc., which made me really confused when reading the replies.

How do other protocols handle versioning? SSL/TLS seems to do it well enough.

Absolutely not.

In TLS we now have two bogus version numbers you should ignore. We also have an extension that will signal the real version number. It'll also send a bunch of bogus version numbers to "train" servers to expect and ignore bogus version numbers. This is all due to the fact that server vendors found it too complicated to implement "if I get version higher than what I support I answer with the highest version I do support". Instead they often implement "if I get a version higher than I support I'll drop the connection".

But all of that was not enough to make TLS 1.3 work. It now also includes sending a bunch of bogus messages that have no meaning and are ignored, just to make it look more like TLS 1.2.
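The negotiation rule that servers were supposed to implement is one line. A sketch (versions as plain numbers, not a real TLS stack): this is the "answer with the highest version I support" behavior that broken servers replaced with dropping the connection.

```javascript
// 0x0303 = TLS 1.2, 0x0304 = TLS 1.3
function negotiate(serverMax, clientVersion) {
  // A well-behaved server answers with the highest version it supports,
  // even when the client offers something newer.
  return Math.min(serverMax, clientVersion);
}

negotiate(0x0303, 0x0304); // a TLS 1.2 server answers 1.2 to a 1.3 hello
```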

David Benjamin summarized that recently at Real World Crypto: https://www.youtube.com/watch?v=_mE_JmwFi1Y

I remember that in the very early days of Firefox, some websites would refuse to serve pages to anything that wasn't Internet Explorer. I did not see the point to that and I was not amused.

Firefox didn't have a problem displaying those pages, so I had to install a plugin so that Firefox could pretend to be Internet Explorer so that I could just see the web page.

I'm glad those days are over.

> I'm glad those days are over.

Those days aren't over yet.

Google Earth says "Google Chrome is required to run the new Google Earth" or "Oh no! Google Earth isn't supported by your browser yet" if you try to use another browser:


At least that's not just because of a user agent string.

Firefox doesn't support Native Client, and Google hasn't finished rewriting Earth in WASM yet: https://medium.com/google-earth/earth-on-web-the-road-to-cro...

Google does not just do this for Maps. Observe how different a Google Search results page looks on Chrome vs Firefox on Android: https://imgur.com/a/A8TYQ

I'm sure some will say they're more happy with the simpler interface, but the fact still remains that they're serving a lower quality version of the site (with no access to things like Search Tools to filter by date, for example) to non-Chrome users.

At the time, asm.js was the standard for high-performance web applications, and Google decided to go with a format only they used, with no backwards compatibility with any other browser, ignoring the then-WIP WebAssembly.

And despite compiling their native code to NaCl with a compiler backend that could also target asm.js and WASM in a matter of days (entire game engines have been ported that way), Google has been going for months and still kept it Chrome-only.

Did you read the article I linked? It explains why they went with NaCl over asm.js, and why, despite having had a WASM prototype for months now, it's not ready for production yet.

Basically: native threading. NaCl supports it; asm.js and WASM don't (yet).

Then don’t release it.

This isn’t the first time Google has released a product exclusively for Chrome, trying to pull more users to their own platform. Even if this is not directly intended, the result is a massive anticompetitive effect.

Or trying to access the web-based version of Remote Desktop in something else than Google Chrome https://remotedesktop.google.com/access

But switching the user-agent isn't enough in other browsers, Google must be using some fuckery in the background.

Switching the user agent isn't enough because it's not blindly relying on the user agent: it's actually doing feature detection. If your browser doesn't support whatever features remote desktop needs, it gives you an error message without whitelisting a particular browser. This is exactly how web applications that need non-universal features should work: sure, look at the user agent to blacklist known-broken browsers, but use feature detection where you can, so that a new browser that supports the features works without anyone having to update a whitelist of user agents.
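In code, the difference is probing for the capability instead of the brand. A minimal sketch (the `win` argument stands in for the browser's global object so the example is self-contained; `RTCPeerConnection` is just one capability such an app might need):

```javascript
// Feature detection: ask "can you do X?" rather than "who are you?"
function supportsWebRTC(win) {
  return typeof win.RTCPeerConnection === 'function';
}

supportsWebRTC({ RTCPeerConnection: function () {} }); // true
supportsWebRTC({});                                    // false: show an error page
```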

Or using the web version of Slack to make a call or screenshare.

Google has been doing that for a long time. For example, many years ago it showed a warning about an unsupported browser if you tried to visit Google Docs using Opera. Now it is using the user agent to choose the YouTube UI version (if you are unlucky enough to use a modern browser, you will get that bright whitened UI with huge margins).

Also, the web version of Skype doesn't work in the Android browser.

Not quite over.

These days I can't view my facebook messages on Firefox for Android without hacking my user agent to pretend to be Android 4.

I'm happy that it's just a few irresponsible sites though. It used to be half the web.

Sadly I do something similar for Amazon Music. I don't have the bundled flash player in Chrome enabled and it won't use web based drm on Linux browsers, so I use a user agent extension that tells their site I'm on Windows and it works fine.

Wait, that works? I'd just given up on fb on my phone.

If you don't mind... what exactly do you need to do?

The Phony add-on.

Even at mbasic.facebook.com? This is how I do all my messaging these days.

I hate Facebook so much for this. Their apps are beyond bloated and track far more than I'm interested in sharing. It used to be no issue at all to send messages.

But suddenly it's not possible to send a message without going to a clunky version of Facebook.

My objections are not with Facebook really. I'm willing to make the tradeoffs that are required to participate in Facebook. I wish it were better, but Facebook is responding to incentives and is rewarded for its actions, so it's hard for me to hate them, or expect moral virtuousness.

Facebook is just doing what capitalism tells it is ok. Hate for Facebook is wasted energy. We have to stop rewarding the behaviors that we don't wish to see repeated.

That's an entirely different issue I think? Do you have a different experience with Chrome?

One of the issues was that companies that targeted IE often developed their sites using VBScript rather than JScript, so they would not be compatible without a rewrite.

Nowadays there are plenty of Chrome-only web apps.

> One of the issues was that companies that targeted IE often developed their sites using VBScript rather than JScript, so they would not be compatible without a rewrite.

Based on:

>> Firefox didn't have a problem displaying those pages

It probably wasn't due to VBScript.

Netflix used to refuse to run in Firefox. I had to install Chrome on my laptop just to visit one site - incredibly frustrating.

That was a DRM thing. Eventually Firefox added support for EME: https://www.pcworld.com/article/3183742/data-center-cloud/ne...

But a downside is that Netflix still limits Firefox to 720p video, even if you're paying for 1080p:


You can see what resolution you're getting with the Ctrl+Alt+Shift+D shortcut to bring up Netflix's debugging information (and press again to dismiss it). You can also verify your resolution with the Test Patterns video.

The sad thing is 1080p video works just fine in Firefox. There's a Firefox add-on available which enables the 1080p stream:


But that's also the restriction for Chrome. Only Edge, IE and Safari get 1080p support among browsers. Which proves to me that I don't really need 1080p, since I hadn't noticed.

How far do you sit from your screen, how big is it?

That was back in the days when Mozilla had a backbone and was standing up for users even though users were bitching.

Then they caved, and now we have DRM in the standard. Sad days.

> even though users were bitching

> Then they caved

Some people call that "responding to user feedback". It sucks that you don't agree with their decision, but trying to paint Mozilla in a bad light for listening to their users' demands is preposterous.

I am painting Mozilla in a bad light for Betraying the The Mozilla Manifesto .. https://www.mozilla.org/en-US/about/manifesto/details/

Yes. And I am saying that it's preposterous because they did what any organization (and I mean any, this applies to companies, nonprofits, governments, etc) should be doing -- listening to their users.

Their beliefs are great and all, but at the end of the day they are providing a product for the end user, so if their users are "bitching" that they want Netflix and other EME services available on Firefox, then the right choice is to make the user happy if possible.

What about former users... because that's who the most vocal people were: Chrome Traitors who do not value freedom, privacy or security.

Their actual users were demanding they not do it, that they not Embrace Web Extensions, not Force-install Adware on every system, not make Privacy-Invading features opt-out instead of Opt-In, not Embrace the destruction of the Open Web....

We, the actual Firefox user base, were given a big middle finger by Mozilla, and instead they went on a sorry excuse for a marketing ploy, begging users to return to their new Chrome clone.

Any organization cares about their userbase as a whole. There's no distinctions. No "former users", "traitors", or "actual Firefox user base". Just "users". Because once you get past a certain number of users, you need to start looking at decisions statistically.

If 70% of users want Netflix, and the other 30% want privacy, then Mozilla is going to look at the 70%. Because that's what makes sense from any sort of organizational planning. You're not going to try to appease a tiny minority (and yes, privacy-conscious individuals are very much in the minority in the world. We may be in a bubble here on HN, but the common person is not going to give two shits about the privacy concerns we may have) when you can appease far more users by doing the opposite.

You've made it clear you are biased in this argument. Maybe try taking a step back and looking at the issue more objectively, or from the other perspective.

I am being very objective. The Mozilla Foundation is a non-profit, tax-free organization that gets that tax-free status because it is supposed to be following the stated goals of the organization, not trying to be popular.

The Mozilla Foundation has tax-free status in order to promote the Open Web.

They are no longer honoring that goal. As such, they should lose their tax-free status, they should stop calling themselves a Foundation, and they should stop fraudulently holding themselves out as being for Privacy and the Open Web.

If they want to make an insecure, privacy-invading browser, that is perfectly fine. Google and MS already do that.

They need to be honest about it and not hold themselves out to be something they are not

They do not fight for the open web, they do not fight for user privacy, and they do not support the goals stated in the Mozilla Manifesto. Thus their tax-free status should be removed and the Mozilla Foundation should be dissolved into the Mozilla Corporation, a for-profit software vendor making a commercial web browser.

You're taking the stance that Mozilla is no different from Google, MS or other commercial software vendors: that they are a software company looking to make the best software for their customers.

Mozilla does not have customers. Mozilla is not and should not be a commercial entity; Mozilla is a charitable foundation with a set of goals they are violating.

In those days each browser had its own quirky idea of how to render a website (even Firefox). None of them was really "right", but how IE did it was the de facto standard because IE had 90+% market share. If you had a site that could potentially be "broken" if the layout didn't render correctly, then the easiest thing to do was to test that it worked how you wanted in IE and add a note telling people it might be broken in other browsers (or not serve them at all).

It wasn't until people really started focusing on standards compliance and cross-browser compatible frameworks that things got better. The "acid tests" for HTML/CSS/JS standards compliance helped establish how far along the various browsers were at the time. Most browsers were absolutely terrible in that era; it wasn't until Chrome hit the scene and WebKit started taking off that standards compliance became a big deal. Eventually most major browsers had decent or good standards compliance in their rendering, and things like jQuery helped smooth over the remaining differences in browser behavior.

Today, Google Search itself uses that.

Try using Google.com from Firefox Mobile and from Chrome Mobile, and you'll see a major difference.

To get 95% of the functionality working, you need to fake a Chrome UA. In 2018.

Just tried this with the latest versions of Chrome and FF on Android. The Firefox one loads what looks like the desktop site from 2013 but with a bigger font; the Chrome one loads a regular modern UI.

Faking the user agent header (yay, FF on Android supports all the desktop extensions) makes everything normal on Firefox.

Sometimes, in my up-to-date Firefox on Linux, I come across a web site saying I need the newest version of Firefox / Chrome / IE...

Those days are nowhere close to over. See https://news.ycombinator.com/item?id=15636674 for a short list of examples that Mozilla's web compat team has hit just in the recent past.

From memory that plugin was "IETab" and actually rendered the page using IE somehow within a Firefox tab.

The plugin page is still there though obviously obsolete by now.


I remember there being some activeX plugin/module/whatchamacallit that'd run your website in a google chrome frame inside IE. Awful hack but it did work great.

Chrome Frame. It was shipped by Google itself.

I ran into a site last year that refused to work if the browser reported the OS to be Linux. Worked fine with it spoofed.

In the days of Netscape, pages would tell IE users to "get a real browser"

I doubt that ever paid for itself in development cost vs "tax" collected.

(yes i know it's probably a joke)

Aren't we still saying this?

They aren't over. Google use some weird WebRTC in Hangouts that doesn't work in Firefox and so on.

And in the newest installment, https://github.com/google/closure-library/issues/883 is UA-sniffing that is now preventing Firefox from aligning with all other browsers on whether arrow keys fire keypress events, which causes _other_ Google things, which assume they don't, to break.

Also Closure assumes that only things with "WebKit" in their UA might be running on a mobile device and that all browsers fall into the WebKit/IE/Edge/Gecko buckets (and will fail badly if a browser does not).

And this is just one library.

User agent sniffing is still around.

If you're using something different from the approved four, Edge/Safari/FF/Chrome, lots of sites will nag you.

Anyone doing anything with user agents should use ua-parser[0]. Don't even bother trying to do any of this yourself.

If ua-parser doesn't exist in your language, just pull the yaml file out of ua-core. That defines the regexes you should use and how they translate to browser versions (and os versions and devices).

[0] https://github.com/ua-parser
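For illustration, a minimal sketch of the approach ua-core's regexes.yaml takes: an ordered list of regexes tried in sequence, with capture groups mapped to a browser family and version. The two patterns below are simplified stand-ins, not the real ua-core definitions.

```python
import re

# Simplified stand-ins for entries in ua-core's regexes.yaml;
# patterns are tried in order, and the first match wins.
UA_PATTERNS = [
    (re.compile(r"Edge/(\d+)\.(\d+)"), "Edge"),      # must come before Chrome
    (re.compile(r"Chrome/(\d+)\.(\d+)"), "Chrome"),  # must come before Safari
    (re.compile(r"Firefox/(\d+)\.(\d+)"), "Firefox"),
]

def parse_family(ua: str):
    """Return (family, major, minor), or ('Other', None, None) if nothing matches."""
    for pattern, family in UA_PATTERNS:
        m = pattern.search(ua)
        if m:
            return family, m.group(1), m.group(2)
    return "Other", None, None

edge_ua = ("Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 "
           "(KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36 Edge/12.0")
print(parse_family(edge_ua))  # ('Edge', '12', '0')
```

Note that ordering is the whole game: the Edge UA above also contains "Chrome" and "Safari" tokens, which is exactly why hand-rolled substring checks go wrong and why the curated regex list is worth using.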

Shameless plug for the WhatIsMyBrowser.com API:


As per modern web dev standards, you should always use feature detection, not agent sniffing, to handle cross-browser issues; however, having accurate user agent detection is really handy for troubleshooting customer issues, bot detection, spotting trends, etc.

Are you encouraging people to do browser sniffing accurately? Aren’t we supposed to discourage such sniffing instead?

Ideally, you would not care at all. You would simply develop against HTML 4/5/6 and assume the user has a browser that supports that spec.

In reality, browsers have known bugs that last for years, you need to collect stats to figure out support policies, and you need to reproduce customer bugs.

Example: old versions of Firefox have an RCE vulnerability if you use third party jsonp apis. If you use these apis but don't block these ff versions, your users will be vulnerable.
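A sketch of that kind of version gate, assuming a parsed-out Firefox major version. The cutoff constant is hypothetical; the real number would come from the actual CVE.

```python
import re

# Hypothetical cutoff: the first Firefox major version without the
# JSONP-related vulnerability described above (illustrative only).
MIN_SAFE_FIREFOX = 52

def is_vulnerable_firefox(ua: str) -> bool:
    """True if the UA claims to be a Firefox older than the cutoff."""
    m = re.search(r"Firefox/(\d+)", ua)
    return m is not None and int(m.group(1)) < MIN_SAFE_FIREFOX

old_ff = "Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Firefox/45.0"
new_ff = "Mozilla/5.0 (X11; Linux x86_64; rv:58.0) Gecko/20100101 Firefox/58.0"
print(is_vulnerable_firefox(old_ff), is_vulnerable_firefox(new_ff))
```

A server could refuse to serve the JSONP endpoints to affected versions rather than leave those users exposed.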

Exactly. I had to deal with this due to Chrome 45 blocking flash[0]. While that might not seem like something worth targeting specific browsers, Chrome's implementation of blocking flash was an advertiser's worst nightmare. In order to render the page properly, the page would load with flash enabled on all content. It would then pause the flash runtime on all content not deemed "important". This had the wonderful effect of giving a video ad enough time to start playing a few frames and to fire the impression that results in the advertiser getting billed for showing the ad. This would be flat out fraud on our part (a major video ad exchange), so we had to aggressively avoid allowing flash ads to buy spots from chrome 45 and later.

We used ua-parser and everything went very smoothly.

[0] https://www.infoq.com/news/2015/08/chrome-45-flash

You can still want to parse UAs for other reasons. I used such a library recently in a project where a user is shown their login history, including what OS and browser was used, in a human-readable format (e.g. "Firefox 58 on Linux").

It's quite interesting how user agent strings have changed and become more bloated with time.

Other fun facts:

- Chrome on iOS reports its Chrome version (e.g. 64.0.36), with no way to get the underlying Safari engine version.

- Android WebViews have replaced one UA string pattern with another three times (pre-KitKat, KitKat through Marshmallow, and Marshmallow and above)

- Chrome continues to add a "WebKit" version to its UA, even after having forked to Blink. Though since Chrome 27, the WebKit version has been frozen at "537.36".

Source: I wrote a library that generates user agent strings programmatically - https://github.com/pastelsky/useragent-generator
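A toy illustration of two of the patterns above: desktop Chrome freezing its reported WebKit version at 537.36, and Chrome on iOS using a "CriOS" token while exposing only Safari's WebKit build. The string templates are simplified approximations, not output from the library linked above.

```python
# Toy UA generators illustrating the patterns noted above (simplified).
FROZEN_WEBKIT = "537.36"  # Chrome >= 27 always reports this WebKit version

def chrome_desktop_ua(chrome_version: str) -> str:
    return (f"Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
            f"AppleWebKit/{FROZEN_WEBKIT} (KHTML, like Gecko) "
            f"Chrome/{chrome_version} Safari/{FROZEN_WEBKIT}")

def chrome_ios_ua(chrome_version: str) -> str:
    # iOS Chrome uses the "CriOS" token; the WebKit/Safari versions here
    # come from the system engine, not from Chrome itself.
    return (f"Mozilla/5.0 (iPhone; CPU iPhone OS 11_0 like Mac OS X) "
            f"AppleWebKit/604.1.38 (KHTML, like Gecko) "
            f"CriOS/{chrome_version} Mobile/15A372 Safari/604.1")

print(chrome_desktop_ua("64.0.3282.140"))
```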

I need to bookmark this for the next time I hear “oh let’s just encode the params as a string” from a coworker.

Surely, THIS TIME, it won't balloon out of control! And who will ever need more than 255 characters of PATH?!

File under: Problems that require a time machine to fix. https://blogs.msdn.microsoft.com/oldnewthing/20110131-00/?p=...

The road to hell is paved with one off exceptions that are temporary until we get a better implementation in place anyway. :-)

Some more fun tidbits:

> ProductSub returns 20030107 for Chrome and Safari, because that's the release date for Safari which used an Apple fork of WebKit. Chrome also uses this fork. For Firefox, it's 20100101. I don't know why.

> Vendor returns "Google Inc." for Chrome, but undefined for everything else.

> Navigator can tell if your device has a touch screen

> Navigator can tell how many logical cores you have

> appCodeName always returns "Mozilla" and appName always "Netscape"

> Navigator can tell if you're using: Wi-Fi, Ethernet, cellular, Bluetooth, or WiMAX

> Navigator knows how much RAM you have

> And the exact plugins you're using. A Firefox useragent won't hide 'type':'application/x-google-chrome-pdf'

> Your screen can be shared through navigator -- without your permission

> Languages are set as e.g. `en-US` or `en-GB`, differentiating American from British users

> Your battery can be acpi'd by Navigator

> File permissions can be read, revealing usernames

And this is just navigator, wait till you see all the fun things you can do with Javascript and canvas.

> For Firefox, it's 20100101. I don't know why.

At some point in time, that date was Firefox's build date. Then concerns were raised that the date allowed sites to track users, so it was frozen at 20100101.

Should be "Why every browser user agent string"... Non-browser agents usually don't (and shouldn't) do the Mozilla tricks.

Actually, we've had to add "Mozilla" to the user agent of one of our programs because users were complaining about being blocked by some proxy.

I can understand per-browser content switching, but blocking by a proxy is malpractice. Probably driven by some bot abuse, but it's certainly a very wrong way to deal with it.

I learned from that article what Mozilla means: Mosaic Killer.

> What's your favorite web browser?

Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.13 (KHTML, like Gecko) Chrome/ Safari/525.13

One thing I wish were explained is where the 'U' came from; it first shows up when Mozilla was born with Gecko

It was about encryption ciphers, when the US had export restrictions on key lengths. U = USA = 128-bit, I = International = 40-bit, N = None. Nowadays the U is another vestigial piece of the UA string.

Quick search answered my own question - 'U' indicates USA; As a result of cryptographic export restrictions, different levels of security were shipped in early browsers: U(SA) = 128bit, I(nternational) = 40bit, or N(one).
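As a sketch, the old token can be pulled out of a legacy UA string like so. The regex and the labels are illustrative, following the export-restriction mapping described above.

```python
import re

# Historic export-restriction security tokens (per the mapping above).
SECURITY_LEVELS = {
    "U": "128-bit (USA)",
    "I": "40-bit (International)",
    "N": "none",
}

def security_level(ua: str) -> str:
    """Pull the old U/I/N token out of the parenthesized platform section."""
    m = re.search(r"\((?:[^)]*;\s*)?([UIN])\s*;", ua)
    return SECURITY_LEVELS.get(m.group(1), "unknown") if m else "not present"

old_ua = "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.13"
print(security_level(old_ua))  # 128-bit (USA)
```

Modern UA strings simply omit the token, so the same parser reports "not present" for them.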

It first showed up in Netscape 1.x.

Getting it removed from Gecko was https://bugzilla.mozilla.org/show_bug.cgi?id=572668 . Chrome and Safari followed.

"Strong security (Default) the browser provides crypto support that is stronger than what the "international" builds of Netscape offered circa 1995."


Great history lesson for people my age who might not have known this back story (I was born in the 90s).

The OP is from 2010. For those wondering what sort of user-agent a brand-new browser engine would adopt in this era, see this discussion regarding inventing a UA for Servo, which involved collecting data from popular sites in the wild to see how they treat UAs: https://github.com/servo/servo/issues/4331

TL;DR: you can see end result for each platform here: https://github.com/servo/servo/blob/2d3771daab84709a6152c9b5..., and it looks like "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:55.0) Servo/1.0 Firefox/55.0"

Good to see that building Servo on ARM will return "i686" as the CPU architecture. Because there are plenty of sites that will just match /arm/ in the user agent string and redirect you to the mobile version, regardless of what your user agent actually is. Which is supremely annoying to those of us with ARM desktops (a tiny minority, I admit).

My user agent starts with "Wget" :)

The ol Stallman-oroo

Similar story for animated GIFs: every one of them carries a "NETSCAPE2.0" application extension block.

Saying "... and used KHTML" glosses over the entire Konqueror project and the existence of Konqueror long before the first release of Safari. I was using Konqueror on a KDE 2.0 desktop quite happily for a while.

The userAgent property has been aptly described as “an ever-growing pack of lies” by Patrick H. Lauke in W3C discussions. (“or rather, a balancing act of adding enough legacy keywords that won’t immediately have old UA-sniffing code falling over, while still trying to convey a little bit of actually useful and accurate information.”) [https://superuser.com/questions/1174028/microsoft-edge-user-...]

I once created a similar problem. I built a tracking and split testing system designed around a list of features activated during a page load. So a single page load might be described like:


Where bluebutton was a design we were testing for our signin page. Of course once bluebutton worked and had run for a while everyone was afraid to change it in case there was a dependency of some kind. So the Facebook login that replaced the old signin would look like:


Even though no sign in page was shown let alone a bluebutton.

For me, the page isn’t loading. Here’s a Google cache of it:

Text-only cache: http://webcache.googleusercontent.com/search?q=cache:maxiNwj...

Edit: The full-version cache is broken for me as well!

Wow. What a mess!

Super interesting read though! :)

I'm guessing that today, in the current age of the modern web, user agent strings are no longer so relevant, and can basically be set to anything?

The last time I tried browsing The Economist web site with lynx, it refused to work unless I changed the user agent string. Gibberish was OK, but apparently lynx wasn't.

Some servers feed different pages depending on whether they think a request is coming from a browser or a bot based on user agent string. Sure it's easy enough for a bot to pretend to be anything, but some servers are still set up to consider the user agent string.

Still useful, because of stark differences like “flash works” or “css grid works”.

I've set up an extension that randomized my user agent string, to see what would happen, and some major sites were severely broken. Some gave me degraded mode (Google did that several times). Also, many sites use the user agent for things like OS detection, so if you want correct downloads, you must have an at least partially correct UA.

One can argue that yes, in 2018, there should be APIs that allow to detect all this stuff in a much better way than parsing random mess of legacy markers that is a common user agent. But in reality, parsing UA string is still the case and unfortunately keeps being the case, including very major and technically advanced sites.

It would be great if one of the major vendors made this the default behavior. Sites would start having cause to clean up this mess.

Nope, user agent sniffing is alive and well. Google is especially guilty, but lots of sites do it. Try changing your user agent and see how many things break.

When I was working on mod_pagespeed I wrote some about how we decided to parse the UA: https://www.jefftk.com/p/why-parse-the-user-agent

(Summary: feature detection requires more round trips, and slows down pages.)

I've definitely tried browsing with a random User agent - many, many sites are broken. Give it a go!

Can Mozilla/5.0 be eliminated these days?

New pages aren't the problem - pages written 20 years ago still exist and might depend on the Mozilla/5.0 being there to render properly.

What should the user agent be when, 20 years from now, you plug into the internet through a jack behind your head?

The whole thing should be deprecated altogether.

Thanks for the history lesson

Great article. Could we get [2008] added to the title please?

Actual article title: "History of the browser user-agent string"



The title is simply false. I read the article and it does present interesting history from the browser wars. However, any cursory glance of web server logs will show that sometimes the user agent string is blank, or it starts with "MobileSafari" or "UrlTest." The user agent string is client generated and can be anything the client wants.

Ah. Which browsers ship with those settings?

I wonder if the author of this text is religious at all..

Judging by his verbiage such as "In the beginning", and "behold", I would say yes. I rather enjoyed the tone.

I thought that was just alluding to the Book of Mozilla https://en.wikipedia.org/wiki/The_Book_of_Mozilla

I think it's meant to sound more "ye olden times" like.

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact