And Chrome was good, and MSIE wasn't, so webmasters served bad pages to MSIE. Microsoft was not happy. So they created Edge. Edge was good, but Microsoft feared webmasters would treat it like MSIE. So Microsoft Edge pretended to be Chrome to get the good pages.
Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36 Edge/12.0
IE pretty much ushered in the era of truly dynamic websites. Granted, IE 6 sucked (eventually), and thus began the era of IE stagnation. MS got the market share they wanted, then basically sat on their hands for a decade (as the other browsers started innovating again and W3C got off its ass).
Shit on IE all you want, but there was a forgotten era when it was the pioneer.
Also, I'm generally finding Edge on Android a better experience than Chrome on Android after having played with it for a month or so. I still prefer Chrome vs Edge on my Windows desktop.
Obviously, YMMV, but these are my personal observations and experiences.
Layers lost out and innerHTML won the day, but it's a stretch to say IE was more innovative than Netscape. Arguably innerHTML won because this was the era where Microsoft was throwing its monopoly weight around to push IE and kill Netscape.
He had turned his proofs of concepts into a cross-browser (Navigator 4, IE 4) library called DynAPI along with a tutorial: https://web.archive.org/web/20010413015916/http://www.danste...
But IE crashed if you used XMLHttpRequest or innerHTML too much. So despite them forcing it on the world since IE 4, it only became usable in later IE 6 versions (Windows service pack FTW), a little later than 2001.
For more information and a million references, here is a comment I left detailing this history five years ago:
It's a big decision to pull the plug on a codebase of that size, so browser developers tend to resist doing it until stagnation happens.
It happened with Netscape, it happened with MSIE, it happened with Opera, it happened with Firefox, can you guess who's next in line?
The plug was not pulled on the IE and Firefox code bases. In both cases, the plug was pulled on legacy extensions permitting major changes, but neither Edge nor Quantum are new code bases.
(WebKit deprecated its first embedding API on Mac and obsoleted it on Linux, replacing it with the WebKit2 API.)
> Also, I'm generally finding Edge on Android a better experience than Chrome on Android
Firefox on Android is also cool, and free software to boot.
On the other hand, the article is about the UA string and isn't meant to be an accurate browser timeline anyway.
For example, it says:
„And the followers of Linux were much sorrowed, because they had built Konqueror, whose engine was KHTML [..]
Then cometh Opera and said, “surely we should allow our users to decide which browser we should impersonate, [..]”
If I remember correctly, Opera came before Konqueror and KHTML.
There's an issue with not-always-60fps scrolling in Firefox on Android and the UX is not ideal, but having uBlock Origin and Stylus on Android in my opinion beats that.
They certainly didn't want Edge to look just as broken when people had code like:
if(browserType === 'IE') doIEWorkAround()
So I understand their disguise.
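To make that concrete, here is a hedged sketch of the kind of sniffing code they would have been dodging (all names here are invented for the example):

    // Hypothetical UA sniffer of the kind described above; names are made up.
    // A naive check like this sends anything it flags as "IE" down a legacy
    // path, and anything without "Chrome" in its UA misses the good pages,
    // which is exactly what Edge's Chrome-like UA was meant to avoid.
    function getBrowserType(ua) {
      if (ua.includes("Edge/")) return "Edge";   // must be tested before Chrome
      if (ua.includes("MSIE") || ua.includes("Trident/")) return "IE";
      if (ua.includes("Chrome/")) return "Chrome";
      return "Other";
    }

    function doIEWorkAround() { /* the legacy fixes alluded to above */ }

    if (getBrowserType(navigator.userAgent) === "IE") doIEWorkAround();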
TL;DR: Future was already here, but it could not communicate with the present.
3D skeuomorphic interfaces were always good in theory. I think the problem with them is the keyboard. The mouse is capable of recording velocity (which is why it is used for aiming), whereas the keyboard is binary. This distinction is crucially important for navigating quickly through interfaces. If I want to switch apps or tabs, I can typically do it with a very quick flick of my mouse. In 3D I would have to walk over to my new task which could take a significant amount of time depending on how far away it is from me.
Nobody wants to explore a 100 story virtual mall when they can just type in exactly what they are looking for. It could maybe work for Ikea, but the markets where this could work are so incredibly niche.
The advantage that computers bring is that they don't have to work like the real world. I am sure that our current approach to VR interfaces (which is 3d skeuomorphic) is a dead-end; there exists a more productive method that works nothing like our reality.
Regardless, the 2D browser was the better approach. Cool does not equate to usable.
Of course there will be exceptions, but generally the rule applies.
There was VRML, there was Second Life, and Linux had that 3D cube where every side was a virtual screen. Everyone had been reading Snow Crash.
And just as virtual reality was supposedly right around the corner, it was also the time of the first digital currency boom: Liberty Reserve, E-gold, DigiCash, Flooz. The more things change, the more they stay the same.
Like everyone else, I use macOS now because it's what work supports, and the desktop is nearly unusable and primitive as anything. Every day I rage that I can't adjust transparency or toggle always-on-top however I want.
In fact I would bet that a VRML renderer could be implemented in pure JS + WebGL.
Web3D (WebGL, specifically) is imperative. It lets a program render 3D scenes by telling the system where to put the triangles on each frame.
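A minimal sketch of that imperative model, assuming a <canvas> element is already on the page; a real VRML renderer would of course need a parser and a scene-graph layer on top of calls like these:

    // Set up one triangle once...
    const gl = document.querySelector("canvas").getContext("webgl");

    function compile(type, source) {
      const shader = gl.createShader(type);
      gl.shaderSource(shader, source);
      gl.compileShader(shader);
      return shader;
    }

    const program = gl.createProgram();
    gl.attachShader(program, compile(gl.VERTEX_SHADER,
      "attribute vec2 pos; void main() { gl_Position = vec4(pos, 0.0, 1.0); }"));
    gl.attachShader(program, compile(gl.FRAGMENT_SHADER,
      "precision mediump float; void main() { gl_FragColor = vec4(1.0, 0.5, 0.2, 1.0); }"));
    gl.linkProgram(program);
    gl.useProgram(program);

    const buffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
    gl.bufferData(gl.ARRAY_BUFFER,
      new Float32Array([0, 0.6, -0.6, -0.6, 0.6, -0.6]), gl.STATIC_DRAW);
    const pos = gl.getAttribLocation(program, "pos");
    gl.enableVertexAttribArray(pos);
    gl.vertexAttribPointer(pos, 2, gl.FLOAT, false, 0, 0);

    // ...but there is no retained scene: the page has to re-issue the draw
    // call for its triangles on every single frame.
    function frame() {
      gl.clear(gl.COLOR_BUFFER_BIT);
      gl.drawArrays(gl.TRIANGLES, 0, 3);
      requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);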
Croquet was and still is far ahead of its time. Only in 50 or 100 years will people realize the potential, at which point Croquet itself will likely have been forgotten completely :(
The thing you should really think about is that everything is done via objects: instead of using JSON as a sort of RPC, you can just get all the objects through your browser. Instead of screen-sharing your computer, you can share a single sheet of a spreadsheet, a game, or your file system; and instead of building these thin clients in the browser, you can just share your software through a system like this.
JSON is basically objects but serialized. X can also kind of share a window through the network, but as it is designed to do so, it fails on many other levels.
I mean, the difference is subtle. You could ask: why not keep the same draw engine and use JSON to carry the data over the network? You'd have to solve syncing and so on, but if you instead use something like Croquet (I'm assuming here, I don't have experience with that system) you are sharing the object itself, methods and all, not just a text representation of it.
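A rough illustration of that difference, with everything below made up for the example:

    // JSON carries a snapshot of state; methods and identity are lost in transit.
    const snapshot = JSON.stringify({ balance: 100 });
    const copy = JSON.parse(snapshot);        // plain data, no behaviour attached

    // "Sharing the object" means the remote side can invoke behaviour as well.
    // Here a Proxy forwards every method call to a (stubbed) message channel,
    // which is roughly the shape a Croquet-like system gives you.
    function send(message) { console.log("would replicate:", message); }
    const sharedAccount = new Proxy({}, {
      get: (_, method) => (...args) => send({ target: "account", method, args }),
    });
    sharedAccount.deposit(25);                // the call travels, not just text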
Yet, in the end, we end up with a mess for everybody. What could have been done differently to end up at a good solution? I guess having universally defined and complied with standards would have helped, so a browser could just say "I support HTML 1.3".
Simple, not having a user agent from the start.
Ideally a URI would always just return the exact same webpage. Except it became necessary to be able to update them which broke this assumption, and eventually the need for some kind of authenticated session spawned all kinds of mechanisms that definitively killed off URIs as Uniform Resource Identifiers.
Perhaps if we were to do it all over we'd have a uniform method for authentication, and maybe even the possibility to refer to past versions of a page. Alas it was not to be.
Pretty soon we're just going to be executing WebAssembly blobs and that will be that.
Probably not; standards on the web that don't lag behind implementation end up like XHTML 2.0.
Something like a standardized API for feature-detection, possibly.
It turned out there were cases where browsers returned "true" while their implementation of the feature did not do what authors wanted. There were various reasons for this: the feature detection being decoupled from the feature implementation, bugs in the feature that the detection could not capture, the detection not being fine-grained enough, etc. And there were cases where hasFeature returned "false" while the feature was usable, for similar reasons.
Long story short, at this point the implementation of hasFeature per spec, and in browsers, is "return true".
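For context, a small sketch of the difference (the specific APIs probed here are just examples):

    // The standardized check mentioned above; per the current spec it just
    // returns true, so it tells you nothing useful.
    console.log(document.implementation.hasFeature("Core", "3.0")); // always true

    // What authors rely on instead: probe for the actual object or method.
    if ("IntersectionObserver" in window) {
      // safe to construct one here
    }
    if (typeof document.createElement("video").canPlayType === "function") {
      // the <video> element is supported
    }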
In TLS we now have two bogus version numbers you should ignore. We also have an extension that will signal the real version number. It'll also send a bunch of bogus version numbers to "train" servers to expect and ignore bogus version numbers.
This is all due to the fact that server vendors found it too complicated to implement "if I get a version higher than what I support, I answer with the highest version I do support". Instead they often implement "if I get a version higher than what I support, I'll drop the connection".
But all of that was not enough to make TLS 1.3 work. It now also includes sending a bunch of bogus messages that have no meaning and are ignored, just to make it look more like TLS 1.2.
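Not the actual wire format, just the negotiation rule spelled out as a hedged sketch (0x0303 and 0x0304 are the TLS 1.2 and 1.3 version codes):

    // What a well-behaved server should do with a client version it doesn't
    // know: answer with the highest version it does support.
    function chooseVersion(clientVersion, serverMax) {
      return Math.min(clientVersion, serverMax);
    }
    chooseVersion(0x0304, 0x0303);   // client offers TLS 1.3, server answers TLS 1.2

    // What many deployed servers effectively did instead, which is why TLS 1.3
    // ended up disguising itself as 1.2 on the wire.
    function brokenChooseVersion(clientVersion, serverMax) {
      if (clientVersion > serverMax) throw new Error("drop the connection");
      return clientVersion;
    }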
David Benjamin summarized that recently at Real World Crypto:
Firefox didn't have a problem displaying those pages, so I had to install a plugin so that Firefox could pretend to be Internet Explorer so that I could just see the web page.
I'm glad those days are over.
Those days aren't over yet.
Google Earth says "Google Chrome is required to run the new Google Earth" or "Oh no! Google Earth isn't supported by your browser yet" if you try to use another browser:
Firefox doesn't support Native Client, and Google hasn't finished rewriting Earth in WASM yet: https://medium.com/google-earth/earth-on-web-the-road-to-cro...
I'm sure some will say they're more happy with the simpler interface, but the fact still remains that they're serving a lower quality version of the site (with no access to things like Search Tools to filter by date, for example) to non-Chrome users.
And despite compiling their native code to NaCl with a compiler backend that can also target asm.js and WASM, which should make a port a matter of a few days (entire game engines have been ported that way), Google has been at it for months and still keeps it Chrome-only.
Basically: native threading. NaCl supports it; asm.js and WASM don't (yet).
This isn’t the first time Google has released a product exclusively for Chrome, trying to pull more users to their own platform. Even if this is not directly intended, the result is a massive anticompetitive effect.
But switching the user-agent isn't enough in other browsers, Google must be using some fuckery in the background.
These days I can't view my facebook messages on Firefox for Android without hacking my user agent to pretend to be Android 4.
I'm happy that it's just a few irresponsible sites though. It used to be half the web.
If you don't mind... what exactly do you need to do?
But suddenly it's not possible to send a message without going to a clunky version of Facebook.
Facebook is just doing what capitalism tells it is ok. Hate for Facebook is wasted energy. We have to stop rewarding the behaviors that we don't wish to see repeated.
Nowadays there are plenty of Chrome-only web apps.
>> Firefox didn't have a problem displaying those pages
It probably wasn't due to vbscript
You can see what resolution you're getting with the Ctrl+Alt+Shift+D shortcut to bring up Netflix's debugging information (and press again to dismiss it). You can also verify your resolution with the Test Patterns video.
The sad thing is 1080p video works just fine in Firefox. There's a Firefox add-on available which enables the 1080p stream:
Then they caved and now we have DRM in the standard.. sad days
> Then they caved
Some people call that "responding to user feedback". It sucks that you don't agree with their decision, but trying to paint Mozilla in a bad light for listening to their users' demands is preposterous.
Their beliefs are great and all, but at the end of the day they are providing a product for the end user, so if their users are "bitching" that they want Netflix and other EME services available on Firefox, then the right choice is to make the user happy if possible.
Their actual users were demanding that they not do it: not embrace WebExtensions, not force-install adware on every system, not make privacy-invading features opt-out instead of opt-in, not embrace the destruction of the open web....
We, the actual Firefox user base, were given a big middle finger by Mozilla, which instead went on a sorry excuse of a marketing ploy, begging users to return to their new Chrome clone.
If 70% of users want Netflix, and the other 30% want privacy, then Mozilla is going to look at the 70%. Because that's what makes sense from any sort of organizational planning. You're not going to try to appease a tiny minority (and yes, privacy-conscious individuals are very much in the minority in the world. We may be in a bubble here on HN, but the common person is not going to give two shits about the privacy concerns we may have) when you can appease far more users by doing the opposite.
You've made it clear you are biased in this argument. Maybe try taking a step back and looking at the issue more objectively, or from the other perspective.
The Mozilla Foundation has tax-exempt status in order to promote the open web.
They are no longer honoring that goal; as such, they should lose their tax-exempt status, stop calling themselves a foundation, and stop fraudulently holding themselves out as being for privacy and the open web.
If they want to make an insecure, privacy-invading browser, that is perfectly fine. Google and MS already do that.
They need to be honest about it and not hold themselves out to be something they are not
They do not fight for the open web, they do not fight for user privacy, and they do not support the goals stated in the Mozilla Manifesto. Thus that status should be removed, and the Mozilla Foundation should be dissolved into the Mozilla Corporation, a for-profit software vendor making a commercial web browser.
You're taking the stance that Mozilla is no different from Google, MS, or other commercial software vendors: that they are a software company looking to make the best software for their customers.
Mozilla does not have customers. Mozilla is not, and should not be, a commercial entity; it is a charitable foundation with a set of goals it is violating.
It wasn't until after people really started focusing on standards compliance and cross-browser compatible frameworks that things got better. The "acid tests" for html/css/js standards compliance helped establish how far along the various browsers were at the time. Most browsers were absolutely terrible in that era, it wasn't until Chrome hit the scene and webkit started taking off that standards compliance started to become a big deal. Eventually most major browsers had decent or good standards compliance in their rendering and things like jQuery helped smooth over the rough spots of differences in browser behavior.
Try using Google.com from Firefox Mobile and from Chrome Mobile, and you'll see a major difference.
To get 95% of the functionality working, you need to fake a Chrome UA. In 2018.
Faking the user agent header (yay, FF on Android supports all the desktop extensions) makes everything normal on Firefox.
The plugin page is still there though obviously obsolete by now.
(yes i know it's probably a joke)
Also Closure assumes that only things with "WebKit" in their UA might be running on a mobile device and that all browsers fall into the WebKit/IE/Edge/Gecko buckets (and will fail badly if a browser does not).
And this is just one library.
If you're using something other than the approved four (Edge/Safari/FF/Chrome), lots of sites will nag you.
If ua-parser doesn't exist in your language, just pull the yaml file out of ua-core. That defines the regexes you should use and how they translate to browser versions (and os versions and devices).
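A rough sketch of doing exactly that in Node, assuming the js-yaml package and the field names I remember from the regexes.yaml data file (user_agent_parsers entries with regex and optional *_replacement keys); check the file itself for the authoritative schema:

    const fs = require("fs");
    const yaml = require("js-yaml");

    // regexes.yaml is the data file shipped by the core ua-parser project.
    const { user_agent_parsers } = yaml.load(fs.readFileSync("regexes.yaml", "utf8"));

    function parseUA(ua) {
      for (const p of user_agent_parsers) {
        const m = new RegExp(p.regex).exec(ua);
        if (m) {
          return {
            // Simplified: real replacement strings can contain $1 placeholders.
            family: p.family_replacement || m[1],
            major: p.v1_replacement || m[2],
            minor: p.v2_replacement || m[3],
          };
        }
      }
      return { family: "Other" };
    }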
As per modern web dev standards: you should always use feature detection, not agent sniffing, to handle cross-browser issues; however, having accurate user agent detection is really handy for troubleshooting customer issues, bot detection, spotting trends, etc.
In reality, browsers have known bugs that last for years, you need to collect stats to figure out support policies, and you need to reproduce customer bugs.
Example: old versions of Firefox have an RCE vulnerability if you use third-party JSONP APIs. If you use those APIs but don't block those Firefox versions, your users will be vulnerable.
We used ua-parser and everything went very smoothly.
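A hedged sketch of that kind of gate; the version cutoff below is a placeholder for the example, not the real advisory range:

    // Block the Firefox versions with the known bug before loading the JSONP
    // widgets. The "< 52" cutoff is made up for illustration.
    function isVulnerableFirefox(ua) {
      const m = /Firefox\/(\d+)/.exec(ua);
      return m !== null && Number(m[1]) < 52;
    }

    if (isVulnerableFirefox(navigator.userAgent)) {
      // skip the third-party JSONP APIs for this browser
    }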
Other fun facts:
- Chrome on iOS reports its Chrome version (e.g. 64.0.36), with no way to get the underlying Safari engine version.
- Android WebViews have replaced one UA string pattern with another close to three times (pre-KitKat, KitKat through Marshmallow, and one for Marshmallow and above)
- Chrome continues to include a "WebKit" version in its UA, even after having forked to Blink, though since Chrome 27 the WebKit version always says "537.36".
I wrote a library that generates user agent strings programmatically -
File under: Problems that require a time machine to fix. https://blogs.msdn.microsoft.com/oldnewthing/20110131-00/?p=...
> ProductSub returns 20030107 for Chrome and Safari, because that's the release date for Safari which used an Apple fork of WebKit. Chrome also uses this fork. For Firefox, it's 20100101. I don't know why.
> Vendor returns "Google Inc." for Chrome, but undefined for everything else.
> Navigator can tell if your device has a touch screen
> Navigator can tell how many logical cores you have
> appCodeName always returns "Mozilla" and appName always "Netscape"
> Navigator can tell if you're using: Wi-Fi, Ethernet, cellular, Bluetooth, or WiMAX
> Navigator knows how much RAM you have
> And the exact plugins you're using. A Firefox useragent won't hide 'type':'application/x-google-chrome-pdf'
> Your screen can be shared through navigator -- without your permission
> Languages are set as either `US-en` or `en` to differentiate between Americans and British
> Your battery can be acpi'd by Navigator
> File permissions can be read, revealing usernames
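Most of that quoted list maps onto real navigator properties; a quick, non-exhaustive sketch (availability varies, and several of these are Chrome-only):

    navigator.appCodeName;           // "Mozilla" everywhere
    navigator.appName;               // "Netscape" everywhere
    navigator.productSub;            // "20030107" in Chrome/Safari, "20100101" in Firefox
    navigator.vendor;                // "Google Inc." in Chrome
    navigator.hardwareConcurrency;   // number of logical cores
    navigator.deviceMemory;          // approximate RAM in GiB (Chrome)
    navigator.maxTouchPoints;        // > 0 on touch screens
    navigator.languages;             // e.g. ["en-US", "en"]
    navigator.connection && navigator.connection.type;     // "wifi", "cellular", ... (Chrome)
    Array.from(navigator.plugins, (p) => p.name);          // installed plugins
    navigator.getBattery && navigator.getBattery().then((b) => b.level);  // battery level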
At some point, that date was Firefox's build date. Then concerns were raised that the date allowed sites to track users, so it was frozen at 20100101.
Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.13 (KHTML, like Gecko) Chrome/0.2.149.27 Safari/525.13
Getting it removed from Gecko was https://bugzilla.mozilla.org/show_bug.cgi?id=572668 . Chrome and Safari followed.
TL;DR: you can see end result for each platform here: https://github.com/servo/servo/blob/2d3771daab84709a6152c9b5..., and it looks like "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:55.0) Servo/1.0 Firefox/55.0"
Where bluebutton was a design we were testing for our signin page. Of course once bluebutton worked and had run for a while everyone was afraid to change it in case there was a dependency of some kind. So the Facebook login that replaced the old signin would look like:
Even though no sign-in page was shown, let alone a bluebutton.
Text-only cache: http://webcache.googleusercontent.com/search?q=cache:maxiNwj...
Edit: The full-version cache is broken for me as well!
The last one returns JSON
Super interesting read though! :)
One can argue that yes, in 2018, there should be APIs that let you detect all this stuff in a much better way than parsing the random mess of legacy markers that is a typical user agent string. But in reality, parsing the UA string is still what happens, and unfortunately keeps happening, including on very major and technically advanced sites.
(Summary: feature detection requires more round trips, and slows down pages.)