Personally, I'd say it was mainly because we were tech geeks who, back then, actually believed in a web that should be accessible to as many different platforms as possible using as many different browsers as possible, not just Windows and IE users.
In a sense, that notion prevailed: the web is ubiquitous, browser lock-in is considered a douche move and you can still access some really worthwhile sites even with rather simple means.
In another sense, we all failed, lending our hands to help create the current privacy-invading, dark-patterned UX nightmare we're still trying to pass off as a reasonable way of making the world a better place.
I never thought I'd lament the day Microsoft stopped making their own browser core. As a web developer, one less rendering engine to GAF about is nice. As a netizen, the Google dominance is going from unsettling to scary.
I started out using Mozilla Communicator Suite, Phoenix, and now Firefox as my primary browser. I never defaulted to IE and never to Chrome but I have always strived to make everything I wrote work across as many browsers as possible.
It was glorious for about a year or two, then everything started to diverge again, and with mobile versions of browsers we're heading back towards the hell that was developing for the web around 2004.
There are certainly more platforms to juggle today but they're more similar than different at this point.
I think it's more likely that a new communication paradigm will supplant the web than that a Chromium competitor will replace Chromium, just as the new smartphone/cloud computing platforms finally broke the Windows stranglehold. PCs are still dominated by Windows.
The bigger issue is backwards compatibility; the web has bent over backwards to maintain it, which is a blessing and a curse. On the one hand it makes the feature set absurd, but on the other hand the web is (relatively) easy to archive since format interpreters for it are widespread, and pages can be built and then forgotten about because they still work.
I'd like to see that experiment done.
This is talked about in a few videos and articles by its creators.
The HTML5 parsing algorithm is not _that_ hard:
Do you really think that the complexity of a full XML implementation of the half-dozen specs required for XHTML would be a significant savings compared to the features browser developers actually spend their time on?
You're correct that complex software is what people want, but that's complexity in the sense of an advanced document layout system with advanced language support, rich media, forms, etc., rather than in the format those features are implemented in.
Seriously, XML is arguably bad, but HTML5 is absurdly horrible. The only reason it's acceptable is that it's so widely used that parsers are huge shared projects and most bugs are shallow.
I guess it's all a matter of perspective: sure, you might argue, hey, the spec isn't hundreds of pages, so it's humanly comprehensible. But from my perspective that's a really low bar, and it clears it just barely.
It matters too, because these weird quirks have often hidden things like performance issues, misparses, and security issues due to incorrect normalization.
I think I disagree. Most of my customers don't even want software. They want to watch a movie, or read something, or transform some data, or do something else that is their actual goal.
Complex software is what we make in order to enable them to achieve their goal; perhaps if we were smarter and better, we'd be able to make simple software that enabled their goals.
Or perhaps more useful: domains / classes of such issues?
I suspect a lot is:
- JS generally
A heatmap of problem areas might be interesting.
Update: that's sort of here:
We've already seen WebKit give rise to two divergent major browsers: Chrome and Safari.
I just discovered Goanna on Wikipedia, a fork of the Gecko engine, presumably maintained with relatively thin resources. I don't know how well it compares to mainstream engines, though.
I suppose the short version is that the workload is a function of the goals.
Edit: at that point I wouldn't really consider it "FF of yore." A rewrite is really a new product.
The comment you called out is directly related to what I was talking about, it is not orthogonal.
I'll repeat myself: Firefox is already using the browser engine for its own UI, and that's been the case—I'll repeat again—since before it was even called Firefox.
> they recently announced that the UI is web component based - which means that using Servo for the window chrome is in the realm of possibility
IMO some of the key words there are "web component" and "Servo":
1. IIRC you can't render XUL strictly with web components or anything that would be called "web", since that usually refers either to web standards or to things you would use from a webpage that aren't standardized (proprietary extensions, Flash, Silverlight, and so on).
2. Regarding the Servo part, I don't think that Servo ever included XUL support at all. Servo isn't used wholesale as the rendering engine in Firefox, but the parent comment talks about the possibility of using it and web standards for the browser chrome.
I feel like you are responding to a comment that never was and then responding to a comment on that comment by saying the original comment was something you thought it was, but it wasn't.
> IIRC you can't render XUL strictly with web components or anything that would be called "web"
> use [Servo] and web standards for the browser chrome.
Geez, this is excruciating.
If you want to use web standards for the browser chrome, this is not a new development. Because Gecko supports web standards. And Gecko has been used for the browser UI for years. There is nothing particular to Servo here. There's nothing particular to Rust.
Using standardized web tech for the "window chrome" is not "in the realm of possibility". It is possible. Full stop. It has never not been the case.
1. Servo supports many web standards, not XUL
2. The Firefox UI used to be built in XUL
3. The Firefox UI is now not built on XUL, but rather on web standards
This means that the Firefox UI can now be rendered by Servo or similar components.
That's basically the parent comment. If you disagree with any of those facts, that's an interesting discussion, but I think you think that somebody said the Firefox UI could not previously be built on Gecko, which nobody said.
If anything, Firefox is a good example of why it's probably better for aspiring browser competitors to start off by spinning off from Chromium like Brave did. Not that Brave is perfect either, especially on desktops, but the saved resources let them focus on finding ways to surpass the competition, like improving the mobile UI.
[Edit (fixing link)]: 4. https://www.dedoimedo.com/computers/firefox-addons-future.ht...
Just like the 1st edition of Bill Gates' book, Microsoft decided the internet thing wasn't as interesting as whatever they had cooked up. To their credit, they figured out they were wrong.
> The final nail in the coffin was that after the release of Internet Explorer 6, Microsoft decided to tightly couple new Internet Explorer releases to Windows releases. So they dismantled the Internet Explorer team and integrated it into the Windows product team.
I also wonder if they realised they were just building infrastructure that competitors would use to supplant them. Already by 2001 and 2002, Google was starting to dominate search.
Besides, enterprise customer adoption of browser based tech sold lots of SQL Server.
Outside of the SV bubble, this is seen as the creepy grandstanding that it is.
It’s like saying that because Microsoft, Amazon, and Google all use Linux, we’ll have no innovation.
They have a handful of products that dominate the market and everything else is a joke. And it seems like every good idea/invention they have, gets turned into a mess by marketing/management.
just yesterday i ran into chrome's 2016 img/flex-basis bug which works properly in firefox but requires an extra wrapping div as a work-around in chrome.
what possible motivation is there to fix it when you're not competing with anyone?
hopefully microsoft can help fix it now?
also yesterday, i was writing some ui tests that use getBoundingClientRect() at different media query breakpoints. not only does chrome intermittently fail to deliver consistent results between runs (even with judicious timeouts), at different screen pixel densities its rounding errors are several pixels off and accumulate to bork all tests in a major way. on the other hand, firefox behaves deterministically across test runs and there's a single pixel (non-accumulating) error in one of several hundred tests.
somehow, i made it through the dark ages of IE6 without permanent hair loss, but i dont have fond memories of those years in my career.
now manifest v3 is starting to roll out in Chrome 80. once uBlock Origin stops working, i will use chrome even less (i try only to use it for its devtools currently)
Considering the bug hasn't been fixed in 4 years despite competition from Firefox and Edge, I think your assumption that competition drives bug fixes might be a bit off. It clearly doesn’t.
i guess phrased differently, dominant market position (which Chrome has had for those 4 years) is not conducive to "boring" tasks, such as bug fixes - at least those for which workarounds exist. but this is also what happened with IE6 - devs found clumsy workarounds for its bugs, so they never got fixed because even if end users switched to firefox, it's not like devs could suddenly ignore the 800lb gorilla with 90% market share, their sites had to continue to work everywhere, greatly reducing the incentive to switch (users will say "it works fine in both browsers!"). it becomes a self-fulfilling prophecy.
that being said, i've had some positive experience with a rendering bug i've reported getting fixed: https://bugs.chromium.org/p/chromium/issues/detail?id=899342, but that bug was relatively simple since it did not affect layout, just paint.
nuances aside, the fact that 75-80% of the landscape will be blink-based is unfortunate.
Seriously, I need somebody to explain it to me because I don’t get it.
Having one core base of code is not a bad thing to me.
What am I missing here?
And there certainly are people that say the HTML monoculture is maddening. Perhaps you've heard people complaining about Electron being used for desktop apps.
Sure, it's one fewer target to test against, but it saddens me to think that this is going to make it even more likely that web developers target Chrome and its ilk only and that the layout bugs in it are becoming the de-facto standard, just as happened with IE6 for ages. This is actually bad for Firefox even in the parts where it adheres to the standards when other browsers won't.
IE used a proprietary rendering engine. It's now being replaced with a free and open source one. This seems like a strict improvement. It's the opposite situation -- a single proprietary engine being dominant -- that is the doomsday scenario, and that's what we saw a decade and a half ago with IE. The farther we get from that being a possibility, the better.
For related reasons, I'm not happy that DRM has become part of the standard Web feature set.
Because the Chromium monoculture has allowed Google to dominate the web standards process. They can veto any feature or force one through by shipping it and pushing sites to depend on it (including their own).
There is an army of Googlers whose job it is to keep tacking on new web standards. And Google will implement the features before proposing the specs, so their competitors—well, now it's just Mozilla and Apple, I guess—are kept playing constant catch-up. Meanwhile, anything that comes from outside of Google will have to brave the same army trying to smother it in committee.
Just ask anyone who's dealt with web standards politics from outside of Google. It isn't fun anymore.
(Oh, yeah, and because there's essentially no accountability now, plenty of these new features rushed through the door are buggy and introduce security holes. It's like IE all over again.)
That's a dang shame then :/
Apple's a big company, but we saw how they mishandled their own first-party Maps service after divorcing from Google. I can see Apple's Safari potentially falling behind badly if they can't keep up with Google's work on Blink.
In fact, I can't think of any webkit developments that positively surprised me the past few years; development seems glacial, at best. A list of somewhat notable stuff chromium and gecko have that webkit is still missing:
Stuff that's missing because, apparently, it's better to make your devs pay licenses for no good reason:
- In "fairness": https://caniuse.com/#feat=hevc
There's a whole bunch of stuff that would make it easier for webapps to replace app store apps or otherwise appear native; can't have that!
- https://caniuse.com/#feat=vibration (trying to push people to the apple app store?)
- https://caniuse.com/#feat=flow-root (supported on osx, not ios?)
- https://caniuse.com/#feat=input-datetime (mostly supported on ios, but not osx?)
Then there are the missing features that just seem to be there to bug users and devs:
- https://caniuse.com/#feat=link-icon-png (I mean, seriously?)
Then there's useful stuff they don't seem to be willing to work with:
Obviously, there are features that webkit has that others do not, but by and large they're not as interesting or plausibly useful.
Webkit is definitely not blink; not anymore.
The same way reimplementing Chromium in Firefox can't fix anything.
It doesn't matter if the code for a single web rendering engine is available if the standards process is closed.
Although in fact there are also implementation issues. It has never been shown that open-source implementations are optimally efficient, secure, and robust; in fact, the various debacles around SSL and the like strongly suggest otherwise.
The fact that development is either open source or proprietary continues to be a huge problem. They both have strong points, but they also have obvious weaknesses. Realistically neither is optimal for critical public infrastructure.
Currently Google has far too much influence over infrastructure - rather like Microsoft did in the late 90s, but even more so.
Open source won't fix this. Anti-trust action - which is long overdue - might.
A huge benefit from that is that companies like Igalia are able to push features forward across all engines via OSS contributions.
Trident was not much of a contender in the 2010s. Its formal death does not decrease diversity. Now that Microsoft _embraced_ Chrome, I expect the browser market to become more diverse, not less.
I think that would have been a preferable outcome, for it would have pushed at least some fraction of users towards Firefox, potentially helping to shore up the only browser using a (significantly) different engine. Instead we have another Chromium-based browser, which doesn't add anything of significance to the browser landscape.
With all the coupling with other Windows subsystems, and some features existing just to enable a Windows-only PC ecosystem, I'm not sure that IE/Edge/Trident/EdgeHTML could be open-sourced on a whim.
> Now that Microsoft _embraced_ Chrome, I expect the browser market to become more diverse, not less.
Internet Explorer just became an OS-vendor-backed version of the NeoPlanet browser. Just the same thing, in a different shell.
While those portions could be stripped-out, it would be a mammoth task to go through the source code history and identify what belongs to who and stub it out if necessary (let alone replace it with first-party code).
Whereas MS’ new projects (like .NET Core) were made open-source from the start and made in the open (Cathedral and the Bazaar) - so there’s no mean ol’ lawyers from LCA to stop people having fun.
I feel like many people forget that, back when it competed against Netscape, IE really was the best browser on the market. The problem was that once they had “won” the browser war, Microsoft just completely abandoned development, allowing the product to languish and become the terrible abomination many of us remember having to write ridiculous workarounds to support.
What you call the permissiveness of web browsers - in other words, their insistence on attempting to render invalid or badly-formed HTML - is what has made the web succeed at all.
Firstly, it was fundamental from the start: NCSA Mosaic was implemented that way, as was Netscape, so there's no point blaming Microsoft.
Secondly, and far more importantly, the robustness of web browsers is the reason why you can read 99.9% of web pages at all, including the one you're reading right now. (Yes, it's invalid: https://validator.w3.org/nu/?doc=https%3A%2F%2Fnews.ycombina... )
I know it's tempting to believe that draconian error handling would have forced people to code web pages "properly". Unfortunately, when draconian error handling was added to the web (as XHTML), it failed to take off. Check the history: https://www.w3.org/html/wg/wiki/DraconianErrorHandling
Mark Pilgrim wrote several excellent pieces about why non-draconian error handling is better, and as someone who wrote XML feed parsers and validators that were among the most robust and thorough in existence, he is deeply qualified to know. My favourite of those pieces is the "Thought Experiment" but I also recommend , which includes:
There are no exceptions to Postel’s Law. Anyone who tries to tell you differently is probably a client-side developer who wants the entire world to change so that their life might be 0.00001% easier. The world doesn’t work that way.
I'm not entirely sure an XML-error-deface is the worst way to expose a program that automatically inserts anyone's garbage in your web page while not having a clear model of acceptable garbage.
There is a historical precedent to show it.
I remember when XHTML was the future, about 2002-2005. Pages were loaded in Firefox by a XML parser. If the page was invalid XML for any reason, Firefox would render a parser error message: "error X in line Y, column Z" with a copy of the offending line and a nice caret under the error position thanks to a monospace font.
Wrong percent encoding? No page rendered. Invalid entity? No page rendered. Messy comment separator (two minus signs)? No page rendered. Inserting an element where not allowed? I guess no page rendered.
This is nice from a rigorous developer's perspective, and I appreciated it. But (I used to hate this "but"; a wise person sees the world as it is) it is a catastrophe for real-world adoption.
Fixing one static page on your dev machine, thanks to the error message, is one thing. Making a dynamic website becomes practically impossible unless all your engineers are extremely rigorous and well-organized, and/or use a framework that generates guaranteed-valid XHTML every time.
But all frameworks (except a few obscure ones) had (have?) no notion of a document tree or proper escaping; they just concatenate text snippets.
From a business perspective, it means your website is much more difficult to get displayed at all (let alone correctly displayed). And even if it works today, it can blow up at any time because of a minor fix anywhere. Worse, the pages your team tests are okay, but real-world visitors will hit some corner case and get an error message intended for a developer.
One may have hoped that some cleaner framework would appear and serve guaranteed-valid XHTML every time; I would have liked this option. Developers would create tree hierarchies in memory and serialize them into XHTML. If any commenter can name frameworks that do this, and how popular they are, please do. Did any of them save XHTML?
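For illustration, here's a minimal sketch (hypothetical, not any real framework) of what that tree-building approach looks like: text is escaped at serialization time, so the output is well-formed by construction rather than by programmer discipline:

```javascript
// Hypothetical sketch: build a document tree, then serialize it.
// Well-formedness comes from the serializer, not from careful string-gluing.

function escapeText(s) {
  return s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
}

function escapeAttr(s) {
  return escapeText(s).replace(/"/g, "&quot;");
}

// A node is { tag, attrs, children }; children are nodes or plain strings.
function el(tag, attrs = {}, children = []) {
  return { tag, attrs, children };
}

function serialize(node) {
  if (typeof node === "string") return escapeText(node); // text is always escaped
  const attrs = Object.entries(node.attrs)
    .map(([k, v]) => ` ${k}="${escapeAttr(v)}"`)
    .join("");
  const inner = node.children.map(serialize).join("");
  return `<${node.tag}${attrs}>${inner}</${node.tag}>`;
}

// Untrusted input cannot break the markup:
const page = el("p", { title: 'He said "hi"' }, ["5 < 6 & 7 > 2"]);
console.log(serialize(page));
// → <p title="He said &quot;hi&quot;">5 &lt; 6 &amp; 7 &gt; 2</p>
```

Concatenation-based templating has no such guarantee: one unescaped user string anywhere, and the whole page fails the XML parser.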
That aspect may be the reason number one why XHTML was ditched in favor of HTML5: the web worked because it did a best effort to render invalid pages. Any solution that strays away from this principle will not be adopted at large.
Meta bonus: we're discussing the HTML level but this kind of discussion we would have had at any other level, had the stack been consistent a few levels higher (script) or lower (HTTP, TCP). It's funny how HTTP and TCP looks like they just work, but they have their own corner cases and spec holes. The ecosystem just happened to have mostly converged on a few implementations that mostly work okay. (No, let's not talk about IPv4, NAT, and the like. ;-)
Also, Netscape was extremely buggy as well: I remember moving friends and family to IE as it was a much better experience in general. For example, one very annoying bug (given the slow downloads at 28.8) was Netscape would often not use its cache for certain code paths and would re-download files again, even if it had them in its cache. In particular window resizing would cause it to re-download files for no reason (this was obviously well before anything like responsive layouts, or imgsrc), and you'd often wait 30-40 seconds to see the page again.
Similarly, I seem to remember IE's "offline mode" worked almost perfectly from the cache and you could revisit pages when not dialed up, but Netscape often would not show anything.
(Obviously there were other ways of handling this, like Teleport Pro to download entire sites, but it was convenient IE's features generally just worked from a user perspective).
I'm not saying that Netscape's engineers did a poor job, because web pages are hard to render correctly. (No, not just because of badly-formed HTML.) I'm saying that IE, for whatever reason, was faster, more stable and more correct.
I finally switched away to chrome because Opera became unbearable on Ubuntu, it slowed down and a lot of graphics issues appeared.
I am old enough to remember the time when VML and the others were invented. At that time, "open source" was still called a "cancer" by Microsoft, and Linux users were threatened with patent violations by MS every two months.
It would have been insane to reimplement or standardize anything coming from Microsoft at that time; it would have been a sure path to ending up in front of a judge for patent infringement...
Anything coming from Microsoft was radioactive due to stupid political decisions and aggressive patent & IP attitude.
This is sad, and it caused us to lose 10 years of web evolution, reinventing the wheel many, many times.
And that's without even considering the millions of hours of engineering wasted fighting broken HTML compatibility, locked-in technologies (Flash, Silverlight, ActiveX, VBScript), and continuously deprecated proprietary APIs.
(Alas, I think they'll be around for a few decades more, unless someone is wicked enough to use them.)
Anyone who's either a member of the Baby Boom or Generation X will bear witness to the Cold War for most of the rest of this century.
(Even the fiery mushroom cloud surrounding the fireball burning your face off a few years later ...)
"Internet Explorer already had many of the things that we came to reinvent later and that we now celebrate as innovations."
It happens all the time in tech, and IE probably reinvented some things from Hypercard or whatever came before it.
Treating them as advances that were, sadly, not adopted by the rest of the web takes a lot of chutzpah.
The problem with Microsoft was aggressive pricing and forced default installation, it was not the technical side of the browser.
ActiveX nearly destroyed the web, and as recently as a few years ago there were still enterprises digging themselves out of the proprietary mess they developed themselves into.
The ActiveX API was also well-specified and debuggable in a way NPAPI was not, and it was possible to embed it in other runtime environments like Office documents and Visual Basic applications relatively easily, because COM was truly wonderful technology (even if using it was, at times, very painful). It's not a coincidence that Firefox made heavy use of COM for a long time (though they've rightly been removing it).
Having used COM and ActiveX extensively, despite their flaws they were vastly superior technologies compared to NPAPI and they were a pleasure to work with. The security model was bad but again none of the competitor technologies were any better. I shipped large-scale native apps that successfully embedded ActiveX controls (like the flash player) and this was reasonable specifically because of how good the APIs were.
Even after NPAPI and ActiveX made an exit, the web still was infected by swf files and unity games and what have you. Those things are all either dead now or on life support because it turns out browser vendors don't want to maintain them and they're not portable.
I had always wished MHT replaced PDF. But due to the rivalry at the time, Firefox refused to support MHT (even to this day). WebKit has WebArchive, which as far as I know isn't supported outside of Apple's ecosystem.
I don't actually buy the argument that it was Vista that slowed down IE development. IE 7 wasn't that much different from IE 6. It shows Microsoft had very little incentive to improve the web. I don't know how many people actually hate them for not complying with the ACID "standards"; I certainly don't. But at the time the web had so much low-hanging fruit that a lot of people (or just me) were pissed Microsoft didn't even bother improving while holding web standards hostage with IE's dominance. Along with the crap called Windows Vista.
Luckily we got the first iPhone, 2 (?) years later. And the rest is history.
Anything before 8 was a challenge due to some atrocious bugs.
This had its problems, but it really taught you not to write sloppy CSS and JS, as it usually just wouldn't work.
In versions after 7, basically anything that wasn't in the spec wasn't implemented, so you had to write code pretty much bang on the spec.
Just this Friday I solved a rendering problem with IE where SVG TEXT elements weren't being rendered correctly. I was calling element.innerHTML to set the text, which was incorrect; I should have been using element.textContent. Using element.innerHTML is incorrect because SVG elements shouldn't have an innerHTML property (they are not HTML). IE11 was actually working correctly, whereas the latest Chrome behaviour was incorrect.
So spending time making it work in IE has improved my code.
Is that definitely the case? Chrome, Firefox, and Safari all return a value for the innerHTML property of an element in an SVG document.
This W3C spec specifically mentions XML documents in addition to HTML documents. And as I understand it, embedded SVG elements also inherit from the Element interface, which includes innerHTML.
IE11 might also be correct, following an older spec, but I don't think you can jump to the conclusion that Chrome is wrong just because the property is called innerHTML.
I assumed that innerHTML must have been wrong because textContent works in all the browsers I have tried, whereas innerHTML doesn't. A cursory search for textContent vs innerHTML seemed to suggest textContent was the correct way.
It looks like it isn't a simple case of IE11 (I haven't had a chance to test on 9 & 10 yet) being correct and the others being incorrect. Thanks for the info.
Should we address the elephant in the room? For those of us without a CS degree, Flash was easily the first choice; IE had lag problems whenever there were more than three layers of <DIV>s around. Yes, IE had lots of cool capabilities, but it was rendered largely impractical. Even Adobe Flex was about to take over the "business app" world.
On the Microsoft side, .NET happened and Silverlight happened.
But ultimately, the iPhone happened. 1-charge per day battery phones happened.
BTW the article didn't mention <IMG DYNSRC> and background MIDI music support.
The implementations of them seem to be totally inconsistent (sometimes weird nonstandard CSS syntax, weird meta tags, ".htc" files, etc.), and very IE-specific, so it's almost impossible for other browsers to implement them.
This is the real reason why they cranked out all these weird features: to vendor-lock people into IE.
Not sure if the PlayStation browser could do this either, but it would be nice, since consoles are more locked down. They do seem to be opening up: Fortnite, I believe, is the first cross-platform game where your PlayStation and Xbox friends can play together. Then again, if you created something like a virtual world where dynamic content is allowed, the console makers might not be too thrilled about that, which is why I really like the idea of being able to publish console games as just a web app directly.
I also think Microsoft is more open when it comes to consoles. For example, you can go to Walmart, buy an Xbox, and turn it into a devkit, while I believe the others make you buy expensive devkit hardware that isn't the same as the console that already shipped - maybe this is because of Microsoft's PC background. So from my understanding it's easier to publish to the Xbox if you're making a native game compared to the other consoles: you can get started faster, though you still need approval to ship. With the PlayStation, I think you have to spend a lot of money just to license the tools before you even write the first line of code.
But, why did it fail?
Of course, as the article says, one reason was the Ballmer-era tsunami of bureaucratic confusion that inundated Microsoft and stymied the release of the Windows versions that carried IE.
Another was security. Cybercreeps love IE. Drive-by malware? IE. "Internet Exploder."
A third was the cost of compatibility. It was necessary for web developers, and later web app developers, to develop and test once on all the browsers and then again on each version of IE. It didn't help that IE came bundled with Windows: large-org IT managers often forced their users to use an atrocity like IE6 years after it had been superseded. This bogus standardization shackled a ball and chain to third-party developers.
A fourth was, paradoxically, the whole ActiveX Control subsystem. Apartment threading, anyone? Monikers, anyone? It was just barely good enough that DICOM and other high-end imaging systems could use it. That took away incentives to get <canvas>-like stuff working well.
Other companies have done similar things. DECNet, GM's MAP/TOP. Apollo Token Ring. SysV vs. BSD. But none of those things hobbled an industry quite like IE.
Trebuchets are cool tech too. But imagine if every UPS truck had to carry one to place packages on peoples' doorsteps.
>The other reason could have been a lack of platforms to spread knowledge to the masses. The internet was still in its infancy, so there was no MDN ...
Really incredible demos, too. You can see the URL in some of the demos; it looks like the author stood up a VM and wrote many of the demos. (Recent Star Wars trailers in Windows XP?) Ah, later there's an Internet Archive link to a M$-published VM image!
> You think Internet Explorer could not animate stuff? Not entirely true. Because, back in the days there was already SMIL, the Synchronized Multimedia Integration Language. SMIL is a markup language to describe multimedia presentations, defining markup for timing, layout, animations, visual transitions, and media embedding. While Microsoft was heavily involved in the creation of this new W3C standard, they ultimately decided against implementing it in Internet Explorer.
This brings back bad memories; I recall being taught SMIL briefly in a web-dev class in college. I think Mozilla implemented it. IIRC, you could declaratively animate SVG via XML-like tags, rather than JS or CSS. I didn't know it could access DOM/HTML, or play audio/video!
The implementation of the currentScript() example using `i` without declaring it made me panic for a moment. (Thank god JS doesn't let that work; I had to double-check in a console quickly, though: "surely `undefined`++ won't convert anything to a number".)
You're right, it converts it to Not-A-Number. undefined++ becomes NaN.
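A quick console check confirms it; the `++` operator coerces its operand with ToNumber, and ToNumber(undefined) is NaN:

```javascript
// Incrementing a declared-but-unassigned variable yields NaN,
// because ++ coerces undefined to a number first: NaN + 1 is still NaN.
let i;                        // value is undefined
i++;                          // i = ToNumber(undefined) + 1 = NaN
console.log(i);               // → NaN
console.log(Number.isNaN(i)); // → true
```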
Invariably the cast in stone designs were PDF drawings from Photoshop where the art was in second guessing what the designer was thinking of and where they stole their influences from.
You could not implement a table in a cool way on MS IE and in a more boring way on the other browsers; the knowledge just wasn't there, nor the space to experiment.
Microsoft was already somewhat invested in Chromium tech before starting, so the skill and knowledge were there.
I would love to read an official interview though.
Additionally, pure-Rust dependencies (which are the ultimate goal of the effort) are all statically linked, and different versions of the same library can be used simultaneously without symbol-renaming voodoo, etc., solving all sorts of the typical DLL-hell issues associated with adopting a huge foreign component into a project.
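A rough sketch of what that looks like in practice (the crate names here are hypothetical): Cargo will happily resolve and statically link two semver-incompatible versions of the same crate side by side, since symbols are mangled per crate version.

```toml
# Hypothetical Cargo.toml: our crate uses rand 0.8 directly, while a
# dependency (old_helper, made up for illustration) pins rand 0.7.
# Cargo links BOTH versions into the binary -- no DLL hell, no
# symbol clashes.
[dependencies]
rand = "0.8"
old_helper = "1.0"   # transitively depends on rand = "0.7"
```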
But Microsoft needs to release New Edge today. And today, Gecko is not made from a bunch of reusable, standalone Rust libraries. It's built from a couple of good libraries, plus a lot of ugly, tangled legacy. Some day, Gecko might be the best choice for software that wants to embed web technologies, but that's not today.
But I still use IE11, ironically, because I like to develop quick HTA tools for the enterprise in HTML and TypeScript, powered by Excel documents or Access databases via COM, instead of having to download Electron and whatnot, which I don't need since I'm only developing for Windows.
It's unfortunate that everybody lost nearly 10 years because Microsoft stopped taking web technologies seriously in order to focus on Silverlight and whatnot, which they later abandoned anyway.
I need to find an alternative to HTA, though, that still supports COM, since eventually Windows will stop supporting HTA apps.
Yes it is. It is also unfortunate that some people refuse to upgrade their browsers and engineers have to jump through hoops in order to still support them... ;)
Who still uses WMV?
Around 2012, when all 3 major browsers had similar market share, is what we want: everyone can try to make extensions, but no one can ram them down our throats.
At the end of the day, the default option for most users would naturally be to stick to Edge, but they instead have been installing Chrome. Why? Because Chrome was better. Now Edge is like Chrome.
There's much less of a reason for users to install Chrome as a result. Microsoft are likely to regain some marketshare with this approach. If Bing is good enough, that may also mean billions in revenue.
I've actually noticed it is better in some kinds of searches, and worse in others. Thanks to Edge, it's my default and to be honest, I don't even realize it's not Google most days.
2. Mozilla is not a competitor but they would get just as much free work. (Or MS sponsors Firefox for the Bing plug, and they get that side benefit.)
Though I think there are real innovations discussed in the article, the portrayal of Microsoft as an altruistic actor trying to get others to adopt their tech is downright dishonest given what we know about their internal discussions of their motivations and strategy at the time.
I found the article really interesting, but I could do without the rose colored glasses.
Well, if you only look at the web and exclude the bulk of enterprise software in the '00s.
> The other reason could have been a lack of platforms to spread knowledge to the masses. The internet was still in its infancy, so there was no MDN, no Smashing Magazine, no Codepen, no Hackernoon, no Dev.to and almost no personal blogs with articles on these things. Except Webmonkey.
(There is, however, a quite legitimate argument that back then English fluency was a very significant barrier)
The main problem was that 1990s Microsoft was all about "cutting off their air supply". They threw huge amounts of money into building out tons of features, exclusive bundling agreements and promotions with various other companies, etc., but they were not nearly as lavish in spending on QA or developing web standards, even before the collapse of Netscape led them to pull most of the IE team away. If you tried to use most of the features listed, they often had performance issues, odd quirks and limitations, or even crashing bugs which made them hard to use in a production project, and many of those bugs took years to be fixed or never were.
In many cases — XHR being perhaps the best example — going through the process of cleaning the spec up to the point where another browser would implement it might have led to the modern era dawning a decade earlier, with a better-than-weekend-hackathon-grade implementation in IE. I look at that era of Microsoft as a tragedy of management where they had some very smart people but no strategy other than "prevent competition".
While committees often make crazy decisions this is the other end of the spectrum.
Similarly, orders from the Office team for extensions.
The stories behind these things would make even better reading.
...no? If I want two side by side divs, I could give them width: 50%. But with regular box-sizing, if I apply a border to those boxes they'll stack vertically. That's pretty dumb.
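A minimal sketch of the difference (the `.half` class name is made up):

```css
/* Default content-box: width: 50% measures only the content, so a
   1px border makes each div wider than half the row and they wrap. */
/* border-box: the border (and padding) count toward the declared
   50%, so two bordered divs still fit side by side. */
.half {
  box-sizing: border-box;
  width: 50%;
  float: left;
  border: 1px solid black;
}
```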
You only have N bandwidth available to the house. Cable companies have a tradition of providing content; content can be very complex, and very rarely was content ever coming back from the house. So, instead of doing N/2 upload, N/2 download, they did, let's say, N/4 upload, 3N/4 download. Why haven't they fixed it? Legacy systems. We've all been there.
Using chromium as a base, browsers will have to differentiate by offering a better product.
“Better” is less likely to be better performance, compatibility, or accessibility.
By using much of the same code as another browser implementer, any browser vendor [hint hint] that still makes their own engine could reduce the resources they put on the foundations and web platform and put more of it on the product itself.
Perhaps we’ll have more groundbreaking innovations to move browsers forward. The last few big ones: multi-process architecture, tabs.
In effect, by using the same foundations, the browser wars could in fact be reignited and the users could be the winners.
On the web platform side, i.e., the stuff you see on MDN and in W3C specs, using the same "base" doesn't mean the browsers won't have different implementations of future APIs if the vendors' opinions diverge strongly. Case in point: Chromium used to use WebKit as its renderer and now uses Blink, a fork of WebKit.
In other words, throw out all the advantages that come from using Rust in the browser in favor of a codebase that has a policy forbidding any use of that language. No thanks.
Furthermore, I have to note the irony that you say nobody else should implement their own engine when your team is the one that forked Blink from WebKit in the first place.