Um, why would they move when they already have an engine? Sure, you can argue they should have moved to WebKit or Gecko or even Opera's rendering engine years ago, but Trident has been improved significantly over the years. Don't forget Edge is based on Trident (so it's not entirely new, as the post implies; it provided a good base to rip legacy support out of and to improve other things).
Beyond that, though, the post seemed fine; it just kinda felt like the author had a bias against Microsoft from that first statement where he/she questioned their rendering-engine decisions. I'm also not entirely convinced they tuned explicitly for the benchmarks called out; sure, they may have, but I don't know that I believe they necessarily would have had to.
And now that Edge is close to release, we see that it already surpasses WebKit in HTML5 support:
For a long time, Microsoft made a strategic decision to ignore the web, and that hurt a lot of things. And Microsoft has privileged archaic intranet sites over progress on the web ever since, which hasn't been great either (unless you run one of those intranet sites). But the idea that Microsoft doesn't have the resources to maintain a browser engine is ludicrous -- they're Microsoft. And they're already ahead of one of the suggested alternatives, WebKit. (And WebKit is probably the one of the three that's least tied to a specific browser implementation, so it's the one they would most plausibly have picked in place of their Trident revamp.)
Yeah obviously they didn't start from scratch, the new engine is just a forked Trident where they removed all the legacy cruft.
'It seems like Microsoft has been targeting their optimization effort for the competitor benchmarks in order to show impressive results for their new product. When it comes to more intensive and complex HTML5 benchmarks they are still miles behind the competition.'
Reasonable explanation to me.
The new tests don't seem particularly hand-picked, given that they are among the top results I get when I google "browser speed test"; arguably a sign that they aren't overly obscure.
As an aside, all the benchmarks are made by parties other than Microsoft, so they should count as third-party. And their being third-party is typically seen as a virtue, due to the tendency of vendors to overtune for their own benchmarks.
So while this was obviously a bit of hostile benchmarking, it doesn't seem like there was any need for special effort to get a discrediting result; simply benchmarking with other suites proved enough. The fact that the author is clearly hostile to Microsoft does not in itself disprove the results presented, and is not grounds for dismissing them.
Neither of these accusations is in any way constructive. I'd be much more interested in real-world examples where one browser or another is noticeably slow.
I'm working on a web app that uses typed arrays heavily.
Chrome runs my test case in 2.7 seconds, Firefox in 3.2, and IE Edge in 23 seconds... (IE 11 in 30+)
Now, to be fair, I developed "for" Firefox, checking that it worked in IE and Chrome the whole way, but that is a pretty large performance difference.
Plus I've always hated benchmarks. They are so easy to "game" without real improvements in actual code. They are great for development and regression testing, but when used to compare engines I've found they are nearly useless.
All that being said, I applaud the IE team for really sticking to their promise of improving the browser and it seems they are going to give Chrome and FF a run for their money soon.
I'd be interested to see how your test case performs on the Edge browser.
I guess I just have a tendency to throw IE on the front of it (and I've also been referring to it, along with the other IE versions, as IE).
So the 23 seconds was Edge, and the 30+ was IE 11.
Is the current term for it just "Edge"?
It is confusing when you add IE because IE 11 has an edge mode (essentially their newest rendering mode). IE 11's edge mode and the Edge browser share a handful of components, but they're selling Edge as a brand new browser with a lot of cruft removed (backwards compatibility) and other improvements which IE won't ever get (11 is the last version).
I think you are doing something wrong. I have a light app and a very heavy SPA (written really badly, with almost 5MB of code, D3 + Highcharts). Still, both apps load in less than 4 seconds in IE 11, FF, Chrome and Safari.
It does some fairly heavy image manipulation in web workers using typed arrays. The app loads a little faster in IE (the code comes in just under 1MB), but the runtime of my few test cases is orders of magnitude slower.
I do, and the page load for a big table is well over 10s even on Chrome/FF, and far worse on IE 11 (never tested Edge).
Of course a simple fix is ReactJS or natively rendering.
However, since only 6 tables are slow and they barely change, we won't fix that yet.
Issue #2: "Personally I have found the Peacekeeper results to be a reliable measurement of web browsers performance." Is there data to back up this claim?
* It accidentally benchmarks setTimeout clamping. https://bugzilla.mozilla.org/show_bug.cgi?id=610077 is a dependency of Peacekeeper. https://bugzilla.mozilla.org/show_bug.cgi?id=608648 is an example in which setTimeout clamping affects the score a lot.
* Its benchmark of array.splice() is extremely strange: https://bugzilla.mozilla.org/show_bug.cgi?id=592786
* Its layout benchmarks do not really stress layout and instead either stress basic painting operations or DOM accessors. Most pages do not sit there calling style.top in a loop over and over.
* It sets MozTransform only in Firefox without setting values in Chrome: https://bugzilla.mozilla.org/show_bug.cgi?id=920659
And so on.
There is no reason to think that Peacekeeper's JS benchmarks are particularly better than V8's; in fact, they're probably worse, due to the proliferation of microbenchmarks. You'd get about the same effect by going to jsperf.com and clicking around.
Octane and SunSpider give more weight to the rest.
It'd be very impressive if Edge was faster all round.
Honestly, any test between a Microsoft product and a competitor, run on Microsoft's operating system, has to be taken with a grain of salt. What's interesting here is the degree of trust given to a vendor that has tried to rig/break even hardware to lock out competition: http://antitrust.slated.org/www.iowaconsumercase.org/011607/...
And yes, bg still has a lot of swing at microsoft: http://www.businessinsider.com/bill-gates-is-back-at-microso...
Worse, below there are people who simultaneously believe a vendor with a long track record of a "rig the game" overarching corporate strategy, and diss the blog author for doubting the vendor's claims, even to the extent of accusing the author, who checked only two HTML5 benchmarks, of going on a witch hunt to find benchmarks that disadvantaged the vendor.
> "If seems unfortunate if we do this work and get our partners to do the work and the result is that Linux works great without having to do the work."
Maybe I'm being too naive, but this sounds more like he is concerned about giving away the work rather than locking people in.
I would prefer that these kinds of initiatives be open source, but in my opinion each company has the right to release the technology it develops as it pleases. In my mind, this is the same case as Apple with Thunderbolt, and I think it's fair that if Apple decides to keep its technology private, it should be able to do so.
But SunSpider is a really bad benchmark anyway. All benchmarks are bad and suffer real problems, but SunSpider more than most.
I have to doubt his judgement. Microsoft is a huge software company, the biggest in the world; much bigger than Apple or Google.
Not only can they do that, they also need to be in control of their stack.
But the author is upset because it isn't faster on the specific benchmarks he prefers. If it's slower at Peacekeeper, does that specifically mean the browser is now slow?
And then weirdly Mint/Linux comes into the argument as though it has any relevance to Edge. Nobody is switching operating systems because of browser microbenchmarks.
The post first checks whether Microsoft's claims hold. His tests show that Edge is faster on SunSpider/Octane, but not by the margin that Microsoft suggested. The article also points out that old hardware is used as the test bench, and that the hardware will make a difference.
Then the article expands the scope to other benchmarks, and there is a (subjective) opinion that Peacekeeper's results hold up in the real world.
I can't see where the author is upset. Rather, I see a typical geek hobbyist experiment and the results posted to a FORUM community, not to a professional or scientific publication.
From arewefastyet.com they seem to be doing OK (even if they keep testing some really slow Chrome version for some reason), so it seems strange they are being left out.
If Edge can be to Windows 10 what Safari is to OS X, I’ll try to do what I can to move my browser workflow to Edge as much as possible.
IE11 has pretty solid web standards support:
And Edge promises a lot more.
> What to make of these results? It seems like Microsoft has been targeting their optimization effort for the competitor benchmarks in order to show impressive results for their new product. When it comes to more intensive and complex HTML5 benchmarks they are still miles behind the competition.
"Well these tests don't meet my expectations so lets keep testing using other benchmarks until I'm validated".