I work on SpiderMonkey and I'm super excited about this news. All JS engines have added more-or-less similar performance optimizations but often implemented differently and I'm really interested to see what the Chakra team did. I'd be happy to write a blog post on it next month, if people are interested.
To get into the gory details, google Vyacheslav Egorov - he's a V8 engineer with a number of talks on YouTube, presentation slides, and blog posts on V8 internals. He also maintains a tool called IRHydra that lets you examine functions after they've been compiled into V8's internal representation.
I just wanted to point out a minor detail: I am an ex-V8 engineer - I have not been working on V8 since 2012.
It's actually fairly interesting. These huge mega companies are just support teams for a few programmers. All the corporate vision and endless strategies mean nothing compared to one of those programmers having a good or bad day in their work. You honestly get the feeling something somewhere is very broken when you think about it.
Strange thing to think. As if the core mission of the company, the motivations of the management, strategic decisions to hire people and structure projects etc, the funding and priority they give to specific products etc, don't determine and affect the end product!
Put that way, it's as if a great browser engine could even come out of some accounting software house, if only the right programmers happened to work there.
Only things like the Mythical Man Month save Mozilla a bit here.
Now I know that, I'm actually looking forward to playing with the engine more than before - a concentrated braintrust of a few skilled engineers is always more ideal than a sprawling mass of seagulls (to borrow ideology from Finding Nemo :P).
The flip side, of course, is that all of you have to keep your game up to quite a high degree or you're out. Respect. (I think what the Edge team as a whole has managed is really amazing - I mean, a brand new browser...)
[Also... I have to ask... I've been wondering since before this announcement: is it an even remotely vague possibility that I'll ever be able to natively run EdgeHTML on FreeBSD or Linux one day in the distant future, source or binary? :D]
I simply can't take the claim that it has better coverage at face value, because I only just finished testing a week or so ago, and the JS code we deploy, which works on every platform from Android through Linux, Mac, and iOS to Windows, is still mostly broken on Edge and doesn't even begin to work in IE. So we will still be recommending that users not use Edge or IE at this time. That recommendation isn't one I make happily, but Windows machines make up such an insignificant part of the market now that it's an easy business decision.
The question is whether it's also the primary product of the COMPANY the development team belongs to.
It has been losing ground for a decade or so, and without it there is no Mozilla. I see some crazy initiatives obviously doomed to fail (like the mobile OS), which are worrying.
Coming up with a Servo-based browser that's both more secure thanks to Rust AND faster thanks to parallel processing, plus a better native look-and-feel story (at least on the Mac), would be a good way to catch up with the others. And better developer tools, as Chrome has eaten up that influential segment (web devs).
Mozilla used to be able to sit around and say "we absolutely refuse to do certain things, and we want to spend our time figuring out how to make the web an interesting place for power users and developers". I respected that Mozilla. It had a lot of clout in the market and used that clout to fight against DRM on behalf of all users while spending their resources building a super-extensible platform (which I think is a better description of Firefox's crazy plug-in oriented nature).
The post-Chrome reality is that Firefox no longer has an automatic dominating position in the "alternative" (non-IE) browser space, and so they have had to start caving to loud user demand and start fighting for the end-user market segment. They don't have the ability to fight against DRM anymore, so they have been forced to include Adobe DRM by default. They don't have the ability to waste a lot of time on power users anymore, so they are dropping all the complex-to-maintain parts of their platform spec and have started dumbing down the UI.
But of course, Google isn't going to want to do that, because Google is a company with a strategic vision that happens to benefit from owning the web browser and being able to unilaterally make major decisions and perform weird experiments and crazy product integrations through it, which is all the easier for them to bootstrap as they can use their position as "the place almost all people both search and advertise" to push Chrome on people. This means that Chrome has no reason to collaborate with anyone, and even the one alliance they sort of had (with Apple on WebKit) they broke off when they decided they didn't have enough unilateral control: rather than collaborate as part of a community, Google just wants to own the product.
They also happen to be the primary customer of Firefox, so Mozilla is being forced to operate on smaller budgets. Note that this is the usual effect of competition and should be the obvious one: the idea of someone "stepping up their game" makes no sense when you are now operating on smaller margins (as competition means you can't demand as much share of the profit on any particular transaction) of a smaller market (as competition means that some customers will be using your competition). You only get to "step up your game" momentarily, often towards frustrating ends (such as giving up the DRM battle or trying to dumb down your UI as fast as possible), until your resources start to wither. (Yes: in a small initial market, competition can cause greater customer awareness leading to more pie for everyone; but that obviously isn't the case here: that is only true near the beginning of a new concept, when no one even believes the thing you are doing is relevant or valuable.)
In this case, it is even worse, as the primary customer to Mozilla's product was Google... and so they are essentially screwed in that negotiation. Firefox has had to switch to Yahoo as the default search engine and start making content deals to bundle marketing and software with their product, something they were morally opposed to doing in the past but have been forced into doing due to competition. This also doesn't come cheap with respect to executive time: rather than working out their product and platform vision, they are having to spend time negotiating and having painful conversations about how to keep their company from being destroyed and what morals they are willing to compromise for how long in order to maintain that fight. I don't particularly love Mozilla (as someone who has been paying attention to the open Internet since the beginning, I frankly found Netscape's business model of selling web browsers bundled with ISP contracts terrifying), but I have great sympathy for them these last few years, and absolutely do not see Chrome as being a positive force for anything at all in this ecosystem, except maybe security :/.
Some good things have come out of Mozilla-the-organisation - I very much hope that Rust/Servo is a success. But when it actually comes to developing an open-source browser, the incentives of a donation-funded foundation like Mozilla are all wrong.
I don't know what the right way to fund open-source development is. Dual licensing has its share of failures. So does trying to make it a direct business. So do research grants. Partly it's just the tragedy of the commons. In my darkest days I wonder if open source is fundamentally doomed because it simply can't make the monetary incentives line up with good engineering practice.
This is full of TILs, and mind-bogglingly enlightening to read.
In many ways the Web feels like exactly the same place as it was 16 years ago (especially to read about the WSP xD) but things have gotten significantly better for the user and standards of late.
TIL that NS/Mozilla was really the thorn in everyone's side on the tech front, but M$ wore the blame for the Web's early history because of the antitrust cases... that's insane. Absolutely insane.......
Are there any binary builds of Mariner, newlayout and NGLayout I can track down?
With the first few versions they were playing catch-up, but if I recall correctly, IE 4 and IE 5 actually had more features and better standards compliance than the current Netscape versions, as did IE 6 at its launch.
"IE hell" started once Microsoft won the race.
For better or for worse, I think the case had a lot to do with Internet Explorer's long pause. I often wonder how Microsoft's browser, its Internet services, and the company as a whole would be today had that case not been undertaken.
The way I see it, the only trouble with the case was that it came too late. By that time IE had already won.
Are you suggesting some other consequence of this case, leading to stagnation of IE development? I can't see any...
I think that you could see a lot of optimistic excitement from Microsoft in the idea of merging IE into Windows just prior to the antitrust case. There were experiments with using the HTML renderer everywhere in the OS, from widgets (the "Active Desktop" thing) to applications (HTML Help, even the HTML usage of Windows (now File) Explorer)... Admittedly, today we have mixed opinions of such experiments (and their often poor performance), but it is hard not to wonder what could have happened had Microsoft been less afraid of the antitrust repercussions and invested fully in that combined Windows/IE rendering platform...
We're finally starting to see HTML/JS/CSS "everywhere" application toolkits (it's a vertical slice in the "Universal Windows Platform", and then there are efforts like Electron and Cordova), and it's interesting to think that maybe some of that would have happened sooner in a world without that antitrust lawsuit. (Certainly the counter is that it would have been less standardized, but I don't think that is necessarily the case either: it would largely have been different standards, though.)
I can't directly see any either because I don't know anyone at Microsoft, least of all on the IE team. But I think the coincident pause in IE's evolution is undeniable. As for a cause and effect link, that's just conjecture. To me, it seems likely that there were either formal business decisions that reduced the effort expended on IE or at least a psychological block that had much the same effect.
Look, I am a Mozilla partisan through and through. I've been using Netscape, then Mozilla, then Firefox for years. So in many ways, I applauded the outcome of the United States vs Microsoft case. But to consider the case as one of only upsides and no downsides seems a bit narrow minded.
They've had a separate research department for over 20 years or so, and if memory serves me well, quite a few innovative things came (and still come) out of there. So I think "they" is too general in your phrase: there is innovation (also look at the recent Surface line, etc.), but you're right in saying that it doesn't always make it to the market, probably because the "they" you mean is some part of management which sees money flowing in and thinks "hey, enough cash cows, no need to think about the future". Or something like that. And lately this has turned around again.
But the facts you mention remain correct. They are not trying to bring new and innovative stuff into people's lives. At least not when they are the leader.
It's hard to say what this means for the higher levels, but many people think that Nadella's appointment says that Microsoft does want to change.
And luckily for US, Microsoft is and will be the underdog for a while at least, in most markets they are in: web search (after Google), browsers (after Chrome), mobile OSs (after Android and iOS), server-side OSs (after Linux), cloud stacks (after AWS).
I am having a hard time understanding why "luckily for US"?
Disclaimer: I am not a US citizen, nor am I in the US.
The capital letters were just for emphasis.
Freudian slip :)
And another, very important thing at that time: IE4/5 was much faster and more lightweight than Netscape. I recall using IE all the time just because it took NN forever to start up. The only contender on the speed front was Opera.
It's almost sad that Edge is not cross-platform.
This may change with the Docker support we are seeing promised. Powershell is definitely a workable remote shell. But it's not the case that this is sufficient today.
It's a lot less common but I wouldn't call it `very rare`
I had a friend go there, came back talking about hallway discussions of Posh on Redhat.
Now, Node is probably the most reliable cross platform language host. Write something for node, nearly anything, and if it works on your OS it'll probably work elsewhere too.
There's no reason something similar couldn't happen with Chakra, especially now that it's open source
That'd be the coolest thing since it'd encourage developers that shy away from MS technology to dive into it.
BUT, once you have the JS engine over, I could see them porting over the rest. But anyway, you could still run a headless browser (or just Chakra) and run tests against it.
Downloading and installing IE browser on OSX is a much better solution. And this way, you have a browser that people can use casually as well. Since it has awesome ES6 support, that makes it even better for JS Devs.
In all seriousness I'm pretty sure everyone would say POSIX-y systems.
I use Chrome as my primary browser on Linux and OS X. Tried to use Edge as primary on 10, but it's a freaking HOG. I'm using Chrome as primary on 10 now, too.
Can't wait to see what improvements come.
The Edge, Chrome, Firefox and WebKit teams are all working on ES6 compatibility, and releasing new versions pretty frequently. The Edge team are in the lead because they've implemented more features, faster.
Chrome/V8 was actually quite a way behind for a while, although they've caught up quite a bit recently. I believe the situation with V8 within Google was a bit messy for a while, as the original developers (Lars Bak, etc.) were more interested in championing Dart than implementing the latest ES6 features. Eventually, Google had to create a new team, based in Munich, to work on V8.
Well, TraceMonkey can at least legitimately argue to not be inspired by it: it was publicly announced prior to Chrome, and work obviously started on it before that.
See 5:30 into video (on ES6): https://www.youtube.com/watch?v=PSGEjv3Tqo0
I have been doing a fair amount of JS work in an app using Angular this year. I don't use much inheritance of any kind, mostly FP techniques and "objects" (associative arrays) as aggregates.
If you use "closure" variables instead of "this", it's pretty easy to graft functions from one aggregate to another for code reuse.
Source: https://channel9.msdn.com/Blogs/Charles/SPLASH-2011-Brendan-... [10:04]
The Code editor they released is built on Electron (formerly Atom Shell) and seems more performant than Atom in the few experiences I have had switching between them.
If they can continue to gain trust in the community and improve their UI, they could become great again. You can tell they have thought about how to do this. A few years ago now, I remember the guys from the IE team did an AMA about the new Explorer (IIRC it was 10). They talked about cross-browser compatibility and wanted developer feedback.
I am not sure if they are actually an "underdog" but I find myself feeling like that, and hoping they can get it together.
Sounds like a lot of work (not to say it's not possible or interesting). Our bindings code is pretty complicated already. It depends highly on how Chakra deals with things internally -- if it's similar to SpiderMonkey, I'd be interested in having a look. Might become a fun project :)
Without a reference open source DOM implementation (which we have for SM -- Firefox's DOM), we'd also need good docs for the Chakra API. No idea if that exists.
(I'm guessing this would be at least the millionth time they hear about it :) )
I think the person who wrote this particular page simply didn't specify a friendly name, which is why it's falling back to the canonical name.
Will have a look and compare with the spidermonkey bindings when I get time.
Rendering engines (all in C++ except Servo):
- Blink (Chrome/Opera)
- Trident/Spartan (IE/Edge)
- WebKit (Safari)

JS engines:
- SpiderMonkey (used by Servo and Firefox)
- Chakra (used by Trident/Edge)
- V8 (used by Blink, also by Node)
Servo already uses a C++ JS engine (SpiderMonkey) because a Rust JS engine is a huge undertaking in itself.
It does make sense to try out Chakra or V8 for Servo. It's probably a lot of work, though. And there may not be a net gain out of that (We have access to in-house spidermonkey know-how, none of that for the others).
Here's a `git grep jsapi` on that folder: https://manishearth.pastebin.mozilla.org/8853940. Haven't removed duplicates or formatted it, so it's probably much shorter than it looks.
Please keep me posted about this!
Stuff like JITs can't really reap Rust's safety+performance benefits, and overall the same might be said of a JS interpreter, with all the garbage collection and stuff. The other thing is that SpiderMonkey/Chakra/V8 have had years of optimization and tweaks; starting from a clean slate would be _very_ hard. With Servo's small team this is an impossible task, but with a larger, dedicated team, there's a chance. shrugs
Though it's possible they would still be safer than the C++ counterparts, which is still good. The question then becomes: how much is that increase in safety, and can it justify a whole rewrite?
Sometime down the line, a NAN-spidermonkey or NAN-chakra project might become feasible.
On paper. In reality, HELL NO. I maintain node native addons, and the thing is, NAN is great, but they simply cannot foresee what will break next in v8.
Once something breaks, they provide a node-version-independent feature-shim, but every time there's a new version of Node, I DREAD the work I'm going to have to do to maintain my addons.
It's simply less work to cross compile C++ to JS w/ Emscripten or write it in JS (or higher level language and compile to JS) in the first place.
It's your loss.
They hope to merge it back upstream, which seems possible given the recent history of the Node.js project.
Of course, it is very very unlikely that npm modules with native code work in Node.js/Chakra just yet.
I would hope they follow the same path as the jxcore+SpiderMonkey team so I don't have to worry about which engine my npm modules support.
I'm asking because I'm wondering if Node.js's evented approach is the only way to do things.
Also, it's not (any longer) considered good practice to have languages that allow shared mutable memory, since that makes software unreliable, so it's not really a good idea.
Without arbitrary memory sharing, multi-threading is already supported with web workers.
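To make this concrete, here's a rough sketch of the message-passing worker model (file names and values are made up for illustration):

```javascript
// main.js -- spawn a worker and communicate purely by message passing.
const worker = new Worker('worker.js');

worker.onmessage = function (event) {
  console.log('sum computed off the main thread:', event.data);
};

// The array is copied (structured clone) or transferred, never shared mutably.
worker.postMessage([1, 2, 3, 4, 5]);

// worker.js -- runs on its own thread with its own memory.
onmessage = function (event) {
  const sum = event.data.reduce((a, b) => a + b, 0);
  postMessage(sum);
};
```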
It gives many of the benefits of sharing memory without all of the gotchas
You know what happens with shared typed arrays? Shared Uint8 arrays.
You know what happens with shared Uint8 arrays? Multiple workers using JSON.parse.
Do you want ants? Because this is how we get ants. :(
Also, assuming what you say is true, what's the problem? It's much easier for the JS implementations to synchronize things only with a specific type of typed arrays than sharing all kinds of GC-managed data structures.
In the last few years SpiderMonkey (Firefox's JS engine) has dropped support for this more and more, and these days you can't anymore. But that's a consequence of the engine implementation, and not the language.
The reason for this is that as soon as you have separate threads sharing memory, it introduces non-determinism into the mix. When developers use synchronization mechanisms incorrectly, or not at all, this can lead to deadlock between threads.
This must not be allowed to occur on the main thread shared with the rendering engine.
So keep your eyes out for SharedArrayBuffer.
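For the curious, a minimal sketch of how that proposal is shaped (this is based on the draft as I understand it; the final API may differ):

```javascript
// main.js -- allocate shared memory and hand it to a worker.
const sab = new SharedArrayBuffer(1024);   // raw bytes, shared rather than copied
const shared = new Int32Array(sab);        // typed-array view over the shared buffer

const worker = new Worker('worker.js');
worker.postMessage(sab);                   // both sides now see the same memory

// Atomics provides race-free reads/writes on the shared view.
Atomics.store(shared, 0, 42);

// worker.js
onmessage = function (event) {
  const view = new Int32Array(event.data);
  console.log(Atomics.load(view, 0));      // 42, with no copying involved
};
```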
The event loop is very important to the existing language semantics and isn't going anywhere.
It gives the benefits of having multiple competing implementations but unlike the browser platform we (the developers) get to choose which one we use.
They're embracing startups since they need them and are changing their business models to cater better to smaller startups than the behemoth enterprises with Office 365, Azure, etc.
The hobbyists would use it as is but no company would touch it without support anyway.
Sounds kind of strange to say it out loud, though. ;-)
Still, given where Microsoft is coming from and given Russinovich's position, this is significant.
Have gone back to Sublime 3 for now, but if the VSCode devs ever decide to support tabs I'd switch.
I sometimes have crazy thoughts about the world ending.
To me it seems the past ~5 years of JS engine development have been focused on what kind of optimizations to implement, rather than on parsing or implementing core library features.
There are many JS engines which are extremely small compared to the big guys used in browsers. Most of them tend to grab a subset of the language and implement that.
For example, the creators of nginx created their own specialized engine called nginScript that runs JS for their stuff https://www.nginx.com/resources/wiki/nginScript/
From the copyright headers he worked on it starting in 2009 and had the first public release in 2014 (actually 2010, see comment below). He certainly worked on other stuff in between too. This is also not a toy implementation. While I have no idea how it compares to state of the art engines, it does feature a whole bunch of optimizations and a JIT compiler.
I personally learned with SICP (but this book isn't just about interpreters; it will make you a better programmer and blow your mind in so many different ways. I wouldn't say it's for experts only, but it isn't the kind of book that beginners would get excited about. However, if someone is serious about improving as a developer, I can't think of a better book):
Finally, "(How to Write a (Lisp) Interpreter (in Python))" by Norvig is a great read if you're getting started with interpreters: http://norvig.com/lispy.html
* The reason the last two links are Lisp-related is that writing an interpreter in Lisp is really straightforward (since the code is the AST). Similarly, if you wanted to learn about memory management, a language like assembler or C might be better suited than, say, Python.
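To give a flavour of how little machinery a toy interpreter needs when the code already is the AST, here's a rough sketch in JavaScript using nested arrays as the program (purely illustrative, not taken from any of the links above):

```javascript
// Evaluate a tiny Lisp-like language whose "AST" is just nested arrays,
// e.g. ['+', 1, ['*', 2, 3]] evaluates to 7.
const builtins = {
  '+': (a, b) => a + b,
  '*': (a, b) => a * b,
};

function evaluate(expr, env) {
  if (typeof expr === 'number') return expr;      // literal
  if (typeof expr === 'string') return env[expr]; // variable lookup
  const [op, ...args] = expr;                     // a "call" form
  const fn = builtins[op];
  return fn(...args.map((a) => evaluate(a, env)));
}

console.log(evaluate(['+', 1, ['*', 2, 'x']], { x: 3 })); // prints 7
```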
It could be a good example of a small JS interpreter. I remember the author said it was 95% compatible with real JS. That was a couple of years ago.
I didn't know Node.js could use anything but v8. This is also very nice.
I've seen the benchmarks but in my experience it just... lags...
For example, on the kangax ES6 compatibility table: in Chrome, clicking a column takes less than a second; on Edge (I'm on an up-to-date Windows 10 machine, no preview stuff) the page loads faster than in Chrome, but it takes 3+ seconds to switch between columns.
Even some of the stuff I've written acts similarly, and I can't figure out why.
But for a more "pure" js example, I have an app which does some image processing in the browser using the canvas ImageData stuff and typed arrays.
For whatever reason my test case completes in about 3 seconds in chrome, 3 to 4 seconds in FF, 11 seconds in IE11, and 6 seconds in Edge.
I have a feeling the problem is that I'm optimizing for V8 because I know it the best, and I'd much rather not do that if possible.
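For anyone unfamiliar with the pattern, the kind of code I mean looks roughly like this (a simplified, made-up example, not my actual app):

```javascript
// Typical canvas ImageData processing: getImageData hands you a
// Uint8ClampedArray of RGBA bytes that you loop over directly.
function toGrayscale(canvas) {
  const ctx = canvas.getContext('2d');
  const image = ctx.getImageData(0, 0, canvas.width, canvas.height);
  const pixels = image.data; // Uint8ClampedArray, 4 bytes per pixel

  for (let i = 0; i < pixels.length; i += 4) {
    const gray = 0.299 * pixels[i] + 0.587 * pixels[i + 1] + 0.114 * pixels[i + 2];
    pixels[i] = pixels[i + 1] = pixels[i + 2] = gray; // leave alpha untouched
  }

  ctx.putImageData(image, 0, 0);
}
```

How well an engine handles this kind of tight typed-array loop seems to vary a lot, which is presumably where the timing differences come from.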
Think about it: Microsoft really wants Bing to succeed, and for that to happen, Edge has to succeed, so I'm sure they are putting their best minds behind it. It's a matter of time.
Was this a deliberate choice by Microsoft to steer people away from Google products, or is it something more benign than that?
I expect that if this were the problem, OP wouldn't be mentioning it.
When you attempt to use the right-click menu to copy and paste, you get a clearly-worded message box that tells you that you need to use the keyboard shortcuts, and which keyboard shortcuts to use.
Go make friends with someone who works on Safari, then take them out for drinks. You'll understand why I'm saying what I'm saying. :)
He tells me that Edge is often very quirky in very subtle ways. He's had to spend many, many hours diagnosing, reporting, and working around quirks in Edge. These quirks sometimes get fixed and sometimes do not, but are often not present in Internet Explorer.
That being said, since our JS is generated, it's purely ES5, as it still has to run on IE9. So maybe the weird cases are in the more modern JS parts we don't use.
Unless you mean "why write your own engine instead of forking V8", in which case the answer might be that if you have the time to do it, it's easier to become expert on something you created yourself rather than an existing large complex system. And while you're at it you can make sure the system you create is good at solving the problems you mean to solve, which the existing system may not be.
A concrete example of one of the design tradeoffs here: V8 doesn't have an interpreter mode, just several levels of JIT. Every single other browser JS engine does have an interpreter, because it turns out that for cold code (which is most code) that's both faster and less memory-intensive than a JIT. Not to mention allowing your engine to run on hardware for which you haven't created a JIT yet.
I'm waiting for the first ports to Linux or, hell, a native port of IE... given the trend, it's not unreasonable that MS will open source a load of stuff.