Microsoft Edge's JavaScript engine to go open-source (windows.com)
852 points by clarle on Dec 5, 2015 | 268 comments

Wait, Edge does better on ES6 coverage than both Chrome and Firefox? Microsoft has seriously stepped up its game, especially seeing as it's now neck and neck with Chrome on performance: http://venturebeat.com/2015/09/10/browser-benchmark-battle-s...

The next Firefox Nightly build should get 84% on that page, much closer to Edge than Firefox 44 (74%).

I work on SpiderMonkey and I'm super excited about this news. All JS engines have added more-or-less similar performance optimizations but often implemented differently and I'm really interested to see what the Chakra team did. I'd be happy to write a blog post on it next month, if people are interested.

People are definitely interested. Posts about engine architecture are rare and interesting.

Speaking as someone tuning a JS game, I focus on V8 over other engines because there are great articles around that explain in detail what v8 knows how to optimize, what causes functions to deopt, what kicks object property lookups into slow mode, etc. Articles explaining this for SpiderMonkey would be greatly appreciated!
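For anyone curious, here's a rough sketch of the kind of pattern those V8 articles cover (the names here are made up for illustration): keeping object shapes consistent so property access in hot functions stays on the fast path, versus creating same-named properties in different orders, which makes a hot function polymorphic.

```javascript
// Objects created with properties in the same order share a hidden class
// ("shape"), so property access in hot functions stays monomorphic.
function makeFast(x, y) {
  return { x: x, y: y };          // always x first, then y
}

// These objects can end up with different shapes: same properties,
// different insertion order. A function reading .x from both becomes
// polymorphic and can hit slower lookup paths.
function makeSlow(x, y, flip) {
  const o = {};
  if (flip) { o.y = y; o.x = x; } else { o.x = x; o.y = y; }
  return o;
}

function lengthSquared(p) {
  return p.x * p.x + p.y * p.y;  // hot path: fastest when p's shape is stable
}

const a = makeFast(3, 4);
const b = makeSlow(3, 4, true);
```

Both calls compute the same result, of course; the difference only shows up in how the JIT specializes the hot function.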

FWIW, people in Mozilla are working on a "JIT coach", which will tell you if performance critical sections of your code aren't getting JITed, and why. I believe this is almost ready for use, though I'm not sure when it will be presented.

Sounds analogous to the way the chrome profiler tells you when functions have permanently deoptimized, right? That would be a terrific feature for Firefox, looking forward to it!

Could you recommend a few articles?

On v8 right? This one is probably the best broad overview:


To get into the gory details, google Vyacheslav Egorov - he's a v8 engineer with a number of talks on YouTube, presentation slides, and blog posts on v8 internals. He also maintains a tool called IRHydra, which lets you examine functions after they've been compiled into v8's internal representation.

I am glad you find my stuff useful enough to recommend it to other people! Thanks.

I just wanted to point out a minor detail: I am an ex-V8 engineer - I have not been working on V8 since 2012.

Duly noted, and thanks ;)

Super interested! I often find technical blog posts like this years later and I'm really grateful to the people who donate their time to the difficult task of writing them.

Super interested!

yes please


One would think Firefox would lead here, since the browser is Mozilla's primary product. Imagine how much hype and PR Firefox would get if only they were far ahead on ES6.

Strange thing to think. Companies do not produce browsers - employees do. In fact, not even employees. A handful of talented programmers produce browsers. All the desire in the world from a company won't improve the ability of those handful of programmers.

It's actually fairly interesting. These huge mega companies are just support teams for a few programmers. All the corporate vision and endless strategies mean nothing compared to one of those programmers having a good or bad day in their work. You honestly get the feeling something somewhere is very broken when you think about it.

>Strange thing to think. Companies do not produce browsers - employees do. In fact, not even employees. A handful of talented programmers produce browsers. All the desire in the world from a company won't improve the ability of those handful of programmers.

Strange thing to think. As if the core mission of the company, the motivations of the management, strategic decisions to hire people and structure projects etc, the funding and priority they give to specific products etc, don't determine and affect the end product!

Put that way, it's as if a great browser engine could even come out of some accounting software house, if only the right programmers happened to work there.

Microsoft has about 120 times the number of employees that Mozilla has. That's an insane number. If they consider the browser remotely important, they can put much more resources behind it than Mozilla ever can.

Only things like the Mythical Man Month save Mozilla a bit here.


...that is impressive.

Now I know that, I'm actually looking forward to playing with the engine more than before - a concentrated braintrust of a few skilled engineers is always more ideal than a sprawling mass of seagulls (to borrow ideology from Finding Nemo :P).

The flip side, of course, is that all of you have to keep your game up to quite a high degree or you're out. Respect. (I think what the Edge team as a whole has managed is really amazing - I mean, a brand new browser...)

[Also... I have to ask... I've been wondering since before this announcement: is it an even remotely vague possibility that I'll ever able to natively run EdgeHTML on FreeBSD or Linux one day in the distant future, source or binary? :D]

I have to apologise, but I simply can't take the claim that it has better coverage at face value. I only just finished testing a week or so ago, and the JS code we deploy, which works on every platform from Android through Linux, Mac, iOS and Windows, is still mostly broken on Edge and doesn't even begin to work in IE.

So we will still be recommending that users not use Edge or IE at this time. That recommendation isn't one I make happily, but Windows machines make up such an insignificant part of the market now that it's an easy business decision.

The browser is the primary product to the Chrome team. Does it make any difference?

It's obviously the primary product to ANY browser development team.

The question is if it's also the primary product of the COMPANY the development team belongs to.

What is actually relevant here is whether the percentage that Google considers Chrome to be valuable to their company times the resources of the company is competitive against the resources Mozilla has (times the percentage that Mozilla considers the browser their focus, as they clearly spend a lot of money on side projects people sometimes seem to enjoy, such as Persona). Mozilla is small enough (at least in comparison to Google) that I think comparing their entire company to one division at Google is probably the correct strategy.

And not all of us at Mozilla actually work on Firefox. I don't know the number but we work on a lot of different things.

Maybe keep more focus on Firefox then?

As it has been losing ground for a decade or so, and without it there is no Mozilla. I see some crazy initiatives obviously doomed to fail (like the mobile OS), which are worrying.

Coming up with a Servo based browser that's both more secure due to Rust AND faster due to parallel processing, and a better native-look-and-feel story (at least on the Mac) would be good to catch up with the others. And better developer tools, as Chrome has eaten that influential segment (web devs) up.

I think that Firefox has improved their dev tools significantly, no? I thought that Chrome and Firefox were neck and neck on this at the moment, with some features better in one and some better in the other.

Although important for their future, Chrome is not Google's cash cow. Chrome is stealing Firefox's market share every day, so they have the luxury to wait. Meanwhile, the pressure from Chrome (and now Edge) hasn't made Firefox step up its game as one would expect.

It might make one question the underlying assumption that competition can somehow cause people to magically become better (a concept many people have which makes no sense). In reality, competition changes how people allocate resources as they play a strategy game to not lose control over segments of the market they perceive as strategically important. It also causes them to lose their negotiation power in the ecosystem, which can be good (as they can't push around smaller players) but also bad (as they can now be pushed around by larger players or loud users).

Mozilla used to be able to sit around and say "we absolutely refuse to do certain things, and we want to spend our time figuring out how to make the web an interesting place for power users and developers". I respected that Mozilla. It had a lot of clout in the market and used that clout to fight against DRM on behalf of all users while spending their resources building a super-extensible platform (which I think is a better description of Firefox's crazy plug-in oriented nature).

The post-Chrome reality is that Firefox no longer has an automatic dominating position in the "alternative" (non-IE) browser space, and so they have had to start caving to loud user demand and start fighting for the end-user market segment. They don't have the ability to fight against DRM anymore, so they have been forced to include Adobe DRM by default. They don't have the ability to waste a lot of time on power users anymore, so they are dropping all the complex-to-maintain parts of their platform spec and have started dumbing down the UI.

The one thing that Chrome did that was truly important was not to compete against Firefox: it was to prove to the world that something--specifically high-performance JavaScript--was both possible and desirable. This is the one positive aspect of "competition", and it is something that frankly should never need to happen in the world of open source, as one can do that in the context of the other project: I can't imagine a scenario where Firefox would have turned down performance patches :/.

But of course, Google isn't going to want to do that, because Google is a company with a strategic vision that happens to benefit from owning the web browser and being able to unilaterally make major decisions and perform weird experiments and crazy product integrations through it, which is all the easier for them to bootstrap as they can use their position as "the place almost all people both search and advertise" to push Chrome on people. This means that Chrome has no reason to collaborate with anyone, and even the one alliance they sort of had (with Apple on WebKit) they broke off when they decided they didn't have enough unilateral control: rather than collaborate as part of a community, Google just wants to own the product.

They also happen to be the primary customer of Firefox, so Mozilla is being forced to operate on smaller budgets. Note that this is the usual effect of competition and should be the obvious one: the idea of someone "stepping up their game" makes no sense when you are now operating on smaller margins (as competition means you can't demand as much share of the profit on any particular transaction) of a smaller market (as competition means that some customers will be using your competition). You only get to "step up your game" momentarily, often towards frustrating ends (such as giving up the DRM battle or trying to dumb down your UI as fast as possible), until your resources start to wither. (Yes: in a small initial market, competition can cause greater customer awareness leading to more pie for everyone; but that obviously isn't the case here: that is only true near the beginning of a new concept, when no one even believes the thing you are doing is relevant or valuable.)

In this case, it is even worse, as the primary customer to Mozilla's product was Google... and so they are essentially screwed in that negotiation. Firefox has had to switch to Yahoo as the default search engine and start making content deals to bundle marketing and software with their product, something they were morally opposed to doing in the past but have been forced into doing due to competition. This also doesn't come cheap with respect to executive time: rather than working out their product and platform vision, they are having to spend time negotiating and having painful conversations about how to keep their company from being destroyed and what morals they are willing to compromise for how long in order to maintain that fight. I don't particularly love Mozilla (as someone who has been paying attention to the open Internet since the beginning, I frankly found Netscape's business model of selling web browsers bundled with ISP contracts terrifying), but I have great sympathy for them these last few years, and absolutely do not see Chrome as being a positive force for anything at all in this ecosystem, except maybe security :/.

Just for historical accuracy, both Safari and Firefox were working on JITs since before Chrome was announced, so high-performance JavaScript was coming independently of Chrome. I do agree that there was more competition with Chrome there and it took less time than it might have otherwise to get to the performance levels we have today.

So what is next for Mozilla? Sounds like they don't have much going in their favor at the moment.

The Mozilla project was wrongheaded from the start. Anyone who genuinely believed in open source could have known that KHTML was better quality code and so it proved, despite vastly greater resources being poured into the terrible Netscape codebase. (I can't help thinking this was largely jingoistic Americans preferring an American project).

Some good things have come out of Mozilla-the-organisation - I very much hope that Rust/Servo is a success. But when it actually comes to developing an open-source browser, the incentives of a donation-funded foundation like Mozilla are all wrong.

I don't know what the right way to fund open-source development is. Dual licensing has its share of failures. So does trying to make it a direct business. So do research grants. Partly it's just the tragedy of the commons. In my darkest days I wonder if open source is fundamentally doomed because it simply can't make the monetary incentives line up with good engineering practice.

It was already being rewritten as Gecko, which also reminds me of the Mariner fiasco which is part of why Netscape 4 stuck for so long.

What do you mean by / what are you referring to with "it was already being rewritten as Gecko"? And what was the Mariner fiasco?

Mariner was an attempt to upgrade the old Netscape 4 codebase that got cancelled. Gecko/NGLayout was the new rewritten layout engine. WaSP pushed for this cancellation:




Wow, wow and wow.

This is full of TILs, and mind-bogglingly enlightening to read.

In many ways the Web feels like exactly the same place as it was 16 years ago (especially reading about the WaSP xD) but things have gotten significantly better for the user and standards of late.

TIL that NS/Mozilla was really the thorn in everyone's side on the tech front, but M$ wore the blame for the Web's early history because of the antitrust cases... that's insane. Absolutely insane.

Are there any binary builds of Mariner, newlayout and NGLayout I can track down?

Well, historically, when Microsoft was still in competition-mode, Internet Explorer was kicking Netscape's ass with the later versions.

With the first few versions they were playing catch-up, but if I recall correctly, IE 4 and IE 5 actually had more features and better standards compliance than the current Netscape versions, as did IE 6 at its launch.

"IE hell" started once Microsoft won the race.

Perhaps. But it's also worth noting that United States vs. Microsoft was coincident with the stagnation in Internet Explorer, with the case starting in 2000, during IE 5's tenure.

For better or for worse, I think the case had a lot to do with Internet Explorer's long pause. I often wonder how Microsoft's browser, its Internet services, and the company as a whole would be today had that case not been undertaken.

If I understand correctly, MS was found guilty of abusing its monopoly by bundling IE with the OS, which put other browsers at a disadvantage.

The way I see it, the only trouble with the case was that it was too late. By that time IE had already won.

Are you suggesting some other consequence of this case, leading to stagnation of IE development? I can't see any...

«Are you suggesting some other consequence of this case, leading to stagnation of IE development?»

I think that you could see a lot of optimistic excitement from Microsoft in the idea of merging IE into Windows just prior to the antitrust case. There were experiments with using the HTML renderer everywhere in the OS, from widgets (the "Active Desktop" thing) to applications (HTML Help, even the HTML usage of Windows (now File) Explorer)... Admittedly, today we have mixed opinions of such experiments (and their often poor performance), but it is hard not to wonder what could have happened had Microsoft invested fully in that combined Windows/IE rendering platform, had they been less afraid of the antitrust repercussions...

We're finally starting to see HTML/JS/CSS "everywhere" application toolkits (it's a vertical slice in the "Universal Windows Platform", and then there are efforts like Electron and Cordova), and it's interesting to think that maybe some of that would have happened sooner in a world without that antitrust lawsuit. (Certainly the counter is that it would have been less standardized, but I don't think that is necessarily the case, either: it would largely have been different standards, though.)

> Are you suggesting some other consequence of this case, leading to stagnation of IE development? I can't see any...

I can't directly see any either because I don't know anyone at Microsoft, least of all on the IE team. But I think the coincident pause in IE's evolution is undeniable. As for a cause and effect link, that's just conjecture. To me, it seems likely that there were either formal business decisions that reduced the effort expended on IE or at least a psychological block that had much the same effect.

Look, I am a Mozilla partisan through and through. I've been using Netscape, then Mozilla, then Firefox for years. So in many ways, I applauded the outcome of the United States vs Microsoft case. But to consider the case as one of only upsides and no downsides seems a bit narrow minded.

I think the antitrust suit also scared Microsoft off from including antivirus software with Windows, which has protected the incumbents in the antivirus industry, but the monetization model of Norton and McAfee makes them little better than the malware they combat.

This is very true. Microsoft are at their best when they're the underdog

The path is always easier when there's someone there to show you the way, and a clear example of where not to go. Mozilla had a clear path forward with Firefox against IE, and had Navigator as an example of what not to do. Microsoft is now in that position with Edge.

Everyone is at their best when they are the underdog from my experience.

Only the underdogs that make the news ... make news. Everyone else you never hear about.

Great point, and true in my experience as well.

I think this is a perfectly valid point. We can think of it like this: they are pretty competitive and have a shitload of resources, but they are not innovative. The moment they become the leader, they lose their interest in growing, even though they have a huge amount of resources that help them the moment they become the underdog. I think this is a cultural problem with Microsoft.

> but they are not innovative

They've had a separate research department for over 20 years or so, and if memory serves me well, quite some innovative things came (and still come) out of there. So I think "they" is too general in your phrase: there is innovation (also look at the recent Surface line, etc.), but you're right in saying that it doesn't always make it to the market, probably because the "they" you mean is some parts of management which see money flowing in and are like "hey, enough cash cows, no need to think about the future". Or something like that. And lately this has turned around again.

The papers coming out of Microsoft Research are a goldmine (relatively speaking as far as academia goes) for programming language and graphics research, just to name a couple of areas I'm familiar with.

Exactly, you rephrased that correctly. I want to note that the last time I checked who works in R&D at Microsoft, I was literally blown away by the names. Maybe more big names than any other company in the whole IT industry.

But the fact you mention remains correct: they are not trying to bring new and innovative stuff to people's lives. At least not when they are the leader.

I was also surprised to discover Microsoft Research's awesome technology. Actually, they've done a fair bit of licensing that research to other companies who go on to make great things with it. I always thought it was because they were pre-occupied with their own projects: Windows, Office, the Xbox, and of course supporting the vast .NET platform which remains the core of their business model: creating software tools and licensing them out to huge corporations, big big money!

Well, from what I've read on the internet, their culture is changing, at least at the grass roots level. A ton of people promoting Open Source & other similar stuff.

It's hard to say what this means for the higher levels, but many people think that Nadella's appointment says that Microsoft does want to change.

And luckily for US, Microsoft is and will be the underdog for a while at least, in most markets they are in: web search (after Google), browsers (after Chrome), mobile OSs (after Android and iOS), server-side OSs (after Linux), cloud stacks (after AWS).

>And luckily for US

I am having a hard time understanding why "luckily for US"?

Disclaimer: I am not a US citizen, nor inside the US.

I think the parent comment meant "us" like "you and me," not the USA.

The capital letters were just for emphasis.

It was actually a typo, but you're right that I wanted to emphasize that aspect.

Freudian slip :)

And by that logic we should make sure they remain an underdog.

All companies take more risk when losing.

At that time, MS also had the best online documentation of (D)HTML on the MSDN site.


And another very important thing at that time: IE4/5 was much faster and more lightweight than Netscape. I recall using IE all the time just because it took NN forever to start up. The only contender on the speed front was Opera.

Yep. They're kicking everyone's asses in ES6 feature coverage. All the more impressive when you consider how they came from behind. http://kangax.github.io/compat-table/es6/

It's almost sad that Edge is not cross-platform.

Engineer on the Chakra team here. As the blog post says, we are definitely interested in going cross-platform. Which platforms would you be interested in seeing first?

It would be very interesting to see a Chakra-based Node.js project for server-side apps, so Linux would be my first vote :)

Microsoft is already working on Node, and it's actually already possible to run Node on Chakra. See https://blogs.windows.com/buildingapps/2015/05/12/bringing-n... and https://ms-iot.github.io/content/en-US/win10/samples/Nodejs....

It's just very rare to use a Windows VM in a cloud service environment to deploy services.

This may change with the Docker support we are seeing promised. Powershell is definitely a workable remote shell. But it's not the case that this is sufficient today.

Azure is the second biggest cloud provider in the world (after Amazon) - and we (TipRanks) deploy VMs to Azure.

It's a lot less common but I wouldn't call it `very rare`

A lot of Azure use is with Linux VMs.

Azure provides a massive number of Linux boxes. It's actually their preferred target, if you hear them tell it at Connect2015.

I had a friend go there; he came back talking about hallway discussions of Posh (PowerShell) on Red Hat.

That's cool, but it only runs on Windows 10, which limits its usage and hence the community. A widely used non-V8 Node.js would be a great project in my opinion.

Give it time. When Node was new it didn't work on Windows, which limited its usage and hence the community.

Now, Node is probably the most reliable cross platform language host. Write something for node, nearly anything, and if it works on your OS it'll probably work elsewhere too.

There's no reason something similar couldn't happen with Chakra, especially now that it's open source


Yes, please! Here's another vote in support of Linux and FreeBSD!

Just to clarify, you're talking about Chakra going cross-platform, not Edge itself? I think a few of these replies might be looking at whatever_dude's comment and getting the wrong idea.

That's correct; this announcement is only about ChakraCore being open-sourced. I have no knowledge of, and can't comment on, what the Edge team's plans are.

OS X. I don't use it but I think it'd have great value. I often hear engineers talk about how they don't want to work with IE because they have to boot up a VM to use it. So...they don't even test it, they know nothing about it except for how IE used to be 10 years ago when they still used Windows machines.

That'd be the coolest thing since it'd encourage developers that shy away from MS technology to dive into it.

They're open-sourcing the JavaScript engine, not the browser. A port to OS X would mean you could run server-side programs there before deploying them to Windows servers. I believe there are more Linux servers out there. Still, a port to OS X would be useful because there are many developers with Macs who deploy to Linux.

I don't see how it'd help deploying server-side programs. Maybe that makes sense with IoT. And as a Node alternative/enhancement.

BUT, once you have the JS engine over, I could see them porting over the rest. But anyways, you could still run headless browser (or just Chakra) and run tests against it.

Have you heard of www.browserstack.com ? Easy way to test different platforms without VMs.

Yeah, but it's paid, needs internet access, and requires some sort of setup. Plus, most OS X devs would do this ONLY for IE. Which is a barrier to entry.

Downloading and installing the IE browser on OS X would be a much better solution. And this way, you have a browser that people can use casually as well. Since it has awesome ES6 support, that makes it even better for JS devs.

Linux, and then Android would be interesting, for non-Java Android apps.

A little love for Haiku sure would be nice....

In all seriousness I'm pretty sure everyone would say POSIX-y systems.

Node-Webkit or Electron should support the Chakra engine as well.

This is just for Windows 10 isn't it?

All! Mac OS X + Linux!

No love for the BSDs or Redox or TempleOS

Linux, because of servers.

It will be, it's mentioned in the article.

Well, I was talking about Edge the browser (UI + DOM rendering + JS) not just the JS engine.

Does it run under Mono?

I'm guessing Chakra is written in C++. Do you mean that you want it to run on top of managed-C++, on the CLR? Is that still even a thing today?

It might perform as fast as Chrome, but I certainly wouldn't rank it as stable or memory-efficient. It hasn't been, in my experience.

I use Chrome as my primary browser on Linux and OS X. Tried to use Edge as primary on 10, but it's a freaking HOG. I'm using Chrome as primary on 10 now, too.

Can't wait to see what improvements come.

Totally, and with TypeScript 1.7 they have also advanced the Node.js game significantly with ES7 features.

What I'd love to see is Web MIDI support. It would be rad to be able to do a Windows universal app with ES6 and create a MIDI sequencer for my hardware synths. I can do that on Chrome OS and then package it as an Android app on Android 5.0+, but I can't do that with Windows universal apps, and can't do it on iOS either. Web Audio is supported across all three, but so far only Google and Opera support Web MIDI.

warning: not all of the graphs in the venturebeat article start from 0!

Yes, that's surprising. Another es6 compat page: https://kangax.github.io/compat-table/es6/ I'm hoping Chrome steps up its game. It's embarrassing.

I wonder if Microsoft is actually spearheading the implementation of ES6 in the browser or did they benefit from the timelines of the ES6 specification fortuitously lining up with the development of Edge - or maybe both.

Not sure what you mean by "lining up with"? The development of ES6/ES2015, and the development of all the browsers and their various JS engines are pretty much ongoing, all the time.

The Edge, Chrome, Firefox and WebKit teams are all working on ES6 compatibility, and releasing new versions pretty frequently. The Edge team are in the lead because they've implemented more features, faster.

Chrome/V8 was actually quite a way behind for a while, although they've caught up quite a bit recently. I believe the situation wrt V8 within Google was a bit messy for a while, as the original developers (Lars Bak, etc.) were more interested in championing Dart than implementing the latest ES6 features. Eventually, Google had to create a new team, based in Munich, to work on V8.

Of course, I'm merely saying that there might be some benefit that arises from implementing a specification (ES6) into a newer engine vs a legacy engine.

Ah, I see. The JS engine actually wasn't changed for Edge, just the rendering engine. Chakra has been around since 2009, and was used in IE9-11. It was one of a wave of new JS engines developed in response to the release of Chrome/V8, which destroyed the previous generation of JS engines in terms of performance (I'm sure the various browser vendors would say they always planned to create faster JS engines, but there's no doubt that the release of Chrome/V8 in 2008 accelerated their efforts).

> It was one of a wave of new JS engines developed in response to the release of Chrome/V8, which destroyed the previous generation of JS engines in terms of performance (I'm sure the various browser vendors would say they always planned to create faster JS engines, but there's no doubt that the release of Chrome/V8 in 2008 accelerated their efforts).

Well, TraceMonkey can at least legitimately argue to not be inspired by it: it was publicly announced prior to Chrome, and work obviously started on it before that.

As far as one can tell, the move from IE to Edge had much less of an effect on Chakra. (Chakra's codebase is, after all, far newer than IE's!)

I can't wait to see what sort of strategic shit they will try to get away with this time, leveraging "superior" technology.

Microsoft always steps up its game. It usually just sits around for a while before doing it. (presumably coding)

No wonder: Microsoft needs "class" syntax in JS; they are a driving force behind TypeScript and ES6. Especially interesting, as JavaScript has prototype inheritance. The reason: all of MS's code is class-based, and they want it to look similar to C#.

Developers of large JavaScript codebases have come to realize that class-based OO is exactly what they need. Prototype OO sounds interesting in theory, but in practice it tends to not be what's needed. Projects that use it end up trying to poorly imitate class OO. This wouldn't necessarily be a problem, except it isn't done consistently. There are multiple approaches, some of them which don't mesh well with others. Class-based OO, on the other hand, tends to be much more consistent and well-defined. This consistency improves developer productivity, it keeps the code cleaner, and it allows for greater code reuse between projects. Prototype OO hasn't proven itself in practice, while class OO has again and again.
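To illustrate the inconsistency being described, here is the same type written three ways (a sketch, not from any particular project): two home-grown prototype idioms that don't mesh well with each other, next to the single ES6 class form.

```javascript
// Style 1: raw prototype assignment.
function Dog1(name) { this.name = name; }
Dog1.prototype.speak = function () { return this.name + ' barks'; };

// Style 2: an Object.create-based "factory", a different home-grown idiom
// that can't be mixed with `new Dog1(...)` call sites.
const dogProto = {
  speak() { return this.name + ' barks'; }
};
function makeDog(name) {
  const d = Object.create(dogProto);
  d.name = name;
  return d;
}

// ES6 class: one consistent, well-defined form for the same thing.
class Dog {
  constructor(name) { this.name = name; }
  speak() { return this.name + ' barks'; }
}
```

All three produce an object whose `speak()` behaves identically; the point is that a codebase mixing the first two styles has no single convention for construction or inheritance.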

Not just that: there are performance implications as well, since browsers can use a fixed memory allocation for a class's properties instead of a dynamically resized one for plain objects, which lets the engine apply more efficient algorithms internally for similar operations.

In principle, in many cases it's possible to allocate the right amount of space for an object's properties, under the assumption that most objects from a single constructor gain properties in the same order.

"Those people will go to their graves, never knowing how miserable they are" :-)

See 5:30 into video (on ES6): https://www.youtube.com/watch?v=PSGEjv3Tqo0

I have been doing a fair amount of JS work in an app using Angular this year. I don't use much inheritance of any kind, mostly FP techniques and "objects" (associative arrays) as aggregates.

If you use "closure" variables instead of "this", it's pretty easy to graft functions from one aggregate to another for code reuse.

Would love to hear more about this. I've come to favour TypeScript style annotation with passing around hashes into pure functions in ES6.

"JavaScript will never have mandatory types," says Brendan Eich, who didn't work for Microsoft last I checked; by which I mean Microsoft is not the driving force behind ES6.

Source: https://channel9.msdn.com/Blogs/Charles/SPLASH-2011-Brendan-... [10:04]

No mandatory types is obvious: the language needs to stay compatible with existing JS code. Gradual typing is entirely plausible (and I think highly likely), and from there it's easy to add a static lint that ensures a codebase is statically typed.

Agreed. This is not "mandatory types", note well.

Weird, feels like that time you see an ex across the bar and you're like daammn!

They supposedly skip a lot of stuff. It is more of a subset of JavaScript than what Chrome and Firefox implement.

I find myself hoping Microsoft makes a comeback. They are really doing a lot to win over developers, which I think is the right move. Obviously they are a huge platform and developers will inherently be using it, but they have taken a lot of great steps, like open-sourcing this engine as well as other projects.

The Code editor they released is built on Electron (the shell underlying Atom) and seems more performant than Atom in the few experiences I have had switching between them.

If they can continue to gain trust in the community and improve their UI they could become great again. You can tell they have thought about how to do this. A few years ago the guys from the IE team did an AMA about the new Internet Explorer (IIRC it was 10). They talked about cross-browser compatibility and wanted developer feedback.

I am not sure if they are actually an "underdog" but I find myself feeling like that, and hoping they can get it together.

Ditto - and I agree, underdog doesn't quite feel like the right word. It's more like yesterday's champion coming out of retirement to deliver some good old-fashioned ego checks on today's cocky up-and-comers. Either way, it's fun to watch.

"Developers developers developers developers" :)

Wow, I'll admit that I haven't been looking at Edge simply because of the IE stigma, but this blog post impressed me. 90% ES6 support? More so than Babel? Awesome. And it's getting open sourced! I hope to see it ported to the Unixes. Perhaps Servo+Chakra could be a thing?

> Perhaps Servo+Chakra could be a thing?

Sounds like a lot of work (not to say it's not possible or interesting). Our bindings code[1] is pretty complicated already. It depends highly on how Chakra deals with things internally -- if it's similar to SpiderMonkey; I'd be interested in having a look. Might become a fun project :)

Without a reference open source DOM implementation (which we have for SM -- Firefox's DOM), we'd also need good docs for the Chakra API. No idea if that exists.

[1]: https://github.com/servo/servo/tree/master/components/script...

Engineer on Chakra here- are these the docs you're looking for? https://msdn.microsoft.com/en-us/library/dn249552(v=vs.94).a...

I know it's off-topic, but since you're a Microsoft employee: haven't the MSDN guys ever heard about friendly URLs? Compare: https://developer.mozilla.org/en-US/docs/Mozilla/Projects/Sp...

(I'm guessing this would be at least the milionth time they hear about it :) )

MSDN does support friendly names: https://msdn.microsoft.com/library/system.string (works for any .net class)

I think the person who wrote this particular page simply didn't specify a friendly name, which is why it's falling back to the canonical name.

Yep! Especially the reference.

Will have a look and compare with the spidermonkey bindings when I get time.


Initially I thought Edge would just be IE with (yet another) skin, but I have to say, it's absurdly fast and lightweight. It's still not my main browser as I'm now too deeply entrenched in Chrome for development, but it makes me really happy to know they're actually pushing the web ahead.

Give the JavaScript Browser a go; it's the Edge renderer + Chakra, but with a faster startup. Also OSS by MS O_o https://www.microsoft.com/en-gb/store/apps/javascript-browse...

To be fair that's a sample app which is part of a tutorial demonstrating the capabilities of the web platform on Windows 10.


I find its UI quite annoying and its tab management really poor, but technologically it's really solid. I've noticed it seems to handle HTML5 video more smoothly than Firefox (I rarely use Chrome so I can't really comment on it).

Servo is a rendering engine written in Rust, why would it choose Chakra, which is written in C++, over WebKit, which is written in C++?

You're comparing apples and oranges here.

Rendering engines (all in C++ except Servo):

- Servo

- Blink (Chrome/Opera)

- Trident/Spartan (IE/Edge)

- Webkit (Safari)

JS engines:

- Spidermonkey (used by Servo and Firefox)

- Chakra (used by Trident/Edge)

- V8 (used by Blink, also by Node)

Servo already uses a JS engine in C++ because a Rust JS engine is a huge undertaking in itself (see [1])

It does make sense to try out Chakra or V8 for Servo. It's probably a lot of work, though. And there may not be a net gain out of that (We have access to in-house spidermonkey know-how, none of that for the others).

[1]: https://news.ycombinator.com/item?id=10682274

I have a vested interest in creating an SM -> Chakra adapter to swap out SpiderMonkey and compare how Chakra does, since Chakra also has an interpreter and isn't JIT-only like V8. Such an adapter would also make playing with Servo -> Chakra possible. I haven't played with Servo, but is there a list of all the JSAPI calls that it needs to function, or can you easily dump that list?

The comments below link to the bindings (rust-mozjs is the Rust API, but the parts of it we use are in components/script/dom/bindings).

Here's a `git grep jsapi` on that folder: https://manishearth.pastebin.mozilla.org/8853940. Haven't removed duplicates or formatted it, so it's probably much shorter than it looks.

Please keep me posted about this!

Webkit uses JavaScriptCore, not V8.

TIL; fixed, thanks.

My bad, I meant JavascriptCore or the JS engines often paired with WebKit. What I meant was, why pair Servo with Chakra and not any of the other Javascript engines?

Servo currently uses SpiderMonkey as its JavaScript engine (SpiderMonkey is the Firefox JS engine). Sure, a JavaScript engine written in Rust would be useful at some point, but right now that's not on their roadmap.

From what I heard, it is on a roadmap, just not on Servo's. Apparently, it'll someday be a sister project to Servo.

Servo contributor here: Haven't heard of this plan (still could exist), though we do (mostly jokingly) toss the idea around every now and then. There are a couple of not-production-ready Rust JS interpreters floating around (https://github.com/swgillespie/rjs/ is the latest I've seen) though.

Stuff like JITs can't really reap Rust's safety+performance benefits[1], and overall the same might be said of a JS interpreter, with all the garbage collection and stuff. The other thing is that Spidermonkey/Chakra/V8 have had years of optimization and tweaks, starting from a clean slate would be _very_ hard. With Servo's small team this is an impossible task, but with a larger, dedicated, team, there's a chance. shrugs

[1]: though it's possible they will still be safer than the C++ counterparts, which is still good. The question then becomes, how much is that increase in safety, and can it justify a whole rewrite?

It is true, though, that a lot of the danger isn't in the JIT but rather in the bindings (e.g. the native implementation of the Date object, or Typed Array Buffer, or what have you), which Rust's safety features could help with.

Is Servo used in production by any browser, or any project (like Chakra)? Or is it just an experiment so far?

I don't think so. From the roadmap it looks like in a year or so it might start to be approaching a beta https://github.com/servo/servo/wiki/Roadmap

Not yet. I don't think it's an experiment and they are intending to actually use it at some point, but it is still in development right now

I'd like to see Node.js using Chakra by default. V8 developers have shown that they don't care much about Node.js; they are more interested in Chrome. MS has shown more interest in Node.js than Google, and I'm sure it would be better for everyone. Fingers crossed.

I don't. A lot of code out in the wild already relies on v8, and won't work with Chakra. Node 5 is up-to-date with v8. Though, I would like to see Mozilla devs revive the unofficial SpiderMonkey Node.js support (https://news.ycombinator.com/item?id=2469786 , https://github.com/zpao/spidernode ).

This is definitely a problem today, but NAN (https://github.com/nodejs/nan) which is now the recommended way to build native modules, offers a way out. That project provides a stable API to insulate module developers from changes between v8 versions.

Sometime down the line, a NAN-spidermonkey or NAN-chakra project might become feasible.

Sort of -- in practice, we found ourselves instead playing catchup to NAN (many changes across Node 10, 12, 4, etc.). It insulates from minor changes, and increasingly, part of that has been Node waiting longer and longer between v8 changes.

> which is now the recommended way to build native modules, offers a way out

On paper. In reality, HELL NO. I maintain node native addons, and the thing is, NAN is great, but they simply cannot foresee what will break next in v8.

Once something breaks, they provide a node-version-independent feature-shim, but every time there's a new version of Node, I DREAD the work I'm going to have to do to maintain my add-ons.

It's simply less work to cross compile C++ to JS w/ Emscripten or write it in JS (or higher level language and compile to JS) in the first place.

> and won't work with Chakra

It's your loss.

I meant many npm modules ;) Not my code. So it would be your problem too.

That's the interesting part: even with V8 updates, npm packages get broken. I don't see the migration to Chakra as different from migrating to a newer V8 engine, which also breaks a lot of stuff.

Not Node.js per se, but a separate project that ran on the Chakra core and worked hard for cross-compatibility would be more than welcome.

A node.js fork where you can optionally swap out v8 for chakra: already baked by none other than microsoft.

They hope to merge it back into upstream, that seems possible given the recent history of the Node.js project.

Of course, it is very very unlikely that npm modules with native code work in Node.js/Chakra just yet.

> Of course, it is very very unlikely that npm modules with native code work in Node.js/Chakra just yet.

I would hope they follow the same path as the jxcore+SpiderMonkey team so I don't have to worry about which engine my npm modules support.

Yeah, I just saw it on Reddit. Nice job MS.


This is the first I've heard that v8 devs don't care for node. Since I am not in the know, are there some sources that detail this issue in depth?

I have a somewhat off topic question: is there anything in the design of Javascript that mandates single-threadedness? Could any Javascript engine implement threads?

I'm asking because I'm wondering if Node.js's evented approach is the only way to do things.

If you allow threads to share memory arbitrarily you need to add locking to all internal VM structures, which is going to be a significant slowdown.

Also it's not (any longer) considered good practice to have languages that allow mutable memory sharing since that makes software unreliable, so it's not really a good idea.

Without arbitrary memory sharing, multi-threading is already supported with web workers.

Web Workers are very weird in that you have to have a separate file.

Agreed! FWIW this is why I made Operative. It gives you a way of writing "inline" JS that utilizes web workers (caveat: not actually inline; no scope/context access of course). It provides good support across browsers and fallbacks for envs where web workers don't exist (and ~all the in-between cases): https://github.com/padolsey/operative

Looks nice! I took a stab at this a while back https://gist.github.com/icodeforlove/deb0f19a9e7bd528bd48

Not with a little imagination and hackishness!


Sharing only typed arrays shouldn't be hard.

Well it's not sharing, but web workers have a zero-copy way of transferring typed arrays between web workers. [1]

It gives many of the benefits of sharing memory without all of the gotchas


I use that a lot already but I wish there was at least read-only shared access.

There's a draft spec for SharedArrayBuffer:


Shared memory in Javascript is a Bad Idea(tm). Run, don't walk, to your standards body, and tell them not to entertain such notions.

You know what happens with shared typed arrays? Shared Uint8 arrays.

You know what happens with shared Uint8 arrays? Multiple workers using JSON.parse.

Do you want ants? Because this is how we get ants. :(

Why would anybody ever want to JSON.parse typed arrays from web workers when the parsed data (or unparsed strings) can be passed around directly? Strings are immutable and are not copied when you pass them around. I don't see your point.

Remember: web workers communicate (to the best of my knowledge) using a full serialization/deserialization of message objects (which is why they added transferable objects). So, to create a one-to-many broadcast mechanism, you'd do exactly what I described. Additionally, I am unsure, but I could easily see immutable strings being copied nonetheless between web workers because of the desire to give every worker a private heap.

I still think it is faster to send the same message to all than to use JSON.parse in each (not to mention much more compatible). Since each worker is going to make a copy anyway (with JSON.parse or by receiving a message), why does it matter?

Also, assuming what you say is true, what's the problem? It's much easier for the JS implementations to synchronize things only with a specific type of typed arrays than sharing all kinds of GC-managed data structures.

There is nothing in the language itself that allows multi-threading. Going forward, they are adding support in the language for the async keyword, similar to F#. Any way to achieve parallelism would have to be from APIs, e.g., like WebWorkers in the browser.

No! JavaScript the language and threads are not related. JavaScript implementations on top of the JVM, like RingoJS, use JVM threads to execute JavaScript in parallel. In that case you get both event-based execution within a thread and multi-threading at the same time.


The JVM implementations of Javascript (e.g. Nashorn) support multithreading. Imho it's not a good thing, because all existing Javascript code is written with singlethreading in mind and lots of stuff will break if you use it from multiple threads. Multithreaded code for Nashorn requires the use of JVM synchronization primitives, which are then not supported by other Javascript engines.

It used to be possible to write Firefox extensions that used multi-threaded JS; however, like any other language, accessing the DOM was not threadsafe (so you can't have a window be your global object, and therefore they needed to live in separate source files). As JS lacked native support for threading, you also had to be very, very careful and often ended up with less obvious threading issues anyway.

In the last few years SpiderMonkey (Firefox's JS engine) has dropped support for this more and more, and these days you can't anymore. But that's a consequence of the engine implementation, and not the language.

Mozilla is working on a spec for something called SharedArrayBuffer, which will allow Workers other than the main thread of execution to share memory.

The reason for this is that as soon as you have separate threads sharing memory, it introduces non-determinism into the mix. When developers use synchronization mechanisms incorrectly, or not at all, this can lead to deadlock between threads.

This must not be allowed to occur on the main thread shared with the rendering engine.

So keep your eyes out for SharedArrayBuffer.

The event loop is very important to the existing language semantics and isn't going anywhere.

My speculation is that before Node stepped into the picture, the primary use case for JS was the browser, hence UI manipulation. That usually calls for a single thread that can update the UI, unless you're willing to introduce a lot of mental overhead dealing with synchronization primitives.

I don't think there is anything preventing JS running shared memory threads. Apart from all the existing code and libraries which aren't thread safe, but that's the case in many languages.

No mention of license. Is it safe to assume it'll be Apache or MIT?

I work on the Chakra team, and it is going to be under the MIT license.

This is great news!

Good idea. And please add support for WebMIDI.

WebMIDI is not owned by Chakra. Please voice your opinion here


I don't think WebMIDI is in the scope of a JavaScript engine.

I believe MIT.

Microsoft is slowly but surely winning me as a fan. Keep doing things that matter, show that you're committed to the open source community, and continue to help push the web forward and I think nothing but good things will come from this.

I wonder if in the future Node will enable swappable JS engines. Edit: OK, just found this: https://news.ycombinator.com/item?id=9534138

I've hoped for this since its inception.

It gives the benefits of having multiple competing implementations but unlike the browser platform we (the developers) get to choose which one we use.

Native addons and even Node core are too tightly coupled with v8. NAN will need an overhaul.

Wow. They're really serious about changing their philosophy aren't they. Using Github for their stuff, making and open sourcing Visual Studio Code, other stuff I can't remember, and now this.

They're aware of the prevalence of open source in the current tech scene and want to be a part of it rather than fight it. Not many startups (except for DreamSpark, BizSpark, and Seattle/Redmond-based ones) build on the MS stack nowadays. MS audits are dreaded (thanks to Oracle for showing the way) and a diversion for a growing startup. It's better to steer clear of MS & other proprietary tech whilst building a startup.

They're embracing startups since they need them and are changing their business models to cater better to smaller startups than the behemoth enterprises with Office 365, Azure, etc.

If they go full SaaS I would not be surprised if they open-source Windows itself, seriously. I think for that to happen Nadella needs one big win/turnaround, and then the shareholders might be on board.

Well, it seems feasible if they take out enterprisey features. For example Windows without AD support could be an option. Or without remote desktop support.

The hobbyists would use it as is but no company would touch it without support anyway.

Unless my memory is playing tricks on me, a couple of months back, Mark Russinovich (of SysInternals fame) gave a talk somewhere and mentioned that they were looking into that.

Sounds kind of strange to say it out loud, though. ;-)

He said it was not impossible, he never said MS was considering it ATM

Thanks for the correction!

Still, given where Microsoft is coming from and given Russinovich's position, this is significant.

I was literally blown away by Visual Studio Code. It is an amazing editor. I haven't opened sublime since.

I really wanted to like VSCode but couldn't get over the "working files" paradigm and consequent lack of tabs :(

Have gone back to Sublime 3 for now but if the VSCode devs ever decided to support tabs I'd switch.

I'm still pretty much on Atom, but yes VSC wasn't expected. I would migrate if most of my plugins were available on there.

I wish it could keep all the file info between quitting/reopening, with auto-saving similar to the way Sublime does it.

I prefer Atom

As a mere mortal: how hard would it be for an everyday programmer to build something cool like a JavaScript engine?

I sometimes have crazy thoughts about the world ending.

How hard would it be to build your own JavaScript engine from scratch?

These programs aren't magic. They use the same kind of constructs you use every day to write your programs. They use some different algorithms, but you can look those up in books and in existing engines. People who work on these engines started where you started and there's no reason you can't pick up all the skills over a few years.

This is an excellent attitude.

The problem with JS engines nowadays isn't so much the language itself (or its core library); it's the absurd amount of optimization needed to perform well across a varied number of possible setups, all while the browser is busy doing DOM compositing, loading yet more JS code, etc. Things like a JIT are not strictly necessary if you just want your language to "run", but they're an important part of an optimized system.

To me it seems the past ~5 years of JS engine development have been focused on what kinds of optimizations to implement, rather than on parsing or implementing core library features.

Building a javascript engine is super simple. But to build one that has performance on par with the current crop will take you - as a single person - a lifetime and by then the state of the art will have moved on.

The performance is hard, but the bug-for-bug compatibility is what will kill you.

Actually, for JS engines this is a much smaller problem than for other parts of the web platform. There are two nasty warts I'm aware of: the "function inside if" mess, and the fact that the spec says enumeration order is undefined but actual web pages depend on some things about enumeration order that all browsers implement; chances are this will make it into the spec at some point. But by and large JS engines agree with each other and with the spec. Much more so than, say, DOM or CSS implementations.

Enumeration order was indeed (finally) added to the ES2015 spec. Here's a link to the release candidate spec for the [[OwnPropertyKeys]] algorithm:


[[OwnPropertyKeys]] is not invoked by a for-in loop. [[Enumerate]] is instead, and while it defines that the set of iterated names is the same as that returned by [[OwnPropertyKeys]] it explicitly says that order is not defined.

You could just leave them out ;)

In a world-ending scenario, 'having a working javascript engine' is likely to be surprisingly far down the list of priorities. I think we might actually need a new level on Maslow's hierarchy of needs, just above 'self actualization', for 'lightweight scripting.'

It's not as impossible as it sounds.

There are many JS engines which are extremely small compared to the big guys used in browsers. Most of them tend to grab a subset of the language and implement that.

For example, the creators of nginx created their own specialized engine called nginScript that runs JS for their stuff https://www.nginx.com/resources/wiki/nginScript/

There is even a javascript interpreter written in javascript which can be a good starting point to learn a bit how it works https://github.com/jterrace/js.js/

BeRo did build a javascript engine from scratch on his own: https://github.com/BeRo1985/besen

From the copyright headers he worked on it starting in 2009 and had the first public release in 2014 (actually 2010, see comment below). He certainly worked on other stuff in between too. This is also not a toy implementation. While I have no idea how it compares to state of the art engines, it does feature a whole bunch of optimizations and a JIT compiler.

Well, the first public release of BESEN was in 2010:


A toy javascript interpreter is quite easy. But a high-performance, production-ready engine is a totally different story.

Anyone have a recommendation for a good place to begin, if I was interested in writing a toy javascript interpreter?

Friends of mine used "Create Your Own Programming Language" to get started learning about interpreters: http://createyourproglang.com

I personally learned with SICP (But reading this book isn't just about interpreter, it will make you a better programmer and blow your mind in so many different ways. I wouldn't say it's for experts only, but this isn't the kind of book that beginners would get excited about. However, if someone is serious about improving as a developer, I can't think of a better book): http://www.amazon.com/Structure-Interpretation-Computer-Prog...

Finally, (How to Write a (Lisp) Interpreter (in Python)) by norvig is a great read if you're getting started with interpreter: http://norvig.com/lispy.html

* The reason the two last links are lisp related is because writing interpreter in Lisp is really straightforward (Since the code is the AST). Similarly, if you wanted to learn about memory management, a language like assembler or C might be more suited than say Python.

You can check this code https://github.com/espruino/Espruino/blob/master/README.md

I quote

"Espruino is a JavaScript interpreter for microcontrollers. It is designed to fit into devices with as little as 128kB Flash and 8kB RAM."

It could be a good example of a small JS interpreter. I remember the author said it was 95% compatible with real JS. That was a couple of years ago.

You can start with https://github.com/DigitalMars/DMDScript which is an open source (Boost licensed) JavaScript engine written in D.

While at university I attended a "learn Scheme" summer course and it was taught by implementing your own Scheme interpreter (in Scheme). It is not that hard to implement a programming language, but making it perform well might be a completely different ballpark.

Edge is certainly much faster than Chrome/Firefox for JS processing, so much so that I wish I could use it on Linux. Looks like that might be happening. Really great news.

I didn't know Node.js could use anything but v8. This is also very nice.

I'm not sure where the hangup is, but I've found that Edge is fast as hell for initial page startup, but lags behind quite a lot when under heavy load.

I've seen the benchmarks but in my experience it just... lags...

For example, on the kangax ES6 compatibility table [1]. On chrome clicking a column takes less than a second, on Edge (i'm on an up-to-date Windows 10 machine, no preview stuff) it loads faster than in chrome, but takes 3+ seconds to switch between columns.

Even some of the stuff i've written acts similarly, and i can't figure out why.


I know between Chrome and Firefox for a particular project of mine (which is a bit old now), Firefox ran raw JS faster, but Chrome could update its DOM faster. FF would spend much more time when many updates needed to happen in a large column of divs (10k+). Raw JS speed isn't everything.

I get that, the Kangax table was just an example I literally ran into moments before.

But for a more "pure" js example, I have an app which does some image processing in the browser using the canvas ImageData stuff and typed arrays.

For whatever reason my test case completes in about 3 seconds in chrome, 3 to 4 seconds in FF, 11 seconds in IE11, and 6 seconds in Edge.

I've profiled the hell out of it and i just can't figure out the reasoning for it, but it's there. (and as a side note, having the dev tools open in Edge obliterates javascript performance, my test case was taking 30+ seconds to run when i had the dev tools open and it took me longer than i'd like to admit to figure that one out)

I have a feeling the problem is that i'm optimizing for V8 because i know it the best, and i'd much rather not do that if possible.

Chrome's DOM and painting engine is faster than Edge's. That being said, Edge just came out recently and is still bogged down by some IE 11 stuff, like the Trident layout engine. I'm sure they are really invested in making it faster.

Think about it: Microsoft really wants Bing to succeed, and for that to happen Edge has to succeed, so I'm sure they are putting their best minds behind it. It's a matter of time.

I think theoretically Node.js could work without V8, and guess some of MS's work for IoT was indeed that (swapping V8 for Chakra).

There used to be a Mozilla SpiderMonkey-based Node.js too, though it isn't maintained atm.



There is an actively maintained Spidermonkey port of Node.js called jxcore. It also supports an older version of V8 from the 0.10 branch.


I was wishing that MS Open Tech could make the other JS engines Windows x64 ABI compliant including SEH for a while now.

We have a bunch of laptops at school running Windows 10 with Edge. Students can't copy and paste into Google Docs while using Edge.

Was this a deliberate choice by Microsoft to steer people away from Google products, or is it something more benign than that?

If it fails when they paste via right-click menu, it's actually a known issue/design decision on Drive's side. Try using shortcuts instead.

> If it fails when they paste via right-click menu, it's actually a known issue/design decision on Drive's side.

I expect that if this were the problem, OP wouldn't be mentioning it.

When you attempt to use the right-click menu to copy and paste, you get a clearly-worded message box that tells you that you need to use the keyboard shortcuts, and which keyboard shortcuts to use.

I had similar problems using Google Docs on Safari too, so I think it’s just a bug on Google’s side.

AIUI, Safari is the IE6 of the "Modern Web".

For the downvoters:

Go make friends with someone who works on Safari, then take them out for drinks. You'll understand why I'm saying what I'm saying. :)

I know a guy who works on core web-dev stuff at Google.

He tells me that Edge is often very quirky in very subtle ways. He's had to spend many, many hours diagnosing, reporting, and working around quirks in Edge. These quirks sometimes get fixed and sometimes do not, but are often not present in Internet Explorer.

We have a fairly large-ish JS codebase at work and from what I saw so far is that Edge's JS was never a problem. SVG rendering on the other hand is horribly broken currently for some cases.

That being said, since our JS is generated, it's purely ES5 since it still has to run on IE9. So maybe the weird cases are in the more modern JS parts we don't use.

This is just how Google Docs/Sheets/Slides etc. works in every browser and on every OS.

Out of curiosity, why build a whole new engine rather than using/improving V8?

Why build v8 when SpiderMonkey existed? Why build Chrome when Firefox existed? Ad infinitum

Because you may want to implement things you care about, not necessarily the things the V8 developers care about.

Unless you mean "why write your own engine instead of forking V8", in which case the answer might be that if you have the time to do it, it's easier to become expert on something you created yourself rather than an existing large complex system. And while you're at it you can make sure the system you create is good at solving the problems you mean to solve, which the existing system may not be.

A concrete example of one of the design tradeoffs here: V8 doesn't have an interpreter mode, just several levels of JIT. Every single other browser JS engine does have an interpreter, because it turns out that for cold code (which is most code) that's both faster and less memory-intensive than a JIT. Not to mention allowing your engine to run on hardware for which you haven't created a JIT yet.

They've already built it, I think before V8 existed or around the same timeframe.

It was years later. V8 shipped in 2008, having been in development for (IIRC) three years. Chakra got its first public preview release in 2010, and shipped in 2011—I strongly suspect it started after V8 first shipped.

Thanks for the correction.

Chakra started from a clean slate so they could think about problems differently and not be bogged down by v8 architecture.

What in blazes?! Okay MS, that's an impressive step.

I'm waiting for the first ports to Linux or, hell, a native port of IE... given the trend, it's not unreasonable that MS will open source a load of stuff.

I just want node running on chakra on my linux box. Given MS has a lot of money to make from Azure, I see this happening.

Well I guess we can all start using ES6, no need to compile to ES5 anymore.

You will still need to compile to ES5 to support older browsers.
