I feel for Brendan Eich; he is a bit of an unsung hero when you consider what he enabled with JavaScript, and it is always stressful to learn that someone is trying to kill your baby.
Having said that, I really do hope it is killed. Well, not killed, but demoted to "one supported language of many". And the sooner this is started, the better, because it is a long, long process. I don't think JavaScript is bad, but I do strongly feel that the underlying engine for client-side web programming should be a full-blown VM that supports many possible source languages (a la modern .NET with the CLR, DLR and all), and I'll support that model regardless of who pushes it, so long as they do a good job on the technical side and don't try to use it as a lock-in mechanism. Google seems the most poised to do this (I can't imagine Microsoft giving up that much control of .NET), and their idea to support JavaScript as an interim target is the only practical way this will ever be solved. So I hope Dart is successful -- assuming it is actually going for this and not just trying to swap one core language out for another, which remains to be seen until they announce more information.
BTW, yes, I know you can already compile many other languages to JavaScript (as Dart itself will do in the first stage), but... why? Why not just put a proper VM down there that is truly designed to efficiently run bytecode from many different source languages, and have JavaScript be one of those languages?
Brendan has previously talked about some obstacles to a bytecode/VM-level specification, and some advantages of specifying the browser language at the source level rather than at the bytecode level:
The main problem with common VMs for multiple existing languages is that often the semantics of the language are heavily coded into the VM. For example, Parrot's core opcodes mimic Perl's behavior of "Type coercion for everyone, the string "0" is false!" There are a lot of other issues as well, and while they are certainly solvable, the resulting VM would be very heavyweight. If you want to design a VM that runs multiple languages, often you have to design the languages around the VM.
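To make the mismatch concrete, here is a minimal sketch in JavaScript (the Perl behavior is shown only in comments; this is illustrative, not tied to any particular VM):

    // JavaScript: the string "0" is truthy.
    if ("0") {
      console.log('"0" is truthy in JS'); // this line runs
    }
    console.log(Boolean("0")); // true
    console.log(Boolean(""));  // false -- the empty string is the only falsy string

    // Perl, by contrast, treats the string "0" as false:
    //   print "never printed" if "0";
    // A shared VM either bakes one of these behaviors into its core opcodes
    // (as Parrot did for Perl) or pays for an extra dispatch layer per language.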
Well, at some point you might run out of space in the opcode byte (or bytes), and then you need longer sequences that create more cache pressure and take more time to decode, right?
Also, the developers' time budget and the VM's complexity might keep you from optimizing all possible paths.
Google's PNaCl is trying to promote this; I talked about it a few weeks ago[1]. But until it gets widely deployed and accepted, it is just yet another standard[2] that will unify everything.
A standard VM/runtime specification for all the browsers would be awesome. We've got plenty of them: CLR, DLR, JVM, Parrot, etc. Can you imagine? I can see those MIME types already: application/csharp, application/perl, application/ruby, application/IL.
To be pedantic, it wouldn't be "application/csharp". By the time it's compiled to MSIL bytecode it doesn't matter if it was C#, VB, F# or whatever. "application/MSIL" is what I'd say.
I can imagine it, and I guess my browsers already have three VM/Runtimes in them (JavaScript, Flash, Silverlight) but it feels so inelegantly top-heavy to have lots more.
Right, call me silly, but I'm honoring the "Right-click -> View Source" spirit here. I don't know about you, but I feel more comfortable knowing that my browser is downloading source code and compiling or interpreting it than downloading some kind of binary to execute. Even if you call it "intermediate code", "byte code", "almost-there-machine-code" or whatever, it's still binary stuff. I get your point though.
But. Flash isn't served as source, is it? And JS is often minified/obfuscated. Sure you can run that through a tool to get back the source (sans comments and long variable names) but you can do just that with MSIL bytecode too.
Yep, agreed. Then we will see additional tabs emerge in Firebug and Chrome Developer Tools; I can see them already: "IL View", ".NET Refactor", "JSIL View", hehe.
This was the original intent (to have many languages supported by browsers; not necessarily a single VM with many languages). It just never really materialized. By the time the web was ready for bigger apps that needed more software, Javascript was Good Enough for the purpose, and munging other languages into the browser was too much effort. JavaScript grew up alongside HTML and the DOM, making it uniquely qualified for building applications around HTML and the DOM. I think it's telling that Google is building a new language rather than modifying an existing one (though it does appear to be inspired by other languages).
I agree with your conclusions, especially that it's telling what Google is not doing, but I have to object to your "This was the original intent" lede. Whose intent was that? Not Sun's in '95 or '96 with the JVM, which was all about Java. Certainly not Netscape's.
Multi-VM/multi-language beats multi-language-single-VM, indeed, but then you have problems such as the cross-heap cycle collection one I raised on this thread. No free lunch.
And anyway multi-language-single-VM was never any browser vendor's original intent. I had folks like John Ousterhout belatedly (early '96) pitching TCL, but smart enough to see that it wouldn't fly (John then tried pitching Tk for cross-platform widgets, but Netscape was already screwing up its platform -- no sale).
Mid-90s C-Python, TCL, Perl 4 then 5 -- none of these was ever intended to be wedged into any browser and kept up-to-date. Not by Netscape or Microsoft or any vendor of that era, and not in the 2000s either. MS put IE on skeleton crew maintenance. The Silverlight-era "DLR" (single VM for many languages) was much later and not browser-bound.
I have no idea. But the script tag in HTML, for example, allows specification of a language (e.g. <script type="text/javascript">, historically). It was my understanding that this was intended to permit other languages to be included in the browser, or as plugins. It was used by Microsoft for VBScript, and by others along the way. I thought that was the intent of that flexibility built into HTML.
And I noted that it was not intended for a multi-language VM ("not necessarily a single VM with many languages", to quote myself -- which I guess could have been more emphatic in denying that a multi-language VM was intended; I sort of assumed everyone would consider multi-language VMs the new hotness, not something that would have been considered back in the early days of VM-based languages becoming mainstream).
I merely meant that other languages were supposed to be possible. Which makes sense. No one knew with any confidence (maybe you had a gut feeling) that JavaScript would grow up to be as capable as it has. And the idea of a multi-language, multi-paradigm VM is pretty novel stuff even now. Only in the past ten years or so have we started to see people building disparate languages on a single VM, and others building VMs for the purpose of hosting widely varying languages. And those experiments are still up in the air, as far as I can tell. The JVM does pretty well for Java-like languages, Parrot can run dynamic languages, and LLVM is great for C/C++ and their descendants... but push them out of their comfort zone and things get hairier, and the VM probably needs to grow bigger and more like the language it is hosting. They'll probably work it all out eventually. But I certainly don't think multi-language VMs were on the minds of the creators of HTML or the browser makers.
I'm not sure we disagree (and I would obviously have to defer to your much greater knowledge on the subject, even if we did disagree). I could have been more clear in what I was suggesting was "the original intent".
I'm the guy that created <script> -- big dummy me first used language="JavaScript", not type= -- type came in HTML4 (where Dave Raggett invented "text/javascript", never registered; see RFC 4329).
Yes, I added language= at the start, but the default was JS and I had no particular intention to support other languages using one or more VMs. I see what you mean now, though -- thanks. Hope this history helps. It's less meaningfully intentional than you thought. More like blind future-proofing.
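For reference, the two forms being discussed look roughly like this (a minimal sketch; the language attribute is long obsolete in favor of type):

    <!-- Netscape 2-era form: language attribute, defaulting to JavaScript -->
    <script language="JavaScript">
      document.write("hello from language=");
    </script>

    <!-- HTML4-era form: type attribute carrying a MIME type -->
    <script type="text/javascript">
      document.write("hello from type=");
    </script>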
To me, through my own skewed historical lens, it looks like an admirable level of humility. So, good job.
While that flexibility was mostly used for evil, rather than good (vbscript, for instance), I think it probably helped build a stronger web ecosystem having competitors to the throne even if none ever really got a foothold against JavaScript.
The reason you won't see a bytecode VM replace Javascript is that bytecode is either interpreted (slow) or JIT-compiled (complicated but faster). If you go for speed and JIT-compile bytecode, why not just compile your source and dispense with bytecodes altogether (as V8 has done)? This gives other runtime implementers the freedom to choose whether to create an interpreter, compiler, or some other execution mechanism, and programmers the freedom to code at a high level.
Bytecode, as distinct from object code, is on the way out. Consider Javascript a portable high-level object code and move on. If you want other languages, compile them to Javascript. Semantic affinity between source language and execution environment (e.g. is "0" false, as mentioned elsewhere) is an issue you'll need to address regardless of whether you're using bytecode.
Also, don't underestimate the power 'view source' had in making Javascript perhaps one of the most widely-adopted programming languages in the world.
The trouble with this approach is that the Javascript prototype-based object model is somewhat incompatible with precompiled-library loading. Loading Javascript (even in bytecode form) requires running the code. Modern desktop/server software leverages tons of code via shared libraries with fast startup times, but Javascript webpages have horrible startup times with only tiny amounts of library code. This problem is important to fix, or the Javascript ecosystem will be prevented from effectively building higher and higher leverage software layers.
In addition, Javascript's abstractions are based on relatively slow mechanisms such as dynamic hashes. This makes it difficult if not impossible to get C/Java/C#-like performance out of it even with a JIT. You can build fast dynamic hashes out of structs, but you can't really build fast structs out of dynamic hashes.
Evaluating function declarations and expressions, even in a module pattern closure, takes well under a millisecond in modern browsers.
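For readers unfamiliar with the term, the module pattern being referenced is just an immediately-invoked function expression whose closure holds the private state (an illustrative sketch):

    // Module pattern: only the returned object is exposed; count stays private.
    var counter = (function () {
      var count = 0; // private to the closure
      return {
        increment: function () { return ++count; },
        value: function () { return count; }
      };
    }());

    counter.increment();
    counter.increment();
    console.log(counter.value()); // 2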
The "dynamic-hashes" remark shows serious ignorance of polymorphic inline caches and shapes aka hidden classes. Competitive JS JITs do not probe hashtables for dot member accesses.
From what is known, it seems as if Google plans to move all their Javascript app building efforts to Dart. Given that GWT/Closure are already pretty successful in large organizations and internally, that seems like a big bet on Dart's future.
Whether the Dart VM gets adopted by other browsers seems relatively inconsequential provided they have a decent cross-compilation story.
Assume Dart is objectively better except for lacking browser support. Even Brendan Eich seems to admit that Dart will be "better". Then developing using Dart will give you an adult tooling story (through Chrome/Dash VM/Brightly), a server-side front-end language (capable of Google-scale applications), optimized compilation to Javascript (presumably better than GWT), and major corporate backing (in contrast to coffeescript).
This seems like the makings of a major win, provided that Google commits to adopting it internally.
"Even Brendan Eich admitted...". As if I would not expect, nay demand, that Gilad and Lars would do better -- much better -- than JS!
For the record, I'm not worried about JS being replaced by a better language. I am working to do that within Ecma TC39, by evolving JS aggressively.
The leaked Google doc's assertion that this is impossible and that a "clean break" is required to make significant improvements is nonsense, a thin rationale for going it alone rather than cooperating fully.
The big issue I have with Dart, which you seem to consider inconsequential, is whether Google forks the web developer community, not just its own paid developers, with Dart, and thereby fragments web content.
A Dart to JS compiler will never be "decent" compared to having the Dart VM in the browser. Yet I guarantee you that Apple and Microsoft (and Opera and Mozilla, but the first two are enough) will never embed the Dart VM.
So "Works best in Chrome" and even "Works only in Chrome" are new norms promulgated intentionally by Google. We see more of this fragmentation every day. As a user of Chrome and Firefox (and Safari), I find it painful to experience, never mind the political bad taste.
Ok, counter-arguments. What's wrong with playing hardball to advance the web, you say? As my blog tries to explain, the standards process requires good social relations and philosophical balance among the participating competitors.
Google's approach with Dart is thus pretty much all wrong and doomed to leave Dart in excellent yet non-standardized and non-interoperable implementation status. Dart is GBScript to NaCl/Pepper's ActiveG.
Could Google, unlike Microsoft ten or so years ago, prevail? Only by becoming the new monopoly power on the web. We know how that story ends.
So somehow Google is going to become a monopoly because Apple and Microsoft will voluntarily decline to support a new technology that Google will presumably license freely (as with NaCl)?
I don't even know if I like Dash or not, but how is this move predatory? Did you approach a standards committee or consider the philosophical balance of your competitors before creating JavaScript and shipping it in Netscape? You don't get new good ideas from committees -- just look at the disaster of the last 10 years of the W3C.
Who said anything about Google being "predatory"? I said "not standard."
Google is a search monopoly in some locales, but what I wrote is more about their acting like a browser monopoly.
In a more balanced market, the "defect" choice in the Prisoner's Dilemma is to crank out proprietary stuff and leave your fellow prisoners twisting on the reverse-engineering treadmill (update: don't try open-washing me: NaCl being licensed freely is meaningless when Google controls this very complex pile of code that depends on a deep/wide Pepper API into only one browser; Dart may be simpler but open-source != open or "free"; some vendors such as Opera cannot use any open source).
The "cooperate" choice is to bring prototypes and proposals to standards bodies and gain buy-in and interop.
Yeah, Netscape did a bunch of stuff without standardizing it with w3c or any other body, definitely including JS. We made Microsoft reverse engineer JS as JScript (poor them!). We took JS to ECMA in the second year and only under some pressure from MS and non-sock-puppet web devs. Nothing I can brag about there. That was then, and everyone paid a price.
Now, are you suggesting that Google is the better Netscape, the "good" monopoly? I don't think so.
Again, I'm not moralizing. I'm not the dry drunk lecturing the kids to avoid having the "fun" I had. Google will do what it thinks best. My objection is that we have a non-monopoly browser market, not even a duopoly, with pretty good open-standards innovation. Dart goes the other way and puts the open web at risk. It is fragmenting.
On JS evolution in Ecma TC39, no one wants design by committee. I spoke about how we strive to avoid that in TC39 at TXJS. Give my blogged video a listen if you can.
I don't fear forking the developer community in terms of coffeescript vs. GWT vs. Javascript vs. Dart-to-JS. The web development experience, from front-end to back-end and at the language and framework level, is already incredibly fragmented. More higher-level options that unify the front-end and back-end seem like a win for the web stack.
I agree that there is virtually no chance that Microsoft or Apple will adopt the Dart VM. So the real question is whether the Dart-to-JS compiler combined with the Dart VM on the server side is compelling. This sounds to me like an improved GWT with server-side built in.
The controversial difference is the Dart VM will be in Chrome, enabling Chrome to deliver experiences that other browsers presumably cannot. I see your worry about "works only in Chrome". But this will only come true if Dart-to-Javascript delivers a much, much worse experience. If that's true, there is a problem that other browsers need to confront not avoid. Until they do the Dart VM will presumably be like Flash -- a necessary evil that makes clear the limitations of current standards. Presumably Dart will always compile to whatever the best target is, including whatever comes out of TC39.
Yes, standards must keep up, whatever the proprietary challenger does -- this was true when Flash could do many tricks that browsers generally could not.
Without further nuance, saying "standards must keep up, deal with it" is a bit too consequentialist for me, a Mozilla founder, or for anyone working sincerely in the open web standards bodies. Korean and Chinese banks still require ActiveX PKI plugins or else deny service. The end of better web tech does not justify any means, especially not single-vendor locked-in means.
It seems to me Google is trying to have it both ways, which creates not just the appearance of a conflict, but an actual conflict: does their top talent work with standards bodies to make interoperable specs (based on prototypes and open source and so on), or do they go for wow-effect and sweet-talk if not strong-arm tactics?
Google knows they can't standardize something like NaCl. My belief is that control flow integrity enforcement will sooner come to OSes and OS-targeted toolchains than it will to browsers via NaCl and Pepper. So, safer (but still OS-specific) binary plugins, in the next five years. But mapping Pepper, an unimpressively large and messy API tied to Chrome as well as WebKit, onto other browsers? Other vendors won't get on that treadmill, and I don't believe Google would try to "do it for them". Gears was too painful.
On Dart: Google as a single entity does not know what could be done in Ecma TC39, since the Dart/V8 principals never participated, and V8 needed a fresh-thinking second team finally to get going on Harmony prototyping. But let's agree that one or two designers work better sprinting alone, not burdened by a committee (see my blog post for how we avoid design by committee).
Then Dart could come to Ecma as a proposed spec with a single open source implementation, perhaps later this year or early next. This was not part of the leaked game-plan, however. Clearly Google did not, as of that document's date, intend to standardize. And who believes they would give up "design control"?
Would TC39 entertain Dart as a second language? I doubt it. A whole second spec to write and get through ISO, cross-language-heap cycles to collect, mixed and possibly conflicting runtime semantics, specifically more numeric types in Dart than current JS to coerce and/or fail to convert in the native/managed bridges and API stubs, the list goes on.
Would TC39 take inspiration from Dart and fold ideas and design elements into the Harmony agenda? Absolutely we would, but we need to see Dart first, and this last week does not make for an auspicious launch. Still, no hard feelings if we do get a clean pitch from Dart principals to TC39.
However this plays out, Google has a lot of power. I won't quote Spider-Man's Uncle Ben, but for a company that moralizes about evil-as-in-don't-be, the standards they hold themselves to have to be high. I argue that they can't successfully and faithfully both work for interoperable and better web standards among multiple browsers, and play deep proprietary lock-in games. Pick one or the other.
Innovating in the open and proposing early and often to standards bodies are fine. Delayed-open-washing and increasing piles of proprietary (open-source doesn't matter) single-vendor code, which implements features that are not ever proposed for standardization, are not.
Even if any means are justified toward the end of improving web programmer productivity over what JS affords today, Dart represents a specific, clear and negative judgment on the Harmony work in Ecma, a judgment that I believe will be shown to be a mistake. It can't possibly help us make faster progress in TC39 -- it's at best a distraction and at worst a break in trust among the members.
With tooling (Brightly) and massive developer and evangelism resources backing Dart, will we really have the kind of open-web-first, fair-play contest that people who thought Google was (as Paul Graham put it) "aligned with the grain of the web" have come to want and expect from Google?
It looks like we won't. GOOG is acting more like MSFT of old (also like AOL, wanting sticky eyeballs and time-on-site instead of being the best search engine). The game theories of public companies, the innovator's dilemma, the Facebook challenge, and the browser-vendor Prisoner's Dilemma, all predict this ironic development. It is not a surprise.
But companies are made of people, and I have hopes that Googlers on the right side of this conflict will speak up.
Responding to your comment about NaCl, I just wanted to point out something -- not contradicting what you said, but emphasizing the motivation for the other side. We are right now in a situation where every significant browser innovation has to move through a very slow process on a timescale of years. And those are the successful features not blocked by vendor politics. Also, features which are not of widespread interest (say I want to try web programming in a newly invented language) don't even come up in discussion, due to the limited resources of the standardization process. We would ideally want to be in a position similar to Lisp, where innovations even in fundamental areas like object systems don't have to wait for a committee before being used in production systems, as opposed to something like the Java standardization process.
Now, NaCl may not be ideal for the various reasons you list. But something similar to it, when it becomes standardized, will lead to much faster innovation. Things which were previously held up in standards committees can then happen on the fly. In the spirit of Scheme, browser standardization can progress not library by library, but by removing the obstacles which prevent programmers from implementing libraries themselves. For this reason, I submit that this should be one of the important goals of a web browser implementor, and maybe even worth a few compromises (not compromising cross-platform standardization, of course -- the single greatest thing about javascript is that it freed us from the Win32 API).
First, jashkenas isn't held up by standards committees -- he's able to experiment with cool new client-side web tech on whatever schedule he wants. And building on this, even more new stuff is happening, like Tim Disney's contracts.coffee [1].
Second, your ability to ship the coolest new client-side technology isn't being held up by TC39, or by the HTMLWG -- nothing we or they could do would make MS ship IE 9 for XP (or make people upgrade from Debian Stable :).
Third, the reason that design takes a while is because it's hard. I'm really glad that Dave Herman and I haven't iterated the ES6 module design in shipping versions of a browser -- that's not the right thing for anyone.
Finally, you mention Scheme. If you want to see a truly disastrous example of language progress held up by politics, check out the last 5 years of Scheme standardization.
I guess I was wrong when I said every browser innovation. I also, more or less, agree with your other points -- my goal wasn't to bash the HTMLWG but to point out that standardizing a lower-level API would help us skip some of the other parts of the standards process, by allowing us to do things like compile an SQL library directly. Design is indeed hard, but with people free to implement different alternatives, it can happen in a distributed way, with more possibilities explored even in production systems. Of course, something as basic as a module system should definitely have a standard. Which also leads me to agree with your comment on Scheme standardization, while noting that some of the experiments with first-class environments and f-expressions seem to be a genuine exploration of a new part of the design space and not a gratuitous incompatibility -- but maybe the standardization process should ignore them for the time being.
Quick comment: lower-level APIs can be harder to standardize than higher-level ones, depending on the diameter of the API-set and the implementation dependencies. Running native binaries requires a new compiler and a bunch of runtime API support (Pepper).
OTOH adding typed arrays or binary data to JS is narrowly targeted and pays off for higher-level API builders. And the typed arrays and binary data specs are being standardized.
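For example, typed arrays already give JS a flat, struct-like view of binary data without any native-code plugin (a small sketch of the already-specified API, nothing Pepper-specific):

    // A 16-byte buffer viewed as four 32-bit floats: contiguous memory,
    // no per-property boxing or hashing.
    var buffer = new ArrayBuffer(16);
    var floats = new Float32Array(buffer);
    floats[0] = 1.5;
    floats[1] = 2.5;

    // The same bytes can be reinterpreted as unsigned 8-bit integers.
    var bytes = new Uint8Array(buffer);
    console.log(floats.length, bytes.length); // 4 16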
You have me wrong: NaCl is great, perhaps as near to ideal as it can be (it's still nowhere near "done"; it's experimental where it isn't in DLL hell, libc hell, "portable" LLVM bitcode hell, etc.).
The quality of the NaCl work is not the problem.
The problem is that none of the other browser vendors can or will afford to get on Google's treadmill, trust Google as much as they would have to, then try to implement Pepper (for which the code is the spec, including bug for bug compat), take on all the toolchain dependencies, and hire people to co-maintain on their own release schedules. Not gonna happen.
The OS-specific toolchain-and-runtime alternative looks likelier to me. It solves the distribution problem, sinks the costs at the OS vendors, and avoids requiring coordination at the price of the safe plugins still being OS-specific binaries (which we live with today).
Another problem I have with your comment is the hope that innovation speeds up when we have more native-code plugin options. I doubt that greatly.
Web developers can choose different tools today, but their complexity is mostly confined to the server side and to the development lifecycle there.
Moving some or all of this complexity to the browsers via a large and growing menu of "DLLs" that are required to be downloaded along with primary app source? No way. Remember, the whole gambit of NaCl requires the humongous and messy Pepper API, the API that is nowhere near standardized or ready (or able) to be standardized, which non-chromium.org browsers are not going to support.
It's like the >1000 MS-COM interfaces that ActiveX plugins can and do use in old IE versions.
My bet is that the client side will not grow to look like your random Apache installation with (magically memory- and control-flow-safe) multiple programming language DLLs lying about. We may get to a post-JS future but it won't be that Tower of Babel, not on the client -- especially not on the mobile client.
I agree with what you are saying at the beginning -- we definitely need a proper standard, not a single company having design control. I am not so much defending the way Google is going about it as the value of something which plays essentially the same role.
Ideally, I would like the web developer to have as much freedom as a developer of a native application: the freedom to choose a programming language and the ability to compile to a reasonably low-level target for performance while still being cross-platform. Now, this somehow needs to be done without the security nightmares of ActiveX plugins. (Though I don't see the non-security problems with a bunch of shared libraries stored in the browser. This is essentially what the OS does, and my goal is to replace the OS with the browser. Bandwidth will probably improve so that downloading libraries shouldn't be a problem, and if not, the web site maintainer can still choose not to develop in a large library environment. The popular ones will be cached in any case.)
You seem to be saying that the technical difficulties of doing this in a well-specified (memory and control-flow), secure and cross-platform way are intractable. Maybe this can be solved by moving the VM to a more abstract level.
I don't know how to be more clear. I'm not talking about the technical difficulties by themselves, out of context. The technical difficulties are formidable -- but not insuperable for Google with its many engineers.
I'm saying that in context of market realities, the other browsers won't use shared Google-source as the "implementation is the specification" for NaCl/Pepper, and no one can write a real NaCl/Pepper spec by which any browser could implement an independent and interoperable workalike in less than decades.
So why try if the OSes can do their own non-interoperating workalikes? (And they are already doing this, from what I hear.) If safe native code on the web is mainly for plugins, there's no problem. Plugins are OS-specific today and likely to remain so, safety or no.
OK, Thanks for clarifying. Again, my purpose is not defending NaCl and if this can be done by making JS a better target or by a translator from another VM, that's great, and I hope these projects succeed. And these are worth trying even when the OSes have incompatible workalikes, precisely because they are incompatible. Thanks again for all your work on JS and the open web.
In the very long run, could OS-specific CFI enforcement toolchains and runtimes merge into a single standard, usable on the web?
Maybe, but that is beyond my prediction horizon. Either OS vendors or browser vendors (if there's a difference) would have to standardize a butt-load of APIs.
Since the '60s, researchers have dreamed of universal object files (ANDF, e.g.). It would be too funny if far-future JS merges into this single standard too!
("too funny" in my experience means this will definitely happen...)
I'm having a hard time understanding these objections.
Suppose Dart is a new open-source language from Google for front-end web developers. Surely this wouldn't offend anyone.
Now suppose they include a Dart->Javascript compiler with the language. Well, we have GWT and coffeescript and all the others. Surely this wouldn't be offensive. Pragmatic, even.
Then, being in the unique position to do so, they include specialized runtime support in Chrome (and it will still compile to JS). Now they're evil and anti-open-web?
As to the MSFT analogy -- I've never heard of any intentions of driving competitors to implement ActiveX or VBScript.
Still, I'm curious: if Google, responding to reactions such as this one, said "our bad, it won't run any faster in Chrome. See you at the committee," and released only the JS compiler. Would that ease the objections?
You set up a series of straw men based on nothing I wrote. I wrote "Then Dart could come to Ecma as a proposed spec with a single open source implementation."
Nowhere did I say a single word trying to corral Google into a non-evil, artificially held back (you imply) posture of offering only compile-to-JS, with no Dart VM in Chrome. Chrome ships all kinds of non-standard or proto-standard stuff.
Chrome can certainly ship Dart if it floats their boat, and if it is standards-bound, meaning Ecma TC39 bound, then I will not object categorically.
The issue is whether Dart is even a proto-standard, or more a modern VBScript. Get it?
"Now they're evil and anti-open-web?"
Half-right. You put the e-word in my mouth, but yes, some of Google's actions are anti-open-web.
Remember, I'm not the one throwing around slogans about "evil". Do you really need me to quote Uncle Ben in full? The relevant words from that quote are "power" and "responsibility". Google is acting like a monopoly power. Now, being a monopoly is not illegal or evil, it depends on actions. You bet I am objecting to proposed actions in the plan that leaked.
But whether these actions are evil, I leave to others. They are definitely disruptive to Google's standards effort in TC39. The open-standards-based web does not work by following anything like the game plan in the leaked memo.
"As to the MSFT analogy -- I've never heard of any intentions of driving competitors to implement ActiveX or VBScript."
This is not only ignorant, but naive.
We at Mozilla faced pressure to implement ActiveX and ship support for it, starting in the late '90s. Adam Lock did an implementation, still in our source tree (IIRC in chromium.org too). Without ActiveX, plugins were unscriptable until we at Mozilla founded the plugin-futures mailing list in 2004 and co-created NPRuntime with Apple, Adobe, and others.
Good thing we (meaning Netscape, then AOL, which was under pressure) didn't cave and ship ActiveX support or we would have been on a very long, high-speed treadmill, reverse-engineering hundreds of COM interfaces in IE. But see my earlier comment about the PKI story in parts of Asia.
That's the history. I believe that you don't need to know it in detail to game out how minority share browsers were stuck without a plugin API locked into MSCOM and Windows. Really, what were other browsers to do, without market power to get plugin vendors sinking more cost in a second plugin API? The old Netscape-1.1-era NPAPI did not include scriptability (JS debuted in Netscape 2).
Ok, maybe you did need to know some details, but I still think your use of "analogy" and "I've never heard" are naive.
As for VBScript, MSFT was absolutely trying to make that a de-facto standard, gunning against JS as de-facto standard from Netscape (yes, Netscape was a monopoly at first; not for long). GNU folks even offered to get on the VBScript treadmill and help Mosaic, Netscape, and other mid-'90s browsers support it. Good thing JS came out early enough to choke it off to a few microsoft.com sites.
"Would that ease the objections?"
More straw men. If you don't know what I meant by "consequentialist", look it up. I'm not going to play "how low can you go" if we don't agree on premises, specifically how open web standards are best made, what's wrong with the game plan in the leaked memo, and why monopoly power shouldn't be so eagerly abused.
Being open is nothing like (a) delayed, single-vendor-dominated source code release; (b) sweet-talking or arm-twisting with a big ad budget; (c) hedging one's standards commitments so heavily while tossing fragmentation grenades such as standardizable-only-by-market-power new language VMs into Chrome.
A bit more detail about ActiveX. What really brought pressure on non-IE browsers targeting Windows was not just ActiveX's support for scripting the plugin from the page's JS -- again that was missing from the NPAPI.
Fast forward to today. I'm told that the next Android release, Ice Cream Sandwich, will drop NPAPI support -- this time in favor of Pepper. ActiveX, ActiveG.
Vertical lock-in from plugin to browser to OS to hardware. The 90s -- if not the 80s pre-commodity-PC -- are back.
I think you are being unfair in your criticism. Microsoft, Apple, and Adobe, just to name a few, have all tried to force web developers into proprietary closed-source platforms. If you want to get angry, get angry at them!
Google is just trying to give web developers a choice. They plan to continue fully supporting ECMA's efforts as well, with developers and money. And for this they deserve to be attacked?
You keep talking about lock-in, but if Dart becomes popular, other vendors will integrate it into their browsers as well. That's what being an open standard means. There's no lock-in here (assuming that my understanding of the project is correct). Let's try not to be petty about it, even if JS is your baby.
What makes you think I have not been "angry" at MS, Apple, and Adobe? You must not follow my writings closely!
What's more, my mood is not the point, actual market-facing behavior is.
The topic here is Google and its actions. There are good reasons to focus on the big G now:
* Microsoft is older, formidable but late to mobile, in some ways in decline.
* Apple, we know what we get: proprietary lock-in -- but also extensions to the web platform that mostly (still waiting for some CSS spec drafts) get into the web platform. The secret sauce is the iOS-specific stuff: Obj-C, Cocoa, CoreAnimation, etc. They do not cross the streams.
* Adobe has turned to HTML5. They know Flash is in trouble. Neither Flash nor Silverlight is the "open web" threat some of us worried about four or five years ago.
In contrast, Google is a money machine with significant market power, and it is crossing the proprietary extension and open web standards streams without doing the requisite open spec work -- yet, of course. But very late spec'ing is no help. And again, open-washed open source is not nearly enough.
"Google is just trying to give web developers a choice. They plan to continue fully supporting ECMA's efforts as well, with developers and money. And for this they deserve to be attacked?"
From the http://wiki.ecmascript.org/ recent changes history, you can see who is doing hard work on Ecma proposals. Mark Miller works very hard. Google as a whole does not, and it could do a lot more, but it is not of one mind, and it wants to try both proprietary and open-standards tracks.
My point is that doing both means doing one well and one poorly. Can't serve two masters.
As for "just trying to give web developers choice", grow up. How would it work if Mozilla, Apple, Microsoft, and Opera revealed delayed-open-(spec|source) new and non-interoperable programming languages to "replace JavaScript"? Uh huh. Now can you spell "fragmentation"?
"You keep talking about lock-in, but if Dart becomes popular, other vendors will integrate it into their browsers as well."
Dart is a secret still, hence "proprietary". Delayed release of source and/or spec won't make an open standard that multiple vendors will implement, hence "lock in".
Without monopoly or majority market power, Google cannot force other vendors to support Dart in the short run, and the lock-in effects of having two+ years head start designing and implementing Dart is a negative for other vendors.
Even if Dart becomes so popular via either a native VM in Chrome or Dart-to-JS compilation elsewhere, other vendors may defect and not implement native Dart support.
Unless Google has high Chrome share by then on its web apps and sites that use Dart, it will suffer poor performance in the JS-implemented runtime the Dart-to-JS compiler targets, and may have to come up with a "plan B" (use pure JS, try a Gears-like plugin, etc.).
Meanwhile, JS is growing ever faster and the standardized ECMA-262 language is being evolved. And other vendors may try their own tit-for-tat proprietary responses (Mozilla won't).
"That's what being an open standard means."
Bullshit. I work on open standards, I've done it for 15 years. Have you? Open means Open: no wow-effect reveal after two+ years work, no promises (empty in a lot of cases so far) to standardize later.
Yes, Netscape didn't do that. They were a monopoly. I'm not moralizing here, I'm pointing out that because Google is not a monopoly, it can't help but fragment the web by acting as if it were.
"Let's try not to be petty about, even if JS is your baby."
You misunderstand me. JS is part of a "commons" now, not my baby. It's a shared asset. It requires stewardship, including evolution, and not just maintenance.
The leaked memo declares that JS can't be evolved to fix critical problems, in order to justify Dart, without giving evidence and without Google making a concerted effort in the governing standards body, Ecma TC39.
My concern about this two-faced and fragmenting approach is not "petty" and it's not about "my" anything.
It's about a common good ethos that must prevail or the open web tends to fragment. Not all at once, not fatally, but down a bad and slippery slope.
Just a small historical revision: Netscape was losing a point of market share per month in mid-1997, even before IE4 came out, because Microsoft was enforcing IE as the OEM browser on most desktop PCs.
I cited some market share links from wikipedia elsewhere in this thread, and TechnicalBonobo set me straight on monopoly vs. (let's say) dominant competitor or market leader (whatever that term is).
The point for Dart (vs. JS) is that Google doesn't have 80% or even 50% share.
Those of us old enough to remember (Brendan, that should include you) know that Javascript itself was developed by a company (Netscape) in pretty much exactly the same way, for pretty much exactly the same reasons. Get off your high horse...
If you are that old, you really ought to have better reading comprehension. My argument has nothing to do with Netscape, me, or my high horse.
Look, Netscape had a monopoly and put JS "on first" and just about everywhere. I'm done apologizing for that, I've made up for it in spades on standards work, and it's a fact I cannot recall and rewrite.
The precise point now, here on planet earth and not wherever you are, is that Google is not that new monopoly. Not yet, not likely for years even in their wildest dreams.
If Google were the monopoly Netscape was, sure: Dart would be the new JS. The two would co-exist for a long while but "replacement" would be conceivable so long as market power held up.
Since Google does not have monopoly power, and with the non-standardizing tactics of that leaked memo, Dart is unlikely to be adopted by other browsers. It's fragmenting. It's an invitation to others to inject their own would-be replacements and build a Tower of Babel.
"I'm done apologizing for that, I've made up for it in spades on standards work, and it's a fact I cannot recall and rewrite."
Wouldn't it be fair then to at least give Google the chance to do that standards work with Dart after it's released before labeling the company anti-open?
How early should a language be announced? It's certainly not cool to drop it on the world when it's done and baked, solicit zero feedback, and then demand the rest of the world implement it. But it's also not cool to toss out some vaporware spec with no implementation and no empirical evidence that the language is any good.
Look at what's actually happened, never mind the future:
1. Dart development (Dash, whatever, and there may be more, including CSS and HTML killers; not sure, rumors swirl) has been ongoing for approximately 2 years -- or more. It didn't start last November.
2. Google members of Ecma TC39 have been working on ES.next (some harder than others, I observe) without being able to show how Dart solves "unfixable" JS problems. Such demonstrations would help either:
2a. steer JS toward fixes if the "unfixable" assertion is false (as seems likely to me; little is unfixable on the web), or else:
2b. abandon doomed fix-the-unfixable attempts and instead work harder on other and fixable problems (e.g. being a better target language for Dart-to-JS compilation).
3. Delayed open-source means other browser vendors and volunteers have a high hill to climb to become committers/reviewers/co-owners, so Google controls the open source. This has happened many times. Competitors are unlikely to join, especially if the code is complex and has deep dependencies on other code (cf. NaCl/Pepper).
BTW, WebKit is an example more than a counter-example. It was Apple-dominated even though early-mostly-open, and now Google has taxed Apple committers/reviewers and is gaining the upper hand.
WebKit was early-open, a fork of KHTML at first, then set up as webkit.org in 2005 patterned after mozilla.org and in the aftermath of a recruit-half-the-Safari-team-to-fork-Firefox attempt by Flock. This history shows more open that closed, and earlier open at that, but mixed up with various intrigues and corporate control agendas.
While the history is not a clean win for any point of view, WebKit is a "commons" of its own. Note how chromium.org has to hold the Google-only extensions that Apple et al. won't take.
4. Standardization of Dart could happen anywhere, but it would be perceived as anywhere from wasteful to hostile for Google to bypass Ecma TC39. Early opening of a draft spec or even just an open-source implementation again could have won friends and influenced people on TC39. Late opening goes the other way.
What actually has happened, from what we already know: late-open.
My point isn't that you should apologize -- why should you? Any more than Sun should apologize for Java, or Microsoft for .NET. Sometimes shit needs to get done, and people need to make things happen. Google is trying to fix some problems; no big deal. You see a "Tower of Babel", I see the free market offering alternatives.
And don't be so quick to dismiss the likelihood of other browsers adopting Dart... If I'm not mistaken, IE supports Javascript, right?
Let's argue languages based on technical merits, not how or by whom they were developed.
"Javascript itself was developed by a company (Netscape) in pretty much exactly the same way, for pretty much exactly the same reasons."
That's wrong on at least two counts:
1. ["same way"] Netscape had a monopoly, it wasn't injecting JS into a situation where there was already a scripting language widely used on the web and implemented among multiple competing browsers. It was not fragmenting a multi-lateral browser market or web content language ecosystem.
2. ["same reasons"] The Dart reasons adduced in the leaked memo are nothing like our reasons at Netscape for doing JS. We (marca and I, mostly) wanted a language for non-Java programmers, non-programmers even, which could be written directly in HTML. A language designers, beginners, amateurs could learn by the yard. We did that without failing to upgrade a prior such language already widely supported.
Dart, according to the leaked memo, aims to replace JS because JS can't be fixed. But who says JS can't be fixed? Why, the people making Dart, working at the dominant web company of the last decade! That's no "reason", it is a choice.
"I see the free market offering alternatives."
Uh huh. If Dart has native support only in Chrome, then it's not an alternative for people using other browsers. Then what?
The "free market" is a bogus political phrase. I'm in favor of markets: real ones that self-regulate by preventing fraud (a central clearing/blinding counter-party, bid/offer/open-interest/size transparency) and abuse of power (market winners capture governments -- this has happened throughout history, it's a big problem right now, see the Global Financial Crisis).
Talking about Google's anti-standards power games in the current multi-lateral browser market as if it's all "free market" goodness is b.s.
"If I'm not mistaken, IE supports Javascript, right?"
Think, floppybunny, think! IE supports JS because Netscape had a monopoly once it took over from Mosaic and grew the web via commercialization (SSL, another Netscape innovation). IE had no choice but to support "JScript", and indeed they were thus motivated to help standardize ES1. Standards, hmm.
Google has no such monopoly. The likely outcome of putting a native Dart VM into Chrome and (again per the leaked memo) using Dart for web app development in Google is not to make other browsers roll over and embed the native Dart VM as well.
Can you see Apple doing that? Microsoft? Mozilla would be expected to do it by all you bring-on-the-new-monopoly fanboys, but even if we did it wouldn't help.
We're in a multi-browser market. Competitors try (some harder than others, pace Alex Russell's latest blog post) to work together in standards bodies. This does not necessarily mean everything takes too long (Dart didn't take a month or a year -- it has been going longer than that, in secret).
Open standards development does not mean design-by-committee, either. Multi-browser collaboration among Apple, Mozilla, and Opera was what created HTML5.
Dart goes the wrong way and is likely to bounce off other browsers. It is therefore anti-open-web in my book. "The end justifies the means" slaves will say "but but but it'll eventually force things to get better". Maybe it will, but at high cost. More likely, it won't, and we'll have two problems (Dart and JS).
"Let's argue languages based on technical merits, not how or by whom they were developed."
Now that's just low. You know damn well that Google has kept Dart a secret, so none of us can assess its technical merit (maybe you work for Google and can?).
Sure, when Dart is released, let's argue in a new thread. This thread is about the fragmenting, essentially two-faced politics of the leaked JS strategy memo, and whether and why that's bad for the web.
If you want a new monopoly to sweep the web clean, bully for you. No such monopoly exists now, so realists will still have to work in standards bodies. Is Google really working in standards bodies? As Maciej Stachowiak points out in this thread and over on reddit, more and more of their extensions in Chrome are not being standardized, and apparently won't be proposed "later".
In theory, you could have gotten input from Microsoft, IBM, Mosaic, and so forth before ever releasing Javascript. Did you? No. Even assuming that those other companies had an insignificant share of the market, and could safely be ignored, you still could have asked for public commentary on your design, like you're asking Google to do now. Did you? No again. It's hypocritical to blast Google for doing the exact same things you did, for the exact same reasons.
Secondly, every attempt to create a "programming language for non-programmers" has resulted in something horrible. According to wikipedia, "one of the design goals of COBOL was that non-programmers—managers, supervisors, and users—could read and understand the code. This is why COBOL has an English-like syntax and structural elements." When you are comparing your language design decisions to those made by COBOL, it is time to give up.
Your own stated rationale for getting Javascript done in 10 days was to "prevent something worse" from happening. This is the exact same reason why Google is developing Dart. The only difference is that for them, "something worse" is Microsoft Silverlight, Adobe Flash, and Apple iPhone applications.
Right now, if I want an application that "feels native," Javascript is a no-go, and I have to use one of these (closed, proprietary) technologies. Shouldn't we be saluting Google for trying to make the situation better, rather than blasting them for not asking their competitors for a mother-may-I?
Dart is going to be an open platform that is well-supported by Chrome and Android. That alone should be enough to make it a viable choice. It's great that you created Javascript and that it became an open platform for everyone. Now someone is trying to build something even better. You should be helping them rather than raising objections which are illogical at best, and hypocritical at worst.
First of all, you did not cite market share numbers to show lack of an effective monopoly. You cited a fun wall chart of all the various browsers, most with tiny and non-growing if not rapidly shrinking market share back in 1995. Come on!
Anyone around in the Netscape era knows that Netscape took over most of the market from Mosaic and smaller-share browsers, and evolved the web rapidly with proprietary (in the same sense I use against Google, however standardized later) extensions.
This led to Bill Gates' famous memo about the Internet Tidal Wave, and his cancellation of Microsoft's 1994-era AOL-killer, Project Blackbird.
Netscape dominated browser market share until IE came up to version 4, which was better on Windows than Netscape's old version 3 or very late version 4, and IE was bundled and locked-in hard to boot.
"It's hypocritical to blast Google ...."
Accusing me of hypocrisy shows ignorance of that word's definition in light of the history of my career.
I'm not practicing one thing and preaching another. I worked on JS standardization less than a year after shipping it in Netscape 2 beta. After that I co-founded mozilla.org. I'm not currently doing A and preaching not-A, nor have I been doing "proprietary" work for a long time. I've paid my dues.
Just FYI, like I owe you any explanations, Netscape did collaborate with Sun, gaining the "JavaScript" trademark license (a marketing scam, of course; I hated it). And I did collaborate with Bill Joy of Sun. But really, that's irrelevant.
Netscape was a monopoly in effect (it's very rare for a real-world monopoly to have 100% of the market). We did not have ability or time to make open standards of all our work. We knew, because Netscape had rejected a low-ball acquisition offer from Microsoft in late 1994, that Microsoft was coming after us. And we knew they had the power to kill us.
If we had not pushed hard to add programmability to the web (JS and Java in Netscape 2, plugins before in 1.1), then Microsoft would have been the reigning monopoly power, and would have abused that power (which did happen; it was prosecuted successfully).
So, though it's no justification in general, Netscape -- including my JS work -- did forestall a likely Microsoft push of their tech -- including VB as the Web scripting language. Monopoly good-cop/bad-cop act, or just history now, neither all "good" nor all "bad".
This is in contrast to today, where there is no browser monopoly and the top three have very close shares in many locales -- especially if you sort by model as well as make, and include WebKit variations on mobile, which is rising to eclipse desktop.
"Dart is going to be an open platform that is well-supported by Chrome and Android."
I think your sock-puppet is slipping. How do you know this? Do you work for Google?
Nice HN history you have, btw (two comments, both on this thread today). Why not do as on Google+ and use your real name? I have.
"1. ["same way"] Netscape had a monopoly, it wasn't injecting JS into a situation where there was already a scripting language widely used on the web and implemented among multiple competing browsers. It was not fragmenting a multi-lateral browser market or web content language ecosystem..."
"First of all, you did not cite market share numbers to show lack of an effective monopoly..."
"Netscape was a monopoly in effect (it's very rare for a real-world monopoly to have 100% of the market)..."
I don't think you know what the word monopoly means. Market share is entirely irrelevant. What constitutes the definition (the reason the term even exists: what it is meant to describe) is not how "big" a company is, but the _exclusivity_ (as in: is anyone else allowed to ENTER that sector of the market). And using the words "effective monopoly" or "in practice/real world" doesn't work as permission to misuse the term, either. In fact, it does the opposite. Actual real-world "effective" monopolies would be: an entity holding a patent for some invention, the State having exclusive control of force, etc., etc.
Some company being "the only company who is currently doing X" is not a monopoly, as long as anyone else can enter the market.
BTW many of us developers are totally against the evil "effective monopoly" (see what I did there :P) that this not-so-pretty JavaScript language has in the world of web development, so the idea of more options doesn't sound bad at all.
Because last time I checked, there was no axiomatic law embedded in the fabric of the universe which stated that "Every desired improvement and/or business endeavor in the realm of browser scripting should be expressed in the form of a proposal for a next version of JavaScript, over at http://wiki.ecmascript.org, or else the oh-so-heavenly multilateral dimension of the interweb net will fragment and spiral down eventually collapsing into a black hole made of kittens"
Dart is probably going to suck, though (and Java-stained ideas polluting its design will probably be the cause). Also, unless Chrome wants to commit suicide, it's gonna keep supporting JS in the current and future versions, so you JS people should put a halt to this soap opera. Pause thy bitchfest.
I defer to your economic terminology expertise, but Netscape did have 80% of the market during a huge growth phase (people extrapolated exponential growth from a few months or quarters in '95 and '96). What's the term for that position?
Whatever you call it, if Google had that now or very soon, it could indeed ship new stuff and "make it stick". Since it doesn't have that power, I repeat that Dart is fragmenting.
No one is obligated to work on extending existing standards only, not try injecting new ones. Doing both without market power to make the new ones stick is going to make a mess in my view. I keep saying this, do you disagree?
I'm the last person who wants to save JS from extinction. If it were cheap enough to kill, I'd do the deed myself. It's not cheap to kill -- quite the reverse -- and the leaked memo's assertions about it being unfixable are exaggerations at best, and betray a significant conflict within Google.
(BTW I agree there's a smell of "Java-stained ideas polluting [Dart's] design.")
If TC39 had a crack at standardizing Dart or putting ideas from it into ES6, everyone would be better off -- even if Google then launched Dart anyway.
Instead, while we in TC39 were working in the open on ES6 (which is past new-proposal freeze), we knew nothing. That is not just missed opportunity, I call it poor stewardship on Google's part.
I explicitly support Brendan's arguments. Programming languages are a STANDARD to help people COMMUNICATE units of functionality. If we frequently invent new languages that mainly just give us a new "style" of doing things (and of course some will do this better, and others will do that better), we get an unnecessary chaos of different codebases that are incompatible with each other yet do basically the same thing. (1)
Language design implies even more responsibility in versioning and adaptation to new programming paradigms than API design.
Let me again cite Atwood's Law: "Any application that can be written in JavaScript, will eventually be written in JavaScript." (see http://www.codinghorror.com/blog/2009/08/all-programming-is-... )
It doesn't matter that much which language a program is written in, as long as the language level fits the level of your module / application and portability (1) is high.
I would even vote against a separate module type and other experiments that complicate JS syntax. But this is, of course, another discussion. What I like and heavily use is CoffeeScript.
Thank you Brendan for keeping things in order!
I'd be curious to see this backed up. Certainly, on a long enough timeline, we are not at a peak of "works only in X", and -- ignoring those beleaguered Opera users still subjected to crappy UA sniffing -- I can't think of the last time I came across a site that blocked me over my choice of browser (I'm a Chrome and Firefox user as well).
If, on the other hand, you're talking about fancy new CSS3 or WebGL (or whatever) demos leaving out some browser due to neglecting to add a particular prefixed property (or whatever), I don't really see that as a problem. Prototype then standardize, right? Yes, not feature detecting and prefixing properly is poor web development practice and yes, they are cutting out part of their audience, but using an experimental feature inherently means that it may break at some point, and almost certainly isn't supported in all browsers.
When 3d CSS transforms land in Firefox soon, there are going to be a whole lot of demos out there that won't work. I would say that's an education and library problem, not a fragmentation one.
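To be concrete about what I mean by feature detecting and prefixing properly, something along these lines (just a sketch; the prefixed property names are the usual suspects of the moment, not an exhaustive list):

    // Illustrative only: find whichever (possibly prefixed) transform property
    // this browser understands, instead of hard-coding one vendor's name.
    function transformProperty() {
      var candidates = ['transform', 'WebkitTransform', 'MozTransform',
                        'msTransform', 'OTransform'];
      var style = document.createElement('div').style;
      for (var i = 0; i < candidates.length; i++) {
        if (candidates[i] in style) return candidates[i];
      }
      return null; // unsupported: degrade gracefully instead of breaking
    }

    var prop = transformProperty();
    if (prop) {
      document.body.style[prop] = 'rotate(10deg)';
    } else {
      // fall back to a static presentation rather than a broken demo
    }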
I was explicit about the problem, which is not new features that come to some browsers first, advertised in a browser-agnostic way. That is ok, provided the features are on a standards track. What I cited is fragmentation due to capital-C Chrome as a requirement.
Google is doing deals and marketing web apps and add-ons as "Chrome". This is showing up all over the place, most recently in my experience on the Starbucks free wireless service agreement page.
Major content publishers Mozilla has talked to are surprised that we can host their Chrome-targeted web apps and HTML5-based extensions. They think such things are exclusive to Chrome. If they use Dart and need native performance, then that'll be true. There is a slippery slope, especially when more and more of the differentiation is not on a standards track (Dart, NaCl, WebSQL).
Nevertheless most of the current differentiation is bogus and any modern browser will do. There's certainly some "WebKit required" fragmentation. We saw this with Amazon's kindle web app, which is specifically built to WebSQL and not IndexedDB. But I do not believe any big player is pushing "WebKit required" as an agenda or marketing program. In contrast, Google is clearly pushing "Chrome" as a requirement or works-best-in brand.
The Chrome-first moves are obvious, maybe too obvious. Defenders can always play up the "experiment" hedge. That helps, but only so much, especially when many of these experiments could work in other browsers with trivial changes.
I'm told that people have run chrome.angrybirds.com in Firefox. So it should be html5.angrybirds.com but -- no surprise from a marketing angle -- the domain name starts with C not H.
... and I was with you until the WebSQL vs IndexedDB thing.
IndexedDB is a horrible, horrible thing. It should be put out of its misery and replaced with WebSQL in Firefox.
Mozilla is singlehandedly holding back offline web apps with its insistence on a slow and hard-to-program -- yet buzzword-friendly -- technology that no one actually wants to use. There's no reason whatsoever to stick with a dumbed-down K/V store when you're not trying to replicate data across data centers.
So, IMHO, you've brought this bit of "WebKit required" on yourselves. WebSQL is the only production-ready offline web app technology available today.
Take it from someone who offline enabled one of the biggest web apps on the web: I'd rather leave out offline entirely than code IndexedDB.
Considering that Apple will never support it, it's pretty much dead. And I feel that Mozilla is really on the losing end of this particular battle.
How should we deal with the mandatory dependency on a particular major/minor/patchlevel version of SQLite? That seems bad to standardize on.
I personally think there's definitely room for a SQL database API built into the browser, but Mozilla did have a point. It should be something a little bit better specified than that.
Also, it's not singlehanded -- IE refused to support Web SQL as well IIRC, and they aren't going away anytime soon.
Thanks, that's worth reiterating: IE wasn't buying. Alas they didn't get IndexedDB done for IE9. And, just because WebSQL wouldn't fly does not make IndexedDB the one and only winner. We need to revisit this whole messy area.
did anyone on the standards bodies (from MSFT?) suggest the LINQ framework as a WebSQL "replacement"?
it makes a lot of sense IMHO. it abstracts away dependency on exact SQLite version, but preserves a familiar SQL-like mental model. it is stable, mature, well understood and widely deployed/used (god i sound like a shill.. ;)
and has the added benefit that it can be implemented in two different flavours: as a standard JavaScript (ES3) library on top of WebSQL (or SQLite in Gecko), or as a first-class language extension with syntax sugar similar/related to array comprehensions (from JS1.7/Harmony/ES6)..
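to give a rough idea of the library flavour (a sketch only -- the query wrapper below is a made-up API, and the underlying calls assume WebSQL's openDatabase/transaction/executeSql surface):

    // hypothetical LINQ-ish wrapper over WebSQL -- a sketch, not a spec
    function query(table) {
      var clauses = [], args = [];
      return {
        where: function (clause, value) {      // e.g. where('age > ?', 21)
          clauses.push(clause);
          args.push(value);
          return this;                         // chainable, LINQ-style
        },
        select: function (columns, callback) {
          var sql = 'SELECT ' + columns.join(', ') + ' FROM ' + table +
                    (clauses.length ? ' WHERE ' + clauses.join(' AND ') : '');
          var db = openDatabase('app', '1.0', 'demo', 2 * 1024 * 1024);
          db.transaction(function (tx) {
            tx.executeSql(sql, args, function (tx, result) {
              callback(result.rows);           // SQLResultSetRowList
            });
          });
        }
      };
    }

    // usage: query('people').where('age > ?', 21).select(['name'], showNames);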
if it wasn't proposed, can someone from WHATWG (like you Brendan ;) politely ask Microsoft to "contribute" any related intellectual property to W3C? how likely do you think would it be for AAPL/GOOG to accept it? (and if not, it could simply be implemented as a thin layer above WebSQL ;)
Maybe IndexedDB is bad. I'm withholding judgment. That's not the point, and your demurral on that particular example should not make you reject my entire argument. WebSQL is not standards-track, period. Whether it should somehow get back on the standards track, let's debate elsewhere.
We could go awry on H.264 vs. VP8 too (I'm not religious), but note how Chrome has best of both worlds, including paying the gangster fee (MPEG-LA license). Not exactly "open".
The minor point for this discussion is that works-in-WebKit happens, but mostly unintentionally or out of rank laziness on content authors' parts. It may even (as you suggest and I agree) provide a big clue-stick to repair a standardization mistake or pox-on-both-houses-try-again situation.
Meanwhile, and this is the major point: works-best/only-in-Chrome looks like an intentional marketing game, backed by a nine-figure budget. Big difference there.
I agree with many of the points you have made in this thread, it is indeed a worrying trend.
However, I am not sure it is fair to tar WebSQL with the same brush as these other technologies. WebSQL was on the standards track, and in fact started there. The main reason it isn't any more is that Mozilla representatives insisted on kicking it off the standards track instead of fleshing out the spec to be truly independently implementable.
The most common use of SQL you see is in iOS/Android-targeted web apps, because WebSQL has been around there for a few years, while IndexedDB is new and isn't shipping in production quality anywhere.
Google is actually pretty sanguine about replacing WebSQL with IndexedDB.
Better examples of this trend would be:
SPDY as a replacement for HTTP, actually used in production by Chrome talking to Google servers, not a hint of it on any standards track.
VP8, where Google's single-source implementation is more authoritative than their spec, and no hint of it on any standards track. No apparent interest in putting it there.
(I realize Mozilla is on board with VP8 due to the patent licensing issues with MPEG, but consider the risk of giving Google total unilateral control of video codec design. I would have hoped that Mozilla insisted on taking VP8 through a real standards process before signing on wholeheartedly.)
Maciej, why do you write as if Mozilla doesn't ask for (not "insist" on) things with Google and fail to prevail? We do that all the time. Some in Mozilla are still waiting for the good thing to happen.
In VP8's case, what alternative do we have that isn't OS-dependent? We can't afford the gangster fee.
I am wholeheartedly convinced that you asked privately. It might have made a difference if Mozilla had publicly asked for an open standards commitment. As things stand, Google got PR cover for a power grab. I also think it was a reasonable choice for Mozilla to sign onto VP8 anyway, given the lack of better alternatives for you. However, I think VP8 is still a better example of the phenomenon of Google's increasing promotion of purely Google-controlled technologies than WebSQL is.
The video codec situation is really quite sad. There is no technology option that is all of an open standard, RF licensed and sufficient quality. Vendors all have to make their choices out of the imperfect options we have.
I agree that gmail only having offline for certain browsers is suboptimal for the web (though they claim, like amazon with their kindle app, that they will expand support), but it's worth noting that at least the gmail app includes an installable component equivalent to an extension...and there are plenty of single-browser extensions for every browser.
I don't think Dart will beg other browsers to bend to its will and add a VM.
On the other hand, I find it more likely that, in a whole year's time, the people behind Dart thought about NPAPI. They may very well produce a plugin that everyone will download.
That would make Dart the new Flash, but that's the price to pay for a serious cross-browser Dart.
Of course, that would also make Dart a second-class citizen in the web platform, and I guess the Dart people don't want that. However, I find it hard to believe that they hoped all browser vendors would bow before the new programming language overlord.
Now, making browsers speak Dart will be the easy part, I think.
The Enterprise will have a hard time accepting a brand-new language blindly when they have just learned that C# is slowly losing Microsoft backing.
Programming languages all make mistakes, and Dart will be no exception; they had a full year and bright engineers, but that rule is sacred. No matter how hard they advocate the niceties of the language (which I assume will be closures (again!), the full funarg, yet another garbage collector, optional typing, type inference, bignums, modules, event-oriented programming, maybe lazy evaluation), I know many programmers who won't bother learning any programming language that falls too far from the C-like tree, especially if it has odd features, and especially if there are mistakes in the way those features work together.
I have seen many bright programming languages that never found any serious adoption. Scheme would have been very interesting in real work environments; Perl6 has many interesting features, but is too late and slow; Haskell still has potential, but I don't see it suddenly go past Objective-C, C# or Java in popularity.
I would love Dart to find success. But this is not a dream world. The platform it seeks to conquer, the competition it must overcome, the tender age it has: Dart is so very late for such a challenge.
This has been studied -- I don't have the stats, but it may have been in the context of Gears: users don't install new plugins lightly. Distribution deals could be bought and paid for, but that is a long road (see ChromeFrame).
Also, NPAPI is not expressive enough to enable a new programming language VM to be integrated on par with JS. Big deal, Google can write browser/OS-specific code? They tried with Gears. It's hard, and versionitis/unfrozen APIs bite hard.
Take these two points together and the NPAPI route is not going to get Dart widespread adoption outside of Chrome, any time soon. So would web developers use it if they had to compile to JS for too many visitors?
I doubt it. CoffeeScript is a transpiler and it may actually speed up code compared to writing in JS. It is popular, with slick RoR integration, but it's not taking over the world from JS.
Dart's new semantics (at least new number types, per the memo) will result in slower compiled-to-JS code, although the win may be programmer productivity. It depends on how much slower, but the first barrier with compiling is getting devs to put up with the toolchain pain.
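To make that cost concrete, here is a purely illustrative sketch (we have not seen Dart's actual numerics, and the helper below is made up) of why emulating a fixed-width integer type on top of JS numbers is slower than using them directly:

    // Hand-written JS: engines keep this loop on their fast untagged-number paths.
    function sumPlain(n) {
      var s = 0;
      for (var i = 0; i < n; i++) s = s + i;
      return s;
    }

    // What a compiler emulating a hypothetical wrapping 32-bit int type might emit:
    // every arithmetic step pays for an extra coercion (64-bit ints or bignums would
    // be worse still, needing object wrappers or digit arrays).
    function addInt32(a, b) {
      return (a + b) | 0;   // force the result back into the int32 range
    }
    function sumEmulated(n) {
      var s = 0;
      for (var i = 0; i < n; i = addInt32(i, 1)) s = addInt32(s, i);
      return s;
    }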
Now that Apple has successfully marginalized Flash as a cross-platform development environment, browser fragmentation will ramp up and we'll see more of this type of behaviour rather than less. Apple didn't start this fragmentation, but they are committed to ensuring that the web is not a level playing field, with technology segmentation like canvas and css ES. Microsoft has always tried to fragment the market with IE. Take, for example, the current version of web-based Outlook, which only functions at full power in IE; otherwise you get the 'lite' incarnation.
And now that Google is joining the fray we're concerned? Not at all. There will always be applications that only run on one browser (or operating system like Android). This same methodology will be incorporated by Google for gmail or any of their other services which will allow them to control which environment gets the best features and allow them to leverage their technologies effectively when their competitors are already doing the same thing.
You're not concerned about fragmentation so long as there's a "lite" fallback. Got it. Let's see how this plays out. It may be that browser vendors with hot websites have to make the "lite" experience as good over time as their proprietary one, so they work fairly with the standards bodies.
Again, for this thread, per the leaked memo, that was not the plan with Dart. Coming to a standards body late will not work -- it may get a spec, but not Dart support in other browsers -- unless Google has near-monopoly power.
"From a practical standpoint, all these apps are integrated with the web. How the code is delivered, or what runtime is used, are just technical details."
How standards are made, what must be implemented in an interoperating browser, are not "just technical details". If that were so we could treat browsers like bespoke server installations, with different language VMs, databases, etc. That would fragment the web into non-interoperating silos.
On the server side, subsidiarity is the rule. Different sources of authority over domain names can provision as they please. But the servers all speak HTTP, HTML, CSS, JS, PNG, etc. -- a relatively small set of content languages. And the browsers must all agree on how these work.
Given that it is also a server-side language, this is not surprising or worrying. It's no different from node.js-specific code that doesn't work in the client. I think it's just inherent to heterogeneous environments. If they can do better than GWT, I think they'll be fine.
Google is clearly priming to do a Microsoft-style embrace-and-extend in a big way. Between Chrome, V8, NaCl, WebGL, and Android, they have a wide range of technologies that they could begin steering away from standards.
But...why? Google makes money selling ads. It doesn't make money selling you support contracts for software that you've been locked into.
From what I can tell, Android exists so that Google isn't at the whim of Apple. The big G's greatest fear is that iPhones become the de facto smartphone, and Apple has the power to cut them out of their platform (by banning Google's ad networks and by setting the default search engine to Bing). I doubt the Android platform will ever really make money for Google.
When making these analyses people tend to forget that Google is a company of thousands of engineers. If we can devote a bit of engineering time to building better environments, the benefits across the entire company are multiplicative. Would you expect anything less from an engineering-driven company?
Don't forget SPDY. If you use Chrome, you're probably using SPDY when accessing services like GMail and Google Search. What does this get you? A snappier experience. This could dissuade you from using other services even in just the 'feel'.
Google wants to lock you into their advertising channels (gmail, google docs, google search).
As long as your mail app is a source of revenue for someone (whether from support contracts or ads), they benefit from you being locked in, at least locally. Globally they may suffer because of the loss of goodwill, of course.
Google's strategy may be different from Microsoft's, but the motivation is the same: to trap developers, make them dependent on you, and differentiate your technology from your peers'.
Except that they have a monopoly position in none of those domains. Microsoft used their power to protect their monopoly. The Dart document itself says the number one reason for Dart is to protect Google's massive bet on the web platform. It's easy to see how Microsoft or Apple could seriously undermine Google by keeping the web platform second rate, which is certainly within their power given current browser marketshares.
I'll pipe up with one thing nobody seems to be emphasizing. A lot depends on how good Dart turns out to be. Suppose it turns out so good that we get all four of the following:
1. No worse a user experience in any browser;
2. A much better user experience in Chrome;
3. A much better developer experience;
4. A much better server-side platform than V8.
If we get all that, standards or not, I will be jumping for joy. If Dart is that good, it will become standard, either de jure or de facto as alternatives fall by the wayside.
Of the four, #1 (a good enough Dart->JS) seems problematic. #2 may be bad for other browser vendors, but if it's good for users and programmers, why should I care? I see no virtue in a lowest common denominator. #3 and #4 are hardly objectionable. Only those who like them need bother.
I don't know how likely this win-win-win-win is, but here's why I'm indulging in the hope. These guys have been gods of fast dynamic VMs since Strongtalk. If Dart is their chance to build what they always wanted, but this time with 20 years of experience to draw on and the institutional support of a Google behind them... well, we have the chance of something insanely great. Greatness sets its own standard.
Comparisons to VBScript and Flash only kick in if what they come out with is lacklustre.
The comparison to VBScript is not technical at all. It is about single-vendor control.
Your item 1 is obviously problematic. If Dart and JS have close semantics, it's just a transpiler like CoffeeScript, which intentionally sugars JS semantics with leaner syntax.
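For example (CoffeeScript source shown in the comment; below is roughly what its compiler emits -- the semantics are plain JS, only the syntax is leaner):

    // CoffeeScript source:
    //   square = (x) -> x * x
    //   console.log square 4
    // Roughly the emitted JS:
    var square;
    square = function(x) {
      return x * x;
    };
    console.log(square(4));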
But this does not match the leaked info and hype about Dart, and even then, such warm beer still requires you to run a tool over your primary source, which costs adoption. Lots of languages are already contending in this transpiling space.
More likely, a Dart-to-JS compiler (making non-local transformations) and a JS-implemented Dart runtime (to support the novel runtime semantics, e.g. new numeric types) will be required.
This means a worse user experience in browsers that don't have the native Dart VM, because performance will lag (especially depending on the new numeric type details) and bug-for-bug compatibility will be lost.
The native VM will be the super-fast source of truth. The compiler/runtime will be a stop-gap. This brings us back to the standards table, unless the idea is to corner the browser market. See under "On Dart: ..." for what the standards table would look like if it is anywhere near today's mix of browser vendors and other interested parties.
#2 is bad for users unless Chrome is shipping on all hardware and operating systems users use. In fact, that's the same reason any sort of web-related monopoly is bad.
Where's Chrome for PPC processors? Chrome for ARM (being worked on, but not there yet)? Chrome for whatever architecture or OS users might want tomorrow?
The right comparison here in your success scenario may not be VBScript but ActiveX, except without the security issues ActiveX had. If ActiveX were required for use of gmail today, how would you feel about it? How would you have felt about it in 2002?
The ironic thing about this scenario of ActiveX being required by Gmail is that XHR sort of has its roots in ActiveX -- that is, before Mozilla and others had to replicate it to remain competitive.
You say XHR was a good idea, and it's clear now that reverse engineering it and putting it into Mozilla's browser was a good move because it allowed that technology to advance the web -- but at the same time you are showing opposition to taking a similar course of action for a Dart VM because it's not an open standard.
I understand the viewpoint that you would rather have had Google focused on improving JavaScript through Ecma TC39 than creating another language with absolutely no outside influence. I also admit that this is a pretty inconsiderate move for them to make, especially as that email made it sound as this new language aimed to be a "replacement" for JS. But despite the lack of good will behind this action, I would still hate to see something as rare as a new client side language, that has a decent potential of allowing developers to think differently about solving problems on the web, die off because no one wanted to adopt it.
See my other reply on XHR. It is orders of magnitude simpler than Dart. Also, even though MS had monopoly power when they added XHR, they did not abuse it on the web. XHR was for OWA. Mozilla and Opera were not coerced by MS-inspired use of XHR on the web. We chose to implement (with some changes, see the wikipedia page) at leisure.
In contrast, Dart usage in Google web properties targeting the native VM in Chrome, per the leaked memo, would pressure other browsers to adopt or reverse-engineer a much more complex non-standard.
I like Apple's hardware (modulo the fans running all the time on my latest dual-core i7 MBP), and some of its software (but WTF happened to Mail.app on Lion?).
Still, "insanely great" didn't make Apple-only web tech ubiquitous. That took hard, open-standards spec-work in the CSS WG. And that, to the extent it happened with help from Apple, is the opposite of the Dart plan in the leaked memo.
Stop worshipping i.g. It's never that great. Native apps on iOS use Objective-C, for crying out loud. Everyone's sh*t stinks.
I.g. is no excuse for monopoly behavior. The greatness goes away, leaving only insanity.
Like I said in a tweet this morning, I know those Googlers know way more about JS than I do, but if they think (as per the leaked mail) that devs are choosing iOS over browsers because of JS, they have their heads in the sand.
Those choices are happening because of iOS as a platform, not because devs are somehow so much happier learning/using Objective-C! The best thing Google could do is to KEEP working hard on improving the browser platform, like you guys are doing with B2G and they started to do with ChromeOS.
For example, I'd love to hear from the ChromeOS team exactly what in JS is blocking them delivering all the kinds of APIs that B2G is promising to deliver.
And on the server side, someone should tell the hordes flocking to Node.js that they are all wrong! As you pointed out, server-side everyone can choose whatever they want, and given that choice, just look at everyone jumping on the Node.js bandwagon. I'd love to see all those thousands of modules that devs around the world have written in Go recently! (npmjs.org)
"So you would be OK with requiring everyone to use Windows as their operating system, in perpetuity?"
Of course I don't think that. I meant by "ubiquitous" to imply the opposite. I fear we're talking past each other.
The risks of Google behaving like a Microsoft-style monopolist here are low because (a) they have no monopoly in browsers, VMs, or OSes to abuse, (b) their culture is the most hacker-driven of any large company, (c) their interests are aligned with what is good for the web, and (d) the industry has changed since the bad old days.
Given this, and because the potential value of this project is so high, I'm willing to give them a chance.
Since making my original post, I've read a bit more about Bracha and Newspeak. Assume for the time being that Dart is an evolution of this work. We're talking about a fast, small, even-more-dynamic Smalltalk augmented with ideas from Self and E, designed for the web and multicore. That's mind-blowing. Now throw in the implementation prowess of a Bak and the institutional support of a Google, and we're talking about something potentially game-changing. To judge by their track records (e.g. the Resilient embedded Smalltalk that Bak worked on), these guys are out to correct not only the mistakes of Javascript but of Java as well. Historic stuff, which they have as good a shot at pulling off as anybody ever has.
I admit the odds are against it turning out so well. But this is the rare case where at least some optimism is well-grounded. If what they release does turn out to be an attempt at the above, it will be exciting.
"Ubiquitous" ActiveX without Windows is a pipe dream, not a useful premise. Kind of like assuming world peace first, then resolving a lesser conflict.
With ActiveX, the interfaces that might be used are too many, and their implementations too large and complicated, to have bug for bug compatibility. Even having source code would not be enough. All-paths testing would be needed.
Someone could (and a few companies did, for COM on Unix) tediously reverse-engineer a subset of COM interfaces and components implementing them. No one ever pulled off anywhere near a full Windows workalike.
The same threat arises with delayed-open-source controlled by a single proprietor. You can port and fork such code, but you can't depend on the single proprietor, especially if it's a hostile competitor. You'll have to co-maintain if you don't make a long-term fork of your own.
Source != spec. An implementation will over-specify in its source code and API. Abstractions leak. Standards can and do turn down specificity by dropping to prose or more formal means, without overspecifying.
"The risks of Google behaving like a Microsoft-style monopolist here are low because (a) they have no monopoly in browsers, VMs, or OSes to abuse,"
Look closer: Google is a search monopoly in many locales, and they are an emerging duopoly member (with a larger share than Apple) on mobile, whose growth predicts it dominating desktop.
"(b) their culture is the most hacker-driven of any large company"
That was true a few years ago. It is much less so now. Larry has cut back on the thousand flowers, and focused on a few strategic bets: Android, Chrome, Google+, Search.
"(c) their interests are aligned with what is good for the web,"
So you say (and perhaps Google people say this, but I know some who candidly admit it just ain't so any longer).
Why do you believe this? As a public company, Google has to show quarterly good results, not just great profit margins but bubbly growth, to keep its stock appreciating, to retain and recruit (see the Facebook defection problem of last year). This is a big distortion on a pure open web mission.
"and (d) the industry has changed since the bad old days."
And human nature has changed since the 20th century, or the French Revolution, or the dark ages? Yeah, right.
You're much closer to the details of all this than I am, and it's possible I've got them wrong. But I can't agree that Dart cannot be a positive long-term contribution to the web no matter how good it turns out to be. I'm willing to be proven wrong about that, but only by the actual outcome. In the meantime, I'm excited -- purely because of the track record of the creators. If and when my hopes are dashed I'll come back and post a mea culpa.
In that case, I have no idea how you think ActiveX could ever have become "ubiquitous".
> they have no monopoly in browsers, VMs, or OSes to abuse
They have a monopoly in search engines (and some would argue webmail clients), and are trying hard to work on browsers and OSes, especially on mobile... Give them a few more years, and see how things look.
> their interests are aligned with what is good for the web
They used to be, for a bit. At this point, I'm very skeptical that this is still true.
> the industry has changed since the bad old days
Has it? I don't see much evidence of this. Particular _markets_ have changed, but attitudes really haven't.
< I have no idea how you think ActiveX could ever have become "ubiquitous". >
You know what, scratch everything I said about ActiveX. I didn't think about it very much (it was so awful, I can't bear to), and you're probably right.
< I don't see much evidence of this. >
Really? To my mind the industry is more hacker-centric than it used to be. For example, there's a spectrum of how open and sharing Google may turn out to be with Dart. Some points on that spectrum are better than others, but nowhere on it is the possibility of an old-school closed-source implementation. That's a major change from how things used to be.
You're right, open source and the Web have both done a lot to take down old-school, straight-ahead proprietary ploys such as closed source, market-power-based de-facto standards that competitors have to reverse-engineer at very high cost. Even on Windows (itself still closed source).
The two technical/cultural shifts, open source and the open web, make for a good trend.
That's why counter-trend actions such as delayed-open (Dart) and delayed/partly-open (Android, for example; other examples are easy to find) raise hackers' hackles. At least for some hackers.
And hackers aside, the competing vendors meeting in existing standards bodies get left out. That clouds the prospects for future standardization, unless (again) based on market power. Which is not there, if the topic is Google Dart (and it is :-|).
It is probably true that, even with a cross-compilation strategy, if Dart doesn't get adopted by competing browser vendors it will be a force towards fragmentation, which tends to be bad for the web.
An interesting question to consider, IMO, is whether a "clean break" for client programming languages is a bad idea outright, irrespective of context and motivation. If it is, it follows that the principal means for information dissemination and collaboration our species has devised will forever be programmed in the same language. It is a bleak thought, because, even if the language was faultless at its time, progress happens and the web needs to improve.
Of course Javascript can evolve (I personally very much like the proposals, and look forward to working with ES.next), but any language designer knows once some decisions are made there is no way to go back and revisit them.
If we allow for the possibility of a "clean break", then we can ask ourselves how could it be done. IMHO, committees don't fare all that well in language design. I think it would have to come from a single knowledgeable designer or a group of closely aligned knowledgeable designers. I also think it would be impossible to evaluate such a language without a real implementation. A real implementation would have to be in a real browser. So, a Browser vendor that employs experienced language designers seems like a good candidate to try a clean break, and I'd be glad if it succeeds.
Thumbnail of possible future (first is happening): ES6: modules and shallow continuations; ES7: guards, contracts, event loop concurrency; ES8: macros, parallel arrays (SIMD).
JS evolution disrupts the clean-break rationale.
Languages can change both surface and semantics. The open question I see is: can JS become the VM (its source the "bytecode") to host many languages and several approaches (none memory unsafe; no shared-memory threads).
We have lots of evidence for JS becoming the VM already, along with nay-sayers who want a clean break.
I'm going with evidence over clean-break assertions until we hit a wall that is either inherent in the language or emergent in the standards body and the market. Again, this is why I am concerned about Google playing fair.
Dart will not be 2x (or more) faster than JS.
Dart will not be that much more expressive or elegant than JS.
Google would have to replace the whole stack (from HTTP to DOM to Dart to IDEs) to make it really compelling, but I'm pretty sure they will give it a shot.
Fortunately there is competition:
Nobody will dedicate himself to the Chrome Web Store and Google web technologies and miss out on a future Facebook web-app store (or whatever Project Spartan turns out to be) or a future Apple App Store (I believe in Apple's intelligence and vision, and that they will realize the potential of the open web and the cloud in time).
Android is also in trouble, so unless Google wins the mobile market big time, it will get more fragmented, thus forcing everybody to jump into the open web and not the Google web.
And then there's Mozilla -- let's hope they can pull off some good technology with B2G and Firefox Mobile!
Why are you so sure of this? The people involved appear to be big believers that dynamic languages can be extremely performant provided they are designed with the VM in mind. LuaJIT bears this out: a simple 1000x1000 matrix multiply in LuaJIT is already 20x (not 2x!) faster than V8 and on par with C.
I think far too much emphasis is being placed on whether Dart replaces JS in the browser. Google would gain a huge amount by having a performant server-side language that can be tooled and compiled to JS for the client.
The difference between V8 and LuaJIT is nowhere near 2x on matmul, and it comes from differences in language features (no debugger interruption support for generated code in LuaJIT2) and the lack of certain optimizations (e.g. array bounds check elimination) in V8 [the matmul_v2.lua used for measurements relies on ffi arrays, which do not include bounds checks at all]. Code generated by both compilers is roughly equivalent (invariants hoisted out of the loop, temporary double values kept in xmm registers, etc). A bit more detail: http://blog.mrale.ph/post/5436474765/dangers-of-cross-langua...
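For reference, the kind of kernel such cross-language comparisons measure looks roughly like this (a sketch with flat typed arrays, not the exact matmul_v2 harness):

    // Naive n x n matrix multiply over flat, row-major Float64Arrays.
    // Bounds checks, hoisting of loop invariants, and number representation
    // are exactly where the engines differ in the details above.
    function matmul(a, b, c, n) {
      for (var i = 0; i < n; i++) {
        for (var k = 0; k < n; k++) {
          var aik = a[i * n + k];              // invariant across the inner loop
          for (var j = 0; j < n; j++) {
            c[i * n + j] += aik * b[k * n + j];
          }
        }
      }
    }

    var n = 1000;
    var a = new Float64Array(n * n), b = new Float64Array(n * n),
        c = new Float64Array(n * n);
    for (var i = 0; i < n * n; i++) { a[i] = 1; b[i] = 2; }
    matmul(a, b, c, n);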
Interesting, thanks. Do you have a guess about what the Dart memo means by "Dash is designed with performance characteristics in mind, so that it is possible to create VMs that do not have the performance problems that all EcmaScript VMs must have"? Where would a redesign likely see gains?
JS has notorious optimization hazards such as holes in arrays (Array.prototype[1] = "ha ha"; a = [0,,2]; ... a[i] for i=1 must find the prototype element), prototype delegation in general, delete and default-mutability, plus the eval of old and 'with' (respectively reformed in modern impls and specs, and banned in strict mode).
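Spelled out as a runnable snippet (the same hole example from the parenthetical above):

    // A hole in an array literal is not the same as a stored undefined:
    // reads of a missing index fall through to the prototype chain.
    Array.prototype[1] = "ha ha";
    var a = [0, , 2];        // index 1 is a hole, not an own property

    console.log(1 in a);     // false -- no own property at index 1
    console.log(a[1]);       // "ha ha" -- found on Array.prototype
    console.log(a.length);   // 3

    // So a[i] in a hot loop cannot be a bare load from an elements array;
    // the engine must guard against holes or prove there are none.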
These all add some cost in either runtime guards, static analysis burdens, or a mix.
A new language could leave out most of these, but it would still want inheritance. Then the question would be: how useful is mutability along the inheritance chain, i.e. shadowing -- even if you remove 'delete'.
Beyond this, a new language or a JS extension (this is what the Dash memo rules out as un-possible, without evidence) could allow pinning down the shape of objects, even to include machine types such as int32, etc. That would allow for big speedups on certain benchmarks.
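As a rough idea of what pinning down object shape buys, using only facilities JS already has (Object.seal for a fixed property set, typed arrays for machine types) -- combining the two per field is exactly what would need new language support:

    // Today: the property set can be fixed after construction, so an engine
    // can rely on a stable layout (no adds or deletes), though each value
    // is still dynamically typed.
    function Point(x, y) {
      this.x = x;
      this.y = y;
      Object.seal(this);     // shape pinned: no new or deleted properties
    }

    // Today: machine types are only available for homogeneous arrays.
    var coords = new Int32Array(2);
    coords[0] = 10;          // stored as a real int32, no tagging or boxing
    coords[1] = 20;

    // A new language (or a JS extension) could combine these: objects whose
    // fields are declared int32/float64 up front, enabling untagged storage.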
"his is why I think cooperating in standards bodies is critical for web interop and non-fragmentation. It necessarily implies less competitive content languages at any given instant, but with consequent widest reach."
I think it's interesting he's admitting to standards bodies leading to inferior languages but that the benefits include wide reach. I disagree. See HttpRequest. I say let the browsers innovate at will and let the standards bodies come in and standardize what people are actually using after the fact.
Even then, it was new and optional, and we cloned it into Mozilla (Opera cloned it too). It was pretty simple compared to anything like Dart or NaCl's Pepper APIs.
Ok, look where we are today: no monopoly power, but Google arguably acting like one. That won't lead to reverse-engineering without a much less balanced market.
First, the reverse-engineering costs of Dart are much higher than of XHR.
Second, the more balanced competition won't make any browser follow instead of lead. We could indeed end up with two problems (three, counting JS), as just happened with WebSQL and IndexedDB. Dart, and Microsoft's new language Phart, say. :-P
Glib "shoot first, ask standards bodies later" is a recipe for fragmentation and non-interoperation. Even if standards are written (years later, for XHR; never for VBScript and ActiveX and a great many examples you conveniently ignore), they don't offset the up front costs.
And here's the kicker: if you get your wish, there won't be one interoperable superior language ruling the roost until that later standardization step, if it even happens. If there's no monopoly or duopoly power structure, you'll just have the "inferior" language (JS) and spotty support for the "superior" one.
It seems to me that a lot of you "bring on our new overlords" boosters are unaware of how HTML5 came about. It was not through one vendor shipping proprietary code and standards bodies mopping up later. Study some recent history.
Open source is usable only by some vendors. My Opera pals say they can use it (I'd heard differently from others at Opera in the past). Microsoft, I'm told, cannot -- they cannot even read open source. This is public knowledge, it came out, e.g., as part of the IronPython project.
So Microsoft objected to WebSQL, since it depended on SQLite as source-code-is-specification. Writing a spec for SQLite or Dart is very hard. SQLite is >100KLOC in one .c file! Dart is at a guess bigger. Hence, reverse-engineering is required.
Reading code at this scale doesn't tell you enough. Testing, with good coverage, is required. Then you have to separate the over-specification due to abstraction leaks and bugs, from the intentional specification. That's hard work too, and while source code comments may provide clues, the source itself does not say what's what.
"Well shit. I definitely don't want to be one of those!"
Sorry, I was not addressing you specifically, rather the (many, more than I expected) people here and elsewhere who are asking for the new and good monopoly to rewrite the web by itself with more awesome languages.
The IE monopoly with ActiveX and all that was so long ago, the new generation did not live through it.
I certainly remember and that's what your comment made me think of. It's not QUITE the same thing as it is supposed to compile to js but yeah, I get the comparison and would prefer to have no single-corporation overlords.
I'm afraid it's already too late to make suggestions, but I would be really excited if Dart supported facilities to make unit testing an integral part of the language, taking into consideration the interaction with the DOM. This is something that would really help IMHO.
> I'm afraid it's already too late to make suggestions,
Which was precisely the point of developing things this way. They don't want anyone's suggestions; they want to force the thing that benefits _them_ the most on everyone else.
I hate to say it, because Brendan Eich is no blogspammer, but I feel like he name-dropped Google Dart (Dash) as well-intentioned linkbait. It's mentioned in one sentence and offers no new details or interesting discussion.
I agree with georgemcbay that he has reason to be disturbed by Dart's appearance and the backing of Google/Chrome, but with no actual details, he's just taking a quick shot at it while he can here.
I took no shot at Dart because we all know nearly nothing about it.
My whole talk was about how competing browser vendors work together to standardize and evolve JS. In that light, bailing to a proprietary move like Dart is entirely relevant and not linkbait.
This is a non-technical, or really meta-technical point. Lots of JS and web standards work is about techniques for solving technical problems, hence meta.
Any well-funded company with smart people on staff can "do better" for a given point-function or language in the set of languages. On the web, such better-is-better approaches won't get traction without being adopted by multiple competing browsers.
The best way to get adoption is standardization. Forcing reverse-engineering, open-washing as if that helps competing vendors more than a little (assuming their hackers can even read the delayed-open source as a reference), fragmenting web content are all "less good" in my view.
A browser with market power can try these tactics. They may backfire, or make a messy "now you have two problems" world.