Dart language (dartlang.org)
527 points by haasted on Oct 10, 2011 | 489 comments



Mozilla's Brendan Eich, inventor of JavaScript, on the Dart memo leaked a few weeks ago:

A Dart to JS compiler will never be "decent" compared to having the Dart VM in the browser. Yet I guarantee you that Apple and Microsoft (and Opera and Mozilla, but the first two are enough) will never embed the Dart VM. So "Works best in Chrome" and even "Works only in Chrome" are new norms promulgated intentionally by Google. We see more of this fragmentation every day. As a user of Chrome and Firefox (and Safari), I find it painful to experience, never mind the political bad taste.

From here: https://news.ycombinator.com/item?id=2982949


I disagree with his premise.

The Dart to JS compiler doesn't have to be "decent" compared to having the Dart VM in the browser to be useful in a world where other browsers don't use the Dart VM, it merely has to be "decent" compared to handcrafted JavaScript performing the same task in another browser.

If direct Dart code runs faster than JavaScript on Chrome, that's a nice bonus, but if it runs as well as similar code that was originally written in JavaScript on the other browsers (when the Dart code is compiled to JS) that's good enough. Dart on the other browsers isn't competing with Dart on Chrome, it is competing with JavaScript on the other browsers and that's how it should be measured.

And if it turns out apps in Dart on Chrome really blow away apps in JavaScript on other browsers to the point where both devs and users start embracing Chrome even more for these gains then hopefully that will light a fire under everyone else to either adopt Dart or do something to fix what would then be an undeniable problem of JavaScript.


I think you have missed the context of his remarks; perhaps I should have pasted more from the original post. You seem to be talking about "what is necessary for Dart to be a viable Web content language", while (I believe) Brendan is talking about "what are the consequences if Dart becomes a viable Web content language".

If the Dart → JS compiler produces "good enough" results, and web authors wind up adopting it en masse, then suddenly a chunk of the web "works best in Google Chrome", and every other browser manufacturer is put into a very, very awkward position. They can do nothing, in which case their continued relevance depends on Google continuing to ship the Dart → JS compiler (and diminishes as users switch to Chrome). They can license the Dart VM from Google, in which case their continued relevance depends on Google continuing to license the VM and we have another large chunk of the Web based on a single binary monoculture (which worked so well for Flash!). Or, they can try to develop their own Dart VM, in which case they'll be eternally chasing the tail-lights of Google's VM, kept behind by Google's head-start and Google's engineering power. None of those options is very pleasant, in the long run, for anybody who doesn't own Google stock.

As Brendan's original comment points out, this choice is not unique to Dart. Currently browser makers have picked option 3 ("write our own") to deal with V8, option 2 ("licence Google's implementation") to deal with WebM, and so far option 1 ("ignore it") to deal with SPDY.

Under the carefully-managed tension of ECMA, the ECMAScript standard has become a much more practical language to develop in directly, and to target from higher-level languages — and with the agreement of all the major browser vendors including Google. It's not immediately obvious that a project like Dart is even needed; even if it is a major technical advance, it's a political regression.


I understand the context and still disagree. If Dart is so much better than JavaScript on Chrome and if its Dart->JS engine does a good enough job of creating usable JS on other browsers that developers are willing to commit to using it to write apps then, well, that's too bad for everyone else, isn't it?

Why should Google put the brakes on improving client side development just because it puts other browsers in the uncomfortable position of having to license Dart (for free, since it is OSS and has a liberal patent grant)?

The same would be true if Microsoft created some sort of .NET->JS compiler with their own VM for IE, or if Mozilla did the same with some new language. As long as they bridge to JavaScript, more power to them. Client side web development could certainly use some of the tools that these different language environments could offer and if it kicks off another performance race to get browser-side languages even faster (especially on mobile devices), so much the better for everyone.

I really don't see how there is any downside (to anyone but maybe Brendan Eich) to widespread usage of Dart unless the Dart->JS code proves to be suboptimal, but in that case devs won't use it.


When I first got onto the Internet, nearly every page I came across had a little image at the bottom saying "Best Viewed in Netscape Navigator". Since I was using Netscape, everything worked fine and I didn't care. Some time later, I began to see images saying "Best viewed in Internet Explorer". Everything still worked fine, so I still didn't care... but as time went on, more of those images popped up, and the pages they appeared on began to look much, much scrappier in Netscape than they did in Explorer. While most of them still worked, more or less, Explorer made a noticeable improvement.

So, three cheers to Explorer for defeating Netscape and improving client-side development for everybody, right? Find a web-developer who still has to work with IE6 and ask them what they think.

Right now, Dart is freely-licensed and available to everybody, but who can contribute to it? Who will be allowed to veto compatibility-breaking changes? What happens if Apple announces some new hardware platform, and then Google coincidentally decides that supporting that platform in the Dart VM is no longer a priority, refuses to accept patches for it, and later refactorings happen to make it very difficult to maintain outside the official tree? What if the Dart VM becomes strategically important like Android, and current releases are only available to approved partners?

Sure, it's very unlikely that all those things would happen, and fairly unlikely that even one of them would: up until now, Google has stuck pretty close to the "do no evil" thing. But if we stick with openly, collaboratively created technologies, those things are impossible, and "impossible" is better than "unlikely" in my book.

If Dart were some kind of server-side thing Google wanted to use to increase the responsiveness of their sites, or something else internal to their systems, I wouldn't care — they're allowed to do whatever they want in the privacy of their own servers. It's only because the value proposition of Dart is "give Google an inherent advantage of control over the Web, and in exchange you will get shiny things" that I'm concerned. There'll always be more shiny things, but (so far) there's only one Web.


"So, three cheers to Explorer for defeating Netscape and improving client-side development for everybody, right"

Abso-friggin-lutely! You should be thanking Microsoft and Internet Explorer for upping everyone's game. They're the ones who introduced XmlHttpRequest. You know, "ajax"?

It's their browser and they can do what they want with it. If you don't like it, don't use it. If everyone else likes it though, tough luck for you. Go build your own browser and language and drum up your own support for it. All of this political correctness is a bunch of bull.


Must some historical fact or pattern of facts be "all good" or "all bad"? Grow up!

Microsoft added XHR to IE when they gave Java the boot in a spat with Sun, in order to keep Outlook Web Access in working order in IE. Was that "all good"? Clearly not for Sun or Java!

The browser wars phase I had no winners. Yes, IE's DHTML innovations such as innerHTML were a win, and Netscape should have agreed to implement and standardize them. No, the long IE6 stagnation and legacy were Not Good.


Wow, really? Grow up? Nobody said it was all good for everyone. I made a counter-point that was clearly as balanced as the original point.

Anyway, it really depends on your point of view. Personally I love the IE6 stagnation and other failures that affect the web such as the epic lameness of Javascript. Why? I think the whole platform sucks and I'm glad to see it fail and push people to build better solutions.


"Abso-friggen-lutely" on top of "three cheers" lacks qualification about the thing you cheered being less than good for anyone. You were abso-friggen-lutist.

Anyway, the open web standards are not failing. Dart is not a clearly better solution. I get your point about XHR, but do try to take mine about the whole history being a mixed bag, including what IE did. We can do better.


georgemcbay basically said [why shouldn't Google do what they want with their browser?], to which thristian replies [because...Microsoft]. I think that's a bullshit argument, especially since there's nothing about Dart that threatens to take away anyone's freedom to continue using Javascript or whatever else comes along.

I gave a bullshit reply to make a point; let's call it hyperbole. Of course I know that it was a mixed bag for everyone. Some good things did come out of it though, IE6 is almost gone and those good things are still here (and they've evolved). Evolution is slow and life itself IS indeed a mixed bag. Business is WAR!!

Anyway, I love, love, LOVE the fact that people are working on giving us lowly blub programmers a choice. :)

EDIT: Didn't realize who I was talking to, so let me be really clear that I have a LOVE/HATE relationship with all things in tech, including Javascript!


I have mixed feelings about JS and most software, but moreso JS than, e.g., Unix (BSD kernel code in the late '80s, not to date myself too much).

People seem to think I have some big ego investment in JS, but it's this thing I did in a hurry, but with a purpose that caught on when the web was young and growing fast. JS is not done, yet it will be hard to replace.

JS is easier to extend than some here claim (and as my recent comments have noted, not all the parties on the record asserting that it's hard to "fix" are truly giving it their best effort). As with most web standards, you can't remove stuff predictably, but new and more winning extensions can help retire worse old forms.

Business may be WAR but the modern browser era is marked by claims of peace and openness and standards conformance. So while Dart, and (I joke; I hope this doesn't happen) matching novelties from Apple ("Flechette") and Microsoft ("Javelin"), will take some attention off the standards, the forces operating in favor of standards will probably manage to keep JS evolving.

I actually miss some of the Googlers who have worked in TC39, or around the edges, but who are now full time on Dart or other projects. It seems like a lost opportunity to me, but it's Google's call.


The alternative to Google developing Dart isn't doing nothing, but throwing more resources behind JavaScript.

I think you are missing some of the context of Eich's comments. The very same leaked memo declared that Dart was developed b/c js could not be evolved into a suitable language. Obviously Eich thinks it can, but more to the point he's working on the committee that is responsible for guiding such evolution -- the same committee that google plays a major role in.

So the particular claim he's laying out is this: Google throwing resources behind Dart is worse for the web than Google truly committing to an evolved javascript.

You might agree or disagree with this claim, but your arguments so far have been tangential to it.


If JS can be evolved, why has it taken over a decade with still nothing much of substance? I think JavaScript is much like Java: it is too hampered by concerns about remaining compatible with existing legacy semantics. Whenever you see a language where the only thing that can be added is "syntactic sugar", it means the maintainers are unwilling to consider functionality that would break the underlying legacy VM assumptions, and that means, ultimately, their evolution is "boxed in" by these concerns.

Java basically is at a dead end because changing the JVM is hard.


Such ignorance. Willful? You seem new around here, and you are not using your real name. If only we were on Google+ :-/.

The long stagnation from ES3 (1999) to ES5 (2009) had everything to do with Microsoft abusing its browser-tying OS monopoly to stagnate the web by disbanding the IE team after IE6. This was prosecuted in U.S. v. Microsoft. How old are you, to never learn or else to forget this?

When I brought Mozilla back to Ecma in 2004 as we were launching Firefox 1.0, the JS standards group within TC39 was fooling around with E4X. Only one MSFT and one ex-BEA (ex-MSFT before that by way of Crossgain) guy were doing the spec work, and IE was not ever going to implement E4X.

We had to restart browser competition to get the various parties (Apple and Opera too) back in the room and working productively on the core language.

Now, we have ES5 done and ES6 under way. ES5 had new semantics, ES6 has more along with syntax, chiefly the module system (a second class system to facilitate prefetching, along with a loader API for when you need to be dynamic).

This is all documented on http://wiki.ecmascript.org/.

So your trollish commentary notwithstanding, ES5 is done and shipping in the latest browsers (finally complete in IE10 previews), and ES6 is being drafted and pieces are being prototyped in SpiderMonkey and V8.

I'm the last person to defend Java, but the JVM deserves a word here: it has finally evolved a bit, and it is fairly thriving on the server side with Scala, Clojure, and other languages.

Java's stagnation was due to Sun's miscalculations and mismanagement over the years.


You accuse me of trollish behavior, yet resort to personal slights, asking how old I am? Irony. Did I ever question your age or intent? I am fully aware of Microsoft's behavior, but that still doesn't address the fundamental issue that language evolution within spec committees is painfully slow and the competing interests water down proposals. It happened to C++ and to Java too. It happens in our government as well; we often don't get what's best or right, only what you've got the votes for.

The issue is whether or not any large changes can be pushed through in a timely manner, if the need arises. Over several threads, you seem to simultaneously stress the ideas that JavaScript performance is good enough (vs native), that a Dart VM performing better than JS is a problem, and that whatever performance problems there are, they'll be resolved by better JS VMs. Not only are those hard to reconcile, but it requires a great leap of faith to believe that they will be, because we've heard it all before. We've heard the same arguments about Smalltalk performance, about Self performance, and of course, the JVM: that the performance differentials will be tackled.

As for the language ecosystem on the server, yes, it's wonderful, but it happened in spite of the JVM, not because of any evolution in it. The fact remains, Java changes were often evaluated in terms of remaining binary compatible with existing bytecode (e.g. erasure vs reification), and that caused language proposals to be voted down. (The same problem did not occur in C#/CLR.)

My question is, when can I write a mobile web app using Javascript, that performs as delightful, in startup time, in runtime performance, in lack of janky-ness, for a mobile device, that doesn't burn battery or memory? Are you promising that evolved JS is going to fix this issue?


We agree on Java the language being mismanaged. Especially in comparison to C# (see a recent InfoQ interview with Neal Gafter). But the JVM is in better shape.

I despair of your reading comprehension, though, when it comes to what I have written about the leaked memo's Dash strategy. I never wrote that JS can be as fast at Dart-to-JS code as a native Dart VM could be. Not ever -- quite the reverse.

Now if we on TC39 had the benefit of Google's Dart expert input any time in the last year, we could have possibly worked harder on guards (optional type annotations), branding (aka trademarking, nominal types for use with guards), bignums, classes, or other proposals that have suffered in comparison to those that clearly made it into ES6. FYI, these were all strawman-status on http://wiki.ecmascript.org over the last year or so.

Some Googlers on TC39 did work on a few of these, in most cases too late for our agreed-upon ES6 proposal cut-off date. We did hear a dire warning from one Google rep in May 2010 that if two of these proposals were not promoted to Harmony status, JS "would be replaced!"

So your complaints about the ECMA standard not progressing fast enough come with ill grace from a Google employee. Google has a lot of power and it has not discharged it responsibly, in my view. ES6 would be a better Dart-to-JS target language, including draft specs and prototypes in V8, if Google had made a concerted effort.

Contrast this with Mark Miller (Google) and Tom Van Cutsem working in the open, and very effectively, on ES6 proxies, already prototyped in SpiderMonkey and V8. Or the module system designed by Dave Herman and Sam Tobin-Hochstadt in the open, which you overlooked.

Talk about self-fulfilling prophecy! In this case, you are not even predicting the future, you're misstating the past.


I think if it wants to be taken seriously as a good force in advancement of the web, google does have to be aboveboard with its involvement in TC39 and put in significant effort there. Maybe they really should have put in more effort to pushing strawmen through (I wasn't paying enough attention a year ago to comment on that).

But are you suggesting that particular people at google should have been (forced to be?) more involved in TC39 efforts? I don't think you can force that (and someone like Lars Bak would probably just leave if he didn't want to do it). There are lots of language experts out there that are working on their pet compiler instead of helping with the next coffeescript variant. That's not an evil, it's just a missed opportunity (and the reality of trying to get good people involved with a standards process).

I think that your better argument is that an open Dart repository from the start could have fed ideas back into TC39 process, but the problems you mention didn't suffer from lack of knowledge about them, just champions for strawman solutions. That is, again, not a job that most relish.

> We did hear a dire warning from one Google rep in May 2010 that if two of these proposals were not promoted to Harmony status, JS "would be replaced!"

"Unnamed sources at Google suggest gmail actually runs on the blood of orphans, kittens." What are you, a Techcrunch guest columnist? Give a name or don't bring it up.


Does it matter who said that? I don't think so.

You seem scandalized by the idea that an employer might force employees to work on standards (or anything else per typical work-for-hire contracts).

That is not unusual. It's dog-bites-man.

What's going on is unusual, in my experience.

Google is forcing people it employs off of tc39 work. I will not say whom, since I was told in confidence. Three people at least. This casts a different light on the Dash memo's serve-two-masters glibness, and on Alex Russell's recent blog post.


Am I promising you a pony? No, but then you are not a three-year old. I am simply working every day to advance the open web, including JS, including on mobile devices. What are you doing, besides asserting falsehoods about the JS standard and its leading implementations?


This is what this conversation is starting to sound like: http://www.youtube.com/watch?v=tOrI6uqS-vk


More than syntactic sugar is being added, for instance:

https://developer.mozilla.org/en/JavaScript_typed_arrays
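
For anyone who hasn't used them, a minimal sketch of the basic API (nothing Dart- or vendor-specific, just the standard objects):

    var buf = new ArrayBuffer(16);      // raw 16-byte binary buffer
    var ints = new Int32Array(buf);     // view it as four 32-bit ints
    ints[0] = 42;
    var bytes = new Uint8Array(buf);    // same bytes through a byte-wide view
    // bytes[0..3] now hold 42 in the platform's byte order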


I'm super familiar with typed arrays (I helped port GwtQuake, which was one of the first apps to ever use them in large measure), but typed arrays are an add-on API that doesn't change language semantics. Something that would change language semantics would be to offer early-bound namespaces. Early binding would increase performance while simplifying VM implementation and make tooling easier.
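
To make the early-vs-late binding contrast concrete, a rough sketch (hypothetical names, not a real API) of what today's JS always permits and what tools therefore have to assume:

    var ns = { parse: function (s) { return s.trim(); } };
    // nothing stops later (or eval'd) code from rebinding the member:
    ns.parse = function (s) { return s.split(","); };
    // so a compiler or IDE can't safely inline, index, or rename ns.parse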


Modules, which are statically bound, are part of ES Harmony [1]. Please stop spreading falsehoods.

[1]: http://wiki.ecmascript.org/doku.php?id=harmony:modules


Sorry, I'm out of date, when ES4 was terminated, John Resig said this: "Some ECMAScript 4 proposals have been deemed unsound for the Web, and are off the table for good: packages, namespaces and early binding. This conclusion is key to Harmony."

Is static scoping the same as early binding? For example, is it guaranteed that any class layouts within the module are statically frozen and cannot be altered at runtime? If you can add or remove methods from a class within a module, it's not really early binding, and this inhibits its usefulness for refactoring or for an IDE trying to do accurate usage search or code assist.


Exports from Harmony modules are frozen and cannot be changed at runtime. Classes are too if you use Object.freeze().
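
For anyone who hasn't tried it, a quick sketch of the Object.freeze() half of that (plain ES5, nothing Harmony-specific):

    var api = { version: 1 };
    Object.freeze(api);
    api.version = 2;           // silently ignored (throws a TypeError in strict mode)
    delete api.version;        // also ignored
    console.log(api.version);  // still 1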


The points you're using to substantiate your argument are not really in line with the facts. Many browsers broadly share a single implementation for huge chunks of the web. Examples off the top of my head include NSS, WebGL, LibPNG, LibJPEG, and WebM. And it's inaccurate to claim WebM is "licensed" from Google; that's no more true than claiming that NSS is "licensed" from Mozilla. Both are open source projects accepting third-party contributions. People can happily use the code as is, contribute to changing it, or simply fork it off.

Finally, you seem to have missed the fact that Mozilla is already working on an experimental implementation of SPDY in Firefox: http://bitsup.blogspot.com/2011/09/spdy-what-i-like-about-yo...


Dart is OSS, so I guess if it really got mass adoption, Firefox and IE could adopt it. Yes, this would lead to a single implementation; I'm not sure how I feel about this. On the one hand, it limits innovation from multiple implementations; on the other hand, incompatibilities between implementations have been a big source of programmer headache in the first place. Flash was not bad in that respect, and that is one reason for its success, despite its shortcomings. I mean, with Unix, most people are running a single implementation, the Linux kernel, and no one really cares about OpenSolaris or Darwin anymore.

Really though, Brendan has made the argument that Javascript is fixable and that it is a good target for higher level languages, and I don't really agree with either of them.

The changes I see being planned for ES.next don't really address some of the fundamental problems with JS, like network transport, or startup costs because of legacy constraints of not wanting to break existing JS. On mobile devices, loading a lot of Javascript is expensive, and I don't see anything in ES.next to bring it anywhere near competitive with native. If mobile is the future and the predominant way we consume the web, then we are at big risk of being dominated by native. Can we afford to wait years more for a promised fix that isn't even in sight?

For a mobile device, ideally you'd want a dynamic language that could be at least partially statically optimized for a particular device platform, and transmitted in a highly compact format that is cheap to parse and execute on a mobile VM, so as not to make the device waste bandwidth or precious battery performing tasks that could be done on the server for it (like executing a parse and eval).

Likewise, I don't agree that JS is an ideal intermediate format for higher level languages to compile to. I'm not saying LLVM or JVM bytecode are either, but if your HLL contains numerics like 64-bit ints or longs, for example, it's pretty crappy to translate. There's also a double cost here: you're running a HLL compiler to parse, compile, and optimize one language, which then must be parsed, evaled, and optimized a second time on your target CPU.

Something like DVM, or a bytecode specifically designed for JS would be better I think.


There's a difference between open-source software and an open-source project. For example, compare the PostgreSQL and MySQL databases. Both are released under an open licence, but PostgreSQL is developed openly on a mailing-list and anyone can step up and contribute, while MySQL is developed privately by an organisation, with occasional source releases. No single entity has a majority control of PostgreSQL, but Oracle has a pretty firm grasp of MySQL.

Likewise, even though Dart is supplied under an open licence, it remains to be seen whether it's operated more like PostgreSQL or more like MySQL. As I understand it, the V8 project is somewhere between the two, and Android is way up the MySQL end of the spectrum. Based on what Brendan has said, I doubt Mozilla would want such a large part of their platform based on a black box they can't control, and I can't really imagine Microsoft doing it.

I don't think your Unix example is quite fair; the number of Unix systems on the planet is orders of magnitude smaller than the number of web-connected system. Even so, there's still a lot of people working with Darwin in the guise of iOS and OS X, and there's a decent number of computers running non-Unix-based operating systems too.

I'll confess I'm not too familiar with the exact changes planned for ES.next, I just recall Brendan saying that improving JS as a primary language and as a compilation target were major goals in the ECMAScript committee. Maybe it's not there yet, but it can get there faster than it will take to add performant Dart VMs to every browser.

Brendan talks about the specific issue of a standardised bytecode here: http://news.ycombinator.com/item?id=1905291 (as part of an entire thread on the topic)


"I mean, with Unix, most people are running a single implementation: Linux Kernel, and no one really cares about Open Solaris, or Darwin anymore."

Nobody cares about Darwin? Really? I mean, in terms of hacking the kernel, sure, but tons of people are running Darwin. The POSIX standard is more important than ever.

"On mobile devices, loading a lot of Javascript is expensive"

I keep seeing this assertion that bytecode would be smaller than gzipped, minified JavaScript source. I'm actually somewhat skeptical. Has anyone actually tested this?

"could be atleast partially statically optimized for a particular device platform"

I don't want this at all. That's too close to ActiveX for comfort.

"wasting precious battery performing tasks that could be done on the server for it (like executing a parse and eval)"

You realize that Java requires a nontrivial bytecode verifier too, right? IIRC the bytecode verifier, and certainly the compiler, require abstract interpretation to convert the stack-oriented bytecode to a virtual register-based one.


Who said anything about Java bytecode? Java bytecode != bytecode.

"I don't want this at all. That's too close to ActiveX for comfort."

It has nothing to do with ActiveX. It has everything to do with dead-stripping code and targeting particular browsers the way GWT and Closure Compiler do today.
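
As a rough sketch of what dead-stripping buys (illustrative only, not Closure's actual output):

    // shipped source
    function used()        { return 1; }
    function neverCalled() { return 2; }
    alert(used());

    // after whole-program optimization a compiler can emit roughly:
    alert(1);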

In particular, Javascript VMs today always parse JS even if it's already been cached, hasn't changed, and is loaded from the cache. They must also retain the original source as well as the parsed AST representations due to other semantics (toString()), which wastes gobs of memory on memory constrained devices.
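
The toString() point, concretely (exact output varies by engine; some decompile rather than keep raw source, but something equivalent has to be reconstructible):

    function add(a, b) { return a + b; }
    add.toString();  // "function add(a, b) { return a + b; }" or an equivalent decompilation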

As for size, I believe there's a Microsoft paper floating around somewhere where they use a custom tokenized AST byte code format to achieve significant wire savings.

The reality is, Java on Android already starts up faster and runs faster than JS, so there is ample evidence that JS is underperforming other environments; meanwhile, Android APKs are not any bigger than comparable JS apps, from my anecdotal observations.


"They must also retain the original source as well as the parsed AST representations due to other semantics (toString()), which wastes gobs of memory on memory constrained devices."

Wrong. SpiderMonkey doesn't. It retains the bytecode only, throws away the original source, and uses a decompiler when toString() is used. V8 saves the original source, but that's its problem :)


I don't know about GWT, but in my experience closure compiler does not produce different output for different browsers (though it does strip dead code).


It has the ability, but it's not part of the default IIRC, unlike GWT, which is built around permutations. GMail, I think, varies permutations. However, there are sound reasons for wanting to do so besides differing DOM capabilities, namely that different JS VMs have different performance characteristics, and what may be an optimal pattern for one isn't for another.


"You realize that Java requires a nontrivial bytecode verifier too, right? IIRC the bytecode verifier, and certainly the compiler, require abstract interpretation to convert the stack-oriented bytecode to a virtual register-based one."

It seems like you are talking about the Dalvik VM (Android). Java class files (stack bytecode) are actually converted to Dalvik's register bytecode ahead of time (before installation on the mobile device). I'm unsure if Dalvik bytecode is verified or not.


No, I'm talking about the verifier [1]. In particular: "There are no operand stack overflows or underflows." That requires abstract interpretation.

And every performant Java interpreter is required to convert the stack machine to vregs in order to perform register allocation. That requires abstract interpretation too.

[1]: http://java.sun.com/docs/white/langenv/Security.doc3.html


Sure, but I'd be willing to bet that the performance impact of the verifier is insignificant for all but the most trivial of programs.


That statement came with no benchmarks or numbers, no knowledge of the Dart VM and its relative performance to V8, and no knowledge of the Dart to JS compiler and what kinds of optimizations it does.

If you run Google's Closure Compiler on JS, for example, in some cases it produces a very significant speedup over hand-tuned, hand-written JS, so saying "never be decent" is a pretty strong claim to make.


The key part is (...) compared to having the Dart VM in the browser.

It seems pretty obvious to me that unless Google fucks up the VM implementation, it'll always be much faster than compiling to JS, which doesn't even implement the same concepts - let alone semantics - as Dart. In fact, if the VM is not much faster, why would they have it at all?

Being faster than handwritten JS is irrelevant, because the VM will be faster still, and that's the difference he's talking about.


Browsers already have a wide performance differential. What's the ratio between V8 and IE6? Or between desktop and mobile? As long as the performance isn't pathologically bad, it won't matter. Dart apps will start up quicker and run a percentage faster. Maybe perform much better on mobile. A decent win, but doesn't really change the fragmentation equation much from what it already is.

The keyword Brendan used was "decent", and frankly, I'll take a stab and say performance of Dart to JS will be beyond decent.


I specifically referred to new number types. Dart has bignums (int). See https://twitter.com/#!/maccman/status/123400799756881920 for a clue.

Dart-to-JS can't be faster than hand-coded JS if it does not use bignums. If it does use bignums that do not fit in 32-bit ints, then Dart-to-JS will be slower than a native VM with built-in bignum support.
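
(To make the number-type gap concrete: JS has only IEEE-754 doubles, so exact integer arithmetic stops at 2^53, and anything past that has to be emulated in library code. The snippet below is just that standard boundary, not anything Dart-specific.)

    var n = Math.pow(2, 53);
    n + 1 === n;   // true -- the increment is already lost to rounding
    // a Dart-to-JS compiler must therefore emulate larger ints (e.g. as digit
    // arrays), which a native VM's built-in bignums avoid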

IE6 is irrelevant. The topic is modern browser with native Dart VM vs. without.


What do you mean Dart-to-JS can't be faster than hand-coded JS? That's like saying C can't be faster than hand-coded assembly. Unless you write unmaintainable code or spend effort counting cycles and scheduling around pipeline hazards, this usually isn't the case, and it is not the way most people write large applications (all in asm). If you take your average hand-written JavaScript and run it through Closure Compiler, in many cases it ends up executing faster (at least in V8). If you haven't already, I would recommend investigating the behavior of JS VMs with and without Closure Compiler; you might be surprised.

We're talking averages here, not pathological edge cases. Java has a 64-bit long type which JS doesn't support. GWT supports longs. In theory, GWT code should run slower; in practice, because GWT contains an optimizer, it runs as well as, or faster than, hand-written JS, simply because very few code bases have hot paths dependent on long arithmetic. For example, I've actually ported popular JS benchmarks to Java, run them through GWT, and the result was faster.

Sure, would you be able to concoct applications for which Dart and Dart-to-JS have a large divergence? Yes. Will they be the common case? Probably not. I can concoct normal JS applications today which show high divergence between Chrome, Firefox, and IE, because all of the JS VMs have different weaknesses, different optimizers, different garbage collectors. If you're writing applications that are so CPU bound and performance critical (e.g. games), chances are you're going to run into other portability problems too.

Most of this worry over runtime performance I think is a red herring. The real difference will probably be in startup time.


"That's like saying C can't be faster than hand-coded assembly"

Don't change the subject. The topic is not productivity, as you seem to imply here. Also, JS engines are not hard-to-track super-scalar CPUs. A compiler can target one JS VM (e.g., V8), but then you're locked in. All the current JS VMs have peculiar optimization faults, some worse than others, many completely disjoint across VMs. No one has a compiler targeting each to best effect.

Hand-coders can do well in general against current JS trans-compilers, but that wasn't my point.

The point is that if you actually need bignums, a JS emulation will be slower than a native Dart VM built-in bignum implementation.

"Most of this worry over runtime performance I think is a red herring."

The "worry" (such as it is, or was) was over Google's politics, not its Dart-to-JS compiler's generated code performance in isolation. Pushing a native Dart VM into Chrome, rewriting major Google apps to use Dart, then seeing if that creates market pressure for native Dart support in other browsers, was all suggested or even explicitly called out in the leaked memo. That was the "worry".

But it sounds like a native Dart VM in Chrome won't be immediately released, and the Google web apps written in JS or Closure won't be rewritten quickly. I don't know. Maybe you do -- do you work for Google?


For what percentage of apps do you think bignum performance will matter? Even in cases where it might matter (cryptography), you would be better suited using GPGPU code, or adding cryptographic APIs to the browser itself, rather than insisting it be done in JS. Indeed, you could always fall back to that for browsers with poor bignum performance.

GWT has supported Java's BigInteger/BigDecimal, and it's never been an issue.

(For the record, since it is no secret by searching my handle, I work for Google, but I do not work on Dart. I specifically work on the GWT compiler, and do not speak for the Dart team nor represent their views, nor am I really representing anything, but my own biases here given my experience with trying to develop high performance Javascript games both in the browser and mobile)


If bignum performance doesn't matter, then Dart-to-JS code is generally no faster than other JS code. Duh!

If, as the leaked memo asserted, JS's lack of more than one number type is "unfixable", and we now see that Dart's fix is to add bignum as well as double, then bignum must matter. Else why do bignums in Dart?

This ignores the live bignum strawman on the ecmascript.org wiki, which Googlers on tc39 failed to champion.

If your argument is that bignum literal and operator syntax, not bignum performance, are what matters, I am with you -- but then why did no Google rep on tc39 work to advance the bignum strawman?

You can't have it both ways. It looks like some Google heavy hitters focused only on Dart as if it has a native VM, and not on Dart as a source language for JS, which implies certain obvious extensions to the ES standard.


And in any case I seem to recall being assured by Mozilla that we don't need no Native Client because Javascript is going to be fast enough for everything. Even pre-existing non-JS code could just be compiled to mighty JavaScript, the Assembler of The Web™ and all would be well. But now Dart is unacceptable because Dart-to-JS won't be close enough to the performance of native Dart?


Go back to the other thread and re-read the argument there (or go back to school and learn how to argue). No one said Dart-to-JS by itself was bad for anything.

The leaked memo spoke of a native Dart VM in Chrome and Chrome-first Dart as primary source web app authoring by Google. The clear intent was to pressure other browsers to adopt the native VM.

Google can act like Microsoft (but with less market share) if it so chooses. Just spare us the open-washing hypocrisy, and don't expect productive multi-vendor standardization. Instead, expect a new browser cold war -- no one wins.


You appear to have misunderstood.

On one occasion Mozilla has argued that the speed difference between a) non-JS code running natively and b) the same code rewritten in, or even compiled to, JS is (or soon will be) too small to be important. Now on this occasion Mozilla is arguing that the speed difference between a) native-VM Dart and b) Dart-to-JS is (and will remain) large enough to be important. I pointed out that these two claims are hard to reconcile. About the only way they can both be mostly-true is if the performance gap only happens to be large and important enough on exactly those occasions when that gap happens to bite Mozilla in its platform strategy. Not very likely, unless one assumes that the performance needs of the poor sods who just want to use the web (as publishers or consumers) are never all that important.

So the fact that Mozilla's problems with the relatively poor performance of Dart-to-JS would be solved if native-VM Dart went away has little relevance, because most other people's problems with the speed of JS are not helped by making there be no alternative to JS.

> or go back to school and learn how to argue

http://www.youtube.com/watch?v=SKm5xQyD2vE


so let me get this right: you are arguing that a problem with Dart is that other browser makers won't get on board, and you happen to be CTO of one of those browser makers?

That seems a rather circular reason for dismissing it.


No browser vendor will give a free lunch to a competitor, at high opportunity cost, by locking themselves into a brand-new and 2nd scripting engine. No one. Steve Jobs' ghost will haunt anyone who tries.

It doesn't matter how "good" Dart is.


re: brand new: fine, wait until it isn't brand new

re: lock in: I'd think Mozilla isn't really locked in as long as there is a cross-compiler. If you decide the code bloat isn't worth the speed improvement (for the sites that run Dart code), yank the VM and no harm done

re: opportunity cost: well the code from Google can be dropped in, unless you really feel the need to rewrite your own version.

re: "free lunch to a competitor": it sounds like if there is is potential for another "browser war", it's coming from that sort of attitude. In fact, that sounds counter to the whole concept of open source.

re: "It doesn't matter how good Dart is.": well, that's a shame. I'd like to think merit counted for something.

Overall, maybe I don't know your world and the politics etc, but it just sounds like a lot of "not invented here" syndrome to me. And I say this as someone who genuinely admires your work and really likes Javascript (and am especially pleased that Dart looks enough like Javascript to be comfortable to me).


You seem new to software. Every addition costs. The 2nd scripting engine in particular. Mozilla went through this to no avail with CPython several years ago. I paid Mark Hammond to do the integration work. It was a constant source of bugs and overhead, and without CPython distribution on Windows, and sane versioning on Mac and Linux, no one used the non-JS code paths save ActiveState in Komodo.

WebKit is supporting Python bindings but that language is 3rd after Objective-C and Apple pays the big Obj-C costs.

My comment about Dart's merits was purely a businessman's evaluation, not NIH. Before you throw that accusation my way, consider Mozilla using Google WebM code, and lately implementing SPDY.

You expect other competing browsers to give Google a strategic free lunch, and to give up their share of influence in standards bodies? No way, not from us, or from Apple or Microsoft or Opera.

But it turns out Dart does not have obvious merit, such that I am moved to implement it (see SPDY for a counter-example). Dart is jejune, knee-jerk, and conservative in its design, from what I have now seen.


I have no doubt that there would be a cost associated with hooking in Google's code. And I'll take you at your word that it's not NIH, even though a casual reader might guess that you might be a bit more personally invested in Javascript than you'd be in the various things that, say, WebM seeks to replace.

Regardless, I'll leave your words to speak for themselves without further debating. I'll admit I'm a tad disappointed at the level of defensiveness and name calling (of both your competition and of those you debate with), though.


You had better put up or shut up. I did not call anyone names. Cite my words if you can.

As for defensiveness, that looks like your department.


Well, uh, I guess if you put it that way, I sure better....

Whether you want to debate whether the term "name calling" is accurate or not, I would suggest that the following fall astray of Hacker News' guideline that "the principle here is not to say anything you wouldn't say face to face". Then again, I don't know.....maybe you are that way in person too.

"Grow up!"

"Such ignorance. Willful? You seem new around here, and you are not using your real name"

"How old are you, to never learn or else to forget this?"

"So your trollish commentary notwithstanding"

"I despair of your reading comprehension"

"Am I promising you a pony? No, but then you are not a three-year old."

"(or go back to school and learn how to argue)"

"Dart is jejune, knee-jerk, "

and of course:

"You had better put up or shut up"

Really? It seems you're quite a prominent figure to need to resort to that sort of schoolyard talk, but do as you wish. I'm not going to be baited anymore.

Thanks for javascript, though.


Name-calling means calling a person a name. If I say "Joe is an idiot", I have called Joe a name.

If, on the other hand, I call someone's behavior or rhetoric out, then that person has room to back up. We all make mistakes -- definitely me included.

Your previous comment accused me of NIH without evidence. I talk back to that kind of crap. I'm not baiting you. Let us take each other at our word.


For what it's worth, I felt really awkward reading many of your replies in this thread. You are our spokesperson for JavaScript and the CTO of Mozilla. It would be nice if you could talk to people without making disparaging and downright condescending remarks.

I look up to you to speak for the community and to use judgement and character. Anyways, maybe I'm just being too sensitive, but I figured I'd say something.


I take your comment to heart. Sometimes I bite back too hard.

To all those here to whom I've made "grow up!" and similar remarks, my sincere apologies.


Much appreciated. =)


I've played with it today and the joy I admittedly felt was quite similar to what I felt when coding LISP. However, I only coded LISP in academia and would not feel comfortable at all using it in a live environment.

For now, I see Dart as a proof-of-concept kind of thing, and it definitely should /not/ gain widespread support any time soon, no matter how much money Google will throw at it.


I don't think any browser manufacturer would have an issue with Dart if it were only a Dart-to-JS compiler. Nobody had issues with GWT or CoffeeScript, after all.


> In fact, if VM is not much faster, why would they have it at all?

Google hasn't made Dart so we can make our web apps faster. They made Dart because there are a lot of Java developers at Google who believe that it's easier to write monolithic JavaScript applications (GMail, G+, etc.) using a language like Java.

They believe it enough that they will try to convince others to follow suit.


No, I was asking "why would they have [the VM] at all?"

If the VM wasn't (much) faster than the Dart-to-JS compiler, it would make no sense to have both.


What Brendan is missing is that the contest is not closed between JavaScript and Dart. The elephant in the room is the huge and growing mobile space where ObjectiveC and Java rule. It's not about "Works only in Chrome", it's about "Works only on iOS/Android".


What you are missing is the bleeding obvious: the topic of that other thread where I commented was Google's leaked memo about Dart as "replacement" for JS -- that memo created "the contest".

Obj-C and Java are not browser-supported. Sure, there's a native apps vs. web apps contest. Native is winning? Not according to Fred Wilson (AVC) and other observers.

Who knows, really. We're speculating, but let's find out by doing. Mozilla is working on both Open Web Apps that run in modern browsers, and Boot To Gecko.


With all due respect to the awesome work Mozilla is doing, I'm not seeing in your response anything about mobile-native-jshtml5 platform. Funny enough, Microsoft is currently promoting jshtml5 as a platform harder than Mozilla is, and given their platform unification message, that ought to include the mobile/tablet space as well.

Edit. I'll have to retract the above. BootToGecko is the mobile platform from Mozilla. It would be awesome to market it harder as such.


Thanks for the retraction, and you're right: we are not going head-on against "mobile-native-jshtml5" (new one on me, but I know what you mean). I'm talking about B2G at Web 2.0 Expo New York this week. We will work up our marketing as we get closer to first hardware product launch with our partners.

Edit: we are, however, trying not to make special sauce on top of the web standards. Instead we're working with W3C and WAC to standardize device APIs progressively as we go.


I don't speak for the team but I believe I can add some perspective.

Think of it this way: what is GWT? GWT compiles Java, a statically typed language, into JavaScript. If you've used GWT you'll know that, particularly early on, there was a lot of friction. What's simple in JavaScript with anonymous objects and duck typing doesn't quite gel with Java, so you had to do things like use JSNI for edge cases.

Google has some incredibly large and complex JS apps (eg GMail). While GMail isn't written in GWT, GWT is aimed at that kind of application, with deferred binding and the benefits of static type analysis.

What if you took that expertise (of GWT and static type analysis in producing Javascript) to produce a language that could run on the server, run in the browser (for browsers that support that) and compile to Javascript (for browsers that don't)?

The last point in particular is key to driving adoption, as no one is going to develop only for Chrome.

I see it as no surprise the syntax is Java-like. I fully expect sometime soon to see a JVM implementation that will then leverage all the existing Java libraries.

In that context (IMHO) it makes a lot more sense. It might not be solving the problems you're interested in but it is definitely solving a particular set of problems.


If Google produced an innovative replacement for JavaScript, the world would listen. Instead, based on what they have released so far and your comments, they have released a language that is primarily designed to enhance their own tool-chain, which the rest of the world doesn't use and isn't interested in. We don't use GWT. Apparently, we won't use Dart either.


they have released a language that is primarily designed to enhance their own tool-chain

You make this sound like it's a bad thing, but that's open-source always and forever. Scratch your own itch.


This was advertised as a replacement for JavaScript. It turned out to be a replacement for GWT. False advertising. I'm wasting time reading specs and code samples and comments just to figure that out. I don't like being misled or wasting time.


I'm curious what you thought a JavaScript replacement would look like. Apparently you thought it /wouldn't/ look like GWT.


Python has a great, clean, compact and expressive syntax and a lot of syntactic sugar. Erlang does concurrency and error handling well. A mix of the two would be a great language to work with.


JavaScript - the ugly + OO + update to the standard library. That's it. Essentially what frameworks like MooTools try to make it into.


Ah. I think I was getting caught up in the idea that a "replacement" was something completely different and not an evolutionary change in JavaScript. But it's all just a matter of terminology at that point.


He's not saying it's immoral of Google or anything, just that it doesn't help the rest of the programming world.


A lot of people use GWT for large web applications instead of JS, precisely for the reasons Dart was created; we just don't go brag about it. The biggest problem with GWT was how slow the toolchain was, and this is addressed with Dart by having a native VM in Chrome.


Out of curiosity, do you know of any notable non-Google sites using GWT? I see a few here, though none seem to be particularly large web apps - that I can tell, at least.

http://gwtgallery.appspot.com/

http://www.gwtsite.com/whos-using-gwt/

http://www.reddit.com/r/programming/comments/aqsxq/does_anyo...

http://www.quora.com/What-web-applications-use-Google-Web-To...


Given that it's supported by Sencha (ExtGWT), and Spring, and IntelliJ, you can probably guess that the vast majority of GWT users are in enterprise environments. :) Angry Birds uses GWT. You actually don't see many people writing very large web apps outside of Google precisely because most organizations didn't have the resources to produce something like Closure or GWT. Duplicating GMail with no tooling using basic JS is quite a tall order.

Many new Google services these days use GWT, like Google Flights, Hotels, the new YouTube editor, Google Offers, etc. AdWords is written in GWT too and is quite large (millions of lines)

One of the problems Dart hopes to solve is to have a large codebase (apps easily into the millions of lines of code) that is toolable and statically optimizable. JavaScript is particularly poor at this.

The Closure Compiler was actually invented by the Google Mail and Calendar teams as a way to wrangle JavaScript and tame its suckier bits to enable sane development of large apps.


I've used GWT. It's no cure all, but it has its place and time. It's too early to judge Dart.


There's already a language that runs on the server, runs in the browser and compiles to Javascript. It's Javascript!


Actually there are many of these languages[1], including Coffeescript and Clojurescript.

You now even have options!

[1] https://github.com/jashkenas/coffee-script/wiki/List-of-lang...


IMHO, this is not a "break away" approach as originally indicated in the leaked memo.

One of the biggest pain points in web dev is the inconsistent DOM implementations. I was imagining some sort of DOM-less, HTML5 Canvas-based UI controls. And something about the "Web": Semantic Web/URIs or a new approach to programming on the Web. This is NOT a break away language in any sense - it's more re-packaging.


The original leaked memo that everyone was freaking out about was incredibly overblown as it was entirely decontextualized.

The engineer that wrote the memo needed people for his team. To get people to join a team from any of the others at Google, you need to convince them that your team is awesome to work with. At Google, you can't offer more money/benefits, so it has to be a "join us and you'll change the world!" message. That's going to lead to a lot of overblown statements that may or may not end up being true, and may or may not even be believed by the author.

Google is not in the business of doing things that are bad for the web (and when they have done, there's been an about-face, see: Google Video shutdown), even if internal machinations might make it seem that way.


I'll stress that it's early days for Dart. Go was announced in 2007 and only this year is going to a version 1.0.

I don't know what the timeline is for Dart but I will say this:

1. If anyone is capable of the long sort of time frame that something like this can benefit from--even requires--it's Google;

2. If there is anyone who's qualified as a domain expert in Javascript generation it is, by virtue of GWT, Google; and

3. If there is anyone who's qualified to speak to the limits of what you can do with Javascript it is, by virtue of Chrome and V8, Google.

If anything, I believe the error here (if you can call it that) is failing to properly set expectations and communicate the goals of the language (as witnessed by all the comments on this thread from people who were expecting something more and/or different).


Small correction: Go was announced in November 2009.


Thank god. I was starting to think I'm getting old if 4 years goes by that quick.


We are definitely trying to improve the DOM API with Dart, but, for performance reasons, ditching the DOM and starting over from scratch in Canvas doesn't really fly, at least not in the short term.


This is completely the wrong thing to do.

Google should be proposing a standard, open, byte-code-compatible Intermediate Language that can run JavaScript and, in the interim, run on JavaScript.

That is something I could see Mozilla and Apple getting behind.

Only once they have actively campaigned for this should they be adding new languages to the browser which fracture the web.

I had hoped that Dart would surprise us and turn out to be a 'machine-code for the web' implementation. This would have been a much smarter move, I believe.


+1000. Give us a platform that people can grow languages on! Maybe the DartVM could become something like what you describe, in the same sense as the JVM/CLR is a platform for many languages now? But since they're pushing the language, it doesn't seem to be the stated goal... sad ;(


Oddly enough, if in-browser Java had "won", we'd already have such a thing.


True. But in return we would have had to endure 10 years of in-browser Java.


We would have just built something else on the JVM. You could be writing Ruby in the browser with JRuby, or Python in the browser with Jython, or Scala. This would be a much better world (tooling and performance-wise) than the current compile to high level JS strategy.


As opposed to 10 years of stagnant JavaScript, and a conversion en masse of all video, audio, and game content to Flash?

It was really sad to see everybody moving to Flash; JavaScript could've done video/audio and games a long time ago.


Not necessarily - in that case, we might still be stuck with the Sun-versus-MS and Sun-versus-the-world games over control of the JVM, and with the JVM's startup-time problem. Though the startup-time problem was so bad that it's hard to imagine in-browser Java winning with the problem still there.


> Google should be proposing a standard, open, byte-code compatible, Intermediate Language standard that can run Javascript and in the interim run on Javascript.

> That is something I could see Mozilla and Apple getting behind.

Why would you expect either Mozilla or Apple to get behind such a proposal? Apple has a powerful self-interest in making sure the Web remains a second-best app platform behind iOS and desktop OS X. Mozilla wants the Web to be the premier app platform - but it's no bloody good to Mozilla if the web is the world's #1 web platform but Mozilla has no control over it anymore. The last thing Mozilla wants to see is the Web browser become some fairly-easily-implemented, highly-interchangeable runtime platform adhering to a stable, open spec somewhere. This would, I think, be very good for the world at large, but it would undoubtedly be very bad for Mozilla - it's what is known as the commoditization of the platform. Mozilla's self-interest is served by keeping as much of the Web as possible controlled, in the minutest detail possible, by the hard-to-join club of major browser vendors. Right down to things like the lexical syntax of JavaScript. So in reality, as soon as Google proposed such an IL, Mozilla's advocates would start blowing smoke about "oh no, another Java".


"Mozilla" consists in part of a bunch of people working to write down standards precisely so new browser implementors can enter the market and interoperate. I've spent years at this for the JS standard. It's part of the inevitable commoditization you cite.

Also, going back to 2006 or so when I invited pre-Google Alex Russell and other JS hackers to Ecma TC39 meetings, Mozillans including yours truly have tried to open up the standards process. Mozilla, Opera, and Apple co-founded the WHATWG and set up an open membership structure for it in 2004, to create HTML5. W3C finally relented and internalized much of that structure.

So alleging that "Mozilla" is trying to keep the old boy's club closed is simply false.

I've cited hard, in some cases physical (as in physics) reasons for why there won't be a lower-level bytecode standard for browser-based VMs any time soon. I'm not blowing smoke.

http://www.aminutewithbrendan.com/pages/20101122

A high-level bytecode sounds a lot like minified JS source, with relatively few extensions to the standard language. Extending is easier than replacing or setting up a new, parallel cross-browser standard.

Your inflamed sense of grievance at "Mozilla" is misplaced. Mozilla has no control other than what users of our software delegate to us by trusting us enough to download and run our stuff. We do not have hundreds of millions in our browser advertising budgets. We do not have billions of revenue with which to influence people or pay for attention and distribution.

If we are really in the way of Web standards, I'll personally pull the plug.


At a technical level, virtual machines are not the panacea many folks seem to think.

Certainly, VM byte code languages are often simpler than silicon processors' machine languages; perhaps they offer safety guarantees; and perhaps they provide garbage collection. These are all big wins for language implementers targeting such VMs.

But what no virtual machine will ever do is make modules written in different source languages work together transparently. Any time one module calls another module written in a different language, at least one side of that boundary needs to be written with a detailed understanding of both languages' semantics.

Tiny details of the semantics influence the idioms people settle on in that language --- and interfaces are designed around those idioms. For example, JavaScript conditionals treat the empty array and the empty object as true, while Python conditionals treat them as false. Each of these facts shapes what people expect of a "natural" API in that language --- and that ensures that modules written in other languages will always "speak with an accent", or seem unnatural, at best.
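
To make that difference concrete, here's a minimal sketch (TypeScript; the variable names are purely illustrative):

    const emptyArray: unknown[] = [];
    const emptyObject = {};
    if (emptyArray) console.log("[] is truthy");   // runs in JavaScript
    if (emptyObject) console.log("{} is truthy");  // runs in JavaScript
    // Python takes the opposite branch: bool([]) and bool({}) are both False.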

You can define a common data world, as the .NET CLR does, and extend each source language to cover that world, but the effect is to change each participant language into a gloss on the common data world. This is why CoffeeScript fits seamlessly into the JavaScript world: JavaScript's and CoffeeScript's data worlds are exactly the same. CoffeeScript cannot deviate from JavaScript's types and objects.

Each programming language is like a city occupying an island: commerce within the city is much less expensive than commerce between islands. Although it's sometimes worth it, dealing with "foreigners" is confusing and risky. No VM will change that.


It seems to allow for more variation in languages. The only time you really need to worry about those other languages is when you're trying to load modules written in them - at least that's how it works in .NET. You can have pretty significantly different semantics from language to language, and as long as you don't care about other languages actually loading your modules, it doesn't matter. If you concede that the problems of the lang->IL developer are essentially the same as those of the lang->lang developer, then shouldn't we evaluate the IL option strictly on its merits? Such as reduced compilation at runtime and smaller payloads over the wire, as well as, ideally, fewer quirks than a language like JS. Not to mention that existing lang->IL compilers would need much less effort to retarget their output to the new platform. It just seems like a huge win in every way to have a VM layer as the base instead of a general-purpose language.


Oh, targeting a good VM is a huge win over a typical machine language for an implementer. My prediction is specifically that the inter-language paradise some expect will never come.

But even within the points you bring up, I'm not sure that VMs are as winning as they're made out to be. Size and compilation time aren't somehow intrinsically better for VMs; JS does quite well on both fronts these days. And I'm not convinced JS's quirks are harder for implementers to cope with than a silicon machine language's.


If this were our (Mozilla's) strategy, it's hard to understand why we'd release our code under an open source license, and why we'd insist on every standard we implement being freely implementable by anyone with the requisite chops.

Mozilla "conspires" to do almost exactly the opposite of what you suggest. Having lots of decent browsers available was our old success condition. Now the mobile world is starting to look locked up, so breaking that open to competition and free choice is our new success condition.


The last thing Mozilla wants to see is the Web browser become some fairly-easily-implemented, highly-interchangeable runtime platform adhering to a stable, open spec somewhere.

Really? I thought that was the _mission_ of Mozilla. Except for the "stable" part perhaps, because that's quite at odds with innovation.

The track record easily shows that pretty much every single thing you have said is wrong.


My hopes were high, but the more I read about it, the more it started to look like Java:

    class Foo implements Comparable Observable Deniable ...
Why not have interfaces like in Go, where you just define a set of methods and all classes having the methods will automatically implement the interface.
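
Something like structural conformance, in other words - here's a sketch in TypeScript syntax, purely to illustrate the idea (the names are made up):

    interface Named {
      name(): string;
    }
    class Dog {
      name() { return "Rex"; }   // no "implements Named" clause anywhere
    }
    const n: Named = new Dog();  // accepted because the shape matches, Go-style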

    Collection<E>
    HashMap<K,V>
    HashSet<E>
    LinkedHashMap<K,V>
    List<E>
    Map<K,V>
    Set<E>
Why not just Array and Hash? And what's up with the whole generics thing - that's just plain old Java.

    int
    bool
    String
    Object
Why do some types start with a lowercase letter and some with an uppercase letter? Why not use a sensible naming convention?

    square(n) => n * n;
    square(5); // returns 25
    square2(n) { n * n; }
    square2(5); // returns null
Why not have an implicit return value everywhere?

I could go just on and on...


There's only one set of Rob Pikes and Ken Thompsons on this planet :-/


Good news: maybe they won't try creating another crappy language.


> int, bool, String, Object: Why do some types start with a lowercase letter and some with an uppercase? Why not use a sensible naming convention?

Primitives and objects, it kind of makes sense but I do see your point.


But the thing is, they are not primitives:

> Although you might expect int and double to be primitive types, they're actually interfaces that extend the num interface. This means that int and double variables are also nums.


That's actually kind of like Haskell.


Aaaarghhh! It is full of semicolons.

Is it really necessary in the 21st century to create a language that terminates lines with semicolons? I am sure I have seen some other languages in the past that get by just fine without them.


The language isn't done yet. Now is the time to let us know if you want semicolons to go away. I know some of us on the team do too, but public interest will help a lot.


Please make the language a nice target for other languages to compile to. Better yet, make a lower level bytecode language that Dart can compile down to as well as other languages.

For example value types would be excellent. Even better would be explicit regions, but I'm sure that's not going to happen.

Also, please fix this:

The type system is unsound, due to the covariance of generic types. This is a deliberate choice (and undoubtedly controversial). Experience has shown that sound type rules for generics fly in the face of programmer intuition. It is easy for tools to provide a sound type analysis if they choose, which may be useful for tasks like refactoring.

And add proper generics. If sound type rules fly in the face of programmer intuition, that means the programmer's intuition is wrong. The proper response is to inform the programmer of his mistake at compile time, not to silently ignore it and add dynamic type checks on every contravariant use of generics, including array access! That is worse both for programmer productivity, because the programmer expects his program to be type-safe when he is using static types, and for runtime speed, because the dynamic checks slow down all programs needlessly. Dart already has dynamic typing; use that when you want it, not something that looks like static typing but really is dynamic typing.
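
To illustrate the kind of hole covariance opens up, here's a sketch in TypeScript, whose arrays happen to be covariant in a similarly unsound way (the classes are made up):

    class Fruit { name = "fruit"; }
    class Apple extends Fruit { crunch() { return "crunch"; } }

    const apples: Apple[] = [new Apple()];
    const basket: Fruit[] = apples;  // allowed: Apple[] is treated as a subtype of Fruit[]
    basket.push(new Fruit());        // statically fine...
    apples[1].crunch();              // ...but fails at runtime: crunch is not a function

As noted above, Dart's answer is a dynamic check at points like that push, rather than a compile-time rejection.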

Another thing that would be awesome is if you provide a compact binary format for code.


I'll give you my votes then:

- no braces where avoidable, no semicolons where avoidable

- release early a free, open-source reference development environment that really helps in developing with Dart. The biggest pain with JavaScript, imho, is the lack of development tools really thought out for it, rather than just adaptations of Java/C/whatever other editors

  - if you develop said IDE as a web-IDE, even better (why not written in Dart itself?)

  - give as many unobtrusive visual cues as possible to easily see tab alignment, syntax coloring, etc.

  - integrate unit testing facilities in the IDE


I like everything you propose here. Dart is earlier in development than I think most people realize, and I really hope we can get to something like what you're listing. Definitely self-hosting and a self-hosted IDE would be on my love-to-have list. As a testing nerd, integrating testing would definitely be on it too.


A good development environment is something where you could really differentiate yourselves, and I guess you could very easily integrate introspection and analysis features into chrome to help with it.

I've been thinking for years that if I ever were to develop a new language, I would start from developer usability first - I think there is much more to innovate there than in the core language features, nowadays. Developer usability is tightly coupled with the development tools the developer can use, not just with abstract language features. And especially for web development, you can't think only about top of class developers.

While I'm at it, I'll write here my dream feature of any language and IDE - not knowing if it is really feasible, but it doesn't look impossible to me. Use case:

- You run your unit tests

- An assertion fails/there is an error

- A debugger brings you to where the problem is

- You can step back from where you are, make changes on the fly to your code and the tests, and step forward to the assertion/error again.

I guess that with a VM, and excluding some operations that depend on external state and are destructive, this could be possible, and it could allow an incredible speedup in development...


Certainly feasible: Smalltalk debuggers do this. And Smalltalkers (allegedly) use a workflow just like you describe.


Yup. I often do something like this:

  - write a test for a method that isn't there yet
  - run it, get a red bar, as expected
  - run it in debug mode, and get a debugger
  - write the method implementation in the debugger
  - possibly write other methods that the new method calls, also in the debugger
  - resume the program, get a green bar
Writing code in the debugger is great, because you've got all the runtime state right in front of you, and you can step through the code you're working on. I'd love, love, love to see this in Dart.


But is it possible to step backwards from the current state of the VM, to execute changed code?


I read about this, and even tried downloading squeak once, but didn't find it very compelling. I admit that I might not have given it the time it needed, though...


A few weeks ago I whipped up this 3-minute video showcasing a Smalltalk class browser, method finder, and debugger: http://vimeo.com/27850933


You raise a good point. I'd like to see more about testing Dart.


We have a rudimentary unit test framework [1] that we've been using for the samples, though we're still playing with different styles. Personally, I'm fond of the Jasmine-inspired stuff I tried to slap together here [2]. This is of course different from the testing infrastructure for the language itself, which is its own thing.

    [1] https://code.google.com/p/dart/source/browse/branches/bleeding_edge/dart/#dart%2Fclient%2Ftesting%2Funittest
    [2] https://code.google.com/p/dart/source/browse/branches/bleeding_edge/dart/client/tests/client/view/ViewTests.dart


Thanks! I will take a look at these.


In terms of a language, it looks like a horrible step back into the 90s. What we need is the language to be terse, and clean with lots of built-in functional utilities for manipulating data structures and objects/modules/classes/etc. Ideally, take the best bits from ruby/coffeescript/coco/python and couple them with high quality tooling. Many of the most innovative minds are flocking to these languages for good reason.


Also +1 for ability to extend the language itself so we can code in DSLs. This is so powerful.


I love Python. I prefer working without semicolons.

But from a new language standpoint, it's not worth dropping them. The ridiculous arguments that come out of it, the programmers who refuse to use the language because of it, they're things that can and should be avoided by using semicolons like most languages.

I'll never understand why programmers care so much about this, but hey, that's life.


Programmers care about it because it adds extra noise to your program.

If you semantically, and naturally, communicate "end of statement" with a line break to human readers, why should you have to say it again with a semicolon to the compiler? Programming languages should focus on being DRY.


For human readers we use periods, not line breaks.


Code is more like poetry.


That was not a flippant remark.

Lines of code are clauses not sentences. This is also true of poetry.

A sentence is more akin to a stanza in poetry, and a function in code.

While I could entertain an argument for lines ending in commas and for functions ending with periods, there is no reasonable argument for lines to end with periods (is there a language where a line is equal to a function?).

APL, and to some degree Perl, are examples of languages where you might consider a period halfway through a line.


The context is programming, not writing.


Semicolon makes it possible to do jQuery-style chaining with the least noise. I think frequent-backslash is uglier than always-semicolon.


Semi-colons do not enable this feature. Multi-line chaining works just fine sans-semis.


I very often use and see a "single" line of code flow onto more than one line for readability. So often a line break is definitely not enough.


I wasn't suggesting exclusive use of linebreak… Semicolon with no linebreak is fine, but semicolon + linebreak is 100% additional cruft. One or the other. Say it once.


Please make semicolons and braces optional. Unless they are explicitly required for marking blocks and endlines they are just wasted characters to people from prettier languages :-)

For those people who come from "ugly" languages like Java you could have a feature in the IDE which auto-inserted (and of course auto-removed) the braces/semicolons so that they felt at home too.


Are there really people who feel that not having to add semicolons is bad? I understand why you'd want to be able to optionally terminate a line early for terseness, but being required to is just more work. But then I also know people who wax on about the meditative aspects of dish washing instead of buying a machine to do it.


"Java" ugly ? Please name some beautiful languages.


Ruby


Yes please. No semicolons. And please make it look a lot more like CoffeeScript than Java.


Am I in the minority? I'm madly in love with braces and semicolons. And I madly hate Pascal, Python & Ruby for the absence of them.


I have no particular affection for semicolons. But I do rather like braces - Python is ugly without them.


I've been convinced of "less typing means less bugs", so I prefer no brackets, no semicolons, short reserved words, etc.


> The language isn't done yet. Now is the time to let us know if you want semicolons to go away.

Seeing how Go, which "wasn't done yet" either, handled external feedback... I don't see why anyone would have hopes for dart.


I've been very pleasantly surprised at how open to feedback Lars, Kasper, and the gang have been with Dart. One of my big concerns joining the team was that it would be like Go which seems like a locked castle. So far, it's been much better than I expected.


Look at how C++ handled external feedback and what it became because of this.


There's kind-of a gap between accepting all external feedback (C++) and dismissing all of it (Go) don't you think?


The Go team has actually reacted to a lot of feedback. They just haven't made sweeping changes to the language willy nilly at request.


Personally I don't even have the tolerance for braces. You're already indicating blocks via indentation.

- Requiring a redundant mechanism to mark blocks

- Possibly introducing situations where code looks different from how it executes

is all very mid-90s. Python was the first to fix this, but YAML and CoffeeScript do too. In particular, Dart will have to compete with mindshare from .coffee, so it should at least be better than that.


Yeah, but with significant whitespace you create problems like "spaces vs tabs". I like semi-colons for the same reason that I prefer statically typed languages - exactness.


Sure, but when I hack Python (which is a lot, as it's my day job), I rarely encounter this (YMMV of course). It seems everyone knows not to use tabs in most languages; Python is no exception.


I want the semicolons to go away. Braces too.


Where's the best place to lodge our complaints?

I'm fine with the semicolons, but I'm not too keen on using underscore-starting identifiers to determine what is public and what is private.


The misc mailing list is the (strangely named) main one: https://groups.google.com/a/dartlang.org/group/misc/topics. Join and fire away.

> I'm not too keen on using underscore-starting identifiers to determine what is public and what is private.

That's also been discussed. It has more going for it than may be at first apparent, but it has detractors too.


_ for enforcing encapsulation is great, probably the one thing I've seen in dart that's exciting. simple way of giving an existing naming convention significance and cutting down on character overhead. i could see a whole line of languages expand on this idea.


The homepage links to this: http://www.dartlang.org/support/index.html.


Please allow type inference wherever possible:

I shouldn't need to write 'Point p = new Point(2, 3);'

var p = new Point(2,3); should be inferred as a Point


I prefer languages without semicolons, but, for me, semicolons aren't a dealbreaker.


they should not be mandatory


If they're not mandatory, they should be out altogether unless required. Making them just optional is a path straight to js-land weird edge cases.


That's not true. Go, Python, Ruby and Scala all have optional semicolons without insanity. It's just that JavaScript's specific semantics for semicolon insertion (instead of newline elision!) are batshit crazy.
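
For a concrete taste of the insertion behavior (a sketch; the function name is made up):

    function makeConfig() {
      return            // a semicolon gets inserted right here...
        { debug: true } // ...so this is parsed as an unreachable block, not a return value
    }
    makeConfig();       // evaluates to undefined, not { debug: true }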


> That's not true. Go, Python, Ruby and Scala all have optional semicolons without insanity.

Python (and Haskell) solve that issue by not using semicolons unless required. The Dart code in the examples is full of semicolons, which makes it an "optional semicolons language" akin to JavaScript more than a "possible semicolons language" akin to Python or Ruby.


> The Dart code in the examples is full of semicolons, that makes it an "optional semicolons language"

Well of course they do because semicolons are currently required, but that says nothing about its relationship to JavaScript's inane semicolon insertion semantics, and I'm pretty sure you're smart enough to know that. All Dart needs to do is:

1. Treat both newlines and semicolons as the same kind of token (call it whichever you like).
2. Pick an elision strategy. Go's or Python's work fine.
3. Remove all of the ";" from your source files.

No crazy JavaScript "rewind the parse and try again" insanity.


And pretty much all functional languages get by just fine without semicolons (Lisp, ML, Haskell, etc).


ML uses semicolons.


I use semicolons pretty heavily in Lisp. ;)


Haskell has curly braces and semicolons.


To be honest though, small syntactical differences like that are something that is meaningful only for novice-to-intermediate level programmers. Semicolons, braces, tabs-vs-spaces - classical examples of bikeshed bickering.


No, they're not – nor is tabs-vs-spaces (convention) in the same category as semicolons (syntax). It's ergonomics; fewer superfluous entities and less boilerplate make it easier to express ideas.


I have actually been noticing that I seem to prefer semicolons and braces in my code. I wonder if something about them makes it easier for my eyes/brain to delimit the content on the screen. Perhaps "easy context-free-grammar parseable" code is easier on the brain than "hard context-free-grammar parseable" code. Having written a CFG for a Python dialect for one of the classes I took, it was a little tricky the first time doing it. Sure, it's not rocket science, but the fact that the semicolon is easier to implement may have some connection with easier identification of delimiters on the screen.


Awesome, your comment is a perfect epitome of the point I was making :) Convention/syntax is completely orthogonal to that point - the point being that experienced programmers don't notice the semicolons, or braces vs 'End If' (OK, I realize I'm treading on thin ice with that one...) - once one reaches a certain fluency in programming and reading code, one transcends minute details like that.

It's like learning how to read, or learning how to read a new script. At first, you focus on the letters, composing words or sentences. When you get more experience, you read ever-larger blocks of text at once and you no longer need to read aloud or read aloud in your head; you can read whole lines or several lines at once and immediately transfer the concepts embodied in them into your mind (this is effectively how people read - chunks of text at a time; it's also why it's easy to read text where the letters of all words are scrambled except the first and last one, a feat that is impossible when reading letter by letter or even word by word).

Anyway, in reading/writing code it's the same - at some point you no longer see a line with statements, you see a block with initializations, or no longer a set of if/then or switch statement, you see a jump table. Once you get to that level, dollar signs to denote variables disappear from the mental model you have, because you no longer see tokens - you see variables (same with semicolons, braces, most indentation, etc.). That's where the comparison came from.

(I also think that that's where eloquent code comes from - it's from programmers who make an efficient and recognizable translation from high-level concepts into the mechanics of the programming language, even if there is no explicit support for such concepts in the language).


I am such an expert at reading English that I no longer think in terms of individual letters. Instead I think in words, phrases, concepts, and lines of argument.

But a crappy typeface or color scheme can still give me a headache.


Bullshit elitist crap.

No one can argue with you because you just claim knowledge superiority that can't be confirmed or denied.

People care about this stuff. I don't care if you don't; if other people do, then it's an issue.


I prefer semicolons too. I know it's annoying when you write a piece of code, go test it, an error occurs, you go looking for it, and find out that there's a missing ;

Besides that, it's really easier to read code with semicolons, once you get used to it.

Also... Flash has optional ;


Which languages? JavaScript and Go are both counterexamples. In Go the lexer (sic) determines where a line ends, so if you write

    if (a == b)
        && (c == d) 
it will guess a line end after the first closing paren. WTF?

Python is a totally different beast, and I wouldn't include it in the discussion, because it also uses whitespace for block grouping.


There's no guesswork involved. The carriage return implies a semicolon. The rule is very simple and obvious in use (to me, the code you posted looks wrong - and it is).

http://golang.org/doc/go_spec.html#Semicolons


This was my first thought, as well. Looks like I will still go with another layer on top like CoffeeScript to actually write my scripts. Curly braces for blocks are something I don't see the need for anymore, either.


Does having semicolons help with minification (which is common practice for sending JavaScript over the web)? Although, thinking about it, replacing ";" with "\n" would be the same number of bytes.

The only (contrived) advantage of using semicolons that I can think of is to avoid potential problems with line wrapping. For example, some mail clients add new lines after 80 characters, which would be more likely to break Dart code than JavaScript.


I am most excited about the concurrency model!

"Concurrency is supported via actor-like entities called isolates. An isolate is a unit of concurrency. It has its own memory and its own thread of control. Isolates communicate by message passing (10.14.4). No state is ever shared between isolates. Isolates are created by spawning (10.11)"

http://www.dartlang.org/docs/spec/dartLangSpec.pdf

I wonder how they compile this down to Javascript?


That doesn't sound so different from WebWorkers, which you can already use from javascript in newer browsers. https://developer.mozilla.org/en/Using_web_workers


The model seems slightly different - don't WebWorkers share state?


No, just message passing (notionally via JSON).
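
Roughly, on the main-thread side it looks like this (a TypeScript sketch; the file name and payload are made up):

    const worker = new Worker("worker.js");
    worker.onmessage = (e: MessageEvent) => console.log("from worker:", e.data);
    worker.postMessage({ numbers: [1, 2, 3] });  // data is copied (structured clone), never shared

    // and inside worker.js, something like:
    //   onmessage = (e) => postMessage(e.data.numbers.reduce((a, b) => a + b, 0));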


Agree, it would be more interesting to know if it has some kind of built-in parallel support.


Well, you can implement pretty much everything you just described as a library. The only thing missing is preventing memory sharing, but that you can sort out at compile time.

An interesting question is if this means V8 is gaining threads.


V8 with threading? interesting.


V8 has isolates already. They are intended to be used for implementing WebWorkers.


If you're going to go to the trouble of creating a JavaScript replacement, making something more like Java is the wrong way to go IMO. CoffeeScript is a lot more palatable.


CoffeeScript basically simplifies (and does a pretty good job, imo) existing JavaScript language features. Whereas, Dart adds classes and interfaces and a few other neat things.

Admittedly, however, I do cringe whenever I see things like x.compareTo(y) or X x = new X();


Classes, types, better syntax - it's all available in JavaScript today, in its various 'forms' (CoffeeScript, Mootools, Closure, etc.). Dart isn't a bad language, it just doesn't bring anything exciting to the table (or I fail to see it). Seems like it's ECMAScript 5 but early-bound and with Java syntax. And it runs only on browsers that already support ES5 - this one is really underwhelming.


As a sidenote, it does seem they have user-defined operators, just not the spaceship <=> for compareTo.


The one thing I don't understand is why people work on new languages for the browser instead of giving us a platform (virtual machine / intermediate representation, whatever) to implement languages against.

I mean here in this thread we are already seeing it. Some people like semicolons, some people don't. Some people like a prototype based object system, some people like a class based system. And on and on.

I think what would really benefit us is a platform (like a JVM for the browser with a great API replacing the dom) where people could implement languages against. we would get

1) A lot of competition between languages (see what is happening on the JVM right now... Clojure, Scala, Groovy ... you name it), hopefully giving us better languages. JavaScript has its strengths, but could you imagine that in such an environment a language that doesn't allow you to test whether or not something is a string would last very long?

2) People could make their choice and be happy. Then you can program the server and the client in the same language, which is pretty much the main argument for server-side javascript.

Let the languages compete instead of giving us one and now another language for the platform!


I think the Dart VM will be a nice target for compilation from other languages. In general it's easier to put a typed language on an untyped runtime than it is to put an untyped language on a typed runtime.

Before Google I worked on an academic project that built a small Smalltalk-like VM that was also a compilation target for something that was very close to unthreaded Java. Worked pretty well.


> I think the Dart VM will be a nice target for compilation from other languages.

This would indeed be nice, but it seems that Google wants to push the language more than the VM. I quite get this, because saying "you can now run your favorite language in the browser" seems much, much more attractive than saying "look, we made programming language 1001 and this is what you people should be using now!"

> In general it's easier to put a typed language on an untyped runtime than it is to put an untyped language on a typed runtime.

I have no knowledge of how one would do this but a lot of dynamically typed languages exist for the (I am assuming) statically typed platforms such as the JVM and CLR now.


They do, but it's a heck of a lot of work. The CLR has the DLR (Dynamic Language Runtime) that does a lot of the work for you now, but it's still a pain.


This is what JavaScript is trying to be, with the latest revision. They're explicitly trying to make it a better target language for things like ClojureScript and CoffeeScript.


Well yes, but I think the motivation here is more 'since javascript is the only game in town, we'll have to compile to that'. I am not aware of any changes made to Javascript to make it easier to compile to it. And I don't think anyone would choose Javascript as their runtime environment if they had a choice.

I like Clojure and have dabbled with clojurescript but the workflow feels pretty hacky and it is clear that this is not what javascript was made for.


Do you mind elaborating on the hackish parts of the ClojureScript workflow? It's alpha software at the moment, but we can always use some feedback.


Hello Fogus. What an honor! I loved your book ;)

Well, I am certainly not complaining, I think it is mostly just that it needs to be compiled to javascript and that you need to have the right dependencies in place. If you are compiling from the command line it takes a long time (I know this is caused by the jvm startup time, but it still is a little annoying). I know there are workarounds such as cljs-watch but setting everything up right definitely takes some time. The browser-based Repl is awesome but sometimes breaks, I don't always understand why. I do think there could be some improvements in terms of tutorials but those will surely come (I might write one...)

So if you are using it on a daily basis then all of this probably doesn't matter but it does take a bit to get started. And I think to some part this reflects the fact that, well, Javascript was not really intended to be a platform to be compiled against, it is just used that way because it is the only platform in the browser.


>>> I think to some part this reflects the fact that, well, Javascript was not really intended to be a platform to be compiled against, it is just used that way because it is the only platform in the browser.

I'm not so sure this is the case as it is VERY easy to get started with CoffeeScript and have it watch files for changes (at least on a mac with homebrew).

I've experienced general difficulty getting started with Clojure compared to something like Python or CoffeeScript. I suppose I haven't spent considerable time on it (Clojure is just a hobby for now) but the problems I've run into so far are an out of date version on homebrew (1.2 rather than 1.3) and difficulty just getting a usable REPL going (where I can edit lines and use the up-arrow). I'm very attracted to the aesthetics of Clojure but there is so much friction just to get started that I've only been using 4clojure (try-clojure has been non-functional every time I've tried it). I think this is an area that definitely needs to be addressed for Clojure/ClojureScript to get the mainstream traction that node.js/JS/CS enjoy.


A question to ask yourself: If this had been Microsoft creating an IE10-only language to replace JavaScript, instead of Google, would you feel any differently about Dart?


Definitely.

But if they released an open-source version of IE, started shipping IE frequently for Windows, Macintosh and Linux and released the Dart language spec and implementation under an open-source license... well, then I'd like it a lot better.


There's probably still a few managers left at Microsoft that can recall VBScript vs. Javascript.


Probably the biggest difference between JS (and whatever revision to it you like) and Dart is that Dart is early-bound. That means you can't change the list of methods or fields present on an object at runtime.
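
For contrast, here's the kind of runtime mutation JavaScript allows and an early-bound language rules out (a sketch; the names are made up):

    const user: any = { name: "Ada" };
    user.greet = () => "hi, " + user.name;  // add a method after the object exists
    delete user.name;                       // remove a field at runtime
    // An early-bound language fixes the set of members when the class is declared.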

It doesn't really make sense to ask "Why should I switch from a language I already love?" If you're already productive and love what you're using, then you should probably keep using it.

There are people who want a language that is more modular, scales better than JS for programming in the large, for IDE tooling, or ahead of time compilation, etc. Take a look at the Dart spreadsheet Total for example (https://code.google.com/p/dart/source/browse/branches/bleedi...) I find this code a lot cleaner than JS versions I've seen.


I was liking Dart's syntax but the early static binding is a deal breaker for me. Javascript would've been stuck in neutral if it didn't support mutable objects.


Is this effectively GWT 3.0?

The project page is very unclear, so I ended up on Wikipedia instead. Wikipedia actually has a leaked memo that seems to do a really good job describing the purpose of the language.

Specifically, the language is meant for three environments: server-side, compiled to JavaScript client-side, and fast native client-side once there is browser support. (The main goal is better performance on the client-side, which is deemed to be very difficult with JavaScript.)

However, the language looks so Java-like, one wonders why they didn't just use Java and extend GWT with a native Java client in Chrome. Did it just not make sense to bet the farm on Java when Oracle controls it?

Also, what does the "structured" in "structured web programming" mean?


Dart doesn't really share much with GWT. The basic language semantics are dynamic, not static. The types are more or less assertions and documentation that generate runtime warnings, but do not produce compile-time errors that refuse to let the app run at all.

Dart's a lot more like CoffeeScript than Java.


Only, it looks worse than CoffeeScript from a readability point of view... just a first impression.


> Also, what does the "structured" in "structured web programming" mean?

I would guess it's a reference to the optional static typing. There's a growing movement behind static typing and functional programming that resembles the movements behind dynamic typing and OOP of a decade ago...


I'd guess it's also a lot about class-based vs. prototype-based. The latter often leads to messy architectures. It might be an intrinsic characteristic of prototype-based design, or because 99% of people learn OOP with a class-based language, I don't know.


The "hello world" example is incredibly uninformative. What does that have to do with "structured web programming"? Where is this printed out on a web page? Does the console output there correspond to the html output? Or some other console?


It's actually runnable if you click the "play"-icon in the top-left of the code sample.

The "Hello, Dart!" is written to a console emerging from underneath the code sample.

https://code.google.com/p/dart/source/browse/branches/bleedi... is a more DOM-oriented version of the same program.


I realize it's runnable -- what I mean is that it's not clear what the console that appears underneath refers to. Is that "echoed" out to the page? Or something different?


Yeah, there are more links to code samples here: http://www.dartlang.org/samples/index.html


Hmm strange, I can't see the "play"-icon using FFX 3.6.23


Dart currently compiles to JS that runs on Firefox 4+.

http://www.dartlang.org/docs/technical-overview/index.html#h...


Yeah, exactly. This "hello world" uses a "print" function. What would such a print function do in a normal web app? Write to the console or something?


I was hoping for something like the Dylan language.

http://en.wikipedia.org/wiki/Dylan_programming_language

It was created in the 1990s and it still looks innovative, even compared to a lot of the new 'hot commodity' languages like python and ruby. I knew it wouldn't happen.

If a big corporation like Google put money into something like that, I think it would dominate the market.

But what they have delivered here isn't even interesting.

(The other option for something I would want would be a standardized 'web byte-code', so that you just re-target a compiler back-end to emit it, and your language can be used as if it were JavaScript.)

This is neither of those things and I am sorely disappointed.


Yes, man. Dylan ftw. The best language that does not have s-exp syntax. There are still some guys trying to get a good implementation out.

Dylan could have been what Objective-C is now. It was an awesome project; it just had bad timing.


> But what they have delivered here isn't even interesting.

Not a big surprise. PL specialists (let alone theorists) have no place at Google. Just look at the previous language to come out of Google... Go.


I like it... It's kind of like schemey-ruby.


Creating an object:

    Greeter greeter = new Greeter()
or

    var greeter = new Greeter()
Defining a constant:

    static final myConst = 1

Plus there are classes, interfaces... It's just Java?


> var greeter = new Greeter()

Well, it's hard to get that much better than that.

> Plus there are classes, interfaces... It's just Java?

Java didn't invent classes. It certainly looks a lot like Java, but it's more Smalltalk under the hood.


> Well, it's hard to get that much better than that.

    var greeter = Greeter()
there, no need for `new`.

Alternatively,

    var greeter = Greeter.new()
or

    var greeter = Greeter new
if it's not acceptable to have arbitrary callable objects.

> it's more Smalltalk under the hood.

The speed and wide-spread use of Smalltalk with the regularity, flexibility and terseness of Java's syntax? That sounds like a recipe for success.


> there, no need for `new`.

Agreed. I didn't say you couldn't get better at all, just not much better. I'd personally be in favor of ditching new (or conversely making it a method on the class).

> The speed and wide-spread use of Smalltalk with the regularity, flexibility and terseness of Java's syntax?

Oh, you. You may be right. Making a new language is crazy, especially if you're aiming for widespread adoption. Still, you have to try, right?

Dart is definitely more terse and more flexible than Java. Maybe not perfect, but it's heading in the right direction. Dart is no DSL-friendly beauty like Ruby, but I think:

    var odds = [1, 2, 3, 4, 5].filter((i) => i % 2 == 1);
is pretty tolerable without being novel enough to scare people that have never programmed outside of a curly brace language.


> Agreed. I didn't say you couldn't get better at all, just not much better.

I strongly disagree; it does away with a keyword and magical syntax, and that is much better.

> Making a new language is crazy, especially if you're aiming for widespread adoption.

That's not what I'm saying, I like new languages, and I like interesting new languages, but it pains me to see you defend (or even work on?) Dart, which so far looks even worse on the programming-language-progress continuum than Go does. It barely makes any progress on the very language it's supposed to replace.


Well, to be fair, the language it is replacing for the most part works; Dart is mostly just patching up problems: the ability to optimize for startup performance, the ability to program in the large, and the ability to write tools easily (IDEs, etc.).

Most new programming languages are really just variations on those that came before them. You've got your LISP-derived, your Forth/stack-derived, your APL-derived, your ML-derived, etc.

IMHO, it's hard to consider a programming language in isolation. As far as 'feel' or 'productivity', it really comes down to the ecosystem, the runtime and tools available. Here I think is where Dart hopes to excel -- make a JS-like replacement that supports a better runtime and tools story.


> Well to be fair, the language it is replacing for the most part works

They clearly disagree, since instead of taking part in cleanup/improvement efforts (e.g. Harmony) they decided to build a brand new language from scratch.

> Here I think is where Dart hopes to excel -- make a JS-like replacement that supports a better runtime and tools story.

I fail to see why that would happen; from what I've seen so far there's little in Dart which is a significant improvement for runtimes. And as far as tooling goes... well, Google's history means they're unlikely to be the ones handling that. Who's going to build tooling for Dart, and why would they have any reason to make that investment instead of improving their JS support further, or adding CoffeeScript support?


Google built the Android Development Environment for Eclipse and the Google Plugin for Eclipse (AppEngine and GWT integration), and the Dart team has already released an Eclipse environment and a compiler. So the history seems to be that Google supports tools.

Google is taking part in JS cleanup efforts, but it's a large company with 20,000+ employees, so it has reasons to pursue many different paths: Closure, GWT, Dart, Go, etc.

As for runtime improvement, if the team who built one of the best/fastest JS VMs (the V8 team) says the language semantics allow them to do better, I think we should listen.

The early binding alone allows for substantial improvement. If all classes are early bound, then you can do significant dead-code elimination, you can know object layouts immediately on load, you can detect effectively non-virtual methods immediately, and so on. You can pretty much snapshot important information that you normally have to discover each and every time you load the application.


greeter := Greeter()


> Well, it's hard to get that much better than that.

Ok, fine, but why have both forms?


Sometimes you care about the types, sometimes you don't.


Looks a lot more like James Gosling, actually ....


I had a comment above saying something to the effect of "not everyone can be Ken Thompsons" and then edited it out, sorry.


It is. Google is betting heavily on Android. Expect more Java-style languages.


The thing I was really hoping for from the dart site was an "about" section, or a "why" section, explaining about the language and their goals, instead of just jumping to code examples...


http://www.dartlang.org/docs/technical-overview/index.html has some information on the design goals.


That's a fair point, but it's pretty annoying when you read about some cool new language but you can't find any good sample code anywhere. anic and newspeak come to mind.


For the record, anic was and still is pure vaporware. Sample code exists but there is no working compiler or interpreter to run it...


Wow, guys. Google is not aiming at a next-gen Haskell or ML. They are trying to create a less cluttered Java and a stricter, more performant JavaScript. Judge a solution in the context of the problem it is trying to solve. If they had gotten Simon Peyton Jones to design this, it would be somewhat intimidating for your average webdev [no disrespect].

I'm somewhat disappointed that they insist on a 90's C syntax. It's like Python/Ruby/Coffeescript never existed.

C'mon Google, you can do way better...


I think this comment gets to the heart of the matter. People want something different. They want a Haskell. How many blog posts about Perlis languages have we seen in the past several months?

Reading these comments I've come across the words: boring, uninspired, underwhelmed...

I couldn't agree more.

JavaScript's prototypal object model can at least keep my attention.


Pretty disappointed with the lack of risk here. Conceptually it seemed like something with a ton of potential, but the minute I saw the tutorial shoehorning the worst of Java and JavaScript into something new, it was a non-starter for me... better to stay insulated a layer up writing CoffeeScript and wait for a compiler.

Things I've noticed so far that I don't like: get/set proliferation, semicolons, braces, positional args, var for instance vars, int/num dinostyle pseudo-primitives.

It seems like they tried to do some cool things, but failed in execution. Like the "Greeter.withPrefix(this.prefix);" one-line constructor definition, where this.prefix is a "shortcut that assigns the parameter's value to the instance variable prefix". Eliminate needless code... but instead of using a dedicated symbol to call attention to the sugar, they used "this." WTF... did they specifically want to kill scannability?

Things I've noticed so far that I like: using "_" as a prefix to enforce encapsulation.

Enforcing naming conventions as syntax seems at least somewhat Python-inspired, in the sense of extending the "whitespace is significant" design to "naming is significant"... wish there had been more of that.

If someone other than Google were doing this it would be dead in the water already.


Funny to read all these comments just a few minutes after the announcement. I bet nobody had enough time to explore the language in any significant way. Yet very strong opinions already.


On the contrary, I was able to go through all of the tutorials, follow 2 separate live-tweetings, and start looking into the source before the keynote was over; I'd imagine many here did the same.


Two things I don't like about Dart, just from reading the tutorial:

1. Constructor syntax requires repeating the name of the class when you define the constructor. So if you change a class name, you have to carefully find all your constructors and rename them.

2. Instance variables in classes seem to be defined at the same level as class methods. Yet one is an instance attribute, and the other a class attribute. So you have irregular scoping rules based on type (data vs. function). This scares me as it indicates functions are "special" and not first-class types in the language.


A face-washed JavaScript with classes, interfaces and such stuff. I was expecting something more syntactically close to Go, or at least to Python.


Meanwhile, I've been slowly eliminating my usage of `class` from all of my CoffeeScript code. I've found that my code is much more reusable, far less prone to `this` bugs, easier to reason about, etc. I used to be a huuuge OOP guy, but now I rarely find a use for classes...


Prototypes are fine for a language like JavaScript. They keep things simple. I'd prefer that CoffeeScript (and Dart, of course) implemented a decent syntax for prototyping, replacing the JS mess, rather than trying to disguise them as leaky and half-featured classes.


    > leaky and half-featured classes.
This is a common willful misconception. CoffeeScript classes are isomorphic to JavaScript prototypes -- they do precisely the same thing. In addition, there is a shorthand syntax for prototyping objects, if that's more your style:

    Dog = -> 
      ...
       
    Dog::bark = -> 
      ...

    Dog::run = ->
      ...
Note that the above will produce the same result as:

    class Dog
      bark: ->
        ...
      run: ->
        ...


I didn't mean that CoffeeScript classes don't cover prototypes, but the other way around: prototypes can't implement the whole typical class abstraction (encapsulation, etc.). So I think CoffeeScript would do better implementing a completely new syntactic abstraction over prototyping that fixes all of JavaScript's awkward ambiguities than adapting classes to prototypes. But it's still OK; I like and use CS classes.


There are performance reasons for that choice: prototypal inheritance + constructors are way faster than anything else at the moment.


There was a recent article on HN about Self and its prototype patterns. Looked far superior to Javascript's. But even still, prototypes only seem primarily useful if you want member-access as dot-notation or polymorphism by type. When preferring composition over inheritance, most polymorphic-by-type situations either go away or can be trivially replaced with maps.

That said, CoffeeScript provides another interesting use for them: Simulating dynamic binding. Basically, the trick is that `this` becomes the dynamic binding context and `@` represents a dynamically bound variable. You use `fn.call @, x, y, etc...` if you want to pass the dynamic binding to callees. Nested bindings can be established with `Object.create`
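
In plain JavaScript/TypeScript terms the trick looks roughly like this (a sketch; the names are made up):

    function render(this: { theme: string }, text: string) {
      console.log("[" + this.theme + "] " + text);  // `this` acts as the dynamic context
    }
    const ctx = { theme: "dark" };
    render.call(ctx, "hello");            // pass the binding explicitly to the callee
    const nested = Object.create(ctx);    // a nested binding via the prototype chain
    nested.theme = "light";
    render.call(nested, "hello again");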


I use classes in CoffeeScript, but avoid 'this' bugs by knowing that:

- jQuery examples like to abuse 'this'.

- I should always explicitly provide the event as a parameter in the callback.

So '@' is always the object, and 'event' is whatever started the method.


I have used `class` less and less in my Python code, and when I still use it, it is as little more than a struct.

I never use inheritance.

Since I started to do this, my code has become much simpler and cleaner.

