Hacker News

This presentation saddened me.

The presentation focused on what it perceived as missing features: structs (seriously?), classes, modules, syntactic sugar, macros, etc. But the huge gaping holes in JavaScript are not missing features. They are fundamental errors in the language. Things like ==, numbers as strings, eval, incorrect definitions of false, semicolon insertion, and -- heaven help us all -- improper lexical scoping.
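A few lines of plain JS illustrate the kind of quirks meant here -- == applies implicit coercions and is not even transitive:

```javascript
// == coerces operands, so it fails to be an equivalence relation:
console.log('' == 0);    // true: '' coerces to the number 0
console.log(0 == '0');   // true: '0' coerces to the number 0
console.log('' == '0');  // false: two strings, no coercion, yet both "equal" 0

// and several distinct values all collapse to false:
console.log(Boolean(''), Boolean(0), Boolean(NaN)); // false false false
```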

Language designers tend to incrementally add junk to languages until they are complex, unwieldy monstrosities like C++ or Java. Rarely do they fix fundamental errors in the language, because that would require backward-incompatible changes. So they stick to adding lipstick to the pig. But JavaScript isn't like other languages: its fundamental errors are so glaring, and impact the language so negatively, that the benefit of jumping to a "JavaScript 2.0" massively outweighs its incompatibility disadvantages. That's why we see languages like CoffeeScript cropping up despite all their downsides, notably debugging.

The class bit particularly made me sad: JavaScript has a perfectly cromulent, even elegant, object model in the form of prototypes. But a variety of syntactic sugar hacks, weird constructor stuff, and general desperation to be a class-based language have sullied what would otherwise be an elegant mechanism. The solution appears to be: move more towards classes! Thus we still have all the language hacks, and two generally incompatible object models to boot. Plus structs!
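For what it's worth, the prototypal model being praised here needs no class machinery at all -- objects simply delegate to other objects:

```javascript
// Prototypal delegation without any class or constructor syntax:
const animal = {
  describe() { return this.name + ' makes a sound'; }
};

// dog delegates to animal through its prototype link
const dog = Object.create(animal);
dog.name = 'Rex';

console.log(dog.describe());                        // "Rex makes a sound"
console.log(Object.getPrototypeOf(dog) === animal); // true
```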

Somehow after reading this presentation, I was struck with Yoda's admonition: Eich seems to be looking to the future, never his mind on where his language was.



You seem to misunderstand "structs" -- see http://wiki.ecmascript.org/doku.php?id=harmony:binary_data, this is an extension of WebGL's typed arrays, which are already in all the new browsers (IE10 too).

As for implicit coercions, I enjoyed Gary Bernhardt's "Wat", referred to it, and at past talks even mocked along with it. At Strange Loop, I went through each "Wat" in the "Wat Secrets Revealed" slide series (use the down arrow when you see it greyed in).

Of course (!) I regret the implicit conversions that make == not an equivalence relation with disparate types on left and right (NaN is a different story: blame IEEE754). Who wouldn't? As I said at Strange Loop, some colleagues at Netscape were hot for lazy/loose number/string matching, and "I was an idiot! I gave them what they wanted."

There may be hope of fixing even ==, if we get macros done right. You would opt into macrology redefining == to be === or whatever you want. But this is in the future.

And that's the point: JS must grow compatibly, given its cursed/blessed position in the web. There is no other option that costs as little incrementally. True, we could paint into a corner. I don't see that happening, not with the vibrant and very smart JS developer community (communities, really) with whom we are working.

On a practical level, I once ran into someone who used to work at IDEO and became a JS hacker in the course of doing a startup. I asked him about == quirks and the like. He just shrugged and said "you learn what to avoid and move on." That is the voice of practical wisdom (until such time as macros help fix the quirks for good).

So my advice is cheer up!


> You would opt into macrology redefining == to be === or whatever you want.

Oh, please let the syntax for this be something like

  let == = ===;
:-P


Let's not run an extra let statement if we don't have to!

    if (== != === || == !== ===)
      let == = ===;


Shouldn't that be var to get the right scope?


May be off-topic, but: is it just me, or are macros just a new way to get confused while reading JavaScript? Introducing language-foreign syntactic constructs seems superfluous and confusing to me -- that is the job of transcompiled languages like CoffeeScript.


Reading the macros first helps. They must be defined at top of program or module, if I recall sweetjs.org's design correctly, for the staged hygienic expansion to work well.

Aside from that, you're right. But JS has higher order functions and objects with ad-hoc methods, so it can be used according to many paradigms, which can make it hard for a reader unfamiliar with the dominant paradigm in code at hand.

This is not a deal-killer for macros, although with sweet.js as the prototyping tool, your assertion about "This is the job" is satisfied. Sweet.js works with node to do AOT macro expansion at present. There's effort to integrate it in the browser too, but this will be easier with ES6 module loaders (not yet prototyped in SpiderMonkey or V8).


Of course, as in every aspect of understanding -- here, of source code -- it is important first to learn the context, here the macro definitions. My concern is that this will impose more than just a paradigm: it will impose new syntax, which could effectively ruin the readability of JavaScript source.

The macro syntax is definitely not simple, and it could get really complex for more elaborate syntactic definitions, rendering the source much less readable. Are the overall benefits of introducing macros to JavaScript really worth the costs in readability? And does the effort to integrate macros into the browser mean that it'll be possible to evaluate macros at runtime?


> The macro syntax is definitely not simple, and it could possibly get really complex for more elaborate syntactic definitions, thus rendering the source much less readable.

True, but the same can be said of any API. Reading the definitions can help but for both complex macros and complex APIs built today out of just functions and objects you need to document your abstractions. Macros don't change this, they just give you another abstraction axis (syntactic) to work with.

As with most things it just depends on how you use it. Sure you can abuse macros to make tons of crazy, undocumented, hard to understand syntactic extensions that destroy readability. But you can already do that today. Used wisely macros can increase readability, used poorly they can decrease it.

> And does the effort to integrate macros into the browser mean that it'll be possible to evaluate macros "runtime"?

Not sure what you mean here. By definition macros are expanded at compile time (well, parse time really). The browser doesn't change this.


I meant compile time, of course -- thank you for the clarifications. I'm eager to see how this turns out.


All abstractions can make your code unreadable; the key, as always, is to create good abstractions and document them clearly. Macros are syntactic abstractions. It's just as important to document them as it is for functions or objects. Of course, if you just have a little local macro that you're using for convenience, looking at the implementation may be sufficient. But when you write a macro that you want to share from a module, rather than requiring your clients to read the implementation, you document the syntax and the semantics, just like you would if you were writing a separate language.

But by having macros directly in JS, instead of having to use a whole language that compiles to JS, you can combine syntactic features from different sources. For example, right now there's no way to use one feature you like from CoffeeScript with another feature you like from TypeScript. You just can't combine them. But with macros, you could import two different syntaxes from two different libraries and use them in the same code.

On top of that, if we actually had macros in a future version of the standard, you wouldn't even have to precompile the macros offline, and you wouldn't need a preprocessing step at all. (For latency purposes, you might want to preprocess macros offline as an optimization. But for development, not having to do a preprocessing step is more convenient.)

Dave


So this is all in the holy name of making JavaScript the assembly language of the web? Making it possible for every JavaScripter to write "his own" JavaScript syntax definitions, meaning that I (as a contributor or just a casual observer) would have to read his whole collection of macros before I could begin to understand the code?

I don't think this can be compared to APIs, as those still follow the regular syntactic definitions -- this will be like reading a completely new language every time I read a different repository. (Of course this is a worst-case scenario, as I imagine that many macros will be shared across several projects, but still.)

Is all hope for writing vanilla JS gone? And aren't macros kind of going in the opposite direction from the ES specs? There would be no use for many of the ES6/7 features, as they could just be mocked up in macros.


"So they stick to adding lipstick to the pig. But JavaScript isn't like other languages: its fundamental errors are so glaring, and impact so negatively on the language, that the benefit of jumping to a "JavaScript 2.0" massively outweighs its incompatibility disadvantages."

"Jumping to a JavaScript 2.0" is the hard part. Any successor to JavaScript has to have a compatibility story with all the JavaScript code out there, as well as the DOM APIs and so forth. Either you ship two VMs, in which case each page has two incompatible worlds (yet both can access the DOM -- think about the massive complexity this entails), or you have to think about language versioning, which requires at the very least a functioning static module system.


The biggest challenge with multiple VMs is doing GC over both: cross-VM GC is a current research topic, and nothing has shown to be doable without a noticeable perf impact.


This presentation focused on work that's more recent. Improving the scoping story with let/const is one of the things that TC39 agreed on relatively early, though:

http://wiki.ecmascript.org/doku.php?id=harmony:let

http://wiki.ecmascript.org/doku.php?id=harmony:const

http://wiki.ecmascript.org/doku.php?id=harmony:block_scoped_...

Some of the new-function/lambda-syntax proposals even support Tennent's Correspondence Principle to some degree.


We gave up on TCP for => function syntax. I think Java reached the same conclusion.

It's really hard to retrofit TCP onto a C-like language with statements as well as expressions. At best you please only some programmers and confuse others, at a fairly high implementation cost (e.g., return from within a lambda called after its containing function deactivated).
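A small illustration of the partial-TCP point under discussion: an expression-bodied arrow keeps the enclosing `this`, so wrapping an expression in one preserves its meaning, while wrapping it in a plain `function` does not (the `obj` example here is hypothetical, for illustration only):

```javascript
const obj = {
  x: 42,
  viaArrow() {
    // Wrapping the expression this.x in an arrow changes nothing:
    // arrows have no `this` of their own.
    return (() => this.x)();
  },
  viaFunction() {
    // A plain function gets its own `this`, breaking the correspondence.
    return (function () { return this && this.x; })();
  }
};

console.log(obj.viaArrow());    // 42
console.log(obj.viaFunction()); // undefined
```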

See https://mail.mozilla.org/pipermail/es-discuss/2012-March/021... and especially https://mail.mozilla.org/pipermail/es-discuss/2012-March/021....


Thanks, it's nice to hear it from the horse's mouth.

Does it make sense to think about TCP as being a matter of degree? Would it be correct to say that the number of constructs that would break TCP is fewer with let, const, and => than with earlier versions of ES?

I think a "partial TCP" would matter for manual refactoring, if not for (e.g.) a future macro system. On the other hand, perhaps it's more confusing to mention it if it's not total.


Saw dherman's comment just now; he and you are right that there is a spectrum of TCP, and if you use a subset of JS to write expression-bodied => functions, you get the refactoring property that people associate with TCP.


Though the expression version of => comes pretty close to TCP.

Dave


I think you're being too hard on JavaScript there. Type coercion, ASI, and function scoping are all language features, not errors. Likewise, eval is not an error; it's dangerous, but it's also a feature (and one that nearly all dynamic/scripted languages provide). As Crockford said, "JavaScript is the only language people feel like they don't need to learn to use." That statement alone describes why so many developers end up scratching their heads at stuff like `1 == "1"`. That's not an error or an oversight in the language - it's a feature that the developer didn't realize he/she was using. It's hard to make it 10 minutes into ANY educational material on JavaScript without an explanation of equality (==) vs identity (===) comparisons.
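A quick illustration of the point: `1 == "1"` is specified behavior, not an accident -- the string operand is converted with ToNumber before comparison, and the same rule explains less obvious cases:

```javascript
console.log(1 == '1');   // true: '1' is converted to the number 1
console.log(1 === '1');  // false: different types, no coercion
console.log(1 == '1e0'); // true: '1e0' also converts to the number 1
```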

Lastly, the new class stuff doesn't actually change the inheritance/object models in JavaScript. It's syntactic sugar on top of prototypes, and the "weird constructor stuff" is quite analogous to existing constructor functions. For example, from the [ecmascript wiki](http://wiki.ecmascript.org/doku.php?id=strawman:maximally_mi...): "Class declarations/expressions create a constructor function/prototype pair exactly as for function declarations."
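To make the "exactly as for function declarations" point concrete, here is a sketch using the proposed class syntax (in its eventual ES6 form) next to its constructor-function equivalent -- both produce a constructor/prototype pair:

```javascript
// The class form is sugar: a class is still a function underneath.
class A {
  greet() { return 'hi'; }
}

// The equivalent hand-written constructor/prototype pair:
function B() {}
B.prototype.greet = function () { return 'hi'; };

console.log(typeof A);               // "function"
console.log(new A().greet());        // "hi"
console.log(new B().greet());        // "hi"
console.log('greet' in A.prototype); // true, same shape as B.prototype
```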

I'm clearly a bit of a fanboi and obviously biased, so take all of this with a grain of salt - but most of these new language features are a good thing. I'm glad to see JavaScript evolving in big ways. It was getting boring watching new editions come out with nothing more interesting than Array.prototype.reduce in them. Especially when you consider server-side contexts like Node.js, stuff like typed arrays, generators, Maps, etc. are welcome additions.


Eval, sure, but the way type coercion works in JavaScript is an error. Boxing in JavaScript is an error. You can figure them out, you can code around them, but they are bad and can't be justified. Not all coercion is bad, but some is.

Have you seen code that uses .valueOf() for anything good or useful? new String()?
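For reference, the boxing quirks in question -- `new String` produces an object, not a primitive, and `valueOf` is what == uses to unbox it:

```javascript
const boxed = new String('abc');

console.log(typeof boxed);              // "object", not "string"
console.log(typeof 'abc');              // "string"
console.log(boxed === 'abc');           // false: object vs primitive
console.log(boxed == 'abc');            // true: == unboxes via valueOf
console.log(boxed.valueOf() === 'abc'); // true: the unboxed primitive
```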


In the pre-JIT-compiling JS VM days, I did see new String used intentionally to eliminate auto-boxing overhead on every method called on a big string.

For ES4 (after AS3), we tried eliminating boxing. This is overtly incompatible, a web-breaking change. It won't fly.

Java has primitives that can be autoboxed or explicitly boxed too, which is why JS has them. I was future-proofing for "LiveConnect", which shipped in Netscape 3 (Nick Thompson did the integration).

But I was also in a terrible ten-day hurry, so I found unboxed primitives easier to deal with in the first Mocha implementation.

If I did not have the "make it look like Java" and ten-day marching orders, I like to think I would have resisted the lazy-programmer second reason. But it's all water way under the bridge now.

Implicit coercions, e.g., '' => 0, were influenced by Perl 4. Nuff said!


Sorry, missed the Yoda misquote at the end, I must respond!

Yoda said "where you are" but you used "was". Possibly just a tense-typo, but it matters. Tracking where JS is is a part of the TC39 job, but only a part. It's easy to fall into a trap where we standardize only what developers can express given the language's current semantics and so miss the chance to extend those semantics.

But I'll take the tense-corrected bait: yes, I talk about the future. The past (including its most recent instantaneous sample known as "the present") is greedy. With no one to look ahead or synthesize ideas from compile-to-JS and other languages that win user support, JS will tend to stagnate, all else equal. Champions (not just me) must fight for a better future.

Stagnation on the web is a social ill. It costs developers tons of time. We know this not only from IE6, but from Android 2.x WebKit. Some not-disinterested parties might want to make use of a crisis of this sort, to force a "new JS" (Dart? remember WPF and WPF/E in the past) to be adopted for want of anything to relieve the stagnation.

Not me. The cost is too high, the lessons learned in JS will be half- or three-quarters-lost, and too much code and developer brainprint will be thrown away. I'm reminded of the XML Utopia that was planned in w3.org to save us from bad old HTML, before a few of us called b.s. and started the whatwg.org to do HTML5.

The web is an evolving system, JS is part of the web standards and must evolve too. Skate where the puck will be. Or to wannabe Yoda on ice: where the puck will be, you must skate!


> The web is an evolving system, JS is part of the web standards and must evolve too.

I don't see too many language gizmos in the presentation which reflect the requirements of web standards. Most of them are extensions to a language which desperately needs modification more than extension at this time. Evolution in a language is a matter of where you spend your development resources. &rest-args, weak maps, modules, and so on would be nice to have. But I would gladly sacrifice them to the gods to get rid of JS's global variable issues. It seems to me that ES is mostly building more and more features on top of a foundation of sand rather than taking a breath and revisiting how to reinforce the foundation.

Apple did this recently. OS X 10.6 (Snow Leopard) was an entire release that consisted of almost nothing but cleaning house. Few new features, just heavily revised internals. It's probably the most important release Apple has done in a very long time.

Now one can make the argument that fundamental fixes to long-standing language flaws are a challenging thing to produce, given the bulk of development work that relies on the old language. That's a different discussion, and one worth having. But moving forward with gizmos simply for the future's sake, without considering the current sad state of the language, is, I think, misguided. I would strongly urge the committee to take a step back, identify the twenty most problematic features of the language, and work out how to develop a "strict" version of the language that fixes those features yet retains interoperability with code files written in non-strict form. Then they can go back to adding new gizmos.

(BTW, the "was" is due to the original Empire quote). http://www.imdb.com/character/ch0000015/quotes


Ok, "was" -- but not "you", rather, "he" as in "where he was".

With "me" it's a question of "is". JS is used for purposes far beyond its original design limits. A victory condition and a call to evolution. ES6 is Ecma TC39's attempt to hit that target.


>Things like ==, numbers as strings, eval, incorrect definitions of false, semicolon insertion, and -- heaven help us all -- improper lexical scoping.

==, numbers as strings => Problem: Type Coercion
eval => Problem: Interpreted Language
incorrect definitions of false => Problem: Type Coercion
semicolon insertion => Problem: Language
improper lexical scoping => Problem: Not block scope?

>The class bit particularly made me sad: JavaScript has a perfectly cromulent, even elegant, object model in the form of prototypes.

Both forms of object creation are valid.

[Object.create] http://jsfiddle.net/X4Bxq/

[new] http://jsfiddle.net/UGTga/

Both take advantage of prototypes (the latter takes full advantage today, in all browsers; the former requires a hack where you lose your type information).

What is more important is that they are both awkward. So much so that most "JavaScript" programmers don't even use them. Those that do spend most of their time arguing about which one is more correct. This is a problem. Introducing 'class' would sort this out by providing an easier syntax for class creation and an end to the arguments.
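For readers who don't want to follow the fiddles, here is a condensed version of the two styles being contrasted, using a hypothetical `Point` for illustration:

```javascript
// Style 1: Object.create -- delegate directly to a prototype object.
const protoPoint = {
  toString() { return '(' + this.x + ',' + this.y + ')'; }
};
const p1 = Object.create(protoPoint);
p1.x = 1;
p1.y = 2;

// Style 2: new + constructor function -- same prototype chain, but the
// constructor also gives you usable type information (instanceof).
function Point(x, y) { this.x = x; this.y = y; }
Point.prototype.toString = function () {
  return '(' + this.x + ',' + this.y + ')';
};
const p2 = new Point(1, 2);

console.log(p1.toString());       // "(1,2)"
console.log(p2.toString());       // "(1,2)"
console.log(p2 instanceof Point); // true
console.log(p1 instanceof Point); // false: the Object.create version
                                  // carries no Point type information
```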


> The presentation focused on what it perceived as missing features: structs (seriously?), classes, modules, syntactic sugar, macros, etc.

The nature of a widely-used technology is that you can't remove features, you can only add. And yet adding features causes an increase in complexity. So what to do? The answer is to add features judiciously: prefer general features that cover a wide array of use cases and can provide better ways to do things that the existing features don't do or don't do well. (But also avoid over-general features that destroy important invariants -- for example, just say no to call/cc or threads.)

> But the huge gaping holes in Javascript are not missing features. They are fundamental errors in the language. Things like ==, numbers as strings, eval, incorrect definitions of false, semicolon insertion, and -- heaven help us all -- improper lexical scoping.

ES6 -- and potentially, down the road, macros -- are paving paths to fix many of the problems you mention, and other important problems besides (e.g., callback hell). Lexical scoping is partially improved with ES5's "use strict" and further improved with ES6 modules. Block scoping finally exists thanks to `let`. ES6's module loaders allow translation hooks to enable dialects or alternative languages to be run in-browser (which you can already do with preprocessing and build steps, but doing it in-language streamlines the "shift-reloadability" development experience). Module loaders also provide a saner eval, which allows you to completely sandbox the evaluated code. Macros could even allow rebinding operators like `==` to have cleaned-up semantics. We intend to try this out with sweet.js, building something like "restrict mode" (http://restrictmode.org) as a module you can import.
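A minimal sketch of the `let` improvement mentioned above: `var` is hoisted to function scope and leaks out of blocks, while `let` stays confined to its block:

```javascript
function scopes() {
  if (true) {
    var a = 1;  // function-scoped: visible outside the block
    let b = 2;  // block-scoped: confined to this block
  }
  // a is still in scope here; b is not declared in this scope at all,
  // so typeof reports "undefined" rather than finding a leaked binding.
  return [a, typeof b];
}

console.log(scopes()); // [ 1, 'undefined' ]
```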

Dave


> We intend to try this out with sweet.js, building something like "restrict mode" (http://restrictmode.org) as a module you can import.

Oh cool!



