Hacker News
JavaScript's world domination (medium.com)
295 points by janjongboom on Mar 24, 2015 | 187 comments

Might be one of the saddest things I've read linked out of Hacker News. As developers we're doomed to a future of using an awful language because it happened to be at the right place at the right time and was good enough. Now we're stuck with it. Reminds me of DOS.

There are only two kinds of languages: the ones people complain about and the ones nobody uses. --Bjarne Stroustrup [1]

[1] A standard complaint deserves a standard response.

(And a third: Python ;)

As a former Pythonista, I disagree.

Some Python programmers complain about Python 2.

Some Python programmers complain about Python 3.

Everybody complains about the GIL.

Developers who say that JavaScript is bad are ones who haven't seriously used it recently. Personally, I've come to prefer (by a long shot) JavaScript's prototype-based OO over traditional class-based OO.
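A minimal sketch of what prototype-based OO (as the commenter means it) looks like, using hypothetical `animal`/`dog` objects: behaviour is shared by delegating directly to another object, with no class declaration in sight.

```javascript
// Objects delegate to other objects directly -- no classes needed.
var animal = {
  describe: function () {
    return this.name + " says " + this.sound;
  }
};

// dog delegates to animal via its prototype link.
var dog = Object.create(animal);
dog.name = "Rex";
dog.sound = "woof";

console.log(dog.describe()); // "Rex says woof"
```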

JavaScript is a great language with a dark past and it suffers from its legacy. I've programmed in C/C++, C#, Java, AVR assembly, PHP, ActionScript and Python among others and I would not hesitate to say that JavaScript is my favourite.

Why are there so many NPM modules? - Because people who know JavaScript love it. I've yet to see the same kind of passion in any other programming community.

Yes, it has some gotchas, but so does every other language.

When used correctly, function closures are awesome! REALLY awesome. I pity the poor, poor fools who misuse them and end up with callback hell.

People mention 'callback hell' a lot, but no one talks about callback heaven.

I used to complain about JS a lot, and from an OO standpoint it is annoying, but after I started getting more into functional programming, I started to like JS more and more.

Although a shorter function syntax with default-return on everything would make it a lot nicer (and more similar to LISP).

It sounds like you're describing ES6 arrow notation: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
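For reference, the ES6 arrow syntax the link describes: single-expression bodies return implicitly, which is the "default-return" the parent comment asks for.

```javascript
// Single-expression arrows return implicitly -- no `return` keyword.
const square = x => x * x;
const add = (a, b) => a + b;

console.log(square(4));             // 16
console.log([1, 2, 3].map(square)); // [1, 4, 9]
console.log(add(2, 3));             // 5
```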

You could always use a transpile-to-JS language like CoffeeScript or ClojureScript for nicer syntax. Combining it with a build system like Gulp or Grunt makes the experience pretty painless.

Agreed. I started using .js after 20 years of C/C++/Python/Lua/Java/AS/etc. JavaScript is fucking great.. especially in Chrome.

EcmaScript 6 and 7 are actually adding beautiful language features. JavaScript is changing in a big way, a lot of new JavaScript will not even resemble the stuff you're probably referring to.

Bug not feature, unfortunately.

Were only that effort spent on eradicating the old legacy libraries!

If you are saying this, you certainly didn't read the article (properly).

You can not "eradicate old legacy libraries" on a whim - this stuff is out there. It is used by software out there, software that would break if things like that were removed. You can't just rip out and change parts of it at whim. You have to work around problems - at least until they wane in popularity and slowly die out. Evolution, rather than revolution - as stated in the article. That is exactly what TC39 is doing as they improve on the standard and JavaScript itself.

Case in point: Array.prototype.values was a proposed feature - returning an iterator for the values in an array[1]. The feature was introduced and had to be backed out at least twice[2] because widely used web frameworks (in this case, Sencha) used the attribute "values" on arrays and webapps broke as the method was added to the Array prototype. And this wasn't the first case.

[1]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe... [2]: https://esdiscuss.org/topic/array-prototype-values-breaks-th...
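A minimal sketch (not Sencha's actual code) of how the collision happens: a library claims the name `values` on `Array.prototype` before the spec does, and code written against either meaning breaks when the other wins.

```javascript
// Pre-ES6 library code claims the name "values" for itself:
Array.prototype.values = function () {
  return this.slice(); // a plain copied array, not an iterator
};

var arr = [1, 2, 3];

// App code written against the library works fine...
console.log(arr.values()); // [1, 2, 3]

// ...but spec-compliant code expecting the ES6 iterator now breaks:
var it = arr.values();
console.log(typeof it.next); // "undefined" -- not an iterator
```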

Fucking Sencha.

So, that's the thing, right, despite all the downvotes above.

We get to strongly push for new language features, or we get to strongly push for backwards compatibility.

If we elect to do both blindly and without forethought, as the JS community seems to be doing (having learned nothing from C++, evidently), we are going to run into these things over and over again.

And frankly, a lot of the legacy code and terrible hacks on the Web deserve to be broken, noticed, and removed/updated. Hacking prototypes (as Sencha did here) is something they shouldn't have done in the first place.

The origins of this code come a bit before jQuery.. when libraries commonly extended the built-in prototypes... Some of those became the basis for the additional methods (map/reduce/filter etc) on Array, and others. JavaScript is very flexible, which is a blessing and a curse.

I've been a fan of JS since pretty early on... the DOM differences between IE and NN were really horrible to deal with in the v4 browser days, and things didn't really start to get much better until jQuery - where today you really don't need jQuery. It's an evolution here... If you're willing to use transpilers you can go anywhere you like... from BabelJS to ClojureScript, TypeScript, CoffeeScript or others... you can get the style you like.

npm + browserify (or webpack) have made developing modern web applications a dream compared to the past.

At this point I'm just glad that the language we are stuck with is JS and not PHP.

In some alternative universe Perl is the dominant web language along with HTML and inline styling.

Perl and XHTML <:O

I could handle it if it was Modern Perl. But inline styling belongs with Cthulhu, asleep in the deeps.

You could be overReacting. I hope this doesn't mean Facebook has a secret dev-center in R'lyeh.

The horror.

It has awful parts that should be avoided. Once you learn to avoid them you can have a great time. This isn't that different from most other languages. I think javascript has just been more heavily complained about. There is also an automatic look-down-your-nose tendency that most of us started with, including myself. Once I decided to learn more than the bare minimum I became very happy with it.

It's the very strange mix of very bad ideas and very good ideas that's baffling. Some other languages are homogeneously bad (php) but provide fewer surprises than js, because of their regularity. js is bipolar/schizophrenic.

JS is special because it has to be backwards-compatible.

It basically contains its entire history, from a PHP-like hack job that started it to the very elegant ES6/7. It's probably one of the few languages that expose their whole fossil record.

Don't forget C++. Being older, it accumulated more cruft.

Yes. We tend to forget javascript was the first mainstream language (no, Common Lisp is not mainstream) to propose lambdas & closures, for instance. In the 1990's, that was quite amazing. And, on the other hand, they couldn't implement the 'this' keyword correctly, among other trivial mistakes.

Perl 5 predates JavaScript by about a year.

>> It has awful parts that should be avoided. Once you learn to avoid them you can have a great time. <<

Oh, I imagine my copy of Crockford's "The Good Parts" is more well thumbed than most, and I do my utmost best to keep it all clean, but having a great time I am not.

> This isn't that different from most other languages.

Based on what people tend to complain about, some languages have more should-be-avoided parts than others. I guess no language is perfect... but that is so obvious that it doesn't need to be mentioned. Yeah, no language is perfect. But it's still interesting to see which ones are the least "perfect".

I don't get this tendency to want to muddy the waters with appeals to "nothing is perfect". Nothing is perfect, but some things sure seem to suck 5 times as much or more than other things.

(For that matter - it actually does seem like there are languages where you at least don't have to actively avoid parts of the language. Maybe they're not the best or currently the ones which are recommended, but it's not like people say that you should avoid them because they might trip you up in a weird way.)

I would far prefer a language that has parts that I need to avoid than one that is completely missing parts I really want.

But what features does JS have that no other language has, other than the ubiquity of the browser as a platform? JS has been mostly frozen ever since it was created in the mid 90s and you also don't have much to work with when it comes to the standard library.

> But what features does JS have that no other language has, other than the ubiquity of the browser as a platform?

That's its only distinguishing or redeeming feature I can see. If you had a free hand to choose, would you ever want JavaScript?

"...happened to be at the right place at the right time and was good enough."

Sounds like almost every successful startup, too.

Starting out as just "good enough" is fine. Improvements will happen over time. It is short-sighted to look at where we are today and believe it to be the end of the story.

As developers, we're blessed to be a part of a future where we can take a sometimes clunky language like JavaScript and improve it to incorporate elements of our favorite languages.

Because if there is one thing we've learnt about programming languages, it is: throw enough features at a language and it gets transformed from bad to good.

That worked for C++, right? :)

It most certainly did for C#.

C++14 ain't half bad...

I mean, when it's so huge, there's bound to be some good parts in it, you just have to look for them a lot.

Isn't this the usual way technology proceeds? Betamax vs VHS, etc...

Everyone gets the wrong idea from the Betamax vs VHS thing. The correct lesson to learn is to actually solve the problems people have. VHS could record an entire football game on one tape. Betamax couldn't, because they prioritized video quality over recording length. They bet wrong; consumers prioritized recording length over video quality, because a mediocre picture of 100% of the thing they wanted to watch beat a better picture of 60% of the thing they wanted to watch.

And yet everyone turns Beta vs VHS into a sob story about how the best doesn't always win, instead of figuring out the real lesson: know what your customers want and give it to them.

I thought the reason Betamax didn't succeed was that Sony required deals to release things on their format, and it turned out a lot of people wanted videocassettes for porn?

(Same general idea though.)

I'm going to release a Javascript framework and call it Brawndo.

What do you hate about javascript?

I'll make no strong apology for JavaScript besides saying that compared to other "popular" languages I have had the opportunity to use professionally -- a list that includes perl, php, and vbscript -- JavaScript annoys me considerably less than many.

The first js hater of the night :)

The author mentioned transpiling other languages to Javascript and Javascript to Lua.

Personally, I much, much prefer working in Clojurescript to Javascript. Once my dev environment is set up and I am rolling, there is not much inconvenience or build delay at all. My setup is editing in IntelliJ (both Clojure and Clojurescript), one Clojure REPL, and one lein autobuild task to keep the generated Javascript up to date, with about a one-second delay.

Javascript is not such a bad language, I use it directly when doing meteor.js development, but I think the future is better languages like Clojure, Haskell, Scala, etc. transpiling to Javascript.

All that said, that was a nice article!

> Javascript is not such a bad language, I use it directly when doing meteor.js development, but I think the future is better languages like Clojure, Haskell, Scala, etc. transpiling to Javascript.

Honestly I think this has been the 'future' for a long time now, and I still don't see it happening anytime soon. Clojurescript as a language was announced in mid 2011, NodeJS came on the scene in late 2009. Both were languages never or barely used in their respective domains (server vs client side), but NodeJS has clearly had way more adoption on the server side than Clojurescript ever has had on the client.

Unless we get significant browser buy in for other languages (aka not just Chrome) and much better tooling I don't think any transpiled language is ever going to go beyond the minor niches they fill. And I say all this as a huge Scala fanboy, having used it for years in a professional way, and being very interested in ScalaJS. There simply isn't a language that has the buy in and following Javascript has been able to develop.

Don't forget about F#! WebSharper is shaping up to be a pretty nifty project.

I can't wait for the time when I'll be able to write native code in JavaScript to run on Android or iOS. Maybe a future TypeScript would be the solution for this, but it is not too late to remind people that transpiling is a two-way street.

What does "native code" mean in that sentence?

iOS and Android native dev environment

Is your Javascript VM not already in native code when you're running Javascript on these platforms? Or are you suggesting that the instruction set for the chips on these devices "be pure javascript"?

That's what I meant by native development in iOS/Android

javascript-to-java and javascript-to-obj-c transpiling

I'm fully aware of the existence of alternatives out there like PhoneGap, Ionic ..etc but I'm envisioning a future where you could write in JavaScript mostly everything you could do in other languages and transpilers take care of the rest.

Both of those platforms have perfectly reasonable languages available (Java & Swift or ObjC), why would you use JavaScript over them?

Cross platform UI has AFAICT never worked, so you won't be able to reuse code from one platform on another, or is that what you intend to do? In that case you could still transpile from either Java or Swift...

Because I detest Java and I find it to be archaic, old-fashioned and clunky but that's just me.

Sounds like React Native!

but RN at the moment claims to support only iOS

So they're halfway in their make-believe

Great read, but it seems like the article overlooks a few other pretty significant reasons for Javascript's current dominance: (a) jQuery and (b) Apple keeping Flash out of iOS from the start.

Say what you will about the relative inelegance of jQuery, at the time of its release it certainly made JS--particularly DOM manipulation and AJAX--very accessible and cross-browser compliant. It definitely made articles like this seem quaint. [1][2]

At about the time of jQuery's release, Flash development was a viable alternative for smaller sites. I was doing bits and pieces of freelance around 2007 and after the iPhone came out, even my least tech-savvy clients stopped talking about Flash.

Pretty sure that these two bits made it so that every front end dev/ designer in the world was at least passably decent at Javascript by the time Node arrived.

[1] http://boxesandarrows.com/htmls-time-is-over-lets-move-on/ [2] Also, Mootools and Prototype and YUI etc, but jQuery is the one that became ubiquitous

As for jQuery - nobody disputes that it had HUGE impact on the web - document.querySelector(All) itself is the living proof of that claim. It's just... it was around for waaay too long.

Building tools and extending what is currently made available to developers by the platform - all this is in our very natures as developers. We did that even in the early days, when low-level primitives were mostly missing to support us in that work.

Jake Archibald explains this in a comment on Nat Duca's post over at G+[1] - give developers some primitives; see what they do with them - then distill it and create high-performance features out of what you now already know there is a market for. TC39 works on this basic principle (everybody whines about types in JavaScript - TypeScript might be JavaScript's jQuery here, paving the way for some distilled type system in JS - or SoundScript, or anything else that's out there).


As for Apple - I am siding with Peter-Paul Koch on this[2] - Apple did quite a few good deeds in favor of the web, but that was a long time ago. Nowadays it just seems to be protecting its walled garden of native apps & the related ecosystem - certainly not caring a whole lot about making the web a better place (or about devs, for that matter). The Pointer Events fiasco is a perfect reminder of all this.

For any good Apple has done for the web, they surely did equal bad, so the scales are (IMHO, at most) 50-50 here.

[1]: https://plus.google.com/u/0/+NatDuca/posts/De8Bv6F4fyB

[2]: http://www.quirksmode.org/blog/archives/2015/02/tired_of_saf...

> JavaScript just happened to be a nice fit to evented, non-blocking I/O, and a single-threaded event-loop based environment.

I keep seeing variations on this statement, and I have never figured it out. What, exactly, is it about JavaScript that makes it particularly appropriate for this paradigm?

The way I see it, the only abstraction JS provides that helps with this is first class functions, which are also present (and often richer and more robust) in many other languages.

Not just first class functions -- anonymous functions, and closures.

Python has first class functions, but not anonymous functions, since lambdas aren't full functions.

And Python doesn't have closures, because you have to use "global" or "nonlocal" (in Python 3) to assign to a variable in an enclosing scope.

These features let you write concurrent state machines in JS without explicit state -- you basically use the stack as state.
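A tiny sketch of both points at once: an anonymous function closing over, and assigning to, a variable in the enclosing scope - the assignment being exactly what Python needs `nonlocal` for.

```javascript
function makeCounter() {
  var count = 0;           // per-counter state lives in the closure
  return function () {     // anonymous function...
    count += 1;            // ...assigning to the enclosing variable
    return count;
  };
}

var c = makeCounter();
console.log(c(), c(), c()); // 1 2 3

var d = makeCounter();      // each closure carries independent state
console.log(d());           // 1
```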

Python was pretty early in terms of event loop programming for interpreted languages -- asyncore in the stdlib, and Twisted. But I agree that the node.js style is nicer than asyncore, Twisted, or Tornado. I prefer coroutines over state machines, but if you're going to write state machines, then doing it with anonymous closures is nicer than doing it "manually" with classes and so forth.

Tcl was even earlier, but Tcl is pretty foreign to most people, and was more of an embedded language than a scripting language. Pretty sure Tcl has full anonymous closures, but it is a bizarre language in some other ways.

I haven't seen the event loop style in Perl but I think you can do it. If someone has an example I'd like to see it. To me, Perl only barely qualified as a real programming language because you can't even name your function params? At least when I used it, which was over 10 years ago.

> I prefer coroutines over state machines

And that's why Python supports `asyncio` (not to be confused with `asyncore` which is obsolete). So I prefer:

  sock = yield from connect('localhost', 7942)
  msg = yield from read_message(sock)
to this callback hell:

  connect('localhost', 7942, function(sock) {
      read_message(sock, function(msg) {
          // use msg...
      });
  });
And they're completely equivalent. Why would someone actually like the latter, except you're forced to use it because of the prevalence of JavaScript?

`asyncio` is not only easier to use than the traditional "manual" state machines with classes, but also much faster than something like `Twisted`. In my crappy local benchmark[1], it easily beat `Twisted` by a considerable margin. That may be because of flaws in my test code, but still, their performance is at least on par. I see no point in relying on callback hell, at least in Python, except for the lack of libraries that support asyncio, which makes me a little bit sad.

[1] https://github.com/barosl/ws-bench

Yeah but node.js predates asyncio by about 5 years. Python was early to the game with asyncore and Twisted (circa early 2000's), but the async I/O support stagnated and left a big hole for node.js.

Part of the reason for stagnation was because you really need new language features -- i.e. yield/send, yield from, etc. JavaScript already had the necessary language features for its concurrency model.

That's a good point. I also think `asyncio` was way too late in the game. I just wanted to mention the feature that was missed from your comment, which I think is not fair to Python.

You can do that with current ES transpilers:

    async function read(connect, readMessage) {
        const sock = await connect('localhost', 7942),
              msg = await readMessage(sock);
        return msg;
    }

You can do it now without a transpiler if you set harmony flags in Chrome/Node and use yield+co. (And just use a transpiler for deployment.)

Coroutines are much better suited to event-driven, non-blocking IO code. Sorry for the shameless plug, but I recently released my hobby project that uses Lua coroutines for a non-blocking HTTP server: https://github.com/raksoras/luaw

Lua was quite pleasant to work with in this context!

Agreed, but Node has had co-routines in the form of the node-fibers package since almost the very beginning.

async/await is the biggest reason I'm looking forward to ES7.

You can use it now with Traceur or Babel. Traceur recently (~1 month ago) got support for async generators (yield + await in same function).

Can't you use promises/deferreds instead?

One thing yield/await does is that it plays nice with existing control-flow mechanisms in the language. You can put them inside a for loop, if statement, try-catch block, etc. With promises, all of those have to be reimplemented in a library and all your code needs to be converted to the "promisified" version of things.

The different syntax also prevents you from creating functions that can be reused by both sync and async code.

The big thing promises and other callback-based control flow libraries do is that they fix exception handling in async code (it's a PITA to add error handlers in every callback when writing async code by hand). They also let you avoid some of the awkward "pyramid of doom" nesting, but that's not really the killer feature because you can also achieve that by using tons of named functions.
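A sketch of the control-flow point above (with a hypothetical `fetchOne` callback): under async/await, plain loops and try/catch work unchanged, where promise chains would need library combinators.

```javascript
// Plain for loop and plain try/catch -- no promisified reimplementations.
async function fetchAll(ids, fetchOne) {
  const results = [];
  for (const id of ids) {
    try {
      results.push(await fetchOne(id));
    } catch (e) {
      results.push(null); // per-item error handling, inline
    }
  }
  return results;
}

fetchAll([1, 2, 3], async (id) => {
  if (id === 2) throw new Error("boom");
  return id * 10;
}).then((r) => console.log(r)); // [ 10, null, 30 ]
```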

async/await are coming with ES7 and available today with BabelJS and other transpilers. Though not part of core, they rely on thenables/promises, whose resolve/reject get composed with generators.

The required bits are in place... alternatively you can use co/koa, which uses generators along with yielding promises to act in a similar fashion.. it does work pretty well, and will only get better.

I do think promise and deferred are huge improvements to ergonomics, but still... I cannot completely love the code bloat (which is much smaller and easier to reason about than callback hell, of course) that is caused by those libraries. I much prefer Haskell's monadic interface or Python's `yield from`. But that may be my personal preference.

Grownups in charge of the lang are listening and they're doing their best to incorporate these features (async/yield) in the language as soon as possible.

Don't despair, the future is bright :)

On the other end of the spectrum, Ruby blocks (and all the attached machinery) feel to me like a much nicer abstraction to work with. Go's goroutines and channels show that giving a language purpose-built concurrency primitives is a really nice model (I expect Erlang would be another good example of the same, but never tried it), and the monadic approach in Haskell makes it so that you could almost directly translate node.js code and end up with something far more readable.

TBH Python is a bit of an exception there. Most languages with first-class functions (especially more modern ones) will also come with anonymous functions that aren't intentionally crippled (like Python's lambda is) and will have more traditional lexical scoping instead of local-by-default.

Which ones?

Python isn't an exception among languages that people in industry actually use.

For "production grade" server software, which is driving the newfound interest in concurrency, the following languages cover 99% of code written in the last 20 years: C/C++/Java/PHP/Perl/Python/Ruby and the Microsoft Stack (C#, VB).

None of them have anonymous closures. (C# might, but it's also newer, and it's not the prevailing style of concurrency in any case.)

This is basically JavaScript's Scheme heritage showing through. Anonymous closures are old hat for people who went through a CS program teaching Lisp, but they're not by any means standard in industry.

One of the main reasons that none of the languages needed this feature is because the predominant paradigm for concurrency for the last 2 decades was threading, not state machines and callbacks.

C#'s had closures and anonymous functions for almost a decade, Java's got them properly now with Java 8, but you could always hack them with anonymous inner classes. C++ got them in C++11, etc. Hell, even in C, you could hack something resembling a closure with a struct and a function pointer.

PHP has closures.

Pseudo-closures. They capture variables by value, though you can use PHP's referencing mechanism to have something sort of closure-like.

There is no such thing as an "anonymous closure". There are only anonymous functions which may or may not create a closure. Early LISPs either had no way of creating closures, or did so through some sort of function or other mechanism. It wasn't until Scheme came around that lexical closures really became a thing in Lisp world.

Perl and PHP both have closures. In fact, Perl is closer to Scheme/Lisp than most languages (at least Perl doesn't screw up scoping like JavaScript, replace first-class functions with second-class-bastardizations like Ruby, or do whatever Python thinks it's doing with lambda... ugh)


    function main() {
      var a = 1;
      // named closure
      function foo() {
        a = a + 1;
      }
      foo();
      // anonymous closure
      (function() {
        a = a + 1;
      })();
      return a; // 3
    }

I'll emphasize the GP's words:

> There is no such thing as an "anonymous closure". There are only anonymous functions which may or may not create a closure.

Which is exactly what you did: calling an anonymous function which happens to create a closure over variable `a`.

It all boils down to: Javascript developers were already used to this paradigm in the browser.

This. A thousand times, this. It's a really good model for something that is I/O bound because it is easy to reason about. This makes the model perfect for events in the browser (both DOM events and XHR).

It's not that Javascript is especially suited to this model, but thanks to the browser, it was already the most used implementation of this model.

I think this combined with the fact that JS engines were already relatively isolated from the browser itself and able to be embedded with other software (in the case of node.js libuv/libev) made it a really good option. require+npm added a lot more to the mix.

But it was the broad mindshare of developers who at least knew some JS that really kicked it over the top, given JS as a DSL for I/O-bound applications.

Perhaps they're used to it, but are they really fluent in it? Because I keep seeing otherwise bright developers having to use abstraction upon abstraction to keep from screwing up event driven callbacks.

Underscore, Flux, promises.js, Async, Angular, Node fibers, Step...

How much is all this abstraction costing us, both in terms of computation time and developer time (both writing and debugging)?

maintenance time, too.

We're in something of a JS framework boom (or hell, if you wish) right now. These people are developing as if their shit libraries won't exist in 3 years. Hence, the total lack of documentation and ongoing maintenance from so many of them. I've already had to maintain code that used abandoned JS frameworks. I'm a little scared of what's around the corner here...

JS isn't even particularly good at evented, non-blocking I/O: chains of asynchronous calls become series of ever-more-indented function(){}s. Languages with coroutines (Lua) or first-class continuations (Scheme) can chain asynchronous calls in a more straightforward style.

There are dozens of very decent ways to handle flow control with JS - any time you see indented function chains it tells you this might be poor quality code. For better or worse you can choose a library or write your own - Async.JS, Promises, etc all deal with this. Some abstractions like Async.Auto handle branching paths (series -> parallel -> series) very elegantly.
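For instance, the series -> parallel -> series shape mentioned above can be expressed with plain promises (hypothetical `step` standing in for any async operation):

```javascript
// A stand-in for any async operation.
function step(value) {
  return Promise.resolve(value);
}

step(1)                                   // series: one step first
  .then((cfg) => Promise.all([            // parallel: two steps at once
    step(cfg + 1),
    step(cfg + 2)
  ]))
  .then(([a, b]) => step(a + b))          // series again: combine results
  .then((result) => console.log(result)); // 5
```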

With coroutines in Lua, you could write:

    function handleRequest(req)
      local data = db.fetch("user")
      return render("template.html", {user=data})
    end
And if db.fetch and render (both made up functions) are written using the coroutine paradigm, your code would automatically be evented and non-blocking, no need to use libraries to handle your callbacks, and your code has the same style it does if it were blocking.

In db.fetch, the "yield" keyword tells Lua it is about to do an async operation and should wait till it finishes before continuing to run this code, running something else in the meantime - making it non-blocking. You can run many of these Lua programs at once on a single thread, all of them concurrently.

One nice thing about Lua coroutines is that they are "stackful". You don't need to chain "yields" all the way up the call stack (like Python's "yield from"), so it's possible to create functions that work with both sync and async callbacks.

    function mapList(xs, f)
        local ys = {}
        for i=1,#xs do
            ys[i] = f(xs[i])
        end
        return ys
    end

    mapList({10, 20, 30}, function(x)
        return db.fetch(x)
    end)
In JS you can't use async functions inside of Array.prototype.map. You need to use a separate async-aware method from your favorite async library instead.
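What that looks like concretely: handing an async function to `map` gives you an array of promises, not values, so you need an async-aware combinator such as `Promise.all`.

```javascript
// map is not async-aware: you get promises back, not values.
const out = [1, 2, 3].map(async (x) => x * 2);
console.log(out[0] instanceof Promise); // true -- not the number 2

// The usual workaround: collect the promises, then wait on all of them.
Promise.all([1, 2, 3].map(async (x) => x * 2))
  .then((vals) => console.log(vals)); // [ 2, 4, 6 ]
```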

I have used this idiom to great results in my project (https://github.com/raksoras/luaw). Basically, request:read() hooks up into libuv (node.js' excellent async IO library) event loop and then yields. Server is now free to run next coroutine. When socket underlying the first request is ready with data to read libuv event loop's callback gets fired and resumes original coroutine.

That's awesome. You've got me hooked. For Postgres database access, what library do you suggest? Also, I remember lua-nginx can also do non-blocking IO in blocking-style code. (http://wiki.nginx.org/HttpLuaModule, http://openresty.org) How does Luaw compare to that?

Right now it's just an HTTP server and REST framework. It's a very first release and I don't have any DB drivers for it yet - they would need to be non-blocking, as you mentioned.

I have plans to write a dbslayer-like "access DB over REST" service as a companion to Luaw so that it can use any and all databases that have JDBC drivers available, without having to write a non-blocking driver specially for each new database. This kind of arrangement, where DB connection pooling is abstracted out of the application server itself, has other advantages related to auto-scaling in the cloud and red/black or "flip" code pushes, at the cost of slightly more complex deployment.

All depends on how much spare time I actually get :(

With BabelJS, or the co modules (used with generators) you can get a very similar model...

    app.use(function *(){
      var db = yield getDbConnection();
      var data = yield db.query("procname", this.request.query.param);
      this.body = JSON.stringify(data);
    });
This of course assumes you have a db interface that uses Promises and are using koa for your web platform in a version of node that supports generators, or are using something like BabelJS or traceur.

JS has dozens of decent ways of handling it; the other languages mentioned have one good way of handling it.

According to Brendan Eich (http://devchat.tv/js-jabber/124-jsj-the-origin-of-javascript...), early on JavaScript was inspired by Scheme and the initial plan was to create a language that was "Scheme in the browser". :)

I prefer Common Lisp, but I can only dream about how wonderful Scheme in the browser would have been. JavaScript is hideous, simply hideous: it's a hack, piled atop a thousand compromises, wrapped up in a million curly brackets.

Every time I use JavaScript I imagine how good life could have been were it Scheme. Every time I have to use JSON I imagine how great life would have been were we using canonical S-expressions[1] instead. There's one good thing—and only one good thing—about JavaScript: it's incredibly well-deployed. As another commenter mentioned, JavaScript is an object lesson in path dependency.

In many ways, it's appropriate that it has 'Java' in the name: it's popular, but it's ugly. There are better languages; indeed, nearly every other non-Turing-tarpit language is better than either Java or JavaScript: Lua, Lisp, TCL, Python, Rebol, Erlang.

[1] http://people.csail.mit.edu/rivest/Sexp.txt

Lots of indented function calls is just poor design. You get the same thing in Scala when you start dealing with lots of Futures inappropriately.

Just choose a decent abstraction for callbacks. I prefer promises, but async.js does a good job as well.

The `flatMap` function on a Future in Scala might be built into the language, but it is implemented in Scala, and not significantly different (in my mind) from Bluebird being implemented in JavaScript.
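A minimal sketch of that analogy, runnable in node (the `getUser`/`getPosts` helpers are hypothetical stand-ins for real async work): chaining promise-returning functions with `.then` flattens the result the same way `flatMap` does on a Scala Future, instead of nesting callbacks.

```javascript
// Hypothetical async steps, each returning a promise.
function getUser(id) {
  return Promise.resolve({ id: id, name: 'ada' });
}

function getPosts(user) {
  return Promise.resolve([user.name + "'s first post"]);
}

// .then() with a promise-returning function behaves like flatMap:
// the inner promise is flattened, so the chain stays one level deep.
getUser(1)
  .then(getPosts)
  .then(function (posts) {
    console.log(posts[0]); // "ada's first post"
  });
```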

> What, exactly, is it about JavaScript that makes it particularly appropriate for this paradigm?


And it's a pretty powerful language. C-like syntax (familiar), Lisp-like (powerful/flexible), it's everywhere and it's fast (thanks to V8). Really, if it weren't for Chrome, Node.js wouldn't be a thing, and JavaScript wouldn't be ruling the world.

Right, but V8 doesn't make the language itself particularly appropriate. (And calling it "lisp-like" is way over-reaching. It has first class, anonymous functions and closures, and that's it.)

To me, the node story seems to boil down to "V8 was a readily-available fast runtime and people who already knew JS flocked to it" — which is fair enough, but hardly an argument for how appropriate JS is, as a language, for this sort of programming model.

"V8 was a readily-available fast runtime and people who already knew JS flocked to it"

That's basically all there is to a language's success. We like to debate syntax and closures and continuations and type systems and immutability and concurrency models on programming language message boards, but realistically, nobody cares. The two questions they have when they encounter a new language are "Can I learn this in a weekend?" and "Can I build cool things that other people would actually want to use?"

If you look at the history of programming technologies that have "won", the list includes C++, Java, Javascript, PHP, C, Objective-C, and to some extent Perl, Python and Ruby. The first 4 of those are terrible from a language-design standpoint, but the two things they all had in common were a readily-available reasonably-fast runtime (except PHP, and that was "fast enough" for the things people use it for), and a familiar syntax. C, PHP, Javascript, and Objective-C also had the benefit of being the "native" language for a major application platform, which seems to be the other major critical success factor for a new language.

There's still a defense of languages here, and I'm not sure it's due. You chalk up the efficacy of V8 as a delivery mechanism to it carrying portable knowledge in (already knowing JS) and it's usefulness as a general tool.

There is a possibility that this viewpoint doesn't leave any room for: the possibility that languages make nearly no difference. Perhaps people are savvy and interested in more complex things, but the various languages and platforms don't bring any concrete value. Instead of being a function of how wide-ranging people's interests are, it's a question of whether anyone is actually making real tracks away from a center, a center whose nebulous nature only permits the forging of false distinctions.

Rather than marking time by languages, perhaps it's more interesting to mark what we focused our language use on?

Don't forget Basic! Lot of people learned to program with it, and did cool things with it, in the times of 8-bit computers. And it is still used, from DarkBasic and similars, to MS' variations on VisualBasic (VBA and such).

You can make smart, high-level languages like Haskell or Ceylon, but people will generally prefer dumb, easy-to-learn languages like Basic, PHP or JavaScript, despite their limitations.

Result: instead of making a nice car, well designed and looking good, they put big wheels and powerful engine on soap box cars! :-)

When it comes to performance, LuaJIT beats V8 hands down. It's really down to the "it's everywhere", IMO.

And I'm not a big fan of that theory that JS is Lisp-like. The only thing that's particularly Lispy about JS is the first-class function, and even then they are a bit fucked up because of the wonky function-local scoping rules and the "this" keyword.

*function-local scoping rules and the "this" keyword*

It's showing that you've been burned by these magical creatures. It takes time and effort to get used to these awkward concepts.

Not really. He's just using that to note that JS is not really a Lispy language because of this. My choice for "why JS != Lisp" is how the Array functions return whatever you passed to them and not the array itself, so you can't write single-expression multi-modifications to an array. Unless you use the hellish Array.splice method. Which I'd argue is even worse.

"It takes time and effort to master" - Yes, but as we know this is accidental rather than essential.

How's this accidental and not essential?

Actually, if you don't know how "this" works in different contexts, you can't call yourself a good JS developer.
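For the record, a quick sketch of the contexts in question: `this` is determined by the call site, not where the function is defined.

```javascript
var obj = {
  name: 'obj',
  getName: function () { return this.name; }
};

console.log(obj.getName()); // "obj" — called as a method, `this` is obj

var detached = obj.getName;
// Called bare, `this` is the global object (or undefined in strict mode),
// so detached() would typically give undefined rather than "obj".

console.log(detached.call({ name: 'other' })); // "other" — `this` set explicitly

var bound = obj.getName.bind(obj);
console.log(bound()); // "obj" — `this` pinned with bind, however it's called
```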

I think JavaScript and the associated performance war was already well underway before Chrome even existed.

Yes, JITs in WebKit and Gecko showed up around the same time as v8, or a little earlier even. And the WebKit JIT (Nitro) beat v8 on various benchmarks.

It's still possible v8 somehow sparked those other JITs, if before it was public the developers of WebKit and Gecko learned of v8 and stepped up their game. But, I don't know if that's true or not.

Anyhow, it's an impressive achievement that people still mention v8 as the reason JavaScript is fast, when it isn't the fastest today, and wasn't the first to be fast historically. Although, it is certainly a worthy VM, just one among several.

Almost every popular programming environment provides an incredibly unsound shared-memory multithreading library in the distribution, which winds up being heavily used by every other library, even sometimes built-in ones. JavaScript, both in the browser and out of it, refrains from this, making it generally safe to use not-invented-here code.

In other languages, you could use libuv or tornado or aio or whatever, but it's like throwing away the entire ecosystem and using an obscure language anyway.

"What, exactly, is it about JavaScript that makes it particularly appropriate for this paradigm?"

Devs with experience writing interactions with the DOM had already convinced themselves that callbacks nested ten deep was a totes OK way to write code, "releasing Zalgo" and all.

I need a shirt that says "No integers? Totes OK!"

Most dynamic languages don't have integers

A language where 0x5eb63bbbe01eeed093cb22bb8f5acdc3 === 0x5eb63bbbe01eee000000000000000000 is not really suited for anything, tbh.
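The reason those two hex literals compare equal: JS numbers are IEEE-754 doubles, so integers are only exact up to 2^53, and anything larger (those literals are 128-bit) silently rounds. A smaller demonstration of the same effect:

```javascript
// 2^53 is the largest power of two at which every integer is still exact.
var big = Math.pow(2, 53); // 9007199254740992

console.log(big === big + 1); // true — 2^53 + 1 is not representable
console.log(9007199254740993 === 9007199254740992); // true, same rounding
```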

In JS, you know that library code that you call almost certainly won't do blocking I/O.

If you're trying to do non-blocking async stuff in other languages, you have to hope/check that none of your code and none of your dependencies try to do blocking I/O. (Unless you're using something that does async under the covers like haskell or go)

JS is in every web browser. That's why. Number one. Period. There are plenty of other language syntaxes compatible with non-blocking mono-threaded EDA.

JavaScript is popular because browser vendors have not yet agreed on anything better. Remember Native Client? You might as well forget it -- Mozilla didn't sign on. ActiveX is a joke, but if Mozilla & Google et al had moved more confidently towards something Native Client-ish, we could be writing web apps in Go/Rust instead of JS. (or Groovy, $lang_of_choice) But what's their incentive to make that effort?

Ajax was great when IE rolled it out all by themselves. It first might seem like a skunkworks like ActiveX if you didn't know better. Give it a few years' time, coordination and cooperation, and now we've got a Websockets RFC. Consensus takes time.

As crusty as JS may be, I don't think these are dark times at all. I think the rate of evolution (in JS engines, ES6/7, ..) is so swift it's more worthwhile to stick to the JS platform than to reinvent the web yourself.

What's ironic is that Mozilla is deeply infused with its own version of ActiveX called "XP/COM" that's practically the same thing (to a point), just using a totally different set of (open source) tools.

You can use and implement COM or XP/COM components in C, C++, JavaScript and (theoretically but not commonly for XP/COM) other languages. XP/COM is not syntactically compatible with COM, and uses a completely different headers, tools, etc, but they're identical in memory layout and behavior and design. Of course beyond identical IUnknowns, XP/COM's interfaces, metadata, and JavaScript interoperability are totally different than ActiveX on IE.

They eventually realized they'd gone too far with it, and went through a "de-COMification" phase around 2002, and it's been a few years since I worked on a xulrunner application, but as far as I know, XP/COM is still pervasive throughout Mozilla/Firefox/xulrunner, including its deep internals and its chrome user interface layer.

COM was a good solution for a particular set of problems at the time, which Mozilla also needed to solve, so XP/COM made sense in that context. But these days, it makes more sense to use JavaScript and JSON directly as the interoperability layer between components and other languages, instead of COM.

JavaScript won the language war, so it gets the privilege of being the "linguistic motherboard" that you can plug other languages, emulators and components into.

> we could be writing web apps in Go/Rust instead of JS. (or Groovy, $lang_of_choice)

Couldn't work out if you're using Groovy as an example of a shoddy language used because there was nothing better (like JS), or a better language that everyone couldn't agree on (like Go/Rust). Groovy was originally based on Beanshell but with closures added, and became mildly popular for scripting on the JVM around 2006 onwards, and its later use by Gradle for 30-line build scripts. A meta-object protocol was added and Grails (a knock-off of Rails) was built to utilize it, though Grails use is on the decline partly thanks to Node.js. Although the backers of Groovy have since added other cruft, it hasn't moved beyond its original use case of scripting the JVM (like Bash for Linux).

There's plenty of interesting historical facts here, but they're interspersed with an emptier and rather sensationalistic writing style that seems to overemphasize the novelty of the various solutions listed. More importantly, the post doesn't seem to answer the question that it poses at the beginning - "It’s not too long until your very toaster will be running JavaScript… but why?" The only answer I got out of it was "Because."

I find the implication in the Moore's law section about performance to be a bit unfortunate. Ideas like this only strengthen Wirth's law. We're talking about plain old JavaScript here, after all, not some hypothetical Smalltalk/Erlang hybrid language with mystic productivity benefits or anything like that.

The impact on front-end development in the last three years has been undeniable.

Even now, you're either a JavaScript developer or you're a designer (a designer with HTML/CSS chops but really weak JS skills); the role of the front-end dev has seemingly morphed into these two for some reason.

Rebecca Murphey also said this today, but in my experience I'm not really seeing it. I just checked job listings and all front-end positions heavily promote HTML and CSS, with minor exceptions for more JS-heavy roles. More telling is the lack of non-JS roles, i.e. I'm certainly not seeing any "HTML/CSS developer" positions open.

What I thought to be the most promising part of the article:

Tessel created "a compiler that translated the JavaScript source to Lua and executed the translated code on the very compact Lua virtual machine on the microprocessor of the device".

linked: https://tessel.io/blog/98257815497/how-tessel-works-the-basi...

The newer tessel is actually abandoning this model in favor of a faster system that just runs actual node/iojs and modules built for the platform architecture.

I think if React Native lives up to its promise, it's going to drive a much bigger adoption than node/iojs has.

As a JavaScript lover I should admit that I think this is not about JavaScript as a language; it is more about being in the right place at the right time. Everyone here knows JavaScript is an awful language to optimize, and V8 kind of blew it up with every kind of optimization most of us cannot even imagine. Imagine if, instead of JavaScript, Python or Lua had been the standard way for browsers to make dynamic content in the web explosion days of ~2000. Then I think we would be talking about that language instead of JavaScript today. Looking at the big picture, you can see Google pushing Dart, which they somehow (at least I think) plan to replace JavaScript with. (I am not saying it is going to be that way, but I think this is their plan for the long-term future.)

UPDATE: Just a day after this comment, Google announced it will not integrate Dart into its browser and will only focus on dart2js. So I was wrong about Google's long-term plan.

Wow. Lots of information there, lots of things I didn't know about!

Part of me is appalled that a language as icky as Javascript is taking over the world like this. But I'm happy that any language at all is giving us this level of flexibility and standardization.

Javascript is definitely an interesting language. For a while there it was the lament of code purists who shrieked every time you called it a language with the usual retorts like, "Javascript isn't a language, it's a hack"

I think ECMAScript 6 is definitely a welcome change to the language, and ES7 will be undeniably better (even if for Object.observe alone). Now that ECMAScript specifications will be released yearly, we'll see the language evolve; and now that we have a nice range of evergreen browsers, developers can take solace in the fact that they won't have to support older browsers for much longer (as new features are released, browsers incorporate them).

While Javascript definitely has its warts, what other language can boast of the same market share as Javascript? There is no such thing as a perfect language. Not to mention its low barrier to entry, nothing to deploy, no compiler to configure and how powerful it is in capable hands. I think the success of Javascript was more than being in the right place at the right time, I think its ease of use spurred a booming community. Not to mention the fact you could be confident you can write Javascript and it'll run basically anywhere.

People who feel the need to complain about how bad Javascript is need to educate themselves further, or most likely have never used it properly. There are definitely bad parts in Javascript, but any decent developer who has a good understanding of Javascript knows what those bad parts are and how to avoid ever encountering them. Any language allows you to write poor code, some just make it easier than others (like PHP).

For me the biggest and most exciting thing to happen to Javascript is React.js. And once it is released if my gut-feeling is correct: React Native is going to drive the Javascript language even further, possibly spurring a JS revolution even more-so than Node.js/IO.js has.

Blind luck. Web developers had copied and pasted functions for years, a few of them grew up from that basic beginning and tried to make lemonade.

Can't wait till the next big one, because this one isn't doing it for me.

Really great read... I used to write a lot of JScript in classic ASP, as well as the MS script runtime in windows... I think the only really ugly part was when you had to deal with COM enumerations. I also used to use hidden frames as post targets and use JSONP-like callbacks into request queues... most often I'd use the ADO option which allows for a response to be serialized into a string specifying a field and row delimiter.. I'd have my client-side script split this out. I also remember it being much faster early on to fully re-render a frame vs. manipulating the DOM for a lot of things. DHTML while supporting NN4/IE4 was a pain.

I couldn't be happier for where JS has come.. I use node's tooling daily, and although I do use BabelJS for ES6 features, I find it to be a very good fit (with more functional thinking) for a lot of problems.

As much as I am anally pained by the following things in JS:

1 + "2" = "12" but "1" - 1 = 0

0 == false ( "if window.scrollTop ..." will occasionally break on user scroll )

undefined == null ( why do we need 2 empty sets? )

undefined + "dog" != null + "dog" ( breaks transitive property )

undefined !== null ( but native operators like ? and if use == and not === )

I'm really glad Javascript is taking over. Mostly because I am a small business / indie dev and using Javascript allows me to off-load a lot of work onto distributed client computers. Which, in turn, allows me to piggy back off of Google, Github, etc., for free hosting and be massively "scalable" to traffic spikes with no cost to me. Js has become massively portable so rapid prototyping and deliver is more possible now than ever.

What do you mean by "native operators like ? and if uses == and not ==="?

I believe ffn is referring to the ternary operator [1] not using the identity operator, but rather the equality operator.

[1] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

[2] http://stackoverflow.com/questions/359494/does-it-matter-whi...

"1" - 1 = 0

Seriously, what answer were you expecting other than this?

I'm really curious!

Any decent language that wants devs to be able to reason about operators and types would throw a TypeError here.

Shitty languages will cause devs to have to purchase books entitled "ShittyLanguage: The Non-Shitty Parts", wherein chapter 8 talks about "avoid using the subtraction operator because of ambiguous precedence, transitivity, and coercion rules; instead use jQuery.minus()."

Easy.. don't let shitty programmers write code.. oh, that's right, everyone has to learn/start somewhere.

I've seen some pretty horrible code, with any number of bugs in pretty much every language I've ever seen. If you're passing a string into a function that expects a number, you deserve what you get. parseInt(value,10) for user input isn't so hard.. if you want to ensure a numeric value, you can always ~~value ... though that's slightly less obvious to someone new to the language.
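The two idioms mentioned above, side by side. `parseInt` with an explicit radix is the readable choice for user input; `~~` (double bitwise NOT) truncates to a 32-bit integer and turns junk into 0 instead of NaN:

```javascript
console.log(parseInt('42', 10));   // 42
console.log(parseInt('42px', 10)); // 42  — stops at the first non-digit
console.log(parseInt('px', 10));   // NaN — nothing numeric to parse

console.log(~~'42'); // 42
console.log(~~'px'); // 0 — ToInt32(NaN) is 0, so junk silently becomes 0
console.log(~~3.9);  // 3 — truncates toward zero
```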

JS evaluation expressions are far nicer than most languages I've worked with... aside from C#'s addition of ?. I can't think of much that comes close to as fluid in terms of handling end user input and massaging it into something that works correctly.

Your comments remind me of the XML-everywhere mindset that used to be so prevalent in "enterprise" programming... JSON is a much better abstraction for data models, as it is less disconnected from actual code.

Personally, I prefer a "don't cause an error unless you really have to" approach to development... if you can recover from an error condition, log it and do so... if you can't, blow up the world. Java's error handling comes to mind here as particularly cumbersome to deal with... Node's typical callback pattern, and similarly promises/thenables is much easier to work with in practice.

JS has some really hideous parts... just the same, the Browser is an environment where you expect things to do "something" and mostly still work when parts break... the services that back browsers should likely do the same. JS is a good fit for this use case.

We caught another Java supremacist here :)

Justifiable and terrible answers:

- "" (the 1 gets converted to a string, and string-string could have meant remove-if-tail)

- NaN or undefined (bad operations could give bad results)

- "1" (bad operations could be ignored)

- "1-1" (because life is hilarious, and a-b = a+(-b))

- Parse failure (the -1 could get parsed as a constant because there's no string minus int operation, and two adjacent constants is invalid)

- throws (bad operations could act like 'undefined()' does)

What's terrible about throwing in such a case?

Edit: Apart from the terrible behaviors of browsers to just silently ignore all errors.

Some of them are justifiable, some are terrible, and some are both. That's one of the justifiable ones.

I really like this "1-1" answer. That's very creative :)

But honestly, compare each one of these answers to what JS officially does: calling the Number() constructor on the primitive string value to convert it to a number (since the subtraction operator is only defined for numbers), and then proceeding to complete the operation.

For me, it looks very logical and convincing given that the language is loosely typed and it breaks her heart that it fails any of its loved users :)

It's the inconsistency.

1 + "2" (12) does a string concatenation. 1 - "2" (-1) does math.

That said, whilst JS is loosely typed and won't fall over when you do this, I'd just see it as bad form to find code mixing types to this extent. Just because the language will let you do it, doesn't mean you should actually do it!

To be fair, the + issue is a mistake many other languages have made too. Incredibly, PHP, poster child of bad languages, gets this right and splits + and . (though this may just be because it copied Perl).

It's fine to have a (string) + (string) operator, it's just generally a less than brilliant idea to have a (string) + (arbitrary) operator that does an implicit string conversion.

But all of that's fine compared to PHP's implicit number conversion that ignores trailing characters, presumably so that you can add "3 onions" to "1 kg of bacon" or some such...

(string) + (string) isn't fine either, if only for performance reasons and because an operator should just do one thing.

I always hear this complaint from classical inheritance people, are you a C/Java supremacist?

Have you heard of this language called "Mathematics?"

48 might have been a much more reasonably sane answer, or just disallow operations that don't really make any sense, instead of implicitly casting in non-obvious ways.

48? Why's that? How did you reach this conclusion?

When you see a character, it usually has an underlying representation as an unsigned (or byte, or array of bytes, or int if that's your thing, with size depending on ASCII, Unicode, etc). In both UTF-8 and ASCII the character "1" has a value of decimal 49. If the language actually allows you to subtract a number from a character (or length-one string, depending on language semantics), which is dubious to allow anyway, the expected behavior should be to return 48; but it's really a code smell to even attempt that operation.

EDIT: Clarified last statement.

Because JavaScript was first and foremost a language for validating user input... user input will always (well, originally) be a string. In this light the type coercion choices JS makes are usually pretty sane. Given that the browser is an environment where a user expects things to mostly still work in the case of an error (in formatting, markup, etc.), having that carry through to the language is a pretty logical choice.

Given those two things, these edge cases are entirely sane... and the flexibility of the language makes it a very good choice for a lot of tasks. No, it is not a purist mindset when it comes to languages... but in the context of its design goals it makes perfect sense, and it is easy enough to avoid these scenarios when writing code that doesn't interact with user data.

For the record, I've seen C# developers pass around Guid (UUID) as string, even for code that never crosses a wire, and the dbms uses the UUID format. I've seen Java developers do some pretty asinine things too. The language can't always protect you from shooting yourself... in the case of JS for the most part at least a small bug doesn't take the building down with it.

String.prototype.charCodeAt and String.fromCharCode are never called in implicit type conversion. Type conversion is always done by calling primitive constructor functions. In the case of subtraction ("1" - 1), Number is called on any non-numeric operand. Also see:

    > false - 1  // -1
    > true - 1   // 0

If you have any experience in most other programming languages this would seem to violate principle of least astonishment (see "wat").

Why do you think that '1'.charCodeAt() and then subtracting 1 is more appropriate and logical than Number('1') and then subtracting 1?

Please note that the subtraction operator is not defined for strings in js and the language is loosely typed.
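The coercion path described above, verified directly: subtraction calls Number() on each non-numeric operand, never charCodeAt.

```javascript
console.log(Number('1'));       // 1
console.log('1' - 1);           // 0  — via Number('1'), not char code 49
console.log(false - 1);         // -1 — Number(false) is 0
console.log(true - 1);          // 0  — Number(true) is 1
console.log('1'.charCodeAt(0)); // 49 — available, but only when asked for
```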

Because 49 isn't just the "character code" for "1" it is _the actual value_.

You're talking low level representation now and not high level.

If you're going to insist that a character (or string) is its own magic datatype _and is not an unsigned (or similar) underneath it all_, and you're going to talk about high-level niceties, then your interpreter really, really should not violate the principle of least astonishment. There is no sane way to frame "1" - 1 _unless you explicitly typecast the string to a number_, because you now have to reconcile it with what should be identical behavior for numeric types, like "1"+(-1), which, guess what, yields "1-1" in JavaScript, which is the definition of insane. You've also got to deal with other less obvious cases, like when the string is in a var and is not _always guaranteed to be a nice number_, which really makes ever using anything like that a code smell. It's far easier for both the programmer and the interpreter _not_ to play the guessing game and try to implement inconsistent numeric behavior, and to just say "well, I'm not going to do this unless you really insist (via an explicit cast) that you want this".

Why are you insisting on ignoring the fact that JS is a loosely typed language and automatic type casting is in its DNA?

Having automatic type casting of the form we've seen above present in the language is like having a gun without a safety--it's that 1% of the time that the pin inadvertently strikes the shell that you will really really wish you had had a language that would have faulted rather than silently proceeding with broken logic and now potentially disastrously bad data. I can't believe that anyone would pick a language like this that would allow you (especially silently) to be this sloppy.

You can't reason with dogmatic traditionalists like you people.

"1" is 49 in ASCII.

Type Error

Don't you know already that js is a dynamically typed language?

Python is also dynamically typed, but it does something sane:

    >>> "1" - 1
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: unsupported operand type(s) for -: 'str' and 'int'

You're confusing dynamic/static typing with weak/strong typing. JavaScript typing is dynamic and weak, while Python's is dynamic and strong.

No, I'm confusing dynamically typed langs with loosely typed ones. Thanks for bringing this to my attention :)

"Dynamically typed" doesn't mean "refuses to give any type errors, ever".

    function f(a,b){
        return a+b-b;
    }

Now tell me, what is f("1",2)? Is this really what you would expect? Treating strings like strings sometimes and integers other times messes up functions, creating bugs that are extremely hard to track. Usually you can get back what you started with if you subtract after you add, but JavaScript ruins that unless you first check that the input are numbers. If it always tried to coerce the string into an integer or vice versa then it would be fine.

I know that at face value things look messy and unpredictable, but if you really know JS in and out you'd guess the right answer (1). (Check operator precedence and associativity for reference.)

The problem with people coming from more classical programming languages to learn JS is that some have a condescending attitude toward the language from the get-go. They expect that, by virtue of having C/Java experience under their belt, everything should look similar in JS, and when they encounter something like automatic type casting they freak out and start dissing the language. But once they seriously put in the effort to understand it, their frustration and bad experience start to give way to a more positive experience and, consequently, more positive sentiment toward the language.

So for me it's just a question of attitude and story of prejudice and perceived supremacy of one's own language background at play here.

It isn't hard to see the answer, but that wasn't my point. The point is that you want predictable behavior from your functions. If you saw the output of said function for integers, you could never have guessed its output for the combination of a string and an integer.

JavaScript forces you to work harder to get predictable type conversions by having a lot of random conversions. While each of those conversions might make sense, the combination of them usually doesn't. Why can I use the other math operators to coerce the string into a number but not '+'? Because '+' is a special case: it tries to coerce to a string before it tries to coerce to a number; it isn't hard to get that. But it makes the other conversions dangerous, since when you want to use '*', '-' or '/' you usually want to use '+' as well, and as it is, the others work but '+' doesn't. What would make sense is to either remove the special case of '+', or add special cases for the other operators so that they act like '+' with strings.
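The asymmetry in question, spelled out: only '+' has a string branch; the other arithmetic operators always coerce to number.

```javascript
console.log('1' + 2); // "12" — + sees a string, so it concatenates
console.log('1' - 2); // -1   — - coerces to number and does math
console.log('2' * 3); // 6    — so do * ...
console.log('6' / 2); // 3    — ... and /
// A single + inside otherwise-numeric code is thus the one operator that
// can silently switch a pipeline from arithmetic to string building.
```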

Btw, you guessed the wrong answer – it is actually 10 (as number, not as string).

So it is definitely not as easy as you make it seem.

Just a typo but good catch though :)

…since both Webkit and node.js used V8 as their JavaScript engine…

Maybe you meant Blink, WebKit uses JavaScriptCore

Chromium, rather than Webkit, but yes - thanks for pointing that out I have fixed this (Blink was created much later, in early 2013).

And don't forget to write your Postgres stored functions in JavaScript with PLV8

Whether one likes JavaScript or not, this is a good article one must read about the state of JavaScript today.

Ah - the pain of path dependence

Yeah, JavaScript rocks!

Stop trying to make Firefox OS happen.

I've been saying this for years (: happy to hear
