JavaScript is Good, Actually (ashfurrow.com)
345 points by skellertor 62 days ago | 358 comments



Semantics and configurability are what make a language great. JS doesn’t have function environments, i.e. every non-lexical lookup goes to global/window, and that cannot be redirected. It doesn’t have green threads (at least). There are generators, but one cannot just yield without marking all the intervening functions as generators too (async/await in modern terms). Stack traces are lost when generators throw(). JS has no good introspection: you can do some Reflect, but cannot e.g. write a module with a function that, when called, enumerates all functions in the caller module and exports them by specific criteria. JS can’t properly substitute objects with proxies. It can catch simple accesses to existing keys, but can’t enumerate keys or arrays. There was handler.enumerate, but it was deprecated. Vue, which was written by no fools I think, cannot handle “app.array[3] = v” or “app.newkey = v”. It cannot dynamically watch an entire app either; all structure has to be defined before Vue().

Other problems exist that are just stupid (null/undefined, in/of, ===, ;, this, etc.) or minor design choices like coercion and lexical rules. But the ones above are showstoppers for turning JS into something great. Right now it is a BASIC-level language with scoping and sugar. Yes, you can write code like the snippet presented in the article, but you are forever stuck doing everything by hand, from asyncing to ORMs, from exporting to update scheduling. I don’t find that great in any way, especially when that is the only semantics you have in a browser.


The generators or async/await in JS are 'shallow' coroutines because you can only `yield` or `await` in the direct scope of a generator or async function, but the benefit of shallowness is that the control flow is explicit; you don't have to consider whether a function call will suspend the calling context's execution or not. I find the explicit clarity to outweigh the reduced power of shallow coroutines.
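A minimal sketch of that tradeoff (the helper names here are hypothetical): `await` can only suspend the async function it textually appears in, so every possible suspension point is visible at the call site.

```javascript
// `await` suspends only the async function it textually appears in.
// A plain (non-async) helper can never suspend its caller, so each
// possible suspension point is explicit at the call site.
async function fetchValue() {
  return 42; // stands in for real asynchronous work
}

function double(x) {
  // No way to `await` here without marking this function async too --
  // that is what makes these coroutines "shallow".
  return x * 2;
}

async function main() {
  const v = await fetchValue(); // explicit suspension point
  return double(v);
}

main().then(result => console.log(result)); // prints 84
```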

As an aside, there exist green threads in JS, namely node-fibers, although it's only a Node.js extension.

You actually can reflect on module exports and dynamically change them with node modules; you can't do it with ES modules, but that has the significant advantage of enabling static analysis.
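As a sketch of what that reflection looks like with CommonJS (a plain object stands in for `module.exports` here, since ES module namespaces reject this kind of mutation):

```javascript
// CommonJS exports are ordinary objects, so they can be enumerated and
// rewritten at runtime. `moduleExports` stands in for a real
// `module.exports`.
const moduleExports = {
  foo() { return 1; },
  bar() { return 2; },
  VERSION: '1.0',
};

// Enumerate the exported functions...
const fnNames = Object.keys(moduleExports)
  .filter(k => typeof moduleExports[k] === 'function');

// ...and dynamically wrap each one, e.g. to count calls.
let calls = 0;
for (const name of fnNames) {
  const original = moduleExports[name];
  moduleExports[name] = (...args) => { calls++; return original(...args); };
}

moduleExports.foo(); // calls is now 1
```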

Proxies absolutely can trap the property assignments you mentioned, but Vue can't take advantage of this because Proxy can't be polyfilled in older browsers.
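A quick illustration (a sketch, not Vue's actual implementation): a `set` trap observes both kinds of assignment once the array or object itself is wrapped.

```javascript
// A `set` trap observes exactly the assignments Vue 2 cannot react to:
// writes to array indices and additions of brand-new keys. (Vue 2 uses
// Object.defineProperty, which can only intercept keys known up front.)
const log = [];
const handler = {
  set(target, key, value) {
    log.push(String(key));
    return Reflect.set(target, key, value);
  },
};

const arr = new Proxy([1, 2, 3], handler);
arr[3] = 4;            // trap fires for index '3'

const obj = new Proxy({}, handler);
obj.newkey = 'v';      // trap fires for a key that never existed

console.log(log);      // includes '3' and 'newkey'
```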

As for the enumerate handler, it would only have worked with for-in loops, which are a legacy feature. The iteration protocols used by for-of are a much more flexible solution. It might seem silly to have both for-in and for-of loops, but the context of the language is that it can't just go and break older websites. Same goes for == and ===, etc. Linters come in very handy for dealing with this.
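For anyone following along, the distinction in one sketch:

```javascript
// for-in walks enumerable property *keys* (as strings, including
// inherited ones); for-of walks the *values* produced by the
// iteration protocol.
const arr = ['a', 'b'];

const keys = [];
for (const k in arr) keys.push(k);    // '0', '1' -- index strings

const values = [];
for (const v of arr) values.push(v);  // 'a', 'b' -- the elements
```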

Your criticism is better than most, which usually just point out some "wat" moments with mis-features like implicit coercion, but you didn't really make a case for having to do "all things by hand" in JS.


> As an aside, there exist green threads in JS, namely node-fibers, although it's only a Node.js extension.

Aren't Web Workers real threads, and supported natively in browsers? (Haven't used them myself, maybe there's some limitation that excludes them from the criteria above...)

https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers...


Thanks for node-fibers, that is something I missed and it looks promising; I should try it server-side at least. But I'm not sure what you mean by "reflect on module exports", since there seems to be no way to enumerate all functions (in Node), except those exported by hand. I worked around it via "autoexport(module, x => eval(x))" and "// @export" tags, but it feels dirty.

I also bet that I couldn't make 'in' work for a proxy with an empty abstract target, but maybe it's just me. Btw, 'in' and 'of' are two separate iterators, one iterates array elems and the other iterates an object's keys -- something essential to metaprogramming. My whole point on in/enumerate is that it is considered legacy by someone special.

And on Vue: I didn't know that, but if Vue can't take advantage of Proxy, can I?


eval() has no legit use case in JS, and I really don't understand what the point of "// @export" would be. You can't reflect on module or function scopes, but that's a feature and gives you encapsulation.

for-in is a legacy feature; due to dynamic inheritance, it's generally not safe to use without also calling Object#hasOwnProperty() every iteration. for-of is not for "array elems", it uses the iteration protocols that are implemented for all built-in collection types, not just Array, and some DOM types, and can be implemented for any user types. Protocols are a much more flexible and clean approach to metaprogramming than overloading the for-in looping construct would be.
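To illustrate the protocol point, any user type opts in by implementing `Symbol.iterator` (the `Range` class here is a hypothetical example):

```javascript
// Any user type can participate in for-of, spread, destructuring, etc.
// simply by implementing the Symbol.iterator protocol.
class Range {
  constructor(lo, hi) {
    this.lo = lo;
    this.hi = hi;
  }
  *[Symbol.iterator]() {
    for (let i = this.lo; i < this.hi; i++) yield i;
  }
}

const result = [...new Range(1, 4)]; // [1, 2, 3]
```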

You can't use Proxy if you need to target legacy browsers like IE9, and Vue needs to, since it's about 15% of all browsers.


> You actually can reflect on module exports and dynamically change them with node modules; you can't do it with ES modules, but that has the significant advantage of enabling static analysis.

And yet even that advantage got thrown out from the language with the introduction of "import()". Apparently static analysis is a non-goal (see discussion in [1]).

[1]: https://github.com/tc39/proposal-dynamic-import/issues/35


Dynamic imports don't replace static imports.


I think that many of these criticisms relate to the implementation and the runtime (i.e. the browser) rather than the language itself. As a scripting language for a browser environment it's pretty good. Modern variants have lots of lovely language features, and tools like Babel mean you can use many of these new features without sacrificing backward compatibility.

The threadless/nonblocking model is "interesting", but in my opinion it's wholly suitable for user-oriented scripting as it forces a style of development that doesn't block.

The null/undefined thing actually makes sense to me. The concept of "Null" is a swirling vortex of uncertainty in most languages and it's nice to see it get some more nuanced treatment.
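The nuance being referred to is that JS keeps the two concepts distinguishable:

```javascript
// undefined means "never set"; null is an explicitly assigned "no value".
let a;                      // declared but never assigned
const obj = { b: null };    // key exists, deliberately holding nothing

console.log(a);             // undefined
console.log(obj.b);         // null
console.log('b' in obj);    // true  -- the key is present
console.log('c' in obj);    // false -- the key was never set
```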

There's something about the bloody-minded pragmatism of javascript that appeals too ... it's very much a language that has evolved from the bottom up based on need, and much of it is very much community driven. You see problems solved in interesting and unusual ways that you mightn't see in a more stringently stewarded language.

I wouldn't use it for everything. I wouldn't prescribe it for beginners to programming either. But it's great for what it does. I like lots of other languages too, but I respect their applicable limitations. It's a nice language.


There are a lot of valid criticisms against JavaScript. This is a fact. It's not a great language (clearly demonstrated by the discussion about whether it is good or not), but it's not terrible. I doubt anyone calls JS terrible on its own merits. There is a lot of hate against the runtime (edit: I mean the browser, not JS VMs or interpreters). The runtime enforces JS, and the runtime itself is enforced everywhere. There is also this new wave of very inexperienced developers who seem to think JS (and perhaps Ruby) is the only language. This is very similar to the hate PHP got: pragmatic choices vs. the experienced having their "I told you so" moment when they see "sql_srsly_escape_string_this_time(...".

All this makes a lot of people react emotionally and makes talking about JS itself (like many other mass-adopted technologies) very hard.

Is JS nice? Yeah, as my personal opinion - but we can all say that it is good enough, evidenced by the ecosystem and the things people create with it.


> It's not a great language (clearly demonstrated by the discussion about it being good or not) but it's not terrible

I'm wondering what would be an example of a great language? Because we know that there are only two kinds of languages: the ones people complain about and the ones nobody uses


I like to think of Haskell as a great language. It still has its warts (e.g. last [0,1/3..2] > 2), but if you want wart-free, there's probably nothing beyond lambda calculus.

Being the closest thing to lambda calculus with enough syntactic sugar on top to make it practical is a large part of what makes Haskell great.


The one thing that makes Haskell not great is that it is not widely intuitive.

People with mathy backgrounds that don't blink at the phrase "lambda calculus" won't consider this, but a lot of people struggle with math.

If you can't put it in the hands of a 6th grader (in the public school system with no special tutoring) and have a reasonable chance of it being understood (n.b. I self-taught myself early JavaScript when I was in the 5th grade, and picked up PHP the following year), it won't ever be "great".


This is the one criticism of Haskell that in my opinion has no merit. Programming languages are not intuitive. They are a learned skill. You know the saying (sometimes said as a joke) "such-and-such language failed because it didn't have C-like syntax" -- but C-like syntax is NOT intuitive! Reading C code is a learned skill.

Maybe it could be amended to "since many programmers learned to program using languages with a syntax inspired by C, wildly different syntaxes learned at a later stage are more difficult for them", which is a more reasonable proposition. This could be fixed by teaching programmers other languages early on.

Haskell is no more or less intuitive than JavaScript. It's just different.


> Programming languages are not intuitive. They are a learned skill.

I can teach someone Python or JavaScript in a few weeks, at a casual pace, where they can accept input from STDIN or a file, do some calculations, and produce output to STDOUT or another file.

Haskell? I'd need a dedicated fucking thesaurus on-hand for them to grok the paradigm, and it would take a few months before they could achieve the same result.

I have another point here:

If the only languages that are considered great are the ones that make people feel smarter than everyone else for being able to understand, we don't need great languages.

Most of us need easy, practical languages that help us solve problems and don't get in our way. (Most of achieving this property comes down to ecosystem rather than language design.)


> Haskell? I'd need a dedicated fucking thesaurus on-hand for them to grok the paradigm, and it would take a few months before they could achieve the same result.

This is one of those assertions people should have to demonstrate with actual experiments.

I can teach someone how to write buggy, unmaintainable code that seems to work but actually doesn't in Python and JavaScript. So? :)


> I can teach someone how to write buggy, unmaintainable code that seems to work but actually doesn't in Python and JavaScript. So? :)

Yes, because that property is totally absent from Haskell.

https://github.com/RNCryptor/rncryptor-hs/issues/2#issuecomm...

Oops. Surely the JS and Python implementations are just as bad?

https://github.com/RNCryptor/RNCryptor-python/blob/649ca23a5...

https://github.com/RNCryptor/rncryptor-js/blob/08250e00a1140...

(END SARCASM)

My point here is that, while working with a "great" and/or "better-designed" programming language can be beneficial, it's the ecosystem that really counts.


No need for sarcasm and my argument wasn't that there aren't bugs in programs written in Haskell.

My argument is that people who claim "writing Python is easier" usually ignore that it's writing buggy/throwaway code in Python that is actually easier (aka "look, I can write bugs fast"). Writing large, maintainable and bug-free code in Python is not easier than in other languages -- it's arguably harder, but since that's debatable, I won't argue it here.


Every language has its faults.

Even a "perfect" language is faulty if the barriers between its current state and mass adoption are insurmountable.

Completely reform education in this country to make purer languages like Haskell more palatable for younger generations than, say, PHP, and I'll totally be wrong in 50 or so years.


I don't think I'm arguing the perfect language exists, nor do I think a complete education reform is needed.

I'm arguing that "Python is easier" is false, simple as that.


And I'm arguing that the average newcomer (if we need a specific definition of average, how about chosen at random among low-income American sixth graders far removed from big cities like San Francisco or New York?) would have an easier time understanding Python to the level of being capable of basic file/network I/O than they would with Haskell, because Python will be more immediately familiar to them, because they don't need to even know what a thrice-damned monad is.


I understand that's what you're saying, and I'm saying you're wrong because:

- Beyond toy examples, writing Python isn't easier. Writing reliable, easy to maintain, bug-free Python programs is just as difficult, and your average person won't be able to do it right off the bat.

- You don't need to really know what a monad is in order to write Haskell as a beginner; that's a red herring.

If you want to argue that it's easier to write toy examples in Python, without regard for good programming practices, then... it'd still be debatable: if I remember correctly, some years ago there was a post here about someone teaching Haskell to highschoolers, to great success. They found it fun and easy.


That's a fallacy. Just because something is harder to operate doesn't mean it can't be better. For example: F1 cars vs. normal cars, or twin-propeller ships vs. single-propeller ones.


> That's a fallacy. Just because something is harder to operate doesn't mean it can't be better.

Where did I say it "can't" be better?


Have you tried it?


There is no language that is innately understood. All computer languages are learned, so when they describe "intuitive" I don't think this is what is meant. However, a large portion of early programming education does take place amongst C-style curly-brackets.

It's this learned setting for our precepts that makes other languages intuitive or not. If we were all learning Fortran or Pascal in college it might be different, but we're not.

That said, having C-style syntax is clearly not a pre-requisite for the success of a language as is attested by the success of Python, Ruby, various flavours of BASIC and other less loved languages like COBOL ...

But "intuitive" is very important for getting traction. Once you can "intuitively" model a problem in a language such that somebody familiar with the problem can understand what's going on, then that's intuitive. I'm talking about a different kind of intuitive here.

Most people engage with computers on imperative terms, i.e. they want to tell it to do things. Imperative languages are "intuitive" because they allow you to map out a list of instructions in order.

So for instance, when you're writing a program to make a cup of tea, you issue those steps one by one. You don't want to have to refine the model into a functional space, do a handstand and flip the bag into a mug with your little toe while inducing a small raincloud and microwaving the drops on the way down.

Similarly, true object-oriented languages (I'm not talking about C++ or Java here, where classes are glorified structs) model how we think of information in terms of object relations.

Functional languages to me as an experienced programmer are "intuitive", but even I sometimes flinch when I'm exposed to a stack of lisp ellipses ...


I think you're under-stating some things. Imperative programming languages are hard beyond their initial appeal at the "this is like a recipe" level. It's debatable how people best engage with computers. There are plenty of anecdotes, if you look for them, of people failing to understand that the assignment operator in imperative languages means "place this value in this box" instead of "this means that". It's just that you (and me) are used to this learned mode of understanding.

Modern programs seldom look like a list of imperative actions anyway, beyond toy examples. They look weird and unintuitive. If you can make the leap to "this is what an actual imperative program looks like nowadays", you can make a leap to declarative/functional programs just as well. Doubly so if you don't have to unlearn years of conditioning about how programs are supposed to look :)

It currently is a self-inflicted hurdle. I don't pretend this problem doesn't exist. One way to fix it would be to start teaching programming in a different way, and with different languages.


> There are plenty of anecdotes, if you look for them, of people failing to understand that the assignment operator in imperative languages means "place this value in this box" instead of "this means that".

Are there anecdotes of non-absolute-beginners failing to understand that? For first-year students, sure. Do working software engineers have trouble with it? Do they have bugs and/or lower productivity because of it?

> If you can make the leap to "this is what an actual imperative program looks like nowadays", you can make a leap to declarative/functional programs just as well.

My own suspicion (completely unsupported by data) is that some people's brains find imperative languages to be more the way they think, and some find functional languages to fit their way of thinking better. You could run an experiment to test that: you'd take a group (call it A) of functional programmers and a group B of imperative programmers, and measure their productivity. You'd then split the groups in half. A1 stays functional; A2 starts programming in imperative languages. B1 stays imperative; B2 goes to functional. Two years later, you measure everybody's productivity again. What I expect you'd find is that some of the people who switched (either way) increased productivity, and some declined.

What you might find is that everybody who switched from functional to imperative was less productive, but that might be because the (somewhat rare) people who are already functional programmers are almost exclusively the ones whose minds work better that way. You could fix that by starting with students or with fresh graduates, and arbitrarily assigning them to group A or group B. Then you'd have to wait a couple of years to measure their productivity the first time.

The difficulty, of course, is figuring out a way to at least somewhat objectively measure their productivity...


> Are there anecdotes of non-absolute-beginners failing to understand that? For first-year students, sure. Do working software engineers have trouble with it?

Sorry, I was unclear: I was talking about beginners. My argument was about intuitiveness and "it's easier/harder to learn programming this way". Software engineers are already down the road of "I'm used to this, therefore this is the best way" :P

If I understand you correctly, what you say is entirely possible: that some people think best one way or the other, and that there is no universal paradigm that is more intuitive.


So a language is more intuitive if it's similar to what we already know, and we already know math, and therefore "=" for assignment is not intuitive? OK, for first learning a language, I can buy that.

I don't recall ever having trouble with that myself, but that's anecdote, not data...


To be fair, I never had trouble with that either. It was just an example of something I've occasionally read about some people approaching programming languages for the first time.


> Programming languages are not intuitive. They are a learned skill.

True.

> C-like syntax is NOT intuitive! Reading C code is a learned skill.

Also true.

> Haskell is no more or less intuitive than JavaScript.

This does not follow. I have to learn the syntax of any language, true. But that doesn't mean that all languages are equally easy/hard to learn. I learned C by reading K&R over Thanksgiving weekend while in college. I understood everything except argc and argv, even though I didn't have a compiler to experiment with. I had to learn, true; I wasn't born knowing it. But I found it to be pretty intuitive to learn.

I doubt I could have understood Haskell from reading a book over a four-day weekend, without being able to experiment, no matter how good the book.

And, sure, there could be someone out there to whom Haskell syntax is intuitively obvious, and they look at C and wonder what all the crazy symbols mean. But I suspect (but cannot prove) that such people are less common than those who find C more intuitive.

TL;DR: No language is innately known. But some can still be more intuitive (for most people) than others.

Note well: I do not take any position on JS vs. Haskell as far as how intuitive they are.


It doesn't follow because it's just my opinion :)

I don't think the relative intuitiveness of Haskell vs JavaScript is a settled matter. I'm arguing one or the other may seem more intuitive because of past familiarity with similar languages/paradigms.

For example, some people -- though not in this thread, thankfully -- make much about Haskell's allegedly weird syntax and/or operators. Never mind that its syntax is not particularly large, but also there's nothing immediately intuitive about a lot of C code in comparison. What's with all those "{}" and ";" and "*" and "&"? Parsing operators, especially with parens and pointer dereferencing involved, can be difficult, even without deep nesting, and even experienced coders occasionally trip over some production C code. Yet no-one uses this as an argument for C being "too difficult" or "too unintuitive". I argue this is because C was a language they learned long ago, and it has colored their perception of what is familiar or "easy" about programming languages.


Here's a thing about mathy languages too ... I've done a bit of lisp so I'd like to think I kind of get it ... but aren't computers by their very nature "imperative"? i.e. such that C or Pascal perhaps map more naturally to the underlying system? Can these mathy languages (oriented around our rationalist perspective of the world) truly ever be sympathetic to the hardware?

What I mean is, nice as these are ... is there always going to be a bit of waste when you use them?


Depends on how you define "great". Haskell intentionally keeps itself off the mainstream. That has always been a deliberate choice. There are now already plenty of more practical functional languages, e.g. OCaml, F#, Julia, Elixir, Clojure, which are in part driven by the advancement in research brought about by Haskell, so it has been fulfilling its duty in that way.


I love functional programming, but nobody uses Haskell. Clojure, yeah.

The guy next to me at work was trying to build a web app with Haskell for a hackathon, and I was blown away by how little the community had to offer for basic things he couldn't get working.

So yeah, languages people complain about, and ones nobody uses.


Haskell was the first thing that came to mind. I can show with one hand, by joining my pointer finger and thumb, the number of people I know who use it.

It looks lovely, and it's something I'd very much like to learn some day but I can't for the life of me think of what I would use it for. Are there any killer apps out there for it?

At least with lisp, I can configure emacs ...


Well, it's a general-purpose language. Also, there's PureScript [1], a Haskell-like language that compiles to JavaScript for the browser.

[1] http://www.purescript.org/


Pandoc and Xmonad are written in Haskell.


Interesting but again, short of contributing to these projects what am I going to do with Haskell?


Haskell is used in the industry, see: https://wiki.haskell.org/Haskell_in_industry

You could start your own project. You could contribute to an existing project. You could evangelize Haskell at your job, if possible.

All of these are hard, of course. It'll be easier to use a more mainstream language. But if Haskell strikes your fancy, maybe it's worth the effort?


> maybe it's worth the effort?

Yep. Sure. Just as soon as I get done shaving this Yak ;-)


Well, people have managed to successfully use Haskell in the industry. Occasionally someone with a success story even posts here on HN.


Write software, probably.


Well ... duh.


Also git-annex!


I also like to think of Haskell as a great language. But is [0,1/3..2] being a wart? Here's an old HN thread that bashes Haskell:

https://news.ycombinator.com/item?id=9434516


Hugs:

    Hugs> last [0,1/3..2] > 2
    False
    
GHCi:

    Prelude> last [0,1/3..2] > 2
    True
¯\_(ツ)_/¯


My two cents is that I currently find Elixir and Julia combine productivity and all the goodness from functional programming really well. Not sure what sort of backlash they'll get if they go more mainstream in the future (which I believe they will). I don't think it's totally hype, since I also tried my hand at Rust and really struggled, eventually disliking it a lot. I couldn't seem to implement any complex structure without resorting to unsafe code. Maybe I'm just a shit low-level programmer in terms of thinking about ownership, though.


Complex data structures often need unsafe; writing them isn’t a good way to learn Rust. Most already have implementations you can just use, so it’s not something most rust programmers do often.


My response to this is usually that Ruby is a great language because it's easy to get useful work done with a large, meaningful subset of the language. You can ignore the warts just by not using them. You can't do that in JS, because the warts are so fundamental. (yes, you can be tripped up by library authors in Ruby, but there's community backpressure against providing footguns).


I don't agree. The Ruby community had a recent love affair with DSLs. Taking a peek at the rspec library source made my eyes bleed. There's also a huge preference for "magic" even outside of Rails, so much so that when you want to augment or add functionality you're supposed to monkey-patch. There's also a huge preference for "clean" syntax when it doesn't necessarily improve maintainability--it just makes the number of odd syntax rules you have to learn and internalize more convoluted. Also, I feel as though the reason there's so much preference for tests and strict RuboCop is exactly because there are too many footguns baked into the language.

That's not to say other dynamically typed languages (including js) or even statically typed languages are all that much better. But I would say ruby is really showing its age with the number of warts and hacks that have accumulated.


FYI, there is a significant contingent of the Ruby community that doesn’t use rspec (for exactly the reasons you mention). Heck, the test suites for Rails and Ruby itself use minitest, not rspec. It’s hard to notice this, though, because the rspec people have a vastly larger written output.

It sounds like most of your criticisms are criticisms of dynamic languages. Ruby gives you a million and one footguns, but they are beautiful and elegant footguns.


One can make the exact same argument for JS as well - I’m not sure what distinguishing point is intended here.


The difference between the hostility to php and the hostility to js is that there are many alternatives to php whereas, right now, if you want code to execute in the browser or on many different platforms, you don't have a choice. So where I would normally tell people who boo php to use whatever language they prefer, and get over it, that's not possible with js.


> So where I would normally tell people who boo php to use whatever language they prefer, and get over it, that's not possible with js.

Maybe 5 years ago. Nowadays, I beg to differ: https://github.com/jashkenas/coffeescript/wiki/list-of-langu...

A significant number of the choices there produce generally better-performing code than hand-written JS; there's also WebAssembly.


While technically true, don't you still have to debug and diagnose issues from the generated JavaScript? I'd love to write front-end code in anything else, but if I have to know exactly how it gets converted to JavaScript it kind of defeats the point.


Elm stands out in this regard, as it gives fairly robust guarantees of no runtime errors, so the debugging you'll do when working with Elm will almost always be limited to its compiler or Elm Debugger. The price to pay for this is limited interoperability with JS, but it may be acceptable to trade interop for type safety, depending on the use case.

Generally, the alt-js languages provide 'source maps' so that developer tools know to map errors in the 'transpiled' code to their source, and it's possible to avoid JS to a practical degree.


Usually you get source maps, so you can debug in the source language, right from the DevTools.


Compile-to-JS exists, and there are good ones out there. E.g. you can develop in Dart for web, server and mobile, and it is a solid alternative in every segment (the language and tooling is anyway).


There are no alternatives to the JS browser runtime and environment, maybe, but you can use one of many transpile-to-JS languages and shield yourself from most of JS's badness.


Not sure many people would think that "Ruby is the only language", and not sure what your point is there. From my impression, if somebody can do decent backend work with Ruby, then they're probably a better programmer than some JS-only newcomer. Is Ruby already becoming the new PHP? I don't think it's time yet, and Ruby, despite its many drawbacks, is still generally much better. Not to mention Node seems to be all the rage in recent years and Ruby isn't even that "popular" anymore. Though indeed I personally always had doubts about Ruby. I haven't touched Ruby since I started programming projects in Elixir, which I like much more.


All but maybe a few small % of the TIOBE index would fail all or some of those requirements... So while the enthusiast side of me heartily agrees with you, the professional programmer side of me has seen plenty of good code despite a lack of those features. Just because I know that better language features exist doesn't condemn every language lacking them as unusable.


Are you aware of ES6 Proxy? I may be mistaken, but almost every example of an issue that you have with the language seems to be resolvable using Proxy objects.

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...


Yes, and the “handler.enumerate” deprecation is stated in that MDN link. I thoroughly checked every described [un]feature in my recent research. There is some room for error ofc, but I’m aware of the most obvious things. Again, simple get/set of existing or known keys is easy, but try to proxy an arbitrary object, array or class instance and see how it falls apart. This is almost a rule in JS: every feature is mentioned somewhere in docs, on the web, in excited conversations, but in fact it is a shallow workaround that cracks under moderate pressure. So, please Vue.set(app.array, 3, v) and forget about abstracting Vue away from your logic. I suspect that this is a consequence of community-driven design.

> almost every example of an issue that you have with the language seems to be resolvable using Proxy objects

I don’t think that light threading, introspection, scoping, etc. are resolvable with proxy objects. If they are, it would be nice to know how.


We use proxies to watch models in our in-house MVC framework[1] and all those use cases are covered. You can watch setting array items by index, nested objects, etc. It did take some jumping through hoops because the model also has to extend EventTarget, which didn't like proxies. And we had to keep track of nested objects without creating new proxies on every call, but it's still not rocket science.

Here's the code of the model implementation: https://github.com/zandaqo/compago/blob/master/src/model.js

[1]https://github.com/zandaqo/compago
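For readers curious about the general technique: a minimal sketch of observing array index assignments with a Proxy `set` trap. This is not compago's actual code, just an illustration under the assumption that a simple change callback is all you need.

```javascript
// Hedged sketch: a Proxy set trap fires for numeric indexes and new keys alike.
function watched(target, onChange) {
  return new Proxy(target, {
    set(obj, key, value) {
      obj[key] = value;
      onChange(key, value); // report every write, including array indexes
      return true;
    },
  });
}

const changes = [];
const arr = watched([1, 2, 3], (key, value) => changes.push([key, value]));
arr[3] = 4;  // index assignment is observed (key arrives as the string "3")
arr.push(5); // push triggers sets for index "4" and then "length"
```

Note that `push` produces two observed writes (the new index and `length`), which is one of the hoops a real implementation has to account for.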


> Stack traces are lost when generators throw().

V8 (and probably others) now has some special cases for async functions (and generators, I think, but those are much rarer) to show useful stack traces with the lineage of async calls. For example, this code shows the stack trace you'd expect when run in the latest Chrome or Node.js.

  async function c() { throw new Error('Some error'); }
  async function b() { await c(); }
  async function a() { await b(); }
  a();


Yeah, V8 in particular has possibly the best debugger tool available in any language ever; an easy to use UI, but still absolutely chock full of just about every useful feature you could imagine.


Imo, all people saying that some language or tech is good enough seriously lack imagination. Things could be so much better.


Yes, this same lack of imagination is why kludge after kludge is piled on at so many software shops. That and an absolute dearth of people that actually take pride in their work.


Any examples?


What are great languages in your opinion?


If we’re talking about “dynamic CRUD over socket abstracted to death”-style tasks, and not considering minor preferences like syntax, then python, lua, most lisps/schemes, perl. All of these allow enough meta-anything to do:

  ./file.src:
  func api_foo()
    for x in objs
      x.a = fetch(x.b)
    ui.btnok.enabled = yes
    commit()
And have foo exported as api, and when called, all clients/servers, databases synced, validations passed, schemas updated, triggers and reactions done, errors handled, developer errors reported.


I think python is a terrible language. It has so little syntax you can't tell the difference between various things. A variable declaration, a reassignment, a keyword, whatever else: they don't have any visual distinction from each other.

I also find that python has reserved a whole bunch of keywords that I can't use as function names, making APIs hard to create with appropriate names. You also have to pollute your code with self everywhere.

It is also really slow, has nowhere near the number of libraries on github as javascript, is not a client-side language, and doesn't have anything like babel, which can allow you to do a lot more than reflection/introspection (but not exactly the same things), etc.


> I also find that python has reserved a whole bunch of keywords that I can't use as function names, making APIs hard to create with appropriate names.

Python is one of the imperative languages with the fewest keywords in existence. Compare, for example, the number of keywords in Python[1] with Javascript[2].

Maybe you're confusing keywords with Python's built-in functions, like len() or str(); since they're functions you can reassign them (not a good idea, but in a limited scope it works).

[1]: https://en.wikipedia.org/wiki/Python_syntax_and_semantics#Ke... [2]: https://www.w3schools.com/js/js_reserved.asp
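To illustrate the difference: built-ins are ordinary names, not reserved words, so they can be shadowed in a limited scope. A sketch (not a recommendation), with `shortest` as a made-up example name:

```python
# `len` here is just a parameter name shadowing the built-in inside this scope.
def shortest(strings, len=len):
    return min(strings, key=len)

print(shortest(["aaa", "b", "cc"]))  # b

# A true keyword cannot be used as a name at all:
#   def if(x): ...   ->  SyntaxError
```

Outside the function, the built-in `len()` is completely untouched.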

> It has so little syntax you can't tell the difference between various things. A variable declaration, a reassignment, a keyword, a whatever else, they don't have any visual distinction from each other.

The fact that you don't know the difference between keywords and built-in functions makes your argument weak. But I will bite: if you're having difficulty making the distinction between a keyword and a variable, you probably have a very bad editor. Syntax highlighting helps a lot here (as in almost every other language).

> You also have to pollute your code with self everywhere.

I really like the fact that self is explicit in Python. It makes OO patterns more explicit (there is no magic variable like self or this, just a parameter that is passed as the first argument to any class method) and it helps make a distinction between attributes and local variables. Really, I would go even further and make super() a method on Object, so instead of calling a magic super() I would call:

    class Test:
        def __init__(self):
            self.super().__init__(self)
Ugly? Maybe, however it is so much easier to understand what is happening.

> has no where near the number of libraries on github as javascript

Most JavaScript libraries on GitHub are pure toys/garbage, though. Python has a number of useful libraries, and if you don't concur with me, please give examples of areas where Python is lacking a good library (I can easily give an example for JavaScript: ML and scientific computing).


> It is also really slow

1) In scientific/ML CPython libraries, most critical parts are compiled anyway, and the core language is fast and expressive enough to provide a nice and fast interface to them; so your statement makes little sense without more context

2) Python != CPython


While I use Python for ML myself, I find it weird to say that a language isn't slow because you don't really use it anyway. It's true that scripts in Python can be fast if 99% of the executed logic is in C anyway, but that doesn't mean the language isn't slow. As soon as your Python script needs to do anything not available in a library you'll notice how slow it really is.

Python is really neat to quickly experiment platform independent but it's definitely extremely slow.


As parent mentioned, python!=cpython. You've got pypy, cython, typed cython, nuitka, and probably some others I forgot about. Without knowing what you're trying to achieve and what you've tried, "extremely slow" is pretty hard to accept.


Cython is a subset of Python, while PyPy has shitty C interop if nothing changed from the last time that I checked. Ergo in a general sense Python = CPython. If you have specific needs you may want to consider the alternatives, but saying that Cython is Python is at least shady if not completely deceptive.


That python is slow is utterly irrelevant. You would never deploy an unoptimized Python codebase into production if you cared about performance. You would profile the code and optimize the hot paths with the appropriate technology, be it numba-jit, cython, numpy, cffi, or any of the other many ways you can easily optimize Python.


I wouldn't, but I've seen a lot of people who will. If you have code that runs, why spend extra money to optimise it? Hardware is cheap and the cloud allows you to scale as much as you want (not my opinion obviously, but I've heard that more than once and I'm not even directly involved in those kinds of decisions).


Clearly if there is no incentive to spend money to optimize it then it is fast enough.


Python environments and versioning are a PITA too. That's my main beef with it. Getting anyone else's Python code to run is a nightmare if they haven't documented everything; most other languages I use feel like they have some sort of default versioning built in when you start including other packages.

The difference between 'npm install' (and even an added 'gulp') and the chickens I've had to sacrifice at crossroads to get Python packages working is notable.

Oblig XKCD - https://imgs.xkcd.com/comics/python_environment.png


It's funny you should say the same issues aren't as bad with Node and NPM. Node's versioning is a sliding window. I tried compiling Bootstrap recently and it simply wouldn't compile because there was some dependency error that didn't make sense. Apparently Node breaks things too frequently and you can't `npm install` anything if it's half a year behind the newest version. I've never had that problem with Python.


'Node breaks things too frequently' - Python 2.7 -> 3.0.

Yes, you need to get the right version of Node, just like you need the right version of Python. I've had both largely just work with directions of "Version Y.X", where Y is defined. I've also had the occasion where it still broke with a specific version where both Y and X were defined.

In general, if I have the right version of Node (and I agree, I'd prefer package.json to also indicate the version of Node that was used to initialize it), things work when installing from package.json. Things also generally just work with Python if I have the right major version of Python, and an environment file. My issue is more that the 'simple' steps a lot of people do when creating Python projects -don't- use an environment. They just use whatever is installed globally on their computer, and they pip install any dependencies, then write a readme to pip install things with, rather than lock it down with a virtual environment.


I don't know, getting people to run my Python code currently consists of "make sure you have Python 3, run `pipenv install`, run the code".


And if every Python developer, researcher using Python, etc, did that, it would be much less of a problem. The reality is, many, many don't.

It's kind of ironic, really. With the Zen of Python stating "There should be one-- and preferably only one --obvious way to do it" why is it that it's so common for people to not do the thing that makes it reasonably portable?

I mean, I currently am working with some code that, as part of its readme, has a pip install of a lib -off of master-, and yes, obviously that caused problems. Is the author not a Python developer? Well, he's a researcher. Why is the obvious thing for someone coming naively to develop in the language not building a portable environment?


> Is the author not a Python developer? Well, he's a researcher.

I think this is why the situation is bad in Python: we have too many non-programmers in the community who simply want something to work so they can get on with their lives. If they can get by installing a bunch of libraries by running some command incantations as root, they're happy enough.

You don't have this problem with Node because the only niche where Node really matters is the Web, and WEB DEVELOPERS know that their environment should be reproducible.*

*: Even in Node this isn't really true, since we have yarn vs npm. Ruby is way more stable thanks to bundler.


> we have too many non-programmers in the community, that simply want something to work and get on with theirs life

The impact of those people in the ecosystem is zero, though. They don't inconvenience anyone by producing badly-written libraries, they literally do their job, produce what they want to produce and everyone is happy. I'm not sure why they get lumped in with "this problem".

I don't know why I hear this a lot, it certainly doesn't echo my experience. I've rarely had dependency problems, even the regular requirements.txt (without pinning to specific versions) tends to work well enough on reasonably current code. Pipenv pretty much solves the problem by introducing a lockfile.


Well, it impacts those people who need their work, for example for validation of scientific experiments.

It doesn't impact the ecosystem very much for sure, however I wouldn't say that the impact is zero.


The Zen of Python does not extend past the language itself, unfortunately.


My main problem with Python: passing a temporary closure to a function is very messy. You can't do functional programming well in Python.

Also, I think scoping rules in Python are terrible. I always end up with a namespace that is a mess. And you can't do something like this in a clean way:

    counter = 0

    def add(n):
      counter += n

    add(1)
    add(2)
    add(3)


> And you can't do something like this in a clean way:

That's what nonlocal is for: https://docs.python.org/3/reference/simple_stmts.html#the-no... Just stick "nonlocal counter" in your function and it works.
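A sketch of the nonlocal fix. One caveat worth adding: `nonlocal` needs an *enclosing function* scope, so the counter has to live in a closure; for a module-level counter as in the original example you'd use `global` instead.

```python
def make_adder():
    counter = 0
    def add(n):
        nonlocal counter  # rebind the enclosing variable instead of creating a local
        counter += n
        return counter
    return add

add = make_adder()
add(1)
add(2)
print(add(3))  # 6
```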


True. But nonlocal is relatively new.

Also nonlocal implies that other variables are local, which is not true. E.g., the following works without "nonlocal" keyword:

    counter = [0]

    def add(n):
      counter[0] += n

    add(1)
    add(2)
(This used to be my workaround.)


Yup, relatively new. Only around since python 3.0, for the last 12 years ;-)


nonlocal implies that you are reassigning a variable outside of the local scope.

Your example works because you are mutating an existing object, not reassigning a new one.

It's a disingenuous comparison IMO.


> My main problem with Python: passing a temporary closure to a function is very messy.

What's messy about it?


Python syntax is built around indentation. Try to properly indent a function that you pass to another function. And what if you have to pass two such functions? Where do you place the comma that separates both arguments?


You can't define a function via 'def' and pass it as an argument at the same time. You can do it via 'lambda' though (which doesn't require any indentation since it's just an expression), what's the problem with it?

    >>> (lambda f: lambda x: f(x) * 2)(lambda x: x + 1)(100)
    202


So the problem is that you can only inline expressions; you can't inline let-blocks (assignments), or multiple statements. The usefulness is very limited.


You pass functions just like any other argument.

  clos = 42
  def foo(x): return x + clos
  def bar(y): return y * clos
  higher_order(foo, bar)

What's so hard about that?


Here's a challenge: try inlining the foo and bar in the call (because this is how functions are typically composed in functional programming)

You could do this with a lambda construct in Python; but this only works if the function has a single expression-statement; it doesn't work e.g. when you have a bunch of assignments inside the function you are inlining.
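For completeness, the single-expression case does inline fine. Here `higher_order` is a hypothetical stand-in (not from the original example) that just applies both functions and sums the results:

```python
def higher_order(f, g):
    # hypothetical: apply both callbacks to the same input and combine
    return f(10) + g(10)

clos = 42
# Inlining works only because each lambda body is a single expression:
result = higher_order(lambda x: x + clos, lambda y: y * clos)
print(result)  # 472
```

As soon as either function needs an assignment or a second statement, the lambda form stops working and you are back to named `def`s.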


>Here's a challenge: try inlining the foo and bar in the call (because this is how functions are typically composed in functional programming)

Sure, it's ugly but I think it's a silly challenge. Complex functions defined in the function call are un-maintainable anyway IMO.


Explain how you can't tell the difference between a variable assignment and a keyword.


Nothing is stopping you from generating JavaScript code to do what you want. You don't need reflection to do that.

I'd hate to see Python become as popular as JavaScript and to have it be used as the defacto standard in browsers because honestly, it's nowhere near as enjoyable to use as JS.


No, Python is much more enjoyable than JS (see what happens when you turn subjective statements into objective ones?).

It feels to me like JS is more for people who want to write "clever" code and Python is for people who want to write boring, more maintainable code. It doesn't help that JS has a gentler learning curve and attracts more people who just want to get things done without thinking whether those things should be done that way.


JavaScript is more enjoyable than Python to me. You can’t really say no to that.

IMO JavaScript is for people who want to get things done quickly and efficiently and have their program work everywhere. Python is for people who like to think they are doing things “the right way” as if there were such a thing.


> JavaScript is more enjoyable than Python to me. You can’t really say no to that.

Indeed I can't. I can say no to "JS is more enjoyable than Python", which is what you said in your upstream comment.

> JavaScript is for people who want to get things done quickly and efficiently and have their program work everywhere

If by "everywhere" you mean "in a browser", sure. I don't see how that's related to the common definition of "everywhere", though.

> Python is for people who like to think they are doing things “the right way” as if there were such a thing.

I don't know what you mean by that.


The criteria for this list of languages seem to have been "isn't JS". The story for metaprogramming in JS certainly is limited in some aspects; for example, there's no operator overloading and JS isn't homoiconic like lisps, but between dynamic inheritance in older JS and the newer additions like proxies, symbols, accessors, reflection API, protocols and being able to extend the 'exotic' behavior of arrays, JS can probably still do whatever the snippet you posted is supposed to illustrate.

As a concrete example, here's a library I made that relies on symbols and dynamic inheritance to extend the built-in data types: https://github.com/slikts/symbol-land
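As a small illustration of the protocol-level metaprogramming mentioned above (a generic sketch, not taken from symbol-land): a well-known symbol lets a plain object opt into a built-in protocol like iteration.

```javascript
// Sketch: Symbol.iterator makes any object participate in for-of and spread.
const range = {
  from: 1,
  to: 3,
  [Symbol.iterator]() {
    let current = this.from;
    return {
      // Arrow function keeps `this` bound to the range object.
      next: () =>
        current <= this.to
          ? { value: current++, done: false }
          : { value: undefined, done: true },
    };
  },
};

console.log([...range]); // [1, 2, 3]
```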


I tried to include all my rant points in this example. E.g. what if fetch() doesn't go async? Then we don't have to mark it like that and don't have to await. Where do ui and commit live? There should be a unique environment for this specific api call or a whole subsystem. objs is something both enumerable and proxied, etc. It can be done in JS, but IRL it will be:

  ./file.js:
  async function foo(ctx) { // @export
    for (let x of ctx.objs.iterate()) {
      x.a = await fetch(ctx, x.b);
    }
    ctx.ui.btnok.enabled = true;
    ctx.commit();
  }
  autoexport(module, x => eval(x));
And more boilerplate, ceremony and fud down the way, if you go vanilla.

(Thank you and all commenters here for sharing your code and experience. In particular, symbol-land looks very interesting and haskell-y.)


Having to explicitly mark where the control flow goes to the event loop with `await` is a very small price to pay for making the code much more clear. This is why I don't recommend node-fibers or deep coroutines in general.

Destructuring would make your example look better: foo({ui, commit, objs}), and then there's no need for typing out ctx. Another thing that's not needed with for-of loops is .iterate().

Using eval() is a very strong anti-pattern, and it's not needed there anyway. JS has the `with` statement that allows running code with a specific context, but its use is discouraged as it's really bad for readability and hard to optimize.
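A sketch of what the destructured version could look like; `fetchData`, `ui`, `commit`, and `objs` are stand-ins for the parent comment's hypothetical names, and the stub just echoes its key.

```javascript
// Stub in place of the parent's fetch(ctx, key), for illustration only.
async function fetchData(key) {
  return `value-for-${key}`;
}

async function foo({ ui, commit, objs }) {
  for (const x of objs) {      // plain for-of; no explicit .iterate() needed
    x.a = await fetchData(x.b);
  }
  ui.btnok.enabled = true;
  commit();
}
```

Called as `foo(ctx)`, it pulls the three names out of the context object directly, so the body never has to repeat `ctx.`.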


I used eval as a workaround; autoexport does fs.readFile() on the module’s filename and adds all lines marked with an @export comment to exports. Eval is the only way to get a function value from another module, since functions are local to a context that is implicit and only accessible through eval-ing a closure. That’s my code and my company, so I’m not pushing it on anyone now or in the future. I know how important antipatterns are in general.

Destructuring looks good here, you’re right. Actually, this thread somewhat relaxed my js hostility and I see it as an alternative that has reasonable tradeoffs (but still not as a friend though).


I have trouble picturing what advantage that setup would give over using ESM or node modules.

The Function constructor is a better alternative for eval(), but still only as a last resort. eval() itself has no use cases.

I find that most JS criticism is ill-informed, because people are too quick to jump to blaming the language due to its reputation. Not that I'd call JS a great language, but it has redeeming aspects.


Take a look at Common Lisp, Haskell, Smalltalk, and Prolog to see examples of great languages.


I have done javascript programming, but after reading this I don't feel like I know anything.


This is a lot of lines for not saying much. I was expecting much more from a "world-class software developer" (see About section) :)

Saying that the community is more advanced in the JavaScript ecosystem than it is in the iOS world is nonsense to me. Don't we have more JS developers than iOS developers?

Moreover, saying that JS syntax is really good thanks to tools like TypeScript is more nonsense. You could also write Swift code, use another transpiler to get JavaScript from it, and then claim that JS is great.

Last but not least, giving a state of the art of JS without even talking about tools like npm, babel or webpack is only scratching the surface of the subject. I mean come on, JS is not only about == and === in 2018.


That's interesting because I also picked up on that: "I am a world-class software developer living in New York’s East Village"

and I just thought, hmm, here's this guy with ~5 years of professional dev experience (mainly iOS) and he qualifies himself as world class.

Looking at his bio, I see he's worked for 3 companies doing quite ordinary things.

I would love to know what gave him the confidence/ignorance to describe himself as such.


Oh, come on. So the guy uses some harmless exaggeration when selling himself on his own personal website & you're taking him to programmer jail over it?

Do you frequently launch into personal attacks on authors whose articles you disagree with?


If a medical doctor classifies himself as world class and then posts articles about medical matters, my expectation would be that his world-class-ness is evident.

This guy's not writing an opinion piece about which brand of ketchup tastes best. He has published an opinion piece in which he speaks authoritatively about matters related to the domain he claims he's a world expert in.

As for the harmless exaggeration, it's not harmless. It's poison to my chosen profession.


> As for the harmless exaggeration, it's not harmless. It's poison to my chosen profession.

Now you seem to be the one exaggerating there. While yes I agree that is a crappy way of describing himself, there is no reason to start being so mean about it.


What does world class mean?

That you are good enough that people would accept your work in the United States?


It's typically used in sports and means an athlete or team has won championships in an international league, or a person is an Olympic athlete.

Coding is less explicitly competitive, but there is an element of competition to it. If you were a major contributor to a project that had displaced multiple international competitors, that'd clearly be world class. For example, Linus is a "world-class" coder because Linux is used all over the world and has displaced many operating systems.


No less than Don Knuth’s protégé.


No but on HN you should expect that criticism in that direction is valid. You can be world-class in certain niches but I doubt that calling oneself a world-class developer holds in the context of parts of the audience here.

It's ok to sell oneself but I'd be careful with exaggerations in a field with probably >1 million professionals.


I think its valid to point out.

If the author is joking about being world class, that would be kind of odd. So is saying you're world class at something in general (unless you're the tiger woods of that something and can demonstrate/back it up).

I have found it is commonly a trend with frontend/javascript/rails devs to be senior/lead/etc after a few years of experience. I think there's a variety of reasons for this, but going back to the author he probably is very adept at his given field. But to brand yourself as world class seems a bit much.

Personal attack? Nope. But if I saw that line before interviewing a candidate - I'd definitely probe them about it


Yeah, I think pretty much any ordinary asshole that goes around calling himself world class ought to be mocked for it


Let's stick to the content of the article and not tear down our fellow engineers.


Regardless of whether tech or not, any opinion piece is framed within the context of the author of the piece.

I personally blame Jeff Atwood who pretty much green lit a lot of developers to consider themselves "elite" just because they were reading a blog about programming.

That attitude is something I'd love to see stamped out.


He is kind of inviting it by declaring himself "world-class". You don't get to make an assertion like that and then complain when it is challenged.


calculatePaycheck("i am a world class programmer") > calculatePaycheck("i am a programmer")


That 5 years blog post is from 4 years ago.

Ash is a pretty well known developer in the iOS world. He contributes to a lot of well used open source projects and what Artsy is doing (open source by default) isn't something I would describe as "quite ordinary" even if what they're working on is.

I don't know if those credentials are enough to be described as world class, but I would bet it's a bit tongue in cheek anyway.


Does "world-class software developer" even mean anything other than a big ego?


It could mean that they see themselves as competing in a global marketplace. That they don't rely on geography and are quite happy to be ranked against the best of them. Not necessarily ranked highly but content to compete based on productivity and quality of work alone.

Probably not what they meant though :)


The Leica selfie in the header says everything that needs to be said.


I can't be absolutely sure what you mean by this comment, but it feels like a needlessly personal attack.

FWIW, I don't really approve of the self-description of "world class software developer", either because (i) I'm not really sure what it means, but (ii) it sounds a bit pretentious. Still, it doesn't really relate to the quality or validity (or otherwise) of the post.


It's no more a personal attack than your comment, "it sounds a bit pretentious", since that's all my comment means.

Leica manufacture cameras made of brass and, like apple, are more "experience-oriented". I can appreciate putting a picture of yourself on your blog, but taking a photograph of yourself holding a camera which obscures everything but a brand name... that's equally suggestive of "a bit pretentious".


Given his github profile, I think we can give him credibility that he is pretty good.

https://github.com/ashfurrow


I've worked with him five or six years ago and he was very productive and smart even then. People on HN need to relax and dial down the harshness of their criticism.


Did you ever notice that real world-class software developers don't even have to say that they are? :)


Not really if you work as a contractor or freelancer. In those cases you have to shout to the world that you are a world-class developer. Recruiters and managers love terms like world-class, ninja, rockstar, etc. It's an unfortunate state of our industry.


This is unfortunately true in life in general. Many people of decision making power are attracted to self promoters. So if you really want more opportunities and more upward growth, you'll get what you want faster by peacocking.


Any time I see ninja or jedi in a job ad I avoid it. Rockstar is borderline because Joel coined the term as far as I'm aware, and he actually has some decent stuff to say about software.


    s/good/busy/


are you saying that based on his contributions or the actual projects he's created? Because the former doesn't make any sense to me.


Well, I just looked through some of his commits on some of the repos. Not pretty good. Productive maybe, but definitely not good. Commit messages are quite lacking and he's hard-coding user display strings.

Without spending more time or having some domain knowledge of his problem, I can't really comment on the architecture setup/choices.


You’re showing your inexperience here nitpicking on commit messages (set by team / project conventions) and situational things like constants.

I’d work with Ash any day of the week.


I hope it's just ironic, so I give him the benefit of the doubt :)


> You can also write Swift code and use another transpiler to get JavaScript code from it and you would say that JS is great.

I haven't read the article yet, but what you're saying is something completely different, in my opinion. If you write Swift you're writing a completely different language, with all the hassles usually introduced when transpiling. TypeScript, however, is just Javascript with some of the assumptions made explicit and a transpiler that actually checks whether you're violating those assumptions. In that sense, it's not much different from a linter - another tool that makes Javascript more pleasant to write.


In the end, you have to transpile your code to JavaScript, whether you are writing TypeScript or Swift. But I agree, TypeScript has more similarities with JS than Swift does.


That is true, but there are a lot of risks associated with transpiling completely different languages that are not there when using TypeScript. For example, if TypeScript ever goes away, you simply transpile it to Javascript once (i.e. strip away the explicit type annotations) and continue to work on the resulting Javascript - in a different language, that would be a complete mess. Furthermore, TypeScript integrates really neatly with the rest of the Javascript ecosystem (because it's just Javascript), saving you a lot of hassle and lock-in.

Reducing all that to "but it has to be transpiled to Javascript" is a simplification that doesn't help in highlighting potential concerns associated with transpilation, in my opinion.


I totally agree with you. My point here was just to prove that you cannot sell the beauty of JavaScript syntax by saying that you have to use TypeScript :)


Haha, that's true, though I guess in the context of this article that's mostly semantics, since it says that tooling (like TypeScript) is part of what makes "Javascript" great :)


To me JavaScript is very similar to VBA. Based on its own merits it’s a pretty average language with lots of design flaws. But it has a monopoly for what it does (browser/cross platform vs ms office scripting) and most of its users never really got a chance to solve the same problems with a better language. And like VBA, it is probably the only language known to the bulk of its user base (semi-amateur web designers vs ms office business users).

I care less about the performance benefits of web assembly than the fact that it will open browser/cross platform scripting to other languages and I’d be curious to find out if we are still talking about JavaScript in 10 years.


Atwood's Law: any application that can be written in JavaScript, will eventually be written in JavaScript. (2007)

I personally find all programming languages limiting in one or another way and I use almost daily C#, Python and JS (sometimes I use other programming languages as well).


That's life, some languages are better at different things; expecting any language to be all things to all problems is what eventually leaves you feeling like it's all hopeless I expect.

bash is good, for chaining small utilities together.

python is good, for single-threaded problems which don't hurt performance

go is good, if you don't want to manage memory and need easy concurrency, strong typing/etc

rust is good if you want to prevent having a footgun and low level access.

C++ is good for giving you complete control over hardware.

R is good for data science (although is being supplanted, many say, by python).

Javascript doesn't have to be the "best" if it's not competing at all things; unfortunately, as Jobs famously said, "the future is web applications", and now javascript (which, if we remember, was famously designed in about 10 days) has to fit all use-cases... it's a tall order for any language.


> Atwood's Law: any application that can be written in JavaScript, will eventually be written in JavaScript. (2007)

Funny you should mention it. Excel will now support JS scripting.


That’s appropriate: it is also the case that any application that can be written as an Excel spreadsheet will be (and has probably already been) written as an Excel spreadsheet.


The crucial difference with VBA is its openness and community-driven evolution. VBA is stewarded and designed almost entirely by Microsoft whereas there are many many stakeholders involved in Javascript, and it's certified by an independent body and has lots of different variants that complement each other. Also it has many "cousins" such as actionscript or swift which you can get up to speed quite quickly in once you know JS. I don't believe VBA has the same degree of transferable skills.


Exactly that. I recently started programming JS again after not using it for a while and was surprised about some of the new language features. It certainly develops in the right direction, IMO.

VBA, on the other hand, hasn't really changed at all over the past decades. It's still as annoying as it can be, with no help from the IDE. That's helpful in that scripts from Office XP usually run with only minor edits, but it also means that flaws that were annoying a decade ago are now much more severe. Luckily, we'll get JS in Office, although it'll probably take another decade until a reasonable share of companies have upgraded to a version that supports it.


>Based on its own merits it’s a pretty average language with lots of design flaws.

I don't really judge a language based on the number of design flaws. That is like judging a computer based only on its specs.

>and most of its users never really got a chance to solve the same problems with a better language.

I don't really think there is such a thing as "better languages". Your statement is probably true, but the same was said about php. It is only true because JavaScript has the largest number of users, and therefore the highest percentage of casual programmers. As with php, though, you also have very good programmers using it.

I am not really sure what the point of characterising the users is, other than to substitute for a lack of any arguments related to the language itself. But of course, the arguments on these topics are just copy-and-paste from "list of reasons javascript sucks" articles. Those just list design flaws like === and so on.

"My PC is better because it has 2GB more ram than the mac". etc....

>web assembly

Web Assembly is not supposed to remove javascript, it is supposed to be used in addition to it when you need high performance. If people try to promote it as a way to not have to use javascript, it will just result in a more fragmented client-side programming situation, where libraries are not available for particular languages, etc.


> Web Assembly is not supposed to remove javascript, it is supposed to be used in addition to it when you need high performance. If people try to promote it as a way to not have to use javascript, it will just result in a more fragmented client-side programming situation, where libraries are not available for particular languages, etc.

I guess you have been missing the news about what is being done with WebAssembly in Go, Java, .NET, Rust, Unity and everywhere else.

We will have the revenge of plugins, like it or not.

The only way out is if the browser vendors backtrack and remove WebAssembly support.


Yep, exactly that will happen.

There are lots of developers who stayed backend engineers because they couldn't stand JavaScript (as a language and in its integration with the DOM). They were well past the beginner phase in their development skills, yet JavaScript forced them to write BASIC (LOGO) level code. On top of that, there were always some junior engineers who had barely started developing but played smart and bragged about how cool JavaScript is. There is a lot of rage stored up in those circles, and there are lots of excellent developers in them (I can tell you that 90% of the top developers I know, not 25-year-old kids but people who could write runtime compilers and OSes given enough time, never wanted to work in JavaScript). Reasons differ, but most of them would say "I don't like Java, but JavaScript is humiliating".

Now that wasm is coming, I am prepared to bet that frameworks will start to pop up within a year or so of DOM support landing, and they will overrun JavaScript in the shortest possible time, just to prove the point: it sucks, big time. Qt is being prepared, and all the "real" languages are starting to prepare to support compiling to wasm... the traditionally backend languages (which, you wouldn't believe, backend engineers know very well) now have a chance to shine in a browser that was off-limits to them due to the JavaScript monopoly.

I wouldn't call JavaScript's future really bright; in the best case it will be used the way shell scripts are today (this is what was meant by "WebAssembly is not a replacement for JavaScript"): to glue some parts of the "system" (read: browser) together.

And quite frankly, this is a step that should have been taken 10 years ago. It would have saved the world a lot of trouble.

And have fun: https://s3.amazonaws.com/mozilla-games/ZenGarden/EpicZenGard...


There's currently zero support for efficient garbage collectors and no DOM API access. Until a few more primitives are added, wasm is a pipe dream.

Even once support exists, there's a payload issue. Nobody wants to spend loads of bandwidth downloading runtimes.


That is just FUD.

It is certainly possible to package a runtime way smaller than the analytics crap most people have to endure.

Unity WebAssembly games are just a few hundred KB.

Who cares about DOM, WebGL takes care of the UI part.


Consider Python. With all the built-in libraries, it's many MB of code. If my app is 1-2 MB, I'll be pushing the limits (I'll definitely be looking at multiple bundles to reduce and spread out the load and parse time).

If that goes up to 10-15 MB (which can't be split), you're now going to have major usability issues. That's before all the analytics stuff that isn't going away. People do notice the difference and will just leave.

Unity ships with basically zero runtime and uses WebGL, so it doesn't need to include that either (just the actual game itself).

People use the DOM because it's standard, doesn't have to be downloaded every time, and offers tons of features.

Nobody is going to write directly to webgl for a standard website (That's easily 9,999 out of 10,000 sites). That means you have to do something like drag Qt or GTK everywhere which takes even more time to download and parse.

Your idea seems to be: download and parse a bloated runtime, download and parse a huge display library set, download and parse the misc shim pieces, and then download and parse your app. All this in the hope that your CRUD app, which spends most of its time doing nothing, will be a fraction faster and you can write it in something that isn't JS.

Not a great plan.


A plan that is already in motion, regardless of how much you hate it.

Not all languages are like Python, a language that I only use for shell scripts anyway.

Blazor, Qt, Unity are all getting there.

Adobe can even bring Flash back.


I love the idea of wasm allowing other languages, but it's going to take at least a decade to be usable for anything aside from heavy number crunching with C++. For a webpage, replacing the DOM with UI toolkit X is a pipe dream with problems ranging from aria/accessibility to web crawling and indexing issues.

Adobe won't be bringing flash back. It was basically just ES4. ESnext and HTML5 have almost all the good stuff plus quite a bit more while having far better performance than flash could achieve.



Sorry, but I believe your basic presumptions are wrong. If done properly, in the C++-compiled-to-WebAssembly scenario there will be no need for parsing (except some headers). The browser code (V8 in Chrome?) will just take the opcodes and execute them, much as a CPU executes machine code in native applications. For a scripting language like, let's say, Python, the WebAssembly-compiled Python "runtime" will still need to parse Python source code (or not; there are .pycs, if I remember correctly), and there will be parsing overhead. But for compiled languages there will be "no" overhead (the browser itself is overhead), and in any case far less than for JavaScript (which really does need to be parsed).

Anyway, if you remember Flash, there was one runtime for all Flash apps, and once you had it, that was it (until the next security update :D). With today's CDNs it will be no different from, for example, Angular: downloaded once, cached forever.


First, I suppose I'm obligated to say that I'm a huge fan of Wasm (and was a huge fan of pnacl for years before that).

Wasm (like asm.js before it) is targeted at unmanaged languages. A C++ codebase likely does very well with good performance. That was not what I was talking about. Convincing a game company to write their games in C++ is trivial. Convincing that same game company to write their normal CRUD website in C++ would be incredibly difficult.

Looking at apps, there are still issues. Consider Qt. While it's very possible to write everything in C++, loads of companies jumped straight onto the QML/JS bandwagon because it's (generally speaking) much faster to write safe code in JS (also, running the V8 version that Qt requires would be terribly slow compared to the native engine). UI development is hard no matter what, and C++ doesn't do it any favors. JS features like closures and dynamic objects make many things easier than static classes and functions.

This means we need to look away from C++ to something that is managed and has faster code turnaround times. The best possible languages for this are (IMO) Scheme (or maybe Common Lisp) and SML (or maybe Ocaml or F#). You could also make arguments for something in the vein of Dart or Kotlin.

How do you bring these languages to wasm? If you use a JIT, you run into a rather large payload (4-10mb of code is going to have obvious impacts). In some cases like Ocaml where a native compiler to wasm is in the works, you still have a GC issue (basically, you can make an advanced GC that is slow or a basic GC that is "fast", but with other issues).

The addition of DOM API will solve the GUI issues (and maybe they'll rework them to be more like dart's API). Adding hooks into the builtin GC would reduce payload size down to something closer to unmanaged languages. 10 years after those are added (when outdated browsers can finally be ignored), wasm will finally be ready to replace JS.

Adding those features doesn't seem to be highest priority. Unfortunately, you and I will probably be nearing retirement age before they are generally usable.

As a point of interest, JS code could be getting much smaller and much faster to parse (plus becoming a better compilation target) with the JS binary AST proposal.

https://github.com/binast/ecmascript-binary-ast


I agree. DOM access will come. Thinking that a GC cannot be developed in the language the browser (and JavaScript) is built in is a bit naive (not to mention that GC is a complex solution to a simple non-problem; I would rather use malloc/new and free/delete plus destructors than rely on a Terminator-Skynet-grade AI to do this really simple task. A tribute to Java and its inability to free memory when needed! Anyway, with each tab running in a separate process, the system will take care of memory leaks ;) ). As for the runtime: the last time I checked, CDNs were serving JavaScript libraries over a megabyte in size, and I really see no problem with them serving a Python runtime. For C++, the runtime can be the browser itself, and even if you pull in libc or proxy to existing browser functions, it can be really, really small. The belief that code designed to be human-readable (like JS) can be shorter than opcodes doesn't really fit here.

I have heard these runtime considerations before, but they assume someone will run a WebAssembly-compiled JRE in the browser (and I am sure that right now there are some freaks trying to achieve exactly that). There is still good old C++ (or Rust): not as productive as scripting languages, but only because of a much smaller open-source "technological stack" (or, some might argue as a tribute to the low quality of today's software, "technological garbage"), not because it is incapable of being competitive.

I think the coolest trick web-based technologies pulled was convincing the world that the whole GUI needs to be primitive and simplistic, because they were unable to create what desktop applications had been doing for a decade (for speed/size reasons). It may just happen that we return to owner-drawn controls, where you won't be able to tell whether an app is running locally or in the browser (no, I don't mean Electron, pun intended). There, JS will be unable to compete.

Anyway, the push for WebAssembly is not happening because corporations want to do something good. They want to push all computer users back into the old mainframe scenario, where you rent space and CPU in a cloud and have just a dumb terminal. JavaScript has hit its limits in replacing the desktop environment as it is, even with the Windows Metro look (another simplification of the GUI, done for the mainframe scenario; Windows 365 is not far away).

JavaScript was useful in the era of simplistic web pages, as a small hack into HTML, but for the next step we need something more. There are other languages that can offer much more but are currently limited to the backend because they lack browser support. That will change with WebAssembly.


One of those freaks is called Microsoft.

https://blogs.msdn.microsoft.com/webdev/2018/02/06/blazor-ex...


The runtime is probably still a fraction of the bandwidth consumed by all the photos and videos on modern websites.

But the web is only one aspect. Web apps are likely going to take over a huge part of client development.


Runtime size depends on a few things. The biggest is JIT vs native. If you're willing to dedicate the time to making (or interfacing with) a completely native compiler, then the runtime penalty isn't that large. In contrast, any halfway decent JIT is going to be several MB of code (and unlike an image, has to be parsed and executed -- potentially on slow phones).

In both cases, the GC issue is non-trivial. LLVM has finally started to make performant GCs possible. Wasm (to my knowledge) doesn't have similar capabilities and guarantees (I suspect they are actually impossible with untrusted GC code). The only viable solution IMO is the addition of GC primitives and the hope that your particular language maps well onto that specific browser's GC.

https://medium.com/dartlang/dart-on-llvm-b82e83f99a70


Switching back to Java after having spent a long time in JavaScript land with modern standards made me realize how great JavaScript had become.

The biggest reason for JavaScript's greatness to me is JSON. I couldn't understand how Java developers put up with such bulky ways to deal with data. After some time in Java land I've come back to appreciating its strengths again though. Code completion is nice. I'm kind of hoping for more TypeScript in the future.


You should try Kotlin. It solves exactly this pain point - creating classes for data structures is very terse and easy.

You can have a one-liner class like

    class Person(val name: String, val email: String, val yearOfBirth: Int? = null)
Or you can make it a `data class` and get automatic `equals` and other things.


Thank you for telling me. I've been hovering around this possibility since I'm using Intellij and am the primary owner of the code. Although I'm also keen on this code being as accessible as possible for the rest of the organisation, and few have even heard of Kotlin.

When I suggest switching to Kotlin everyone's always smiling dismissively like "ah, you with your crazy ideas again".

Might do it though. Maybe.


Don't ask, do it. Asking is a great way to get shot down. Write something smaller in Kotlin, then show people what it looks like. It is heavily interoperable with Java, so you could move some of the boilerplate heavy classes over.


Concepts borrowed (in a simplified form) from Scala.


JSON has its strengths and weaknesses, and while Java doesn't have first class support for it the Jackson libraries are excellent!


Unfortunately, given that Java does type erasure, you don't get the kind of full-featured JSON lib that Json.NET is in the .NET world. Presumably you'd have to somehow annotate to the deserializer what each individual list/array's generic type is for it to deserialize properly, separate from the type definition itself.


I'm not sure what scenario you are envisioning. Type erasure does not completely eradicate generic type information, reflection can still be used to infer generic usage for deserialization and dependency injection (and is done in Jackson and Spring). To make your life easier in doing so, there is:

https://github.com/jhalterman/typetools


I'm using JSON all the time, but it's hardly the be-all/end-all solution for even the basic use case of service payload serialization, let alone config file syntax or document syntax. To begin with, JSON lacks types for URLs, dates/times, and spatial data; it also lacks conventions for error messages and data graphs other than trees, etc. So I think Java not integrating JSON literals at the language level or some such actually is a plus.


I think that lack is a strength. Lists, Maps, Booleans, Numbers, Strings. With these powers combined, you can create basically anything you can think of. Dates can be unix timestamps or ISO strings, which are universal. Spatial data is a compound datatype anyway (plus, there are quite a few data representations depending on number of dimensions, cartesian vs polar, etc). The one datatype JSON could potentially benefit from is a byte array type, but even that can be accomplished with base64 "strings" or Lists of numbers.
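For example, a hedged sketch of those conventions in practice (the record shape is invented; `Buffer` assumes Node):

```javascript
// A date travels as an ISO-8601 string, bytes as a base64 "string".
const record = {
  name: 'sensor-1',
  takenAt: new Date(Date.UTC(2018, 0, 15)).toISOString(),
  payload: Buffer.from([1, 2, 3]).toString('base64'),
};

const wire = JSON.stringify(record);
const parsed = JSON.parse(wire);

// The receiving side reconstructs the compound types.
const when = new Date(parsed.takenAt);
const bytes = Buffer.from(parsed.payload, 'base64');
```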

To me, it's much better to let every company send error messages in the format that makes sense to them rather than having the One True Error that doesn't quite work for anyone.

My biggest complaints about JSON are syntactic. Optional comments, multi-line strings, and trailing commas would do wonders for day-to-day readability without significantly bloating the specs.


Agree with your points regarding comments and multi-line strings; don't like trailing commas, though.

With respect to your "lack is strength" argument, I can't help but find this unconvincing. We all know JSON is derived from JavaScript object/array literal syntax; your argument is akin to retro-fitting the requirements for service payload serialization to the suboptimal de-facto situation with JSON.


> Switching back to Java (...)

What's missing from your post is the "and a project which doesn't use modern standards" part. It's not as if JSON or other modern things are not used in many Java projects. Your project seems to be stuck in the past, as are many JS projects. Legacy code is no fun most of the time.


I didn't read this as a json vs xml comment, although I suppose it could have been. Rather, I think it's a comment about how there is no convenient, lightweight syntax in Java for dealing with records/maps/trees/dictionaries.


Yes this was closer to my point. Dealing with JSON (like data) is much less convenient in Java than JavaScript. Haven't had to deal with XML configuration in Java for a long time, which I'm thankful for.


JSON is seldom JS now, as every language has had encoders/decoders built in for years.


Sure, but boilerplate and abstractions are needed in a language like Java or Scala. In JavaScript, it's a first-class citizen of the language.
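A quick sketch of what "first-class" means here (the object shape is made up): no class or schema declarations on either side.

```javascript
// An object literal is already "JSON-shaped": serialize, parse, extend freely.
const user = { name: 'Ada', tags: ['math', 'computing'], active: true };

const wire = JSON.stringify(user);  // serialize with no annotations
const back = JSON.parse(wire);      // deserialize straight to a plain object
back.lastSeen = '2018-05-01';       // extend the shape on the fly
```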


It seems you're not talking about JSON (the serialization format) but rather about Javascript's object literal syntax that inspired JSON. They're really different things.


An advantage, however, is that you can directly encode Javascript data objects to JSON without having to annotate its fields or anything. I don't recall you being able to do that with Java.


You can't do that with JavaScript either unless your data objects are acyclic, and if they are, then using a JSON parser for Java is equally trivial.
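A small sketch of that restriction: `JSON.stringify` throws on a cycle but handles any acyclic object.

```javascript
const node = { name: 'root' };
node.self = node; // introduce a cycle

let failed = false;
try {
  JSON.stringify(node); // TypeError: Converting circular structure to JSON
} catch (e) {
  failed = e instanceof TypeError;
}

// An acyclic object serializes without any annotations.
const ok = JSON.stringify({ name: 'leaf', children: [] });
```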


You mean that if they are acyclic, you don't have to annotate the fields in Java either? Because the data objects I want to serialize usually are acyclic.


Perhaps, but probably not. ES6 recognises the distinction, and includes native functions for serialisation. That's first-class treatment if anything is.


JSON.stringify and JSON.parse? They're standard library functions that many languages have, I don't see what's ES6 about them, they're much older.

First class treatment would be if the language had JSON-typed objects with functionality, e.g. something like

    const myJson = j'{"key":"value"}';
    const value = myJson['key'];
    const newJson = myJson.set("otherkey", "newValue");
(using the j'' notation for a hypothetical json-typed value)

But AFAIK it doesn't, not even in ES6.


https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

Implementing j`{"key":"value"}`; as a tagged template literal is left as an exercise for the reader.
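One possible take on that exercise (a sketch, not the only design; interpolated values are spliced in as their JSON encoding):

```javascript
function j(strings, ...values) {
  // Rebuild the literal text, encoding each interpolation as JSON.
  const text = strings.reduce(
    (acc, s, i) => acc + JSON.stringify(values[i - 1]) + s
  );
  return JSON.parse(text);
}

const myJson = j`{"key":"value"}`;
const withInterp = j`{"count": ${2 + 3}}`;
```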


Isn't comparing JS to something like Java or Scala a bit misleading in this regard? They're completely different styles of language with different tradeoffs. I think it would be a lot more fair to compare to something like Python or Ruby, in which case this advantage become a lot less clear IMO.


Well you must compare a dynamic language with another dynamic language. It would be more fair to compare JS to Python or Ruby if you talk about ease of use.


I too like JSON, however it is somewhat inefficient for arrays, where you end up repeating keys a lot. If you're gzipping, it probably doesn't matter as much...


I'm slightly puzzled. Array = [[1,2,3],[4,5,6]] seems valid json and I'm not sure what's inefficient or where the keys come in?


I was thinking of arrays of objects, where the keys are repeated for each object in the array.
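For example (shapes invented): row-oriented JSON repeats every key per element, while a column-oriented layout names each key once.

```javascript
const rows = [
  { id: 1, name: 'a', active: true },
  { id: 2, name: 'b', active: false },
];

// Column-oriented: one key per field, values in parallel arrays.
const columns = {
  id: rows.map(r => r.id),
  name: rows.map(r => r.name),
  active: rows.map(r => r.active),
};

const rowBytes = JSON.stringify(rows).length;
const colBytes = JSON.stringify(columns).length;
// rowBytes > colBytes because "id"/"name"/"active" recur per row.
```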


I'm still a bit confused how this is a problem. Can you elaborate?

Is there any serialisation standard that specifically addresses this? I know YAML and TOML don't, at least (correct me if I'm wrong)


Something like Protobuf, where a schema is defined, means metadata like key names does not get repeated for every object; only the values are sent.

No more trying to have short names to keep file size down.


IMHO, it sounds like the job of the compression algorithm, not the format.


Then use compression. gzipped json is highly efficient over the wire.

https://blog.octo.com/en/protocol-buffers-benchmark-and-mobi...

The primary purpose of binary data formats is that they can be parsed quickly and some even allow direct access without a parsing step.


That's what I said in my first comment, no?


s/arrays/"collections of homogeneous objects" and it's a fair point.


If you don't need to exchange the data with other apps, serialization in Java is actually quite decent.


You still need to write `ObjectInputStream os = new ObjectInputStream(new FileInputStream(filename)); MyObject mo = (MyObject) os.readObject();` which is the most basic form I can think of (you've got to put in all them bloody `try/catch`es as well, don't forget, along with `public class MyClass { public static void main(String[] args) { ...` etc. etc.).

Whereas in JavaScript you can just go `require(myjsonfile)`

I love java, but Javascript is indeed great for some of these kinds of things :-)


> Whereas javascript you can just go `require(myjsonfile)`

I'm not current on Node features, but that looks dangerous to do. Is that equivalent to reading the file and `eval()`ing it?


> I'm not current on Node features, but that looks dangerous to do. Is that equivalent to reading the file and `eval()`ing it?

No:

    > const fs = require('fs')
    undefined
    > fs.writeFileSync('file.js', 'console.log("yes")'); require('./file')
    yes
    {}
    > fs.writeFileSync('file.json', 'console.log("no")'); require('./file.json')
    SyntaxError: file.json: Unexpected token c in JSON at position 0


This is the community that installs things by curl|sudo bash, remember


JSON has so many of these little but annoying limitations, I can't believe it's finalized like that.

JSON5 is something that should've been.

https://json5.org


Try Python - JSON works very well with it and while Python has its flaws, I think it's a much better language to use.


Ehhhh...it works okay but the same fundamental serialization problem exists. The fact that dict can be roughly equivalent to JSON helps, but as soon as you start including more complex types you're going to have to use something to handle serialization/deserialization. Which really isn't that complex with Java if you're using a few convenience libraries: gson, orika/mapstruct, lombok all make it a lot less painful to write this kind of boilerplate serialization code.


The most common issue I run into is with datetime objects, but there are well documented workarounds. For more complex objects such as ORM objects, I think a lot of people use Marshmallow. That being said, I don't have a lot of experience, so I could be missing something obvious. What issues have you had with Python JSON serialization?


I wouldn't say I've had issues, I just don't think it's really been easier/quicker in Python than most other languages I've used except for the simplest use cases (when Flask jsonify does everything you need because your data is all just strings, numbers, and booleans already)


I'm still bitter about JSON not supporting comments.


That's why I use 'ini' format for config files despite not being standardized.


Code completion in Java is probably the only way to write Java. It is a bit too bloated for my taste, with all the classes and subclasses that handle almost the same thing (speaking mainly from an Android perspective).


Code spends most of its time being in production and not in development. During that time in production, developers will leave the team, bugs will show up, major enhancements will be made. So it is important that the codebase is easy to reason about, easy to refactor and easy to debug.

If you take a language like Java and an IDE like Eclipse or IntelliJ IDEA, it is trivial to find where a particular piece of code is called from. Whereas in JavaScript, particularly in huge codebases, you can't easily tell how a piece of code ends up getting called. You will need to run the code, make some educated guesses, and put breakpoints or console.log statements to verify that yes, this particular line does end up getting called on this particular action.

Refactoring is also a joy in a language like Java. You can easily modify a method signature and the IDE will take care of updating all the places this method is called from. Now imagine you add a new argument to your JavaScript function and want to update all the places where the function is called from.

So judging JavaScript on these factors, it isn't such a good language.


TypeScript offers a compelling case, however. It often can provide the refactoring superpowers of strong statically typed languages on top of JS, and its brand of structural typing allows for quite expressive, but type safe, programs.

That being said, TypeScript is not sound and obviously in a real TypeScript program you'll eventually touch things that are untyped. But still, getting a different set of trade-offs than "statically typed" and "dynamically typed" is useful. It makes me wish there was something like this for Python (no, MyPy and Pyre are not this. They only provide nominal typing, which is definitely not good enough for everyday Python like Django.)


It's useful having a spectrum of types. I personally prefer noImplicitAny in TypeScript, as that lets me choose which battles to pick, and while I often leave more `any`-typed things than I would like, I have the `any` keyword as an explicit TODO marker of those places where I gave up, so I can go back and fix them as needed/time permits/tech debt requires.


This is a giant overgeneralization. If you are using Redux, for example, you _can_ easily tell most of the time what functions get called and in what context, because data flow is unidirectional and easy to reason about. If you're using an old massive tangle of spaghetti code, it is much more difficult, yes.


You can have very messy code with an excellent language. You can also have beautiful code with a poorly designed language. Let's do a thought experiment. Take a pool of SWEs fresh out of school. Divide them into groups. Each group is given a language they are not familiar with. The pool of languages will include commonly regarded good ones, OK ones, and bad ones. Ask the groups to do a middle-sized project for 3~6 months independently. Then we compile the projects and analyze their code quality. If the pool of SWEs is big enough, it will give us some insights.


Forget code complexity. Have each team develop the exact same product in their assigned language, measure how long that takes.

Then play musical chairs and have each team develop a new feature in one of the other teams' project/language. Measure how long that takes.


You can always cherrypick advantages and disadvantages to make a language look "bad", or make another language look "better".


This is true. However, it is more important to evaluate both the quantity and the "quality" of advantages and disadvantages.

For disadvantages, how serious are they, how easy are they to be abused, to creep into the codebase, to be prevented from happening again, etc.

It also depends on the team. If it is a small team of 5 people and they are all excellent engineers, I think whatever language is fine. The 5 engineers will discuss and decide what features NOT to use, etc., and abide by them. If it is a team of 500 engineers, it will take much more effort and education, and much longer, to achieve that.


what can you cherry pick from javascript to make it look good?


Comparing with Java or popular languages like those at the top of TIOBE?

Lambdas, first-class functions, and closures are great features missing in basically every other popular language (things like Java's lambdas or function pointers aren't even close in real-world use).

Proper tail calls are another feature missing from that list, and despite some browsers refusing to honor the spec they ratified, they are still implemented in Safari/JavaScriptCore, XS6, duktape, Node 6-7, etc.
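For illustration, a function in tail-call form (hedged: on engines without proper tail calls, such as current V8, this still grows the stack; only spec-compliant engines run it in constant space):

```javascript
// The recursive call is the last action, so a PTC engine can reuse the frame.
function sum(n, acc = 0) {
  if (n === 0) return acc;
  return sum(n - 1, acc + n); // tail position
}

sum(100); // 5050 on any engine; huge n only works under PTC
```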

The interplay between JS dynamic objects and closures is difficult to describe, but it is a thing of beauty when fully understood. Object literals are basically unique to JS on that list as well.
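A small sketch of that interplay (names invented): a closure holds private state, and an object literal is the public surface.

```javascript
function makeCounter(start = 0) {
  let n = start; // private: reachable only through the closure
  return {
    increment: () => ++n,
    get value() { return n; }, // object literal with a live getter
  };
}

const c = makeCounter(10);
c.increment();
c.increment();
// c.value is now 12; `n` itself cannot be touched from outside
```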

There's a lot to love about most languages and JS is no exception.


Data from Stack Overflow Trends says that javascript, python are becoming more popular than Java, C# https://insights.stackoverflow.com/trends?tags=javascript%2C...

None of the languages are new.


Nope. I’ve worked in Java codebases which are much worse than your average JS codebases. You could never tell what part of the Java code calls which genericized listener, especially with all the floating XMLs and linked annotations. It’s not the language, it’s the design that matters.


The tooling makes up for it -- VSCode, for example. Click to definition, right out of the box. The number of options is too numerous to be listed here. Modern Javascript is as much about the tools as it is about the language, as the writer points out.


If the issue is IDEs you can do all that with WebStorm.

As a matter of fact, you can even use IntelliJ for checking function usages and refactoring in JavaScript.


Even the best IDE cannot do anything about the fact that JS is dynamically typed.

Yes, modern IDE do wonderful thing to help you with JS but they will never be able to match the help they can give with a language that is statically typed.


Personally what I love about JavaScript is that it brought the basics of functional programming to the masses.

Obviously it's far from pure FP, but I think it's much more at the heart of JS than it is say Python. The Scheme parts of JS are the good parts.

I think some of the resurgence of FP is down to JS. The wealth of libraries available is pretty impressive [0]

[0]: https://github.com/stoeffel/awesome-fp-js


Seeing JS as a functional programming language is actually the way of learning how to use it properly.


I got interested in functional programming through JavaScript. It might have been React w/Redux that initially demonstrated the value of immutability and pure functions to me.

One day I got curious about monads and spent over a month trying to wrap my head around it, which obviously set me on the path to Haskell which is now my favorite thing in the world.


Oh man, I hate these articles as they bring out the trolls.

Javascript is a great language because it allows you to develop incredibly fast (scripting language) for a platform that runs everywhere (the web).

It used to be far simpler, but IMHO, insecurity because of all of the FUD that this sort of article prescribes, has meant that the language has bloated to incorporate all sorts of syntax improvements and new patterns.

It's not a systems programming language, so all the comparisons against typed, compiled languages are moot. Introducing transpiling as a mandatory pattern for JS development was a mistake.

The reason JS has won is because the web has won. Arguing about its merits misses the point.


>It's not a systems programming language, so all the comparisons against typed, compiled languages are moot.

No they are not 'moot'. At least not if you're building applications with 100k+ LOC. For dinky websites and small projects I'm with you.

>Introducing transpiling as a mandatory pattern for JS development was a mistake.

Again, what are you building? A dinky website, or a large application that you'll have to maintain for the next 10 years?


Transpilation is the number one reason I've seen that makes maintenance of older projects hard. The fact that JS didn't standardize on a module syntax until far too late means we're forced into a transpilation cycle to build bigger projects. I can call that a mistake without denying its reality.
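For context on the module gap being discussed: before ES2015 `import`/`export` landed, JS code typically faked modules with an IIFE (or used CommonJS `require` via a bundler), which is part of why build steps became unavoidable. A minimal sketch of the old pattern:

```javascript
// Pre-ES2015 "revealing module" pattern: an IIFE simulates a module
// by closing over private state and returning only a public API.
var counterModule = (function () {
  var count = 0; // private — invisible outside the closure

  function increment() {
    count += 1;
    return count;
  }

  return { increment: increment };
})();

console.log(counterModule.increment()); // 1
console.log(counterModule.increment()); // 2
```

The ES2015 equivalent is just `export function increment() { ... }` in its own file, but shipping that to older browsers is exactly where the transpile/bundle step comes in.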

If you want to build a web application, at some point, until we have true web assembly, you will have to use javascript, which does make a lot of the arguments here moot, regardless of the size of the project.


>Transpilation is the number one reason I've seen that makes maintenance of older projects hard. The fact that JS didn't standardize on a module syntax until far too late means that we're forced into a transpilation cycle to build bigger projects. I can call that a mistake without denying it's reality.

You answered your own objection. The language is deficient so alternatives are sought (lack of standard modules is one problem). Nobody likes transpilation and nobody would do it if JavaScript was conducive to building and maintaining large applications.

>If you want to build a web application, at some point, until we have true web assembly, you will have to use javascript

No. You can build it in TypeScript or Dart or any number of more sane languages and transpile to JavaScript. Which is what people are doing.


>No. You can build it in TypeScript or Dart or any number of more sane language and transpile to JavaScript. Which is what people are doing.

Exactly, you can't avoid javascript. Transpilation is at best a level of indirection.

I'm not answering my own objections, rather I'm pointing out that transpilation is a necessary evil, but it _was_ a mistake compared to the alternative, namely fixing modules.


No, they are moot. I've built several 100k+ LOC apps with JavaScript. You have no clue what you're talking about.


How many of them have you had to maintain for several years?


Three of them.


If you only have a hammer, everything looks like a nail. You will enjoy other languages much more.


10 years??

You will have to refactor everything and entirely change your toolchain every two years. If you had the misfortune of using some framework, it will no longer be supported by then.

No JS application has that kind of longevity, because the entire ecosystem is extremely volatile.


I think you touched on a key point. Javascript IS a decent scripting language. Treat Javascript as a scripting language and not for general large scale application development and you will probably be pretty happy.


Arguing about its merits doesn't miss the point at all, because it isn't limited to the browser. Even other dynamic languages generally have a better design and are much easier to debug in production, in my experience.

Also, people making legitimate criticisms about a technology (in this case to try and cut through the hype train a bit) are not 'trolls'.


This was true 7 years ago. Today JS is in frontends and servers everywhere, and powering everything from games to IoT to developer tools, robots, and smart fridges. All things that used to be the territory of typed/compiled languages.


I'm greatly in favor of the syntax changes. Working with the idiosyncrasies of data in JavaScript used to be a huge pain in the ass. Underscore/Lodash improved that story quite a bit. But ES6 goes a long way towards increasing the expressiveness of the language, such that code doesn't get bogged down in tedious data manipulation and language noise. Unfortunately, the space of possible improvements is a bit limited by the need for backward compatibility (e.g. they couldn't achieve expression-based programming, outside of single-expression arrow functions), but it's still quite good.
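To make the parent's point concrete, here is the kind of data manipulation ES6 shortened considerably (sample data invented for the example): where ES5 needed function expressions and manual property access, destructuring and arrow functions keep the intent visible.

```javascript
const users = [
  { name: "Ann", age: 34, admin: true },
  { name: "Bob", age: 27, admin: false },
  { name: "Cyd", age: 41, admin: true },
];

// Destructuring in the parameter list replaces repetitive
// `function (u) { return u.admin; }` boilerplate.
const adminNames = users
  .filter(({ admin }) => admin)
  .map(({ name }) => name);

console.log(adminNames); // ["Ann", "Cyd"]
```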


My mobile devices, and ChromeOS's adoption of Linux and Android native apps, prove otherwise.
