The JavaScript Problem (haskell.org)
126 points by falava on Apr 26, 2014 | 170 comments

The people who hate javascript the most are the ones who wish it was something else. I used to be one of those people. Once I decided to accept it as it is and read a few top books on the language to learn the "javascript way", my life got a lot better. It's really not that bad. I actually like it a lot, but there are still a lot of ignorant people who will look at you as if you're not l33t enough to know that foo language is better if you tell them you have a favorable view of js.

Of the things on his list, the only ones that really resonate with me at all are verbose function syntax and silent coercion. Typing function () {} gets old fast, but then I set up snippets and it ceased being a problem. That really just leaves the type coercion problem, which you adapt to pretty quickly if you're using the language in earnest.
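For anyone who hasn't hit it yet, the silent coercion being referred to looks like this (a few stock examples, nothing exotic):

```javascript
// '+' concatenates if either operand is a string; '-' always coerces to number
console.log("1" + 1);  // "11"
console.log("1" - 1);  // 0

// Loose equality (==) coerces both sides before comparing
console.log(0 == "");  // true
console.log(0 === ""); // false, which is why linters push === by default
```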

What it boils down to is choosing to be a pragmatist who gets down to work with what's available instead of a navel-gazer who swears everything will be awesome as soon as they have their preferred tool/language available. There are tons of compiles-to-js alternatives, but I don't see a huge difference in the productivity of people working in these not-JS alternatives. Where are the killer apps that were developed through a niche JS transpiler? If you want to build a web app in Haskell instead of the web's true lingua franca, all you're accomplishing is making it harder to hire and harder to collaborate.

> If you want to build a web app in Haskell instead of the web's true lingua franca, all you're accomplishing is making it harder to hire and harder to collaborate.

We should be making the web multi-lingual. It's kind of absurd that we've built these great abstractions over hardware and few people ever think about writing machine language. But for the web, an abstraction over several abstractions, we're largely stuck with a single language and seemingly little real effort to build a multi-language abstraction.

If you told me 20 years ago about what the web would become, I'd be amazed. If you went on to say that all programming would be in a single language -- I'd think you were crazy, yet here we are.

The web should have added 1st class support for plug-ins, rather than the direction we went to effectively abolish plugins from browsers.

Browsers should not be multi-lingual -- more bloat.

The "compile X to javascript" movement has things half right. Unfortunately the "compile to" is Javascript. I wish the language the browsers implemented was something more modern, strong typing, etc. But I am not sure that will happen.

The issue is that the "compile to" thing shouldn't be another programming language at all. It should be something like Java bytecode. Except not actually Java bytecode, because Java bytecode was made specifically for Java. You want something designed to be an intermediary representation between code in any arbitrary language and native instructions for any arbitrary architecture.

In theory you could use Javascript as that thing, but it wouldn't be very efficient. On the other hand, what you could do is create a compiler that compiles to both bytecode and Javascript, and then give the Javascript to older browsers that don't support bytecode yet.

If you think about it, the bytecode you wish for just doesn't exist, whereas JavaScript does and it works.


The point is that we have something that sucks, not that we don't have anything at all.

As an example, Portable Native Client is "proper byte code" (simplified LLVM IR), but client-side compilation takes multiple times longer than AOT-compiling the same code from asm.js (the NaCl team is working hard on improving this though). The download size is also almost the same (if compressed). Performance isn't that much better either, so for every practical purpose "real" byte code isn't automatically better than sending (compiled and compressed) "source code" over the wire.

I'm even getting worse "cold start" times for the same code compiled natively on OS X than for starting the same demo as an Emscripten version (presumably because the native version first needs to load a lot of DLLs).

On the other hand, because PNaCl is a sane IR, its composability and implementability vastly exceed those of asm.js, which requires not only a full JavaScript stack, but additionally, a complex JavaScript JIT system capable of executing it with additional asm.js-specific optimizations.

I can take PNaCl today and deploy it on alternative stacks, irrespective of the web stack. I can interoperate cleanly with alternative libraries and environments, I can adopt efficient host platform ABIs, I can run it with a sandbox or without. I could even use it as a generic cross-platform sandboxed in-kernel driver layer.

Moreover, should NaCl/PNaCl be successful, there exists the possibility for optimization at the silicon level via the introduction of new instruction sets that optimize for in-process sandboxing of untrusted code, much in the same way that trap-and-emulate VMs led to much higher-performing silicon implementations of the same ideas.

Issues of performance can be solved given a well-designed system that applies only as much complexity as is needed, at the level that it is needed. asm.js is a hack inserted at an inappropriate layer of the technology stack; NaCl/PNaCl is a coherent, compositionally sane design that opens the door to later optimizations and significant improvements of the underlying technologies on which it rests.

If there exists a tool that compiles your language of choice to JavaScript, and the stuff you write in your language of choice works on modern browsers and is stable and performant, why do you care that it's JavaScript being compiled to and not byte code?

Because JS will always be compromised by being JS shoehorned into a role it wasn't designed for, and not bytecode optimised for the purpose at hand.

Shoehorning technologies into roles they weren't intended for could describe the entire scope of why the Web sucks (e.g.: HTML mixing presentation with semantics).

I agree that something like "byte code" would be that thing. Said "byte code" would deal with types and such.

Nowadays that "thing" is JavaScript, and it has implementation nuances that depend on the browser/runtime environment.

JavaScript and HTML are a lot more bloat than machine language code written to the wire.

Writing in a language appropriate to the task is almost always the right tradeoff.

> JavaScript and HTML are a lot more bloat than machine language code written to the wire.

http://mozakai.blogspot.com/2011/11/code-size-when-compiling... suggests otherwise in a head-to-head comparison: C code compiled to both a platform-native binary blob and JavaScript, then both gzipped (standard practice for any large JS file on the web).

And if you're talking platform-dependent machine code, that's likely _worse_ on the wire in practice, because unlike a platform-independent representation you can't do edge caching or other sorts of intermediate caches. But maybe you meant a virtual machine (i.e. what people ask for when they want a common bytecode)?

That's not how you measure bloat. There's a huge runtime associated with that JS code, and you ignored performance.

Ah, you mean client memory bloat, not transfer bloat (i.e. pageload time). That part was not very clear at all.

You still need a nontrivial runtime (a la PNaCl) if you're going to provide the sort of sandboxing guarantees people want for code that runs without explicit user opt-in. Or is that a non-goal in your case? In that case, I'd like to understand the problem you're trying to solve, since it sounds different from the one JavaScript in web browsers is solving.

I agree that things like asm.js have a performance hit compared to just running an unsafe binary blob. So does PNaCl (though the hit there is different from the asm.js hit: it has somewhat faster steady-state, but worse startup performance). Again, if you're not talking about something that has the same safety guarantees as JS and PNaCl you're comparing apples to oranges.

I probably wouldn't invent JS if I were tasked with coming up with a bytecode for the web. But how bad is it, on the overall scale of things? I mean, x86 is not a great ISA, and yet has prospered, despite some real nonsense corner cases and an anemic register set. Is JS bad as a bytecode the way that x86 is bad as an ISA, or bad at a different level?

There is a degree to which you can make anything do anything. Using JavaScript as bytecode is actually really easy if you don't care about performance whatsoever. And if you get together a group of smart people who try very hard for a long time, I'm sure you could even get it to have reasonable performance. Especially if the people writing the interpreter are talking to the people writing the compiler.

But why would you actually do that as a long-term solution? JavaScript isn't an ISA. It's not hard-coded into anything. You don't have to throw away all the existing JavaScript code to start supporting something else in addition to it. All you would have to do is write code to support whatever bytecode you actually want to use and contribute it to the major browsers.

I came into this discussion many hours late, but I have to ask: Why are people discussing this as if asm.js didn't exist?! It is fast. (It even supports integers!)

To me, asm.js looks like a bad kludge -- and a great road ahead, just extend it a little bit over the next decade with the stuff compiler writers will clamor for.

Am I missing something?

What you're missing is that there is no need to do that. Using JavaScript as an intermediary representation for other languages on a permanent basis is kind of ridiculous. Improving browser support for JavaScript-as-IR provides no benefit over adding support for a "real" IR and introduces superfluous complexity which is the enemy of both performance and security.

Compiling other languages to JavaScript is a tolerable transition mechanism while browsers that don't support something better are still popular, but trying to improve the ability of browsers to interpret compiled-to-JavaScript code from other languages is farcical -- if you're going to update the browser then update it to support something better than JavaScript, not to improve the performance of the horrible legacy transition kludge.

I don't disagree with anything you wrote, except the first sentence.

But I'm cynical about the possibility of getting a new bytecode standard out without, e.g., MS sabotag... extending it. Half the value of asm.js should be that if there is a working system already in place, the motivation for such shenanigans lessens.

EEE wouldn't work here. That only works when you have overwhelming market share. Microsoft has basically no browser market share on mobile and even on the desktop they no longer have so much that developers can afford to ignore everything else.

The way to handle Microsoft in a situation like this is to ignore them. Support the same IR on Firefox, Chrome and Safari and then Microsoft can play along or go home. At that point the worst they could do is implement an extended version that allows sites made for all browsers to work with IE but then encourage developers to make sites using Windows-specific features that won't work on other platforms. That's effectively what they tried with Silverlight and you see how far that got them.

Why do you care what the target of the compile is if you're happy with the language you're writing in? That seems like being upset that some code you wrote in a language you like might be running on some processor you don't like.

Because it requires some really difficult conversion work under the surface. For example, JS has no concept of integers. Shoehorning other languages in involves a lot of extra work, which by nature is going to be less efficient. Worse, the javascript engine's JIT can't take advantage of some of the semantics of the language being converted; e.g. if the JIT knew for sure that the number was an int, it could take advantage of that in various ways, or make a smarter decision based on the known performance characteristics of the hardware it's running on.
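To illustrate the integer point, here's the generic `|0` trick used by asm.js-style output (sketched by hand, not output from any particular compiler):

```javascript
// All JavaScript numbers are IEEE-754 doubles. `x | 0` truncates to a
// signed 32-bit integer, which is how compiled-to-JS code hints to the
// JIT that a value is always an int32.
function add(a, b) {
  a = a | 0;           // parameter is an int32
  b = b | 0;           // parameter is an int32
  return (a + b) | 0;  // result stays int32 (wraps on overflow)
}

console.log(add(2, 3));           // 5
console.log(add(2147483647, 1));  // -2147483648, int32 wraparound
```

Without the annotations, the engine has to assume every value might be a double and deoptimize accordingly.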

because it's inevitably a leaky abstraction

Jesus, the next time I read the phrase 'leaky abstraction' I am going to throw up.

I guess Google's Dart is trying to go that way. Not sure it's going to get there though.

The web was multi-lingual; it's why script tags have a "type" attribute.

I personally think that the death of plugins came about as the result of the iPhone, because x86 binary plugins won't run on an ARM-based phone.

Uh, scripts aren't plugins, right? In theory there could be other scripting languages for use in web browsers, which is what the "type" attribute would have been used to indicate. It's just that nothing else ever really took off. Other than Microsoft's VBScript, the only browsers I can think of that supported other scripting languages were early proof-of-concept attempts like Grail.

I think you're a bit confused about browser plugins in general here. They're OS-specific and CPU-specific. I can run Linux, Windows and OS X on a MacBook, but I need OS-specific versions of both the browser of my choice for each one and the browser plugins of my choice for each one. If the iPhone has helped contribute to the decline of plugins, it's because iOS doesn't support them at all, full stop -- if it did, though, they'd have needed to be compiled in iOS-specific versions even if the iPhone used x86 chips.

I was responding to two statements in one comment; I apologize for the confusing non-sequitur.

I do agree that plugins are OS and CPU specific, but I think the popularity and explicit lack of support in iOS and Mobile Safari (which, to my understanding, are stripped down ports of OS X and Safari) really caused web developers and their clients to move off of Flash (and other plugins). For a number of years, Adobe shipped a version of Flash for Android, and I'm under the impression that they were always willing to port it to other platforms/devices if paid enough money (since the Linux binaries had an explicit restriction from inclusion in devices; you could only use them with general-purpose PCs). Java ME ran on phones for years, too, although I don't recall anyone shipping a mobile browser with Java SE plugin support. If someone (a client or web developer) wanted their website to work on iPhone, they had to remove the plugins, and I think many people just started designing around that constraint.

I think it comes down to two things.

* Acknowledging that it has many, many warts: weak typing, poor modularity, every number is a double, etc. Some of those warts are just there, and explaining why they are there historically doesn't make them go away either. "Oh, but it was only written in 1 month, so..." -- "Yeah, it is still broken".

Many things can be chalked up to "not broken but different". But some are just broken; no sane language released after the 1990s should be this broken unless it was on purpose (brainfuck).

* Realizing that in practice, currently, there are not too many alternatives and just learning to use it. Like you say just getting down to being pragmatic. One can talk about functional purity but that doesn't usually bring food to the table. A finished site does.

But you see how the two are different. Many confuse them. For example, saying the language is fine and great just because they learned it and are using it. Yeah, at some level we want to feel good about the tools we use, so it is harder psychologically to say "this stupid thing I have to use every day". It goes the other way too: just because the language is broken and there is some pure language out there, rewriting the site into it might be very risky.

I personally am looking at Dart with excitement but at the same time won't be urging anyone at work to switch to it yet. Kind of a chicken and egg problem, I understand; we just can't afford to take that risk yet. But with time, who knows.

Javascript's real problem is unfamiliar semantics hidden behind familiar syntax. It throws people off and they hate the language because they think it's weird. Really, they just haven't learned how to actually use it.

Javascript is a tiny language, but it packs a serious amount of gotchas.

If you learn the javascript way, you just realise that the core language provides little to no tooling around it. If you really want to fully utilise the prototype-based inheritance, or the functional aspect of the language, you will basically have to write a whole set of utilities and extensions on your own.

Of course, javascript runs in the browser, is small and super simple, so it packs a lot of fun. It oddly reminds me of toying with assembly in the old 16-bit DOS days, and I like it despite its flaws (and those flaws are being addressed in the latest iterations). But the flaws are real, and if you need to use it on a big project with a regular team, without complete freedom in choosing the browsers you support, javascript will make you cry more often than smile.

> Like if you really want to fully utilise the prototype based inheritance, or the functional aspect of the language, you will basically have to write a whole set of utilities and extension on your own.

Hmm, this is definitely something of an exaggeration. With old-school cross-browser JS you just need something like Backbone's `extend` implementation (jashkenas posted a shorter example a while back [1]) for inheritance and underscore.js [2] for your functional aspects (assuming that's the sort of thing you mean).

If your criticism is that those things had to be written, then this was solved with ES5 which gives you `Object.create` [3] for your inheritance needs and `map()`, `filter()` et al for your functional stuff.
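For reference, the ES5 version of both is only a few lines (a minimal sketch with made-up names):

```javascript
// Prototypal inheritance with nothing but Object.create (ES5)
var animal = {
  describe: function () { return this.name + " says " + this.sound; }
};

var dog = Object.create(animal); // dog's prototype is animal
dog.name = "Rex";
dog.sound = "woof";
console.log(dog.describe()); // "Rex says woof"

// And the built-in functional tools mentioned above
var evens = [1, 2, 3, 4].filter(function (n) { return n % 2 === 0; }); // [2, 4]
var doubled = evens.map(function (n) { return n * 2; });               // [4, 8]
```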

I certainly don't see why you'd ever need 'to write a whole set of utilities and extensions of your own'; that reads as pretty hand-wavy and unfair criticism.

> if you need to use it on a big project with a regular team, and not complete freedom of choosing the browser you support, javascript will make you cry more often than smile.

I think you are depicting a pretty broadly-shit scenario that I don't think any other language ecosystem deals with well either. Trying to support many different clients is a nightmare across e.g. mobile and the desktop using any other language/ecosystem too, with the same 'regular team'. And I don't agree there's anything particular about JS that will make you 'cry more often than smile' in that scenario, but that's obviously subjective.

  [1] https://news.ycombinator.com/item?id=7244023
  [2] http://underscorejs.org/
  [3] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/create

Doesn't that argument get weaker as JavaScript gets more widely used? It's already one of the most prominent languages in the world, and is probably increasingly the first programming language encountered and learned by beginners.

One problem is that these users do not necessarily understand the semantics. I helped a friend of mine, a reasonably competent programmer, debug his javascript once. He had no idea that you needed to use "var" in order to make a variable local! He'd written quite a lot of javascript without triggering any funny behavior until this bug.

So, instead, imho, the argument gets stronger. Why is the first programming language encountered by beginners these days full of traps? How cruel! What a poor impression of programming they must get.

> Where are the killer apps that were developed through a niche JS transpiler? If you want to build a web app in Haskell instead of the web's true lingua franca, all you're accomplishing is making it harder to hire and harder to collaborate.

You're right about one thing: JavaScript is awful, but trying to avoid JavaScript while targeting a web browser is worse.

I simply don't write software for the web. HTML/CSS/JS is too much of a train wreck of bad design for me to find any pleasure whatsoever in the task, and I'm content leaving the job to anyone who can stomach it.

However, if we followed your argument to its conclusion, we'd never escape the atrophied staleness of a browser technology core that somehow has never moved past 1995-era Netscape.

> The people who hate javascript the most are the ones who wish it was something else.

Ask yourself: how can a single language fit every use case, for every developer? We can't agree on what we should use on the server, but we are basically forced to use one language in the browser. Of course some people are going to hate it.

I shouldn't have to adapt to javascript; javascript should adapt to my coding style. But that ain't gonna happen.

Weak typing and type coercion are IMHO the biggest flaws of javascript. I can live with the rest, but not that.

Proxies will help a bit, though. Devs will definitely abuse them.

IMO the biggest problem, even worse than type coercions, is the error silencing regarding wrong argument counts.

This can easily hide so many bugs, it's amazing it is acceptable to anyone.

In practice, this issue is rarely (like 1-2 times a year) encountered. And then functional tests catch the error. It would not be acceptable if it was a constant thorn.

Do you have an example?

Call a function "foo" that expects 3 arguments with 5 or 2 arguments, instead.

Javascript will gladly accept that and not yield any error, not just at compile-time, but even at run-time!

Unless the function explicitly checks that the argument list is correct, it will simply hide the bug.
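Concretely:

```javascript
function area(width, height) {
  return width * height;        // no arity check of any kind
}

console.log(area(3, 4));        // 12, fine
console.log(area(3));           // NaN: height is undefined, no error raised
console.log(area(3, 4, 5, 6));  // 12: extra arguments silently dropped
```

Neither the short call nor the long call produces any error; the `NaN` just propagates until something visibly misbehaves.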

When changing function signatures in any language, it is the programmers' collective responsibility to fix all callers (some of which in the same working tree, some hidden in others' branches, etc). In a static-language, it is very easy, you get compile-time errors at some point. In dynamically typed languages, it is slightly harder -- you get an error (ideally under a test suite) and fix it there. In Javascript you get cryptic bugs. No compile-time error. No run-time error. Happy debugging!

How is that an issue?

How is a common change (updating a function signature) easily causing cryptic bugs rather than a visible compile-time or run-time error an issue?

Bugs are bad?

Guess the result of this fragment:

    ["1", "2", "3"].map(parseInt)

Ok, that's crazy.

Here's the explanation for anyone interested:
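In short: `map` calls its callback with three arguments (element, index, array), and `parseInt` takes an optional second argument, the radix, so the array index ends up being used as the radix:

```javascript
["1", "2", "3"].map(parseInt); // [1, NaN, NaN]
// parseInt("1", 0) -> 1    (radix 0 means "auto-detect", so decimal)
// parseInt("2", 1) -> NaN  (radix 1 is not valid)
// parseInt("3", 2) -> NaN  ("3" is not a base-2 digit)

// The fix: pass exactly the arguments you mean to pass
["1", "2", "3"].map(function (s) { return parseInt(s, 10); }); // [1, 2, 3]
```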


> verbose function syntax

I didn't agree with that one myself. I mean, ok, now we have C++11 and can just type [](){} for inline lambda functions, but before C++11 you needed a huge amount of boilerplate to pass a function to something. JavaScript's way to just type "function something" inline was so non-verbose and convenient compared to that.

So what is it being compared to instead here? Functional programming languages?

> lack of module system

ES6 fixes this with, well, a module system.

> weak-typing,

Yup, this is a problem.

> verbose function syntax,

ES6 fixes this with arrow functions.

> late binding

Does this just mean dynamic typing? Well, yes, JavaScript is dynamically typed, but I wouldn't call that a language flaw. Static vs. dynamic typing is a tradeoff.

> which has led to the creation of various static analysis tools to alleviate this language flaw

The footnote talks about JSLint and friends, but none of those impose a type system, which means that they do nothing about dynamic typing.

> but with limited success (there is even a static type checker)

Well, yeah. It's very hard (read: "research problem") to impose a good static typechecker on a dynamically typed system, though.

> finicky equality/automatic conversion

Yeah, this is bad. I really wish tools like restrict mode [1] had caught on: together with ES6 they eliminate a lot of what people dislike about JavaScript.

> this behaviour,

Fixed in ES6 if you use arrow functions (finally!)
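A quick sketch of the fix (made-up example):

```javascript
function Counter() {
  this.count = 0;
  // Pre-ES6: a plain function gets its own `this`, decided at the call site
  this.incPlain = function () { this.count += 1; };
  // ES6 arrow function: `this` is captured lexically, always this Counter
  this.incArrow = () => { this.count += 1; };
}

var c = new Counter();
var inc = c.incArrow;  // detach the method, as a callback would
inc();
console.log(c.count);  // 1: the arrow function still sees the right `this`

var incP = c.incPlain;
// incP(); // would update the global object (or throw in strict mode), not `c`
```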

> and lack of static types.

Again, I wouldn't say it "sucks" for this reason, just that it's dynamically typed. That's a tradeoff.

[1]: http://restrictmode.org/

> Again, I wouldn't say it "sucks" for this reason, just that it's dynamically typed. That's a tradeoff.

This is the Haskell wiki. I would expect most participants to view having a single global implicit union type to be a flaw, not a trade-off.

It's sad that we should expect Haskell users to be chauvinistic about static typing. I have yet to read anyone who advocates for dynamic typing describe static typing as a "flaw".

There's nothing chauvinistic about it. If you want to hear people honestly and accurately describe "static" type system's flaws -- talk to type system theorists.

Type systems can be provably flawed.

What you can prove about a uni-typed late-bound ("dynamic") language is quite simply that it's unilaterally flawed.

The chauvinism involved is the assumption that a type theory is the only legitimate way to assess the quality of a programming language's type system. Haskell is, of course, a near-ideal language when judged by the values of its creators and advocates. The inability to recognize that those values are subjective is chauvinism.

It's chauvinistic to believe that the only way to assess math is through deductive reasoning?

By that measure, all "STEM" fields are inherently "chauvinistic".

Programming isn't math.

That's really, really funny.

Engineering isn't math either, but you can't do it right without math.

I know some people really want programming to just be an application of the lambda calculus. It's a model that really appeals to people who prefer a top-down, deductive model to their work. It's similar to the appeal that draws people to Austrian economics.

And yet that is not the original model for thinking about software, nor the only valid one. Haskellers who rely solely on deductive arguments in favor of their method are going to be stuck forever wondering why so few people are using their clearly superior tool.

Less accurate "models" for "thinking about software" are less accurate, and whether they were thought of first or last has no bearing on that. The fact that you're using "earliest date of discovery" as supporting evidence demonstrates that you're not approaching the question as an objective observer.

This isn't a popularity contest, though if you were measuring popularity based on positive impact, that of Haskell has been substantial.

The only advantage to leveraging simpler and less accurate models is that they're easier for the people using them to understand at a micro scale -- at the cost of systemic accuracy and verifiability.

In being less accurate, they create systems that are more difficult to understand, more difficult to abstract and compose, more difficult to optimize, and more difficult to measure.

There's something destructive and anti-intellectual about trying to convince the world that deductive reasoning is just a matter of opinion.

> The fact that you're using "earliest date of discovery" as supporting evidence

It was a fun discussion while it lasted, but conjuring words out of thin air and putting them in my mouth is where it ends. Thanks.

"And yet that is not the original model for thinking about software ..."

>It's sad that we should expect Haskell users to be chauvinistic about static typing.

We don't expect that. We expect them to recognize that better type systems are better than worse type systems. Which seems pretty obvious when stated that way.

>I have yet to read anyone who advocates for dynamic typing describe static typing as a "flaw".

Try looking on the internet. Every "static vs dynamic" argument has 99% of the dynamic side arguing that static typing is bad because java's type system limits them and doesn't prevent any bugs.

> We expect them to recognize that better type systems are better than worse type systems. Which seems pretty obvious when stated that way.

Obvious, because you just said "with better defined as having more static typing, static typing is better". That's an empty statement if I ever saw one.

That it's better to accurately and reliably model complex systems and invariants through provable assertions is objectively no more an empty statement than saying it's better to apply informed materials and structural engineering to the design of complex physical products than to build ad-hoc designs of unknown parameters through guesswork.

If you think that building software is like building bridges, then yes. If you think that it's the exploration of a problem domain that may not be known to you at the start of the process and is subject to various degrees of iteration (e.g. a process more like writing a script/play), then you get vastly different requirements. That's what I meant by "if you define better as more static typing". "Better" is a normative statement and can never be absolute, it's always relative to a choice.

I don't see how it follows that iteration reduces the degree to which one must understand a system. That's just a rephrasing of the attractive but ultimately empty "dynamic typing is just more creative."

Bridges (or car engines, or any other engineered product) aren't invented whole cloth sans iteration. However, it's understanding of the invariants of the iterated system that allow for directed iterative design and ultimately better products.

Rather than repeat this argument, I'll refer to https://news.ycombinator.com/item?id=7656184

You just proved djur's point.

What's wrong with being passionate about something you believe in?

It seems to me that this "chauvinism" is in the eye of the beholder, offended that others are suggesting that the tools they have invested time in may not be the best way to do things.

Personally, I am quite excited by the prospect of learning and investigating better tools. I think we're only just beginning down the road to finding better ways of programming.

Best languages allow both type systems with dynamic being opt-in.

I accept that I will be downvoted for this.

My impression of the Haskell community is that it's full of snobbery and complaining.

This has got to be the 4th "javascript sucks" post on HN in the past week.

I like hearing the positive aspects of Haskell. However, the language x sucks posts are tiresome.

Develop an imagination. Javascript can be wielded very effectively. It's just different. I hope that I or someone else writes posts on what can be done to be effective with javascript and weakly typed systems.

There are advantages to using weakly typed languages, mostly centered around flexibility and rapid iteration. The disadvantages are alleviated by comprehensive functional or black box testing. Skip the unit tests.

I feel liberated by not having a strong type system restricting my options. Ymmv. People have different styles and preferences.

>My impression of the Haskell community is its full of snobbery and complaining.

Probably because the Haskell community is used to having a programming language that is, in and of itself, very good. They complain because they know what they're missing when they have to use inferior tools.

>Develop an imagination. Javascript can be wielded very effectively. It's just different.

The same can be said of x86 assembly. That doesn't mean we don't complain about programming in it, and we don't try to find better alternatives. The entire field of programming language development is about finding tools that are better than the old ones, even if the old ones work OK.

>The disadvantages are alleviated by comprehensive functional or black box testing.

If you think "more testing" alleviates all the problems of weakly typed languages, you have no idea what you're missing.

>I feel liberated by not having a strong type system restricting my options.

If you feel restricted by a type system, you're using it wrong. A type system is a useful tool, not an impediment. It only prevents you from doing wrong things.

"I like assembly because it lets me do any operation on anything."

>>I feel liberated by not having a strong type system restricting my options.

>If you feel restricted by a type system, you're using it wrong. A type system is a useful tool, not an impediment. It only prevents you from doing wrong things.

I'm always baffled about how people say that using a dynamic language feels "liberating". I feel completely the opposite; I have to be much more careful with a dynamic language since either there is no compiler or it can't detect many errors, so I have to do it myself (because, and this is a very important lesson for absolutely everyone without exception: you will make mistakes, no matter how good you are or how good you think you are). Getting rid of static typing feels to me like, above all, getting rid of a lot of very useful guarantees.
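A concrete sketch of the kind of silent mistake being described; the function and its typo are hypothetical examples:

```javascript
// Hypothetical sketch of a silent runtime mistake: the property name
// is misspelled, but JavaScript raises no error at all.
function averageLength(words) {
  var total = 0;
  for (var i = 0; i < words.length; i++) {
    total += words[i].lenght; // typo: should be .length
  }
  return total / words.length;
}

// undefined + number silently coerces to NaN; nothing fails loudly.
console.log(averageLength(["foo", "quux"])); // NaN
```

A static checker (TypeScript's, for instance) would typically reject `.lenght` as an unknown property before the code ever ran.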

Aren't these two equivalent statements? Liberty to do any particular thing, ever, increases the ability to do something wrong. And so requires more caution. That doesn't stop restrictions that would alleviate said caution from being overbearing themselves. (Maybe they aren't always overbearing, either way...)

> Liberty to do any particular thing, ever, increases the ability to do something wrong.

The difference is that "liberty" tends to increase one's ability to do both right and wrong. Dynamic typing tends to only increase one's ability to do wrong.

> Dynamic typing tends to only increase one's ability to do wrong.

Apparently you place no value on programmer time. Why should I have to waste time with type declarations that add no benefit to my program? Why should I have to construct an artificial type hierarchy just so I can write common code to deal with two different kinds of objects that are duck-type compatible but don't happen to share a common base type that the type system recognizes?

I agree that there are certain kinds of applications where static typing can be a benefit that is worth spending the extra programmer time; but the claim you are making is much stronger than that.
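The duck-typing point above can be sketched in a few lines; both object shapes here are hypothetical examples:

```javascript
// Duck typing sketch: render() accepts any object with a describe()
// method; no shared base type is declared anywhere.
var dog = { describe: function () { return "a dog"; } };
var invoice = { describe: function () { return "invoice #42"; } };

function render(thing) {
  return "This is " + thing.describe();
}

console.log(render(dog));     // "This is a dog"
console.log(render(invoice)); // "This is invoice #42"
```

In a nominally typed language, `dog` and `invoice` would need to share a declared interface before `render` could accept both.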

Static typing can save a lot of time otherwise debugging later, and really isn't that much of a time investment.

I agree, for certain types of applications. But as a general statement applying to all types of applications, I disagree strongly.

Static typing gives me the liberty to explicitly define much more complex ideas in ways that leverage the compiler to create powerful and beneficial systems that would otherwise be untenable in a language in which one could not leverage the intelligence of the type system to offset their human limitations.

This is why unit testing took over the dynamic language world more than a decade ago.

And at least in Perl, which has a bit of extensible syntax, you can declare parameter types of methods/functions. That is an especially good idea for the external APIs of libraries, etc.

All tools have quirks, which forces a bit of change in how you use them. You have to look at the total sum of these effects. How many extra days will it take to learn? How much will the workarounds cost you? Etc.

Granted, JavaScript, for example, has more weirdness than should be possible -- but it is fast, fun, and has lots of use cases.

>If you feel restricted by a type system, you're using it wrong. A type system is a useful tool, not an impediment. It only prevents you from doing wrong things.

Only a perfect type system is capable of preventing only wrong things, and allowing all correct things. In the mean time, we're left with things that are slightly less perfect. Like OCaml, Haskell, SML, Java, Smalltalk, Ruby, Javascript, Python, PHP, Prolog, and everything else that even attempts a type system.

Here's the problem with your post - and it has nothing to do with what you think it does. It attacks two things at once and quite distinctly. On one hand you're defending JavaScript - and that's separate from your attack on the Haskell community. It's just unfocused and confusingly conflated. I'm not going to down vote you, but you need to organize your thoughts a little more coherently if you don't want to get down voted.

PS The reason there are so many JavaScript-sucks posts is because JavaScript, a language designed for very menial things in 10 days in the mid 1990s, has become the lingua franca of the world and is used for sophisticated applications outside of its wheelhouse. A lot of very respectable people believe it's a poor tool for the complex applications that it's being used for, and the less respectable people just reiterate this as "it sucks". It's not that it sucks for making a photo gallery, it's that it sucks compared to other languages for making a rich, modern, immersive, large application. The key word there is large. All of the reasons they reiterate about "why" it sucks are only relevant when you couple them with the "large".

Apologies about the lack of cohesion of my post. It's tough to make posts on a phone.

I have to confess that my interactions with the Haskell community have not been as positive as I would like. My development preferences are not respected by Haskell fans. I'm ok with disagreement. However, if I do "defend" javascript, I'm attacked as being ignorant because I don't see things the same way.

My point is javascript can be an amazing experience, and very scalable, if approached in certain ways.

I acknowledge that Haskell is amazing too.

>It's not that it sucks for making a photo gallery, it's that it sucks compared to other languages for making a rich, modern, immersive, large application. The key word there is large.

Very large apps have been made in C. I'm making a fairly large app in javascript. It's been pretty fun. I'm confident that the app I'm working on can be scaled to be considerably bigger.

A problem is most people try to use techniques that are more appropriate in other languages.

I think the "very large app" idea is something of a strawman. It's used by all languages from time to time in order to say "hey, look, I can be serious too!" but it's clear that with enough effort and familiarity you can write a large app in anything.

I think the real distinction between Javascript and Haskell runs much more deeply and is difficult to talk about. As a hint to the moral here, many of the posts defending Javascript here try to talk about its syntactic flaws as being not so bad. I don't think anyone in the Haskell community gives a damn about syntax—not in that kind of way—because they've all learned quite nicely that syntax is a tiny fraction of what makes a language interesting.

So the real distinction is semantics. Semantics are hard to argue about because it's difficult to distinguish between semantics until you've either studied them in the abstract or experienced several wildly different kinds. It's also hard to argue about the value of different semantics until you've seen those different kinds used in anger.

Prolog is different from C is different from Erlang. Lisp isn't so different from C, but it does have some cute functional tricks. Clojure is different from Lisp in spirit as its semantics are often distributed and immutable, though as a language it cannot be trusted that way, only as a community.

Haskell is far different from all of those because it's pure, lazy, stupidly committed to its System F and Hindley-Milner heritage, and has one of the most interesting and powerful type systems of any language today. It's completely different from everything in my prior list, though there's a passing similarity to Scala and OCaml.

Someone might argue that the semantic island where Haskell lives is better than the one where Javascript lives. They might argue historically by noting that Haskell builds upon 30 years of PL research while Javascript is lightly influenced by one interesting historical point. They might argue categorically by saying that Haskell's semantics benefit so much from being mathematical. They might argue theoretically by saying that Haskell's semantics can be shown to subsume the semantics of other languages (by the uni-typed argument and Dynamic coupled to monadic regions).

Javascript seems to be relegated to saying that it's not so bad because you can get stuff done in it, it's more interesting than C or Java, and dynamic typing is not that bad once you get into it.

From the Haskell perspective, the first two are givens and the last one is silly knowing that in Haskell you can choose to use Dynamic types whenever you like and they become quickly relegated to just the problems they're most appropriate for.

> They might argue historically by noting that Haskell builds upon 30 years of PL research while Javascript is lightly influenced by one

Javascript, since it has not "evolved" over a long period of time, is more of a blank slate & more flexible in its semantics. It does not force you into a particular tradition, so you are free to develop your own semantics. There is much latitude in what you can do. It's up to the programmer to evolve the practice.

I'd like to explore the root of our differences. I have a hard time assimilating deep traditions. I respect my autonomy as an individual & creative being.

I prefer to be directly exposed to the problem and create solutions & practices based on that. I can't blindly follow a teaching or tradition. I'm always questioning.

There are advantages & disadvantages to this. A disadvantage is I may be repeating mistakes that others have already made. In my case, I have a hard time fully appreciating others' mistakes because I was not presented with the same situation. Group-oriented developers tend to despise this "independent" approach because it does not follow already established conventions. I've gone years with the mindset that if it's not a standard approach, then the code is "unmaintainable". I have since evolved to realize that code is like an interactive story or engine. If you can clearly express intent & purpose and there's automated verification (testing, type systems, etc.), then it's maintainable.

Advantages to this "independent" approach are that it's easier to assimilate the good parts of multiple traditions. You can also reject the legacy overhead of a particular tradition. Like how Bruce Lee rejected the extraneous movements of Kung Fu when he created Jeet Kune Do, which is an assimilation of Kung Fu, Karate, Boxing, Fencing, etc. The approach is "Absorb what is useful, discard what is useless". http://en.wikipedia.org/wiki/Jeet_Kune_Do

Bruce Lee, like most true innovators, was criticized by his contemporaries. However, when the rubber hit the road, he kicked everybody's ass.

That can be great when there are one to a few programmers with a like mindset. I prefer to be in such environments. It can be quite difficult in projects with large teams. I don't like such environments. Politics become necessary & it feels dehumanizing.

I'd argue that you're correct in highlighting a certain tradition but that most arguments about the advantages of Haskell are completely orthogonal to this idea.

Haskell is more powerful than JavaScript as you can embed JavaScript into the semantics of Haskell. This is where the ideas of existential types and unityped languages come into play. Haskell is sufficiently rich to directly emulate JavaScript as a type-contained embedded language. Thus, all of the same exploration is available.

This idea is highly idiosyncratic Haskell. Nobody would honestly suggest it as Haskell guides the intuition of the programmer quite strongly. This embedded language must also immediately throw away much of the machinery and speed of Haskell.

The reasoning again is not that Haskell is right but instead that its semantics are a superset of those available in JavaScript.

But given this inclusion, most people rarely use features like pervasive state and dynamic typing while programming in Haskell. Part of this can be attributed to the force of language convention and regularity, but these techniques do show their advantage at times. This is what gives the Haskell programmer confidence that they understand the value of many language features: they've constantly redefined "Haskell" in many miniature, well-interleaved parts to be whatever makes sense for the problem at hand and thus have seen the value of a variety of language features.

So I'll end by suggesting that once you learn Haskell this notion of "design your own solution" is actually even greater than the one in JavaScript. I don't take this argument to be ground truth, but instead just a challenge. JavaScript and Haskell both allow the programmer great control over the semantics of their language, and Haskell has many mechanisms which give that power greater leverage than JavaScript can.

But by doing so, the post marks a real problem: The critique of JS seems to be mostly based on the prestige the language is enjoying -- or rather the lack thereof -- in a community. Any of those features is to be found in highly respected languages, too. Most of the rants are about respect and not being willing to respect the specifics of the language. Why does JS have to be built to suit test suites and tool chains originally designed for C/C++? Why not adapt those tools for use with JS? Isn't this what a universal machine is for? "Sucks" may be an easy answer, but it shouldn't be a general one.

Edit: There are only a few high-level languages which would adapt to such a wide range of programming patterns as JS does, or which would allow themselves to be redefined to such a large extent. (This is mostly thanks to the much frowned-upon late binding.) Isn't this admirable rather than awful?

Other reasons why JS would "suck" seem quite arbitrary: Curly brackets -- what about C or Java? Semicolons -- in the beginning, JS was critiqued for not requiring them and using them rather as a separator (in the first syntax version). Verboseness -- isn't the lack thereof why APL used to be quite a horror? Type coercion -- isn't this what made Perl so useful as a glue language? The critique is quite à la mode.

Nope. JS has technical problems which were all stated clearly and with references to academic works and industry examples (Google). You simply choose to ignore my carefully worded criticism calling it a “rant” and an “easy answer” and “à la mode”, with talk of irrelevant things like C/C++ and platitudes about Turing machines.

Sorry to see you downvoting a comment which doesn't please your expectations.

1) You are ignoring the notion of some, if not most, of the criticized features being also present in highly regarded languages.

2) "Most of the rants" (please mind the plural) doesn't address your post, but is on certain communities which are producing anti-JavaScript posts in series.

3) "easy answer" is quite what it means. There are so many patterns that are just reproducing approaches to be found in other languages for things that are already built in, or are overriding built-in features. (My favorite one is using "var self = this;", while `self` is a predefined variable in JS, pointing to the global object. What does this mean for code maintainability? What is "self.location" in a browser? Mhm. Just read the whole code to know.) At least some of these patterns do indeed reflect some laziness in coping with the language. Also, "easy answer" was meant to address the stereotypical use of "sucks". (E.g.: There is much criticism on the behavior of `this` in the DOM interface, which is, by the way, not part of the language. So, read the specs and use "Object.handleEvent". Criticize deprecated implementations of the DOM interface in older IE for lacking this, but do not qualify the language for this by "sucks"...)

4) "à la mode", again, isn't specific on your post, but on various historic positions and utterances on JavaScript to be found over the last 19 years. If you would try to read this carefully, you would find them to be contradictory and mirroring the popular notions of programming styles of the day. This is, what "à la mode" means. Take the criticism on the semicolons, for example: In the times when JS was critiqued for not requiring them (à la "a serious language has explicit statement delimiters as a requirement"), CoffeeScript would have been regarded as being even worse. Obviously, this perspective changed over time.

5) The notion of test suites (and the tradition of having analysis tools especially for C/C++) is not arbitrary, but reflects a real raison d'être of most of the frameworks attempting to fix the "problems". While these frameworks highly depend on late binding themselves, the most critiqued feature -- which is also a real problem for analysis tools -- is late binding. I would be optimistic that these issues could theoretically be overcome, based on the concept of "Turing completeness". (There is no reason why paths that can be resolved by a runtime wouldn't be resolvable by an analysis tool. Modifying the language instead of adapting the analysis tools is what I would call an "easy answer".)

I would be grateful, if you would bother to read a comment, or just would simply ignore it, rather than just taking it as an apparent aggression. There are developers who have committed to this language and who are quite happy with its working. (As another recent post, on JS framework reproducing the schemes of enterprisy backend schemes, put it: "You are ruining it".) There are different notions on this issue and there is no need of demonizing contrary perspectives.
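For context on the `var self = this` pattern debated above, a minimal sketch in the pre-ES5 style; the `Counter` example is hypothetical:

```javascript
// Sketch of the "var self = this" pattern. In browsers the global
// `self` aliases `window`; this local declaration shadows it so that
// closures can reach the outer `this`.
function Counter() {
  this.count = 0;
  var self = this; // shadows any global `self`
  this.makeIncrementer = function () {
    // A plain function invoked later gets its own `this`;
    // `self` still refers to this Counter instance.
    return function () {
      self.count += 1;
      return self.count;
    };
  };
}

var c = new Counter();
var inc = c.makeIncrementer();
console.log(inc()); // 1
console.log(inc()); // 2
```

The shadowing is exactly the maintainability concern raised in point 3: a reader must check the enclosing scope to know whether `self` means the instance or the global object.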

So you didn't read this (again)? Yes, it's a carefully worded criticism on my side, too. And as is clearly stated in the intro of the initial post, it's on criticism of a language from the point of view of a community formed around another language in general. (I'm in the JS business for 18 years now and have contributed some to its use, so I would appreciate it if you would consider this to be more than just an arbitrary utterance.)

And BTW, you accused me of things (in the post above) that weren't in my statement at all. I was just pointing out that there was indeed a point made by briantakita by noting that the various articles on JS published lately are mostly originating from communities grouped around other languages. And that perspectives may vary. Even the criticism of JS varies over time.

> People have different styles and preferences.

In terms of Haskell, it's not a question of style or preference, but provable correctness of the underlying theory.

I don't think programmers should have to be category theorists to 'effectively wield' a language, but category theorists sure as shit ought to be involved in designing the language being wielded.

I'm not a Haskeller, but I very much respect the research they've done, and continue to do -- and how it significantly improves the languages I do use.

However, the design and culture of languages like JavaScript are very much mired in inconsistent misguided design and cultural anti-intellectualism; they do not and can not benefit from the advancement in the state of the art, and are the programming equivalent of building bridges and skyscrapers without consulting structural/material engineers, based on a back-of-the-napkin sketch.

Personally, I don't prefer static typing -- I simply don't know of any other mechanism by which I can actually have a hope of understanding software I'm writing of any appreciable level of complexity.

I don't see where "preference" fits in here, other than "preferring" to not actually fully understand the system you're building.

Bold claims and strong language do not make up for a lack of citations. You prefer static typing because you admit it is the only hope you have of understanding your complex code. However you dismiss other peoples' preferences by claiming they are ignorant of the systems they are building. Are you, perhaps, the one true Scotsman?

It's not possible for anyone to build complex software in a unityped language while having a full, complete, and verifiable understanding of their own system invariants at any given moment.

Claims otherwise are the literal equivalent of making complex mathematical assertions without a single condensed proof.

This isn't opinion, any more than a formal proof is opinion.

You are making an opinion as to the definition of complex software. "Complex software" means different things to different people. Additionally, you've inserted a requirement of understanding one's systems that seems arbitrary and not universally applicable. Whatever happened to each his own?

> You are making an opinion as to the definition of complex software. "Complex software" means different things to different people.

"Complex" math or "complex" engineering means different things to different people, but "mathematical proof" and "deductive reasoning" do not.

> Additionally, you've inserted a requirement of understanding one's systems that seems arbitrary and not universally applicable.

It's not arbitrary, it's how we build systems that work, and work better.


"By understanding the constraints, engineers derive specifications for the limits within which a viable object or system may be produced and operated."

Without understanding one's system, it is impossible to understand its relation to one's constraints, and it is impossible to define the specifications within which that system operates.

It's possible to build a car or an engine without understanding the math and science behind them, but it's impossible to build an engine of known specifications, it's impossible to provably assert that it operates within those specifications, and it's impossible to engage in directed design based on those specifications and defined operational limits.

The best you can hope for is purely empirical evidence, and empirical evidence is of limited utility without a theoretical framework in which it can be applied to extrapolate further knowledge.

As a result, systems designed and implemented without concern for understanding the theoretical basis of their operation invariably perform more poorly and cannot compete with systems designed and implemented based on building and applying a theoretically driven specification and understanding of constraints.

To willfully prefer ignorance is objectively worse.

> Whatever happened to each his own?

Applied maths. It's either correct and verifiable or it's not.

This wiki page is 2 years old, I assure you the Haskell community settled this debate years ago and have moved on with our lives. We've since implemented a bunch of compilers to JavaScript. I suspect, rather, that you are criticizing the part of the HN community that submits links and those that upvote them.

For what it's worth, for someone calling people “snobbish” and “complaining”, it's quite rich to make a comment which is complaining and to say “develop an imagination.” :-)

Just upvoted the post, sorry. ;-)

These rants really seem to be based on the degree of respect some languages are enjoying in a certain community. Take weak typing / type coercion as an example: this is a common feature of scripting languages of the time, e.g. Perl. I never read complaints about Perl for the same issue, which might well be because it is a "serious" system language and because you might have to "really" learn it for any advanced use. (Even the boolean testing of int return types in Unix and C could be considered as some kind of premature type coercion.)

Late binding is probably the most powerful feature of the language. (The language is built around it.) If it doesn't go with your test suites, considering that both the language and the language the tests are built with are Turing complete, adapt the tests rather than the language. This is a bit like ranting on C not being Logo and for it having other output facilities than a turtle.

Why not just develop in JS -- read: think in JS -- for a change?
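A minimal sketch of what the late binding described above buys (and costs): method lookup happens at call time, so redefining a prototype method retroactively changes pre-existing instances. The `Greeter` example is hypothetical:

```javascript
// Late binding sketch: `greet` is looked up on the prototype at
// call time, so replacing it changes even existing instances.
function Greeter(name) { this.name = name; }
Greeter.prototype.greet = function () { return "Hello, " + this.name; };

var g = new Greeter("world");
console.log(g.greet()); // "Hello, world"

// Rebind after the instance already exists:
Greeter.prototype.greet = function () { return "Hi, " + this.name + "!"; };
console.log(g.greet()); // "Hi, world!"
```

This is the flexibility the comment praises and also the reason static analysis tools struggle: a method's behavior cannot be resolved before runtime.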

Horses are okay, but they're expensive, require much more maintenance, can only go about 40mph for a brief period of time, and can only seat two people, uncomfortably. And you're telling people who want to build roads to put cars on “why not just think in horses for a change?”

I never found javascript to be expensive. Sorry, but incorrect analogy.

> Sorry, but incorrect analogy.

You might as well say "incorrect human being", but okay.

> I never found javascript to be expensive.

That's very good for you. I'd stick with JavaScript if I didn't find it a horrible maintenance burden. But I do. You're free to differ, but you must accept my experience as part of my criticism; you can't discard that and then attack what remains.

>I accept that I will be downvoted for this.

Why would that happen? Ignorantly bashing haskell strawmen is pretty common here. Btw, while you hit two of the biggest ones, you did forget to complain about it being "academic" and thus unusable for "real world" programming. That should be pretty much standard by now.

I'm surprised that Elm is not mentioned:


True, it's not Haskell, but it's clearly highly-influenced by Haskell. Besides, they mention Fay, which isn't Haskell, either.

If the Haskell wiki didn't require a log-in and registration that requires special, personal permission, I'd add it myself.

I don't recommend Elm because it's not Haskell and it's too boxed into its way of doing things. Elm is really, really far from being an acceptable substitute for Haskell. The gap is comparable to the one between what Java and Scala can do.

PureScript is a better choice for a Haskell'ish language that is generically applicable to both browser and node-backed JS.

Importantly, you can decide how you want to do FRP/callbacks/whatevers according to the needs of the libraries/app/problem you're working with.

Also, PureScript has typeclasses, higher-kinded polymorphism, the works. It even has some cool stuff built in that Haskell does not!

See this post for example of why higher-kinded polymorphism is important: http://bitemyapp.com/posts/2014-04-11-aeson-and-user-created...

I would agree that there are probably more "Haskellish" compile-to-JS languages, but the wiki mentioned CoffeeScript and TypeScript. Surely Elm ranks above them in your book?

Besides, Fay doesn't have typeclasses, either, and it's mentioned.

I wasn't making the claim that Elm was somehow better than those options, just that I was surprised it wasn't included. I've just played around with Elm, nothing serious.

You're correct that Elm seems more like a DSL in some ways with its required FRP, and I agree that typeclasses and HKTs are nice, but if I could choose between doing my day job in Elm or staying with JavaScript, I know which I'd pick (hint: it's not JS).

Do not underestimate how powerful and useful HKTs and typeclasses are until you've used them in anger.

I have a guide to learning Haskell here: https://gist.github.com/bitemyapp/8739525

You should give it a whirl.

Just use PureScript if you're fortunate enough to know alternatives to JS exist :)

Why are you intentionally ignoring half of the parent's comment, twice? You're not contributing to the discussion at all.

It's a wiki. You are free to add this information. =]

Did you not read my last sentence? I'm actually not free to add the information. There's a special registration process that requires human intervention.

If the Haskell Wiki didn't require me to e-mail somebody personally and ask for permission to register, I'd be much more inclined to provide a one-off edit. By the time this person gets back to me (he's probably in Europe), I'll have neither the time nor inclination to make a one-time edit to a wiki that I've only visited on a handful of occasions.

I gnome-edit wikis all the time -- at least the ones with user-friendly sign-ups. Was the Haskell Wiki so beset by viagra spammers that it had to institute a human-mediated registration process? Maybe it was, but the downside is that then people like me are less likely to register for one-off edits.

Good point, it was full of spam and registration was closed. I forgot about that. It was because we had a really old MediaWiki version so it was actually unsafe to allow anyone untrusted to post on it due to exploits possible at the time. I say “we”, I don't have access to manage haskell.org. I'mma ping the mailing list to see about fixing this.

I think that's an excellent idea. Well-done.

I think spam was exactly the problem.

It's a shame and seems to really hurt. There's a lot of outdated information in the wiki and I suspect the obstacle to making fixes is a big contributor.

Clojurescript is an excellent solution to most of the problems mentioned. In particular, functional data structures & approaches are a great fit for coordinating dependent updates to a UI based on events happening to a base state - for a concrete example, the Om wrapper around React is very slick, performant, and idiomatic.

And if you want to leverage those functional data structures in a vanilla JavaScript context, there's a pre-compiled ClojureScript library for that!
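For illustration only (this is a hedged plain-JavaScript sketch of the functional-update style such libraries provide, not mori's actual API):

```javascript
// Functional-update sketch: updates produce fresh frozen objects
// instead of mutating the base state, so old versions remain valid.
var baseState = Object.freeze({ user: "ada", clicks: 0 });

function update(state, changes) {
  // Copy onto a new object; the original is left untouched.
  return Object.freeze(Object.assign({}, state, changes));
}

var next = update(baseState, { clicks: baseState.clicks + 1 });
console.log(baseState.clicks); // 0 -- the old value is still intact
console.log(next.clicks);      // 1
```

Persistent data structures in ClojureScript do this far more efficiently via structural sharing, but the programming model is the same: each event yields a new state value rather than an in-place mutation.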



Really the only thing missing from Clojurescript, as with all Lisps, is static types.

This is especially a problem (which can only be discovered at runtime) with newbies. It has some nice constructs but my friends and even sometimes myself end up with problems like trying to deref an atom that is already dereffed, or not dereffing in the first place.

ClojureScript depends on the JVM. This is a no-go for me. Once a ClojureScript compiler is implemented in JavaScript and can run on Node.js, then maybe I will consider it.

Would you articulate why it's a "no go" for you?

Leiningen[1] takes away 99.9% of the pain of having to work directly with javac, maven, classpaths, etc. And lein-cljsbuild[2] offers a configuration-driven approach to compiling and testing.

It's perfectly possible to create setups where node (plus grunt, gulp, shell scripts, make, etc.) "drives" leiningen, and vice versa. About a year ago I contributed such a setup[3] to David Nolen's mori library.

[1] http://leiningen.org/

[2] https://github.com/emezeske/lein-cljsbuild

[3] https://github.com/swannodette/mori/blob/master/package.json...

Because if other parts of your stack don't depend on Java, you have the overhead of having to install a complete java environment (on dev machines, CI, ...) just for compiling your front-end code. It also makes "language independent" tooling for javascript harder. Then again: I guess most people who use clojurescript are already using clojure so it wouldn't be a problem for them.

Getting the JVM installed requires very little effort. On ubuntu linux, for example, apt-get can be used to install openjdk or Oracle's distribution. Travis CI provides java (with a choice of JDKs) in its testing environments.

So, I understand your point, but the installation "overhead" seems more like a mole hill, hardly amounting to even a speed-bump sized obstacle.

I recently had to write up, for a README, how to install the JDK and set it up correctly on OS X. You'd be surprised. Thanks to Oracle's policies it isn't even close to where other runtimes/SDKs are (e.g. installing ruby, node, gcc, go, ocaml).

With homebrew and homebrew-cask installed on Mac OS 10.9, I was simply able to do:

   brew cask install java
That might not cover the bases for every Java developer's needs, but leiningen seems to work fine with the resulting installation.

I'll have to look into versions (gotta be java7) but that looks pretty great - thanks!

wouldn't it make more sense to implement the clojurescript compiler in clojurescript so it can run on node?

i get disliking having to fire up the jvm versus a light runtime like python or node if that's where you're coming from, but i don't mind twiddling my thumbs for a moment when i first sit down to write clojure, because not just the language but also the libraries and tools are more fun/interesting to use than a pure node (or pure python, for that matter) environment.

in short, and not to be snide, perhaps you've already written some clojurescript, but to anyone else reading over: try writing some clojure/clojurescript. you might not like it for whatever reason, but at least then it'll be a case of "no return" instead of "no go."

Seems a bit strange that there is no mention of ClojureScript in the "FP -> JS" section.

I feel like "The Birth & Death of JavaScript" by Gary Bernhardt is appropriate for this discussion. https://www.youtube.com/watch?v=Nr7ZEXLJHtE

In fact, the talk seems appropriate for every discussion regarding, but not limited to, how terrible JS is or how it should die.

I'm going to chime in to the chorus of "Why wasn't X mentioned" and throw out Scala.js.

It would seem that Scala solves quite a few of the article's complaints about Javascript, and in a manner that is accessible and usable for the masses. I love Haskell as much as the next guy, but I'd rather use Scala if I need to get stuff done.

I've been trying to convince my workmates to try something other than pure JS (with Angular), but am not having much luck. TypeScript seems to have the most buy-in, but most of the guys in the office aren't convinced of the benefits of a proper type system; they just see it as more work for them for little upside. Any ideas?

What kind of office are you in? If it's filled with a lot of front-end JavaScript "ninjas" I doubt you're going to make much headway. I hate to be pejorative, but I do mean the ninja part pejoratively.

My best advice would be to try to start by convincing the guys with the most dev experience in other languages/serious dev experience/(dare I be controversial and say )guys with the most formal CS education first.

Mostly back end guys who have to do front end, and the genuine front end guys are indeed "ninjas" (though, I must say they are very good at wrangling JS and debugging the problems we inevitably run into). My boss is convinced at least, and I recently just convinced the team to move to a proper testing + CI system (which the boss had wanted to do for 12 months but never had enough buy-in from the team, so he was pretty stoked when I brought it up), so perhaps it'll just be a waiting game.

I'm tempted to reimplement a part of the system I'm currently working on in TypeScript to demonstrate it. I do worry that I'll come across as trying to show the team up though, as I'm still pretty new here :(

You're not going to get anywhere with the ninjas. Suggest a system that's similar to what you do your backend in. Is your backend in Java? Suggest Dart. Is your backend in C#? Suggest TypeScript. Is your backend in Ruby? Maybe you can at least get some CoffeeScript going.

It is hard; if you look at the article, it even says "but we are stuck with it". With Angular JS there is just not that much JS to make it a big issue. So I am not sure that is the best framework to switch to a new language from (they'd have to re-map all the tutorials and documentation and everything they learned to this new language).

Coffeescript was very popular on HN a year or two back. Then there was kind of a backlash against it and I haven't heard much since.

Now I am looking at Dart; I think that's one of the few viable alternatives coming up. They are moving to make it a standard. There is already a performant VM for it (in Dartium) that can run in Chrome. It compiles to JS and whatever else you want, and it has Google behind it. So it has some chance. Dart+Angular integration is going very well. They even added IDE completion for it. So if I gave you any advice, I would say look at that.

Typescript is hard.

Its module system is a mistake (it should be module-system agnostic, not AMD or CommonJS), and very confusing.

The biggest problem is javascript is so dynamic you can't create type definitions for every library out there and expect the type system to work.

Typescript is good if you are porting something from AS3/C#/Java to javascript, or if you want to model a complex domain without touching the DOM API (or Node API).

It sucks as soon as you try to use jQuery with it. You end up with `any` everywhere, which defeats the purpose of the language.

In my opinion, if you need to pick a JS alternative, choose Traceur.

I could not agree less with your post. I ported a 12k loc JS project to TS and it uses jQuery heavily. The result was that the static type system could point out 6 bugs from my original code that unit tests did not find, and now I have statement completion and powerful refactoring tools for everything as well. It plays really nice with jQuery if you have the right type declarations, and I am more productive now with all the code intellisense I get from the IDE.

Also the module system is module system agnostic and it can compile either to commonjs or amd, it depends on a compiler switch. I am using mostly requirejs and it works well.

Umm, no experience with typescript here, but can't you use the type annotations from the DefinitelyTyped project [1] to avoid everything becoming `any`? There are open source type annotations for all major (and many minor) libraries, including jquery [2].

[1] https://github.com/borisyankov/DefinitelyTyped

[2] https://github.com/borisyankov/DefinitelyTyped/tree/master/j...

DefinitelyTyped doesn't fix everything, just have a look at the ambient declarations.

Is it that they don't properly understand the benefits of switching on their productivity, or that they just don't think the productivity gain will be large enough to justify the switch?

If it's the former, then yeah, a logical explanation might help, but the latter seems a bit more subjective, and it's possible they just see their current workflow as completely adequate. Quantifying productivity gains is tricky, and the "if it ain't broke, don't fix it" philosophy has some merit.

The things that gave javascript most of its quirks were automatic semicolon insertion, `this`, and implicit global variable declaration. It's the same as how HTML has horrible parts: from the start it bent over backward to be as fault tolerant as possible.

That's why it succeeded and why a strong, statically typed language with very strict error enforcement wouldn't have made it in the first place. Some things were added bit by bit later (e.g. modules, strict mode). The rest is a matter of taste and use case: late binding allows different things, all numbers being floats is tasteless but it's not handicapping and there are ways around it, syntax is a matter of tooling.
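A couple of those quirks are easy to demonstrate in a few lines (function names here are made up):

```javascript
// Automatic semicolon insertion: the parser puts a ';' directly after
// `return`, so this function returns undefined, not an object.
function makeConfig() {
  return
  { verbose: true }
}

// Without 'use strict', assigning to an undeclared name silently
// creates a global variable instead of throwing:
//
//   function leak() { accidentallyGlobal = 42; }  // no var/let/const
//
// And since all numbers are IEEE-754 doubles, integer arithmetic
// quietly loses precision past 2^53:
console.log(makeConfig());          // undefined
console.log(9007199254740992 + 1);  // 9007199254740992
```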

I think anyone wanting to write serious applications for a platform should try to stay as close as possible to the native environment, and adding an abstraction layer on top of javascript because 'it sucks' feels childish.

Using words like “sucks” is a bit of fun. Seeing obvious deficiencies in a language and putting up with it for years instead of doing some work and using a better one is childish.

I wrote (the start of) this page. Please bear in mind this is the Haskell community. We like our static types. This page is also kind of old news (2 years old) for us, the discussion is over, really. But I'll describe the history of it for those interested.

So I wrote this paragraph (http://www.haskell.org/haskellwiki/The_JavaScript_Problem#Ot...) in 2012 in a reply to a reddit comment. Around that time the Haskell community was growing in its web dev circles, and we as a community were cultivating a sense that writing our web apps was not fun in 50% of the task, which was the front-end.

There were actually a bunch of alternative attacks.

* Using so-called widgets which are compiled by Haskell and contain very minimal pieces of JavaScript that the user of the library never sees. This is okay. But if you're developing something more complex, you start to wish you were back in Haskell.

* Some were also thinking, perhaps it's just the framework that makes this work awful, we just need to pick the right reactive-MVC kind of framework.

* Use HJscript, an EDSL of JavaScript embedded in Haskell. This is not bad, I used this in hpaste and it's still running today: https://github.com/chrisdone/lpaste/blob/master/src/Hpaste/V... However, you're still left with the semantics of JavaScript; it doesn't really improve upon it. Although, if you're interested in this approach, definitely check out Sunroof https://github.com/ku-fpg/sunroof-compiler which is like a new and improved version which has continuations out of the box.

* Some thought maybe CoffeeScript was enough, but it quickly becomes obvious that the problem is nowhere near syntax-deep.

In the end, I think everyone pretty much agreed just using Haskell would be better. But there were no viable transpilers, so we were continuing with JavaScript feeling that things could be better.

After that I decided to start trying out the bitrotted crop of compilers. I tried GHCJS first, and reported my findings: http://chrisdone.com/posts/ghcjs The first couple paragraphs are now on the wiki. I thought this community "feeling" should be documented, and copying “The Expression Problem”, thought it would be catchy to make a “The JavaScript Problem” post that is concise, opinionated and comprehensive. Ask any random Haskeller and they will pretty much agree with both paragraphs. It's easy to point someone to a wiki article to have a common understanding of the problem.

I'm not one to simply whinge about things, however. I wrote down alternatives and tried a few out myself. I experimented with GHCJS (above) and UHC (here http://chrisdone.com/posts/uhc-javascript) and HJScript. Eventually, unsatisfied, I wrote the Fay compiler: https://github.com/faylang/fay/wiki which was inspired by the simplicity and small output of Roy and the FFI in UHC, and was a success (to some degree) because it was super easy to setup and its output was understandable. We're using Fay at FP Complete for the IDE, we have about 16k lines of code in Fay.

In parallel, the GHCJS codebase got a new set of very active maintainers, the Haste compiler appeared http://haste-lang.org/ and generally the community started to feel “hey, not only is compiling to JavaScript practical, but we're starting to feel this should become standard web dev in Haskell.” If you look at the crop of Haskell web frameworks (big three are Yesod, Snap, Happstack) they all support compiling via Fay and I'd expect haste, too.

I see some comments in this submission that the Haskell community is "snobbish" and "complaining". We're apparently among the people who “wish JavaScript was something else” and we shouldn't be one of those people, that we just don't really “get” JavaScript and how to write it properly. We want a car because we don't appreciate how to ride a horse properly. I'd say the opposite. Here we saw the problem (as we saw it), and started working on practical solutions and now people are getting paid to write Haskell that runs in the browser. If that's not putting money where your mouth is, I don't know what is.

Also look at the OCaml community with http://ocsigen.org/js_of_ocaml/ (also an inspiration for my endeavors) and the Clojure community with https://github.com/clojure/clojurescript They identified the same problem and got to work. :-)

Since I added that wiki article, a bunch of new compilers and languages have been added to the page (or invented). For the chorus of people asking "why isn't X listed?", it's simply because users contribute to the article; it was much smaller originally. So if you've heard of another approach, and, importantly, if you've tried it and can report on its practicality, please go ahead and add it. I think I'll add Opa to the list, it's like Ur in that you write front-end and backend code in the same language.

Pretty strange to mention TypeScript and not Dart... which solves a lot of his criticisms and is arguably more popular. I respect the functional guys a great deal but the use of Haskell for front-end dev is realistically going to only be an interesting option for those already using it for the backend. It has nothing to do with the merits of the solution and everything to do with people's comfort zones.

The page is about JS. TS is a superset of JS, and Dart is an entirely different language.

All other languages listed on that page are entirely different languages as well.

The article starts with the assertion "We need Javascript". Adding Dart to the mix makes that assertion less obvious.

It mentions Coffeescript.

I'm surprised that Haxe isn't in the list of alternatives. It's been around for quite a long time and it's very mature:


Oh, interesting. What's it like to use? Please add it to the article. I'm adding Opa now.

I'm surprised that lack of module system is the first thing brought up. Is that really such a big deal? It's very much just a nice to have for me and has plenty of third-party implementations if you want it.

It's just another annoying thing on the list of warts with Javascript that make maintaining a large codebase a pain in the arse.

Plenty of third party implementations is half the problem. Want to import a module by somebody using a different module format? More pain in the arse!

It's similar in the Scheme community. It's easy to write your own module system. So every compiler and every second Schemer has their own module system with its own quirks and assumptions.

Most large projects use AMD and it works just fine.

I take issue with the paragraph about how Javascript sucks:

* Javascript is actually untyped, not weakly typed (untyped means no type declarations, not that it has no types) and I don't see that as a problem.

* Being an entirely interpreted language, JS cannot have static type checking. This has not been an issue in my experience.

* I think the syntax argument is laughably ridiculous. Programming languages that live in glass houses shouldn't throw stones. Just today, I've seen three links to guides "to learn basic Haskell syntax", two of which contradicted each other.

* I agree with their statement on `this` behavior. This (pardon my pun) is one of the places where JS's well-intentioned DOM-manipulation features cause problems with it as a language.

> Javascript is actually untyped, not weakly typed (untyped means no type declarations, not that it has no types) and I don't see that as a problem.

That's not what any of that means.

"No type declarations" only means just that. Haskell has no obligatory type declarations due to global type inference and nobody is calling Haskell untyped.

Untyped means untyped. None of the APIs or code are built around types, structurally or by name. It's just a one big union type of possible JavaScript values all over the place even if that's not what you meant when you wrote the function.

Being interpreted has nothing to do with type-checking. Haskell's REPL is an interpreter. Using a library (Hint) you can eval arbitrary Haskell from strings in a type-safe manner.

You can even parse arbitrary data from strings in a type-safe manner without resorting to an interpreter:

    -- throws error
    read "blah" :: Int 

    -- returns 1
    read "1" :: Int
via the magical joy that is Haskell typeclasses. Trivial but still very nice stuff.

Type-checking requires statically derivable information. You won't get any of the niceness unless you design your language around it.

I don't care about syntax either. I don't think it really matters, and it takes all of an evening to get used to the syntax of any language.

Yeah, `this` is freakin' ridiculous.

> Javascript is actually untyped, not weakly typed (untyped means no type declarations, not that it has no types)

I've never really heard the term untyped before -- I guess I'm old school and still call that dynamic typed. But JavaScript is also weakly typed. It will automatically convert strings to numbers, etc. Some languages, like Python, are dynamically typed but also strongly typed: No type declarations but no automatic conversions.
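The weak/strong distinction the parent draws is easy to see directly: JavaScript's operators coerce silently, where Python would raise a TypeError for the very first expression.

```javascript
// Dynamically AND weakly typed: values carry types at runtime, but
// operators silently convert between them.
console.log("1" + 1);    // "11"   -- number coerced to string
console.log("2" * 3);    // 6      -- string coerced to number
console.log(1 == "1");   // true   -- loose equality coerces
console.log(1 === "1");  // false  -- strict equality does not
```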

It's been popping up lately. It's meant as a smear on dynamically typed languages, because dynamic has a positive connotation and untyped is decidedly negative. Pretty similar in intention to uni-typed.


Edit: Sorry, learned something new just now. I'm very, very wrong.

Untyped is where operations are valid on everything because it's all a sequence of bits.

Dynamic is completely different and refers to when the type is discovered.

Sorry. However, Javascript is definitely dynamically typed.

Uni-typed is actually a very specific technical term. Mathematically you can prove type theorems about languages regardless of whether they implement types as language features. Dynamic languages are thus modelable like this and when you do so the first approximation is to say that a dynamic language is a language with a single type (uni-typed).

The reason this is valuable is it opens up directly the techniques for "hybrid typing" such as PHP->Hack and Javascript->Typescript. It also is the basis for the compiler using type inference to accelerate certain portions of the code.

* Being an entirely interpreted language, JS cannot have static type checking. This has not been an issue in my experience.

This doesn't mean it isn't an issue, just you haven't encountered a situation where it is.

* I think the syntax argument is laughably ridiculous. Programming languages that live in glass houses shouldn't throw stones. Just today, I've seen three links to guides "to learn basic Haskell syntax", two of which contradicted each other.

And you've never seen three guides to "learn basic Javascript syntax" which contradict each other?

You can argue it both ways, as far as I am concerned. Javascript like any other language has issues, what exactly those issues are... depends greatly on what you use Javascript for.

>This doesn't mean it isn't an issue, just you haven't encountered a situation where it is.

You're right, but it seems to be fairly rare. I've only once ever seen static type checking help find bugs in rarely-used code paths, and that was a single occurrence five years ago.

>And you've never seen three guides to "learn basic Javascript syntax" which contradict each other?

No, I haven't. The only things I can think of off the top of my head that a JS newbie would find contradictory are semicolons and declaring arrays with [], both of which are used by the vast majority. I've only seen one codebase ever that used new Array(), and only a handful that didn't use semicolons unless necessary.
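For what it's worth, the `[]` vs. `new Array()` split mentioned above is mostly cosmetic, with one genuine trap in the constructor form:

```javascript
console.log([1, 2, 3]);           // [ 1, 2, 3 ]
console.log(new Array(1, 2, 3));  // [ 1, 2, 3 ] -- identical result

// With a single numeric argument, `new Array` instead creates a
// sparse array of that *length*, containing no elements at all:
console.log(new Array(3).length); // 3
console.log(0 in new Array(3));   // false -- index 0 doesn't exist
```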

>You can argue it both ways, as far as I am concerned. Javascript like any other language has issues, what exactly those issues are... depends greatly on what you use Javascript for.

Good point. Claiming Javascript is issue-free is as ridiculous as claiming that (to quote this guide) "Javascript sucks".

On the `this` feature: This is mostly a "problem" with the DOM-interface, which is not part of the language. But there is even a solution for this in JS: Object.handleEvent() -- Why is everyone complaining about the lack of a feature that is in fact already built in? Just start using it ... Also, since functions and objects are first class entities, you may just store references and make use of them as you like. (But, please, don't use "var self = this;", since `self` is already a predefined variable pointing to the global object.)

This is well put, but I can't offer the same concession about "this" ... what exactly is the problem? It provides access to the state of the object on which you are working, if utilizing OO paradigms within JavaScript. Sure "this" values when executing pure functions in some runtimes make little sense but if you understand what "this" means with respect to OO, you wouldn't be trying to access it in that situation in the first place.

> untyped means no type declarations, not that it has no types

I found this to be a very bizarre statement, given the existence of dynamic typing. For anyone else confused, this is the best discussion I could find of the issue:


The programming-language theorist's view of what a "type" is will always differ from a programmer's view. If I'm talking to a bunch of theorists, I know that types are absolutely static entities, and everything else in the dynamic world is actually tags. If I'm talking to programmers and am careful to make this distinction, they will find me overall pedantic and verbose.

So unless one is having a discussion with someone like Bob Harper, dynamic typing absolutely makes sense and is not a misnomer.

Untyped, however, is a bag of worms that you can't really get out of. To some it means no type declarations, in which case Haskell is untyped to some extent with its excellent support for type inference. To others, untyped means the complete absence of types and type checking (static or dynamic)...like doing something completely unchecked at run time with a void* in C. The views are diverse among programmers, let alone theorists! So I just avoid this term in any case.

There seem to be a few different viewpoints on the matter. This is the one I go with (as well as Brendan Eich and other high-profile JS people.)

I guess the Haskellers noticed this HN thread. One moment my comments were at 2 points, the next they are at zero. It seems to be the same for other non-anti-Javascript comments.

If you're trying to solve the perception of being snooty and unpleasant (as mentioned in this thread), downvoting everything isn't going to get you far.

Could you clarify on `this`? I find how it works very obvious and natural to me, therefore not quite sure why everyone consider it broken and not 'different'?

I assume it's referring to the fact that 'this' changes in a given piece of code depending on how it's called, whereas other vars are lexically scoped, so you are often forced to work around the issue by making your own lexically-scoped equivalent, aka the old "var self = this;" trope.

and this is perfectly fine. it allows you to do more if you know how to use it. and if you don't - it is really simple to understand. again - this is how js is different from many (all?) languages out there, but why it is bad - i am not sure. extra line of code in some cases? I don't think this is really that bad.

I won't say it's good or it's bad, but I will say that it bites me on a pretty regular basis. You start with some code that says this.something() and you decide to move it into a callback and miss converting the 'this' references to 'self' because in the previous incarnation, it didn't matter, but now you're calling out to code that helpfully fixes up the 'this' reference when calling your callback. I literally fixed one of these today.
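The failure mode described above can be sketched like this (names are made up):

```javascript
var counter = {
  count: 0,
  increment: function () { this.count += 1; },
};

counter.increment();          // fine: `this` is counter

var fn = counter.increment;   // detached, as when passed as a callback
try {
  fn();                       // `this` is now the global object (or
                              // undefined in strict mode): either a
                              // stray global gets clobbered or it throws,
                              // but counter.count is silently untouched
} catch (e) { /* TypeError under strict mode */ }

var bound = counter.increment.bind(counter); // the usual fix
bound();

console.log(counter.count);   // 2 -- the detached call never counted
```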

Are you using strict mode? It doesn't fall back to the global object as `this` (except for setTimeout/setInterval, which are specified that way) - since I've started using it, I find out pretty quickly if I was missing a `.bind(this)` when passing a callback function.

The problem appears to be that some tend to get confused about who is the caller and who is the callee. For this, please just consider that the DOM-interface is not part of the language, but just -- as the name suggests -- an interface to a separate environment. From the point of view of the DOM, the element is the callee, which applies itself to a callback function (the event-handler). Makes quite some sense.

In case you have issues with this, you may just use `Object.handleEvent()`, and the this-object will be your object (or function). Why is everyone complaining about this behavior, but no one is using the built-in feature that would be the perfect solution to the problem?
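The feature being pointed at here is the DOM's EventListener interface: `addEventListener` accepts any object with a `handleEvent` method, and inside that method `this` is the listener object itself, so no bind/self dance is needed. A sketch, using Node's built-in `EventTarget` to stand in for a DOM element:

```javascript
const target = new EventTarget();

const listener = {
  clicks: 0,
  handleEvent(event) {
    // `this` is reliably the listener object here, not the event target.
    this.clicks += 1;
  },
};

target.addEventListener("click", listener);
target.dispatchEvent(new Event("click"));
target.dispatchEvent(new Event("click"));

console.log(listener.clicks); // 2
```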

Javascript is weakly typed. And untyped does not mean no type declarations. It means no types, just like it says. Assembly is untyped. Javascript is dynamically typed. Very different. http://en.wikipedia.org/wiki/Programming_language#Typed_vers...

Javascript does not define the implementation details. It does not have to be interpreted, and in fact I don't believe any modern implementation is interpreted. Javascript is generally compiled to byte code which is then run in a VM, or native machine code (V8 at least does this). There is no reason javascript could not have a static type system, there are statically typed scripting languages. It is dynamically typed by deliberate decision, not "we can't do it any other way".

The problem you see is what I like about the language.

The javascript languages is great because I'm not forced into other peoples way of thinking. With javascript I can choose my own paradigms.

module system: You can implement your own. Node JS has one built in.

verbose syntax: Use an editor that supports macros. But compared to dot net and Java, the JS syntax is not verbose at all.

getters and setters: I actually love not having to write getters and setters every time I want to add a property.

One important part in JS though is good naming. You can't get away with naming everything x, y, z, a, b, c, etc.

Once you figure out that everything in JS is an object, it will be much easier.

Scala.JS is doing well for me.

Please add it to the article and report your findings! =)

Oh, article with attitude.
