
Semantics and configurability are what makes a language great. JS doesn’t have function environments, i.e. every non-lexical lookup goes to global/window and cannot be redirected. It doesn’t have green threads either. There are generators, but one cannot just yield without marking all the calling functions as generators too (async/await in modern terms). Stack traces are lost when generators throw(). JS has no good introspection — you can do some Reflect, but you cannot e.g. write a module with a function that, when called, enumerates all functions in the caller module and exports them by specific criteria. JS can’t properly substitute objects with proxies. It can catch simple accesses to existing keys, but can’t enumerate keys or arrays. There was handler.enumerate, but it was deprecated. Vue, which was written by no fools I think, cannot handle “app.array[3] = v” or “app.newkey = v”. It cannot dynamically watch an entire app either; all structure has to be defined before Vue().

Other problems exist which are just stupid (null/undefined, in/of, ===, ;, this, etc.) or minor design choices, like coercion and lexical rules. But the ones above are showstoppers for turning JS into something great. As it stands, it is a BASIC-level language with scoping and sugar. Yeah, you can write code like the snippet presented in the article, but you are forever stuck doing everything by hand, from asyncing to ORM, from exporting to update scheduling. I don’t find that great in any way, especially when that is the only semantics you have in a browser.




The generators or async/await in JS are 'shallow' coroutines because you can only `yield` or `await` in the direct scope of a generator or async function, but the benefit of shallowness is that the control flow is explicit; you don't have to consider whether a function call will suspend the calling context's execution or not. I find the explicit clarity to outweigh the reduced power of shallow coroutines.
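
For instance (fetchUser here is a made-up async helper):

    // A sketch of the tradeoff: every suspension point is visible in the source.
    async function update(fetchUser) {
      const user = await fetchUser();        // only an `await` can yield control
      user.lastSeen = Date.now();            // nothing can interleave between
      user.visits = (user.visits || 0) + 1;  // these two lines
    }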

As an aside, there exist green threads in JS, namely node-fibers, although it's only a Node.js extension.

You actually can reflect on module exports and dynamically change them with node modules; you can't do it with ES modules, but that has the significant advantage of enabling static analysis.

Proxies absolutely can trap the property assignments you mentioned, but Vue can't take advantage of this because Proxy can't be polyfilled in older browsers.
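
A minimal sketch (made-up names) of a set trap seeing exactly the two writes mentioned above:

    const handler = {
      set(target, key, value, receiver) {
        console.log('set', key, value);  // a framework would schedule an update here
        return Reflect.set(target, key, value, receiver);
      }
    };
    const app = new Proxy({ array: new Proxy([1, 2, 3], handler) }, handler);
    app.array[3] = 'v'; // logs: set 3 v
    app.newkey = 'v';   // logs: set newkey v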

As for the enumerate handler, it would only have worked with for-in loops, which are a legacy feature. The iteration protocols used by for-of are a much more flexible solution. It might seem silly to have both for-in and for-of loops, but the context of the language is that it can't just go and break older websites. Same goes for == and ===, etc. Linters come in very handy for dealing with this.
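
A quick sketch of the difference, including the dynamic-inheritance hazard that makes for-in unsafe:

    const arr = ['a', 'b'];
    Object.prototype.junk = 1;            // dynamic inheritance at work
    for (const k in arr) console.log(k);  // "0", "1", "junk" -- string keys,
                                          // including inherited enumerables
    for (const v of arr) console.log(v);  // "a", "b" -- just the values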

Your criticism is better than most, which usually just point out some "wat" moments with mis-features like implicit coercion, but you didn't really make a case for having to do "all things by hand" in JS.


> As an aside, there exist green threads in JS, namely node-fibers, although it's only a Node.js extension.

Aren't Web Workers real threads, and supported natively in browsers? (Haven't used them myself, maybe there's some limitation that excludes them from the criteria above...)

https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers...


Thanks for node-fibers, that is something I missed and it looks promising; I should try it server-side at least. But I'm not sure what you mean by "reflect on module exports", since there seems to be no way to enumerate all functions (in Node) except those exported by hand. I worked around it via "autoexport(module, x => eval(x))" and "// @export" tags, but it feels dirty.

I also bet that I couldn't make 'in' work for a proxy with an empty abstract target, but maybe it's just me. Btw, 'in' and 'of' are two separate iteration constructs — one iterates an object's keys and the other iterates values like array elements — something essential to metaprogramming. My whole point on in/enumerate is that it was declared legacy by someone special.

And on Vue: I didn't know that, but if Vue can't take advantage of Proxy, can I?


eval() has no legit use case in JS, and I really don't understand what the point of "// @export" would be. You can't reflect on module or function scopes, but that's a feature and gives you encapsulation.

for-in is a legacy feature; due to dynamic inheritance, it's generally not safe to use without also calling Object#hasOwnProperty() every iteration. for-of is not for "array elems", it uses the iteration protocols that are implemented for all built-in collection types, not just Array, and some DOM types, and can be implemented for any user types. Protocols are a much more flexible and clean approach to metaprogramming than overloading the for-in looping construct would be.
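
For example, a sketch of a user type opting into for-of:

    class Range {
      constructor(from, to) { this.from = from; this.to = to; }
      *[Symbol.iterator]() {                // the iteration protocol
        for (let i = this.from; i <= this.to; i++) yield i;
      }
    }
    console.log([...new Range(1, 3)]); // [1, 2, 3] -- spread uses the same protocol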

You can't use Proxy if you need to target legacy browsers like IE9, and Vue needs to, since it's about 15% of all browsers.


> You actually can reflect on module exports and dynamically change them with node modules; you can't do it with ES modules, but that has the significant advantage of enabling static analysis.

And yet even that advantage got thrown out of the language with the introduction of "import()". Apparently static analysis is a non-goal (see the discussion in [1]).
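
For illustration (the module names and loadLocale are made up):

    // Static imports are declarative, so tools can resolve them at build time.
    import { parse } from './config.js';

    // import() takes a computed specifier, which a static analyzer
    // generally cannot resolve ahead of time.
    async function loadLocale(lang) {
      return import(`./i18n/${lang}.js`);
    }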

[1]: https://github.com/tc39/proposal-dynamic-import/issues/35


Dynamic imports don't replace static imports.


I think that many of these criticisms relate to the implementation and the runtime (i.e. the browser) rather than the language itself. As a scripting language for a browser environment it's pretty good. Modern variants have lots of lovely language features, and tools like Babel mean you can use a lot of these new features without sacrificing backward compatibility.

The threadless/nonblocking model is "interesting", but in my opinion it's wholly suitable for user-oriented scripting as it forces a style of development that doesn't block.

The null/undefined thing actually makes sense to me. The concept of "Null" is a swirling vortex of uncertainty in most languages and it's nice to see it get some more nuanced treatment.

There's something about the bloody-minded pragmatism of javascript that appeals too ... it's very much a language that has evolved from the bottom up based on need, and much of it is very much community driven. You see problems solved in interesting and unusual ways that you mightn't see in a more stringently stewarded language.

I wouldn't use it for everything. I wouldn't prescribe it for beginners to programming either. But it's great for what it does. I like lots of other languages too, but I respect their applicable limitations. It's a nice language.


There are a lot of valid criticisms against JavaScript. This is a fact. It's not a great language (clearly demonstrated by the recurring discussions about whether it is good or not), but it's not terrible. I doubt anyone calls JS terrible on its own merits. There is a lot of hate against the runtime (edit: I mean the browser, not JS VMs or interpreters). The runtime enforces JS, and the runtime itself is enforced everywhere. There is also this new wave of very inexperienced developers who seem to think JS (and perhaps Ruby) is the only language. This is very similar to the hate PHP got: pragmatic choices vs. the experienced having their "I told you so" moment when they see "sql_srsly_escape_string_this_time(...".

All this makes a lot of people react emotionally and makes talking about JS itself (like many other mass-adopted technologies) very hard.

Is JS nice? Yeah, in my personal opinion - but we can all say that it is good enough, as evidenced by the ecosystem and the things people create with it.


> It's not a great language (clearly demonstrated by the discussion about it being good or not) but it's not terrible

I'm wondering what would be an example of a great language? Because we know that there are only two kinds of languages: the ones people complain about and the ones nobody uses.


I like to think of Haskell as a great language. It still has its warts (e.g. last [0,1/3..2] > 2), but if you want wart-free, there's probably nothing beyond lambda calculus.

Being the closest thing to lambda calculus with enough syntactic sugar on top to make it practical is a large part of what makes Haskell great.


The one thing that makes Haskell not great is that it is not widely intuitive.

People with mathy backgrounds that don't blink at the phrase "lambda calculus" won't consider this, but a lot of people struggle with math.

If you can't put it in the hands of a 6th grader (in the public school system, with no special tutoring) and have a reasonable chance of it being understood (n.b. I taught myself early JavaScript in the 5th grade and picked up PHP the following year), it won't ever be "great".


This is the one criticism of Haskell that in my opinion has no merit. Programming languages are not intuitive. They are a learned skill. You know the saying (sometimes said as a joke) "such-and-such language failed because it didn't have C-like syntax" -- but C-like syntax is NOT intuitive! Reading C code is a learned skill.

Maybe it could be amended to "since many programmers learned to program using languages with a syntax inspired by C, wildly different syntaxes learned at a later stage are more difficult for them", which is a more reasonable proposition. This could be fixed by teaching programmers other languages early on.

Haskell is no more or less intuitive than JavaScript. It's just different.


> Programming languages are not intuitive. They are a learned skill.

I can teach someone Python or JavaScript in a few weeks, at a casual pace, where they can accept input from STDIN or a file, do some calculations, and produce output to STDOUT or another file.

Haskell? I'd need a dedicated fucking thesaurus on-hand for them to grok the paradigm, and it would take a few months before they could achieve the same result.

I have another point here:

If the only languages that are considered great are the ones that make people feel smarter than everyone else for being able to understand them, we don't need great languages.

Most of us need easy, practical languages that help us solve problems and don't get in our way. (Most of achieving this property comes down to ecosystem rather than language design.)


> Haskell? I'd need a dedicated fucking thesaurus on-hand for them to grok the paradigm, and it would take a few months before they could achieve the same result.

This is one of those assertions people should have to demonstrate with actual experiments.

I can teach someone how to write buggy, unmaintainable code that seems to work but actually doesn't in Python and JavaScript. So? :)


> I can teach someone how to write buggy, unmaintainable code that seems to work but actually doesn't in Python and JavaScript. So? :)

Yes, because that property is totally absent from Haskell.

https://github.com/RNCryptor/rncryptor-hs/issues/2#issuecomm...

Oops. Surely the JS and Python implementations are just as bad?

https://github.com/RNCryptor/RNCryptor-python/blob/649ca23a5...

https://github.com/RNCryptor/rncryptor-js/blob/08250e00a1140...

(END SARCASM)

My point here is that, while working with a "great" and/or "better-designed" programming language can be beneficial, it's the ecosystem that really counts.


No need for sarcasm and my argument wasn't that there aren't bugs in programs written in Haskell.

My argument is that people who claim "writing Python is easier" usually ignore that it's writing buggy/throwaway code in Python that is actually easier (aka "look, I can write bugs fast"). Writing large, maintainable and bug-free code in Python is not easier than in other languages -- it's arguably harder, but since that's debatable, I won't argue it here.


Every language has its faults.

Even a "perfect" language is faulty if the barriers between its current state and mass adoption are insurmountable.

Completely reform education in this country to make purer languages like Haskell more palatable for younger generations than, say, PHP, and I'll totally be wrong in 50 or so years.


I don't think I'm arguing the perfect language exists, nor do I think a complete education reform is needed.

I'm arguing that "Python is easier" is false, simple as that.


And I'm arguing that the average newcomer (if we need a specific definition of average, how about chosen at random among low-income American sixth graders far removed from big cities like San Francisco or New York?) would have an easier time understanding Python to the level of being capable of basic file/network I/O than they would with Haskell, because Python will feel more immediately familiar: they don't even need to know what a thrice-damned monad is.


I understand that's what you're saying, and I'm saying you're wrong because:

- Beyond toy examples, writing Python isn't easier. Writing reliable, easy to maintain, bug-free Python programs is just as difficult, and your average person won't be able to do it right off the bat.

- You don't need to really know what a monad is in order to write Haskell as a beginner; that's a red herring.

If you want to argue that it's easier to write toy examples in Python, without regard for good programming practices, then... it'd still be debatable: if I remember correctly, some years ago there was a post here about someone teaching Haskell to highschoolers, to great success. They found it fun and easy.


That's a fallacy. Just because something is harder to operate doesn't mean it can't be better. For example, F1 cars vs. normal cars, or twin-propeller ships vs. single-propeller ones.


> That's a fallacy. Just because something is harder to operate doesn't mean it can't be better.

Where did I say it "can't" be better?


Have you tried it?


There is no language that is innately understood. All computer languages are learned, so when they describe "intuitive" I don't think this is what is meant. However, a large portion of early programming education does take place amongst C-style curly-brackets.

It's this learned setting for our precepts that makes other languages intuitive or not. If we were all learning fortran or pascal in college it might be different, but we're not.

That said, having C-style syntax is clearly not a pre-requisite for the success of a language as is attested by the success of Python, Ruby, various flavours of BASIC and other less loved languages like COBOL ...

But "intuitive" is very important for getting traction. Once you can "intuitively" model a problem in a language such that somebody familiar with the problem can understand what's going on, then that's intuitive. I'm talking about a different kind of intuitive here.

Most people engage with computers on imperative terms, i.e. they want to tell it to do things. Imperative languages are "intuitive" because they allow you to map out a list of instructions in order.

So for instance, when you're writing a program to make a cup of tea, you issue those steps one by one. You don't want to have to refine the model into a functional space, do a handstand, and flip the bag into a mug with your little toe while inducing a small raincloud and microwaving the drops on the way down.

Similarly, true object-oriented languages (I'm not talking about C++ or Java here, where classes are glorified structs) model how we think of information in terms of object relations.

Functional languages to me as an experienced programmer are "intuitive", but even I sometimes flinch when I'm exposed to a stack of lisp ellipses ...


I think you're understating some things. Imperative programming languages are hard beyond their initial appeal at the "this is like a recipe" level. It's debatable how people best engage with computers. There are plenty of anecdotes, if you look for them, of people failing to understand that the assignment operator in imperative languages means "place this value in this box" instead of "this means that". It's just that you (and I) are used to this learned mode of understanding.

Modern programs seldom look like a list of imperative actions anyway, beyond toy examples. They look weird and unintuitive. If you can make the leap to "this is what an actual imperative program looks like nowadays", you can make a leap to declarative/functional programs just as well. Doubly so if you don't have to unlearn years of conditioning about how programs are supposed to look :)

It currently is a self-inflicted hurdle. I don't pretend this problem doesn't exist. One way to fix it would be to start teaching programming in a different way, and with different languages.


> There are plenty of anecdotes, if you look for them, of people failing to understand that the assignment operator in imperative languages means "place this value in this box" instead of "this means that".

Are there anecdotes of non-absolute-beginners failing to understand that? For first-year students, sure. Do working software engineers have trouble with it? Do they have bugs and/or lower productivity because of it?

> If you can make the leap to "this is what an actual imperative program looks like nowadays", you can make a leap to declarative/functional programs just as well.

My own suspicion (completely unsupported by data) is that some people's brains find imperative languages to be more the way they think, and some find functional languages to fit their way of thinking better. You could run an experiment to test that - you'd take a group (call it A) of functional programmers, and a group B of imperative programmers, and measure their productivity. You'd then split the groups in half. A1 stays functional; A2 starts programming in imperative languages. B1 stays imperative; B2 goes to functional. Two years later, you measure everybody's productivity again. What I expect you'd find is that some of the people who switched (either way) increased productivity, and some declined.

What you might find is that everybody who switched from functional to imperative was less productive, but that might be because the (somewhat rare) people who are already functional programmers are almost exclusively the ones whose minds work better that way. You could fix that by starting with students or with fresh graduates, and arbitrarily assigning them to group A or group B. Then you'd have to wait a couple of years to measure their productivity the first time.

The difficulty, of course, is figuring out a way to at least somewhat objectively measure their productivity...


> Are there anecdotes of non-absolute-beginners failing to understand that? For first-year students, sure. Do working software engineers have trouble with it?

Sorry, I was unclear: I was talking about beginners. My argument was about intuitiveness and "it's easier/harder to learn programming this way". Software engineers are already down the road of "I'm used to this, therefore this is the best way" :P

If I understand you correctly, what you say is entirely possible: that some people think best one way or the other, and that there is no universal paradigm that is more intuitive.


So a language is more intuitive if it's similar to what we already know, and we already know math, and therefore "=" for assignment is not intuitive? OK, for first learning a language, I can buy that.

I don't recall ever having trouble with that myself, but that's anecdote, not data...


To be fair, I never had trouble with that either. It was just an example of something I've occasionally read about some people approaching programming languages for the first time.


> Programming languages are not intuitive. They are a learned skill.

True.

> C-like syntax is NOT intuitive! Reading C code is a learned skill.

Also true.

> Haskell is no more or less intuitive than JavaScript.

This does not follow. I have to learn the syntax of any language, true. But that doesn't mean that all languages are equally easy/hard to learn. I learned C by reading K&R over Thanksgiving weekend while in college. I understood everything except argc and argv, even though I didn't have a compiler to experiment with. I had to learn, true; I wasn't born knowing it. But I found it to be pretty intuitive to learn.

I doubt I could have understood Haskell from reading a book over a four-day weekend, without being able to experiment, no matter how good the book.

And, sure, there could be someone out there to whom Haskell syntax is intuitively obvious, and they look at C and wonder what all the crazy symbols mean. But I suspect (but cannot prove) that such people are less common than those who find C more intuitive.

TL;DR: No language is innately known. But some can still be more intuitive (for most people) than others.

Note well: I do not take any position on JS vs. Haskell as far as how intuitive they are.


It doesn't follow because it's just my opinion :)

I don't think the relative intuitiveness of Haskell vs JavaScript is a settled matter. I'm arguing one or the other may seem more intuitive because of past familiarity with similar languages/paradigms.

For example, some people -- though not in this thread, thankfully -- make much about Haskell's allegedly weird syntax and/or operators. Never mind that its syntax is not particularly large, but also there's nothing immediately intuitive about a lot of C code in comparison. What's with all those "{}" and ";" and "*" and "&"? Parsing operators, especially with parens and pointer dereferencing involved, can be difficult, even without deep nesting, and even experienced coders occasionally trip over some production C code. Yet no-one uses this as an argument for C being "too difficult" or "too unintuitive". I argue this is because C was a language they learned long ago, and it has colored their perception of what is familiar or "easy" about programming languages.


Here's a thing about mathy languages too ... I've done a bit of lisp so I'd like to think I kind of get it ... but aren't computers by their very nature "imperative"? i.e. such that C or Pascal perhaps map more naturally to the underlying system? Can these mathy languages (oriented around our rationalist perspective of the world) truly ever be sympathetic to the hardware?

What I mean is, nice as these are ... is there always going to be a bit of waste when you use them?


Depends on how you define "great". Haskell intentionally keeps itself off the mainstream. That has always been a deliberate choice. There are now already plenty of more practical functional languages, e.g. OCaml, F#, Julia, Elixir, Clojure, which are in part driven by the advancement in research brought about by Haskell, so it has been fulfilling its duty in that way.


I love functional programming, but nobody uses Haskell. Clojure, yeah.

Guy next to me at work was trying to build a web app with haskell for a hackathon, and I was blown away by how little the community had to offer for basic things he couldn't get working.

So yeah, languages people complain about, and ones nobody uses.


Haskell was the first thing that came to mind. I can show with one hand, by joining my pointer-finger and thumb, the number of people I know that use it.

It looks lovely, and it's something I'd very much like to learn some day but I can't for the life of me think of what I would use it for. Are there any killer apps out there for it?

At least with lisp, I can configure emacs ...


Well, it's a general-purpose language. Also, there's PureScript [1], a Haskell-like language that compiles to JavaScript for the browser.

[1] http://www.purescript.org/


Pandoc and Xmonad are written in Haskell.


Interesting but again, short of contributing to these projects what am I going to do with Haskell?


Haskell is used in the industry, see: https://wiki.haskell.org/Haskell_in_industry

You could start your own project. You could contribute to an existing project. You could evangelize Haskell at your job, if possible.

All of these are hard, of course. It'll be easier to use a more mainstream language. But if Haskell strikes your fancy, maybe it's worth the effort?


> maybe it's worth the effort?

Yep. Sure. Just as soon as I get done shaving this Yak ;-)


Well, people have managed to successfully use Haskell in the industry. Occasionally someone with a success story even posts here on HN.


Write software, probably.


Well ... duh.


Also git-annex!


I also like to think of Haskell as a great language. But [0,1/3..2] being a wart? Here's an old HN thread that bashes Haskell:

https://news.ycombinator.com/item?id=9434516


Hugs:

    Hugs> last [0,1/3..2] > 2
    False
    
GHCi:

    Prelude> last [0,1/3..2] > 2
    True
¯\_(ツ)_/¯


My two cents: I currently find Elixir and Julia to combine productivity and all the goodness from functional programming really well. Not sure what sort of backlash they'll get if they go more mainstream in the future (which I believe they will). I don't think it's totally hype, since I also tried my hand at Rust a lot, but I really struggled and eventually disliked it. I couldn't seem to implement any complex structure without resorting to unsafe code. Maybe I'm just a shit low-level programmer when it comes to thinking about ownership, though.


Complex data structures often need unsafe; writing them isn’t a good way to learn Rust. Most already have implementations you can just use, so it’s not something most rust programmers do often.


My response to this is usually that Ruby is a great language because it's easy to get useful work done with a large, meaningful subset of the language. You can ignore the warts just by not using them. You can't do that in JS, because the warts are so fundamental. (yes, you can be tripped up by library authors in Ruby, but there's community backpressure against providing footguns).


I don't agree. The Ruby community had a recent love affair with DSLs. Taking a peek at the rspec library source made my eyes bleed. There's also a huge preference for "magic", even outside of Rails, so much so that when you want to augment or add functionality you're supposed to monkey-patch. There's also a huge preference for "clean" syntax when it doesn't necessarily improve maintainability -- it just makes the number of odd syntax rules you have to learn and internalize more convoluted. Also, I feel the reason there's so much emphasis on tests and strict RuboCop is exactly that there are too many footguns baked into the language.

That's not to say other dynamically typed languages (including JS) or even statically typed languages are all that much better. But I would say Ruby is really showing its age with the number of warts and hacks that have accumulated.


FYI, there is a significant contingent of the Ruby community that doesn’t use rspec (for exactly the reasons you mention). Heck, the test suites for Rails and Ruby itself use minitest, not rspec. It’s hard to notice this, though, because the rspec people have a vastly larger written output.

It sounds like most of your criticisms are criticisms of dynamic languages. Ruby gives you a million and one footguns, but they are beautiful and elegant footguns.


One can make the exact same argument for JS as well - I’m not sure what distinguishing point is intended here.


The difference between the hostility to PHP and the hostility to JS is that there are many alternatives to PHP, whereas right now, if you want code to execute in the browser or on many different platforms, you don't have a choice. So whereas I would normally tell people who boo PHP to use whatever language they prefer and get over it, that's not possible with JS.


> So whereas I would normally tell people who boo PHP to use whatever language they prefer and get over it, that's not possible with JS.

Maybe 5 years ago. Nowadays, I beg to differ: https://github.com/jashkenas/coffeescript/wiki/list-of-langu...

A significant number of the choices there produce generally better-performing code than hand-written JS, and there's also WebAssembly.


While technically true, don't you still have to debug and diagnose issues from the generated JavaScript? I'd love to write front-end code in anything else, but if I have to know exactly how it gets converted to JavaScript it kind of defeats the point.


Elm stands out in this regard, as it gives fairly robust guarantees of no runtime errors, so the debugging you'll do when working with Elm will almost always be limited to its compiler or Elm Debugger. The price to pay for this is limited interoperability with JS, but it may be acceptable to trade interop for type safety, depending on the use case.

Generally, the alt-js languages provide 'source maps' so that developer tools know to map errors in the 'transpiled' code to their source, and it's possible to avoid JS to a practical degree.


Usually you get source maps, so you can debug in the source language, right from the DevTools.


Compile-to-JS exists, and there are good options out there. E.g. you can develop in Dart for web, server and mobile, and it is a solid alternative in every segment (the language and tooling are, anyway).


There may be no alternatives to JS browser runtimes & environments, but you can use one of many transpile-to-JS languages and shield yourself from most of JS's badness.


I'm not sure many people think that "Ruby is the only language", and I'm not sure what your point is there. From my impression, if somebody can do decent backend work with Ruby, then they're probably a better programmer than some JS-only newcomer. Is Ruby already becoming the new PHP? I don't think so yet, and Ruby despite its many drawbacks is still generally much better. Not to mention Node seems to be all the rage in recent years and Ruby isn't even that "popular" anymore. Though indeed I personally have always had doubts about Ruby; I haven't touched it since I started programming projects in Elixir, which I like much more.


All but maybe a small percentage of the TIOBE index would fail all or some of those requirements... So while the enthusiast side of me heartily agrees with you, the professional-programmer side of me has seen plenty of good code despite a lack of those features. Just because I know that better language features exist doesn't condemn every language lacking them as unusable.


Are you aware of ES6 Proxy? I may be mistaken, but almost every example of an issue that you have with the language seems to be resolvable using Proxy objects.

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...


Yes, and the “handler.enumerate” deprecation is stated in that MDN link. I thoroughly checked every described [un]feature in my recent research. There is some space for error ofc, but I’m aware of the most obvious things. Again, simple get/set of existing or known keys is easy, but try to proxy an arbitrary object, array or class instance and see how it falls apart. This is almost a rule in JS: every feature is somewhat mentioned in docs, on the web, in excited conversations, but in fact it is a shallow workaround that cracks under moderate pressure. So, please Vue.set(app.array, 3, v) and forget about abstracting Vue away from your logic. I suspect that this is a consequence of community-driven design.

> almost every example of an issue that you have with the language seems to be resolvable using Proxy objects

I don’t think that light threading, introspection, scoping, etc. are resolvable with proxy objects. If they are, it would be nice to know how.


We use proxies to watch models in our in-house MVC framework[1], and all those use cases are covered. You can watch setting array items by index, nested objects, etc. It did take some jumping through hoops, because the model also has to extend EventTarget, which didn't like proxies. And we had to keep track of nested objects without creating new proxies on every call, but it's still not rocket science.

Here's the code of the model implementation: https://github.com/zandaqo/compago/blob/master/src/model.js

[1]: https://github.com/zandaqo/compago
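
The general shape of that approach, as a minimal sketch (not the actual compago code; a real implementation caches the nested proxies instead of recreating them on every get):

    function watch(obj, onChange, path = '') {
      return new Proxy(obj, {
        get(target, key, receiver) {
          const value = Reflect.get(target, key, receiver);
          // wrap nested objects so writes deep in the tree are seen too
          return typeof value === 'object' && value !== null
            ? watch(value, onChange, path + '.' + String(key))
            : value;
        },
        set(target, key, value, receiver) {
          onChange(path + '.' + String(key), value); // emit a change event
          return Reflect.set(target, key, value, receiver);
        }
      });
    }

    const model = watch({ items: [1, 2], user: { name: 'a' } }, console.log);
    model.items[2] = 3;    // ".items.2" 3
    model.user.name = 'b'; // ".user.name" "b"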


> Stack traces are lost when generators throw().

V8 (and probably others) now has some special cases for async functions (and generators, I think, but those are much rarer) to show useful stack traces with the lineage of async calls. For example, this code shows the stack trace you'd expect when run in the latest Chrome or Node.js:

  async function c() { throw new Error('Some error'); }
  async function b() { await c(); }
  async function a() { await b(); }
  a();


Yeah, V8 in particular has possibly the best debugger tool available in any language ever; an easy to use UI, but still absolutely chock full of just about every useful feature you could imagine.


IMO, people saying that some language or tech is good enough seriously lack imagination. Things could be so much better.


Yes, this same lack of imagination is why kludge after kludge is piled on at so many software shops. That and an absolute dearth of people that actually take pride in their work.


Any examples?


What are great languages in your opinion?


If we’re talking about “dynamic CRUD over a socket, abstracted to death”-style tasks, and not considering minor preferences like syntax, then Python, Lua, most Lisps/Schemes, Perl. All of these allow enough meta-anything to do:

  ./file.src:
  func api_foo()
    for x in objs
      x.a = fetch(x.b)
    ui.btnok.enabled = yes
    commit()
And have foo exported as an API, and, when it is called, have all clients/servers and databases synced, validations passed, schemas updated, triggers and reactions run, errors handled, and developer errors reported.


I think Python is a terrible language. It has so little syntax that you can't tell the difference between various things. A variable declaration, a reassignment, a keyword, whatever else: they have no visual distinction from each other.

I also find that Python has reserved a whole bunch of keywords that I can't use as function names, making APIs hard to create with appropriate names. You also have to pollute your code with self everywhere.

It is also really slow, has nowhere near the number of libraries on GitHub that JavaScript has, is not a client-side language, doesn't have anything like Babel (which can allow you to do a lot more than reflection/introspection, though not exactly the same things), etc.


> I also find that python has reserved a whole bunch of keywords that I can't use as function names, making APIs hard to create with appropriate names.

Python is one of the imperative languages with the fewest keywords in existence. Compare, for example, the number of keywords in Python[1] with JavaScript[2].

Maybe you're confusing keywords with Python's built-in functions, like len() or str(); however, since they're functions, you can reassign them (not a good idea, but in a limited scope it works).

[1]: https://en.wikipedia.org/wiki/Python_syntax_and_semantics#Ke...

[2]: https://www.w3schools.com/js/js_reserved.asp

> It has so little syntax you can't tell the difference between various things. A variable declaration, a reassignment, a keyword, a whatever else, they don't have any visual distinction from each other.

The fact that you don't know the difference between keywords and built-in functions makes your argument weak. But I'll bite: if you're having difficulty distinguishing a keyword from a variable, you probably have a very bad editor. Syntax highlighting helps a lot here (as in almost every other language).

> You also have to pollute your code with self everywhere.

I really like the fact that self is explicit in Python. It makes OO patterns more explicit (there is no magic variable like self or this, just a parameter that is passed as the first argument to any class method) and it helps distinguish attributes from local variables. Really, I would go even further and make super() a method on Object, so instead of calling a magic super() I would call:

    class Test:
        def __init__(self):
            self.super().__init__(self)
Ugly? Maybe, but it is so much easier to understand what is happening.

> has no where near the number of libraries on github as javascript

Most JavaScript libraries on GitHub are pure toys/garbage, though. Python has a number of useful libraries, and if you don't agree, please give examples of areas where Python lacks a good library (I can easily give an example for JavaScript: ML and scientific computing).


> It is also really slow

1) In scientific/ML CPython libraries, the most critical parts are compiled anyway, and the core language is fast and expressive enough to provide a nice, fast interface to them, so your statement makes little sense without more context

2) Python != CPython


While I use Python for ML myself, I find it weird to say that a language isn't slow because you don't really use it anyway. It's true that scripts in Python can be fast if 99% of the executed logic is in C, but that doesn't mean the language isn't slow. As soon as your Python script needs to do anything not available in a library, you'll notice how slow it really is.

Python is really neat for quickly experimenting in a platform-independent way, but it's definitely extremely slow.


As the parent mentioned, Python != CPython. You've got PyPy, Cython, typed Cython, Nuitka, and probably some others I've forgotten about. Without knowing what you're trying to achieve and what you've tried, "extremely slow" is pretty hard to accept.


Cython is a superset of Python, while PyPy has shitty C interop, if nothing has changed since I last checked. Ergo, in a general sense, Python = CPython. If you have specific needs you may want to consider the alternatives, but saying that Cython is Python is at least shady if not completely deceptive.


That Python is slow is utterly irrelevant. You would never deploy an unoptimized Python codebase into production if you cared about performance. You would profile the code and optimize the hot paths with the appropriate technology, be it Numba's JIT, Cython, NumPy, cffi, or any of the many other ways you can easily optimize Python.


I wouldn't, but I've seen a lot of people who will. If you have code that runs, why spend extra money to optimize it? Hardware is cheap and the cloud allows you to scale as much as you want (not my opinion, obviously, but I've heard it more than once, and I'm not even directly involved in those kinds of decisions).


Clearly if there is no incentive to spend money to optimize it then it is fast enough.


Python environments and versioning are a PITA too. That's my main beef with it. Getting anyone else's Python code to run is a nightmare if they haven't documented everything; most other languages I use feel like they have some sort of default versioning built in when you start including other packages.

The difference between 'npm install' (and even an added 'gulp') and the chickens I've had to sacrifice at crossroads to get Python packages working is notable.

Oblig XKCD - https://imgs.xkcd.com/comics/python_environment.png


It's funny you should say those issues aren't as bad with Node and npm. Node's versioning is a sliding window. I tried compiling Bootstrap recently and it simply wouldn't compile because of some dependency error that didn't make sense. Apparently Node breaks things too frequently, and you can't `npm install` anything that's half a year behind the newest version. I've never had that problem with Python.


'Node breaks things too frequently' - Python 2.7 -> 3.0.

Yes, you need to get the right version of Node, just like you need the right version of Python. I've had both largely just work with directions of "Version Y.X", where Y is defined. I've also had the occasion where it still broke with a specific version where both Y and X were defined.

In general, if I have the right version of Node (and I agree, I'd prefer package.json to also indicate the version of Node that was used to initialize it), things work when installing from package.json. Things also generally just work with Python if I have the right major version of Python, and an environment file. My issue is more that the 'simple' steps a lot of people do when creating Python projects -don't- use an environment. They just use whatever is installed globally on their computer, and they pip install any dependencies, then write a readme to pip install things with, rather than lock it down with a virtual environment.


I don't know, getting people to run my Python code currently consists of "make sure you have Python 3, run `pipenv install`, run the code".


And if every Python developer, researcher using Python, etc, did that, it would be much less of a problem. The reality is, many, many don't.

It's kind of ironic, really. With the Zen of Python stating "There should be one-- and preferably only one --obvious way to do it" why is it that it's so common for people to not do the thing that makes it reasonably portable?

I mean, I currently am working with some code that, as part of its readme, has a pip install of a lib -off of master-, and yes, obviously that caused problems. Is the author not a Python developer? Well, he's a researcher. Why is the obvious thing for someone coming naively to develop in the language not building a portable environment?


> Is the author not a Python developer? Well, he's a researcher.

I think this is why the situation is bad in Python: we have too many non-programmers in the community who simply want something to work so they can get on with their lives. If they can get by by installing a bunch of libraries and running some command incantations as root, they're happy enough.

You don't have this problem with Node because the only niche where Node really matters is the Web, and WEB DEVELOPERS know that their environment should be reproducible.*

*: Even in Node this isn't really true, since we have yarn vs. npm. Ruby is way more stable thanks to Bundler.


> we have too many non-programmers in the community, that simply want something to work and get on with theirs life

The impact of those people on the ecosystem is zero, though. They don't inconvenience anyone by producing badly-written libraries; they literally do their job, produce what they want to produce, and everyone is happy. I'm not sure why they get lumped in with "this problem".

I don't know why I hear this a lot; it certainly doesn't echo my experience. I've rarely had dependency problems; even a regular requirements.txt (without pinning to specific versions) tends to work well enough on reasonably current code. Pipenv pretty much solves the problem by introducing a lockfile.


Well, it impacts the people who need their work, for example for validation of scientific experiments.

It doesn't impact the ecosystem very much, for sure, but I wouldn't say the impact is zero.


The Zen of Python does not extend past the language itself, unfortunately.


My main problem with Python: passing a temporary closure to a function is very messy. You can't do functional programming well in Python.

Also, I think scoping rules in Python are terrible. I always end up with a namespace that is a mess. And you can't do something like this in a clean way:

    counter = 0

    def add(n):
      counter += n
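      # raises UnboundLocalError when called: the assignment makes counter local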

    add(1)
    add(2)
    add(3)


> And you can't do something like this in a clean way:

That's what nonlocal is for: https://docs.python.org/3/reference/simple_stmts.html#the-no... Just stick "nonlocal counter" in your function and it works.


True. But nonlocal is relatively new.

Also, nonlocal implies that other variables are local, which is not true. E.g., the following works without the "nonlocal" keyword:

    counter = [0]

    def add(n):
      counter[0] += n

    add(1)
    add(2)
(This used to be my workaround.)


Yup, relatively new. Only around since python 3.0, for the last 12 years ;-)


nonlocal implies that you are reassigning a variable outside of the local scope.

Your example works because you are mutating an existing object, not reassigning a new one.

It's a disingenuous comparison IMO.


> My main problem with Python: passing a temporary closure to a function is very messy.

What's messy about it?


Python syntax is built around indentation. Try to properly indent a function that you pass to another function. And what if you have to pass two such functions? Where do you place the comma that separates both arguments?


You can't define a function via 'def' and pass it as an argument at the same time. You can do it via 'lambda' though (which doesn't require any indentation since it's just an expression), what's the problem with it?

    >>> (lambda f: lambda x: f(x) * 2)(lambda x: x + 1)(100)
    202


So the problem is that you can only inline expressions; you can't inline let-blocks (assignments), or multiple statements. The usefulness is very limited.


You pass functions just like any other argument:

    clos = 42

    def foo(x): return x + clos

    def bar(y): return y * clos

    higher_order(foo, bar)

What's so hard about that?


Here's a challenge: try inlining the foo and bar in the call (because this is how functions are typically composed in functional programming)

You could do this with a lambda construct in Python, but that only works if the function is a single expression; it doesn't work e.g. when you have a bunch of assignments inside the function you are inlining.


>Here's a challenge: try inlining the foo and bar in the call (because this is how functions are typically composed in functional programming)

Sure, it's ugly, but I think it's a silly challenge. Complex functions defined in the function call are unmaintainable anyway, IMO.


Explain how you can't tell the difference between a variable assignment and a keyword.


Nothing is stopping you from generating JavaScript code to do what you want. You don't need reflection to do that.

I'd hate to see Python become as popular as JavaScript and have it be used as the de facto standard in browsers, because honestly, it's nowhere near as enjoyable to use as JS.


No, Python is much more enjoyable than JS (see what happens when you turn subjective statements into objective ones?).

It feels to me like JS is more for people who want to write "clever" code and Python is for people who want to write boring, more maintainable code. It doesn't help that JS has a gentler learning curve and attracts more people who just want to get things done without thinking about whether those things should be done that way.


JavaScript is more enjoyable than Python to me. You can’t really say no to that.

IMO JavaScript is for people who want to get things done quickly and efficiently and have their program work everywhere. Python is for people who like to think they are doing things “the right way” as if there were such a thing.


> JavaScript is more enjoyable than Python to me. You can’t really say no to that.

Indeed I can't. I can say no to "JS is more enjoyable than Python", which is what you said in your upstream comment.

> JavaScript is for people who want to get things done quickly and efficiently and have their program work everywhere

If by "everywhere" you mean "in a browser", sure. I don't see how that's related to the common definition of "everywhere", though.

> Python is for people who like to think they are doing things “the right way” as if there were such a thing.

I don't know what you mean by that.


The criterion for this list of languages seems to have been "isn't JS". The story for metaprogramming in JS certainly is limited in some respects; for example, there's no operator overloading, and JS isn't homoiconic like Lisps. But between dynamic inheritance in older JS and the newer additions like proxies, symbols, accessors, the reflection API, protocols and the ability to extend the 'exotic' behavior of arrays, JS can probably still do whatever the snippet you posted is supposed to illustrate.

As a concrete example, here's a library I made that relies on symbols and dynamic inheritance to extend the built-in data types: https://github.com/slikts/symbol-land


I tried to include all my rant points in this example. E.g. what if fetch() stops being async? Then we wouldn't have to mark it like that and wouldn't have to await. Where do ui and commit live? There should be a unique environment for this specific API call or a whole subsystem. objs is something both enumerable and proxied, etc. It can be done in JS, but IRL it will be:

  ./file.js:
  async function foo(ctx) { // @export
    for (let x of ctx.objs.iterate()) {
      x.a = await fetch(ctx, x.b);
    }
    ctx.ui.btnok.enabled = true;
    ctx.commit();
  }
  autoexport(module, x => eval(x));
And more boilerplate, ceremony and FUD down the way, if you go vanilla.

(Thank you and all commenters here for sharing your code and experience. In particular, symbol-land looks very interesting and haskell-y.)


Having to explicitly mark where control flow yields to the event loop with `await` is a very small price to pay for making the code much clearer. This is why I don't recommend node-fibers or deep coroutines in general.

Destructuring would make your example look better: foo({ui, commit, objs}), and then there's no need to type out ctx. Another thing that's not needed with for-of loops is .iterate().
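
Roughly (same made-up names as in your example):

    async function foo({ objs, ui, commit }) {
      for (const x of objs) {
        x.a = await fetch(x.b);
      }
      ui.btnok.enabled = true;
      commit();
    }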

Using eval() is a very strong anti-pattern, and it's not needed there anyway. JS has the `with` statement, which allows running code with a specific context, but its use is discouraged as it's really bad for readability and hard to optimize.


I used eval as a workaround; autoexport does fs.readFile() on the module’s filename and adds all lines marked with an @export comment to the exports. Eval is the only way to get a function value from another module, since functions are local to the module context, which is implicit and only accessible through eval-ing a closure. That’s my code and my company, so I’m not pushing it on anyone now or in the future. I know how important antipatterns are in general.

Destructuring looks good here, you’re right. Actually, this thread has somewhat relaxed my JS hostility, and I now see it as an alternative with reasonable tradeoffs (though still not as a friend).


I have trouble picturing what advantage that setup would give over using ESM or node modules.

The Function constructor is a better alternative for eval(), but still only as a last resort. eval() itself has no use cases.

I find that most JS criticism is ill-informed, because people are too quick to jump to blaming the language due to its reputation. Not that I'd call JS a great language, but it has redeeming aspects.


Take a look at Common Lisp, Haskell, Smalltalk, and Prolog to see examples of great languages.


I have done JavaScript programming, but after reading this I don't feel like I know anything.



