
> It used to be that we’d get new computer languages about every generation. […] And then it kind of stopped. There are still people developing languages, but nobody cares.

I think this is false. We can see great interest in new languages, and I feel like languages like Rust and Go have managed to move the ball forward significantly for backend / system software development.

It's just that no one has been able to replicate that kind of success in the web-based frontend space.

This happened in the case of Rust and Go after LOTS of searching for better alternatives to C/C++ and later Java. There were a lot of failed attempts along the way, and some found their niche but failed to displace the older-generation languages in any meaningful way.

There has been TypeScript and Dart and transpilers that have tried to shield us from the horrors of JavaScript development, but in the end they still are too closely related to JavaScript and its runtime to truly displace it. I feel like we have an opportunity now, with WebAssembly, to move beyond these limitations.

If browsers and web standards move to a point where we can use WebAssembly as a first class citizen, without the need for JavaScript scaffolding, we could see the rise of a new crop of languages for the frontend. Perhaps even making it possible to remove JavaScript entirely and move it into a WebAssembly module eventually, to remove it as the default choice and put it on equal footing with new alternatives.

We could take the lessons we have learned from other modern languages, and apply them more natively to frontend problems and practices, with language-native tooling and a higher level of integration in those tools than we have been able to achieve with JavaScript.




> It's just that no one has been able to replicate that kind of success [of Rust's and Go's achievements] in the web-based frontend space.

While it's easy to dog on Javascript, it's also necessary to consider what Javascript does right.

The main thing that comes to mind is JS' async-everything, async-by-default, and first class async abstractions (like the Promise). Not necessarily something you want all the time, but certainly a powerful feature when it comes to IO-bound tasks and UI development. We don't give enough credit to JS for this imo since we take it for granted.

But consider what the equivalent of this (JS) would look like using WaitGroups in Go:

    const aPromise = promise()                                // starts immediately, not yet awaited
    const [b, c] = await Promise.all([promise(), promise()])  // both run concurrently
    const a = await aPromise                                  // likely already settled by now


What JavaScript does right, is to have an installed VM on basically every computer out there. If you are building a consumer focused thing, it is hard to argue for any other installation method. Especially when you consider you can remove the need to track multiple deployments in the same way, since you can basically force an update to all users.


Microsoft had VBScript installed on >90% of all browsers at one time. Google almost installed Dart by default in Chrome and then walked it back. Java was installed on all browsers for a decade, but no one wanted it. WebAssembly allows for languages like Rust to be used, but its adoption is still very niche.

JavaScript lives on. It's not just inertia. IE for example supported multiple languages at the same time. As a lingua franca, JS is simply better than the alternatives.

Python is subjectively cleaner than JS, but it isn't sufficiently better to warrant replacing JS, especially after JS started becoming more Pythonic. Lua is arguably a step backwards. Go adds a necessary compile step. Folks already gave Java a shot. C/C++ was a hard-learned lesson after ActiveX regarding web security. Perl ain't gonna be it obviously. Rust has a much too steep learning curve for the vast majority of web developers to tolerate. Ruby is slower and also not sufficiently better.

I know folks don't like to hear it, but JavaScript is nowhere near as bad as folks like to go on about it. In fact it's so flexible, an entire ecosystem rose up around it on the server side more than a decade after its client-side debut. If JS were really that bad, no one would adopt it for other areas if they didn't have to. It is familiar and gets the job done. We only highlight its shortcomings because we've had almost 30 years to pick it apart and dissect it.

Google could team up with Apple and make Swift a supported language. Within 2 years, it would be on >90% of all devices. Maybe 95%. And it wouldn't matter. Sure, a bunch of folks would use it, especially if they were Apple devs. But the vast majority would ask, "What would this new language give me that JS can't do? Is it actually worth rewriting apps and retraining my staff?" Honestly, the answer is 'no'.

Because JS really isn't bad. It has warts (though many/most of them due to the DOM rather than the language itself). It has legacy. But after 30 years, it's still doing surprisingly well on speed, flexibility, and the ability to evolve.


This is taking some liberties with "installed." Java was never meaningfully installed on all machines; getting it to work was surprisingly difficult for most users. You could maybe argue that Flash was well entrenched, and I don't think that would get too many objections. Indeed, many early Flash sites were better at interactivity than many modern sites.

VBScript, I'm almost willing to cede. That said, I don't remember it ever being a thing that websites tried to use. Even back in the days of them ripping off Sun with JScript. I'm also curious when they had 90% of the browsers with it? Would love to see a solid timeline on that.

Note, too, that I never pushed that JavaScript is bad on this. Indeed, I agree with you that it is nowhere near as bad as is often stated. What it lacks, is discipline. Which is why it seems to have near every paradigm accounted for nowadays.

That said, /if/ Google teamed up with Apple and got that pushed on all devices for native, I suspect you would see it leak into the browsers in that 2 years and that we would indeed start seeing more Swift developers at large. And a ton of "reasons you should migrate to Swift" for your websites.


You probably misremember. Java was indeed installed on effectively all browser-capable machines from 1997-200x. All you needed was an <applet> tag, not an <object> or <embed> like other plugins such as Flash. Speaking of Flash, it came preloaded for a time, but mostly rode the ActiveX wave for installs. You could not count on it being installed, though. I vividly remember the fallback markup for when it was unavailable.

Internet Explorer had 90% marketshare in the years around 2004. Netscape was dead. Mozilla/Phoenix/Firefox was a hopeful, not a contender. VBScript was everywhere IE was, and folks still preferred JS, even if their sites proudly proclaimed "Best viewed with Internet Explorer". In the late 1990s/early 2000s, MSDN was full of examples pushing VBScript. It became second nature to myself and coworkers to just reason out what the equivalent JS looked like on the fly. Microsoft absolutely tried its best to replace JS, but devs wouldn't have it, and the number of JS-powered sites was just too large for Microsoft to simply drop compatibility.

It was around that time that Microsoft stopped making updates of any kind to Internet Explorer for years. Folks today really don't comprehend the debt we hold to Mozilla for breaking out of the notion that the web was feature complete.


No, I remember quite well how that never worked as well as you'd have wanted it to. So, yes, there was an installation of Java. No, it probably didn't work correctly. Worse, it was probably not updated, with no real path on how to update for most folks.

Such that, yes it was "installed," but it was about like relying on vanilla JavaScript back then. Which you didn't do. You pulled in jquery or whatever and monkey patched some sanity into the environment. Something you couldn't do with Java.

VB had the odd curse of being VB. Everyone was certain that MS wanted it dead, and everyone also knew that if you were writing a VB application, you might as well just make it directly in Access. Which, granted, wasn't a bad solution for a lot of things.


The big difference is we’re talking about it working on many devices, plus a lot of people have very bitter memories of Windows XP deprecation breaking apps. And you seem to be contradicting your own thesis here, since people saying “this isn’t so much better to justify rewrites and retraining” is precisely inertia.


Yeah, I agree. Whether you like it is almost beside the point. And then eventually people do start to like it through familiarity and tons of investment going into it and the momentum is tough to stop.


I think async-by-default was an interesting idea and well worth trying, but I don't really think it was a good idea. Turns out that most things you want to do are synchronous, and "sync-by-default unless mentioned otherwise" makes a lot more sense. This includes I/O by the way, because most of the time you want to wait until an operation has finished.

Or to put it in another way: JavaScript makes the uncommon case easy and the common case hard(er).


I disagree. I'd take JS-style promises over trying to manage Futures in ForkJoinPools or thread pools any day. Being able to write async expressions in parallel by default means even junior devs take advantage of parallelism. I've seen plenty of code written in Java and Ruby where multiple network and DB requests are made in serial despite having no dependency on each other. The usual reason is that there's just a lot more friction to have it be parallel there.
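A minimal sketch of that friction difference, using `setTimeout`-based stand-ins (the `getUser`/`getPrefs` names are invented for illustration; imagine independent network or DB calls):

```javascript
const sleep = (ms, value) => new Promise((res) => setTimeout(() => res(value), ms));

// Hypothetical stand-ins for two independent remote calls.
const getUser = () => sleep(50, { name: "u" });
const getPrefs = () => sleep(50, { theme: "dark" });

// Serial: the second request doesn't start until the first finishes (~100ms total).
async function serial() {
  const t0 = Date.now();
  await getUser();
  await getPrefs();
  return Date.now() - t0;
}

// Concurrent: both start immediately and overlap (~50ms total).
async function concurrent() {
  const t0 = Date.now();
  await Promise.all([getUser(), getPrefs()]);
  return Date.now() - t0;
}
```

The concurrent version is barely more typing than the serial one, which is the point: the low-friction path is also the parallel-I/O path.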


I have also found Promises very hard to reason about at times: "okay, so I have some code here, and when exactly is this run?" can be a difficult question to answer with Promises. Part of that is inherent in async code, but part of that is also because IMHO Promises make it harder than it needs to be.

I never really used Java, but I have used Ruby and Python (IIRC Python's APIs were modelled on the Java ones) and I agree it can be painful. The thing is, even with an awkward async implementation it's something you have to deal with relatively infrequently when synchronous is the default (as it is in most languages). When you do it can be a pain, but I'd rather have this "occasional pain" vs. "pain every time I want to do any I/O operation".

Personally I like how Go does things.


My code is running <- no await

My code is accessing another resource (storage, network, etc.) <- await

It's really not that complicated. If you're surprised by the presence or absence of a Promise, you might want to take a moment to understand what is being processed. There's a good chance there's a gap there that extends beyond a simple keyword in JS.


> I have some code here, and when exactly is this run?

If there are `await` keywords previously in the function, then the line you're looking at will run after these async calls are done. Otherwise it'll run ASAP. Is there something else to it?
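A small self-contained sketch of that rule (the `log` array is only there to make the ordering observable):

```javascript
const log = [];

async function demo() {
  log.push("before await");   // runs synchronously, as part of the call itself
  await Promise.resolve();    // suspends here; the rest of the function becomes a microtask
  log.push("after await");    // runs once the current synchronous code has finished
}

demo();
log.push("caller continues"); // runs before "after await" is logged
// Final order: before await, caller continues, after await
```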


People often get confused because they expect `await` to sequence promise resolution too. For example:

    const example = async () => {
        const ifError = Promise.reject("something went wrong")
        const value = await someOtherPromise()
        await (valueIsOk(value) ? runNextStep(value) : ifError)
    }
will always throw.


I don't think I follow. Your example left out all the definitions of these functions, so you can't really deterministically say what will happen. If `someOtherPromise()` fulfills, `valueIsOk(value)` evaluates to `true` or truthy, and `runNextStep(value)` fulfills, `example` will fulfill and not reject. If any of those conditions don't hold, `example` settles as rejected.


The issue is that `ifError` throws whether `example` fulfills or not. Promised values are sequenced by `async`, but promise side-effects are sequenced like side effects of any other javascript statement.


Sure, ifError rejects, but I don't think the behavior here is surprising or strange at all. This is exactly how one would want it to work. If you wanted to await it, you could do that.

Is the concern you're raising that people may accidentally orphan floating promises? That can be addressed with linter rules. [1][2]

[1]: https://github.com/typescript-eslint/typescript-eslint/blob/...

[2]: https://github.com/typescript-eslint/typescript-eslint/blob/...


> but I don't think the behavior here is surprising or strange at all.

Tell that to my junior coworkers. It's probably the single most common cause of async bugs in our codebase.


Are you running those two lint rules I mentioned? They should completely remove cases of accidental floating promises.


Yes, but the floating isn't the issue: throwing in my example was just a concrete stand in for promise side effects in general. Running queries you only need in one not so common branch before the conditional gets checked, for instance.
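A concrete sketch of that pattern (all names invented for illustration): the rare-branch query's side effect fires even when the common branch is taken, because the promise is created before the conditional runs.

```javascript
// Counter just to make the side effect observable in this sketch.
let fallbackCalls = 0;

// Hypothetical stand-ins for two independent queries.
const fetchRareFallback = async () => { fallbackCalls += 1; return "fallback"; };
const fetchValue = async () => 42;
const isOk = (v) => v === 42;

async function handler() {
  const fallback = fetchRareFallback(); // the query starts NOW, unconditionally
  const value = await fetchValue();
  if (isOk(value)) return value;        // common branch: the fallback query was wasted work
  return await fallback;                // rare branch: the only place it's actually needed
}
```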


Async doesn't give you parallelism by default though, you just get concurrency. You don't get parallelism without using Workers.


Well, I wouldn't put it like that. Async can trivially cash out into parallel work if you're just sending queries to a database or ffmpeg or ImageMagick or, frankly, most common use cases, which are going to do work in parallel out of process.

All of your I/O work could even be happening in parallel in a run of the mill JS app. Workers just give you parallelism with your sync JS code which is a more narrow claim.


Fair, I should've been more careful with the way I worded that. The common example I was highlighting was concurrent network IO requests that effectively resolve to parallel work that runs on different nodes. With a service oriented architecture, this can be the norm rather than the exception.


And workers get you isolation, no shared memory. You must explicitly pass data ownership from one thread to the next. (And I consider all of that a good thing.)


async-await is not the best solution for most things, but it is usually the best solution for UI and I/O (database interactions, data-fetching, file-system, etc)

Which makes it default-best for (and I speculate here) 90% of the code that is written, because it covers client-side and server-side API layers, i.e. business logic. Like you mention, this kind of code is very rarely synchronous (in the sense that you want to block the execution thread until you get the result). A UI needs to keep responding to user input, and an API server needs to keep answering requests while waiting on I/O.

The places where it isn't good are usually the kind of software that can be packaged and reused (libraries, ie the heavy lifting software). Things like databases, image processing, neural networks, etc

Talking about JS specifically there are a few more use-cases where it isn't good even though an async-await system makes sense, like embedded low-memory environments

IMO the main problem with JS is not JS, it is the browser's DOM. It is about time we get a replacement.


"But consider something like this (JS) using WaitGroups in Go:"

Sure, though if this is something you're doing a lot of it's not hard to abstract out.

However, let me put on the other side of the balance the sheer staggering quantity of code there is out there that just "await"s everything in line as if they were writing some version of Go that required you to specify every function that could potentially be async, because Promise.all and friends are quite uncommon.

Before you jump up to protest to the contrary, go run a grep over your code base and count the "await"s that are simply awaiting the next thing in sequence versus doing something less trivial. Even those of you who write this code fluently will probably discover you have a lot of things just getting directly awaited. The exceptions, being rare, are cognitively available; the number of times you almost unthinkingly type "await" don't register because they are so common. Only a few of you doing something fairly unusual will be able to say with a straight face that you are using Promise.whatever abstractions all the time. Even fewer of you will be able to say you use all different kinds and construct your own and couldn't solve the lack of some particular abstraction in Go in just a few minutes of writing some function.


Promise.all, Promise.race, and Promise.allSettled are a non-trivial amount of my await calls. Also, while you may consider "await" noise, I consider it signal. I want to know when the execution queue has a break 100% of the time. Implicit await would make ordinary JS code a nightmare to debug.
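For instance, here's the kind of batch pattern those combinators make a one-liner (a sketch; `settleAll` is a made-up helper name, but the `status`/`value`/`reason` result shape is the standard `Promise.allSettled` API):

```javascript
// Partition a batch of independent tasks into successes and failures,
// without one rejection aborting the whole batch (as Promise.all would).
async function settleAll(tasks) {
  const results = await Promise.allSettled(tasks);
  return {
    ok: results.filter((r) => r.status === "fulfilled").map((r) => r.value),
    failed: results.filter((r) => r.status === "rejected").map((r) => r.reason),
  };
}
```

E.g. `settleAll([Promise.resolve(1), Promise.reject("boom")])` resolves to `{ ok: [1], failed: ["boom"] }`.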

Contrast this with Go, where you must synchronize shared resources. Yes, the goroutine model allows relatively simple concurrency. However, concurrency management is simply not a concern in JS-land. Yes, JS can be optimized so that more happens in parallel, but deadlocks can't happen. Multiple writers to the same object can't happen. Passing data between threads enforces ownership and visibility out of the box. JS is bulletproof from a developer standpoint, which is a boon to security and absolutely, positively required in an environment where you're executing random code from random authors on the internet.

JavaScript really doesn't get enough credit for what it's accomplished.


I agree with you that JavaScript's async abstractions are nice, but I think a lot of people will misunderstand the example you've given, as ensuring that the asynchronous value for `b` and `c` will be resolved prior to the asynchronous value of `a`.

Promises begin to resolve as soon as they are created. Therefore, assuming that the three executions of `promise()` in that code take 3 seconds, 10 seconds and 2 seconds respectively, we should expect `aPromise` to be resolved during `Promise.all([promise(), promise()])` immediately prior to `b` and `c`. The only thing is that we do not unwrap `aPromise` with `await` and point `a` towards it until 10 seconds after `await Promise.all([promise(), promise()])` was called.
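To make the timing concrete, here's a runnable sketch with shortened delays (30/100/20ms standing in for the 3/10/2 seconds above; `sleep` is a stand-in for the thread's `promise()`):

```javascript
const sleep = (ms, value) => new Promise((res) => setTimeout(() => res(value), ms));

async function main() {
  const t0 = Date.now();
  const aPromise = sleep(30, "a");  // starts ticking immediately
  const [b, c] = await Promise.all([sleep(100, "b"), sleep(20, "c")]);
  const a = await aPromise;         // already fulfilled ~70ms ago; resolves instantly
  return Date.now() - t0;           // ~100ms total: the awaits overlap, they don't add up
}
```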


Not going to comment about Go here but JavaScript code like what is written above confuses many people. `Promise`s begin to resolve immediately on creation whether or not we've attempted to unwrap their value using `await`.

Therefore, it's quite possible that the first call to `promise()` might resolve after 3 seconds, while the next two calls could take 7 seconds and 10 seconds respectively. In this case, the `Promise.all` would produce a result after 10 seconds, and then the next line `const a = await aPromise` would unwrap the value within `aPromise` immediately as it would already have been fulfilled 3 seconds into execution of the `await Promise.all([promise(), promise()])` line.




Async-in-the-interface by default was a mistake and led to a ton of pain. Actually-async-under-the-hood is fine.

See: how much JS is rightly and justifiably littered with "await" on seemingly almost every line, now that that's an option. It's downright comical to look at, and as clear a sign of mis-design in a language/runtime as you can get. Nine times out of ten (maybe more...) you just need to treat all that async stuff as synchronous, from the perspective of the current block of code. "Await" or something like it should have been the default behavior of async calls. A great deal of the mess in Javascript over the last couple decades has been due to going the other direction with that default.


> It's downright comical to look at, and as clear a sign of mis-design in a language/runtime as you can get.

That's your opinion. Some of us prefer to know when a call is I/O-constrained and when the execution queue is being interrupted. JS had fearless concurrency before it was cool.

When was the last time you heard of a JS program in a thread deadlock under load (other than a VM bug)? Never. It can get caught up in an infinite loop like any language, but that's not deadlocking. Because the language doesn't allow it. Not "makes it easier to avoid". Straight up doesn't allow it. That's no small thing, and it's not something Go can claim.


> Straight up doesn't allow it.

Right—so making the most-commonly-desired behavior (await-like behavior) default would have been a pretty big win, then, and not harmed that quality at all. Meanwhile it's only with the addition (and liberal sprinkling across most active JS codebases) of async/await that the default behavior doesn't result in ugly, awful code—wrestling with that defect is basically the story of the evolution of JS during and since the '00s. The number of person-hours lost to unwanted async-by-default semantics, writing competing libraries to work around it and make it less painful, learning those libraries, contorting code to use them just so you can (more often than not) cancel out the (perceived) effects of async behavior, learning the next set of libraries after that set's defunct, et c., must have been enormous—so incredibly large that I don't think there's any saving that decision from the "mistake" pile.

Now, it's merely funny and a little unsightly, seeing "await" rivaling, on a usage-count basis, keywords like "const" in so many JS codebases.


Straight up doesn't allow deadlocks.

> Right—so making the most-commonly-desired behavior (await-like behavior) default would have been a pretty big win, then, and not harmed that quality at all.

And you'd eliminate easy access to things like Promise.all, Promise.allSettled, and Promise.race. You'd also make it harder to see when the execution queue has a break.

If you want implicit awaits, Lua will welcome you with open arms. It is not to my liking.


localStorage is not async, and it often causes problems because of that. It seems to me what you are suggesting is implicit async (or rather automatic await insertion), which I guess could work. However, I fail to see how it would make code easier to write or understand, while it would actively increase problems due to unclear async vs sync calls. Especially now that typecheckers can detect non-awaited calls.

Explicit await is, in a sense, documentation. Sure, 90% of async functions just do things sequentially, but the awaits in there are clear signs that "hey, things won't happen instantaneously here".


also don't forget pretty much all similar event-loop systems in other languages also have explicit await calls


Personally I feel like having functions and closures as first-class objects is the killer feature. Without that, even dynamic languages feel unnecessarily restrictive once you've tried JS.


JS achieved critical mass especially with the advent of NodeJS; it'll be difficult for a competitor in the same space to take over.

But that's fine; JS is fine for what it's used for, i.e. browsers / webpages and at a stretch rich desktop UIs. I think the main issue that the author has is that for a lot of people it's become a golden hammer.

A few years ago I started applying for jobs again; what I found (but this might have been the recruiter) is that multiple companies were in the process of replacing their PHP backends with Javascript / NodeJS backends. I couldn't fathom why people were starting a new project - building the future of their company - on Node.

I mean it makes sense from a hiring point of view because JS developers are everywhere, but it doesn't make sense from a business point of view, or right-tool-for-the-job.

But maybe that's me being judgmental. I mean PHP is fine for back-end too, if that's what they were replacing.


> It's just that noone has been able to replicate that kind of success in the web-based frontend space.

TypeScript and WASM have both been developed in the last decade.

The former is amazingly successful, and the latter is seeing increasing adoption.



