Douglas Crockford on JavaScript (browsertech.com)
165 points by lrsjng on June 12, 2023 | 202 comments



> It used to be that we’d get new computer languages about every generation. […] And then it kind of stopped. There are still people developing languages, but nobody cares.

I think this is false. We can see great interest in new languages, and I feel like languages like Rust and Go have managed to move the ball forward significantly for backend / system software development.

It's just that no one has been able to replicate that kind of success in the web-based frontend space.

This happened in the case of Rust and Go after LOTS of searching for better alternatives to C/C++ and later Java. There have been a lot of failed attempts along the way, and some that found their niche but failed to displace the older-generation languages in any meaningful way.

There has been TypeScript and Dart and transpilers that have tried to shield us from the horrors of JavaScript development, but in the end they still are too closely related to JavaScript and its runtime to truly displace it. I feel like we have an opportunity now, with WebAssembly, to move beyond these limitations.

If browsers and web standards move to a point where we can use WebAssembly as a first class citizen, without the need for JavaScript scaffolding, we could see the rise of a new crop of languages for the frontend. Perhaps even making it possible to remove JavaScript entirely and move it into a WebAssembly module eventually, to remove it as the default choice and put it on equal footing with new alternatives.

We could take the lessons we have learned from other modern languages, and apply them more natively to frontend problems and practices, with language-native tooling and a higher level of integration in those tools than we have been able to achieve with JavaScript.


> It's just that no one has been able to replicate that kind of success [of Rust's and Go's achievements] in the web-based frontend space.

While it's easy to dog on Javascript, it's also necessary to consider what Javascript does right.

The main thing that comes to mind is JS' async-everything, async-by-default, and first class async abstractions (like the Promise). Not necessarily something you want all the time, but certainly a powerful feature when it comes to IO-bound tasks and UI development. We don't give enough credit to JS for this imo since we take it for granted.

But consider something like this (JS) using WaitGroups in Go:

    const aPromise = promise()
    const [b, c] = await Promise.all([promise(), promise()])
    const a = await aPromise


What JavaScript does right is having an installed VM on basically every computer out there. If you are building a consumer-focused thing, it is hard to argue for any other installation method. Especially when you consider that you can largely remove the need to track multiple deployments, since you can basically force an update to all users.


Microsoft had VBScript installed on >90% of all browsers at one time. Google almost installed Dart by default in Chrome and then walked it back. Java was installed on all browsers for a decade, but no one wanted it. WebAssembly allows for languages like Rust to be used, but its adoption is still very niche.

JavaScript lives on. It's not just inertia. IE for example supported multiple languages at the same time. As a lingua franca, JS is simply better than the alternatives.

Python is subjectively cleaner than JS, but it isn't sufficiently better to warrant replacing JS, especially after JS started becoming more Pythonic. Lua is arguably a step backwards. Go necessarily adds a compile step. Folks already gave Java a shot. C/C++ was a hard-learned lesson in web security after ActiveX. Perl ain't gonna be it, obviously. Rust has far too steep a learning curve for the vast majority of web developers to tolerate. Ruby is slower and also not sufficiently better.

I know folks don't like to hear it, but JavaScript is nowhere near as bad as folks like to go on about it. In fact it's so flexible, an entire ecosystem rose up around it on the server side more than a decade after its client-side debut. If JS were really that bad, no one would adopt it for other areas if they didn't have to. It is familiar and gets the job done. We only highlight its shortcomings because we've had almost 30 years to pick it apart and dissect it.

Google could team up with Apple and make Swift a supported language. Within 2 years, it would be on >90% of all devices. Maybe 95%. And it wouldn't matter. Sure, a bunch of folks would use it, especially if they were Apple devs. But the vast majority would ask, "What would this new language give me that JS can't do? Is it actually worth rewriting apps and retraining my staff?" Honestly, the answer is 'no'.

Because JS really isn't bad. It has warts (though many/most of them due to the DOM rather than the language). It has legacy. But after 30 years, it's still doing surprisingly well at speed, flexibility, and the ability to evolve.


This is taking some liberties with "installed." Java was never meaningfully installed on all machines; getting it to work was surprisingly difficult for most users. You could maybe argue that Flash was well entrenched, and I don't think that would get too many objections. Indeed, many early Flash sites were better at interactivity than many modern sites.

VBScript, I'm almost willing to cede. That said, I don't remember it ever being a thing that websites tried to use, even back in the days of them ripping off Sun with JScript. I'm also curious when they had 90% of the browsers with it? I would love to see a solid timeline on that.

Note, too, that I never claimed here that JavaScript is bad. Indeed, I agree with you that it is nowhere near as bad as is often stated. What it lacks is discipline, which is why it seems to have nearly every paradigm accounted for nowadays.

That said, /if/ Google teamed up with Apple and got that pushed on all devices for native, I suspect you would see it leak into the browsers in that 2 years and that we would indeed start seeing more Swift developers at large. And a ton of "reasons you should migrate to Swift" for your websites.


You probably misremember. Java was indeed installed on effectively all browser-capable machines from 1997-200x. All you needed was an <applet> tag, not an <object> or <embed> like other plugins such as Flash. Speaking of Flash, it came preloaded for a time, but mostly rode the ActiveX wave for installs. You could not count on it being installed, though. I vividly remember the fallback markup for when it was unavailable.

Internet Explorer had 90% marketshare in the years around 2004. Netscape was dead. Mozilla/Phoenix/Firefox was a hopeful, not a contender. VBScript was everywhere IE was, and folks still preferred JS, even if their sites proudly proclaimed "Best viewed with Internet Explorer". In the late 1990s/early 2000s, MSDN was full of examples pushing VBScript. It became second nature to myself and coworkers to just reason out what the equivalent JS looked like on the fly. Microsoft absolutely tried its best to replace JS, but devs wouldn't have it, and the number of JS-powered sites was just too large for Microsoft to simply drop compatibility.

It was around that time that Microsoft stopped making updates of any kind to Internet Explorer for years. Folks today really don't comprehend the debt we hold to Mozilla for breaking out of the notion that the web was feature complete.


No, I remember quite well how that never worked as well as you'd have wanted it to. So, yes, there was an installation of Java. No, it probably didn't work correctly. Worse, it was probably not updated, with no real path on how to update for most folks.

So yes, it was "installed," but it was about like relying on vanilla JavaScript back then. Which you didn't do. You pulled in jQuery or whatever and monkey-patched some sanity into the environment. Something you couldn't do with Java.

VB had the odd curse of being VB. Everyone was certain that MS wanted it dead, and everyone also knew that if you were writing a VB application, you might as well just make it directly in Access. Which, granted, wasn't a bad solution for a lot of things.


The big difference is we’re talking about it working on many devices, plus a lot of people have very bitter memories of Windows XP deprecation breaking apps. And you seem to be contradicting your own thesis here, since people saying “this isn’t so much better to justify rewrites and retraining” is precisely inertia.


Yeah, I agree. Whether you like it is almost beside the point. And then eventually people do start to like it through familiarity and tons of investment going into it and the momentum is tough to stop.


I think async-by-default was an interesting idea and well worth trying, but I don't really think it was a good idea. Turns out that most things you want to do are synchronous, and "sync-by-default unless mentioned otherwise" makes a lot more sense. This includes I/O by the way, because most of the time you want to wait until an operation has finished.

Or to put it another way: JavaScript makes the uncommon case easy and the common case hard(er).


I disagree. I'd take JS-style promises over trying to manage Futures in ForkJoinPools or thread pools any day. Being able to write async expressions in parallel by default means even junior devs take advantage of parallelism. I've seen plenty of code written in Java and Ruby where multiple network and DB requests are made in serial despite having no dependency on each other. The usual reason is that there's just a lot more friction to have it be parallel there.
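
To make the contrast concrete, a minimal sketch of serial vs. concurrent awaits (fetchUser and fetchOrders are hypothetical promise-returning calls):

    // serial: the second request doesn't start until the first finishes
    const user = await fetchUser(id)
    const orders = await fetchOrders(id)

    // both requests in flight at once; wait for both results
    const [user2, orders2] = await Promise.all([fetchUser(id), fetchOrders(id)])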


I have also found Promises very hard to reason about at times: "okay, so I have some code here, and when exactly is this run?" can be a difficult question to answer with Promises. Part of that is inherent in async code, but part of that is also because IMHO Promises make it harder than it needs to be.

I never really used Java, but I have used Ruby and Python (IIRC Python's APIs were modelled on the Java ones) and I agree it can be painful. The thing is, even with an awkward async implementation it's something you have to deal with relatively infrequently when synchronous is the default (as it is in most languages). When you do it can be a pain, but I'd rather have this "occasional pain" vs. "pain every time I want to do any I/O operation".

Personally I like how Go does things.


My code is running <- no await

My code is accessing another resource (storage, network, etc.) <- await

It's really not that complicated. If you're surprised by the presence or absence of a Promise, you might want to take a moment to understand what is being processed. There's a good chance there's a gap there that extends beyond a simple keyword in JS.
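
A rough sketch of that rule of thumb (the endpoint here is made up):

    // crossing an I/O boundary (network, storage, etc.): await
    const res = await fetch("/api/items")
    const items = await res.json()

    // plain computation on data already in memory: no await
    const total = items.reduce((sum, item) => sum + item.price, 0)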


> I have some code here, and when exactly is this run?

If there are `await` keywords previously in the function, then the line you're looking at will run after these async calls are done. Otherwise it'll run ASAP. Is there something else to it?


People often get confused because they expect `await` to sequence promise resolution too. For example

    const example = async () => {
        const ifError = Promise.reject("something went wrong")
        const value = await someOtherPromise()
        await (valueIsOk(value) ? runNextStep(value) : ifError)
    }
will always throw.


I don't think I follow. Your example left out all the definitions of these functions, so you can't really deterministically say what will happen. If `someOtherPromise()` fulfills, `valueIsOk(value)` evaluates to `true` or truthy, and `runNextStep(value)` fulfills, `example` will fulfill and not reject. If any of those conditions don't hold, `example` settles as rejected.


The issue is that `ifError` throws whether `example` fulfills or not. Promised values are sequenced by `async`, but promise side-effects are sequenced like side effects of any other javascript statement.


Sure, ifError rejects, but I don't think the behavior here is surprising or strange at all. This is exactly how one would want it to work. If you wanted to await it, you could do that.

Is the concern you're raising that people may accidentally orphan floating promises? That can be addressed with linter rules. [1][2]

[1]: https://github.com/typescript-eslint/typescript-eslint/blob/...

[2]: https://github.com/typescript-eslint/typescript-eslint/blob/...


> but I don't think the behavior here is surprising or strange at all.

Tell that to my junior coworkers. It's probably the single most common cause of async bugs in our codebase.


Are you running those two lint rules I mentioned? They should completely remove cases of accidental floating promises.


Yes, but the floating isn't the issue: the throw in my example was just a concrete stand-in for promise side effects in general. Running queries you only need in one not-so-common branch before the conditional gets checked, for instance.
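
A minimal sketch of the fix being described, reusing the names from the example upthread (runFallbackQuery is hypothetical): create the promise inside the branch that needs it, so its side effects don't run on the happy path.

    const example = async () => {
        const value = await someOtherPromise()
        if (valueIsOk(value)) {
            await runNextStep(value)
        } else {
            // the query (and its side effects) only starts if this branch is taken
            await runFallbackQuery(value)
        }
    }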


Async doesn't give you parallelism by default though, you just get concurrency. You don't get parallelism without using Workers.


Well, I wouldn't put it like that. Async can trivially cash out into parallel work if you're just sending queries to a database, ffmpeg, or ImageMagick, or, frankly, in most common use cases, which are going to do work in parallel out of process.

All of your I/O work could even be happening in parallel in a run-of-the-mill JS app. Workers just give you parallelism for your sync JS code, which is a narrower claim.


Fair, I should've been more careful with the way I worded that. The common example I was highlighting was concurrent network IO requests that effectively resolve to parallel work that runs on different nodes. With a service oriented architecture, this can be the norm rather than the exception.


And workers get you isolation, no shared memory. You must explicitly pass data ownership from one thread to the next. (And I consider all of that a good thing.)


async-await is not the best solution for most things, but it is usually the best solution for UI and I/O (database interactions, data-fetching, file-system, etc)

Which makes it default-best for (and I speculate here) 90% of the code that is written, because it covers client-side and server-side API layers, i.e. business logic. Like you mention, this kind of code is very rarely synchronous (in the sense that you want to block the execution thread until you get the result). A UI needs to keep responding to user input and an API server needs to keep answering requests while waiting on I/O.

The places where it isn't good are usually the kind of software that can be packaged and reused (libraries, ie the heavy lifting software). Things like databases, image processing, neural networks, etc

Talking about JS specifically, there are a few more use cases where it isn't good even though an async-await system makes sense, like embedded low-memory environments.

IMO the main problem with JS is not JS, it is the browser's DOM. It is about time we got a replacement.


"But consider something like this (JS) using WaitGroups in Go:"

Sure, though if this is something you're doing a lot of it's not hard to abstract out.

However, let me put on the other side of the balance the sheer staggering quantity of code there is out there that just "await"s everything in line as if they were writing some version of Go that required you to specify every function that could potentially be async, because Promise.all and friends are quite uncommon.

Before you jump up to protest to the contrary, go run a grep over your code base and count the "await"s that are simply awaiting the next thing in sequence versus doing something less trivial. Even those of you who write this code fluently will probably discover you have a lot of things just getting directly awaited. The exceptions, being rare, are cognitively available; the number of times you almost unthinkingly type "await" don't register because they are so common. Only a few of you doing something fairly unusual will be able to say with a straight face that you are using Promise.whatever abstractions all the time. Even fewer of you will be able to say you use all different kinds and construct your own and couldn't solve the lack of some particular abstraction in Go in just a few minutes of writing some function.


Promise.all, Promise.race, and Promise.allSettled are a non-trivial amount of my await calls. Also, while you may consider "await" noise, I consider it signal. I want to know when the execution queue has a break 100% of the time. Implicit await would make ordinary JS code a nightmare to debug.
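
For readers who haven't used them, a rough sketch of the distinction (fetchA and fetchB are hypothetical promise-returning calls):

    const both = await Promise.all([fetchA(), fetchB()])          // rejects as soon as either rejects
    const first = await Promise.race([fetchA(), fetchB()])        // settles with whichever settles first
    const report = await Promise.allSettled([fetchA(), fetchB()]) // never rejects; inspect each outcome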

Contrast this with Go, where you must synchronize shared resources. Yes, the goroutine model allows relatively simple concurrency. However, concurrency management is simply not a concern in JS-land. Yes, JS can be optimized so that more happens in parallel, but deadlocks can't happen. Multiple writers to the same object can't happen. Passing data between threads enforces ownership and visibility out of the box. JS is bulletproof from a developer standpoint, which is a boon to security and absolutely, positively required in an environment where you're executing random code from random authors on the internet.

JavaScript really doesn't get enough credit for what it's accomplished.


I agree with you that JavaScript's async abstractions are nice, but I think a lot of people will misunderstand the example you've given as ensuring that the asynchronous values for `b` and `c` will be resolved prior to the asynchronous value of `a`.

Promises begin to resolve as soon as they are created. Therefore, assuming that the three executions of `promise()` in that code take 3 seconds, 10 seconds and 2 seconds respectively, we should expect `aPromise` to be resolved while `await Promise.all([promise(), promise()])` is still pending. The only thing is that we do not unwrap `aPromise` with `await` and point `a` at it until 10 seconds in, once `await Promise.all([promise(), promise()])` has resolved.
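
A minimal, self-contained sketch of that timing (delay is a made-up stand-in for promise()):

    const delay = (ms, label) =>
        new Promise(resolve => setTimeout(() => resolve(label), ms))

    const aPromise = delay(3000, "a")                    // starts running immediately
    const [b, c] = await Promise.all([delay(10000, "b"), // starts now
                                      delay(2000, "c")]) // starts now
    const a = await aPromise                             // already fulfilled well before this line runs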


Not going to comment about Go here, but JavaScript code like what is written above confuses many people. `Promise`s begin to resolve immediately on creation, whether or not we've attempted to unwrap their value using `await`.

Therefore, it's quite possible that the first call to `promise()` might resolve after 3 seconds, while the next two calls could take 7 seconds and 10 seconds respectively. In this case, the `Promise.all` would produce a result after 10 seconds, and then the next line `const a = await aPromise` would unwrap the value within `aPromise` immediately as it would already have been fulfilled 3 seconds into execution of the `await Promise.all([promise(), promise()])` line.


Async in the interface by default was a mistake and led to a ton of pain. Actually-async-under-the-hood is fine.

See: how much JS is rightly and justifiably littered with "await" on seemingly almost every line, now that that's an option. It's downright comical to look at, and as clear a sign of mis-design in a language/runtime as you can get. Nine times out of ten (maybe more...) you just need to treat all that async stuff as synchronous, from the perspective of the current block of code. "Await" or something like it should have been the default behavior of async calls. A great deal of the mess in Javascript over the last couple decades has been due to going the other direction with that default.


> It's downright comical to look at, and as clear a sign of mis-design in a language/runtime as you can get.

That's your opinion. Some of us prefer to know when a call is I/O-constrained and when the execution queue is being interrupted. JS had fearless concurrency before it was cool.

When was the last time you heard of a JS program in a thread deadlock under load (other than a VM bug)? Never. It can get caught up in an infinite loop like any language, but that's not deadlocking. Because the language doesn't allow it. Not "makes it easier to avoid". Straight up doesn't allow it. That's no small thing, and it's not something Go can claim.


> Straight up doesn't allow it.

Right—so making the most-commonly-desired behavior (await-like behavior) default would have been a pretty big win, then, and not harmed that quality at all. Meanwhile it's only with the addition (and liberal sprinkling across most active JS codebases) of async/await that the default behavior doesn't result in ugly, awful code—wrestling with that defect is basically the story of the evolution of JS during and since the '00s. The number of person-hours lost to unwanted async-by-default semantics, writing competing libraries to work around it and make it less painful, learning those libraries, contorting code to use them just so you can (more often than not) cancel out the (perceived) effects of async behavior, learning the next set of libraries after that set's defunct, et c., must have been enormous—so incredibly large that I don't think there's any saving that decision from the "mistake" pile.

Now, it's merely funny and a little unsightly, seeing "await" rivaling, on a usage-count basis, keywords like "const" in so many JS codebases.


Straight up doesn't allow deadlocks.

> Right—so making the most-commonly-desired behavior (await-like behavior) default would have been a pretty big win, then, and not harmed that quality at all.

And you'd eliminate easy access to things like Promise.all, Promise.allSettled, and Promise.race. You'd also make it harder to see when the execution queue has a break.

If you want implicit awaits, Lua will welcome you with open arms. It is not to my liking.


localStorage is not async, and it often causes problems because of that. It seems to me what you are suggesting is implicit async (or rather automatic await insertion), which I guess could work. However, I fail to see how it would make code easier to write or understand, while it would actively increase problems due to unclear async vs sync calls. Especially now that typecheckers can detect non-awaited calls.

Explicit await is, in a sense, documentation. Sure, 90% of async functions just do things sequentially, but the awaits in there are clear signs that "hey, things won't happen instantaneously here".


also don't forget pretty much all similar event-loop systems in other languages also have explicit await calls


Personally I feel like having functions and closures as first-class objects is the killer feature. Without that, even dynamic languages feel unnecessarily restrictive once you've tried JS.


JS achieved critical mass especially with the advent of NodeJS; it'll be difficult for a competitor in the same space to take over.

But that's fine; JS is fine for what it's used for, i.e. browsers / webpages and at a stretch rich desktop UIs. I think the main issue that the author has is that for a lot of people it's become a golden hammer.

A few years ago I started applying for jobs again; what I found (but this might have been the recruiter) is that multiple companies were in the process of replacing their PHP backends with Javascript / NodeJS backends. I couldn't fathom why people were starting a new project - building the future of their company - on Node.

I mean it makes sense from a hiring point of view because JS developers are everywhere, but it doesn't make sense from a business point of view, or right-tool-for-the-job.

But maybe that's me being judgmental. I mean PHP is fine for back-end too, if that's what they were replacing.


> It's just that no one has been able to replicate that kind of success in the web-based frontend space.

TypeScript and WASM have both been developed in the last decade.

The former is amazingly successful, and the latter is seeing increasing adoption.


He is obviously right about the stagnation, but he does not seem to be connecting the dots (at least in this video) about why this is so - which in turn might inform us as to when to expect some change.

Languages and their tooling ecosystems express how computing is concretely embedded and used by society. People adopt the tools to get jobs and to get the job done, whatever the "job" is. In turn the available remunerative jobs fit certain business models and markets.

The ecological landscape that prevails today is largely monocultures centered around the distortion fields of a few oligopoly entities. But not exclusively so. You still have all sorts of remnants of the previous-era landscape: the enterprise world stuck in its Java coffin, the quirky projects of the Web 1.0 era still trusting PHP, etc.

Massive adoption of a fresh and "clean" new thing will only happen with the emergence of a new economic reality, expressed for example through new actors. New tools that make desirable new things possible may enable such an evolution and eventually may be synonymous with it, but these things don't happen made to order.


> quirky projects of the Web 1.0 era still trusting PHP

Disagree with the dig at PHP. PhpStorm+Psalm doesn't give you perfection, but it does give you a perfectly respectable development environment. Using PHP isn't anachronistic; these shallow dismissals are. If anyone out there hasn't seen PHP 8 and Psalm yet, it's worth a look. All languages and ecosystems have trade-offs; PHP is no exception, and it is a good fit for many scenarios.


Sorry, I didn't mean to sound dismissive (much less shallow!) about PHP. In fact I have great admiration for projects from Wikipedia to WordPress, Moodle, Matomo, you name it, before even talking about the modern PHP landscape you mention. In my book, what you achieve is far more important than how you achieve it.

Crockford is arguing about "clean starts" and that is fine, but the thrust of my comment is that this will be driven by business models, not so much by tools. Sometimes tools are enablers of new things, so there is a chicken-and-egg aspect to it.

So we have all these ecosystems that were once flourishing but are now in a stationary state because the business models are in a stationary state. There is a variety of tools that are good enough to get the current "job" done, but it's not clear what will bring us to the next phase...


Modern Java with its modern IDEs used with good coding patterns is a completely fine environment as well

I think he meant more in the sense of projects that weren't adapted to modern coding standards


While I applaud the recent-ish improvements to PHP, the biggest issue is still unresolved: the standard library is a mess. It is inconsistent as a rule -- in terms of function naming, argument order, error behaviors, etc.

I think this issue is probably intractable unless PHP wants to go the Python route and have a hard fork.


I say this as someone who hasn't professionally used PHP in about a decade, but still has a few open source projects: the arguments against PHP are silly at best, especially this particular argument. With modern IDEs it's easy to work around these quirks in the language.


OK, but are there any good arguments for PHP? Why not use something less quirky instead?


Adoption of new languages stalled out with the proliferation of installed base attached to a language. Specifically, if you are deploying anything consumer facing, it makes little sense to consider anything other than javascript for web. If you have more of a budget, you will target whatever iOS supports nowadays. Even more budget, whatever the current Android stack is.

There is some nuance, of course. But, at large, those are your options.

Similarly, if you are writing a game, whatever dev kit your target of choice supports. With a heavy bias to the asset pipeline.

So called "backend" programming is, perhaps, different. That said, you are almost certainly best to pick whatever is close to "native" where you are deploying.


"Languages and their tooling ecosystems express how computing is concretely embedded and used by society. People adopt the tools to get jobs and to get the job done, whatever the "job" is. In turn the available remunerative jobs fit certain business models and markets."

This seems like the key thing. We've gone through something similar with operating systems where there hasn't been a ton of change for a very long time now. It seems like we've just hit the same part of the lifecycle with programming languages where change happens much more slowly.


Trying not to make this an ad hominem attack, but Crockford has been a net negative to JS for 20 years now.

While people like John Resig were innovating (jquery) working with the language and around all kinds of language quirks 15 years ago, Crockford wrote his book "The good parts" that tried to write java in javascript. And probably did more to make people write bad JS code than anything else.

Then he made the mess that was YUI at yahoo with the same enterprise patterns. When that failed he went right back to these types of doomsday messages in the media every few years calling for the end of Javascript.

Meanwhile we have seen ES6/typescript/react and countless other innovations take place.

He might be right on some fronts, but at some point it's a case of put up or shut up imo. There are far better experts to talk to about the state of JS and its future.


If you think "The Good Parts" was a bad idea, I can only assume you didn't work on any large (or even small) coding projects before it was popular. It was the wild, wild west where everyone wrote JS without learning the language and abused/misused the worst features it had to offer (actually, that sounds like a lot of devs today. Some things never change...)

> Crockford wrote his book "The good parts" that tried to write java in javascript.

This is particularly untrue. At that time (look up his talks), he was always vocal about JS prototypal design being bad BECAUSE it tried to be like Java. He pushed for things like `Object.create()` in ES5 (and opposed the ES6 Java-style class syntax) because it is much closer to how JS prototypes actually work.
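
A tiny sketch of the distinction he was drawing (names are illustrative): Object.create links objects directly, while the later class syntax is sugar over the same prototype mechanism.

    // prototypal style: objects delegate directly to other objects
    const greeter = { greet() { return "hi, " + this.name } }
    const alice = Object.create(greeter)
    alice.name = "Alice"
    alice.greet() // "hi, Alice"

    // ES6 class syntax: sugar over the same prototype chain
    class Person {
        constructor(name) { this.name = name }
        greet() { return "hi, " + this.name }
    }
    new Person("Bob").greet() // "hi, Bob"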

He also pushed for closures and higher-order functions as the right way to code JS -- something Java wasn't even capable of. He was very much in favor of things like map/filter/reduce which are functional rather than class-oriented.

There's a LOT of stuff you can criticize about Crockford, but "The Good Parts" certainly isn't one of those things.


What? The man who created JSON was writing enterprise patterns? When, how?

This the same man who wrote this?

> JavaScript has a syntactic similarity to Java, much as Java has to C. But it is no more a subset of Java than Java is a subset of C. It is better than Java in the applications that Java (fka Oak) was originally intended for. ...

> This is misleading because JavaScript has more in common with functional languages like Lisp or Scheme than with C or Java.

https://www.crockford.com/javascript/javascript.html

> We would be making a tragic mistake if we didn’t retain the language’s simplicity. Most of the modifications I would like to make in the language would be to make it even simpler. There’s some cruft on it, and there are some attractive nuisances in it, which we don’t need but which people become dependent on. We’d be better off without that.

https://www.sitepoint.com/interview-doug-crockford/

> A lot of JavaScript’s critics want to go back in the other direction, and make it more Java-like, but I think that would be a bad thing because it would alienate most web developers. So, I’d rather go in the other direction and train our web developers how to be programmers, how to be computer scientists, because you can in this language.

This the one? Writes Java in Javascript you say.


> Crockford wrote his book "The good parts" that tried to write java in javascript

This is a weird statement because Crockford is known for actively recommending against writing JavaScript like Java, and goes as far as saying JS is best used when you avoid classes and `this`.


I think this must be a reaction to his generally conservative, “avoid using these features and write wordier code” views.


I do kinda see why Crockford does all of that. Crockford, before the JSON fame, worked on the distributed language E and believes that "the next paradigm will be globally distributed, secure, eventual programming" [1], so all existing languages are old and JS happened to be the best transitional language among them. Assuming this is a sincere belief, Crockford should've tried to steer JS to suit this vision, and if that's not possible, to throw JS away and recommend other languages or make a new one. Having yet to see the latter, I had a strong suspicion that Crockford is not as sincere as Crockford wants to be seen. Maybe Crockford also realized that, or I was premature in thinking that.

[1] https://howjavascriptworks.com/sample.html#0


> Meanwhile we have seen ES6/typescript/react and countless other innovations take place.

I don't see these as innovations but as workarounds to, or things built upon, the poor foundations that the author mentions. They wouldn't need to exist if the foundations were solid, or at least not in the states they are in today.


On the other hand, all the tools built on top of JavaScript could just as easily be explained by solid foundations that are easy to target with higher levels of abstraction, rather than being symptoms of the foundation being rotten.

It makes more sense to ask "compared to what?" and look at what's going on in the lateral client application platform space.

On iOS, it's very hard to target the foundation with higher-level competing abstractions, not because the foundation nails it but because the foundation isn't built on simple primitives that are easy to target. You have one blessed way of building iOS apps (UIKit) that's then, over the period of a decade+, slowly replaced by the next blessed way to build iOS apps (SwiftUI). And you generally have to wait for Apple to build APIs that you need because you're not given solid building blocks to do things yourself.

Most discussion around web client development takes place in a vacuum where we look at it and go "well it could be better" (or "it's a dumpster fire"). But that's either a trivial or meaningless statement in isolation. It's more interesting to at least compare the state of web clients with what is the cutting edge state of the art across all client development. When you do that, I don't see how most HN claims about the dire state of JS actually hold up.


> could just as easily be explained by solid foundations

No, they couldn't. There were two options for the web: build on JS or find a different job.


> Crockford wrote his book "The good parts" that tried to write java in javascript

Oh wow, five different people have already responded to challenge and correct this misapprehension.

What made you write this particular point? Crockford has consistently criticized Java for its boilerplate as well as criticizing the Java-isms in Javascript like the Math namespace/object.


In what way was The Good Parts trying to write Java in JavaScript?


> Crockford has been a net negative to JS for 20 years now.

I'd say JSON is pretty successful as a protocol, even beyond JavaScript.

Also The Good Parts is probably as important now as it was when it was published. The language hasn't improved — it's just grown.


JSON is a protocol? I'm not sure I follow.

That said, JSON is still an interesting study. Basically, "take the object literal syntax of javascript, get rid of functions, require double quotes, don't recognize comments." It is good to get folks to stop using 'eval' to pull some data into the browser, though it is a shame that couldn't have been preserved a bit more safely.

And, of course, json-lines and then the steady expanse of json-schema is looking to be hilarious. How long until we add in namespaces to the idea? :D


Ok, perhaps not strictly a protocol, but a data format. I'm using the word a little more loosely than some dictionaries suggest, though it appears to comport with this definition published on Cloudflare.

> In networking, a protocol is a set of rules for formatting and processing data. Network protocols are like a common language for computers. The computers within a network may use vastly different software and hardware; however, the use of protocols enables them to communicate with each other regardless.

https://www.cloudflare.com/learning/network-layer/what-is-a-...


Protocol typically implies a conversation. Expectations and messaging.

That said, synonyms all over the place. So, I can see it. I think I would avoid it here, as this would also make "mp3" a protocol. Data format already covers that, though. Whereas something like WebRTC seems more in the protocol vein.


I think many people think of JSON as a protocol because it's almost always used in the context of HTTP, which is a protocol. They're just combining the data format and the protocol when they say "JSON".


I mean, sure? But I don't think we'd accept people calling HTML a protocol, would we?


You should have a look at JSON-LD: a major part of it is adding namespaces ("prefixes") and a default namespace ("@vocab") to JSON: https://www.w3.org/TR/json-ld11/#compact-iris
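
Roughly what that looks like in practice (the "ex" prefix and the example URL are made up for illustration): plain keys expand against "@vocab", while "ex:internalId" is a compact IRI using the declared prefix.

    {
      "@context": {
        "@vocab": "https://schema.org/",
        "ex": "https://example.com/vocab#"
      },
      "name": "Alice",
      "ex:internalId": 42
    }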


Oh wow. At what point do we just switch back to XML?


Oh wow. This is the guy who did YUI? That was the most abjectly horrifying thing I've ever been forced to deal with (on one random client project) in the history of web development. Just the utter peak of esoteric artificially complex inelegant developer slop.


YUI 2 was very bad and java-like.

But YUI 3 was ahead of its time. The dependency management and lazy loading in YUI 3 would not make it to the mainstream for another 10 years.


"Javascript the good parts" is explicitly about avoiding Java-like programming in JS. It is about writing JavaScript like it's a bad implementation of Scheme (which is closer to reality than a version of Java).


At the time - we're talking 2005-2008 - Crockford made a few net-positive contributions to JavaScript and the web:

1. He educated people about JavaScript the language and made it accessible to a large body of programmers as "a real language". Before him a lot of JS knowledge was obscure, and people were doing all JS programming by sharing random snippets here and there. Crockford showed how to build modular systems in JavaScript, how to avoid common pitfalls, how to reason about JS object model, DOM APIs and build larger systems.

John Resig was doing lord's work with jQuery, but his JavaScript book came out way too late to make a big difference. "Good Parts" on the other hand helped growing a whole generation of JS engineers. There were other books, too, that helped immensely, including John's book and especially Marijn Haverbeke's book, which eventually became a somewhat spiritual successor, and was "The Book" for JavaScript for about a decade.

2. He created the first linter for JavaScript and popularized the notion of using static analysis as part of the JavaScript development process. For many years JSLint was the only game in town. And, on a larger level, JSLint introduced the whole notion of having a build / release process to the JavaScript world. People joke about node_modules a lot, but before JSLint most teams simply FTP-ed sources to servers or were editing them live over SSH. Web programming was not "real" programming, and Crockford helped change this perception.

3. Same goes for minification - he was the first one to talk about JavaScript execution as part of the browser performance story, and he created JSMin, the first minifier.

4. The JSON format was largely responsible for ending XML dominance and made adopting new languages for server-side development easier. I still remember times when I was forced to only use Java because "the project required XML".

5. There's an argument that his "sticking to good parts" idea was harmful. He almost single-handedly killed the ES4 proposal that would have added classes, modules, and other cool additions to JavaScript around 2006-2009. But here's the thing: ES4 was a hot mess of seemingly unrelated proposals and features that played poorly together. Instead we spent extra years making the foundation of the language better, and then we still got all these and many other features through ES6 and further standards. Like many, I was pissed about ES4 at the time, but seeing where we are now with the language I believe he and Chris Wilson of IE did the right thing with "Harmony".

He was at the right place at the right time. Yahoo circa 2005-2010 did groundbreaking work in making frontend development a separate, proper discipline. They _invented_ the term "Frontend Development", they built tools for analyzing JS performance, and they talked about coordinating work in larger teams and structuring large projects. Before them there were individual groups at a few large companies building large web applications (Gmail and Google Maps were the primary examples at the time), but the knowledge of how to do that remained hidden, tribal, and fragmented. The YUI group aggregated it, published it and promoted it, making it available to the larger developer community.

They had a messy transition from YUI2 to 3, but the latter was way ahead of its time. Crockford was not the primary author of the library, btw. The team was led by Eric Miraglia with significant contributions from Nate Koechley, Dav Glass, Ryan Grove, Nicholas C. Zakas, etc. For many years it was one of the most popular libraries for building web applications in larger teams. It was eventually superseded by the arrival of frameworks like Backbone, Angular, React, etc., but for a period between 2006 and 2010 YUI, along with Dojo, essentially acted as frameworks for us.

I agree, Crockford tends to sound like a naysayer, partially because his other ideas about compartmentalizing code and building more robust security into the language didn't pan out as he wanted, and things like Content Security Policy largely remain underutilized. But his work was important, his accomplishments are undeniable, and his contributions were beneficial to the JavaScript community at large.


Not only JavaScript, but pretty much all server web programming stacks: Ruby on Rails and similar frameworks wouldn't be nearly as popular if we were still using XML as the data exchange format.


I never really understood the need for "enterprise patterns". Usually it's the same spaghetti code as without them, just wrapped in "pretty" patterns, with a lot of boilerplate code. In the end you just have more code, and more code to maintain. Which is usually more effort.

Okay, you can draw "pretty" UML diagrams for your code. But that makes it even more effort to maintain.


>He might be right on some fronts,

JS was designed in about ten days. He's completely right, it's just poorly designed. Javascript will always be around because of technical debt and habit. But that doesn't change the fact that Douglas is right.

Also nobody really writes plain JavaScript anymore. It's completely insane: we compile TypeScript into JavaScript, then run the JavaScript. That should tell you something about JavaScript.


TS is literally JS with a few unsound type hints. If you're writing TS, you are already writing JS.


> a few unsound type hints

Which turns out to be exactly what you need to be productive writing software. People who obsess about type systems and try to solve everything within them create a whole new bag of problems.


StandardML is a counter-example. It has sound types but keeps them utilitarian and simple enough so people can’t easily overdo the typing. I’ve become very sick of the “type palaces” people construct (they remind me of “enterprise java” where people do stupidly complex stuff because they can).


I mean I could say C++ is just python with memory management macros.

We're getting into the domain of opinion here. I would say TS is different enough such that it's two different languages.

Programming with type checking and programming with plain JS is really different. A programmer without experience in types (ie plain JS) won't be able to pick up types all that quickly on average. Things like enums, Generics, sum types, product types, recursive types, really change the game by restricting what functions can do.


Typescript's devs define it as simply JS with types, to the point that they even have a proposal to add those type constructs into JS, at which point the syntactic difference would be very small indeed.

> Things like enums, Generics, sum types, product types, recursive types, really change the game by restricting what functions can do.

If only that were true. The reality is that if you can write it in JS, you can add TS types no matter how horrible or anti-pattern the code happens to be. The constructs you mention don't restrict what functions can do in any way at all.

> A programmer without experience in types (ie plain JS) won't be able to pick up types all that quickly on average.

The average JS dev seems to have a Java/C# background where types exist. Further, they seem bent on slowly transforming JS into one of those languages.


Is he talking about the old < ES5 javascript or the new ES6+ with WebAPI standards?

Because to me, it's quite remarkable how JavaScript has evolved since < ES5. ES6+ introduced a slew of features like arrow functions, template literals, async/await, destructuring, classes, enhanced object literals and native modules. It's way more developer friendly when writing new stuff than it used to be.

Not to mention, the integration with Web APIs has been a game changer. Fetch API, WebSockets, Web Storage, WebRTC, Service Workers and WebAssembly really enable a lot of functionality that's all easy to use and very fast. TypeScript also helps with gradual typing; together with syntax highlighting and lookups, it's superb for avoiding unexpected mutations and makes refactoring a lot easier.

Also, what does he suggest would be the replacement? Because it's not enough to replace the scripting language; it would need to fit into the DOM, HTML and CSS too.


> ES6+ introduced a slew of features like arrow functions, template literals, async/await, destructuring, classes, enhanced object literals and native modules.

This is the problem.

All of this has made the language worse. Just accreting features doesn't make the foundation less broken.


> All of this has made the language worse. Just accreting features doesn't make the foundation less broken.

I see this view a lot - what I rarely ever see is a concrete discussion about what exactly is wrong with the "foundation" of javascript.

Because to me... Javascript is actually a decent-ish solution to the UI space (it nicely balances reactivity and code complexity by presenting an event driven, single threaded context)

It has the same warts that literally every production language accumulates: some operators are overloaded in ways that make for wacky edge cases, and some features that were introduced were not great, so they still exist but mostly gather dust.

JS also has some really incredible work put into it -

---

> ES6+ introduced a slew of features like arrow functions, template literals, async/await, destructuring, classes, enhanced object literals and native modules.

This is the problem.

---

Frankly - I understand that increasing language complexity is sometimes not the right call - but I think the vast majority of the features you just poo-pooed are pretty damn nice. I don't even mind the classes - just because it makes Crockford and the other enterprise java folks shut up (otherwise - I sort of mind the classes, at least when not using TS).

What I do find particularly impressive is how flexible JS is, and how much support can be added without actually changing the runtime - that's not something all that many enterprise languages can attest to, and it's that same flexibility and simple extensibility that has allowed JS to continue to grow.

Following on: JS (and browsers in general) are actually one of the absolute wins for software freedom (free as in freedom, not as in beer) because absolutely everything is shipped to you in inspectable and modifiable payloads. I can and have edited JS/html/css to make broken sites work.

My single biggest complaint about WASM is that we lose this property for websites, and I think that's a pretty huge fucking loss.


> I don't even mind the classes - just because it makes Crockford and the other enterprise java folks shut up

This is the second time I've seen in this thread that people lump Crockford in with Java enterprise folks, but Crockford routinely says `class` was the worst addition to JS.

> What I do find particularly impressive is how flexible JS is, and how much support can be added without actually changing the runtime

Funny enough, this sounds a lot like something Crockford would say.

I think Crockford's main gripe with JS these days is that TC39 is more concerned with adding superfluous features than cleaning up footguns. There is a video from 2018 where he goes through some of the more popular ES6 features and mentioned which ones he likes/dislikes. [1]

[1] https://www.youtube.com/watch?v=XFTOG895C7c


> This is the second time I've seen in this thread that people lump Crockford in with Java enterprise folks, but Crockford routinely says `class` was the worst addition to JS.

Because Crockford was one of the people advocating for a particular style of initialization of objects that mirrored classes, but was not directly a class before classes existed in JS. (see: https://crockford.com/javascript/inheritance.html)

It is utterly enterprise and classlike in nature though, and not my cup of tea. Mainly - I just wanted enterprise folks to stop trying to re-invent classes in the language, and the class keyword stopped that behavior.

Big net win for the language in my opinion - even though I personally still don't use classes all that often.

---

His modern take is fairly reasonable, though.


His modern take, advocating prototypes, has been around since 2006 (https://crockford.com/javascript/prototypal.html).


I know - I just keep getting older... hard to believe that was nearly 20 years back now, and not 20 months.


I might not have the best take since I don't have a huge grudge against JavaScript, but I definitely think there are some concrete problems in its foundations.

Specifically, JS made some very unusual choices around basic syntax and operators that are at this stage impossible to undo without breaking an insane amount of existing code. For some examples:

- Use of == does not do what it does in almost every other language, but it won't flag an error if a user thinks it does; instead, very difficult-to-track bugs get introduced (a few concrete examples at the end of this comment).

- Accessing nonexistent object keys won't flag an error but returns `undefined`. I've seen this lead to weird errors that are hard to find a lot of times.

- Duck typing in a lot of operators like + and - can create unexpected results.

At a high level, I think these are all choices to bake "truthiness" into fundamental operations, which often ends up masking errors. They could pretty easily be fixed if JS, with those quirks present, weren't such a massive foundation for so much existing stuff.

I obviously can't speak for anyone else, but I think those are the kind of things most people are referring to when they talk about js having problematic foundations.
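
To make the == point concrete, a few results that are easy to verify in any JS console:

    0 == ""    // true
    0 == "0"   // true
    "" == "0"  // false (so == isn't even transitive)
    0 === ""   // false (strict equality behaves the way most languages expect)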


But none of those are actual issues with shops that are writing JS today (Literally: Zero items on that list).

> Use of == does not do what it does in almost every other language, but it won't flag an error if a user thinks it does; instead, very difficult-to-track bugs get introduced.

"==" does basically what it should... (I also avoid it, but it makes a perverted kind of sense for easing into programming) and the bugs introduced aren't difficult to track at all: literally find and replace "==" with "===" resolve places that became too strict, then make it a lint rule.

>Accessing nonexistent object keys won't flag an error but returns `undefined`. I've seen this lead to weird errors that are hard to find a lot of times.

This... is basic dictionary behavior across SO MANY LANGUAGES. Frankly - having worked with languages that make a missing key an exception (looking at you C#/C++) I'd take the JS route any day.

> - Duck typing in a lot of operators like + and - can create unexpected results.

Yes. This is programming. Have you seen Ruby (or Python - or, god help you, custom operator implementations in C++)? Because holy fuck is JS reasonable as all get-out when compared to some other popular languages.

----

> I obviously can't speak for anyone else, but I think those are the kind of things most people are referring to when they talk about js having problematic foundations.

Basically - Look: I agree JS has some warts. Literally every language does. I just really don't see those warts as deserving of FUD around foundational problems that so many people talk about.


These are problems easily solved with tooling today. ESLint + TypeScript literally addresses every example you raised. Static analysis works pretty well these days.


I agree but I guess we differ in that I wouldn't consider Typescript JavaScript but instead a very closely coupled but different language.


>I see this view a lot - what I rarely ever see is a concrete discussion about what exactly is wrong with the "foundation" of javascript.

The things that were added mostly were to fix problems of JS:

arrow functions => because of "this" shenanigans

template literals => I'd say a basic part of a language

async/await => Promise/future hell (see the sketch after this list)

destructuring, classes, enhanced object literals => These are all syntactic sugar/nice to haves. Nothing really was broken here.

native modules => Another time I'd say that it's a basic part of a language
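
On the async/await entry, a small before/after sketch (loadUser, loadPosts, render and showError are hypothetical):

    // pre-ES2017: chained .then callbacks
    loadUser(id)
        .then(user => loadPosts(user))
        .then(posts => render(posts))
        .catch(showError)

    // with async/await: reads like synchronous code
    try {
        const user = await loadUser(id)
        const posts = await loadPosts(user)
        render(posts)
    } catch (err) {
        showError(err)
    }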


> All of this has made the language worse. Just accreting features doesn't make the foundation less broken.

Nope, most of these have made the language better. Optional chaining is better than a litany of `&&`, `...` is generally better than `Object.assign` or `concat`, `async/await` are generally better than callbacks... The list goes on. Rejecting these new features wouldn't have magically improved the foundation.
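
Concretely, the kind of comparison being made (user and defaults are placeholder objects):

    // optional chaining vs. a chain of && guards
    const city = user && user.address && user.address.city
    const city2 = user?.address?.city

    // spread vs. Object.assign / concat
    const merged = Object.assign({}, defaults, user)
    const merged2 = { ...defaults, ...user }
    const combined = [1, 2].concat([3, 4])
    const combined2 = [1, 2, ...[3, 4]]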


What exactly is "broken" about the foundation of JS? I hear this all the time and people can never back it up.

It is extremely performant for a scripting language, it is easy to debug, and it works in many scenarios.

People try too hard to be contrarian. Same thing happened with PHP and many other languages.


Have you ever read JavaScript: The Good Parts? It does a great job of describing lots of broken bits.

More specifically, a few things that I think are pretty bad off the top of my head:

- Implicit type coercion

- Confusing scope binding, i.e., `this`, `bind`, etc.

- Inconsistent Array API — some methods return a new value; some methods mutate the value![0]

- `['1', '7', '11'].map(parseInt)` …lol?!

Maybe you shouldn't be so quick to jump to this conclusion that any criticism of your pet technology comes from a place of ignorance or pretension.

[0]: I've actually spoken with Brendan Eich about this one, and he conceded it wasn't a good idea. IIRC, he was just copying what Perl did.


> Maybe you shouldn't be so quick to jump to this conclusion that any criticism of your pet technology comes from a place of ignorance or pretension.

Maybe your argument just isn't that great?

I mean - look, I've worked in a LOT of languages now in the 25 years I've been writing code. Js is certainly no bastion of language perfection, but it's also sure as fuck not on shaky foundations.

Almost all of your criticisms basically boil down to: JS won't break backwards compatibility for me! WAHHHHH!

Because none of your examples are really issues:

- Implicit type coercion.

happens in a lot of languages - keep a table around if you need it.

- Confusing scope binding, i.e., `this`, `bind`, etc.

this is literally core to the language - it's not any more confusing than learning the difference between a class definition and an object instance.

- Inconsistent Array API — some methods return a new value; some methods mutate the value![0].

This is a fair complaint - but all the functions that mutate have non-mutating versions now - JS just won't break compatibility for you by removing the old ones... "WAH!"
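
For the record, a sketch of the copy-returning counterparts (these are ES2023 "change array by copy" methods, so availability depends on the runtime):

    const xs = [3, 1, 2];
    xs.toSorted();    // [1, 2, 3] -- xs untouched
    xs.toReversed();  // [2, 1, 3] -- xs untouched
    xs.with(0, 9);    // [9, 1, 2] -- xs untouched
    xs.sort();        // the old in-place version still mutates xs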

- `['1', '7', '11'].map(parseInt)` …lol?!

You know damn well what you're doing Mr radix. Variadic languages have some edge cases. If you don't like it, use Number like a sane person:

    ['1', '7', '11'].map(Number)

ParseInt is usually not what you want, but again... JS won't break compatibility for you - that doesn't mean the foundation is shaky...

It just means that some parts are older than others.
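
Spelled out, since the gotcha genuinely isn't obvious the first time: map passes the element index as parseInt's radix argument.

    ['1', '7', '11'].map(parseInt);
    // -> [1, NaN, 3], because map calls parseInt(value, index):
    //    parseInt('1', 0)  -> 1   (radix 0 means "auto-detect")
    //    parseInt('7', 1)  -> NaN (radix 1 is invalid)
    //    parseInt('11', 2) -> 3   ('11' read as binary)
    ['1', '7', '11'].map(Number);                // -> [1, 7, 11]
    ['1', '7', '11'].map(s => parseInt(s, 10));  // -> [1, 7, 11]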


And on the seventh day He finished his work that he had done, and he rested on the seventh day from all his work that he had done.


How performant a scripting language ends up being is partly due to a couple of fundamental design choices, and partly just due to investment.

The fact JS has very limited interaction between threads makes a JIT much easier to write.

In terms of design there’s a lot of annoying things round equality, operators, and coercion which makes everything just that little bit harder to keep in your head. The standard library also has a bunch of inconsistencies in the way it works. I work on a JS implementation every day and I’m always referring to the spec to double check odd corner cases.

I think the choice of double as the only original numeric type has caused a lot of confusion too, even though I’m not in the camp that says you should never use floats.


> The fact JS has very limited interaction between threads makes a JIT much easier to write.

And easier for developers to write and reason about. This is no small thing. Re-entrant code by default (and basically by mandate) eliminated deadlocks in even a beginner's code. Do you understand how amazing it is to have a language that encourages concurrency and allows parallelism without any fear of deadlocks or explicit atomicity handling?


You can absolutely have concurrency bugs in async JavaScript code, don’t kid yourself that you can’t, or even that it’s hard to do.

The reason I mentioned the concurrency model making the JIT easier is that the concurrency model makes it easier to manage the replacement of methods on the stack when an assumption has been invalidated, and this helps with the speed of the compiled code. The same also applied to various languages with green threads or global locks, but outside the JS world being able to use multiple cores has proved more useful than the difference in single threaded code.


Never said you couldn't have concurrency bugs.

I specifically mentioned thread deadlocks and atomicity. You cannot take locks, so you cannot take them out of order. JavaScript has no notion of an "atomic integer increment" since all variable access is effectively atomic/single-threaded.

That is not the same as the straw man claim you asserted for me.

And I reiterate: having no deadlock or atomicity concerns greatly aids developers in writing secure and robust code. It does not eliminate all bugs, concurrent or otherwise, nor did I ever claim it did.

If you can find a code example in browser-based JS that triggers a deadlock, I'd be fascinated to learn about it.


> Just accreting features doesn't make the foundation less broken.

That's indeed the gist of Douglas' remark.

But if you don't have much experience beyond JS, that is very hard to recognize.

Douglas is grey and old. Most JS devs are on the younger side of the IT workforce.


> "[...] we are crushing ourselves with the accumulated complexity we’ve piled on top of bad foundations [...]"

The same could be said about plain HTML/CSS though. I think the author is correct overall and I don't really see improvement on the horizon. WebAssembly, while great that it exists, can morph browsers into some poor man's virtual operating system, and this can lead to a less open web.

We already see more closed platforms like Discord. I use it too, the product is completely fine, but it sucks in information that isn't available on the open net if you don't go through the platform. Some communities exclusively use it and they get little discoverability through web searches. Sure, this isn't really related to JavaScript/WebAssembly, just a development I fear will increase and which could be accelerated with different approaches to languages.

A front-end scripting language available in every browser is very, very useful. I think few new languages could replace it here; I don't know of any, at least. And I think a lot of flexibility is lost if you begin to transpile anything into JavaScript. Granted, for large projects it is pretty much a requirement to do so at some point.


> The same could be said about plain HTML/CSS though.

Genuine question: How much of the complexity of working with HTML/CSS is unnecessary and how much is inherent to the problem they solve? Is it as bad as with Javascript?

I would say that we, for the most part, have a very clear idea of how we could (theoretically) replace Javascript with something much better.

I don't know of alternative layout languages, so I don't know how good CSS is in comparison. And what could we replace HTML with? My general impression, admittedly from a place of ignorance, is that they are not that bad.


Seriously, every time people complain about CSS and HTML I think about how much work it is to support all the localisation and layout stuff they do. Managing massively different screen sizes as elegantly as they do is hard, and yeah, it's still not that elegant or easy, but I haven't used anything better that didn't sacrifice something for it.


Every time I have to build a UI in canvas, I yearn for all the conveniences CSS and the DOM provide. Do people even realise how much trouble it is to make text wrap within a given space?
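
For anyone who hasn't had the pleasure, even a bare-bones greedy word wrap ends up looking something like this (a minimal sketch; real code also has to deal with hyphenation, bidi, overlong words, line height, etc.):

    // Greedy word wrap for one paragraph on a 2D canvas context.
    function wrapText(ctx, text, x, y, maxWidth, lineHeight) {
      let line = '';
      for (const word of text.split(' ')) {
        const candidate = line ? line + ' ' + word : word;
        if (line && ctx.measureText(candidate).width > maxWidth) {
          ctx.fillText(line, x, y);  // flush the current line
          line = word;
          y += lineHeight;
        } else {
          line = candidate;
        }
      }
      if (line) ctx.fillText(line, x, y);
    }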


It is, and yet thousands of videogames have been doing this for decades...


The UI of many games breaks if you scale the text though, if the game offers such a feature (e.g. text won't fit in boxes). This is also the case on the web at times, but much less often.


And most games will use an engine like Unity or Unreal so they don't have to re-invent the wheel. HTML and CSS may not be perfect, but they've been battle tested for a good 30 years and have had improvements added. No need to re-invent that wheel.


Absolutely, I totally agree. Layout development has gotten a lot better with Flexbox, Grid, and the new Container Queries.


I believe the complexity often stems from historical development as well. There are often multiple ways to achieve the same thing, or workarounds for specific browsers. Either you have HTML tags to format something or you can do it with CSS. CSS is difficult because there are obtuse rules: the order of CSS rules matters, as does specificity (which is hell in my opinion).

You can write an extremely clean and awesome HTML/CSS document. But you very rarely find that kind of document on the web.

I think we neglected being more careful here because browsers became more and more forgiving. So pages often render as intended, but not really as stated in the often invalid document. That made browsers insanely complex as well. No new browser will ever be successful if it parses strictly; more than half of all websites would probably stop working correctly.

I believe you could replace both HTML/CSS with something much more clear and as capable. But on the other hand we should be glad that we have standards like this and perhaps we shouldn't let perfect be the enemy of good.

Perhaps JS doesn't qualify as good, but on the other hand I think its success speaks for itself. It even grew beyond the browser, and some Electron apps are extremely well received (except perhaps by your system's memory). Despite that, it is a scripting language and it should mostly be used as one. Since every browser comes with a parser, it is a quite mighty tool to have.

Python is a similar contender. It is the ultimate choice for certain domains and I don't see that changing any time soon, even if python isn't perfect itself.


TeX solves the core problem of responsive layout using just a small handful of primitives. No one seems to have noticed that FlexBox is mostly a copy of the TeX boxes and glue model.


> How much of the complexity of working with HTML/CSS is unnecessary

Talk to Google.

They are the ones aggressively increasing the API surface of the Web (driven largely by commercial reasons, e.g. ChromeOS), often without much thought to the security, privacy and performance implications.


Nobody will ever agree on an alternative. And, none of the attempted alternatives have ever taken off because the velocity of javascript's incrementalism is far greater than the velocity of any new shiny thing.

JS has the weight of the largest corporations in the world behind it that don't want to lose their investments


Worse than trying to replace it entirely is that you just end up with two implementations that need to be maintained.


> [closed platforms] will increase and which could be accelerated with different approaches to languages.

the business incentive is towards such closed platforms - data and platform ownership has value after all. It makes zero sense for a business to keep a platform open and potentially help a competitor.


I disagree, it would be in their interest to somehow bridge that gap in the long run, although I wouldn't know an easy way to achieve this. Its users often tend not to understand the repercussions in my opinion, and that is true even for software developers.

Sure, an exchange on Discord is easy and personal, you get ample support from your supporters, and that can really propel projects to the next level. But as I said, all the knowledge from these exchanges is lost to the net. The user that comes two years later won't see these exchanges, and in some cases not even your product. I would always suggest to also use some kind of forum or knowledge base.

Discord has APIs to make such content discoverable as well, in theory. I use the platform too; it shouldn't be seen as an indictment of the Discord developers at all. On the contrary, they offer an extremely useful service, in most cases completely for free, and it shows that they care for the platform. But just as things are, we have it as something separate from the usual web, and I believe that some communities suffer if they do not provide alternative venues.


Discoverability for communities is a really good point. I wonder if any chat software exists that allows easy/good indexing by search engines.


I am skeptical about the performance part (object lifecycle management being a subset of that) -- whether it is a serious issue causing bottlenecks in modern (not over-engineered) production software. It seems to me that the subset of software that needs the performance (like hashing, large binary data processing, geometric processing, etc.) consists of things that should be offloaded to the GPU for real performance or, if it has to be done on the CPU, be run under optimized WebAssembly with no dynamic memory allocations. Since JavaScript has a very good packaging ecosystem compared to other languages, often you don't even have to go out of your way to do that. Someone else has probably done it, and it may just be a require("xxhash") away.

I think it's quite backward thinking actually to bend the language making it more complex for the human for the benefit of the compiler. Especially with the shift to LLMs, programming languages should be closer to natural language and expression, not vice versa.

The good JS packaging ecosystem takes care of the small standard library issue, but I definitely agree there's room for expansion there. This seems like a surmountable problem. And by standard lib I'm not talking about things that can trivially be implemented with Array/Object/Map (like queues or ordered maps), I mean things like RNG, math functions, or things like the recently added `fetch` method for HTTP calls.


As we nowadays compile TS to JS anyway, it would be a small step to go full WASM and become mostly language agnostic.

A bit related: I still do not really understand why WASM has no direct DOM access. Answers to this question seem to fall into two categories:

- We don't need it, because you can do everything via a Javascript detour easily

- We don't want it for reason X

To me both feel a bit like excuses. I've yet to see a hard technical reason why it is not done. So, why not make WASM a first class citizen and do away with Javascript for good?


My understanding is that the main limitation is technical. WASM doesn't do GC or the host system's calling conventions, and cannot interact directly with objects from JavaScript because of this. However, this is being worked on[0] and will be solved eventually. Even without this, the performance overhead of bridging to JS is low enough that WASM frameworks can beat out React.

0: https://github.com/WebAssembly/gc/blob/main/proposals/gc/Ove...


In addition to GC, the stringref extension is another crucial one for enabling easy interop with JS. Fingers crossed that both make it past the finish line. GC seems pretty much certain, but not sure about stringref.


Because DOM access would have bloated the MVP, and you _can_ access the DOM through JS.
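
Concretely, the "JS detour" looks roughly like this on the JS side: the page hands small DOM-touching functions to the module as imports (the file name, import names and exports here are hypothetical):

    async function boot() {
      let memory;  // filled in after instantiation
      const imports = {
        env: {
          // Only numbers cross the boundary today (pre-GC/stringref), so strings
          // are typically passed as (pointer, length) into the module's linear memory.
          setTitle(ptr, len) {
            const bytes = new Uint8Array(memory.buffer, ptr, len);
            document.title = new TextDecoder().decode(bytes);
          },
        },
      };
      const { instance } = await WebAssembly.instantiateStreaming(fetch('app.wasm'), imports);
      memory = instance.exports.memory;  // assumes the module exports its memory
      instance.exports.run();            // hypothetical entry point
    }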


I would imagine it's a security risk to allow attackers to access the DOM through WASM. It adds complexity to the sandbox environment.


Software quality has never mattered in web development. When the W3C tried to push XHTML web devs balked at the idea that they should understand markup languages and not have syntax errors.

JS is installed on every machine, you don't need to deal with IT bureaucracy getting apps installed or heaven forbid ports opened, and it'll never ever have anything less than the full backing of the browser developers. Even Brainfuck would be the dominant language with that feature set.

So long as that remains the case, as it will in my lifetime, nothing will change.


> When the W3C tried to push XHTML web devs balked at the idea that they should understand markup languages and not have syntax errors

At the time a lot of the people generating content for the web were not web developers, so HTML had to be forgiving. Most CMSes allow people to add arbitrary markup — it is far preferable for the browsers to be forgiving than to refuse to show pages if there is a syntax error.


> But as increasingly complex software targets the browser, developers are wrangling codebases where UI is not the dominant source of complexity, like CAD tools and scientific data visualization.

I guess WASM is already taking this place?


Ok show me another language which has near native performance, gradual typing (thinking TypeScript here), and lets me do lightweight functional and lightweight OO programming.



> speed-comparison

a tiny tiny ten-line snippet of code

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


The second you remove threading and SIMD from those examples, they get rather close to JS. At most, you are making the argument that killing JS-SIMD for WASM and not adding native threading support was a mistake.


> The second you remove threading and SIMD

Anything else?


AsmJS coercion hints still work, so not really. Monomorphic code and some optimization can take you a surprisingly long way toward good performance. You certainly won’t find that in most untyped languages.
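
For reference, the hints in question are just arithmetic no-ops that pin a value to a single numeric type, so the JIT can emit monomorphic, untagged code for it (asm.js-style; a minimal sketch, not a full "use asm" module):

    const x = 3.7;
    const asInt32   = x | 0;           // 3   -- int32
    const asUint32  = x >>> 0;         // 3   -- uint32
    const asDouble  = +x;              // 3.7 -- double
    const asFloat32 = Math.fround(x);  //     -- float32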


Much faster than Python or almost any non-native-compiled language (except Java)


Python is a notoriously slow language so that's not saying much.

And there are a number of languages on the JVM e.g. Java, Scala, Kotlin, Clojure as well as countless compiled ones which are near-native.


JVM languages do look interesting and with Graal there seems to be an evolutionary upgrade path from JavaScript code.


"Faster than Python" is a very different thing than "near native performance".


Typescript is very nice, just need to get rid of the javascript underneath.


If we ever replace JS, it should be with some kind of StandardML that looks more like JS.

TS is unsound, unperformant, and overly-complicated in comparison.



I'm sure microsoft would absolutely love that!

and that's why it should never happen.


Deno?


I wouldn't class NodeJS as being near native based on benchmarks.

And its poor concurrency story compared to other languages e.g. Scala means that it's hard to push the runtime in real-world situations.


How about Dart/Flutter?


Moving the Web to Dart/Flutter would cost billions. How would it help in ANY way solving any challenge that currently exists? What even are these challenges? The OP is pretty blurry on that.


I forgot to add "not controlled by a single corporation" to my list of requirements. Also not gradually typed, I think? But I would consider it if it weren't all tied into Google and had a larger ecosystem.


Isn't Typescript "controlled" by Microsoft?


This is a good point, yes you are right about that.


Typescript is completely open source (Apache 2.0 licensed) and all of its roadmap, planning, issue tracking, peer reviewing, is also out in the open (in GitHub Issues). It's about as "controlled" by Microsoft as Linux is "controlled" by Red Hat at this point.


OK, but the point of comparison was Dart/Flutter, which I suppose is similarly open.

Open source does not mean that it's not controlled by a single person or organization. It's not meant as a critique though, as the alternative I guess is a language designed by committee, which has its own problems.


Last time I looked into it Flutter performance wasn't great.


lua


Luajit definitely. Lua not so much.


JavaScript is JIT-compiled, isn't it? So it would be fair to compare the JIT versions.


Sorry, I missed your reply.

In terms of performance and memory use, LuaJIT beats every other JIT out there. Lua is a much smaller language, and Mike Pall is a cyborg, so it's an unfair comparison against a huge language like JS.


First we need a good alternative. JavaScript may not be the best, but it works well.


He has been saying this for a decade. I had to double check the dates to see if it was even relevant.

A problem is that the 3 problems the article highlights have been addressed with incremental fixes. Performance with V8. Object lifecycle management has had improvements from WeakMap (ES6) and the later WeakRef, although JS will never have RAII. ES6 also added more data structures.

No new alternative will be able to keep up with javascript's incrementalism


> A problem is that the 3 problems the article highlights have been addressed with incremental fixes.

(author here) - I disagree. If V8 solved performance, we wouldn't see so much of the JS tooling stack being rewritten in Rust. Modern JS runtimes are fast because they are tuned for the types of workloads that are common on the web. This is great for the performance of a virtual DOM, but if you go off the beaten path of workloads that the browser is used to, you're still limited by the performance of the runtime.

JS has improved for sure but it’s always going to be slower than a language with a BDFL because it has so much legacy baggage and political consensus-building required. IMHO WebAssembly is the right approach: standardize a minimal bytecode and let languages compete on innovation.


JavaScript is optimized for workloads that need low latency and high concurrency under constrained resources. This is pretty ideal for a GUI scripting language, and covers most use cases for the web.

It is hard to get performance perfect in all workloads. I have always been able to optimize my scripts to get the responsiveness that I need.


> This is pretty ideal for a GUI scripting language, and covers most use cases for the web.

Yep, fully agreed. This is why I say in the article that JavaScript hasn’t been and won’t be replaced for those use cases.

> It is hard to get performance perfect in all workloads.

The way languages do this in practice is by giving developers the ability to drop down to lower levels of abstraction if they want to. With modern JavaScript you don’t have any levers to pull, you have to hope that a runtime optimized for the GUI use cases you mentioned happens to also take a happy-path for whatever non-GUI code you write.

For example, AFAIK the JS spec doesn’t provide runtime bounds. Array.shift() could be O(N) depending on the runtime. At my company we spend way more cycles than I’d like trying to figure out why some code happy-paths on Chrome but not Firefox or vice versa.
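
As a concrete example of the "no levers" problem: if shift() turns out to be O(n) on a given engine, the usual workaround is to stop calling it and track a head index yourself (a minimal sketch):

    // Potentially O(n) per dequeue, depending on the engine:
    //   while (tasks.length) handle(tasks.shift());
    // Index-based drain: O(1) per dequeue regardless of engine internals.
    function drain(tasks, handle) {
      let head = 0;
      while (head < tasks.length) {
        handle(tasks[head++]);
      }
      tasks.length = 0;  // release references when done
    }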


I'm not _entirely_ sure which RAII you mean, but if you mean something like C#'s `using` or Java's `try-with-resources` or Python's `with`, then https://github.com/tc39/proposal-explicit-resource-managemen... and https://github.com/tc39/proposal-async-explicit-resource-man... are in stage 3 (of 4 stages) in ECMAScript's language proposal lifecycle and will be coming to a JS engine near you behind a flag soon-ish.
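
If it lands as currently written, the ergonomics would look roughly like this (a sketch based on the Stage 3 proposal text; the syntax may still change, and today you'd need a transpiler such as TypeScript 5.2+ plus a Symbol.dispose polyfill):

    class TempFile {
      constructor(path) { this.path = path; /* open the resource, hypothetically */ }
      [Symbol.dispose]() { /* close/delete it here */ }
    }
    function work() {
      using tmp = new TempFile('/tmp/scratch');  // proposed `using` declaration
      // ... use tmp ...
    }  // tmp[Symbol.dispose]() runs automatically here, even if work() throws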


I wish! I wish!! I so wish this would happen. And the ‘type’ attribute to the script tag would finally mean something!

But nope. It won’t be. Just as var remains to support old webpages. Use strict to support newer features without affecting old ones. Imports. Etc.

But the post is factually off. In fact there is now an ordered map in JS. I think whoever is responsible for the spec and the implementations needs to be complimented. It's actually shocking that it works so well. I have written painting software (much more computationally expensive than CAD software) using the Canvas and WebGL APIs and it's pretty darn performant. Never native. But still very very good.

Although I don’t know about the vitriol in the comments toward old crocky. And his good parts book was good and useful to me in parts.


What is one of those 'really clean' languages that we should be using then?

I'm interested!


Compared to other languages, IMHO Ruby (not Rails) checks the boxes.


Purescript. By far the best type system of any frontend language, good interop, type-directed emit (so that you can, for instance, automatically generate parsers from type definitions), and unlike Elm actually treats users like adults.


Elm is an excellent choice.


Smalltalk


JavaScript is fine, the real problem is HTML/CSS and the DOM.

We were so, so close to a semantic, reader-based web ~20 years ago.

The ad supported Internet killed it. Now that the ad supported Internet is dying, maybe we can get back to it.


No it's not, and we do have a semantic web accessible to screen readers. It is just also this incredible universal platform for building and distributing applications. It's a universal layer accessible to all, no matter your socioeconomic background, politics or location.

Criticising the web platform as not conforming to a set of self-defined rules is lazy and pointless.


> JavaScript is fine

What languages do you have experience with?

The article mentions some alternatives, like Elm, and I'd be hard pressed to find people who have used those (say Elm) effectively and still believe JS is fine.


Not the OP you replied to, but I've worked in about a dozen languages in 15+ years and I also think Javascript is fine. It will never be #1, but it does a good job for the things it is good at.

The fact that you mentioned Elm is funny to me as it is so low ranked in terms of usage or desire that it doesn't even show on StackOverflow's survey results.


Truth is not measured in mass appeal --- Immortal Technique


Exactly! Perfect is the enemy of good.


You're too old to be a language elitist.


Any time something is adopted as broadly as JavaScript it's going to be a mess. Even outside the world of computing. Take... city planning. Cities are a mess. In fact human civilization is one big freakin' mess.

Wouldn't it be nice if we could start over and apply all the lessons we've learned over the generations? The world would be a much better place!

Or would it?

We'd end up with yet another mess. It'll be a different mess but it'll be a mess nonetheless. Or worse - we'll end up with two big messes instead of one big mess!


> "Notably, none of these are walls you’ll hit if you’re writing a blog or e-commerce site, which is why JavaScript isn’t going away."

Pretty much sums up everything :(


Most of the boilerplate code I write is mapping data structures between API standards. None of that is related to JavaScript. So I won't stop using JavaScript.


He's got a point. TypeScript alleviates some pain, but I still feel like JS is not a great language. I have some nostalgic love for it because I used to be "the JavaScript guy" at the beginning of my career. But ever since I started using TS and Go, using Vanilla JS feels like fighting code smells and complexity all the time. At the same time, I acknowledge and appreciate how much JS has improved.


As a culture, we are stuck. This article illustrates this well: https://lindynewsletter.beehiiv.com/p/culture-stuck

This language refusing to fade away like other ancient languages is just a symptom of the root cause. Myself, I use Elixir as my primary language.


I remember Crockford's lectures at Yahoo in 2009, and they were inspiring, but in hindsight, most of his bold points turned out incorrect.

#1: The promise of high speed with the event loop & JIT compiler. Crockford's idea was that if you make small functions that return quickly, the JIT will boost their speed dramatically. Plus, fast yielding would make programs seem faster to the client.

Back then, the Google Chrome team wrote an impressively fast JS engine, but the speed of JS didn't improve much afterwards. Other JIT-based languages and engines didn't match that performance either -- I tried PyPy myself, and it wasn't fast like C.

I must give him credit that his lectures made me finally get how the event loop works.

#2: That Promises and async would make async programming much easier. Promises do make code better than callback hell, but still are tedious. The side costs of async are well summarized in this post & presentation: https://vorpus.org/blog/notes-on-structured-concurrency-or-g...

#3: Object construction with closure functions. We tried that wholeheartedly at a contemporary project, and it was awkward. I haven't seen any popular framework take this approach either.

#4: Decentralized web evangelism in the mid-'10s. This is a BitTorrent concept extrapolated to web hosting; later it got traction branded as Web3. Torrents make it clear that this only works when everyone cooperates and carries some serious load, otherwise torrents die out. The decentralized web requires every user to host a good bit of data, and any document published puts this load on everyone in the system. As with Ted Nelson's concept, it makes sense only in a small environment of cooperating users.

That being said, Crockford's lectures were interesting for the history part, and he made a good point that one should look into history and see what is logical and progressive versus what we got just accidentally and which stays on as legacy.


> #3: Object construction with closure functions. We tried that wholeheartedly at a contemporary project, and it was awkward. I haven't seen any popular framework take this approach either.

yeah I have been struggling with this for a while. At my project we use TypeScript, and for non-stateful objects we just use plain objects, no classes. However we started using classes for stateful code. I was opposed to the idea and thought that using closure-based state would be better, but I ended up losing that battle. A lot of those classes are single-instance in the application, which feels like even more of an anti-pattern for classes.

To be clear, I am not advocating for closure-based state, but classes also feel awkward. It feels like we should have just bitten the bullet and used some kind of state-management library.
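
For anyone who hasn't seen it, the Crockford-style closure construction vs. a class looks roughly like this (names made up):

    // Closure-based: state lives in the closure; no `this`, no `new`, real privacy,
    // but every instance allocates its own copies of the methods.
    function makeCounter() {
      let count = 0;
      return {
        increment() { count += 1; },
        value() { return count; },
      };
    }
    // Class-based: state lives on the instance, methods are shared on the prototype.
    class Counter {
      #count = 0;
      increment() { this.#count += 1; }
      value() { return this.#count; }
    }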


I get his point, but it may be too late for it, as some applications even have their backends in JS.

However I agree that it is somewhat problematic that instead of creating new languages, we create frameworks (sometimes frameworks based on other frameworks, like Next.js). Especially considering how messy JS can be.


He is promoting an "actor language" to replace JavaScript. https://www.crockford.com/misty/


And it's not clear it would be worth the retooling/training on the UI side of things. The actor model is great for massive concurrency, which simply isn't a primary or even a secondary concern for 99.9% of client-side code. There's a reason why Go has such a huge following in the server space but a deafening silence from folks wanting it available in browser UI code.


We had this argument in 2012-13 when Google Dart came out. I advocated for a Dart VM in the browser but the Blink team killed that idea. Brendan Eich explained here on HN what the problems were and made good sense. Now, with JS as a compile target and with WASM, there would seem to be even less reason. JS devs have other options, but they still choose JS or TypeScript. Crockford sounds like an old man yelling at clouds (I'm older than him, so not ageism).


Strip down the browser APIs and make the web more user friendly and deterministic again while you're at it. The History API, for example, should never have been implemented.


The problem is not just a language problem. The problem is the repurposing of the web as an "operating system". HTML and CSS are OK for document layout and styling, and JS is OK for simple interactions. BUT none of these were designed to handle highly complex and dynamic applications. We are just building on the wrong foundation, hacking to extremes to twist the framework to the current trend. We are using the wrong tools for the job.


> we’d get new computer languages about every generation.

> accumulated complexity we’ve piled on top of bad foundations

OTOH when we get new computer languages, they usually start really simple as a sort of protest against the "old" complexity and then spend the next few decades shoehorning back in all the things they left out to keep it simple but that it turns out we actually needed.


While we're at it, we should stop using VHS.


Listen also to Crockford's interview on the Corecursive podcast: https://corecursive.com/json-vs-xml-douglas-crockford/


Languages are validated by their use. All other interpretations of their value are abstractions from the only real metric, that people use it and it is alive in that way. We don’t speak dead languages and we don’t adopt them either.


By this logic, the Imperial System is the best measurement system for the US (pick whatever entrenched system you want). Whether you agree or disagree with it, it's the most widely used in US so its superiority there is self-evident.


You are free to pick up the better system in your opinion but if you want interoperability and consistency with other US measurers then you may have issues.


I agree that the Imperial System is superior to the Metric System just from the point of view of human usability.


He says that a new language used to pop up every generation -- and that seems to me like it's been the case -- Elixir, Rust, and Go all seem to have arisen during the last 15 years.


How can you take seriously a guy who didn’t allow comments in JSON?

As smart as he is - he obviously doesn’t give a single f* about real life problems and challenges of real life programmers.


Can someone summarize Crockford's opinion on the current state of the browser JS API? And his opinion on React?


[flagged]


> The new COBOL is here to stay.

Very much. :) We have a JavaScript interpreter parked at the L2 Lagrange point.[1] As far as the preservation of human-made artefacts goes, that is basically one of the safest places it can be. When we and all of our earthly possessions have long crumbled to dust, that thing is still going to be out there. (Albeit very likely drifting without power around the sun.)

1: Event-driven James Webb Space Telescope Operations - https://arc.aiaa.org/doi/pdf/10.2514/6.2006-5747


Yeah, the disaster is borderline interplanetary. It will spread to Mars and the Moon soon enough.


Crockford is not Eich.


I stand corrected! Still, good to see he realized the error of his ways and the damage caused.



