I'd argue that other languages did this (or something similar) to great success, most notably Java.
That is, the HotSpot VM was such a phenomenal engine that lots of other languages sprang up to take advantage of it: Clojure, Scala, Kotlin, etc.: https://en.m.wikipedia.org/wiki/List_of_JVM_languages . Even with the Java language itself, syntactic changes happen much more frequently than VM-level bytecode changes.
With an interpreted language like JavaScript, the dividing line is a little grayer, because the shippable code isn't bytecode, it's still just text. But it still seems to make sense to me to target a "core", directly interpretable language, and then let all the syntactic sugar be precompiled down to that (especially since most JS devs have a compilation step now anyway). Heck, we basically already did this with asm.js, the precursor to WebAssembly.
It seems to me that wasm should just support the web/browser API instead of the current trampoline business; this way, JS build tooling can emit wasm files, which is similar to the example you use.
asm.js came about because it was a very optimizable subset of JavaScript; then it was superseded by WebAssembly; now the proposal in TFA is basically asking for asm.js back. Perhaps the better answer is to make WebAssembly fully support all of what JS could originally do.
This is perhaps why, as I get older, I sometimes feel like I want to get out of software development and become a goose farmer like that dude on LinkedIn - lots of times it feels more like spiraling in circles than actually advancing.
I think it’s much harder to make big leaps as a community with languages and other lower-level concerns (relative to the technology stack as a whole).
Look at the stronghold grip of C/C++ and how long it’s taken Rust to gain a meaningful foothold in those realms for example.
Google wanted to flat-out replace JS once already; that was the entire origin of Dart. They only pivoted to the cross-platform mobile framework as its primary target after it failed to gain traction as a standard.
It’s not just better than JavaScript; I would argue it’s probably the best general-purpose modern OOP-based language out there right now.
It was actually really fortunate in that, for a long time, it didn’t have a big community behind it. They just put a lot of very smart language designers on the team, and they had ten years to try various approaches and learn from the mistakes of not only themselves but others, without a lot of outside noise.
But there’s no other language I would prefer to write applications in. It’s just a really nice mix of ergonomic, expressive and powerful.
PNaCl was essentially "let's shove LLVM into every browser and make it a mandatory part of the web", which somehow seems even worse than "let's shove the JVM into every browser and make it a mandatory part of the web".
Given Google's power a decade later, with Safari left as the only non-Chrome clone with market relevance, it hardly makes a difference.
Additionally, we already have LLVM all over the place, alongside the JVM and CLR; it is the most widely deployed compiler infrastructure, with contributions at the same level as the Linux kernel.
They wanted to implement a typing system first, so they could transfer complex types with a strict contract; large parts of DOM management would benefit enormously from that, and it would be a far better foundation to design an API around. This system has been stuck in different iterations for years.
Because it's already been solved from day one, and people keep repeating that it's a problem anyway.
Anything you can do in JavaScript, including access to the DOM, can be put into a JavaScript function. You can import that function into a WebAssembly Module, and you can use WebAssembly Memory to transfer large or complicated data as an efficient side channel. It all works.
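A minimal sketch of that wiring, assuming a hypothetical demo.wasm that declares the import env.setText and exports run() (all names here are made up for illustration):

    // A JS function doing DOM work, handed to a module as an import.
    const imports = {
      env: {
        setText: (n) => {
          document.getElementById("out").textContent = "from wasm: " + n;
        },
      },
    };
    WebAssembly.instantiateStreaming(fetch("demo.wasm"), imports)
      .then(({ instance }) => instance.exports.run());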
Could you link an ergonomic example? I have it cemented in my memory that DOM access in WebAssembly is not trivial, and I suspect others do too.
This is what StackOverflow tells me (2020):
> Unfortunately, the DOM can only be accessed within the browser's main JavaScript thread. Service Workers, Web Workers, and Web Assembly modules would not have DOM access. The closest manipulation you'll get from WASM is to manipulate state objects that are passed to and rendered by the main thread with state-based UI components like Preact/React.
> JSON serialization is most often used to pass state with postMessage() or Broadcast Channels. Bitpacking or binary objects could be used with Transferrable ArrayBuffers for more performant messages that avoid the JSON serialization/deserialization overhead.
This feels like "we can have DOM access at home" meme.
Web Workers can't directly access the DOM in JavaScript either. This is not a WebAssembly problem. If you want a Web Worker to manipulate your document, you're going to post events back and forth to the main thread, and Web Assembly could call imported functions to do that too.
I don't even know what he's on about with Preact/React...
Save the following as "ergonomic.html" and you'll see that WebAssembly is manipulating the DOM.
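(The original listing didn't survive here; the following is a hypothetical reconstruction in the same spirit, not the exact file: a hand-assembled module that imports an easy(arg) function and calls it with 123.)

    <!-- The bytes below are a hand-assembled binary for roughly this WAT:
         (module
           (import "env" "easy" (func $easy (param i32)))
           (func (export "go") (call $easy (i32.const 123)))) -->
    <p id="out">before</p>
    <script>
      const bytes = new Uint8Array([
        0x00,0x61,0x73,0x6d, 0x01,0x00,0x00,0x00,          // magic + version
        0x01,0x08,0x02,0x60,0x01,0x7f,0x00,0x60,0x00,0x00, // types: (i32)->(), ()->()
        0x02,0x0c,0x01,0x03,0x65,0x6e,0x76,0x04,0x65,0x61,
        0x73,0x79,0x00,0x00,                               // import "env" "easy"
        0x03,0x02,0x01,0x01,                               // func 1 uses type 1
        0x07,0x06,0x01,0x02,0x67,0x6f,0x00,0x01,           // export "go" = func 1
        0x0a,0x09,0x01,0x07,0x00,0x41,0xfb,0x00,0x10,0x00,0x0b // i32.const 123; call 0
      ]);
      const imports = { env: { easy: (arg) => {
        document.getElementById("out").textContent = "WASM changed the DOM: " + arg;
      }}};
      WebAssembly.instantiate(bytes, imports)
        .then(({ instance }) => instance.exports.go());
    </script>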
That `easy(arg)` function could do much more elaborate things, and you could pass lots of data in and out using the memory export.
I'd like to believe a simple standalone example like this would be enough to get people to shut up about the DOM thing, but I know better. It'll be the same people who think you need to link with all of SDL in an Emscripten project in order to draw a line on a canvas.
> This feels like "we can have DOM access at home" meme.
And I'm sure somebody (maybe you) will try to move the goal posts and claim some other meme applies.
> I don't even know what he's on about with Preact/React...
Around 10 years ago, I was having lunch in a food court and overheard "Luckily I don't have to use javascript, just jquery".
Around 5 years ago, a co-worker admitted he still had issues distinguishing what functionality was python and what came from Django (web framework), despite having used them both daily for years. He thought it was because he learned both at the same time.
I wouldn't be surprised if this was more of the same, and just getting worse as we pile more and more abstractions on top.
but what bothers me a bit is that this example still uses custom javascript code.
i tried to find an answer, but essentially what appears to be missing is the ability to access js objects from wasm. to access the document object it looks like i need a wrapper function in js:
    function jsdocument(prop, arg) {
      return document[prop](arg);  // call the named method on document
    }
so far so good, i can import this jsdocument() function and use it to call any method on the document object, but if document[prop](arg) returns another DOM object, then what?
i can call this function with the arguments ("getElementById", "foo", "append", "<div>more foo</div>") in any WASM language and it will result in calling document.getElementById("foo").append("<div>more foo</div>"); which allows some basic DOM manipulation already. but then i want to continue with that object so maybe i can do this:
    var objlist = [];  // integer indices act as handles to DOM objects

    function getDOMobj(prop, arg) {
      var len = objlist.push(document[prop](arg));  // store the returned DOM object
      return len - 1;                               // hand back its index as a handle
    }

    function callDOMobj(pos, prop, arg) {
      return objlist[pos][prop](arg);  // look the handle up, call a method on it
    }
can you see what i am getting at here? building up some kind of API that allows me to access and manipulate any DOM object via a set of functions that i can import into WASM to work around the fact that i can't access document and other objects directly. it looks like this is similar to this answer here: https://stackoverflow.com/a/53958939
solving this problem is what i mean when i ask for direct access to the DOM. i believe such an interface should be written only once so that everyone can use it without having to reinvent it like it appears to be necessary at the moment.
> i'd also like to thank you for patiently answering all the questions
It's nice of you to say so. Thank you.
> can you see what i am getting at here?
I mostly can, but I'm not sure we're clear what we're talking about yet.
I see a lot of people who repeat something about "WebAssembly isn't usable because it can't manipulate the DOM". Ok, so I show an example of WebAssembly manipulating the DOM. That should put that to rest, right? If not, I'm curious what they meant.
> building up some kind of API that allows me to access and manipulate any DOM object via a set of functions that i can import into WASM to work around the fact that i can't access document and other objects directly,
This is a shortcoming in the language implementation, or the library for the language. The machinery is already there at the WebAssembly level. If your language is low level (Rust, C, or C++), and doesn't have what you want, you could roll your own. If your language is high level (Python or Lua), you're at the mercy of the person who built your version of Python.
The core of WebAssembly is a lot like a CPU. It's analogous to AMD64 or AArch64. It'd be weird to say you need changes to your CPU just to use a function called `getElementsByName()` or `setAttribute()`. Some WebAssembly extensions have added features to make that "CPU" more like a Java-style virtual machine. There are (or will be) garbage-collected references to strings, arrays, and structs. This might make it better for implementing Go- and Java-style languages, and it could help with a fresh implementation of Python or Pike too. And maybe some of those changes will give controlled access to JavaScript-style objects.
There's a last discussion to be had about performance. Maybe the bridge between WebAssembly imports and exports is too slow for intensive use. That's a debate that should be backed up with benchmarks of creative solutions. Maybe accessing JavaScript strings is so common, so important, and so slow that it really does require an enhancement to the standard.
i am talking about a js library of generic functions that can be imported from wasm to make DOM access easier. handling of string arguments still needs to be solved (i am guessing the shared memory access is the right place for that) and the respective functions on the wasm side need to be implemented for each target language so that DOM access in the target language becomes natural and easy to use.
If you picked a language that gave you low level control, and if you had strong opinions about what you wanted, you could probably write that JS library in a weekend or two. Strings and arrays through shared memory. Maybe use a JS Map of integers to act as handles mapping to JS Objects.
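A rough sketch of that handle idea (every name here is made up, nothing standard):

    // Integers stand in for JS objects, since wasm itself only passes numbers.
    const handles = new Map();
    let nextHandle = 1;

    function toHandle(obj) {            // JS object -> integer handle
      handles.set(nextHandle, obj);
      return nextHandle++;
    }
    function fromHandle(h) {            // integer handle -> JS object
      return handles.get(h);
    }
    function dropHandle(h) {            // release when the wasm side is done
      handles.delete(h);
    }

    // One generic import built on top: call a method on a held object and
    // get a handle to the result, e.g. callMethod(docHandle, "getElementById", "foo").
    function callMethod(h, name, arg) {
      return toHandle(fromHandle(h)[name](arg));
    }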
Thanks for confirming that WebAssembly still cannot manipulate DOM in 2024.
It can only call custom javascript functions that manipulate DOM AND I need to write some arcane function signature language for every DOM manipulating function I want to call.
I'll give it another 4 years and see if they fixed this.
> Thanks for your simple concrete examples and explanations!
I'm glad someone liked it :-)
> I love his description of Forth as "a weird backwards lisp with no parentheses"
I've been interested in that duality between Forth and Lisp before, but my progression always seems to following this path:
- Since Forth is just Lisp done backwards and without parens, and since it's not hard to write an sexpr parser, I might as well do Lisp to check the arity on function calls.
- But in addition to arity errors, I'd really like the compiler to catch my type errors too.
- And since I've never seen an attractive syntax for Lisp with types, I might as well have a real grammar...
And then I've talked myself out of Forth and Lisp! Oh well.
PostScript is kind of like a cross between Forth and Lisp, but a lot more like Lisp actually. And its data structures, which also represent its code, are essentially s-expressions or JSON (polymorphic dicts, arrays, numbers, booleans, nulls, strings, names (interned strings), operators (internal primitives), etc.)
Not coincidentally, James Gosling designed the NeWS window system and implemented its PostScript interpreter, years before designing and implementing Java. And before that he designed and implemented "MockLisp" in his Unix version of Emacs, which he self-effacingly described like this: "The primary (some would say only) resemblance between Mock Lisp and any real Lisp is the general syntax of a program, which many feel is Lisp's weakest point."
DonHopkins on May 10, 2017, on "Emacs is sexy":
Hey at least Elisp wasn't ever as bad as Mock Lisp, the extension language in Gosling (aka UniPress aka Evil Software Hoarder) Emacs.
It had ultra-dynamic lazy scoping: It would defer evaluating the function parameters until they were actually needed by the callee (((or a function it called))), at which time it would evaluate the parameters in the CALLEE's scope.
James Gosling honestly copped to how terrible a language MockLisp was in the 1981 Unix Emacs release notes:
    12.2. MLisp - Mock Lisp

    Unix Emacs contains an interpreter for a language that in many respects
    resembles Lisp. The primary (some would say only) resemblance between
    Mock Lisp and any real Lisp is the general syntax of a program, which
    many feel is Lisp's weakest point. The differences include such things
    as the lack of a cons function and a rather peculiar method of passing
    parameters.
"Rather peculiar" is an understatement. More info, links and code examples:
PostScript is much higher level than Forth, and a lot more like Lisp than Forth, and has much better data structures than Forth, like polymorphic arrays (that can be used as code), dictionaries (that can be used as objects), strings, floating point numbers, and NeWS "magic dictionaries" that can represent built-in objects like canvases, processes, events, fonts, etc.
Yet Forth doesn't even have dynamically allocated memory. You can implement it in a few pages of code, but it's not standard, very few Forth libraries use it, and most instead use the linear Forth dictionary memory (which is terribly limited and can't be freed without FORGETting everything defined after you allocated it):
PostScript is homoiconic. Like Lisp, PostScript code IS first class PostScript data, and you can pass functions around as first class objects and call them later.
Yeah, I've never used Postscript except as a document format created by LaTeX :-)
There was a language called "V" a while back, different from the more recent language called V. It was basically a Forth where quoting was done with square brackets. This replaced the colon-semi notation for defining words, and it was also nice for nested data structures. This language seems to have fallen off the web, though.
You mentioned FExprs. I never looked at Mock Lisp, and it sounds like Gosling doesn't think I should! However, I'm sure you're aware of Kernel. I think of Scheme as the "prettiest programming language I don't want to use", and I think the vau stuff in Kernel makes it even prettier. (But I still don't want to use it.)
For homoiconicity, I've also considered something like Tcl or Rebol/Red. The latter two blur the lines between lexer and parser in a way that I'd like to learn more about.
But really, I always come back to wanting static typing. Both for compile time error checking, and to give the compiler a leg up in runtime performance. Instead of using separate declarations like you see in Typed Racket and some others, I wonder if a Lisp with the addition of one "colon operator" to build typed-pairs would do it. Just one step down the slippery slope of infix syntax sugar. In the context of WebAssembly, something like this:
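For example, a purely hypothetical sketch (WAT-flavored s-expressions; this colon syntax is my own invention, not anything real):

    (func add3 (a:i32 b:i32 c:i32):i32
      (i32.add a (i32.add b c)))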
Using colons to specify the types of the parameters and return result(s). It'd also be nice to have colon pairs like this for key:value in hash tables, or as cond:code in switch/conditional constructs.
I.e. they are correct that it is arcane. What percentage of programmers today do you think have ever seen code written in any Lisp dialect, let alone understand it?
The point is that anyone who's distracted by the arcanity of Web Assembly Text Format obviously doesn't understand the first thing about WASM or its potential use cases. The poster could just as easily have used a regular programming language and compiled that down to a WASM binary. However, for the purposes of a minimal example to be pasted in a text box, this might have led to a much larger Uint8Array.
I wonder what the response would've been if I had left the WAT source code out and just claimed, "this WASM binary was built with a compiler", and not specified the language.
i would have liked to see the source. how would the example look when written in python for example? and how would it compare with non-wasm javascript?
> how would the example look like when written in python for example?
That seems like an easy question, but there are a lot of choices which complicate it. I'm sure people have compiled CPython to WebAssembly, but I think you only get the WebAssembly imports they (or Emscripten) have chosen for you. I can't use that as an example of what I was trying to show.
It looks like py2wasm is aiming for a true Python to WASM compiler, rather than compiling an interpreter. However, I don't think it supports user-defined imports/exports yet. There's a recent issue thread about this (https://github.com/wasmerio/py2wasm/issues/5).
> how would it compare with non-wasm javascript?
I'm not sure I understand the question. If you're just using JavaScript, it just looks like JavaScript.
> so does that mean accessing the DOM from python is not possible yet?
No. It means you only get what the person who ported CPython or py2wasm gave you. It's not a limitation in WebAssembly, and maybe they have some other (hopefully better) API than the `easy(123)` example I was trying to show.
> for the second question the example you gave is equivalent to what in plain html/js?
> It means you only get what the person who ported CPython or py2wasm gave you
that's what i meant. it's not possible until someone adds the necessary features to the wasm port of the language. makes sense of course, like any feature of a new architecture.
> If I understand what you're asking
exactly that, thank you. it is easier to understand the example if there is no doubt as to what is the js part and what is wasm (it also didn't help that the code was not easy to read on a phone)
What percentage of programmers on Hacker News haven't?
Flaunting your ignorant anti-intellectualism isn't a good look.
You do know this is 2024, you have Internet access, and you can just look shit up or ask ChatGPT to learn new things, instead of cultivating ignorance and pointlessly criticising programmers trying to raise awareness, share their experiences, and educate themselves and other people.
In case you've been living under a rock and didn't realize it, JavaScript, the topic of this discussion, is essentially a dialect of Lisp, with garbage collection, first class functional closures, polymorphic JSON structures instead of s-expressions, a hell of a lot more like and inspired by Lisp and Scheme than C++, and LOTS of people know it.
My point is that if someone says something is arcane, “it’s not, it’s just [something that you’ve potentially never heard of and almost definitely don’t understand even if you have heard of it]” doesn’t help your case. They could look it up, but the fact that they would have to do so proves the commenter’s point - relatively few programmers understand Lisp syntax, i.e. it is arcane.
If you’re trying to raise awareness of something, don’t act like the reader is stupid if they don’t already understand. Insisting that something is obvious, especially when it is not, means any reader who does not understand it will likely perceive the comment as snobby. As does including snide remarks such as “in case you’ve been living under a rock”.
> Flaunting your ignorant anti-intellectualism isn’t a good look.
Why do you assume that I personally don’t know what s-expressions are just because I agree that they’re arcane? Labelling someone as an ignorant anti-intellectual just because they disagree with something you said isn’t a good look either.
There's nothing "arcane" about WebAssembly Text format. The fact that you don't recognize it just means you don't know much about WebAssembly, which is fine, but you're whining, lashing out, and attacking people who are trying to explain it, and trying to police and derail discussions between other people who are more knowledgeable and interested in it, which makes you a rude anti-intellectual asshole.
Why don't you just go away and let other people have their interesting discussions without you, instead of bitterly complaining about things you purposefully know nothing about and refuse to learn? How does it hurt your delicate feelings to just shut up and not bitch and whine about discussions you're not interested in?
> you’re whining, lashing out, and attacking people who are trying to explain it
I think you’re assuming that all of the comments you’re talking about are written by the same person, when they’re not. I haven’t been attacking anyone, and I don’t think I’ve replied to anyone who’s tried to explain it.
> things you purposefully know nothing about and refuse to learn
Why do you still assume I don’t know what they are? I’ve already pointed out that my belief that s-expressions are arcane doesn’t mean I don’t know what they are.
As another illustration of my point, I just stumbled across this comment on another post:
> But maybe the whole "ease of use" budget is blown by using a Lisp in the first place.[0]
The fact is that Lisp syntax is understood by relatively few programmers, which meets the definition of arcane. You immediately flying off the handle when someone calmly points this out will not help your goal.
I had a basic idea of what Lisp was before getting into it some 25 years ago. It soon became obvious that, no, I actually had no idea. It's not that what I thought had been wrong, but it had no content.
I knew Lisp the way I know that that guy walking down the street is my neighbor Bob. But since I've never had a conversation with Bob, I actually have no idea who he is.
When I see Korean writing in hangeul, I know it is Korean writing, but can't read a letter of it (nor speak a word of Korean).
These examples are like knowing what Lisp is.
The thing I had not expected was how the knowledge in the Lisp world and its perspectives are very informative about a whole lot of non-Lisp!
> There's no point in trying to make other people stop talking about Lisp
Nobody is trying to make you stop talking about it. We’re trying to make you understand that the way you’re talking about it is elitist. When someone said they were confused by the syntax, you could have just explained it without judgement. Instead, you felt compelled to flaunt your membership of the in-group who understands Lisp, and try to make others feel stupid by implying that people who don’t understand it aren’t good programmers, or are anti-intellectual.
You’re doubling down on it in this comment, too, still insistent on making people feel like they’re “less than” because they don’t know Lisp:
> so other more knowledgeable and curious people
If I didn’t know Lisp, and my first exposure to it was from someone who sees this kind of toxicity as a reasonable way to speak to people, would I want to join their community?
> If I didn’t know Lisp, and my first exposure to it was from someone who sees this kind of toxicity as a reasonable way to speak to people, would I want to join their community?
Wouldn't (didn't!) faze me. Every community has it. The most popular languages, platforms and tools in fact bring out unbridled hostility. Probably, hostility finds a peak in the second most popular camps. :)
We have already lost people who are influenced by this sort of fluff, because those people will be turned away from Lisp by the anti-Lisp trolling about parentheses, niches and slow processing over everything being a list, and so on. There aren't enough Lisp people around to counter it.
Sorry to bust your minuscule lisp bubble, but just because someone ignored your favorite niche language in an educated career choice, it doesn't mean they are ignorant.
Infantile language tribalism, though, has no place in engineering, and is blatant ignorance when coming from a supposed adult.
So what you mean by niche is actually popularity, and not a specific application area?
Fortran has a niche: numeric computing in scientific areas. However, even Fortran is not your grandfather's Fortran 66 or 77 any more. I had a semester of the latter once, as part of an engineering curriculum before switching to CS.
It supposedly has OOP in it, and operator overloading and such.
I don't know modern Fortran, so I wouldn't want to look ignorant spreading decades-old misinformation about Fortran.
Most devs do know what Lisp'ish languages are because you just can't forget how weird a bunch of nested parenthesis look.
They just don't care enough to invest time in it because it is niche. And proponents tend to tirelessly spam about it from their ivory towers like it's flawless and everyone who didn't learn it is somehow inferior, somehow justifying personal attacks like yours. Classy as usual.
> people who become hostile when you try to talk about something interesting
We’re becoming annoyed not because people are trying to talk about something interesting, but because they are being intentionally insulting and condescending and then using bad faith arguments like this one when they’re called out on it.
Which of these quotes represent the commenter “trying to talk about something interesting”?
> flaunting your ignorance and your anti-intellectualism
> The point is that anyone who's distracted by the arcanity of Web Assembly Text Format obviously doesn't understand the first thing about WASM
> You do know this is 2024, you have Internet access, and you can just look shit up
> In case you've been living under a rock and didn't realize it
> but you're whining, lashing out, and attacking people who are trying to explain it, and trying to police and derail discussions between other people who are more knowledgeable and interested in it, which makes you a rude anti-intellectual asshole… Why don't you just go away and let other people have their interesting discussions without you, instead of bitterly complaining about things you purposefully know nothing about and refuse to learn? How does it hurt your delicate feelings to just shut up and not bitch and whine about discussions you're not interested in?
> And that proves my point that you're flaunting your ignorance and your anti-intellectualism. But you be you. There's no point in trying to make other people stop talking about Lisp by complaining about how proudly ignorant you are, and how you want to remain that way, so you don't want anyone else to talk about it. This really isn't the place for that, since you always have the option of not reading, shutting up, and not replying and interrupting and derailing the discussion, so other more knowledgeable and curious people can have interesting discussions without you purposefully harassing them like a childish troll.
> Look, it's pretty clear I stepped on some insecurity.
I listed those quotes to rebut your assertions that we’re being hostile towards a group of people who are merely trying to talk about something interesting, not to imply attribution of all of the quotes to you. I chose the general phrase “the commenter” because it would not have been correct to say “you”, as I was aware you were not the source of many of them.
I can see where you're coming from, but my intent was to talk about the very first interaction that started all of this nonsense:
- I said WebAssembly can already manipulate the DOM with functions.
- He asked for an ergonomic example because StackOverflow told him it can't be done. The "we can have DOM access at home" bit seems like the start of things to come.
- I provided a concise example, and expressed skepticism that this would settle the discussion.
- He responded with sarcasm, and weirdly accused me of sarcasm.
- I reacted poorly to his bitchy and ungrateful reply.
My best guess is that the WAT format confused him. He didn't know it was a programming language, and he didn't know you could do it with other programming languages, so he got insecure and lashed out.
Do you have a better explanation for the weird transition from technical discussion to flame war and hurt feelings?
They’re already moving it that way; it's not like it's without complexities and such either. WASM isn't the silver bullet everyone seems to cling to.
I feel like the WASM fervor has more to do with the fact that people don't enjoy using frontend tools or JavaScript, etc., than with the actual utility tradeoffs.
You're two library functions away from having it easy:

Copy from JavaScript to WebAssembly:

- Use TextEncoder to convert a JS String to a Uint8Array
- Copy the bytes from the Uint8Array into WebAssembly.Memory

Copy from WebAssembly to JavaScript:

- Copy the bytes from WebAssembly.Memory into a Uint8Array
- Use TextDecoder to convert the Uint8Array to a JS String
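In code, the two functions might look like this (assuming `memory` is the module's exported WebAssembly.Memory and the caller handles allocating `ptr`):

    // JS String -> bytes in wasm memory; returns the byte length written.
    function stringToWasm(str, memory, ptr) {
      const bytes = new TextEncoder().encode(str);                  // UTF-8 Uint8Array
      new Uint8Array(memory.buffer, ptr, bytes.length).set(bytes);  // copy in
      return bytes.length;
    }

    // Bytes in wasm memory -> JS String.
    function stringFromWasm(memory, ptr, len) {
      return new TextDecoder().decode(new Uint8Array(memory.buffer, ptr, len));
    }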
JS Strings are pretty much always going to be "rope data structures". Trying to provide anything other than copy-in and copy-out is going to expose implementation details that are complicated as fuck and not portable between browsers.
"the overhead of importing glue code is prohibitive for primitives such as String, ArrayBuffer, RegExp, Map, and BigInt where the desired overhead of operations is a tight sequence of inline instructions, not an indirect function call"
I guess the more elegant and universal stringref proposal is DEAD now!?
We don’t yet have consensus on this proposal in the Wasm standardization group, and we may never reach there, although I think it’s still possible. As I understand them, the objections are two-fold:
- WebAssembly is an instruction set, like AArch64 or x86. Strings are too high-level, and should be built on top, for example with (array i8).
- The requirement to support fast WTF-16 code unit access will mean that we are effectively standardizing JavaScript strings.
I really like stringref and hope the detractors can be convinced of its usefulness. Dealing with strings is not fun right now.
And dealing with strings isn't fun in many other languages or runtimes or OSes.
e.g.1. C# "Strings in .NET are stored using UTF-16 encoding. UTF-8 is the standard for Web protocols and other important libraries. Beginning in C# 11, you can add the u8 suffix to a string literal to specify UTF-8 encoding. UTF-8 literals are stored as ReadOnlySpan<byte> objects" - https://learn.microsoft.com/en-us/dotnet/csharp/language-ref...
e.g.2. Erlang/BEAM/Elixir: "The Erlang string type is implemented as a single-linked-list of unicode code points. That is, if we write “Hello” in the language, this is represented as [$H, $e, $l, $l, $o]". The overhead of this representation is massive. Each Cons-cell use 8 bytes for the code point and 8 bytes for the pointer to the next value. This means that the 5-byte ASCII-representation of “Hello” is 5*16 = 80 bytes in the Erlang representation." - https://medium.com/@jlouis666/erlang-string-handling-7588daa...
> The Erlang string type is implemented as a single-linked-list
This refers just to Erlang's string() type, not BEAM strings in general; it's just a bad default. If you're not using binaries, you're doing it wrong, and that's exactly why Elixir's strings are UTF-8 binaries.
Thank you for the links. To the extent I understood it from a quick reading, it all looks like stuff you could get with the existing import/export mechanisms. I would choose (modified) UTF-8, but I understand why UTF-16 is always going to be around.
I agree about keeping wasm bytecode cleaner. The core plus simd stuff is such a great generalization of the ARM and X86 CPUs we mostly use. The idea of gunking it all up with DOM related stuff is distasteful.
It supports nearly arbitrary imports of anything you want. How much more flexibility do you need? You could provide an `eval` function to run arbitrary code with a small amount of effort.
Is the problem that Emscripten and/or Rust haven't laid it all out on a platter?
- a bytecode "language" that roughly corresponds to Javascript semantics, and that is what the engines interpret (and JIT compile)
- browsers still include a compiler to compile JS sourcecode to the bytecode. Possibly wasm could work, although it would need a lot more functionality, like native support for GC, and DOM access, etc.
- browsers include a disassembler/decompiler to improve debugging the bytecode
Then simple sites, and development can use plain JS source, but for higher performance you can just deploy the pre-compiled bytecode.
I am certain this would mainly add problems while not improving performance, as websites would just add more stuff on top until it's again at a barely tolerable level, for both the user and the developer, who now probably has to manage yet another super simple, blazingly fast tool to keep everything running.
Java is a product of the JVM, which was the innovation, not the reverse. A successful language moving post-success to a new byte code format would be as far as I know unprecedented.
The idea that JavaScript is an interpreted language is also fairly shaky. It’s JIT compiled as soon as it arrives in your browser. Honestly, a modern JS engine is no different from any other VM.
The question as you rightfully pointed really is what do you send to the browser and under it lies the fundamental question of what is a browser actually. Is it a way to browse hypertext content or a standardised execution environment?
The problem is within that gray area. For enjoyers of vanilla js, like myself, I'd hate it if "core" js got so small that I now started to require a compiler for my ES6 code. If I was in charge I'd say "fine, but the core must be at least as large as ES6" and I'd reserve the right to tweak browser native modules in minor ways (for example, it would be nice to support a SPA syntax where you could export/import modules from within the same page without either a) hacking the global object or b) generating unnecessary resources).
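To illustrate option b, the closest thing today is minting a throwaway module resource (the greet module here is made up):

    // Define and import a module from the same page by minting a blob URL.
    const src = 'export const greet = (name) => "hi " + name;';
    const url = URL.createObjectURL(new Blob([src], { type: "text/javascript" }));
    import(url).then((mod) => console.log(mod.greet("world")));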
I think you missed a tense. Was atrocious; we got jquery, which got folded back into javascript. If the last time you worked with vanilla js was when jquery was hot, your views are outdated.
1) JavaScript, the original assembly language of the internet, does not need new language features.
2) JavaScript, the front-end web development language is a fractal of infinitely many sub-languages that transpile back to ES5.
The proposal, as I read it, is: Let's stop adding front-end web features to the assembly language; it doesn't get easier, better or faster if we change the underlying, slowly-adopted and hard-to-optimize foundation.
When you want a new language feature, add it to the fractal part that transpiles back to the part well-supported and highly optimized in existing runtimes. The only loss is that you need to transpile, so your build pipeline becomes non-trivial. But it's either "code assembly" or "get a modern build tool".
This isn’t really true on a practical level any more. ES6 support is very widespread (97% of all web users according to caniuse.) That even includes module import syntax!
There are still some new language features that need to be transpiled, but most projects do not need to worry about transpiling const/let/arrow functions/etc.
I mean even newer features like nullish coalescing and optional chaining are at 93-94% support.
At the end of the day, I would say tools like babel for transpiling are less and less important. Yes, you still use a bundler because the web has a lot of runtime constraints other native applications don’t have (gotta ship a tiny bundle so the page loads fast), but it’s better for the language features to be implemented in the VM and not just faked with more JS.
> 3% of people is a lot of people to cut off if your JavaScript is essential
These are probably the 3% that won’t affect your business much. They’re more likely to be on older hardware and also have less discretionary income. Or browsing on really weird hardware that is also unlikely to lead to a sale.
People with "less discretionary income" still deserve to access the web in a way that isn't broken. This might come as a surprise nowadays, but the web can be useful for more than just selling things.
Did they solve GC and DOM access? It's been years since it was "just about to happen", and I stopped paying attention in the meantime. But if it has those, I agree - it would be ideal if JS became a legacy thing and a saner first-class WASM language replaced it.
Keep the single threaded event loop approach but kill the JS semantics.
The problem is that they promised that the WasmGC would include the much desired access to JavaScript objects but now this crucial aspect is no longer part of it and postponed again.
Wasm GC is shipped in stable releases of all major browsers except for Safari, but that will be changing shortly if it hasn't already (my info is a few weeks old). The important thing to note about Wasm is that all important functionality, such as access to I/O and the DOM, has to arrive in the form of host imports to a Wasm module. With this in mind, thanks to Wasm GC it is possible to do web UIs from Wasm by importing the relevant bits of the DOM API that the module needs. Projects like Hoot (Scheme) and the Kotlin port are already demoing such things.
> Projects like Hoot (Scheme) and Kotlin port are already demoing such things.
And Scala.js has shipped it. [1] Although technically experimental, it has no known bugs and it has full support of things like manipulating DOM objects from Scala.js-on-Wasm code.
I thought the js DOM API was atrocious. But they copied it over with Java object hierarchy into Scala.js. Makes me want to give up on coding altogether.
Actually I don't want DOM access and GC for wasm. At least not yet. It overcomplicates a lot, and I simply cannot imagine that one GC can fit all languages.
I want fixed-size buffer-backed structs for JS. Basically a DataView as a C struct. This would massively benefit interop and solve some shortcomings of DataView.
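Roughly this, which you can fake by hand today (the layout and names are just an illustration):

    // A fixed-layout "struct" over an ArrayBuffer: x: f32 @0, y: f32 @4, id: u32 @8.
    const POINT_SIZE = 12;
    function makePoint(buffer, byteOffset) {
      const view = new DataView(buffer, byteOffset, POINT_SIZE);
      return {
        get x()   { return view.getFloat32(0, true); },  // little-endian
        set x(v)  { view.setFloat32(0, v, true); },
        get y()   { return view.getFloat32(4, true); },
        set y(v)  { view.setFloat32(4, v, true); },
        get id()  { return view.getUint32(8, true); },
        set id(v) { view.setUint32(8, v, true); },
      };
    }
    const p = makePoint(new ArrayBuffer(POINT_SIZE), 0);
    p.x = 1.5; p.id = 42;  // the boilerplate buffer-backed structs would make free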
There was a proposal for a binary AST for JS several years ago [1]. Why not just use that as JS0? It's separate and can offer new possibilities as well.
good point. in a sense webassembly is that minimal, very performant language. let javascript and typescript compile to webassembly, and you've essentially got what is being proposed here
The DOM itself is very slow, much slower than Javascript, so you're not going to be seeing any great performance increase if WASM can access the DOM directly.
I also have to wonder if people are excited about replacing Javascript, why they would want to have HTML/CSS/DOM on top of WASM. A different front-end UI tech could be much better than slow, old DOM.
All imports have to come from the host, which in the case of the web means they have to be expressed as JavaScript. Behind the scenes they could be optimized, though, and I've heard that JS/Wasm engines may already be doing this with well-known imports (think Math.sin).
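For example, a module declaring an import like (import "env" "sin" (func ...)) can be handed the built-in directly (the file and compute export are hypothetical):

    WebAssembly.instantiateStreaming(fetch("math.wasm"), { env: { sin: Math.sin } })
      .then(({ instance }) => console.log(instance.exports.compute()));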
Drawing to canvas means recreating the UI and all its wide-sweeping concerns, which is quite an undertaking. And even if it were accomplished in a central open source library like Flutter, that adds a considerable amount to the package size of any application. Acceptable (or even preferred) for certain applications but not for most.
Providing access to an already proven DOM would be the better solution.
> And even if it were accomplished in a central open source library like Flutter, that adds a considerable amount to the package size of any application.
The download isn't much different to a typical website. That Flutter demo in wasm is 2 megabytes.
Uno Platform's WebAssembly implementation uses the DOM rather than drawing to canvas: https://platform.uno/
Uno's philosophy is to use platform native controls. The benefit is that you get platform native characteristics, the cost is it will never be exactly the same in each browser and platform.
You're joking, right? A 2mb bundle is *absolutely unacceptable*. People complain about React which is less than 100kb minified and gzipped. This website doesn't even include any images or anything...
What would access to the DOM look like? WASM already has import and export (nearly) arbitrary functions. People keep saying it can't manipulate the DOM, but it clearly can. So, what's missing?
I think we agree. JavaScript is awful, and TypeScript is simultaneously impressive and still awful. I think we have three options:
A) Get your hands dirty and write what you want. Once.
B) Chant along with the mob who doesn't even understand what they're asking for.
C) Wait several years for some super complicated solution to be designed by committee.
I wouldn't even want direct access to the DOM if we had it today. The DOM as an API is atrocious.
Instead, I want a set of nice functions that do things like put a graphical chart on the page - all in one call. Or one call to pass a bunch of 3D triangles or splats to visualize in a WebGL canvas. Or one call to play some audio samples. Or a function to poll for recording audio. And so on...
if option A works, why aren't there any frameworks yet that implement it?
maybe all the framework devs are waiting for C?
but why?
you could be right about A but at present the majority view seems to be that C is the right option. which is what pushes me into going with B because i have no interest in developing my own framework.
if a framework appears that implements option A i'll gladly consider it. (just as long as it isn't tightly coupled with a backend)
So throwing out literally 99% of what makes the web actually portable and useful?
A random drawn rectangle is not a UI, it’s not accessible, not inspectable, not part of the de facto OS native toolkit.
If all we wanted is a random cross-platform canvas element to draw onto from a vm, it could be solved in a weekend. There are million examples of that.
Its web-targeted version is still not accessible, even though they promised that they would actually render to HTML elements as much as possible. A single canvas element is not that.
> The Flutter team would like to eventually turn the semantics on by default in Flutter Web. However, at the moment, this would lead to noticeable performance costs in a significant number of cases, and requires some optimization before the default can be changed
Ah, so you admit it does indeed include accessibility but now what you're complaining about is performance. Not that you've actually tried it of course.
Can you give an example of anything anywhere that manipulates the DOM without using JavaScript? Because it seems to me that pretty much every web application is currently using the javascript host, and the well written ones are pretty snappy.
this is going beyond my level of experience, but i thought there can't be any such example because javascript is the only way. the difference is between code written in javascript which is fast of course and accessing js functions from WASM, which is slower. how much slower, i don't know. i also don't know how old that discussion is where i learned about this. so maybe it improved since. that would be good news.
did you mean there are snappy web applications running in WASM? if you have any examples, i'd be curious to learn more.
> i thought there can't be any such example because javascript is the only way. the difference is between code written in javascript which is fast of course and accessing js functions from WASM
That doesn't have to be true.
Eventually WASM will get direct access to the full browser API, without going through JavaScript.
The browser exposes a browser API to the JavaScript VM it hosts, so things like the DOM are available.
Those things aren't available in other JavaScript VMs, like Node. (There's no DOM to interact with.)
And they're not yet available in the WASM VM in the browser, either.
The reason is that the WASM APIs/ABIs have not stabilised. It takes time to make right, but there is progress.
> Eventually WASM will get direct access to the full browser API, without going through JavaScript.
well, that is what i am waiting for. my point is that it's not the case yet, while the gp seemed to suggest that it's not needed because access through the host is available
A fundamental aspect of the Wasm capability security model is that all access to the outside world (I/O) is controlled via imports. Direct access to the entire browser API doesn't make sense in this context.
> [...] go through the javascript host, which is slow
And now you admit:
> this is going beyond my level of experience [...]
> how much slower, i don't know
I guess people just repeat what they hear without questioning or understanding it, and then it becomes dogma.
> did you mean there are snappy webapplications running in WASM?
No. I meant that all existing web apps go through the "javascript host", using JavaScript. So if any of them are fast enough, and some certainly are, the problem isn't the "javascript host".
I commented this elsewhere, but the funny thing is that asm.js was the precursor to WebAssembly, and this proposal is essentially asking for asm.js back again.
Yes and no, there is a significant bundle size problem with wasm which is hard to fix.
I'd rather we just move to native cross platform applications and stop using a document browser to build interactive applications.
What's more likely is that all of this will probably be eclipsed by LLM and virtual assistants - which could be controlled by native apps with a dynamically generated GUI or voice.
I think APIs exposing data and executing functions will fundamentally change what we think the web is.
Throwing out the baby with the bath water? There are millions of standardized APIs available in the browser that would be probably impossible to recreate in anything else due to failing consensus.
Not that please. The flutter example took 20 seconds to load and the scroll is super choppy.
It’s unfortunate there isn’t a more native “app like” UI toolkit. Especially on mobile, web apps generally are bad and a lot of the reason is trying to shoehorn an app experience onto the dom.
It loads fast for me in Firefox, MS Edge, and Chrome. It's a 2 megabyte transfer and runs quickly.
If you're using Safari it's true that Safari's WebAssembly implementation is behind the other browsers. But that's a Safari problem more than a WebAssembly problem.
Assembly is not a language runtime. Even if you had WebAssembly as the core, you'd still need to compile JavaScript to WebAssembly, manage the GC, etc which all would still suffer from the performance implications mentioned in the article.
Also... do you really think it's wise to rewrite v8 to target WebAssembly?
Amazon has been migrating its Prime Video app from JavaScript to WebAssembly. They're compiling Rust to WebAssembly and they've seen increased performance and lower memory usage:
Well, this person is not the only one who thinks js (and web stuff in general) is terrible. I mostly dislike the pace of crapification; I comment almost daily on repositories asking why they made some breaking change with zero benefits for the user. Usually 'we are cleaning up' or so; there are npms that really never need to change, yet they change weekly just to 'keep them fresh'. That kind of misery is just weird and frankly depressing: update something one week after writing it and plop, it breaks. For No Reason, as no improvements were made; just breakage for the sake of upping the date on github and npm (the most terrible, I find, are the VC-backed open source ones; I guess they have to update all the time for the sake of the VCs looking in, even though no updates are required). I use lisp, c++ and go libraries that haven't been updated in 10 years; guess what: they work, and will work well and stably for another decade, because they really never need anything new and people are not vying for update kudos or whatever.
But you can use vanilla js, you say? Yes, it's true, but I find js very terrible and I want to write things in common lisp or, if need be, Go. Neither of these requires any bullshit with tooling or anything hard. You can learn enough Go in a few hours to be productive (like C), and llms are super at it (unlike js/ts, where they produce things that have had breaking updates 40 times since the llm knowledge cutoff, for no reason at all). With the vugu framework you don't need to touch js either. Common lisp is also not hard to learn - a bit harder, but with even better tooling - and you don't need to know all of it to write nice stuff; there is the clog framework, which is basically all you need to get going, as it is an ide and web dev environment, and you get away with writing almost no js.
So you need to install node and npm and then typescript. Go is 1 binary to download and throw in a dir.
I don't find js hard to learn (I have been programming in it since it came out; I was in the CMS business since the early 90s), I just find it ugly and annoying to work with. But yeah, I guess the ecosystem definitely doesn't help, as it's hard to see the proliferation of terrible software/frameworks and habits as totally separate from the language; apparently there is something in it that attracts these terrible practices.
Kotlin is lipstick on a pig that effectively doesn’t exist outside of IntelliJ: a single vendor that is only interested in driving sales to their IDEs, and it doesn’t work without bringing in baroque Gradle.
It has an amazing coroutines library, and it started with a nice set of features but failed to evolve. Sealed types are a joke compared to union types in TS. No inline types, so you’re forced to create stupid data classes everywhere even if one is used only once. Constant fight between wannabe functional programmers, who try to replicate Rust’s Result monad without official language support, and the exceptions crowd.
Static delegation. Still no pattern matching when even freaking Java has it nowadays. Hilarious. Constant focus on KMM, even though language stagnated for a while.
Rust is just a pain to develop in. It's slow to compile, and you constantly have to please the borrow checker. I’m not sure if you’re joking, but you can’t seriously think that Rust is a better language for prototyping than JS/TS.
> designed much later and not so hastily, so they have less warts than JavaScript as a result.
That’s irrelevant. Modern JS has evolved over the years and is a joy to use now.
No, I wasn't suggesting that Rust is better for prototyping, the only thing I said is that Kotlin and Rust (that you mentioned yourself) are better than JavaScript because they were designed much later and more thoroughly. TypeScript is a different thing to me, it makes part of this new generation of languages, its only problem is that it works on top of JavaScript.
Then you add capability to hide the browser chrome and build close to native user experience and we will have truly looped the loop. Think about that: the browser as an intermediation ensuring resources are properly shared and each application is shielded from the others. Everything that’s old is new again. /s
On a serious note I don’t see the point in turning browsers into an OS on top of the OS. I know it’s some kind of Google wet dream so they can suck up even more data than they already do but still. If you want to ship applications, just do that. The sandboxing should be done at the OS level where it belongs.
I was in your camp, but we truly lost. Almost everyone I know literally ONLY uses a browser for everything. On mobile some things force you app use, but if that weren't the case, people would use it from a browser. Games in browsers, movies, email, anything. So it makes sense to only open a browser in an OS; it is basically what users do anyway.
Sounds like the presenters are assuming that everyone must be using JS in the same way they are. As someone who prefers to work in VanillaJS, I'm already frustrated by all the language changes being forced upon me.
So maybe I agree with them that fewer changes should be made, but for completely different reasons.
I'd rather them work on API improvements so that web apps get closer to parity with native apps. All this other talk is pointless while web apps remain second class IMO
What do you mean forced upon you? There is no requirement for you to use new features. If you do, it’s because you or your team thinks it’s more convenient — and it really is! map, filter, arrow functions, scope improvements with const/let, etc. are major language improvements. So is null coalescing and optional chaining. You don’t have to use them, but they do make JS more straightforward to work with.
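For what it's worth, those features read pretty well together:

    // map/filter, arrow functions, const/let, optional chaining, nullish coalescing
    const users = [{ name: "Ada", meta: { active: true } }, { name: "Bob" }];
    const activeNames = users
      .filter((u) => u.meta?.active)       // no crash when meta is missing
      .map((u) => u.name ?? "anonymous");  // default only for null/undefined
    let count = activeNames.length;        // 1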
As a dabbler in JS, it jumps out at me that a lot of tools are trying to force ESM, despite the fact that ESM lacks many essential features that other module systems have.
Conditional imports (e.g. depending on browser-version, browser-vs-server) is the big one. This does not inhibit the alleged static analysis goal at all, but the specification is pointlessly hostile to it. I would've expected the "import assertions" proposal to cover this obvious problem, but no, it's just another useless thing that will always have to be transpiled into nonexistence.
Transparent polyfills (or anything that really needs to be loaded "first") also don't work, since async means you can't specify the order to load things. This means that every single module you write has to explicitly mention which exact polyfill it's going to end up using (hope you don't accidentally skip one or specify a conflicting polyfill) ... or you just abandon modules and load your polyfills with a normal script (which means that now your source is a bastard mixture).
Lazy imports are technically possible via dynamic imports, but unnecessarily annoying and break in all sorts of places. Granted, the standardization of the "leaky browser abstraction" makes this pretty awful regardless.
I think the proposal is actually good for people like you.
You can choose to just work with the core and maybe a minimal sugar library, which will probably be faster and doesn't include "all the language features that were forced upon you".
> Regarding BigInt, the presentation states that “use cases never materialized.”
Yet every language has either that or BigDecimal. Even if Google's frontend devs haven't found a use, there also exist JS devs outside of Google who certainly have found uses (though possibly more of them on the backend).
Similarly, not every developer has a compilation step in their JS work. And there are places where you can't have one, e.g. in the browser console. Develop the language instead of tons of incompatible tools.
That part caught my attention too. It reminds me of the discussion to remove complex numbers from Go. Funny enough, compiler writers can't even imagine why you would want BigInt or Complex, because those aren't useful for writing compilers.
The problem with Google compiler developers is that they will do a search of the google3 repository, find no uses (because Google doesn’t do any advanced math, for example) and declare the language feature to be useless.
Which is ironic because I use Google Sheets a lot and occasionally run into problems when it doesn't support BigInt calculations. Sheets is the best excuse Google has for keeping BigInt support in the language.
one contributor to the pike programming language, when asked why he took the effort to optimize syntactic sugar, responded: so that pike users can write simple code and still have it run blazingly fast.
in pike, bigint and int are integrated in such a way that the transition is automatic. there is no overflow but as long as the values fit in an int, that is used internally to keep the code fast.
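you can sketch the same idea by hand in js (pike does this in the runtime; this is only an illustration):

    // keep plain numbers while they fit, promote to BigInt when they'd overflow
    function add(a, b) {
      if (typeof a === "number" && typeof b === "number") {
        const sum = a + b;
        if (Number.isSafeInteger(sum)) return sum;  // fast path: machine int
      }
      return BigInt(a) + BigInt(b);                 // slow path: bigint
    }
    add(1, 2);            // 3, a plain number
    add(2 ** 53 - 1, 1);  // 9007199254740992n, promoted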
Nice or not, pretending that double precision floats and arbitrary precision integers can be stacked as a tower is foolish. There are floats that can't be represented as integers, and integers which can't be represented as floats.
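Concretely, in JS notation:

    Number(2n ** 64n) === Number(2n ** 64n + 1n)  // true: distinct integers, same double
    BigInt(0.5)              // throws RangeError: this double has no integer value
    2 ** 53 === 2 ** 53 + 1  // true: doubles run out of exact integers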
This is where you say something about "exact" vs "inexact" as though that will hand wave it away.
> Nice or not, pretending that double precision floats and arbitrary precision integers can be stacked as a tower is foolish. There are floats that can't be represented as integers, and integers which can't be represented as floats.
The numeric tower in Scheme describes general number types, with "above" in the tower graphic (in the Wikipedia article) meaning "subtype of". Double precision floats and arbitrary precision integers are representations of numbers. Both would also be Real numbers.
> This is where you say something about "exact" vs "inexact" as though that will hand wave it away.
I'm not familiar with this debate, but how is that a hand wave? The article describes a reasonable-sounding way to extend the tower with a second dimension of precision. Following those rules, you would never just convert between bigint and float, but an expression involving both would output a float.
I wouldn't parade it around as a triumph over the problem, and it's arguably better to require people to be explicit about whether converting the float to bigint, or the bigint to float, is what you wanted.
That doesn't really strike me as worse than any other use of == on a float. If anything needs to change there, I think it's more rigor in float comparisons.
Basically, ULP-level inaccuracy is a problem inherent to having float at all, even without bignum interactions. They would be a menace even if you had a pure tower from 32 bit int to double to complex to more.
I wasn't trying to draw attention to comparisons for equality. Perhaps I should've used an arrow => instead of == to indicate "the result of this operation", but that probably would've caused confusion too...
The real point is that you can get some non-intuitive answers from letting that numeric tower make conversion decisions for you. It's just a rule, and it's not an amazing rule.
> I wasn't trying to draw attention to comparisons for equality. Perhaps I should've used an arrow => instead
Then it's even less of an issue. Yes, if you convert to a float you get rounding; what did you expect when you introduced a float?
It's somewhat unintuitive but that's the nature of floating point.
> The real point is that you can get some non-intuitive answers from letting that numeric tower make conversion decisions for you. It's just a rule, and it's not an amazing rule.
But again, you can have the same kind of issue without bignums. It's not a tower problem, it's a float problem.
The specific type of conversion is one I don't see as a big issue. The programmer deliberately decided to use an imprecise data type for the calculation.
But more importantly, I'm saying that the problematic rounding can occur even if your tower does not have both bigint and float. It can happen even if every layer can completely represent every value of the layer above it. Do you have any complaints that are unique to a tower that has both bigint and float, and don't apply to towers that only have float?
To elaborate on that, an implicit cast directly from a single bigint to a single float won't happen with the rules in the wikipedia article. You'd have to do something like bigint+float, which can have horrible rounding errors, but those horrible rounding errors are also present in float+float.
And you can even have these problems without a tower. So I don't see how the bigint and float scenario is an argument against towers.
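To make that concrete in plain JavaScript, which has no tower at all:

    0.1 + 0.2 === 0.3;                          // false -- classic float rounding
    9007199254740992 + 1 === 9007199254740992;  // true -- the +1 is lost above 2^53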
From a distance, I kind of like Scheme, so I went and re-read the R5RS section on the topic. To me, the numeric tower (generalized) says:
ℕ ⊂ ℤ ⊂ ℚ ⊂ ℝ ⊂ ℂ ⊂ ℍ
That's a nice statement about idealized sets of numeric values. So `integer?` implies `rational?` implies `real?` implies `complex?` implies `number?` in Scheme predicates and type conversions.
But no programming language can have "Reals" (they aren't computable), so floats are a common/useful approximation. And in actuality `bigint?` doesn't imply `floating?`, and `floating?` doesn't imply `bigint?`. Neither is a strict subset of the other, and because of this you can easily find examples where implicit conversion does something "questionable". You've made it about rounding errors, but I'm trying to criticize something about pretending they are subtypes/subsets. Claiming it's a tower and hand waving about exact/inexact doesn't make it a tower, and so I think implicit conversion for these is a poor choice.
You can have little subset relations for implicit conversions: int32 ⊂ float64, float64 ⊂ complex, and so on. But it's not much of a tower, and it's more of a collection of DAGs.
> But it's not much of a tower, and it's more of a collection of DAGs.
> I'm trying to criticize something about pretending they are subtypes/subsets. Claiming it's a tower and hand waving about exact/inexact doesn't make it a tower
I thought we established right away that it's not a single tower. The description in the wikipedia page is two towers with links between them. (Or at least it's two if you don't waste effort on things like having both float64 and complex32.)
But I don't see any hand waving. The relationships and conversions are very clear. That's why I interpreted your complaint as being more about the specific operation. So with your correction, I need you to explain where you see hand-waving.
If you just don't like the name "Tower" for an implementation that has both bignums and floats then okay I agree I guess?
> I thought we established right away that it's not a single tower.
Where did we say that? The first picture on the Wikipedia page shows the tower as a linear stack of items from set theory. The Scheme predicates are named similarly. This is the appealing myth.
> The description in the wikipedia page is two towers with links between them.
Not on the page I'm seeing. Are you reading the English page? At the bottom, I see a tree of abstract types (sets).
This shows that you can traverse (Integer to Rational to Real) and (Float to Real) to find the common abstract type Real. But there isn't actually a Real type you can do operations with. You've got concrete BigInt and Float64, and even if Real is implemented as a C-style tagged-union of the two types, you still need to pick one or the other for doing operations like addition. Then the Scheme standard says stuff like, "try to be exact when you can, but inexact is ok sometimes". So all the set theory justification is out the window, and it's really just an ad hoc rule.
It's just not as elegant as it seems, and it gives an unsound justification to making implicit conversions.
> If you just don't like the name "Tower" then okay I agree I guess?
Please don't do that. I've tried to clarify details in response to your questions, but if you're just going to dismiss it with some snarky crap like that then you can go fuck yourself.
Reply if you want, but I'm guessing we're done here.
> Where did we say that? The first picture on the Wikipedia page shows the tower as a linear stack of items from set theory. The Scheme predicates are named similarly. This is the appealing myth.
In the section where the wikipedia page talks about exact and inexact, the specific thing you were calling out, it says "Another common variation is to support both exact and inexact versions of the tower or parts of it; R7RS Scheme recommends but does not strictly require this of implementations. In this case, similar semantics are used to determine the permissibility of implicit coercion: inexactness is a contagious property of numbers,[6] and any numerical operation involving both exact and inexact values must yield inexact return values of at least the same precision as the most precise inexact number appearing in the expression, unless the precision is practically infinite (e.g. containing a detectable repetend), or unless it can be proven that the precision of the result of the operation is independent of the inexactness of any of its operands (for example, a series of multiplications where at least one multiplicand is 0).".
I want to especially highlight the phrase "exact and inexact versions of the tower or parts of it", which I then reacted to by saying "The article describes a reasonable-sounding way to extend the tower with a second dimension of precision." Once you have two dimensions it's no longer a single tower. I thought that was the common ground that we were talking on, that if you use that method it's not a true tower anymore.
> Please don't do that. I've tried to clarify details in response to your questions, but if you're just going to dismiss it with some snarky crap like that then you can go fuck yourself.
That wasn't snark. I am really trying to understand your argument, because it looks like we've been talking about different things the entire time.
I had thought we established from the very start that the description on the wiki page wasn't actually a single tower. If you are still trying to convince me it's a more complicated graph, then I agree with you, and I don't understand how we got so far without that being clear. Sorry for sounding reductionist about it.
So please, honest question for clarification, do you object to the graph of number types described by that paragraph, do you object to using the word "tower" to talk about it, or do you object to both? Please don't get mad at me for asking, or think I'm trying to dismiss you.
And if someone builds a pure tower that goes int32, double, complex, quaternion, do you think that's inherently self-defeating because it can't live up to the promises of a tower? It doesn't have the issue of floats versus bignums; it's strict subsets all the way down.
I'm sorry for misreading your comment about the term "tower". :-)
> do you object to using the word "tower" to talk about it
No, I don't really care about the terminology, except when it helps to communicate.
> do you object to the graph of number types described by that paragraph
I think the problem boils down to using a flawed analogy to arrive at a conclusion and then pretending the conclusion is sound and elegant. There are really two things going on:
First, we've got a tower, or tree, or DAG of "abstract" types. These are mathematical constructs or Platonic ideals. So you can build a tower that says "All Integers are Rationals" and "All Rationals are Reals". And it's supported by Set Theory! So you conclude that you can use an Integer anywhere that a Rational or Real is allowed. Then, knowing that we're going to apply this to a programming language, you add "All Floats are Reals". Fine, we've got abstract Floats, and it looks lovely.
Second we've got actual "concrete" data types. These are things like Float64, Int32, or BigInt. Importantly, you can't have an implementation of Real anything. In general, Real numbers can't be processed on a Turing machine. You can have a tagged union of Computable things, but that's not really the same as "Real" in bold quotes.
Ok, so the mistake comes when you try to combine those first and second sets of things. We say concrete BigInt is like the abstract Integers, and concrete Float64 is like the abstract Floats. So far so good. Then we look at the abstract tower, we decide that Integers and Floats need to become Real, so we say BigInt and Float64 need to use Reals to get a common type. But there is no common type. We said the concrete types are analogous to the abstract types and made an unsound conclusion.
Finally, we write the compiler, and reality hits us. So we go back to the standard and add some bits about "Some things really should be Exact. Conforming implementations should try to avoid Inexact when they can." It's not a separate tower - it's a bandaid for flawed logic.
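A minimal sketch of that dilemma in JavaScript terms (a hypothetical add(), not any real Scheme's internals):

    // A "Real" as a tagged union of bigint and number still has to pick
    // a concrete representation before it can do anything.
    function add(a, b) {
      if (typeof a === 'bigint' && typeof b === 'bigint') {
        return a + b; // the exact path
      }
      // Mixed case: the "inexactness is contagious" rule forces a lossy
      // conversion down to double.
      const x = typeof a === 'bigint' ? Number(a) : a; // may round!
      const y = typeof b === 'bigint' ? Number(b) : b;
      return x + y;
    }

    add(2n ** 64n, 0.5); // 18446744073709552000 -- the 0.5 vanished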
Anyways, this is all a bit too philosophical. I'm not actually passionate about it, but our discussion kept going, and you kept asking, so I kept trying to explain. Most people like implicit conversions in their programming languages, and so you've got to make up some rules. I just don't like pretending the rules are not ad hoc, and it's nothing a smug lisp weenie should really be smug about.
> And if someone builds a pure tower that goes int32, double, complex, quaternion, do you think that's inherently self-defeating because it can't live up to the promises of a tower?
Assuming the obvious implementation of complex and quaternion built on two or four doubles, it's fine. Each type represents a set that is a proper subset of the next type in the list.
Annoyingly, it'll all go to crap if you have int64 though, since int64 values above 2^53 can't all be represented exactly in a double.
You gotta read between the lines with the commenter above. Their name is a reference to "Smug Lisp Weeny", and they're part of the religion (cult) that thinks everything in Lisp (usually Common Lisp) is perfect. He couldn't care less about Pike, except as an excuse to be smug about Lisp.
Besides his nick, was he smug? He just noted that Lisp(likes) solve this problem for some version of 'solve' with the numerical tower. The implementations don't (usually; I am not aware of any) mix exact and inexact as that would be foolish obviously.
I used LPC a long time ago on an LP Mud, so Pike has always had a fond spot in the back of my mind, even if I don't use it now.
However, that works for int and bigint, but Number (double precision) can represent numbers that BigInt can not, and BigInt can represent numbers which Number can not. There isn't a graceful way to automatically promote/degrade one to the other in all cases, and a silent conversion will do the wrong thing in many cases.
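A quick JavaScript illustration of that mutual mismatch; notably, JS itself dodges the question by refusing to convert implicitly:

    0.5;                     // a Number with no BigInt equivalent
    Number(2n ** 64n) === Number(2n ** 64n + 1n); // true -- distinct BigInts
                                                  // collapse to the same double
    1n + 0.5;                // TypeError -- no silent promotion either way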
heh, yeah, that's where i started too. i don't know how or even if LPC did it, but in pike the transition really is seamless. give it a try. as a naive user i can't even tell the difference (you can see it though if you compare typeof(1) vs typeof(20000000000000000000) (i hope that number is big enough))
Example: Sindre Sorhus' FNV library uses BigInt to support hashes up to 1024 bits. It's quite popular (for a hashing algorithm) on NPM, with 80k+ downloads / week.
This stood out to me as well. Proper decimal type would be my #1 missing language feature, seconded by a standardized runtime implementation of the same.
The problem expressed is fundamentally correct, but the proposed solution is a band-aid, which is worse than not solving the problem at all. The fix imposes a long term change for short term benefits. Reliance on tooling will keep making code progressively larger and slower until we arrive at this problem again. At some point JavaScript must become a professional language written by adults, people capable of self-organization and measurement, and not be the subject of fashion by people who aren't qualified to program in the first place.
If the goal really is higher performance and lower complexity the most desirable solution is to create a new language with forced simplicity directly in the image of JavaScript, where simple means less and not easy, and transition to that new language slowly over time. Yes, I understand Google went down this road in the past with Dart, but Dart solved for all the wrong problems and thus was dead on arrival. Instead the idea is to take something that works well and shave off all the bullshit reducing it down to the smallest possible thing.
Forced simplicity means absolutely ignoring all vanity/stylistic concerns and instead only focusing on fewer ways of doing things. As an example, consider a language that requires strong typing, like TypeScript, and thereby eliminates type coercion. Another example is operators that have a single purpose (not overloaded), are a single character, and have no redundant alternatives.
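For instance, here is the kind of coercion a stricter core could drop entirely (all real JavaScript today):

    '1' + 1;   // '11' -- + is overloaded as string concatenation
    '1' - 1;   // 0    -- - coerces the string to a number
    [] + {};   // '[object Object]' -- both operands coerced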
Will there be a lot of crying about vanity bullshit... yes, absolutely. Just ignore it, because you cannot reasonably expect to drive above 120mph on a child's tricycle. If people wish to offer their own stylistic considerations they should include performance metrics and explanations of how their suggestions reduce the quantity of operations without unnecessary abstraction.
This is also a result of the detachment of TC39 from the developer community. Just how many JS developers are participating in TC39? I can recall multiple TC39 proposals that didn't even consult the authors of notable open-source stakeholder libraries, and went straight into stage 3.
And btw, the TypeScript tooling scene is far from ready to be standardized. TypeScript is basically a Microsoft thing, and we don't see a single non-official TypeScript tool that can do type-checking. There's no plan to port the official tools to a faster language like Rust, and tsc is not designed for doing traditional compiler optimizations. The TypeScript team made it clear that the goal of tsc is only to produce idiomatic JavaScript.
Are there debuggers that can single step over the transpiled bits so that it feels like the methods are implemented natively? Otherwise, it becomes a mess.
With the right debugging tools, transpiled alternatives to JavaScript are easier to debug than vanilla ES5.
For example: TypeScript's sourceMap [1], Elm's time-travelling debugger [2], Vue.js DevTools [3], just to name a few I've tried. Especially well-typed languages tend to behave well at run-time once they pass type-checking. Or rather, I have not made enough front-end code to discover transpiler bugs.
Because debugging better languages affords you more context and more tooling.
Elm's debugger lets you step forwards and backwards in the application's state.
TypeScript's type system lets you catch bugs before you run the code.
Vue.js's DevTools extend the browser's with a component-based overview, so you can interactively see what's going on at a high level of abstraction. (I'm sure something similar exists for most frameworks similar to Vue.js, and possibly even frameworks made in vanilla ES5, I'm just picking one I've tried.)
I started with your position (vanilla js 4ever!) and after being dragged into the world of transpilation via typescript/eslint/prettier/webpack/babel/etc I do agree that it’s at least as easy. Not sure about “easier” but my debugging needs are not exotic. The painful part is initially setting up the toolchain.
You wouldn’t see much difference as a user of those tools. And if you’re writing vanilla JS, you’d have less features creeping in over time. So it seems like you would benefit from this kind of change.
Who cares? If backwards compatibility is maintained then this fails to have any impact on my experience as a developer. It sounds like the VM maintainers are busy making their own lives hell. Not my problem.
I do. Maybe if someone programs in one language it's okay for them to keep up with language changes, but if you have to constantly juggle multiple languages it becomes a real chore to stay up to date with every one of them.
I use the language. The existence of new language features has not forced me to adopt them. The standard library for browsers is a different story but it is always going to be.
Thankfully.. both maintain reasonable backwards compatibility where security is not otherwise implicated.
Fuck, no. Everyone's free to make better tooling, but don't standardize it. There's no point. It'll only lead to further fragmentation. Libraries and frameworks will be split between plain JS and whatever this new version will be called. Just freeze the language and be done.
Imagine Google not having the resources to maintain their V8 engine after the hiring downturn, and telling us they want to change JavaScript because their V8 engine has become challenging to maintain.
A proposal rooted in attempts to improve the language would be one thing. This appears to be about Google struggling with technical competencies and not having the budget to do the right thing.
“A Google engineer presented … JavaScript VMs (virtual machines), they say, are “already very complex because of pressure to be fast,” which compromises security, and “feels especially bad” when new features do not get adoption.“
“The foundational technology of JavaScript should be simple, according to the proposal, because security flaws and the “complexity cost” of the runtimes affects billions of uses, whereas the benefits are restricted to developers and applications that actually use that complexity to advantage.”
Maybe try something besides C++ for V8 if you are having security issues?
The apathy towards proper tail calls in V8 leads me to distrust Google’s language proposals. But now that abdication appears to possibly have been a canary? Perhaps even prior to the pandemic they couldn’t keep up with maintaining the V8 C++ codebase, and that’s why PTCs got skipped?
Wouldn't it make more sense for some of these features to be implemented as a desugaring step in the runtime itself? i.e. if implementing them directly as new language features doesn't make sense, then preprocess them away before executing the scripts. You could even do this for past features that made it into ECMAScript but haven't turned out to be useful, instead of ossifying a specific moment in time's tooling.
> Wouldn't it make more sense for some of these features to be implemented as a desugaring step in the runtime itself?
I think if it were that simple, it would be done that way already (maybe it is, for some features). Two big arguments for doing the "desugaring" offline are (1) the speed and (2) the security of the browser. Those two things also conflict somewhat if addressed on the client, since faster but more complex compiler code increases the surface area for potential exploits.
But if you do this compile step offline, you don't need to worry about compromising the performance or security of the browser.
Why wouldn't you take the complexity and performance hit involved once at buildtime rather than offloading it to the client at runtime?
If you read the original slides from the proposers, they're presenting a framing where there is an inherent tension between "serving the user" and "helping the developer". They argue that there is too much of the latter, and that a formalized splitting should push more to the former.
From an end-user perspective, it definitely makes more sense that the js doesn't have to be transformed locally before it can be interpreted. I think your suggestion is not compatible with the motivations of the proposal.
If there’s still going to be a standard JSSugar, yes, seems like it should be desugared in the runtime. On the other hand, if we want to make it easier to fragment the high-level language into incompatible sugared versions, this seems like the way to go. (Hard to believe that would be the TC’s goal.)
> features to be implemented as a desugaring step in the runtime itself
The problem is distributing the runtime(s). By having developers transpile to a small core, anyone can freely invent new language features without waiting for the rest of the internet to download support for them.
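As a sketch of that split, consider how one piece of sugar already desugars today; the output shape is approximate, not any specific compiler's exact emit:

    const user = { profile: { name: 'ada' } }; // example input

    // Sugar (what the developer writes):
    const name = user?.profile?.name ?? 'anon';

    // Core (roughly what a transpiler emits for older engines):
    var _tmp = user == null ? void 0
             : user.profile == null ? void 0
             : user.profile.name;
    var nameDesugared = _tmp != null ? _tmp : 'anon';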
No word about Actionscript 3 (ECMAScript 4) here? It compiled to the ActionScript Virtual Machine 2 as bytecode. Everybody was happy. And then, Steve Jobs came around the corner and damned it with a single magical curse. Too bad.
Hell yes. I've been advocating for this for years. From an engine-implementer perspective, full-fledged JavaScript is just too hard to make both fast and secure.
One of the examples given makes sense, since Symbol.species messes with prototypical inheritance and is likely hard to secure as a result, because it touches so much of JS as a whole.
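For reference, this is the subclassing hook in question, runnable as-is; every built-in Array method has to consult it, which is exactly the engine-complexity complaint:

    class MyArray extends Array {
      // map/filter/slice etc. consult this to decide what to construct
      static get [Symbol.species]() { return Array; }
    }
    const a = new MyArray(1, 2, 3);
    a.map(x => x * 2) instanceof MyArray; // false -- species redirected it
    a.map(x => x * 2) instanceof Array;   // true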
BigInt failing to materialize I think has more to do with the ergonomics around it: BigInts are a bit unwieldy and aren't able to be used with the built-in Math object functions.
They also have zero JSON support out of the box which is a huge miss.
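Both pain points are easy to reproduce in plain JavaScript:

    Math.max(1n, 2n);           // TypeError -- Math functions only accept Number
    JSON.stringify({ id: 1n }); // TypeError -- no default BigInt serialization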
Honestly it should have been roadmapped to replace the built in Number type
Can't replace Number with BigInt as BigInt is orders of magnitude slower on certain operations. Try to do bitwise operations with BigInt, you'll see what I mean.
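An unscientific micro-benchmark sketch, if you want to see the gap yourself (absolute numbers will vary by engine):

    let a = 0;
    console.time('Number bitwise');
    for (let i = 0; i < 10_000_000; i++) a = (a ^ i) & 0xffff;
    console.timeEnd('Number bitwise');

    let b = 0n;
    console.time('BigInt bitwise');
    for (let i = 0n; i < 10_000_000n; i++) b = (b ^ i) & 0xffffn;
    console.timeEnd('BigInt bitwise');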
bigint cannot be replaced by regular number when working with arbitrarily large whole numbers. With regular numbers you will lose precision outside the range [Number.MIN_SAFE_INTEGER, Number.MAX_SAFE_INTEGER]. If you tried to implement such functionality in JS, it would be much slower than the native bigint implementation. So, it's great to have such a thing natively implemented.
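Concretely:

    Number.MAX_SAFE_INTEGER;                  // 9007199254740991
    9007199254740992 === 9007199254740993;    // true -- doubles can't tell them apart
    9007199254740992n === 9007199254740993n;  // false -- BigInt stays exact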
I think I understand the argument but this sounds like it would make things worse. The argument that new features almost always make the language worse doesn’t hold true from my perspective as a developer. (I could imagine the perspective of a language implementor being very different.)
I like that JavaScript now has modules/imports, destructuring, Proxies, async/await, etc. These were all new features at one point. But yeah, why did Symbol.species get in? Seems like it’s to enable some odd subclassing pattern? I’m an anti-OOP zealot, so my hot take would be that maybe OOP subclassing is unnecessarily complex already, so stuff like that shouldn’t make it in. We got the OOP syntactic sugar, which is enough. Stop there.
How much of the extra complexity is from stuff like that that is rarely used? Maybe we just need to be a lot more conservative about what makes it in, but stopping changes and forcing everything into more tooling complexity is not the direction I’d like to go in. We need to reduce tooling, not increase it.
If we take this logic, we should get rid of JavaScript support entirely and only support WASM, which would be a direction, but it would ignore how developers are using the platform.
It is now becoming rare that I see any serious project still using plain JavaScript; everyone I know is using TypeScript and I don't recall any job posts not requiring TypeScript. What are the standards bodies doing? They are still implementing hacks upon JavaScript instead of seeing the writing on the wall.
Maintaining JS engines is difficult because of the old stuff that few developers actually use, it would make a lot more sense to start deprecating those features and adding the new ones developers actually want.
I started using alternatives to NodeJS because I don't feel like I should subject myself to a compilation step if I don't have to.
I would like to see an in-depth treatise explaining why existing bytecode VMs (LLVM, JavaVM and Ecma CLR) were never seriously considered for the world of browsers. These VMs already exist for numerous platforms, have been optimized to death, already have a plethora of languages that compile to them, and, besides JavaVM, are open source (Ecma CLR exists in the Mono project). I've looked at WebAssembly and I don't understand why it needed to be reinvented from scratch and why it needs to be so limited. We could already be writing web code in Rust, Java, C#, Python, and heck even Haskell, if we had just done that. I know that I'm skipping over the engineering effort required to make this happen but I get a sense that the engineering effort is not the stumbling block. I want to know the details of what is.
My guess is that none of those bytecode VMs were designed with the explicit goal of running untrusted code at global scale in a rock-solid sandbox.
If anything, I expect those existing VMs to slowly be replaced by WebAssembly due to how crucial and complicated that very specific sandbox requirement is - and how useful that is once you have it working reliably.
Personally I never want to run untrusted code on any of my computers outside of a robust sandbox. I look forward to a future where every application I might install runs in a sandbox that I can trust.
The Web is an evolving system too large and long-lived for any single company, stable consortium, or standards body capable of doing the deed to do it, so none of Java, Flash (AVM), .NET/CLR, NaCl/PNaCl, Dart, and others I have forgotten about ever had a chance to take over.
Java was mismanaged as a plugin (and only ever a plugin -- no deep or even shallow browser integration worth talking about) by Sun, who tried getting it into Windows after Microsoft was killing Netscape (Microsoft then killed Java in Windows, pulled trigger on .NET; Oracle later bought Sun).
Flash had its day but fell to HTML5 and fast JS, Adobe threw in the towel well before Wasm announcement, even salted the earth re: good Flash tools instead of retargeting them at the Web.
Google was a house divided all along but had absolutely no plan for getting PNaCl supported by Apple, never mind Mozilla or Microsoft. I told them so, and still get blame and delicious tears to drink as I sit on my Throne of Skulls, having caused all of this by Giant-Fivehead mind control (testimony from one of my favorite minions at https://news.ycombinator.com/item?id=9555028).
"Secure Java" is something I recall hearing decades ago. No idea if it still exists.
The more important thing to consider, however, is the fact that CLR, JVM, etc. provide internal memory safety whereas Wasm runtimes don't.
e.g. a C program that goes sufficiently out of bounds on an array is guaranteed to segfault in the C runtime, but that runtime error does not necessarily occur on a wasm target. That is to say, the program in the sandbox can have totally strange runtime behavior -- still, defined behavior according to wasm -- although the program has undefined behavior in the source language. In the case of JVM languages, this can't really happen.
SecurityManager? Java's current direction (using the word "integrity" rather than "security", but seems relevant) looks interesting to me https://news.ycombinator.com/item?id=41520246
> As told in JavaScript: The First Twenty Years, Brendan Eich joined Netscape in April 1995.
> [..]
> However, Eich didn’t think he’d have to write a new language from scratch. There were existing options available — such as the research language, Scheme, or a Unix-based language like Perl or Python. So when he joined, Eich “was expecting to implement Scheme in the browser.” But the increasingly fractious politics of the software companies of the day (it was, basically, everyone against Microsoft) soon saw the project take a more creative turn.
> On 23 May 1995, Sun Microsystems launched a new programming language into the world: Java. As part of the launch, Netscape announced that it would license Java for use in the browser. This was all well and good, but Java didn’t really fit the bill for the web. Java is a general-purpose programming language that promised Write Once, Run Anywhere (WORA) functionality, but it was too complicated for web designers and other non-programmers to use. So Netscape decided it needed a scripting language, which was a trendy term at the time for a smaller, easier to learn programming language.
There's a whole lot more interesting stuff but I think that part directly answers most of what you're wondering.
They had to build a verifier that attempts to ensure the bytecode isn't doing anything bad. That proved to be fairly difficult, and comes at a considerable cost.
But it's not as if security concerns are specific to the Web. Look at the vulnerabilities found in CPUs over the last decade or so. Security is necessary no matter what the delivery medium, so I don't see why this is a rationale for reinventing the wheel.
NIH, plus CIL is probably ultra-overkill for browser-based scenarios. It implements a complex type system with all sorts of high-level features that significantly complicate the runtime/compiler. It makes it drastically easier to target but not to write an implementation.
I'm not a huge fan of WASM but it's easy to see that the authors would clearly not want to leave control in the hands of Microsoft or Oracle (and as a result all of us are hostages to Google instead because of evil that is Chromium).
They were.
Lots of reasons why it turned out how it turned out. Basically a local minimum in the gradient descent.
Computers were much slower is one reason. JVM wasn't open source at the time is another. NIH is another 100 reasons.
A core requirement of WebAssembly was that (ignoring I/O for the moment and considering only the computational core) you should be able to run arbitrary existing code on it, and the effort involved in getting it working should be comparable to porting to a new architecture, not to a new programming language. What this particularly meant, in practice, was that it needed to be a good compilation target for C and C++, since most code is written either in those languages or in interpreted languages whose interpreters are written in those languages. (It also needs to support languages for which that's not true, like Go, Rust, and Swift, but once you've got C and C++, those languages don't pose major additional conceptual difficulties.)
The JVM and CLR are poor compilation targets for C and C++, because those languages weren't designed to target those runtimes and those runtimes weren't designed to run those languages. (C++/CLI isn't C++.) It's possible to get something working, and a few people have tried, but you run into a lot of impedance mismatches and compatibility issues. I think you would see people run into a lot more problems trying to get their code running on the JVM or CLR than they in fact run into trying to get it running on WebAssembly. (Though I think the CLR is less bad about this than the JVM.)
As for the idea of using LLVM bitcode as an interchange format, we don't have to guess how that would have gone, because it was actually tried! Google implemented this in Chrome and called it PNaCl, and some sites and extensions relied on it for a while. They ultimately withdrew it in favor of WebAssembly. I don't understand all the reasons why it failed, but I think part of the problem is that it ran into a bunch of "the spec is whatever LLVM happens to do" type issues that were real problems for would-be toolchain authors and made the other browser vendors (including Apple, LLVM's de facto primary corporate sponsor) reluctant to support it. WebAssembly has a relatively short and simple standard that you can actually read; writing a WebAssembly interpreter is an undergraduate exercise, though of course writing a highly performant one is much more work.
Also, as far as I can tell, LLVM hasn't at all been optimized to death for the use case of runtime code generation, where the speed of the compiler is about as important as that of the generated code. The biggest dynamic language I know that uses LLVM is Julia, which is a decently big deal, but the overwhelming majority of LLVM usage is for ahead-of-time compilation of languages like C, C++, Swift, and Rust.
On a bigger-picture note, I'm not sure I at all understand why adopting an existing bytecode language would have made things easier. Yes, it would have been much easier to reuse existing Java code if the JVM had been adopted, or to reuse existing C# code if the CLR had been adopted, but those options are mutually exclusive; the goal was something that would work at least okay for all the languages. Python doesn't have a stable bytecode format, and Rust and Haskell compile to LLVM bitcode (which LLVM has no problem lowering to WebAssembly since WebAssembly was designed to make that straightforward), so I don't see how those languages are in any way disadvantaged by the choice of WebAssembly as the target bytecode language instead of some alternative.
Or are your concerns about I/O? That's a bigger can of worms, and you'd need to explain how you imagine this would work, but the short version is that reusing the interfaces that existing OSes provided would not have worked well, because the browser has a different (and in many ways better) security model.
This is not true. CIL could be an excellent compilation target for C++ and was quite literally made with that in mind. C# was inspired as much by C++ as it was by Java. And CLR back then was made with consideration of C++/CLI, which exists even today. You can't effectively express C++ code with JVM bytecode, you absolutely can with CIL. You can even express most of Rust's generics with CIL nowadays, retaining monomorphization, save for zero-sized structs and other edge cases.
I mostly don't mind JavaScript; the only thing for me is the Number data type and no int. The other annoying part is the lack of a standard library, so we get left-pad crap.
What disqualifies "the JVM" (usually referring to HotSpot implementations) from being considered open source? Are you talking about OpenJ9 or something else?
Java is as open-source as it gets (its reference implementation, OpenJDK, has the same license as the linux kernel)
And it was used by some browsers, there was just no consensus between different vendors due to politics. The problem largely solved itself by.. only one vendor remaining, chromium.
"ECMA TC39", not "Emca TC39". Also, looks like a bad markup link for TC39. Also note that it's either "co-authored by Mozilla, Apple, Moddable and Sony" or "authored by Guo along with others from Mozilla, Apple, Moddable and Sony", but directly related to that statement, that makes this "not a Google proposal" but clearly an "industry proposal" if it has Mozilla and Apple buy-in.
Also, "the proposed solution is not to backtrack on existing features" makes very little sense. If you're going to split something into core and "compiles down to core", then a LOT of features can be moved out of core because they're just (definitely worth keeping, but not necessary in core if that split were made) convenience APIs.
> 1995 - Brendan Eich reads up on every mistake ever made in designing a programming language, invents a few more, and creates LiveScript. Later, in an effort to cash in on the popularity of Java the language is renamed JavaScript. Later still, in an effort to cash in on the popularity of skin diseases the language is renamed ECMAScript.
If this happened then JavaScript would be split into 20+ languages, one for each popular compiler. There would be nothing stopping a tool maker from adding their favorite language features even if no other compiler ever adopted them. That would be a disaster.
In many ways this has already started happening. TS has enums, Svelte has runes, React has jsx. None of these features exist in JS, they are all compile-time syntax sugar.
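For example, JSX is pure compile-time sugar over ordinary function calls. A sketch assuming the classic React transform (the JSX line is shown as a comment, since it isn't valid JavaScript on its own):

    import React from 'react'; // assumed dependency

    // What the developer writes (JSX):
    //   const el = <div id="x">hi</div>;

    // What the classic transform emits (plain JavaScript):
    const el = React.createElement('div', { id: 'x' }, 'hi');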
While it is admittedly confusing to have all these different flavors of JS, I don’t think this proposal is actually as radical as it seems.
Outside of browsers this is already how it works. Nothing is stopping LLVM versus GCC from adding their favorite language features, but it's not a disaster.
absolutely - one for the great minds to play with while running in great circles and one that passes https://jslint.com and is allowed to be on the internet.
It's a good idea, but a better idea would be to make the browser a virtual machine running wasm. HTML, SVG and everything else could be implemented in wasm, and loaded from the cloud as needed.
> It's a good idea, but a better idea would be to make the browser a virtual machine running wasm. HTML, SVG and everything else could be implemented in wasm, and loaded from the cloud as needed.
So now huge swaths of use cases are going to be killed by this change. E.g. AdBlock, NewPipe and yt-dlp - how is that better? All of them (except maybe AdBlock) rely on parsing incoming JS from YouTube, which will be rendered obsolete by a WebAssembly blob.
WASM is slower than running JavaScript on V8 in almost all scenarios and will likely continue to be for a very long time. Also, many of us don’t want a compile step.
While I don't want any compile step either (js should stay), I'm actually confused by your statement.. are there any benchmarks? Are you saying that for example v86 would run faster without wasm?
I think that would probably fall outside the norm. My information might be outdated, but I was under the impression that JavaScript usually wins in most algorithm benchmarks because the JIT is so good.
I don't understand the problem? If v8 wants to split into frontend (sugar, transpiled) and backend language (es5 + few things probably; runtime engine proper), they can just do it, no?
All this championing of JavaScript as a single language for front and back end work, and now it's to be split for different use cases. Hopefully this is how JavaScript dies, if that is the route Google pushes it down.
There is already too much exhaustion around switching frameworks and paradigms in the JS world, but I guess everyone likes getting jerked around by corpos and evangelists these days.
I’m really tired of this discourse. The JavaScript ecosystem is the lingua franca of the web. Furthermore, while a segment of the programming community has sat around complaining, JavaScript has gotten really good and continues to improve every passing year. Incremental progress is the key to making progress, not giant paradigm shifts.
We’ve run React for almost a decade now and the only major part we’ve swapped has been the React build tooling for Vite. Angular has been even more stable since the switch to TS. As far as the frontend frameworks themselves changing massively, that’s a different story, but it’s not like C# didn’t go from Windows .Net to Core/Framework to cross platform .Net, and so on for different language frameworks.
On the Backend there are very few issues, outside of FFI only being in unstable for Deno I suppose, but you could frankly be running the same old Express API you did a decade ago and be perfectly fine.
If you’re burnt out on changes and keeping up with things I think the issue is mostly a “you” issue. You don’t have to chase down the latest hypes or fads. In fact I think you almost never should.
I’m not chasing down hypes and fads, the new product person who wants to make a splash by rewriting the core app does.
This is an incredibly disingenuous response. You maybe like the world this way. It doesn’t mean there isn’t room for change or improvement away from Javascript.
> What's wrong with VanillaJS?
Absolutely nothing, we all love it, and we also love things built upon that foundation, like TypeScript. But it's optional, and that's a good thing that some people fail to recognize. Therefore, they seek more standardization that 'should be enforced' by your Big Brand's top used product (i.e. the browser).
My point is that "framework fatigue" is a self-inflicted problem. Nobody forces you to use flavor of the week, VanillaJS and bog standard HTML/CSS are always there for you.
Work in a publicly traded company where people are moving things around for promotions sake. Then you’ll see how forced you are to use the latest flavor of the week. People absolutely do force you.
It’s not just the flavor of the week frameworks, it’s libraries and best practices. Want to work with dates? Do you use moment? Nope that’s deprecated, what do you use? Which moment successor? How do you write react? Classes or functions? You can’t use hooks with classes, so you better update to functions. On and on you run into a decision tree because of the shifting target of javascript. It causes a lot of churn to be migrating and updating to new systems, especially when the new hire can’t help because they don’t understand prototypal inheritance.
> Work in a publicly traded company where people are moving things around for promotions sake. Then you’ll see how forced you are to use the latest flavor of the week. People absolutely do force you.
I can tell you such stories about any language, it’s not unique to JS. Welcome to working with people.
Do not sit there and tell me JavaScript hasn't absolutely proliferated across the stack and that these problems don't surface more. Just because ANYONE can introduce ANY framework in ANY language doesn't mean that JavaScript hasn't championed a lot of those issues. You're handwaving away my points for no good reason.
The US government needs to fast track breaking up Google asap. Chrome needs to be torn from their festering lich hands, so that the web can be free of their self serving, and frankly bad, proposals.
Out of curiosity, which of the fragments of Google would you expect to take ownership over that codebase?
I had always imagined that if the DoJ took any action it would be to cleave the ad business away from Google. Although if they went so far as to take action against GCP I bet Amazon, Amazon Marketplace, and AWS would start to get sweaty palms
I'm a little annoyed that they said BigInt use cases never materialized. I've used it several times and it isn't something that's easy to transpile. A bigint class has bad ergonomics and there's no way the perf can be equivalent.
Same thoughts. It's irreplaceable when working with arbitrarily sized integers. Even if it's used on 1-2% of websites, that's still significant usage. Surely, it won't be used as commonly as regular floating point numbers.
After watching it evolve for many years my conclusion is either politics or it being run by the wrong people who think they literally need to invent the whole world (in component model) before they can give us basic string objects. No, wasm string objects were an independent feature in experimental Chrome for a while but they got removed again. Wasm structs are beautiful without the component model but useless because they can't be read or written in JS.
> The tooling idea is particularly appropriate for JavaScript since many developers actually code in TypeScript and rely on compilers such as Babel, Webpack or the TypeScript compiler to output JavaScript.
No, it won't be faster if you only optimize a lesser language. If you have higher level code running, your optimizer can do more than if it only has a low level version.
No, it won't be more secure. Js0 might be more secure, but if sites all run any of dozens of different tools, those tools are going to be creating the vulnerabilities. It's shifting where security issues occur, and creating more of them.
I'm terrified this could happen. JS has gotten so much better over time. We are so close to being able to not need transpilers. This sounds like such an absurd cop out for browsers to say, meh, we just don't want to do the work to implement. Being so close & then saying, sorry, you must use big toolchains to develop for the web is a monstrously bad future.
JS0 is supposed to be a high level language. Theoretically it could improve performance by allowing more explicitness in the generated code than in normal JS, thus helping out the JIT optimizer.
I don't see how a browser running JS0 can be any less secure than a browser running JS
Browsers operate at a scale never seen before. Imagine all the extra energy and bandwidth needed if core functionality is moved to the application code when you have billions of users.
What we need is more native functionality (implemented in the JS engine with C++ or Rust) so there is as little user-land code as possible.
As an example, imagine how much energy, bytes, and CPU would be saved if browsers and JS engines included reactivity and JSX. Or if browsers included an API similar to jQuery.
Quick math to grasp the scale: 100kb * 1 billion users = 100TB of data that needs to be transferred and parsed many times, every single day. It's absurd.