One thing I don't see discussed much is how much users are empowered by being able to inspect the JavaScript that is sent to their machine. I can inspect every line of code before it executes. I can look at exactly what data is being stored client side and modify it if I wish. The client side of apps is forced to be open source. Every major browser has excellent developer tools built in that allow us to do this. I'm not sure how much of this will be taken away by WASM, or whether it will hinder my ability to prevent exfiltration of my private data.
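For example, just from the DevTools console (a minimal sketch; the key name at the end is made up):

    // List everything the app has stashed in localStorage.
    for (let i = 0; i < localStorage.length; i++) {
      const key = localStorage.key(i);
      console.log(key, localStorage.getItem(key));
    }

    // And nothing stops me from rewriting it before the next load.
    localStorage.setItem('theme', 'dark');  // hypothetical key
    console.log(document.cookie);           // same visibility for script-readable cookies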
Minified JS and backend APIs foil this. You download a dozen blobs of junk that all call back to an app server for their persistence. If you could save the JS you get from Gmail or Google Docs and host it locally I'd be convinced, but we both know that's impossible.
This is posed as a pretty heavy counter, "[X] foils this," but it's only been true to degrees so far, and the extent we've gone is radically different from where we're headed. Certainly hacking modern webapps isn't always the most fun, but often it can be! Even if the app in question is pretty shitty & mean about how it distributes its bundles.
Backend APIs have often been one of the web's best points, not its hindrances, in terms of explorability. Certainly they conceal what happens behind the scenes, but the "single page app" model leaves an intrepid DevTools user with at least as much power as whatever the webapp's built-in capabilities are, and far fewer handcuffs. Huge numbers of sites have "unofficial" clients, since it's so stupidly easy to open devtools, look at network traffic, write a little instrumentation, alter things a bit, and see what you get. Since the normal app behavior has to transit a network boundary, and that boundary is quite visible, it very quickly lights up & shows the interface. Whereas in many native apps, state is just latent in the process, suffused everywhere, & deeply murky & unclear. There's a lot we don't know, but there's also so much clarity that the front-end/back-end, client/server split has given us, that computing typically hasn't had.
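For instance, the kind of throwaway instrumentation that gets people to those unofficial clients is just a few lines in the console (a sketch; the endpoint at the bottom is hypothetical, you'd copy whatever actually showed up in the log):

    // Wrap fetch so every API call the page makes gets logged as it happens.
    const realFetch = window.fetch.bind(window);
    window.fetch = async (...args) => {
      console.log('->', args[0]);
      const res = await realFetch(...args);
      console.log('<-', res.status, args[0]);
      return res;
    };

    // Once the endpoints light up, you can start driving them yourself:
    const r = await fetch('/api/v2/notes?limit=10', { credentials: 'include' });
    console.log(await r.json());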
Less web-ish things can be problematic. Folks who just open websockets & start speaking TCP-style protocols aren't very "web" (even though they use a tool the web happens to have). GraphQL is maybe somewhere in between: there's still enough regular evidence & it kind of fits the form, but everything is a little different & normal webdev tools need a lot of extra help to do OK. GraphQL is a partial stray from the core webdev path (for example, it doesn't use URLs for its endpoints nor for its content).
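A toy comparison of why the tooling struggles (the endpoints & schema here are made up):

    // REST-ish: the URL itself tells you (and the network panel) what's being fetched.
    fetch('/api/users/42/posts?limit=10');

    // GraphQL: one opaque endpoint; the real "what" lives in the POST body,
    // so generic devtools need extra help to show anything meaningful.
    fetch('/graphql', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        query: 'query Posts($id: ID!) { user(id: $id) { posts { title } } }',
        variables: { id: '42' },
      }),
    });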
Ideally everyone should ship source maps. But there's a lot of resistance to this, mostly for no real good cause. Just turning on the pretty-printer for the mangled/uglified source code & debugging some DOM events is often enough to figure out what a thing does pretty quickly, to get into the architecture pretty easily, even when it's all minified. Often the minification is only partial, only extends so far; as we get down to the data sources & storage, we start to see properties & JSON that look regular, that give us meaning & clues. Minification mainly serves users, by shaving off a little more size than compression alone; rarely is it an active defense against comprehension, and rarely is that defense effective; everything is too much the same, the patterns are all common, the platform is all normal.
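A toy illustration of why that works (made-up code, but representative): the local names get mangled, yet the fetch URLs & the JSON field names the code actually touches survive untouched.

    // Roughly what ships, after pretty-printing: "t", "e", "n" are meaningless,
    // but "/api/cart", "items", and "price" still tell you what's going on.
    function t(e) {
      return fetch('/api/cart')
        .then(n => n.json())
        .then(n => n.items.filter(i => i.price > e));
    }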
How close or how far we stray from Babel matters. "The platform is all normal" is being put to the sword by these upstarts, with amazing zeal & excitement. There are absolutely going to be tons of good, normal, not-too-wild WebAssembly codebases out there that at least leave the web platform as primary, that are advanced & not JavaScript but are at least web-like, and that'll be a bit harder to understand & decrypt & learn about (to perform natural science on). Understanding these is going to require a more advanced capability. It'll feel a bit more like breaking out Ghidra (https://github.com/NationalSecurityAgency/ghidra) & wandering in than DevTools, even with the most minified websites we have today.
But there are also going to be countless new virtualizing universes of compute that simply do not care at all, that are microcosms unto themselves: Inland Empires, as Disco Elysium characterized them. The level of Ghidra disassembly skill required to get anywhere is going to be exponentially higher in many wasm realms.
Wasm is a big turning point, is definitely a fall of Babel. Others have pointed out, & are absolutely right, that many many wasm users will still embrace the web platform, its higher-level "text/html" mime-type underpinning, its APIs. Wasm will amplify the challenge even here. Right now there's a ton of glue-code projects: Rust's wasm-bindgen is a big glue-maker, and simply wrapping our heads around the multiverse of code-spaces running, with wasm-bindgen bridging us between page space and wasm code space, is going to be an intense new experience, but in many cases it will be fine & manageable. DevTools will help. Hopefully source maps come into vogue somehow. But there will also be uncountable infinite reaches of virtualization, where people are just off doing something entirely different & else, where disassembly requires understanding mountains of virtual machines & rendering engines to get anywhere, where nothing makes sense, and reason & natural science & understandability of the world about us is truly lost.
There are challenges today, but it's overall OK, and tomorrow looks harder, but there's still lots of room for hope. There are also spreading reaches of dark, though, where understanding what happens will likely converge with impossibility. The web today is murky, but in no way resembles that darkening. We've been exploring the lands around Babel for a long time, but there are some quite severe marches into the far beyond. In some ways, yea, I am excited to see what folks make when unconstrained, but the understandability of the universe about us also ranks high in my priorities, and the web still has a stunning truthfulness & earnestness & directness far far surpassing anything else available in computing that is notable & alive (if maybe not exactly well), and I want to see that espoused & embraced if we do want to explore reaches beyond (and I don't see that value anywhere else presently).
The difference between having, say, C++ code where the variable names are scrambled versus having assembly seems vastly different to me. And that parallel seems 100% accurate here. JS may be minified, but you are still reading the code that was written, whereas no one seriously authors wasm; it's a lower-level (virtual) machine.
And it's worse. Because wasm runs in its own code space, with its own linear memory, there are now multiple things to debug & learn. Tools like wasm-bindgen generate complex glue code to bridge the wasm side and the host/JS side. The wasm-verse brings a ton of new challenges to legibility.
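To give a feel for it, this is roughly the shape of that glue, heavily simplified & with hypothetical names (wasm-bindgen's real output is a lot more involved): even handing a string across the boundary means copying it into wasm's linear memory first.

    // Hand a JS string to the wasm side: encode it, ask wasm for space,
    // copy the bytes into wasm's linear memory, return pointer + length.
    const encoder = new TextEncoder();

    function passStringToWasm(str, wasm) {
      const bytes = encoder.encode(str);
      const ptr = wasm.exports.alloc(bytes.length); // hypothetical export name
      new Uint8Array(wasm.exports.memory.buffer, ptr, bytes.length).set(bytes);
      return [ptr, bytes.length];
    }

    // So a single greet(name) call from page space becomes: encode, alloc,
    // copy, call by (ptr, len), then decode the result back out of memory.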
Learning minified, bundled JS codebases can be daunting at first, but it's ultimately JS, most often built with normal-ish stacks, and the code usually shows its nature pretty quickly after setting some DOM event breakpoints. It's nowhere near the joy of view-source, but it's not bad.
Notably, Chrome M111 just turned on pretty-printing by default, so the first couple milliseconds of opening the debugger should be significantly less overwhelming. You used to have to know to press the prettify button.