32-bit memory addressing means you only have 4GB, not 32GB.
The company I currently work for makes radiotherapy software. Most of our native apps run in clinics under .NET.
There are some cases where we want users or employees to be able to access our radiotherapy tooling from a browser. Microsoft has a pretty phenomenal .NET -> WASM compiler. We can compile our .NET DICOM image viewer tooling to WASM, for example, and it is more performant and full-featured than Cornerstone.js (https://www.cornerstonejs.org/).
However, medical imagery is memory heavy. We frequently run into the 4GB limit, especially if the CT/MR image uses 32-bit voxels instead of 16-bit voxels. Or if the field of view was resized. Or if we accidentally introduce an extra image copy in memory.
I agree that wanting more than 32 bits of addressable space (not 32GB of memory) in a web app seems excessive.
However, the real win is the ability to use 64-bit types more easily, if for nothing else than that it simplifies making wasm ports of desktop libraries.
It goes beyond that. Many languages expect that you use types such as size_t or usize for things that are conceptually collection sizes, array offsets, and similar things. In some applications, it's common that the conceptual collection is larger than 2^32 while using relatively little memory. For example, you could have a sparse set of integers or a set of disjoint intervals in a universe of size 2^40. In a 64-bit environment, you can safely mix 64-bit types and idiomatic interfaces using size_t / usize. In a 32-bit environment, most things using those types (including the standard library) become footguns.
I work in bioinformatics. A couple of times a year I check if browsers finally support Memory64 by default. They don't, and I conclude that Wasm is still irrelevant to my work. I no longer remember how long I've been doing that. Cross-platform applications running in a browser would be convenient for many purposes, but the technology is not ready for that.
One could argue that size_t should be 64 bits on wasm32 since it's a hybrid 64/32-bit platform (and there are the ptrdiff types too, which would then depend on the pointer size), but I guess sizeof(size_t) != sizeof(void*) breaks too much existing code.
For example, one project I worked on needed to rely heavily on markdown and needed the exact same markdown renderer on both server and client. That alone made us choose node.js on the server side so that we could use the same markdown module.
Today, I'd probably find a rust / c etc markdown renderer and compile it to wasm, then use it on the server and client as is.
This is a silly example, but wasm being a universal runtime would make interfacing things a lot easier.
Ah also, things like cloudflare workers let you run wasm binaries on their servers. You can write it in any language that can target wasm, and you have a universal runtime. Neat.
You can embed a C/C++ program into arbitrary places using WASM as a runtime, so if you have any C++ program you want to automate, you can "lift and shift" it into WASM and then wrap it in something like TypeScript. This is surprisingly useful. WASM also removes sources of non-determinism, which may enable you to do things like aggressive caching of programs that would normally be slightly non-deterministic (imagine a program that uses a HashMap internally before dumping its output). I use this to run FPGA synthesis and place-and-route tools portably, on all operating systems, with 100% deterministic output: https://yowasp.org/
memory64 support will be very useful, because many non-trivial designs will use a lot more than 4GiB of RAM.
Some of us are trying to convince the Node team that pointer compression should be on by default. If you need more than 4G per isolate you're probably doing something wrong that you shouldn't be doing in Node. With compression it's not actually 4GB, it's k * 4GB.
Java pointer compression promises up to 32GB of heap with 32 bit pointers, for instance.
If some subset of pointers has a guaranteed alignment of 2^N bytes, then the least significant N bits are always zero and don't need to be stored explicitly (only restored when dereferencing the pointer).
Look son, the only way we're gonna get anything done is abstracting the abstractions so we can actually get some abstracted code running on the abstracted abstractions. That means we need 128 gallonbytes of abstracted neural silicon to fit all our abstracted abstractions written by our very best abstracted intelligences.
Then why didn't Java do better? Its tagline was write once, run everywhere.
I remember that back in the day, setting up cross-compiling was horrendous, so I agree; I just don't think it's the only reason. These days all you do is set a flag and rerun "go build"; it's stupidly easy, as far as compiling goes.
The other two things that come to mind: first, on the web users expect things to look different. A cross-compiled app looked/behaved like ass on at least one platform unless you basically rewrote the front end to conform to each platform's user interface guidelines (aka write once, rewrite everywhere), whereas a website could look more how the company making it wanted it to look, and less like how Redmond- or Cupertino-based companies wanted it to look.
The real killer feature though, imo, was upgrading of software. Customer support is a big expense that ends up sinking developer time: if you got bug reports and fixed the problem, you'd keep getting bug reports for that issue, and the CS team would have to figure out which version the customer was on, buried three menus deep, before getting them to upgrade. The website, however, is basically always running the latest version, so no more wasting everyone's time on an old install on a customer's computer. And those customers showed up in metrics for management to see.
> Then why didn't Java do better? Its tagline was write once, run everywhere.
Because Sun and then Oracle never thought the web would take off. Sun had a ball of gold in its hands with the HotJava browser, but they thought the web was a fad and abandoned it. They should have continued developing HotJava and pushed hard for the JVM inside the web browser as a public standard; then Java would be the absolutely dominant language on Planet Earth today, and the rest would be crying in a dark corner.
Another problem was the 2001 dotcom bubble burst. That crash made a lot of senior executives and investors think the web was merely hype technology and disinvest from most front-end efforts. Google proved them completely wrong later.
> Since there are many questions about the way the TIOBE index is assembled, a special page is devoted to its definition. Basically the calculation comes down to counting hits for the search query
>
> +"<language> programming"
I don't think the popularity of a programming language can be measured by how many hits it has on search engines. For example, it may well be that 50% of those hits are forum posts from people frustrated because the language sucks. In addition, the fact that a language is in use in a lot of existing systems says little about when that code was written, and which options were available at that time.
A major factor was that early Java in the browser did not support .jar (really .zip) files. This meant every class (and every inner class) required a separate HTTP request (over the much slower HTTP of the day).
You used to have to put everything in one giant class to work around this.
I don't disagree that it came down to UI framework support. It came down to Qt and Wx, and neither was a clear winner. The problem was that nobody with a broader ecosystem incentive made a toolkit that kicked ass on all platforms. Qt's vendor had a direct interest, but it was selling a toolkit and could not make it free/gratis, as selling it was their business model. Maybe Firefox with XUL, but they had a vested interest in promoting web sites.
A long time ago, when IRC was widely used, we had server operators removing bots from IRC channels. It was easy to discover a bot: when you pinged one in a direct message, it couldn't hold a good conversation with you.
There were multiple simple AI systems that tried to emulate humans, but it was easy to tell they were bots.
One botnet came up with a brilliant solution. Put the bot on a few channels, and if someone started talking to it, have the bot reach out to an active user from a different channel. Then relay the conversation between them as a man-in-the-middle, just changing the nicknames. You could have a full conversation with an actual person, and it was hard to spot that it was a bot :-)
Google once again is being irresponsible by disclosing this vulnerability. Look at the timeline: there was a meeting between Google and Microsoft where Microsoft requested more time for the fix to roll out. Google decided to do the irresponsible thing and disclose the vulnerability anyway, so now more and more attackers can use it, instead of Google doing what's good for customers/users.
Google... Don't be evil??? Lol. So not true now...
Delayed disclosure is irresponsible. Doubly so if the vulnerability is the same as an old one. Triply so if there is active exploitation going on. So yes, Google is evil, for not doing immediate full public disclosure 90 days ago.