So I wanted to check the size of it. I opened the Firefox debugger, and it went blank O_o
Chrome was OK with it: it's about 220 KB, which is not bad at all for a whole runtime + stdlib. Python's Pyodide (https://pyodide.org/en/stable/) is several MB.
220 KB is still too much to pay upfront, since I usually want my web pages to be under 1 MB, and I can't justify burning 1/4 of the size budget.
As for the Firefox story, maybe it's a good obfuscation trick :)
57 KB post-Brotli. Measuring the decompressed size of your page's static text content is like measuring your static image content by the size of the decompressed bitmap the browser generates, instead of by the size of the PNG (or whatever format). Server-side, both examples should be precompressed, since they are static assets.
If the goal is strictly to keep bandwidth low, compressed size is all that matters. If the goal is strictly to keep things fast, time to various user events is all that matters. If the goal is a balance of both, then both will matter. In none of these is an uncompressed asset wire size under 1 MB the relevant metric, nor even parse time estimated from uncompressed size.
Barring any other information whatsoever, knowing a script is 10 MB vs 10 KB should give you a strong hint about "parse time", but it's not actually telling you the metric it's named for, which should be a red flag. What you actually want to know is what the parse time was, regardless of file size, or, more likely in the bigger picture, how using the resource changes time to certain user-noticeable events. Perceived-slower pages much smaller than 1 MB are certainly easy to generate by optimizing for the wrong things, as are perceived-faster pages with much bigger payloads.
On the other hand, if you're just using 1 MB as a quick and simple yardstick, you probably don't intend to measure some of your assets compressed and others uncompressed, particularly when picking which to axe.
I found this today after following back to Fennel (for probably the 10th time hehe) from the recent Fullmoon post (https://news.ycombinator.com/item?id=30385759) by thinking "what if I use Fennel instead of Lua-proper there?".
I then checked out the Fennel Wiki (https://github.com/bakpakin/Fennel/wiki) where you'll spy a link to Fengari. Maybe that trail of breadcrumbs helps you find more things you might be interested in.
This is at the JavaScript level, but I would love to see someone release a browser with a side-by-side JS VM and Lua VM, just as an experiment in what the web could be like with a better language.
For anyone interested, there was a port of WebKit/Blink where the web objects retained by JavaScript are garbage-collected outside of V8, using the Blink GC to destroy those objects.
The way it works is through smart pointers, where, for instance, you say how a reference to an object is retained according to the object that it references.
The good side of this is that programming languages besides JavaScript can deal with Blink objects the same way JavaScript does (this feature was called 'Oilpan').
It's this feature that makes it possible for my project to have web-based applications in Swift, for instance, and it would allow plugging in the Lua VM or JIT in the same way and still be a first-class citizen of the WebKit API, just as JavaScript is.
It's there. For instance, if you want to retain a reference to a 'WebFrame', you can create a wrapper object whose lifetime you directly control, and give it a 'Persistent<WebFrame>' property to hold the Blink reference.
As long as the smart pointer is not destroyed (with the destruction of your wrapper or holder), the WebFrame is guaranteed to be retained, keeping the object alive.
Otherwise you can hold Weak<> and Member<> smart pointers for things your object doesn't own. In the case of Weak<>, you can expect the object to be collected in the next scheduled GC job (or anytime). Member<> is not a strong reference to the object as Persistent<> is, so you don't own the object's lifetime, but you retain a reference, so the object should not go away while you hold it.
To see how serious the commitment to this scheme is, you can just explore the Blink codebase: it is actually used internally as the way to manage lifetimes between objects, and not just as an API for consumer projects (unlike V8, which vends a different API to consumers of the VM than the one it uses internally).
According to the description, a minimal glance at the code and this [0], no, that's not it.
It is "the Lua VM written in JavaScript" and "largely a port of the PUC-Rio C implementation of Lua". It runs the Lua code on that VM, so there's no transpilation.
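For the curious, basic usage looks roughly like this; a minimal sketch assuming fengari-web and the `js` interop module it ships, with the Lua placed in a `<script type="application/lua">` tag (as mentioned elsewhere in this thread):

    -- inside <script type="application/lua"> with fengari-web loaded
    local js = require "js"             -- Fengari's JS interop module
    local document = js.global.document

    local div = document:createElement("div")
    div.textContent = "Hello from Lua, running on a VM written in JS"
    document.body:appendChild(div)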
This is great! Can't wait to try it out as soon as I am on a computer. I will see if I can hook this up in my little side project: https://github.com/nhatcher/ariana-lua
Σελήνη/Selene (the ancient name) is Luna: our moon. Lowercase φεγγάρι/feggari/fengari is moon: you would say «δίδυμα φεγγάρια του Άρη» for “Mars' twin moons”.¹
In the same vein, capitalized Ήλιος/Helios is Sol: our sun. Lowercase ήλιος means any sun.
¹ Trivia: “satellite” is «δορυφόρος» = spear bearer; from the protectors of kings or powerful men in general, who typically encircled their protectees.
I'm not entirely sure what you mean by "language fork", but all versions of Lua have incompatibilities. 5.1, 5.2, 5.3, and 5.4 are all major versions, with features that aren't completely compatible with any of the others.
For example, 5.2 brought major changes to how environments are handled, 5.3 brought in integers, and 5.4 changed how number overflow is handled.
At 5.3, the behaviour of the basic arithmetic operations was changed in a fundamental and non-backwards-compatible way to bring in those integers. That certainly counts as a language fork, as opposed to the other two examples.
It definitely is the mainstream continuation of that language by the original authors. Changing the API, or the implementation, or making other breaking changes does not automatically (or otherwise exclusively) make it a "language fork". Lua has the luxury of making breaking changes, and most Lua users both accept and appreciate that. Lua is continuously refined while languages like JavaScript and PHP are stuck with their (sometimes dubious) decisions forever or _break the web_.
This is not breaking an API or the implementation. It is not a refinement. This is a change to the language itself. If you had a published standard it would be a change and not an addition.
Okay, will the release of C2x be a fork of the C language, since it's going to remove support for K&R function definitions (and not every such function can be rewritten to use the new style)?
ANSI C definitely counted as a language fork. Things were different in that there were multiple existing branches, and ANSI C was a fairly successful attempt to create a dominant one. This example supports my contention that you have to clearly identify which branch you mean: K&R vs ANSI was definitely something you had to specify.
In 5.2, environments were changed in a fundamental and non-backwards compatible way. CFunctions used to have environments, but now they don't. This also affected equality when comparing between function values.
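To make the environment change concrete, here's a minimal sketch of the 5.2+ way (in 5.1 you'd have used setfenv instead):

    -- A "global" like x is really sugar for _ENV.x in 5.2+, and you swap
    -- environments by supplying a different table, e.g. via load's env
    -- parameter, rather than calling the removed setfenv/getfenv:
    local env = { print = print, x = 42 }
    local f = load([[ print("x is", x) ]], "chunk", "t", env)
    f() --> x is 42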
In 5.4, they changed _the way that numbers behave_:
> Literal decimal integer constants that overflow are read as floats, instead of wrapping around.
Which, by the definition used for 5.3 being a fork, would also be a fork. It's a fundamental change of the entire numeric tower.
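For the record, a small sketch of the observable behavior being discussed (runs on Lua 5.3+):

    print(math.type(1))    --> integer   (the integer subtype arrived in 5.3)
    print(math.type(1.0))  --> float
    print(3 / 2)           --> 1.5       (/ always produces a float)
    print(3 // 2)          --> 1         (integer floor division, added in 5.3)
    -- and per the quote above: 5.3 wraps an overflowing decimal integer
    -- literal around, while 5.4 reads it as a float instead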
Each Lua x.y is a "major" version. You can expect things to be stable and patched in versions x.y.z with variable z. Might be irritating but it has been this way for decades.
Lua is not a general-purpose language. Its main usage is embedded inside a bigger "host" app, usually written in C, for which it provides a higher-level language. But the features available are completely defined by, and dependent on, the host app. On top of that, Lua (or at least its vanilla implementation) is very easy to tweak and understand.
As a result, you can find Lua inside a car, a videogame or a web server ... and each incarnation of the language is incompatible with other versions of the language out there. Instead of a big continental platform, Lua is a fragmented archipelago of islands. You can still swim from one to the other, but you have to get wet.
I wonder the same thing. Writing a scripting language on top of another scripting language makes me wonder: why add another layer? A common complaint I have seen with frontend JS apps is that people tend to bloat them unnecessarily.
But I do appreciate the effort and the idea. Most probably the developers have thought this through and kept the performance aspect in mind.
Don't take my word for it, as I only looked into this a while ago, but if I remember correctly you'll probably need to:
a. First build a function in JS wrapping the use of fetch and the management of the JS Promise there. You'd use lua_yield and lua_resume to yield, and to return the appropriate value back to Lua. You would, optimally, do this just once. Or maybe someone has already written such a wrapper, so you would simply require it; you'd have to search around for that.
b. Once you have that, on the Lua side, you simply wrap your calling code in a coroutine, and you don't need any await or then or anything. Something like...
    coroutine.wrap(function()
      local r = fetchWhatever(url) -- the wrapper mentioned above; you just call it
      print(r)
    end)() -- wrap returns a function; calling it actually starts the coroutine
But again, don't take my word for it. Things may have changed since I last looked at Fengari.
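In that spirit, here is only a sketch of what the wrapper from (a) might look like, written on the Lua side against fengari-web's `js` interop module (`fetchWhatever` is the hypothetical name from above, and the details may well be off):

    local js = require "js"

    -- Hypothetical wrapper: suspends the calling coroutine until the
    -- fetch Promise settles, then resumes it with the result.
    local function fetchWhatever(url)
      local co = coroutine.running()
      local p = js.global:fetch(url)
      -- `then` is a Lua keyword, so index it as a string; Fengari passes
      -- the JS `this` as the first argument to calls and callbacks alike
      p["then"](p, function(_, response)
        coroutine.resume(co, response)
      end)
      return coroutine.yield() -- suspends here; returns the response on resume
    end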
Lua is the only language I can truly say I love. If more people used it (and used it responsibly, not letting it become a mess of odd libraries), the world would be a better place.
One aspect of Lua that stands out to me is how every feature is carefully designed both in isolation and in composition with the others. The language has relatively few features, but none of them hang off the side; they all lean on each other to make a cohesive whole.
I think Lua is a bit unique in this for two reasons. First, they have an intentional open-source but not open-development model. Second, because of the way Lua is embedded inside other projects, there is more willingness to implement backwards-incompatible changes. I'm sure this is a negative for some who want to build a larger, less fractured community, but it has advantages for language cohesiveness.
I agree; that's the one unfortunate flaw of Lua I'd go back in time and change if I were Hitler With a Time Machine. But I've used Lua and other 1-based languages like ScriptX, and you totally get used to it, and finally realize that 0-based languages have their own confusing quirks and inconveniences: ones you got used to when you learned them, and that you just don't think about any more once you've internalized them, just like with 1-based languages. It's just a matter of moving the confusing quirks and complexity around, not that 0-based languages are less confusing, complex, or quirky than 1-based languages, or the other way around.
But that said, I'd prefer that Lua had 0-based indexes, simply because that's what most other languages have, not because it's superior.
> It's just a matter of moving the confusing quirks and complexity around
Unless you program exclusively in one language, these confusions become the cause of various bugs, or at the very least increase the cognitive load. Programming is already hard enough without artificially increasing the cognitive load!
If you work with pure Lua, that's a non-issue. The ipairs function abstracts it away, and numeric for loops are written differently than in C, with a start value (inclusive), an end value (inclusive), and an optional step, but no exit condition. You can't put i < len there like you would in C.
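For example, a minimal pure-Lua sketch of both styles:

    local t = { "a", "b", "c" }
    for i = 1, #t do           -- bounds are inclusive; there is no i < #t test
      print(i, t[i])
    end
    for i, v in ipairs(t) do   -- or let ipairs hide the indexing entirely
      print(i, v)
    end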
Agreed that interaction with C or other 0-based languages or systems (eg screen coords) makes things more difficult.
iirc, one other confusing element is that tables (hash tables? I don't remember what they're called) return nil when a lookup is done for a nonexistent key.
This is not necessarily a bad choice; exceptions and such can be a real pain. However, accidentally getting nil because you didn't check, and then having it propagate much further through your program, is extremely difficult to debug. Instead of blowing up at the site of the bad lookup, you only see its distant effects (e.g. the nil gets put into another table, which gets put into another table, and is then attempted to be called as a function, etc.).
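A quick sketch of that failure mode, plus one common mitigation (the `config`/`retires` names are made up for illustration):

    local config = { timeout = 30 }
    local retries = config.retires    -- typo'd key: silently yields nil
    -- the nil then propagates, and only blows up far from the typo:
    -- retries + 1  --> "attempt to perform arithmetic on a nil value"

    -- one common guard: fail fast on unknown keys via a metatable
    setmetatable(config, {
      __index = function(_, k) error("unknown key: " .. tostring(k), 2) end,
    })
    print(config.retires) --> error: unknown key: retires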
If you learned arrays in C, where indexing is the same as pointer arithmetic, then zero-based arrays seem natural. If you are coming from the real-world concept of "a list of things", then the "zero-th item" in the list seems odd; one-based indexing feels natural.
"with" existed in Python 2 (in Python 2.6, in fact), like list comprehensions, decorators, generators, the descriptor protocol, exotic slice assignment, advanced nested unpacking, variadic parameters...
Python was always chock-full of advanced features; people just usually don't notice, because they get productive in 3 days with the basic features and don't need to go further. It has the quality of a very smooth learning curve, but a very long one if you care.
Isn't it LuaJIT that's really fast? Last I remember reading about it, there was some version fragmentation going on, with Lua advancing and LuaJIT stuck on an older version. (That was a long time ago, and I don't know what's been happening since.)
And LuaJIT has historically been extremely fast (initially much faster than JavaScript's early JITs) because it didn't have nearly as many optimizer-busting design flaws to work around as JavaScript JITs did: Lua's language design is much simpler and cleaner than JavaScript's, which wasn't originally designed to be compiled (coughcough "with" cough "this"). But because JavaScript was the "Chosen Language", a whole lot of effort has been put into developing JITs that work around JavaScript's flaws over the decades since. All that effort could have been put to much better uses if the flaws hadn't been there in the first place.
The Lua interpreter is really lightweight, and for decades has been the go to choice for when you need dynamic code and speed (for example, it's been popular in the games industry for this reason).
Probably someone else can shed light on exact numbers, but Lua is faster than Python.
I'm still using Lua on embedded Linux devices, and in fact completed a project last year that was deemed impossible by the Java devs, but was easily within our memory/time budget as a Lua group. The success of that embedded Lua project converted a large number of Java diehards into Lua acolytes...
Folks who sniff at putting a scripted/interpreted language into an embedded environment really need to think twice about Lua. It is fast, tight, and highly performant, and if you bundle it with LuaJIT and Turbo.lua, it'll give you the best of all worlds: async I/O, coroutines, very, very fast performance, and a great execution environment upon which to build truly useful apps.
Not OP, but for me, I love Lua because it’s both easy-to-use and simple. Python is the former and Go is the latter, but I don’t know any other mainstream language that’s both.
I wish Netscape had chosen Lua instead of bothering to invent JavaScript. Python would have also been much better than JavaScript, but Lua would have been perfect. But at least they didn't choose TCL, as Sun was pushing before they switched gears to Java after the Great TCL War. And personally, I would have preferred PostScript (which was the basis of NeWS, with a Smalltalk-like OOP system) or ScriptX (which was like Scheme or Dylan with "normal" infix syntax and a CLOS-like OOP system) to JavaScript.
If all the time and effort and treasure that was wasted cobbling together JavaScript from scratch, with all its naive unforced design flaws and security holes and interoperability issues, and then pissed away by everyone else working around and fixing subtle bugs due to those terrible design flaws (coughcough "this" coughcough "equality"), had instead been put into Lua, which was cleanly designed from the start by incredibly smart people who actually had a clue what they were doing, the world would be a much better place.
People who are that confused about equality shouldn't design programming languages:
>For Netscape in 1995, I believe Python would have been a better choice than JavaScript, but Lua would have been an even better choice, given how much smaller, simpler, and more efficient Lua is, and the eventual excellence of LuaJIT. (If only Lua indexed its array from 0 instead of 1...)
But Python and Lua didn't "look like Java" enough for Netscape.
>But if not PostScript, Python, or Lua, then at least Netscape didn't use TCL in the browser. Around 1994, long after NeWS and right before Java, Sun announced they were going to make TCL the official scripting language of the world wide web, which triggered RMS into kicking off the Great TCL War:
>And with that diplomatically worded message, RMS kicked off The Infamous TCL War. That was Stallman's response to Sun bombastically pushing TCL as the official scripting language of the web, BEFORE Live Oak / Java was a widely known (or evangelized) thing.
>At the point anybody started talking about a Java/TCL bridge, it was already all over for TCL becoming the "ubiquitous scripting language of the Internet".
>Sun's unilateral anointment of TCL as the official Internet scripting language triggered RMS's "Why you should not use Tcl" message, which triggered the TCL War, which triggered Sun to switch to Java.
>After the TCL war finally subsided, Sun quietly pushed TCL aside and loudly evangelized Java instead. The TCL community was quite flustered and disappointed after first winning the title of "ubiquitous scripting language of the Internet" and then having the title yanked away and given to Java.
>Any talk of bridges was just table scraps for TCL, the redheaded bastard stepchild sitting outside on the back porch in the rain, smoking a cigarette and commiserating with NeWS and Self.
>Tom Lord's description of what happened is insightful and accurate:
> People who are confused about equality shouldn't design programming languages:
This is a very dull criticism of JavaScript; everybody uses ===. While it's definitely true the language has some poor decisions (`with`, `==`, etc.), you can write JS without using any of these features (and almost everyone does).
In fact, just using a decently strict ESLint config will get you most of the way there.
It's controversial, but I think somewhere under the cruft of JS is a good language. The syntax is dead simple, functions are first-class, and with something like TS you can get static typing on top of all of that.
It really annoys me that people confuse ease of use due to familiarity with simplicity.
Many programmers are familiar with the typical curly-braced C-style syntax that many mainstream programming languages share, and so find JS syntax easy to get into, but that has nothing to do with simplicity.
Building a parser for JS is not exactly simple, nor is teaching new programmers the syntax. Lua is vastly superior in both respects. People just tend to forget the pain most new programmers go through learning a curly-braced language: having to figure out what all these weird symbols and different kinds of braces mean, and how to type them.
JS is arguably even worse than other curly-braced languages because of weird exceptions like automatic semicolon insertion at line endings, or the fancy arrow functions. Honestly, JS is one of the very few languages where I actually need to look up syntax after not having used it for a while.
I don't personally think there's anything complicated about arrow functions. They're syntactic sugar for `function() {}`, with the exception that they have sensible binding of `this`.
I do agree that Lua is simpler, but I don't think this is an area where JS is that bad.
I’ve used JavaScript for about 25 years, sometimes more seriously, and always hated it. I’ve always found it unpredictable, I don’t like the turtles-all-the-way-down thing going on with its objects. I’d like to leverage my experience with regular class definitions and instantiation from Java workalikes. I’m sure many people love JS’s take on OO but I find it annoying.
A perfect example: I read hey there’s a cool simpler way to write small anonymous functions called arrow syntax. Nice let me try it, hm it’s not working —> google —> arrow functions have no access to ‘this’.
> I’d like to leverage my experience with regular class definitions and instantiation from Java workalikes. I’m sure many people love JS’s take on OO but I find it annoying.
That's a bit like complaining about a language without types not having interfaces. I mean, sure, but it's also clearly not what that language is trying to do. Prototype-based programming is object-oriented programming without defining classes, that's basically the thing that sets it apart. If you're trying to do classical OOP with prototypes, you're not really embracing the paradigm of the language you're using.
> A perfect example: I read hey there’s a cool simpler way to write small anonymous functions called arrow syntax. Nice let me try it, hm it’s not working —> google —> arrow functions have no access to ‘this’.
Not sure what this has to do with OOP vs prototypes, or even objects. But yeah, new language features sometimes modify behavior, as in this case with arrow functions (which are not just a different syntax for writing anonymous functions). You're also wrong that they don't have access to `this`. They do, but an arrow function doesn't define its own binding of `this`, so `this` refers to the closest enclosing scope that did define the bindings for `this`, `super`, `arguments`, et al.
> That's a bit like complaining about a language without types not having interfaces
Totally agree. I wouldn't make the mistake I see often of saying "for this reason JavaScript is objectively bad" -- but it's bad for me.
> you're not really embracing the paradigm of the language you're using
That's true, and related to the problem. I don't want to learn another paradigm, I want to get my work done. JavaScript's paradigm isn't intuitive to me, and at this point I assume will never click since I've been using it fairly consistently for 25 years and it still makes me uncomfortable.
> Not sure what this has to do with OOP vs prototypes, or even Objects
It doesn't. It's an example of where JS often doesn't behave the way I expect it to. Some examples of this unpredictability have been fixed in newer versions over the last 25 years, but it illustrates the problem I often have: I make what I feel are reasonable assumptions about how its scoping or execution model works, and then have to trial-and-error my way into getting it to work right. This is after countless hours spent reading JS books and posts over the years.
I accept that this may just be a personal issue, but I also don't think I'm the only one.
> A perfect example: I read hey there’s a cool simpler way to write small anonymous functions called arrow syntax. Nice let me try it, hm it’s not working —> google —> arrow functions have no access to ‘this’.
If you try to use a new language feature without reading the documentation first, you will probably run into difficulties. This is not a problem unique to JS.
There are a few ways of doing anonymous functions now, and some capture `this` and some don't. This is perhaps not a nice part of the language, but it's not as if, e.g., Java doesn't have similar complexity with inner classes and outer `this`.
I think your answer just solidifies their point since your proposed solutions are to avoid parts of the language and bolt two pieces of tooling on top.
The point is that we wouldn't have needed decades of work on tooling to achieve what could have been done from that start with intelligent language design.
Decades later you may believe that "everybody" uses "===", but "==" is actually still there, and many people still use it regardless of what you choose to believe, simply because it's there, it's 33% shorter, it looks like C, and Stack Overflow is full of examples of it. And it STILL commonly causes many subtle, hard-to-find bugs.
People like Roberto Ierusalimschy, Luiz Henrique de Figueiredo, Waldemar Celes, James Gosling, Guido van Rossum, and Anders Hejlsberg are enlightened, experienced programming language designers who actually know what they're doing. They don't make the stupid amateur-hour mistakes JavaScript was cursed and riddled with from day one: mistakes that make it difficult for compilers to optimize code, and that we're still using tooling and linters and IDEs and compilers to work around, because they're still in the language and will never go away.
"My favorite is always the billion dollar mistake of having null in the language. And since JavaScript has both null and undefined, it's the two billion dollar mistake." -Anders Hejlsberg
"It is by far the most problematic part of language design. And it's a single value that -- ha ha ha ha -- that if only that wasn't there, imagine all the problems we wouldn't have, right? If type systems were designed that way. And some type systems are, and some type systems are getting there, but boy, trying to retrofit that on top of a type system that has null in the first place is quite an undertaking." -Anders Hejlsberg
But you also missed the reference to how deeply and tragically confused the bigoted designer of JavaScript is about equality when it comes to human beings, not just programming languages. And the terrible damage his confusion about equality and his promotion of inequality did to Mozilla and his co-workers and his own reputation and legacy.
I love what JavaScript finally evolved into after decades of intense development, revision, and optimization, which took the precious time and effort of uncountably many extremely talented people. But all the effort that was pissed away working around JavaScript's original stupid unforced flaws could have been applied in so many more productive and useful ways, and the world would be a much better place if a language like Lua, which wasn't so naively and incoherently designed in the first place, had been used instead of JavaScript.
>the bigoted designer of JavaScript is about equality when it comes to human beings
Zero relevance to programming language design, makes your argument look very weak and emotionally-motivated. If you have to sink to bringing up the personal views of the language designer you're criticizing on an entirely unrelated matter to make a point, you don't really have a point.
>his own reputation and legacy
His legacy is quite fine; I love the Brave browser and use it every day, as do countless others. Not all of us are so fragile as to stop using good software because its author happens to disagree on a completely unrelated social issue.
As for JavaScript, it's a terrible mistake, but I would love to see the mistakes you would make if you were forced to design and implement a language in 10 days.
I share a similar vague desire, but the specifics are all off.
If Netscape had even heard of an obscure two-year-old scripting language from Brazil, and chosen to use it, we wouldn't have gotten the Lua we know and love in the bargain. 1995 Lua wasn't the obviously superior JavaScript alternative that it's been since the early 2000s.
Cannot agree with you more. It is just so elegant and powerful, and yet simple.
Easily one of the most productive tools in my suite, and I made a lot of money as a Lua developer last year, using Lua in a realtime analysis application.
Would love to see a Lua-only browser arise from these efforts. It's just such a delightful language to code in ..
> I have no problem interoperating with the DOM in WASM.
What? How? It was a year or two ago that I last dived into WASM, but at that point there was no direct DOM access, and all the talk around it suggested it might exist far in the future, not anytime soon.
How are you accessing the DOM from WASM without using any of the JS host? You're not talking about just sending messages from WASM to JS and "manipulating" the DOM that way, are you?
I've been calling into the Javascript runtime from C# in Blazor. (C# compiles to WASM.)
Blazor's built-in framework appears to send HTML (as a string) to the DOM. It also automatically sets up callbacks from DOM events into C#.
For code that I've written, I've mostly done simple stuff, like calling window.alert. I suspect I could call document.getElementById using the same techniques, but I haven't done that yet.
I did try manipulating the DOM from Rust (via WASM) in the summer of 2020. That was an exercise in frustration, but I point my finger equally at the language and the runtime. Granted, a lot can have changed since then!
Vanessa Freudenberg's brilliant SqueakJS, a Squeak VM in JavaScript, also layers a Smalltalk VM on top of JavaScript in a way that interoperates efficiently and elegantly with JavaScript's garbage collector, because layering one garbage collector on top of another would be a disaster. One thing that's amazing about SqueakJS (and one reason this VM-inside-a-VM runs so fast) is the way she elegantly and efficiently created a hybrid Smalltalk garbage collector that works with the JavaScript one.
SqueakJS: A Modern and Practical Smalltalk That Runs in Any Browser
>The fact that SqueakJS represents Squeak objects as plain JavaScript objects and integrates with the JavaScript garbage collection (GC) allows existing JavaScript code to interact with Squeak objects. This has proven useful during development as we could re-use existing JavaScript tools to inspect and manipulate Squeak objects as they appear in the VM. This means that SqueakJS is not only a “Squeak in the browser”, but also that it provides practical support for using Smalltalk in a JavaScript environment.
>[...] a hybrid garbage collection scheme to allow Squeak object enumeration without a dedicated object table, while delegating as much work as possible to the JavaScript GC, [...]
>2.3 Cleaning up Garbage
>Many core functions in Squeak depend on the ability to enumerate objects of a specific class using the firstInstance and nextInstance primitive methods. In Squeak, this is easily implemented since all objects are contiguous in memory, so one can simply scan from the beginning and return the next available instance. This is not possible in a hosted implementation where the host does not provide enumeration, as is the case for Java and JavaScript. Potato used a weak-key object table to keep track of objects to enumerate them. Other implementations, like the R/SqueakVM, use the host garbage collector to trigger a full GC and yield all objects of a certain type. These are then temporarily kept in a list for enumeration. In JavaScript, neither weak references, nor access to the GC is generally available, so neither option was possible for SqueakJS. Instead, we designed a hybrid GC scheme that provides enumeration while not requiring weak pointer support, and still retaining the benefit of the native host GC.
>SqueakJS manages objects in an old and new space, akin to a semi-space GC. When an image is loaded, all objects are created in the old space. Because an image is just a snapshot of the object memory when it was saved, all objects are consecutive in the image. When we convert them into JavaScript objects, we create a linked list of all objects. This means, that as long as an object is in the SqueakJS old-space, it cannot be garbage collected by the JavaScript VM. New objects are created in a virtual new space. However, this space does not really exist for the SqueakJS VM, because it simply consists of Squeak objects that are not part of the old-space linked list. New objects that are dereferenced are simply collected by the JavaScript GC.
>When full GC is triggered in SqueakJS (for example because the nextInstance primitive has been called on an object that does not have a next link) a two-phase collection is started. In the first pass, any new objects that are referenced from surviving objects are added to the end of the linked list, and thus become part of the old space. In a second pass, any objects that are already in the linked list, but were not referenced from surviving objects are removed from the list, and thus become eligible for ordinary JavaScript GC. Note also, that we append objects to the old list in the order of their creation, simply by ordering them by their object identifiers (IDs). In Squeak, these are the memory offsets of the object. To be able to save images that can again be opened with the standard Squeak VM, we generate object IDs that correspond to the offset the object would have in an image. This way, we can serialize our old object space and thus save binary compatible Squeak images from SqueakJS.
>To implement Squeak’s weak references, a similar scheme can be employed: any weak container is simply added to a special list of root objects that do not let their references survive. If, during a full GC, a Squeak object is found to be only referenced from one of those weak roots, that reference is removed, and the Squeak object is again garbage collected by the JavaScript GC.
Also:
The Evolution of Smalltalk: From Smalltalk-72 through Squeak. DANIEL INGALLS, Independent Consultant, USA
>Although Squeak is still available for most computers, SqueakJS has become the easiest way to run Squeak for most users. It runs in just about any web browser, which helps in schools that do not allow the installation of non-standard software.
>The germ of the SqueakJS project began not long after I was hired at Sun Microsystems. I felt I should learn Java; casting about for a suitable project, I naturally chose to implement a Squeak VM. This I did; the result still appears to run at http://weather-dimensions.com/Dan/SqueakOnJava.jar .
>This VM is known in the Squeak community as "Potato" because of some difficulty clearing names with the trademark people at Sun. Much later, when I got the Smalltalk-72 interpreter running in JavaScript, Vanessa and I were both surprised at how fast it ran. Vanessa said, "Hmm, I wonder if it’s time to consider trying to run Squeak in JavaScript." I responded with "Hey, JavaScript is pretty similar to Java; you could just start with my Potato code and have something running in no time."
>"No time" turned into a bit more than a week, but the result was enough to get Vanessa excited. The main weakness in Potato had been the memory model, and Vanessa came up with a beautiful scheme to leverage the native JavaScript storage management while providing the kind of control that was needed in the Squeak VM. Anyone interested in hosting a managed-memory language system in JavaScript should read his paper on SqueakJS, presented at the Dynamic Languages Symposium [Freudenberg et al. 2014].
>From there on Vanessa has continued to put more attention on performance and reliability, and SqueakJS now boasts the ability to run every Squeak image since the first release in 1996. To run the system live, visit this url:
Err, while SqueakJS is impressive (and I have spent quite a while doodling within it, using old Squeak images and current Cuis images), one cannot in good conscience call it "fast".
In fact, keeping one eye on the truth, one couldn't call it "slow" either.
If performance IMPROVED by one order of magnitude, then you could call it "slow".
I have to wonder: if one compiled the Lua VM to WASM, would it by any chance run code at a speed comparable to JS? Seeing as plain Lua is crazy fast for a dynamic language.
That's assuming WASM itself runs with minimal overhead.
Presumably you could load wasm modules with this, as it is just a VM written in JS. I couldn't find a defined browser API, but if the promise of general interoperability holds, then I imagine loading wasm will work like it normally does, just from Lua.
This is a virtual DOM implementation, like React, not one interacting with the actual DOM like the demo in the OP. WASM is awesome, but it seems like folks don't really understand it and its limitations.
I bet DOM access in WASM is totally doable, just gonna be a lot harder than how this project implemented it.
"By itself, WebAssembly cannot currently directly access the DOM; it can only call JavaScript, passing in integer and floating point primitive data types. Thus, to access any Web API, WebAssembly needs to call out to JavaScript, which then makes the Web API call. Emscripten therefore creates the HTML and JavaScript glue code needed to achieve this."
Do tell, what's your definition of a "scripting language" anyway, and what is it about that definition that implies you shouldn't be happy about implementing one scripting language in another scripting language?
Especially when the hosted scripting language can take advantage of the garbage collection and the incredible amount of optimizations that have been put into the hosting scripting language.
You know there's a reason people have put so much time and effort into optimizing JavaScript. Is there a reason it's sad to take advantage of that?
Or would it make you happier if everyone wrote their own garbage collectors and optimizing JIT compilers and portable operating system independent runtimes from scratch every time?
I remember seeing Lua for the browser quite some years ago (not sure if it was Fengari or not) that worked similarly: include the script, and then anything of type `application/lua` would execute using it. I ran into it in 2014.
If Fengari is that, then I think it predates wasm.
Edit: it is not, since my memory is from 2014. However, the git history says Fengari was started 5 years ago and Wikipedia says wasm came out 4 years ago, so it still predates wasm.
Wasm doesn't have any built-in garbage collector (yet), so by building on top of JavaScript (especially for a language like Lua, which aligns closely with it) you are reusing that machinery, which is easier.
True, you could take an existing Lua implementation and compile it to WASM, but then, as the other comment explained, interacting with the DOM would need to be implemented.
Well, you're completely OT, but I started hating frontend development exactly when redux + sagas started becoming popular.
Don't get me wrong, Elm has a very similar architecture but it's pleasant to use. The idea is cool, the implementation of redux and sagas is a terrible boilerplate mess.
More recently, with react hooks + async you can model something similar without having to use redux or sagas.
Redux hook syntax is also a step forward, albeit not very useful now that we have context and reducers.
I think I've been where you are. You may also want to check out Fable.
Anyway, I maintain a very complex SPA, where the react+hooks+async approach (and whatever there was before hooks) backfires. Hooks like useEffect itself turn into event handlers (Chains of useEffect clauses triggering each other's dependencies, sometimes across multiple components), and component composition and rendering becomes basically a way to declare async work flows. At some point this is very hard to follow and maintain, and this is where redux and saga come in. Saga allows me to cut down on the boilerplate and define the async logic in an almost synchronous manner, once I got the hang of it. I also don't see how saga is much boilerplate when used right.
Also: use createSelector and useSelector for almost everything. A selector doesn't just reduce the store's data; it can also perform computations and "elevate" the data from serializable stuff to fancier class instances (using 3rd-party libraries). Composing graphs of selectors means extracting much of the display logic and computation out of the components, and it also means faster applications because of memoization.
But the point is you really need to use every part of the stack to find the sweet spot: Typescript (with ActionType etc), Thunks (for simpler async tasks, but optional), createAction (for typing mainly), createSelector, and optionally/ideally Redux Toolkit and RTK Query.
Yes, of course. It is possible. And you could also write everything in C and compile it to Webassembly. I don't know why I often get these pedantic "disagreements" even though this is no disagreement at all.
There are very valid reasons why virtual DOM systems (and similar approaches like Svelte) are so popular: For any not-completely-basic use-case they are easier to manage and faster.
React is a popular and battle-tested choice. Once you get the hang of it, it's a well-thought-out system, and you can get a lot of advice and wisdom about it for free. Maybe React is so unpopular with the pedants here because it is so popular in the real world?
And Redux is one way - not the only way - to implement a pattern that is a functional programming alternative to the MVC pattern. An alternative I very much prefer. It is extremely powerful for moderate to complex cases, and then the boilerplate can be kept manageable and appropriate for the required complexity. I found that it is performant, easy to read, easy to extend and easy to debug.
You can do all of that in Javascript, but my experience is that it is in general a lot more painful. It is harder to maintain and harder to bring other people on board. The only time this "I don't use no stinking framework" approach doesn't completely backfire is in smaller projects or when self-rolled frameworks emerge out of the "no framework" code base.
I try to keep my ego out of programming. That also means that dissing on frameworks "just because I can survive without them" isn't productive.
There is power in the combination. When you think about needing sagas, you probably need sophisticated state management. Redux is a popular and battle-tested choice. Just as React is a popular and battle tested choice.