Out of curiosity: does LuaJIT support only 5.1 because of technical difficulties imposed by newer versions, or is it simply that the design of newer Lua versions did not appeal to the LuaJIT maintainers?
There is design tension between the language designer and the VM implementation. AFAIK some features post 5.1 make performant VM implementations difficult. Lua was originally a configuration language where performance didn't matter all that much. Because LuaJIT is so fast, much more of the application code can be done in the scripting environment, so performance now matters much more than it did.
So the spectrum runs from newer language but slower to older language but faster. If performance is an issue, the cost of the newer language features could be that more of the application code has to be written in C++ instead of Lua; in that context, the Lua language shouldn't be considered independently of the host language. Performance matters to me, so an increase in C++ code would not be worth the newer language features. Using Rust instead of C++ as the host language might change the landscape again, since Rust is so much more ergonomic than C++.
Very cool! There need to be more options for developers with lower-end boxes and for gamers with low-end hardware. Unreal Engine 5 is a lost cause nowadays without 64GB of RAM, Unity is a mess, and there need to be more options than Godot.
In my youth I cut my teeth on the Quake 2 SDK. Even without a 3D suite, armed with just a C compiler, I could get creating.
When the Rage toolkit became available, almost none of the community greeted it with the eagerness they had shown before. It was a 30GB+ download with some hefty base requirements. While Rage could run on a 4-core machine, not many gamers at the time had a 16-core Xeon and 16GB of RAM!
The worst the HL2 modding scene had to contend with was running Perl on Windows.
The other problem is that there are more variables involved than pure speed, one of which is CONSISTENCY of speed.
Folks are always surprised to see just how fast Java performs on benchmarks: it has a reputation as such a slow language, so how come it's able to execute almost as fast as C++?
But Java's problem isn't execution performance. It's:
* Startup performance. It takes time to instantiate the JVM. That matters for some, not for others.
* Consistent performance. This is the nature of garbage collection: it can surprise you when it runs, dropping a performance blip when you least expect it.
Most developers who think they need the fastest performance actually need CONSISTENT performance more than the fastest.
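If you want to see how much the collector actually contributes in your own process, the standard management beans expose cumulative collection counts and times. A minimal sketch (the class name is mine):

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    // Print cumulative GC counts and total time per collector.
    // Sample it periodically to spot collection spikes.
    public class GcWatch {
        public static void main(String[] args) throws InterruptedException {
            while (true) {
                for (GarbageCollectorMXBean gc :
                        ManagementFactory.getGarbageCollectorMXBeans()) {
                    System.out.printf("%s: %d collections, %d ms total%n",
                            gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
                }
                Thread.sleep(1000); // once a second is plenty
            }
        }
    }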
> * Consistent performance. This is the nature of garbage collection: it can surprise you when it runs, dropping a performance blip when you least expect it.
This has been getting much better in Java in recent years. Shenandoah became production-ready in Java 15, and we've been seeing further improvements since then.
> Processing thread stacks concurrently gives us reliable sub-millisecond pauses in JDK 17.
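For reference, the low-pause collectors are opt-in launch flags, roughly like this (tuning varies by workload, and the jar name is just a placeholder):

    java -XX:+UseZGC -jar app.jar           # ZGC
    java -XX:+UseShenandoahGC -jar app.jar  # Shenandoah, where the build includes it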
I even see some video games (say, Dome Keeper) that periodically go out to lunch even on a top-of-the-line gaming PC, and I gather that kind of game is often written in C#, which has a garbage collector. Thing is, I remember playing games written in C# on the much weaker PS Vita, and I never remember pauses like that.
VR games are now the frontier of gaming (in terms of the industry outdoing itself), and there you're approaching hard real-time requirements, because getting seasick is so awful. I appreciate it.
I think it might have more to do with the specific implementation than the tool.
Most incarnations of C# offer a GC.Collect method and the ability to configure the threading model around collections. Used properly, these can keep maximum delays well-bounded. You still have to work your ass off to minimize allocations, but when some inevitably occur due to framework internals, etc., you don't want to be cleaning up 3 hours' worth at once. Do it every frame or scene change.
These hitches (in Unity-land, anyway) usually mean the dev is allocating a lot of short-lived objects in their render loop. Even little heuristics like keeping the new keyword out of Update() and using object pools prevent the majority of problems like this for us over on the Unity side.
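For anyone who hasn't seen the pattern: an object pool is just a free list you recycle from instead of allocating. A minimal sketch, in Java rather than Unity's C# since the thread has been talking JVM; the Particle class is hypothetical:

    import java.util.ArrayDeque;

    // Reuse instances instead of allocating in the per-frame loop,
    // so the collector has nothing to clean up mid-game.
    final class ParticlePool {
        static final class Particle { float x, y, vx, vy; } // hypothetical pooled object

        private final ArrayDeque<Particle> free = new ArrayDeque<>();

        Particle acquire() {
            Particle p = free.poll();
            return (p != null) ? p : new Particle(); // allocate only on a pool miss
        }

        void release(Particle p) {
            free.push(p); // hand it back instead of leaving it for the GC
        }
    }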
Godot has some perf warts still being ironed out, so I don't know what they've got to do to finesse around them. Even if GDScript were gcc-grade greased lightning, it's really easy to crunch yourself into some compromises when you're trying to eat your way out of the bottom of ticket mountain before Next Fest or you starve or whatever. Small-team gamedev is wild.
I also wonder if my big computer with 64GB is part of the problem: if I allocated less RAM to the process, would the GC pauses be shorter but more frequent?
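In JVM terms (other runtimes have equivalents) that's easy to test: cap the heap and see what changes. The jar name is a placeholder:

    java -Xmx1g -Xms1g -jar game.jar  # smaller heap: collections trigger sooner; whether pauses shrink depends on the collector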
Java's problem is a culture of overly generic solutions (vendor neutral!) that then try to fix architectural performance issues by liberally adding caching. This makes both the startup time and the consistency way worse than they need to be.
Much of that (potential) compilation speed is due to Wirth's personality, but his circumstances also helped.
For industrial languages, programs will be compiled a few times while developing and debugging, and then run many, many more times in production.
For a teaching language, programs will be compiled more than a few times while learning and debugging, and then run never.
This situation has the corollary that teaching languages ought almost always to opt in favour of compilation speed over speed of compiled code.
(when I started programming, developers shared a minicomputer CPU, which meant that even with Unix, which prioritises editor threads over compiler threads, ends of quarters were brutal for load average and hence system responsiveness)
You say this as if fast compilation speed somehow means bad performance.
This is a false dichotomy. Pascal compiles fast and runs fast.
Then there's the fact that the performance of a program depends on far more than the language it's written in, and faster iteration cycles ease optimization and profiling at the level of algorithms, which is where the biggest gains are. Squeezing out that potential 3% speedup by switching languages is rarely worth it then.
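To make that concrete: an algorithmic change routinely beats any constant-factor language win. A toy comparison, in Java just because it's compact (illustrative only):

    import java.util.HashSet;
    import java.util.Set;

    // The O(n^2) -> O(n) change dwarfs a 3% language speedup.
    public class Dupes {
        // Naive: compare every pair.
        static boolean hasDuplicateSlow(int[] a) {
            for (int i = 0; i < a.length; i++)
                for (int j = i + 1; j < a.length; j++)
                    if (a[i] == a[j]) return true;
            return false;
        }

        // Same answer in one pass with a set.
        static boolean hasDuplicateFast(int[] a) {
            Set<Integer> seen = new HashSet<>();
            for (int x : a)
                if (!seen.add(x)) return true; // add() returns false on a repeat
            return false;
        }
    }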
True for Turbo. Not true for the truly atrocious UCSD Pascal.
Computer scientists circa 1980 were driven absolutely crazy by BASIC becoming the dominant teaching language in primary schools; if UCSD Pascal was the alternative, though, you were better off writing assembly.
Pascal = Turbo Pascal if we're talking late 80s/early 90s.
Everything else is blasphemy.
After that, Pascal = Delphi (for quite a while).
The fact that both are commercial, closed-source (and, in the case of Delphi, quite pricey) packages is pretty much the sole reason Pascal lost out in popularity to languages that mere mortals could use at home without resorting to piracy.
It took Embarcadero over a decade to realize that offering a free "Community Edition" of the IDE is a must to stay relevant. It was too late by then.
Some people in the 1990s thought that open source, particularly the GNU suite, killed off the market for “cheap and cheerful” dev tools, thus we haven’t had a Borland ever since. Today innovation in programming tools tends to come from the hugest companies (Microsoft, Oracle, Google, Facebook) and even successful open source tools such as the LLVM-based compiler suite owe a lot to big funders like Apple.
(Possibly JetBrains is an exception, though my take is that their attempts to introduce new languages like Kotlin and radical frameworks like MPS haven't been as impactful as getting the bugs and slowness out of Eclipse.)