Everyone probably knows this already, but Infocom made the Zork-style text adventure games. They chose to author to a virtual machine, so each game only had to be written once and each platform only needed a port of the interpreter, making their entire catalog playable across a ton of machines.
Today, their software is very accessible. Porting a Z-machine interpreter is not a heavy lift, and once it's done, the whole library of interactive fiction games becomes available.
This machine might make a great target for microcontrollers and other smaller-scale systems. Could be fun, and a source of small-scale software for people to explore and build on.
Damn cool! First I've read about this.
- V3 runs perfectly: both Infocom games and PunyInform ones. The best would be the 12th release of Curses in V3 format.
- A V5 game would run pretty slowly on a C64, for example; you'd need a Turbo Chameleon to run the games at acceptable speed. By acceptable I mean a lag of at most 4-5 seconds.
- V8 games like Anchorhead are a no-no on an 8-bit machine. Using telnet/serial against a 16-bit-or-better machine would be the sanest option.
I have an Apple //e Platinum running at 16 MHz. It might handle V8 fine.
Running Anchorhead was impossible, and even V5 games such as Vicious Cycles were borderline unusable. I mean, playable, with a lag of 8-10 seconds after entering a command, but it gets tedious fast.
If you want artistic sensibility, fine. But pretending that a personal crusade to build permanent computers simple enough for a sophomore undergrad to construct is somehow a noble cause is treating your hobby as cosmically important, and it's utterly delusional.
It's like people who bloviate about GPL licenses: you can tell they've never gone to bed hungry.
Just look at all the lessons to be learned from systems of the past. And what can actually be done on them!
TL;DR: avoid personal dogma in tech. Or don't; I'm not your dad, and who's he to tell you what to do anyway?
Lately I've been thinking a lot along similar lines, but have come to a wildly different conclusion about how to do it. Of course, they are explicitly limiting their use-cases, so that's to be expected. I generally agree with the idea of a VM that is simple to implement, but I think the result shouldn't look like real hardware, with memory addresses and registers and a stack and all that; it should be more abstract and leave more implementation details up to the interpreter/JIT.
Regardless, it makes me slightly less gloomy about the future of computing whenever I see more people coming to similar conclusions.
You might appreciate Urbit's Nock, a "functional assembly language" based on combinator calculus, which can be specified in ~40 lines of text and implemented in ~100 lines of code:
As one might imagine, a naïve implementation of this spec would be extremely slow (for example, decrementing n requires incrementing another variable m in a loop until m + 1 == n, which is O(n)). A regular JIT can make even the braindead implementation decently fast, but in practice the interpreter recognizes specific formulas and replaces them with faster native implementations (Urbit calls these "jets"). The benefit of this approach is that the code itself is rigorously specified by the combinator definition, and you can point the finger at the interpreter if the output of the optimized version ever differs from the naïve implementation.
Yeah, see that seems to go way too far, but then again this stood out as similar to ideas I've had:
> The algorithm for decrementing an atom is to count up to it, an O(n) operation. But if the interpreter knows it's running a decrement formula, it can use the CPU to decrement directly.
I had basically the same idea, only since my ideas still look a lot more like an instruction set, it took the form of explicit annotations identifying functions that could be replaced with a specialized instruction if one is available.
I'll have to look into this thing. Thanks.
You can go really, really far from a computing system being a Turing machine with arithmetic if you work at it; after all, that's the thing you already have in every programming language. But I think getting away from that also means getting down and dirty with your chosen benchmark of expressiveness, accepting that the system won't do some things well, and, if you want multiple things coordinated, being careful about how you express that too, to avoid just making a Turing tarpit.
They've got a Patreon, which I've happily supported for a while https://www.patreon.com/100.
This is a cool project, but it doesn't seem to me like it demonstrates great foresight or maintainability. The compromises made in commercial hardware and software aren't made just to be evil; they also make things affordable, useful, and widely available.
> Uxn has only 32 instructions
The 6502 has 56 mnemonics, or around 30 if you treat the A/X/Y forms (LDA/LDX/LDY and so on) as register addressing modes of a single instruction rather than as separate instructions.
And don't get me wrong, it's awesome. It's a tiny stack machine complete with its own language! And graphic capabilities!
If the page just said "I made it just for the heck of it" I wouldn't like it any less.
And really, I doubt that many people make things "just for the heck of it". In reality, they probably have motivations but don't, won't, or can't articulate them.
Some people think of the Pico-8 as this but it's actually got pretty high minimum specs! Higher than the GBA! (Which is my typical "runs anywhere" platform of choice)
Maybe the graphics are a bit too limited though? 2bpp for the entire screen is a tough sell (Wassup my Apple ][ and Amstrad CPC fans)
The sound limitations are pretty cool and remind me of the Amiga.
As for the ASM spec, I'll have to wait and see what more experienced programmers have to say on that one. I remember people picking over the DCPU-16's ASM and revisions being needed.
All it does is output characters to stdout, but you can do really interesting things with that micro VM and an ANSI graphics terminal. I've also started messing with Tektronix vector graphics, which xterm supports.
I so completely admire their skill, dedication to sustainable living, and aesthetic. They're on a completely different level of hip.
Kudos to them!
Someone got it running on the RPi Pico: http://cowlark.com/2021-02-16-fuzix-pi-pico/
There's also mruby; version 3 can load applications in just 100 KB.
How would these projects compare or differ with the minimalism and design goals of uxn?
less is more from MIT
Here's how fast it renders animations on a Memory Display:
The DVD bounce program is 317 bytes long when assembled:
FUZIX assumes you have a machine of some kind already, and a C compiler for it. I suppose if someone wrote a C compiler targeting Uxn, one could port FUZIX to it in principle.
In a similar vein, wasm has been used for archival purposes. It's likely that a wasm binary compiled today will be able to run many decades in the future, perhaps on architectures not even dreamt of yet.
"Why Create A Small Virtual Computer?
We want to produce lasting versions of our tools and games, and by using simpler systems (the UxnVM is only 200 lines of C) we can build more resilient software due to their lack of dependencies, and support older hardware when possible.
The Uxn emulator is extremely simple, and can be ported to an unsupported system fairly easily."