Snakeware – Linux distro with Python userspace inspired by Commodore 64 (github.com/joshiemoore)
436 points by AnIdiotOnTheNet on June 2, 2020 | 158 comments

This is exceedingly cool. It could be a modern-day Lisp Machine that takes advantage of the Linux kernel's great hardware support. Since Python is such a dynamic language, I'd be interested in how they plan to allow the Python processes to communicate. This would be a great opportunity to avoid the classic Unix mistake of using text files as the lowest common denominator everywhere; a structured data format like S-expressions could be passed around instead.

Edit: Other languages I'm thinking of for this idea:

* Lua or LuaJIT (even more dynamic and really great performance; smaller and somewhat fragmented ecosystem)

* Scheme (potentially also fast and good S-expression support, also very dynamic; small and very fragmented ecosystem)
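A toy sketch of the kind of structured IPC I have in mind, assuming nothing about how snakeware actually does it: two Python processes sharing an ordinary pipe, but passing a structured record (JSON here, standing in for S-expressions or any other structured format) instead of ad-hoc text lines:

```python
# Hypothetical sketch: structured data over a plain Unix pipe between two
# Python processes. JSON is a stand-in; the record fields are made up.
import json
import os

r, w = os.pipe()
pid = os.fork()
if pid == 0:                       # child: the "producer" process
    os.close(r)
    record = {"name": "snakeware", "pid": os.getpid(), "tags": ["python", "distro"]}
    os.write(w, json.dumps(record).encode() + b"\n")
    os.close(w)
    os._exit(0)
else:                              # parent: the "consumer" process
    os.close(w)
    data = os.read(r, 4096)
    record = json.loads(data)      # structured data, no ad-hoc text parsing
    os.close(r)
    os.waitpid(pid, 0)
    print(record["name"], record["tags"])
```

The consumer gets fields by name rather than by splitting whitespace, which is the whole point of avoiding text as the lowest common denominator.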

> This would be a great opportunity to avoid the classic Unix mistake of using text files as the lowest common denominator everywhere

There are a great many who do not consider this to be a mistake. The fact that after half a century, Unix-like operating systems and programming environments are de rigueur is plain enough evidence of this. I can't think of much else in the whole field of computing that has had this kind of staying power over such a long period of time, excepting perhaps the idea of binary digital computers themselves.

Fun history note: Part of the reason the Unix userspace and development chain lean so heavily on text processing is because its inventors were tasked with designing a system for more powerful tools for the writing and production of technical documentation. But they were dyed-in-the-wool systems and programming nerds so stand-alone documentation tools were not very interesting. So their solution was to invent a new OS, borrowing the features they liked from other OSes they had been exposed to, under the guise of delivering a documentation production system. If it turned out that this Unix thing they came up with happened to be an excellent programming environment as well, that was just a nice bonus as far as management was to be informed.

One of the greatest hacks of computing history, if you ask me.

Edit to add: As others have pointed out, you are hardly restricted to piping around text. And if you do, the power of text is that you can impose arbitrary structure on top of it, e.g. tabular data, shell scripts, XML, YAML, JSON, etc. Saying it's too bad Unix abstracts everything as text files is like saying it's too bad microprocessors abstract everything down to either a one or a zero when far more interesting numbers exist.

> The fact that after half a century, Unix-like operating systems and programming environments are de rigueur is plain enough evidence of this.

Unfortunately Unix blew a 28-3 lead against Netscape Navigator and now 99% of people spend their time in web browsers instead.

I think this speaks to the weakness of the experience using Unix, not its strength.

> Unfortunately Unix blew a 28-3 lead against Netscape Navigator and now 99% of people spend their time in web browsers instead.

There might be exceptions, but people generally don't develop software in their browser.

Thanks for the serious response to a somewhat snarky comment.

I see software development as only one among the many things a computing system should enable. It's definitely an important one. But computers should enable creativity and power for all kinds of tasks, not just programming.

It's sad to me that people in all these different fields besides ours get stuck with webapps. I understand why they use them, they're very convenient. But it would have been great if we could have provided them an even better OS/desktop environment so that they never would have had to switch to such an unempowering tool.

Additionally, programmers being one of the last holdouts of actually engaging with Unix-as-a-way-of-doing-things is a precarious place to be. Microsoft/GitHub is on full "embrace" mode again, and it looks like they're going to make another run at slurping everyone into their ecosystem. Same with tools like Repl.it. It seems to me that the proportion of people who are engaged with the Unix way of doing things (piping text around, etc.) is shrinking, not growing.

EDIT: To get myself back on topic, for these reasons I see people experimenting with new (and crucially local) environments as really exciting. Traditional Unix environments don't need to be the last word, we can do better.

Apple went down a very interesting path during the interregnum years from 1985 through 1996. It was a beacon of creativity where some of its engineers worked on projects that offered new visions of personal computing that empowered the user without requiring them to be full-fledged programmers. HyperCard and AppleScript are examples of such projects, and the Knowledge Navigator, while not an actual product, was one of Apple's visions. I also find OpenDoc to be quite intriguing; OpenDoc would have introduced component-based GUI software to the mass public. Users could perform tasks by mixing-and-matching components and applying them to a document.

It would have been really cool to have seen Copland and Gershwin in their completed forms, but sadly, as creative as Apple was in the 1990s, its management was a mess until Steve Jobs returned, and so Copland never saw the light of day other than a few developer releases.

I am grateful for the work that Steve Jobs did at NeXT and for his amazing work returning to Apple, saving it from possible bankruptcy, and making it the juggernaut that it is today. I've long been a fan of Mac OS X. However, parts of me miss the Apple of the 1980s and 1990s, where the engineers and researchers there explored ideas of how to make personal computing better. I feel that the personal computing experience for desktops has been stuck in a rut for nearly 20 years, with some aspects actually degrading rather than improving.

But projects such as Snakeware give me hope that we'll see desktop computing move forward. I'm actually working on a side project where I'm creating a Common Lisp-based desktop environment inspired by Lisp machines and OpenDoc that can run on Linux and the BSDs, but I'm in the design phase and I'm far from writing any code (I've been studying up on graphics programming since part of this project involves writing a new GUI toolkit). There is still a lot of room for innovation on the desktop, and as you said, Unix isn't the last word, even for programming.

> However, parts of me miss the Apple of the 1980s and 1990s, where the engineers and researchers there explored ideas of how to make personal computing better. I feel that the personal computing experience for desktops has been stuck in a rut for nearly 20 years, with some aspects actually degrading rather than improving.

I totally agree with this, and think it's an incredibly important issue.

If you post anything public about your Lisp-based DE please feel free to let me know (email in profile).

And given the hegemony of Windows desktops and game consoles, plenty also don't develop software on UNIX.

Piping data from one command to the next is one of those things that I love showing to new users and it gets them all excited, only to then have to disappoint them when they think of a cool idea because they'd need to learn regex, sed and awk first to do anything non-trivial.

It's a shame Powershell is a disaster, because having a structured-by-default pipeline would've really bridged the gap between power users and programmers.

Disaster? It feels quite good over here.

> It's a shame Powershell is a disaster

Yeah, let's get back to bash forever :S

Bash is a good shell with a fairly unimpressive pipeline. Powershell is a bad shell with a good pipeline.

For me, as a programmer, the shell part of a shell is more important, as I can do the rest in a more powerful language. Powershell is too verbose for shell usage (see New-Item), brings nothing new to the table in terms of scripting and is ultimately not worth using just for the better pipeline.

(another commenter pointed me to Nushell, which seems to be a very good middle ground, and I've heard of a few more similar efforts, so we hopefully won't be stuck with just bash for too long)

Oh, here we go again with verbosity, as if aliases are non-existent. For realz, New-Item is ni, for example, or mkdir, or whatever...

> as I can do the rest in a more powerful language

Powershell is like a REPL for dotNet. You can't really get more powerful.

> brings nothing new to the table in terms of scripting

Are you serious? I guess somebody has a lot of learning to do.

> (another commenter pointed me to Nushell, which seems to be a very good middle ground, and I've heard of a few more similar efforts, so we hopefully won't be stuck with just bash for too long)

We've been down that road a number of times before...

> Oh, here we go again with verbosity like aliases are non existent

Aliases would have to be standardised (and I don't mean the crap they pulled with aliasing wget) and regardless, only provide an illusion of simplicity that breaks down the moment you try to do anything more advanced.

> Powershell is like repl for dotNet.

The only place dotNet actually matters is on Windows. Everywhere else it's just unnecessary bloat.

> You can't really get more powerfull.

I mean, yeah, technically anything Turing-complete is as powerful as it gets. But things like generators, coroutines, easy multi{thread,process}ing, good support for object-oriented and functional patterns... are what I'm looking for in a powerful language.
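A small Python sketch of the sort of composability I mean: generator stages chained together like shell pipes, but with real values flowing through rather than bytes to re-parse:

```python
# Generator stages composed into a pipeline, shell-pipe style.
def numbers():
    """Source stage: emit a stream of values."""
    yield from range(10)

def squares(xs):
    """Transform stage: square each value as it flows past."""
    for x in xs:
        yield x * x

def evens(xs):
    """Filter stage: keep only even values."""
    return (x for x in xs if x % 2 == 0)

# Equivalent in spirit to `numbers | squares | evens` in a shell:
result = list(evens(squares(numbers())))
print(result)
```

Each stage is lazy, so the pipeline processes one item at a time without buffering the whole stream.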

> Are you serious ? I guess somebody has a lot of learning to do.

Besides structured data, which was the entire point of this discussion, I don't know of anything Powershell can do that other scripting languages can't. I am, of course, not counting the many system and application hooks, as those are usually 1) not part of Powershell itself, 2) Windows-only and 3) were already possible on Linux (unlike on Windows, basically everything has CLI support here).

In all of this, btw, I was not considering Python a scripting language, which I easily could have (there are even a few neat shells for it). Once you have Python, it's barely even a contest.

> Aliases would have to be standardised (and I don't mean the crap they pulled with aliasing wget) and regardless, only provide an illusion of simplicity that breaks down the moment you try to do anything more advanced.

OK, wget/curl was a bad idea, but let's not overreact here about one bad judgement. Illusion of simplicity? That breaks? C'mon... I never had a single problem with aliases until pwsh, and I use Powershell 24x7, almost exclusively in aliased form. Also, Powershell does standardize aliases: once they are in, they never change (except when it went x-plat, but everything changed, so that's a moot point).

> The only place dotNet actually matters on Windows. Everywhere else it's just unnecessary bloat.

Let me just say here that that entire sentence is crap :) I do not work for MS, but life became good when it was x-platformed. MS's reputation was bad in the Linux world, but that's a social construct that is now starting to fade away. I am into technology, not into that social mumbo jumbo BS.

> I mean, yeah, technically anything Turing-complete is as powerful as it gets.

You can't really compare PowerShell with mainstream languages, since it's made for the shell context. Regardless, you can do all that fancy stuff in C# and compile it inline. Native Powershell has all the things you mention though; it seems you need to take a good look into it (except coroutines, but who cares about those in a shell?).

> I don't know of anything Powershell can do that other scripting languages can't.

Let me cite a well-known person back to you: "I mean, yeah, technically anything Turing-complete is as powerful as it gets." The point is to do it in an easy-to-understand and concise manner, without reinventing the entire universe in the process.

> In all of this, btw, I was not considering Python a scripting language, which I easily could have (there are even a few neat shells for it). Once you have Python, it's barely even a contest.

I was waiting for this: let's get Python (or Ruby, or <insert-non-shell-language-here>). Let's be serious about this.

> aliases, etc.

Aliases only hide verbosity in base commands. If the parameters and switches are overly verbose too, they can't help you there.

> dotNet

I'm not saying it's not good and certainly not just because it's from Microsoft. I've done Windows and Linux development and while it's the way to go on Windows, I've never missed it on Linux.

> You can't really compare PowerShell with mainstream languages

I never intended to, but you called it as powerful as it gets, which really isn't true. I was just giving you examples of why it isn't.

> reinventing the universe

See, to me, the reason I can't switch to powershell is because it seems to reinvent too much - both from a shell and a scripting perspective.

> Python

The whole reason why I brought it up was to continue my original point of "unimpressive for scripting". While I appreciate the convenience of using the same language for shell and scripting, I have no problem using a non-shell language to do more complex things.

Either way, you have successfully convinced me to give Powershell a try one more time once I get some free time to experiment.

> If the parameters and switches are overly verbose too, they can't help you there.

There are parameter aliases too, plus proxy vars, and you can shorten a parameter name to any unique prefix, which gets you as far as you can get anywhere.

> Either way, you have successfully convinced me to give Powershell a try one more time once I get some free time to experiment.

That's good to know :) Do it right, and you won't regret it, I can promise that.

> It's a shame Powershell is a disaster

What's your take on nushell?

I looked it up and it looks like basically exactly what I want. Thanks for the suggestion. I'll definitely give it a try in the following weeks.

"ex" stands for extraterrestrial.

Powershell is a disaster? Did no one tell Microsoft or the hundreds of vendors supporting Powershell as an interface to their product?

Wasn't UNIX originally used to format patent applications at Bell Labs/AT&T? That was purportedly the selling point of UNIX to the business people.

When someone suggests using something other than "text" (usually it's some binary format, I consider S-Expressions to be text), I always wonder, why is this person writing their suggestion using text?

Then we have the suggestions that "everything is a file" is also a "mistake". That's great but if a "file" is not a good enough abstraction then what is?

Text is a human language.

Computers do not really run on text. You can't talk to Unix like it's the bridge computer in Star Trek. No, Unix runs on bytes that sort of look like text because we have encoded and rendered it that way.

What computers need most for structured processing are data types. At instruction set level we had a period of experimentation with byte and word lengths which was still in effect when Unix was born. At that time we also had experimentation with text encodings such as EBCDIC and the various ASCII codepages. Since then we have pretty much settled on standards based on multiples of an 8-bit byte, and on a common text encoding - UTF-8. At the time of Unix's inception the byte stream was a relatively bold move because it enforced variable-length encodings of bytes as the building block of interoperability, versus some kind of static record. Now it is mundane, and the world has moved on to always parsing the bytes for some other purpose.
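That distinction is visible in one line of any modern language: what flows through a pipe is bytes, and "text" appears only after an encoding convention is applied. A trivial Python illustration:

```python
s = "π = 3.14159"             # what a human reads as text
b = s.encode("utf-8")         # the bytes that actually flow through a pipe

# "π" alone takes two bytes under UTF-8, so the byte view is one longer
# than the character view:
longer = len(b) - len(s)

roundtrip = b.decode("utf-8")  # rendering the bytes back into text
print(longer, roundtrip)
```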

We desire standards that encode data containers, strings and numeric primitives, so that we do less parsing. XML and S-expressions offer two methods of encoding hierarchies of strings. More recently, JSON has become a very common encoding. It encodes strings, numeric values, and two types of hierarchical containers(key-value and array). We also have SQL databases as an older example, which encode various primitives and a variety of tabular data relationships. With each data language we get one or more associated query languages to express how we select data.

Unix's handling of files is incomplete because of the ambiguity of queries on files. We rely heavily on relationships between directories and on glob syntax to perform selection in a Unix filesystem. Unix paths and glob syntax are not really a byte stream or a file, but they are most certainly part of Unix, and you would be suffering in short order without those concepts. Yet Unix does not respect the existence of its own query language, and never declares a basic system representation for queries and their results akin to the results of a SQL query. Instead we parse the query as bytes and output the results of queries to bytes and parse those bytes, which means there is no guarantee of support at any of the boundaries where your program touches paths. As long as you only deal with one file at a time you don't have an issue, but at scale, the way we work with our applications needs more complex selections on files; the workaround is to "bottle up" the data into one file that has some other structure, which then necessitates special tools to move around the data.

And if you look at what people complain about when working with the shell, the behavior of file selection is a major complaint, and adds a lot of complexity.
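The contrast is easy to see even from a scripting language: a query API that hands back real path objects, rather than bytes that have to be re-split and re-parsed at every boundary. A small sketch with made-up file names:

```python
# Sketch: treating a filesystem query as structured data rather than bytes.
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())
for name in ("a.conf", "b.conf", "notes.txt"):
    (root / name).write_text("x")

# The "query" (a glob) returns path objects, not a byte stream to re-split:
matches = sorted(root.glob("*.conf"))
results = [{"path": p.name, "size": p.stat().st_size} for p in matches]
print(results)
```

The result set is already structured, so a downstream consumer never has to guess where one path ends and the next begins.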

> When someone suggests using something other than "text" (usually it's some binary format, I consider S-Expressions to be text), I always wonder, why is this person writing their suggestion using text?

Of course, Unix actually operates on streams of bytes, which often represent text, but not always.

Linux is the best operating system for a project like this. The system call binary interface is stable at the processor architecture level:




Any Linux process can perform Linux system calls, no matter what language the software is written in. Instead of relying on C libraries to do everything, a compiler could emit system call code when necessary. The entire Linux user space could be rewritten in a higher level language that way. Rust would be the likely candidate but there's no reason a Lisp or Scheme couldn't do it. The language would need full support for features of binary interfaces such as function calling conventions, pointers, structures with padding and alignment, etc.
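For illustration, you can hit that stable syscall ABI by number even from Python (here via libc's generic syscall entry point, which is a convenience, not a requirement). The number 39 is getpid on x86-64 only, which is why this sketch guards on the architecture:

```python
# Sketch: invoking a Linux system call directly by number. Syscall numbers
# are architecture-specific; 39 is getpid on x86-64.
import ctypes
import os
import platform

if platform.machine() == "x86_64":
    libc = ctypes.CDLL(None, use_errno=True)
    SYS_getpid = 39                      # x86-64 syscall number for getpid
    pid = libc.syscall(SYS_getpid)       # ask the kernel directly, by number
else:
    pid = os.getpid()                    # other architectures: fall back

print(pid)
```

A compiler emitting the syscall instruction itself would skip even the libc trampoline; the kernel-side contract is the same either way.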

Do you know of any projects exploring this space?

The Go runtime apparently uses system calls directly. The implementation of every other programming language I know uses the C standard library. Rust has a freestanding mode which is great. I even made a crate for Linux system calls but that needs inline assembly which isn't stable yet.

As for user space projects, I know this one:


I meant to start my own but replacing what already exists is a lifetime of work. Haven't had much free time these days nor the willpower to make it happen. I created a liblinux but it's incomplete:


I also found out the Linux kernel itself has an excellent system call stubs header which they apparently use for their own tools:


Go does syscalls directly on Linux. Libc is only needed if you use a thing that only exists in libc (e.g. NSS).

In that vein, Zig also uses syscalls directly on Linux in its standard library so as to avoid linking with C unless desired. This is not possible on the various BSDs.

In the matter of Scheme, you have the Guile project. https://www.gnu.org/software/guile/guile.html

So maybe GuixSD?

> This would be a great opportunity to avoid the classic Unix mistake of using text files as the lowest common denominator everywhere

Text files are just a binary format that happens to be easily readable by humans, thus making the hurdle of parsing or generating something that uses that format much lower. It is usually this hurdle, rather than close-to-the-metal sympathies, that matters most to longevity and widespread success; cf. JSON vs. Protocol Buffers.

Text files are easily readable on a Teletype or a VT100. That was a solid design decision back in the day when you had to say "screen editor" to distinguish them from the ordinary line editors.

We didn't have interactive debuggers and inspectors and loggers, so we did everything we could to make systems naturally debuggable. It's like how the Model T engine cover just lifted off with a big door handle. When a technology is new, people using it are expected to be mechanics, too.

But today, even amateur wrote-it-over-the-weekend programs have realtime visual editing of rich data. There are countless free libraries for serialization of every format, and GUI (and TUI) libraries for every toolkit/language/style. Developers enjoy using JSON because it lowers the "hurdle of parsing or generating". It's not 1970 any more.

It's been done using Tcl [0] in the nineties, in the EtLinux [1] project for embedded systems. One of the goals was to shave off memory usage by implementing tools under a single scripting interpreter (instead of each of them having their own binary). Kind of overlaps with BusyBox [2] philosophy...

[0] https://www.tcl.tk/

[1] http://www.etlinux.org/

[2] https://busybox.net/FAQ.html#goals

Thinking about structured data at the OS level makes me want to see a revival of Classic Mac OS's resource fork.

Structured metadata in every file is a dream of mine.

The resource fork idea was OK, but it suffered from an implementation optimized for a floppy disk. Writes were deferred as long as possible, and if anything went wrong, the whole data structure was corrupted.

"Text Edit is not a text editor, and the resource fork is not a database" - early Apple publication. Both could have been. Programs tend to need little databases, for preferences and such, and the resource fork was way ahead of doing everything with a text file. But the implementation was never toughened up to database standards after the Macintosh got a hard drive and more memory. And Text Edit was limited to 32K for a long time.

Meanwhile, Unix struggled along with text files for everything. And lock files for the text files. And daemons to remove dead lock files. And a lack of file locking. And a lack of interprocess communication. Eventually the Unix/Linux world got that all fixed, but way too late.

This could be handled by convention of treating directories as though they were a single structured "file" under certain circumstances. There is precedent in the form of Application Directories from NeXT, RiscOS, and ROX Desktop. Modern MS Office files are also basically just a zipped up directory of files and metadata.
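A toy sketch of that last point, with made-up part names: a "document" that is really just a zipped directory tree, the same shape as a modern .docx:

```python
# Sketch: a directory-of-parts treated as one structured "file" via zip,
# the way modern Office formats do it. Part names here are invented.
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:          # write the "document bundle"
    z.writestr("content.xml", "<doc>hello</doc>")
    z.writestr("meta/author.txt", "me")

with zipfile.ZipFile(buf) as z:               # reopen it as a plain zip
    names = z.namelist()
    body = z.read("content.xml").decode()

print(names, body)
```

Any zip-aware tool can open the bundle, but applications that know the convention see a single structured document.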

I have a similar idea. I've been working on a design for a Common Lisp-based desktop environment that can run on top of Linux and the BSDs. It's inspired by the Lisp machines of old and OpenDoc, an API for building component-based software that was championed by Apple and IBM back in the 1990s before Steve Jobs returned. I haven't started writing any code yet, partly because I've been learning X programming and other parts of the graphics stack like Cairo and Pango.

I have some more ideas at http://mmcthrow-musings.blogspot.com/2020/04/a-proposal-for-..., though I wrote this before I committed to using Common Lisp (I talk about using GNUstep near the end of the document, but I've since ruled it out due to the difficulty of finding modern Objective-C bindings for SBCL that work with GNUstep).

There is Mezzano, a Common Lisp operating system. Currently designed to be run within Virtualbox. https://github.com/froggey/Mezzano

> a structured data format like S-expressions could be passed around instead.

Why would a structured format be a better lowest common denominator than an unstructured one like text files? Or is it files that are the issue?

Unix pipes are great, but having to parse plain-text streams is inefficient and error-prone. Imagine how much more powerful it would be if commands returned data in structures that could be used by other applications without parsing an ad-hoc text format.

The filesystem can be used to approximate this, like it is in Plan 9, but I think that using text as the lowest common denominator leaves a lot of potential power on the table.
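A sketch of what I mean, assuming a hypothetical "structured ls" that yields records instead of columns of text:

```python
# Hypothetical structured-ls: directory listings as records, not text.
import os
import tempfile
from pathlib import Path

def structured_ls(path):
    """Yield one record per directory entry, instead of text columns."""
    for entry in os.scandir(path):
        st = entry.stat()
        yield {"name": entry.name, "size": st.st_size, "is_dir": entry.is_dir()}

# A consumer filters on named fields rather than splitting whitespace:
demo = Path(tempfile.mkdtemp())
(demo / "hello.txt").write_text("hi")
names = [e["name"] for e in structured_ls(demo)]
print(names)
```

Compare that with scraping the output of `ls -l`, where a space in a filename breaks every downstream consumer.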

Powershell has been growing on me lately. It uses objects to pass data around and it's quite nice.

Using powershell for a month or so and then going back to doing things with text-parsing UNIX commands really highlights how much better things could be if only we moved past doing things the same way they were done in the 70s.

Have you checked out NuShell?

I'm really sad I missed the boat so hard on Powershell.

I grabbed it at the first public release (beta? alpha?), and the thing was absurdly slow. Trying to `ls` took around 10 seconds. I nope'd out and didn't touch it again.

Now I have a coworker who uses it religiously (on linux, who would have thought!) and when I'm watching over his shoulder I'm astounded by how great it is.

How do people use environment variables with powershell? I often want to do `FOO=bar ./program`, but this doesn't seem to be supported. I googled briefly, but there weren't many satisfactory answers.

There isn't a directly supported way. You write a function to back up, run, and restore the old variable to get the same behaviour. Not very ideal.

    $Env:FOO='bar'; ./program

It's the equivalent of `export FOO='bar'; ./program`, which is not the same as `FOO='bar' ./program`, which, in turn, is equivalent to `OLD_FOO="$FOO"; export FOO='bar'; ./program; export FOO="$OLD_FOO"`.
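For comparison, the one-shot `FOO=bar ./program` behaviour is easy to express in a language with a real process API; in Python the child gets a modified copy of the environment while the parent's stays untouched, which is roughly what the shell does for you:

```python
# Sketch of `FOO=bar ./program` semantics: the override is scoped to the
# child process only.
import os
import subprocess

child_env = {**os.environ, "FOO": "bar"}          # copy, then override
out = subprocess.run(["sh", "-c", "echo $FOO"],
                     env=child_env, capture_output=True, text=True)
print(out.stdout.strip())      # the child saw FOO=bar; the parent never set it
```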

It may be nice for small amounts of data on the pipeline, but it is very slow compared to Unix pipes. See their own documentation, where they tell you not to do that: https://docs.microsoft.com/en-us/windows-server/administrati...

Interesting idea; it might be important to brush up on the implementation rationale of Unix.

To that end there is The Design of The Unix Operating System, Bach [1]. Or A Fast File System for Unix [2]. Or The Design and Implementation of a Log-Structured File System [3]. It seems to me that files solve the problem of how to deliver an economic memory management system, and they do it by paging cheap disk space so that it's an effective proxy for fast-but-expensive memory. Also, files, with their file pointers, can certainly be used as data structures, but this 'understanding' of what each section means is clearly variable, so it is best left to specific applications and not good to build into the system as a default. Different applications will want different structures for optimal performance...

[1] https://www.amazon.com/Design-UNIX-Operating-System/dp/01320...

[2] https://people.eecs.berkeley.edu/~brewer/cs262/FFS.pdf

[3] https://people.eecs.berkeley.edu/~brewer/cs262/LFS.pdf

Unix-like pipes are not restricted to sending text, they may send general bytestreams. Modern data serialization formats (flatbuffers, capnproto, msgpack, etc.) are highly efficient, and using serialization makes it easier to deploy applications in a distributed setting.
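Framing over a raw byte stream is only a few lines of code; here JSON stands in for the payload encoding (msgpack or capnproto would slot in the same way), and the 4-byte length prefix is the illustrative part:

```python
# Sketch: length-prefixed message framing over a byte stream, the kind of
# thing serialization pipelines do. JSON keeps the example stdlib-only.
import json
import struct

def pack(obj) -> bytes:
    """Encode one message: 4-byte big-endian length, then the payload."""
    payload = json.dumps(obj).encode()
    return struct.pack(">I", len(payload)) + payload

def unpack_stream(data: bytes):
    """Decode back-to-back messages by jumping length to length."""
    pos = 0
    while pos < len(data):
        (n,) = struct.unpack_from(">I", data, pos)
        yield json.loads(data[pos + 4 : pos + 4 + n])
        pos += 4 + n

stream = pack({"cmd": "ls"}) + pack({"cmd": "stat", "path": "/etc"})
msgs = list(unpack_stream(stream))
print(msgs)
```

The receiver never scans the payload bytes for delimiters, so the payload can contain anything, including newlines.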

The value is in having everything speak a common standard. In unix, that 'standard' is lines of text, which is inherently linear and lacks the ability to idiomatically represent hierarchy.

Conceivably, one could use modern serialization formats or we could do something a bit more interesting, like supporting some notion of "objects" directly in the operating system or runtime (in the case of powershell, the "runtime" is the .net vm; in unix it's the OS).


* Field type: S - 16 bytes, D - 32, L - 64, Q - 128, B - 256, P - 512, T - 1024.

* Field ID - 3 bytes ASCII Letters, numbers, '_';

* Field-value separator - 2 bytes: ':' and field value terminator, e.g. ': ' or ':"' or ':[' or ':{' or ':(' ;

* Field value - 9 bytes ASCII for S, 25 for D, and so on, padded with ' ';

* Field separator: 1 byte '\n';

  SOBJ: X23456789\n
  SFN_:"Length"  \n
  SFV_: 23       \n
  SEND:          \n
Such text can be parsed by both humans and computers, because the computer can just jump from field to field without parsing everything in between, the way humans do.
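A sketch of a reader for the format above, exercising only the 16-byte 'S' fields from the example records. The offsets follow the byte counts given (1-byte type, 3-byte ID, 2-byte separator, padded value, '\n'); note that the closing quote of a quoted value ends up inside the value field:

```python
# Sketch parser for the fixed-width field format described above.
FIELD_SIZES = {"S": 16, "D": 32, "L": 64, "Q": 128, "B": 256, "P": 512, "T": 1024}

def parse_fields(data: bytes):
    """Walk fields by fixed size: no scanning of the bytes in between."""
    fields, pos = [], 0
    while pos < len(data):
        size = FIELD_SIZES[chr(data[pos])]       # 1-byte field-type letter
        record = data[pos:pos + size].decode("ascii")
        field_id = record[1:4]                   # 3-byte field ID
        value = record[6:size - 1].rstrip()      # value with padding stripped
        fields.append((field_id, value))
        pos += size                              # constant-time jump
    return fields

msg = (b"SOBJ: X23456789\n"
       b'SFN_:"Length"  \n'
       b"SFV_: 23       \n"
       b"SEND:          \n")
print(parse_fields(msg))
```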

Sure, but the standard GNU or BSD userland doesn't do that, does it? If you're going to the trouble of implementing your own userspace, then you might as well use a consistent, proper data serialization format (either text or binary, but text is more accessible).

You don't need to send text (but you should by default, for the 99%+ of cases where the performance is good enough, to get the advantages of a human-readable format).

The simplicity of a pipe is a feature.

Ultimately, these "text" formats are simply a matter of convention that makes some binary-formatted streams easier to document for humans' benefit. There's nothing wrong with a binary format if it's properly documented in some way, and modern serialization formats can be augmented with schemas and other features that help provide this.

Being able to easily read and edit them matters too (pipes are just files so one might choose that file to be a persistent file on disk).

Having a non-text format would mean all your file-editing tools would have to understand this new binary format. So you'd have to rewrite every tool to manipulate that data as if it's text. By which point you've effectively just reinvented ASCII.

"This would be a great opportunity to avoid the classic Unix mistake of using text files"

Don't get it. Where is the mistake? I think it was its biggest strength. In the successor, Plan 9, everything is a file...

I'd love to see something like RPYC used for interprocess-communication.

When I was a kid, other kids had either a ZX Spectrum, a Commodore 64, or a BBC Micro. (Yeah, I knew one kid with a Dragon 32, poor sod. And someone with an Atari 400 or 800, whose hardware palette tricks and game cartridges I was jealous of.)

Anyway, the Commodore seemed the worst option in terms of habitability. Any graphics or sound programming largely consisted of poking random locations in memory. The included BASIC had no direct support for any of the interesting stuff.

The quality of the BASIC was also poor. The Beeb probably had the best implementation at the time with the Spectrum coming second.

So, I guess what I'm saying is that the C64 is a strange role model.

Until beaten by the Raspberry Pi, the Commodore 64 was the best selling single model of computer in history, Commodore having sold between 12 and 17 million units in its lifetime.

So I imagine it is inspired, at least in part, by its ubiquity.

My first computer was the VIC-20, and I grew up on the C-64, and today I thank the stars that they didn't have that stuff in their BASIC implementations. Because I had to deal with the machine on its terms, rather than on more comfortable terms, I believe I gained an understanding of and intuition for digital logic and processing fundamentals. I think this early grounding in low-level basics has allowed me to be flexible and adaptive in ways that many of my colleagues, who have only ever had practical experience operating at high levels of abstraction, seem to have trouble with.

Naturally this is all anecdote and should be weighted accordingly, but my sense is that these experiences ultimately were formative in rather positive ways, given my age and such at the time. And none of this is to disagree with your point... only to show that the primitive BASIC, coupled with, for the time, rather nice hardware for the price, wasn't necessarily bad.

The C64's BASIC was indeed very poor compared to the competition, but the hardware (and games) were pretty cool. People wrote games with assembler in all those early home computers.

The BASIC v2 was horrible, but still there were plenty of great games written in BASIC (Sid Meier's Pirates was mostly BASIC, for example).

Really!? Wow. I don't see how, will read about it. Do you have any link to read? (any excuse to read about Sid Meier's Pirates! and the C64 is good).

In any case, C64 BASIC "programming" meant that for graphics and audio you were mostly POKEing and PEEKing, since its instruction set was very limited. And when you're mostly PEEKing and POKEing directly at memory, you're barely writing BASIC, are you? ;)

edit: I tracked down a 2014 comment from HN saying the same as you: that Pirates! was mostly written in BASIC. I'm amazed. Now I really want to see the source code and/or read a "making of".

I don't remember if it was compiled BASIC or if you could actually freeze the game and print the listing. I do remember that it was mostly BASIC as I mentioned though, meaning that it resorted to ML routines when needed.

Just found out that it wasn't even compiled BASIC: https://www.c64-wiki.com/wiki/Pirates!#Miscellaneous

It is debatable if using PEEKs and POKEs for audio and video functions changes the fact that it is actually BASIC. You are only imitating LDA and STA opcodes, the rest is still BASIC.

Awesome. Thanks for the info! I'm really surprised by this. I didn't know you could do anything useful with C64's BASIC. I know I didn't. I waited till I had GW-BASIC on a PC XT to really learn how to program.

If Pirates! was mostly high level BASIC, I wish the source code was available somewhere to read. Kind of like Broderbund, which released the assembly language code for some of its old games...

It should be pretty straightforward to extract it from the image file using the modern emulator/debuggers. I believe the comments (if any) will be scarce, but still...

yeah, I remember typing out basic listings full of peek and poke without understanding why and what they did with those memory locations. (on commodore plus4)

This is wonderful. I’ve often fantasised about unifying the command line and the desktop into a single UI paradigm and this project could just give me a realistic shot at trying it out.

For anyone who is interested: much of this classic has yet to be tried properly, afaik: https://en.wikipedia.org/wiki/The_Humane_Interface

Another idea: instead of having Photoshop/Excel etc. clones, have a document centric rather than application centric UI and allow features to be installable when they are needed (micro-apps?). E.g. Gaussian blur or spellchecking could be things you’d add from a microapp directory which would add an extra button to your interface for images and documents respectively.. you’ll build up a personalised interface and ‘own’ it, rather than being presented with a sea of unknown buttons/menu-items.

> unifying the command line and the desktop into a single UI

This is how computers used to be. I had an Apple II, which was exactly this: if you booted without a disk in, you just got a BASIC prompt.

I'm just now falling down this rabbit hole of working with frame buffers as a result of this project and now I'm curious. Did you develop your own window manager? This is really fascinating stuff and makes it really simple to create low-level things with a high-level language. [1] [2]

Also, does OpenGL work?

[1] http://seenaburns.com/2018/04/04/writing-to-the-framebuffer/ [2] https://www.kernel.org/doc/Documentation/fb/framebuffer.txt
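
For anyone curious how little code this takes: the core of what [1] describes is just mmap plus arithmetic. A hedged sketch, not snakewm's actual code - the 32-bit BGRX pixel format and the 1920x1080 geometry are assumptions, and real code should query them via the FBIOGET_VSCREENINFO ioctl:

```python
import mmap
import os
import struct

def pixel_bytes(r, g, b):
    """Pack an RGB color as little-endian 32-bit BGRX (a common fb0 layout)."""
    return struct.pack("<I", (r << 16) | (g << 8) | b)

def pixel_offset(x, y, stride_bytes, bytes_per_pixel=4):
    """Byte offset of pixel (x, y) in a linear framebuffer."""
    return y * stride_bytes + x * bytes_per_pixel

def fill_rect(fb, x0, y0, w, h, color, stride_bytes):
    """Write a solid w*h rectangle into a mapped framebuffer (or bytearray)."""
    row = pixel_bytes(*color) * w
    for y in range(y0, y0 + h):
        off = pixel_offset(x0, y, stride_bytes)
        fb[off:off + len(row)] = row

if __name__ == "__main__" and os.access("/dev/fb0", os.W_OK):
    # Assumed geometry; query it with ioctl(FBIOGET_VSCREENINFO) in real code.
    HEIGHT, STRIDE = 1080, 1920 * 4
    with open("/dev/fb0", "r+b") as f:
        fb = mmap.mmap(f.fileno(), STRIDE * HEIGHT)
        fill_rect(fb, 100, 100, 200, 150, (255, 0, 0), STRIDE)  # red rectangle
```

Run from a virtual console (not under X/Wayland) and you should see the rectangle appear; everything snakewm draws reduces to writes like this.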

Tangentially, Plan9 extends the notion of /dev/tty to something called the draw device. Meaning that a process can just map and use /dev/draw to provide graphics. Those graphics are rendered into the window of the process.

> Also, does OpenGL work?

There's no reason why it can't; OpenGL doesn't require X11 (or Wayland), and EGL (a library/interface that binds the OGL implementation to a windowing/display system) could certainly be implemented for linuxfb (if it hasn't been already).

Either way, I'd expect the current state of snakeware's graphical environment is that it won't work without some (possibly large) modifications.

Fbdev is a legacy interface and is probably not what you want if you're developing something beyond a simple demo. Check kmscube for a modern example of how to use EGL with DRI/KMS/GBM: https://gitlab.freedesktop.org/mesa/kmscube/

That's really interesting, and something I've been thinking about lately too. I wonder if it would be any better with Guile Emacs, so that you have the FFI.

Does anyone here run Emacs as their only environment?

It reminds me of Perl/Linux.

A Linux distribution where ALL programs are written in perl.



Imagine the efficiency of having your whole userland written in one line of code!

This made me laugh way harder than it should. Just imagine that one line. It'd either be the greatest or most terrifying piece of code I think I'd ever be likely to witness. Probably in all existence really. But then of course, someone would have to come along and port that ultimate one line of Perl code to brainfuck and one up them and then someone else would come along and turn that one liner into an entire lisp dialect and on it'll go.

If you think about it, every compiled program is a one-liner.

Or perhaps everything is a one-liner:


As a kid I had a Commodore Plus/4 as my first computer (it was already severely outdated, but I got it for free). The fact that it booted directly to a scripting language interpreter (Commodore BASIC) brought me to programming. You HAD to learn at least some BASIC to do anything useful or fun with a Commodore home computer. With my young nephews I see that the younger generation treats computers and smartphones more like useful black boxes that "just work", without even trying to understand how.

> Our window manager, snakewm, is based on pygame/pygame_gui. We do not use X11; snakewm draws directly to /dev/fb0.

A whole window system written in Python? That's hardcore!

Well, sort of. Pygame itself is more C than Python.

Everything they build on top of that should be able to be pure Python, though, which is pretty nifty.

I didn't know it was possible to obtain a hardware-accelerated OpenGL context without using GLX. How does it work without X11? Can I draw to the screen without any dependencies?

Probably the same way Wayland or the original Raspberry Pi drivers do it, using EGL.

I would love one of these, but with Lua as the language instead. It's quite possible with something like LOAD81, from antirez[1] - but I guess I'd love to see a full-blown desktop OS with Lua at the core, and not much else. I guess booting directly to a tekui[2]-based editor would be nice, too.

Something about having a clean boot to a single-language environment which appeals to me more and more as the decades go by.

[1] http://github.com/antirez/load81.git

[2] http://tekui.neoscientists.org

Check out the (sadly inactive) Node9 operating system [0].

> Node9 is a hosted 64-bit operating system based on Bell Lab's Inferno OS, but using the Lua scripting language instead of Limbo and the LuaJIT high performance virtual machine instead of the Dis virtual machine. It also uses the libuv I/O library for maximum portability, efficient event processing and thread management.

[0]: https://github.com/jvburnes/node9

Ah yes, that is nice - thanks for the info. So many interesting things using Lua ..

Same, but with Ruby …

Actually had this idea years ago and toyed with hacking a ruby user space atop of Linux From Scratch – course I never went anywhere with the idea. Delighted someone's done it for python. Be cool if there was a Ruby version of snakeware :)

Seconded. Also check out ComputerCraft.

I'd love a club for this kind of thing.

I loved ComputerCraft, but was kind of sad that the source was closed. It fell off of my radar until today, but I'm happy to see that it is now open source!

Reminds me a little of Squeak Smalltalk: https://squeak.org/

It starts up its own environment, and everything that's there can be interacted with either by writing code, or by clicking on things and inspecting/modifying their variables.

Another similar environment is Oberon. It allowed you to select any snippet of Oberon code and execute it!

This is a very fun-looking project. However, in the GNU/Linux world, "user space" is a very broad term referring to everything running outside of the kernel. In order for this to be a "Python user space", you would need to implement or include a lot of Python equivalents of standard user space tooling: Python grep, Python awk, Python top, Python vi, Python gcc (lolz), and even Python CPython (lolololz), etc. In the README, you say you want to get it to boot without Busybox. Busybox is currently the thing filling this void for you. It is providing your actual user space, including its SysV-inspired init system.
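
(For a sense of scale: the smallest of those tools are only a few lines of Python. Here's a toy grep - pure stdlib, purely illustrative, not anything snakeware ships - but a complete coreutils replacement is still a big job.)

```python
import re
import sys

def grep(pattern, lines):
    """Return the lines matching a regular expression -- the core of grep."""
    rx = re.compile(pattern)
    return [line for line in lines if rx.search(line)]

# toy CLI: python grep.py PATTERN < somefile
if __name__ == "__main__" and len(sys.argv) > 1:
    for line in grep(sys.argv[1], sys.stdin.read().splitlines()):
        print(line)
```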

What you have here is basically a very simple pygame template with an embedded application build tool chain. Which is totally cool, and potentially fun and productive. But it isn't replacing the GNU/Linux user space, which is C heavy, but very language diverse.

I would focus not on replacing the user space, but interfacing with it where reasonable. To that end, you may even want to consider shedding pygame and interfacing with X11/wayland. Identify the surfaces you want to control with python and implement that. It will perform better, and be more true to what makes Linux "Linux".

> In order for this to be a "Python user space", you would need to implement or include a lot python equivalents of standard user space tooling. Python grep, python awk, python top, python vi, python gcc (lolz), and even python CPython (lolololz), etc.

Who says you need those particular utilities to make a user space? You could have a user space without a shell, much less gcc, vi or awk.

I mean, okay, you don't need vi or awk to call it Linux. But surely you agree that CPython is needed to run Python, yes? Its interpreter is written in C. To build it, you need a shell, make, gcc, and a text editor would be most helpful.

It's all part of the user space. You can hide it within the snakeware application (why?), but don't deny it exists in the Linux user space. And CPython needs C extensions (pygame, for example), so GCC is at the very least a very-very-nice-to-have, is it not?

What is the goal here? You want to run Python without a shell? A huge number of python scripts depend on the shell. You are increasing barriers to using Python to code interesting apps in this system. Should the shell be implemented in Python, too? It could be, but why? Why not just integrate a standard shell, and present some nice-ish way of interfacing with it via Python?

I don't think that this application itself can even work in such an environment (no shell, no init system, etc). Currently this project is depending on a shell provided by Busybox. It uses these underlying tools to initialize and configure the operating system. It's all there in the user space, just hidden beneath this application.

Look, there are no laws about this sort of thing. You can do what you want and call it what you will, and I'm not some tech lingo fascist trying to impose my idea of "user space" on anyone (even though it's really a term with a specific meaning of 'anything outside of the kernel', for which making an all-python version sounds highly difficult). I'm merely pointing out that a great deal of this user space is actually already non-python, and user space carries with it a lot of linux's essential functionality, largely via the GNU core utils and other common utilities which are written in a wide variety of languages. You can choose to hide all of that functionality to make a python-only sandbox... but.... then...

...Couldn't I just use pygame to make standard applications that run on all distros outside of this isolated "window manager" program? All this achieves right now is isolating your apps in this kind of simulated window environment, and preventing the user from ever doing anything meaningful unless they have a specially coded snakeware(tm) python app. It can never run a standard installation of something like Firefox; utilize any standard Linux CLI software; utilize an existing package management system or the packages from it; etc. because it has no shell, and doesn't make use of the standard Linux graphics compositors (X11/Wayland).

So having said all of that, my goal isn't to tear down the project, but to ask the hard question for the benefit of this project: What is the goal? What I see here is mainly a simulated window manager written in pygame which can only ever run specially-created GUI apps, which is booted into directly. This design can never become a viable distro as it stands.

This is a cool project. A fun project. But first and foremost, this is about 1400 lines total of Python and shell script. It's not really all that much yet, and it's not going to get there by pretending this design is sustainable enough to become some robust user environment where everything is Python. Sometimes, complying with standards and specifications is what makes a thing the thing it is. And this is not a Python-only user space Linux distro. It is a simulated window manager written in Python that runs on a Linux kernel in a standard shell, using a compact subset of standard tools written in other languages, and it conceals the rest of the user space on which it itself depends.

But maybe it's just easier to pretend the whole thing isn't a single process running on a single core, non-POSIX compliant, incapable of running standard Linux GUI or CLI apps; and that all of that sacrifice in functionality hasn't been specifically engineered to cater to some misguided aesthetic notion that this all makes the system more hackable because it is written in Python. Maybe it's easier to simply pretend that you can't build a Python window manager that works with native Linux graphics compositing systems, and that the whole design would be better off targeting certain configuration processes using Python.

> But surely you agree that CPython is needed to run Python, yes?

No, it just happens to be the reference implementation; there are others to choose from.

One can even go crazy and bootstrap a Python environment, basic compiler development stuff.

You are mistaken. Pygame depends on CPython, as it is a C extension. Under the hood, even this window manager is mostly C.

And? Who says that Pygame is the only way of implementing this?

Pygame is all this is. I'm not saying that it's the only way. It's probably one of the worst ways in my humble opinion, though I can't really say that without understanding their vision for the project tbh.

Should I assume they are looking at scrapping every single line of code they've written for this and restarting? Why would I make that assumption?

I don't want to quibble with you. My only point here is that an all-python user space is a significant undertaking if you are willing to acknowledge the scope of what the user space actually entails. It's totally doable, but this project isn't even a step in the direction of solving those problems. It's a totally different thing.

Depending on the objective in a "python only" operating system, I may be inclined to strongly advise not trying to reimplement everything in python. The nature of C is such that you can write a C compiler in C, compile it, throw away the source code, and use the compiled binary for the new compiler to start developing a new compiler for C. But you can't like, make a python interpreter for python, in python, delete CPython, and then use the python version. It is dependent on CPython. And so, to build it, it's dependent on GCC, make, a shell, etc.

These are great tools. I think you should keep them, and define a specific surface area that python interfaces with. That's all I'm sayin'.

Sure you can, that is the whole goal of PyPy, IronPython, Jython, Python on GraalVM, Python on OpenJ9, CircuitPython and plenty of other attempts.

Anyone that understands compilers knows one can write language X in X.

In fact, "Writing Interpreters and Compilers for the Raspberry Pi Using Python" does exactly that.

We can nuke CPython and there is plenty of Python code that would still run without problems.

I'm sorry man, but you sound incoherent to me.

You can write a compiler for python in python--but it still needs the original cpython (or jython, or ironpython, or pypy, etc.) to run, because there is no compiled binary produced by your python code.

You can write a compiler for any language using python, but you need one of the interpreters you mentioned, and you can't throw it away unless it produces a binary. Maybe Cython can do this. But if it needs an interpreter to run, you'll have to keep the interpreter around as a non-python dependency.

But who cares? Unless you rewrite a Pygame from scratch, or replace it with another graphics library, in which case you need to rewrite snakeware from scratch, none of this is relevant here.

It's just insane. You are proposing rewriting entire language runtimes and/or graphics engines to preserve 1400 lines of python code (mostly a menu, a calculator, and a game of snake), with the supposed goal of creating an all python user space, and dismissing it as "Who says you can't do it?". I'm not saying it's impossible, but it makes no sense. You'd be better off starting over and taking this on with a different strategy. If your end goal is an all Python user space linux distro that actually works like linux, this project is a non-starter without being totally reimagined with virtually 0 reuse of the code that's there even at a conceptual level. You wanna rewrite pygame or python's compiler? Be my guest! It has nothing to do with your supposed objective though.

At the very least, you would need to get out of the pygame window simulation, start using a standard graphics compositor, and replacing existing utilities with python alternatives to make this work. But again, even CPython, Jython, PyPy, etc. will need to remain. Maybe Pygame can be made to work with PyPy, in which case you get close to what you want--but it's still a C extension, not python. And that's your graphics renderer. So why not just use Wayland or something then?

I think the quest for language purity at the OS level is a fun dream to pursue, but you need to decide where to cut it off in terms of scope and how to implement the scope you choose.

Interesting. The GUI is implemented on top of PyGame. The terminal is actually a Python REPL.

Do you plan to write replacements for the POSIX utilities? Strict POSIX compliance is not necessary but user space needs the general functionality provided by those packages.

Why another set of the same old?

Let the users (hopefully a new generation with new ideas) create their own paradigms.

Yes. I didn't mean to imply that POSIX utilities should be reimplemented. It's just that a new user space isn't very substantial if users can't even explore the file system. The new software could even be graphical in nature. Right now a Python REPL is the only real way to interact with the operating system.

Any idea where I can get data on POSIX utilities ranked by use/importance?

you could start with the list of what's included in busybox:


the goal of busybox is to provide a stripped-down POSIX userspace for use in embedded devices -- basically, provide enough POSIX utils to be able to do some useful shell scripting.

Here’s a standard list of Unix commands: https://en.wikipedia.org/wiki/List_of_Unix_commands

It is probably more comprehensive than what you were after though...

Start with file management utilities such as ls, cp, mv and rm.
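
All four are thin wrappers over the os/shutil modules. A toy sketch - purely illustrative, with simplified semantics, not snakeware's actual code:

```python
import os
import shutil

def ls(path="."):
    """List directory entries, marking directories with a trailing '/'."""
    return sorted(
        name + "/" if os.path.isdir(os.path.join(path, name)) else name
        for name in os.listdir(path)
    )

def cp(src, dst):
    """Copy a file, preserving basic metadata."""
    shutil.copy2(src, dst)

def mv(src, dst):
    """Move/rename a file or directory."""
    shutil.move(src, dst)

def rm(path):
    """Remove a single file."""
    os.remove(path)
```

The hard part isn't these bodies; it's the flags, error handling and edge cases the real coreutils accumulated over decades.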

Heh totally want to install this on a PineBook and give it to elementary school kids as their first personal computer.

Agreed. I've been trying to figure out what to give my son as a first computer. This would be a neat option.

If he wants to watch videos on youtube, he is going to have to code himself a browser first.

If you can run youtube-dl (hint: written in python), you only need a video decoder and player. If pygame doesn't give you that, though, you're in for trouble big time, because video codecs are hard. :)

Also, he will need to type the whole program in on the keyboard, like we did when we were kids, copying tens of pages of BASIC from magazines.

Any other ideas so far? I’ve mulled over this question of best first computer for children. The best I’ve come across so far is a plain Linux installation with no desktop environment and a text-based typing program.

It claims to be "fully Python-based Linux distro" but it seems to be just a buildroot config with an init.d script that runs the python repl in a loop. There's still /bin/sh, C tools, etc.


Same thing with a C64 - you have a simplified BASIC-environment but still can use the underlying Assembler.

Nice; I had been thinking of building something similar aimed at the raspberry pi, to make it easier to just get python scripts running, without having to learn about the OS. But never found the time. Was going to be called PyOS (pun on BIOS): https://github.com/askvictor/pyos

FYI, The audio in the demo video only plays in the left channel.

This is funny, the other day when Microsoft released the code to GW-BASIC, I wrote this in response to someone saying their IBM PC booted into a version of BASIC rather than DOS:

I've always thought that using BASIC as the default command line was really weird... Like on an Apple II or TRS-80. You could type in a command to load something from a disk or run a program, or type a line number to start programming. How did that make sense to anyone?? To my 11yo self, it was confusing and I've never changed my mind.

To add to that thought, I think there are some things which are worth emulating from the good ol' days, and some which aren't. I think this is probably one of the latter.

My 16 year old self found it the opposite of confusing on my TRS-80, especially with everything in ROM. Hit the power button and in seconds you can start programming and exploring. It's how thousands of kids got their start programming.

> You could type in a command to load something from a disk or run a program, or type a line number to start programming. How did that make sense to anyone??

But that makes perfect sense to me!

The commands you type in are, themselves, BASIC code. Line numbers just tell the computer to start storing those commands for later use. Chain them together and you have your own program, which you can save and load and run later.

I find it all aesthetically beautiful, actually.

But how different is that really than bash? Or even command.com?

It isn't, really. The command-line is a beautiful thing!

Always made perfect sense to me - I grew up on the ZX Spectrum, BBC Micro and VIC-20. When I started using PCs on DOS, that made sense too. I never thought either approach was wrong...

Those computers could not run a general purpose GUI, so a command-line interface was all you got. The nice thing about BASIC in that context is that it's a clean extension of a desk calculator: it gives you floats and math operations, functions etc. out-of-the-box, which took a non-trivial chunk of ROM code to implement on that kind of 8-bit platform.

Hmm. I think it makes a lot of sense and never confused young me with a Commodore 64.

If you boot a Unix system with a shell like bash, you basically have a programmable console that serves as your interface for loading files and running programs, as well as a programming interface itself. The shell is effectively Unix's "Commodore Basic V2.0" ;)

Not really - you don't have an invisible memory buffer that's just sitting there that you need to load with a program in order to use it. Then you type "run" and it magically pulls the program out of the ether and runs it. And the command line isn't automatically, and exclusively, used as an ed-like line editor.

I get that all shells are programmable, but booting right into a programming language REPL? You need a different mental model of what's happening under the hood for it to make sense.

OTOH, I remember that the first thing I did when, as a 10-year-old boy, I got a VIC-20 (my very first computer) was to type

    10 + 10
and "20" appeared after I pressed the Enter key. Today this would not work with Bash, yet at the time the computer's answer seemed pretty logical to me.

There are modern-day expression-oriented CLIs where this would work, too :)

It absolutely didn't seem strange to me back then, so I guess we disagree. The memory buffer isn't invisible, you can list it and modify it.

Lots of BASICs worked like that back then; to me it's understandable without a second thought -- much like a REPL today is. In fact we designed a console/REPL just like this in my day job. It's superb, because the shell to the computer is an interactive programming language. If this puzzles you, a Smalltalk environment would blow your mind :P

Some people argue computers ought to go back to that: you boot up, you get your programming environment and file and device manipulation interface ready, and it's all the same thing.

I don't see this too different from the Unix command line.

It actually removes the complication of using the compiler, and makefiles, and so on.

The beauty lies in its simplicity.

After seeing "operating systems" like JS/UIX here, it's great to see one that actually boots independently. And it looks like it has multitasking, and contributions from multiple developers, awesome!

To be fair, I think those features came with the Linux kernel.

Yeah. I mean, this is still much more effort than every Ubuntu-derived "OS" out there, but it isn't like Haiku or Redox or anything.

This actually is a really good idea. While I don't use Python often, I wouldn't see this as a problem. By reducing the barrier, they might actually create something fun to use.

Also, having a Raspberry Pi as first/default platform would be exceedingly welcome and good.

I wouldn't mind Lua as well, Ruby etc. As long as it is something easy to use.

Nobody seems to have mentioned it, but I would love to see how this grows and whether they'll eventually adopt Cython, which would bring a lot of performance boosts, but would also be good for Cython in general.

Cython: https://cython.org/

Sounds like the One Laptop Per Child environment. That had a Python user space. Can you still run it?

I was thinking the same thing. I actually own the original OLPC, and unfortunately, that thing is a pile of garbage. The entire UI and all apps are programmed in Python, which, while good for getting children into programming I guess, is bad for actual use. The system is so unimaginably slow that doing anything at all is unbearable. Like every other adult who bought one, I installed Debian on it and a lightweight tiling window manager.

I keep my OLPC (running Debian) around for precisely the reason of having a low-spec touchstone to keep me grounded. Heck I even test some programs on an NSLU2! https://gist.github.com/cellularmitosis/9ab25536b01d903539bd...

Wasn't Python basically invented as the scripting language for the Amoeba OS?

I often imagine an alternate reality where much of the non-performance-critical Unix/Linux userland such as login is written in Python.

This is great.

I think a laptop with this, along with a Python book, is one of the greatest gifts for someone to learn about computers and programming.

All exploration and few distractions.

I had this exact same idea, just no time to execute it, so very happy to see this. Now, I just have to install it to a Raspberry Pi!

I kind of really want to contribute to this now. I've contributed to SerenityOS, too. But do I have time? No.

There are a lot of "see also"s here - so I'd like to shout out to the late Terry Davis' "TempleOS" and his ... strange take on an interactive boot-to-command-line system based on x86_64 and his own C dialect.

[*] https://en.wikipedia.org/wiki/TempleOS

[Edit]: See also http://www.codersnotes.com/notes/a-constructive-look-at-temp...

it's just 'buildroot' (which is a minimalist distro that happens to include python).

I mean the python window manager is interesting but also mostly delegates to pygame.

Calling it a distro is a "bit" over-selling it IMO

This needs to be on a Raspberry Pi.

Semi-related, I recently installed the BMC64 distro on my unused Raspberry Pi. It boots directly to VICE, no "Linux" to be seen. It's like booting a C64. You turn it off by unplugging, just like the real thing. The only thing that tells you this is not a real C64 (besides lacking the keyboard/case) is that you can bring up a menu with lots of options with F12 (necessary to mount disk images, at the very least).

Wow, does it take me back to my childhood!

Semi-semi-related, there was a commercial distro named Amithlon that booted straight into AmigaOS 3.9. Alas, it was discontinued, but it had a really nice AmigaOS feel to it.

Cool! I missed the boat back then and never did own an Amiga. It was too expensive for my parents to buy. Later I heard it was an incredible machine and young me would have loved to toy with it.

It was a cool machine, and AmigaOS was simply awesome compared to contemporary consumer operating systems, but alas Commodore went belly-up before they managed to modernize the platform.

I want this but for v8/js!

This may float your boat: https://github.com/NodeOS/NodeOS

This is so cool.


Fun idea!
