> Most importantly, because I can.
Probably present this at some conference (CCC, FOSDEM, etc). Who knows who could be looking at this.
edit: Interesting is a personal taste, and while I have noticed articles on hacker news becoming more repetitive lately, it's probably not fair to put it entirely down to "Founder-type people".
We had a new hire who was huge into HN a while back. Was very hard to rein him in from suggestions of haskell, random frameworks, etc. I've dealt with the "shiny things" mentality many a time, but this was something else. Made the "academic do it right" approach look liberal in terms of risk.
Haskell is cool, but yeah sorry keep that out of my prod stack.
One was a worm that packaged itself up and sent itself around, the other did some sort of injection or binary patching and happened to piggyback onto the worm.
I think we're halfway there already, given the number of tools for decompiling and scanning code for security vectors these days, the increasing use of fuzzers to discover unintended or undocumented features (including hardware features like undocumented instructions), and increasingly clever and subtle side-channel attacks for finding device state that isn't supposed to be visible to software.
The difference between "porting" and "modifying" has (at least in everything I've read in "hacker" culture) always been that porting is rarely a trivial act.
Btw., what are "fb games"?
See https://www.phoronix.com/scan.php?page=news_item&px=Nintendo... for more information.
Wow, four people guessed (or trolled) framebuffer. Interesting.
Are the Linux tests deterministic, and the tests for this change will not run unless something directly affects it? If not, then I’m not a huge fan of these commits, which seem more about vanity.
RFC because I'm not sure if it's useful to have this merged.
So it's for fun; you won't be able to run any existing "normal" Linux emulators and you'd be better off porting stuff to run on bare metal than trying to squeeze it in under Linux on an N64. Cool project, fun, great that it's getting upstreamed... just keep in mind it's not really useful for end users for anything (unlike, say, Wii Linux, where with ~80MB of RAM you can actually get somewhere, though still with many limitations).
OTOH, as other people have mentioned, it makes a good test case for N64 emulators. If they can run this, that's good validation.
But to pin down definitions: I'm referring to small microkernels.
If you have a very small microkernel, then you can very explicitly install only the user-space software you need, without having functionality pre-installed in the kernel.
The fact is, the Linux kernel can be compiled with features to suit constrained environments. It is just that these features are determined at build time, not run time.
Also, quite interestingly, this question already came up on Quora https://www.quora.com/What-is-the-smallest-in-size-Linux-ker... and on many other websites:
A microkernel with feature control at the process/service level would actually have much worse feature control than Linux, because Linux build-time configuration options are often quite a bit finer-grained than that. For example, you can build Linux for uniprocessor systems, which makes global build-time changes that disable certain kinds of locks, making the kernel smaller globally. A microkernel may or may not have the same build-time feature; it is not guaranteed to.
As others have pointed out, microkernels have performance overhead and that reason alone makes them unsuitable for an N64. On a game console you need all the performance you can get.
You want to throw out as much functionality as possible to get your kernel down in size, to fit into e.g. 1 MiB of memory.
If you don't need those functions, you can remove them from kernel space in a monolithic kernel. It's possible to build Linux without TCP/IP support and with no on-disk filesystems.
They achieved a compressed Linux kernel size of just 749 kB, which additionally requires at least 12 MB of RAM to boot. This is very impressive, but there are constrained systems with 1 MB or less of memory.
What nobody has (yet) mentioned is that microkernels typically run slower than monolithic kernels, because crossing between user space and kernel space (context switches and message copies) is expensive compared to running everything in kernel space. This overhead would kill any performance you might get from a system with the hardware specs of an N64. So a monolithic design is absolutely the way to go (in fact, that's how N64 games are actually written: one monolithic code base with shared memory).
Pragmatically the only way to write software for the N64 is to go bare metal. As the OP said, this Linux port is a fun technical challenge but no OS would be practical (that is unless you’re just using it as a boot loader for other software).
"A microkernel would be a good fit for such constrained environments."
No, that's exactly my argument. My argument is that the Linux kernel (even if you strip everything out and create the tiniest possible Linux kernel) is still too big for many constrained environments, e.g. an old router with 512 KiB of memory (there are many devices that cannot run Linux). However, it is possible to run a small microkernel on such a router. That's my point, nothing more, nothing less, and that's why I consider a microkernel a good fit.
Then people argued that microkernels can be as big as a Linux kernel and that you can strip the Linux kernel down in functionality, and I agreed with them, but this does not contradict the point I made.
As the GP said, micro kernels have a performance overhead swapping data between rings. That overhead would bite hard on something running a NEC VR4300 clocked at 93.75 MHz.
A monolithic kernel is the way to go. Just not Linux specifically.
> My argument is that the linux kernel (even if you strip everything out and create the most tiny linux kernel) is still too big for many constrained environments
That was the OP's point. Yours was that a micro kernel would be a better fit. It would not.
> a old router with 512 KiB memory (there are many devices that cannot run linux)
Those devices wouldn't be running code written to programmable chipsets. They wouldn't be running an operating system in the conventional sense. Much like laumars' point about how games are written for the N64.
Also nobody is suggesting Linux runs everywhere. We are just pointing out that you massively misunderstand how micro kernels work (and embedded programming too by the sounds of your last post).
By the way, you wouldn't find any routers running a meager 512KB of RAM. That wouldn't be enough for multiple devices connected via IPv4, never mind IPv6 and a wireless AP too. Then you have the firewall UI (typically served via HTTP), a DHCP server & DNS resolver (both usually provided by dnsmasq on consumer devices) and likely other stuff I've forgotten off hand -- I have some experience building and hacking routers :)
> However, it is possible to run a small microkernel on such a router, that's my point, nothing more nothing less, that's why I consider using microkernel a good fit.
Most consumer routers actually run either Linux or some flavour of BSD. All of which are also monolithic kernel designs. Some enterprise gear will have their own firmware and from what I've seen from some vendors like old Cisco hardware, those have been monoliths too.
I know micro kernel has the word "micro" in it and the design requires loading the bare minimum into the kernel address space of the OS, but you're missing the bigger picture of what a micro kernel actually is and why it is used:
The point of a micro kernel isn't that it consumes less memory. It's that it separates out as much functionality from the kernel as it can and pushes it to user space. The advantages that brings are greater security for things like drivers (not an issue with the N64) and greater crash protection (again, not really an issue with the N64). However, that comes with a performance cost and added code complexity, and any corners you cut to try to bring those costs down ultimately erode any real-world benefits you get from a micro kernel design.
This is not true in general, although it is true for modern devices. There are older router models that cannot run Linux. A few years back I unsuccessfully tried to flash a very minimal <1 MiB Linux onto an old TP-Link router. I was able to flash the ROM but it couldn't boot because there was not enough memory available; it wasn't 512KB but only a few MiB IIRC, still not enough.
> We are just pointing out that you massively misunderstand how micro kernels work
If someone points out that the Linux kernel can be reduced in size and that there are some big microkernels, then I agree, and there is no misunderstanding as far as I can see. The same holds true for the performance argument.
> As the GP said, micro kernels have a performance overhead swapping data between rings.
I agree that performance will be problematic, but this does not render microkernels useless in general for constrained devices. See for example:
How long ago was "a few years ago"? What model number was that? DD-WRT has been ported to the Archer series, but if you're talking about a ZyNOS-based router then you're probably out of luck. Those ZyNOS devices are the real bottom end of the market though. Even the ISP routers here in the UK are generally a step up from those, particularly these days, when households expect to have kids playing online games, streaming Netflix and such like (even before COVID-19 hit, ISPs had been banging on for ages about how their routers allow you to do more concurrently). And with TP-Link, the Archer series are all Linux based or Linux compatible, and they start from ~£50. So you'd be really scraping the barrel to find something that wasn't these days.
> I agree that performance will be problematic, but this does not render microkernels useless in general for constrained devices.
Any OS designed around kernels, memory safety etc would be useless in general for constrained devices. This isn't an exclusively Linux problem. On such systems the whole design of how software is written and executes is fundamentally different. You don't have an OS that manages processes nor hardware, you write your code for the hardware and the whole thing runs bare metal as only one monolithic blob (or calls out to other discrete devices running their own discrete firmware like a circuit). That's how the N64 works, it's how embedded devices work. It's not how modern routers work.
In 2020 it's hard to think of a time before operating systems, but that really is how the N64 works. Anything you run on there will eat up a massive chunk of resources if it's expected to stay in memory. So you might as well go with a tiny monolithic kernel and thus shave a few instructions from memory protection and symbol loading (not to mention the marginally smaller binary sizes due to file system metadata, binary file format overhead and other pre-logic initialisation overhead, such as you get when compiling software rather than writing it in assembly). If you're going to those lengths though, laumars' point kicks in: you're better off just writing a "bootloader" menu screen rather than a resident OS.
This brings back memories :) http://www.ixo.de/info/zyxel_uclinux/ Sure, we are talking about low-end (real bottom) devices and dated models here. I cannot recall the model number, but I think we both agree that routers that cannot run Linux exist, although they are not very common (anymore).
> Any OS designed around kernels, memory safety etc would be useless in general for constrained devices.
How about QNX then?
"QNX is a commercial Unix-like real-time operating system, aimed primarily at the embedded systems market. QNX was one of the first commercially successful microkernel operating systems. As of 2020, it is used in a variety of devices including cars and mobile phones." - https://en.wikipedia.org/wiki/QNX
They are "aimed primarily at the embedded systems market", their latest release is 7.1 from July 2020, and they have been operating their business since 1982.
So not just low-end, but a decade-old device that was already low-end upon its release. That's hardly a fair argument to bring to the discussion.
> How about QNX then?
QNX wouldn't run on something with <1MB RAM. Nothing POSIX compliant would*. The published minimum requirements for Neutrino 6.5 (which is already 10 years old) were 512MB. Double that if you want the recommended hardware specification.
Sure, if you want to strip out graphics libraries and all the other stuff and just run it as a hypervisor for your own code you could get those hardware requirements right down. But then you're not left with something POSIX compliant, not even useful for the N64. And frankly you could still get a smaller footprint by rolling your own.
The selling point of QNX is a RT kernel, security by design and a common base for a variety of industry hardware. But if you're writing something for the N64 then none of those concerns are relevant (and my earlier point about a resident OS for the N64 being redundant is still equally valid for QNX).
Also smart phones are neither embedded nor "constrained" devices. I have no idea what the computing hardware is like in your average car but I'd wager it varies massively by manufacturer and model. I'd also wager QNX isn't installed on every make and model of car either.
* I should caveat that by saying, yes it's possible to write something partially POSIX compliant which could target really small devices. There might even be a "UNIX" for the C64. But it's a technical exercise, like this N64 port of Linux. It's not a practical usable OS. Which is the real crux of what we're getting at.
Fair enough, I agree that I should have come up with a better example. But before going down another rabbit hole, just replace the router with any modern embedded chip you like that cannot run Linux.
Regarding QNX, I don't know their current requirements but what impresses me:
"To demonstrate the OS's capability and relatively small size, in the late 1990s QNX released a demo image that included the POSIX-compliant QNX 4 OS, a full graphical user interface, graphical text editor, TCP/IP networking, web browser and web server that all fit on a bootable 1.44 MB floppy disk."
> I should caveat that by saying, yes it's possible to write something partially POSIX compliant which could target really small devices.
Yeah, I think here's an interesting overview of some:
I wonder how many of them are POSIX compliant (or partially) and what their requirements are. GNU/Hurd certainly is.
An older QNX ran from a floppy with only a few MB. With a GUI and a browser with limited JS support.
>QNX wouldn't run on something with <1MB RAM. Nothing POSIX compliant would* .
You could run Linux under a TTY for i386 with 2MB with some swap about 24 years ago.
It did, and it was a very impressive tech demo... but it's not representative of a usable general-purpose OS. Chrome or Firefox alone comes in at > 200MB. So there is no way you'd get a browser that works with the modern web to fit on a 1.4MB floppy. And that's without factoring in fonts, drivers, a kernel and other miscellaneous user land.
The QNX demo was a bit like this N64 demo. Great for showing off what can be done but not a recommendation for what is practical.
> You could run Linux under a TTY for i386 with 2MB with some swap about 24 years ago.
That's still double the memory specification, and yet Linux back then lacked so much. For example, Linux 24 years ago didn't have a package manager (aside from Debian 1, which had just launched, and even then dpkg was very new and not entirely reliable). Most people back then still compiled stuff from source. Drivers were another pain point: installing new drivers meant recompiling the kernel. Linux 1.x had so many rough edges and lacked a great deal of code around some of the basic stuff one expects from a modern OS. There's a reason Linux has bloated over time, and it's not down to lazy developers ;)
Let's also not forget that the Linux Standard Base (LSB), which is the standard distros follow if they want Linux and, to a larger extent, POSIX compatibility, wasn't formed until 2001.
Linux now is a completely different animal to 90's Linux. I ran Linux back in the 90s and honestly, BeOS was a much better POSIX-compatible general purpose OS. Even Windows 2000 was a better general purpose OS. I don't think it was until 2002 that I finally made Linux my primary OS (but that's a whole other tangent).
I mean, we could have this argument about how dozens of ancient / partially POSIX-compliant / unstable kernels have had low footprints. But that's not really a credible argument if you can't actually use them in any practical capacity.
There are modern microkernels that are POSIX compliant and have a much lower footprint than Linux. That's not the problem. I think the most prominent issue people point out here is performance. However, it's very obvious to me that the extra abstraction of having a kernel vs having no kernel on a constrained device costs performance, and it's always a trade-off; both solutions can be found and both solutions are valid.
There are... but they're not < 1MB. Which was the point being made.
> I think the most prominent issue, people points out here is performance.
That's literally what I said at the start of the conversation!
> However, it's very obvious to me that the extra abstraction of having a kernel vs having no kernel on an constrained device costs performance, and it's always a trade-off, both solutions can be found and both solutions are valid.
Show me a device with the same specs as the N64 which runs an OS and I'll agree with you that both solutions are valid. The issue isn't just memory, it's your CPU clock speed. It's the instructions supported by the CPU. It's also the domain of the device.
Running an OS on the N64 would never have made sense. I guess, in some small way, you could argue the firmware is an OS in the same way that a PC BIOS is. But anything more than that is superfluous, both in terms of resources used and any benefits it might bring. But again, if it's a case of "both solutions are valid" then please do list some advantages a resident OS would have brought. I've explained my argument against it.
Let's take a look at what was happening on PCs around the time of the N64's release. Most new games were still targeting MS-DOS and largely interfaced with hardware directly. In a way, DOS was little more than a bootstrap: it didn't offer up any process management, the only memory management it did was provide an address space for the running DOS application, it didn't offer any user space APIs for hardware interfaces -- that was all done directly. And most of the code was either assembly or C (and the C was really just higher level assembly).
Fast forward 4 years and developers are using OpenGL, DirectX and Glide (3DFX's graphics library which, if I recall correctly, was somewhat based on OpenGL) in languages like C and C++, but instead of writing prettier ASM they're writing code based around game logic (ie abstracting the problem around human-relatable objects rather than hardware schematics). It was a real paradigm shift in game development. Not to mention consoles shifting from ROM cartridges to CD posed a few new challenges: 1) your software no longer exists as part of the machine's hardware 2) you've now made piracy a software problem (since CD-ROM drives are a standard bit of kit in most computers) rather than a hardware one (copying game carts required dedicated hardware that wasn't always cheap). Thankfully by that time computer hardware had doubled a few times (Moore's Law), so it was becoming practical to introduce new abstractions into the stack.
The N64 exists in the former era and the operating system methodologies you're discussing exist in the latter era. Even the constrained devices you're alluding to are largely latter-era tech, because their CPUs are clocked at orders of magnitude more than the N64 and thus you don't need to justify every instruction (it's not just about memory usage). In many cases an OS for an embedded device might just be written as one binary blob and flashed to ROM, effectively running like firmware.
It's sometimes hard to get a grasp on the old-world way of software development if it's not something you grew up with. But I'd suggest maybe look at programming some games for the Atari 2600 or Nintendo Gameboy. That will give you a feel for what I'm describing here.
I lived through that; the first PC I used had DOS with 5.25" floppies.
On 3DFX: it was a mini-GL, low level. Glide games somehow looked better than the later DirectX games, up until DirectX 7, when games looked a bit less "blocky".
>For example Linux 24 years ago didn't have a package manager
Late 90's Linux is very different from mid 90's. Slackware in 1999 was good enough, and later, with the 2.4 kernel, it was on par with Windows 2000; even Nvidia drivers worked.
And I could run even some games with early Wine versions.
I wouldn’t agree that Slackware in 2000 was on a par with Windows 2000 though. “Good enough”, sure. But Linux had some annoying quirks and Windows 2000 was a surprisingly good desktop OS (“surprising“ because Microsoft usually fuck up every attempt at systems software). That said, I’d still run FreeBSD in the back end given the choice between Windows 2000 and something UNIX like.
We are way past the usual FUD against microkernels.
Now on to your point about the complaint the GP and I made being FUD; it really isn't. The closest any micro kernel has gotten to a monolithic kernel's performance was L4, and those benchmarks were running Linux on top of L4 vs bare-metal Linux. While the work on L4 is massively impressive, there is still a big caveat: the actual workload was still effectively run on a monolithic kernel, with L4 acting like a hypervisor. So most of the advantages a micro kernel offers were rendered moot, and there was still a small performance hit.
Why doesn’t that matter for the Nintendo Switch? Probably because any DRM countermeasures in user space would have a bigger performance penalty and a micro kernel offers some protections there as part of the design. That’s just a guess but as I opened with, Nintendo are quite secretive about their system software so it’s hard to make the kind of conclusive arguments you like to claim.
Other than that I can only point out the CCC related
Also, given the amount of hypervisor and container baggage that gets placed on top of Linux to make up for the lack of microkernel-like safety, it doesn't really matter if it happens to win a couple of micro-benchmarks.
And with regards to your point about Linux vs micro kernels, it does make a massive difference when you’re talking about hardware like the N64, which wouldn’t want any of the features micro kernels excel at, and where every wasted instruction is going to cost the user experience heavily. This point was made abundantly clear at the start of the conversation as well.
Look, I have nothing against micro kernels. There’s an architectural beauty to them which I really like. It’s the functional programming equivalent of kernel design. But pragmatically it wouldn’t be your silver bullet in the very specific context we were discussing (re N64). And to be honest I’m sick of you pulling these pathetic straw man arguments in every thread you post on.
The problem with micro kernels is that abstraction isn’t free. That’s less of an issue with modern hardware running modern workloads, because you’d need that memory safety regardless of the kernel architecture, and chips these days are fast enough that the benefits of security and safety far outweigh the diminishing cost in performance. However, on the N64 you don’t need any of the benefits a micro kernel offers, while you do need to preserve as many clock cycles as you can. So a micro kernel isn’t well suited for that specific domain. The case would be different again for any modern low-footprint hardware, because they’d still be running CPUs clocked an order of magnitude higher, and a modern embedded system might need to take security or stability concerns more seriously than an air-gapped 90s game console.
In short, micro kernels are the future but the N64, being a retro system, needs an approach from the past.
This is why it doesn’t help that modern and 90s hardware have been conflated as equivalent throughout this discussion.
How come Nintendo decided to use them (according to reverse engineering findings)? If they are not suited, then Nintendo should know that, right?
The N64 doesn’t run any OS. It’s just firmware that invokes a ROM which runs bare metal.
The Switch, however, does have an operating system.
There is around 20 years difference between the two games consoles. That’s 20 years of Moore’s law. 20 years of consumer expectations of fast processors and fancier graphics. And 20 years of evolution with developer tooling and thus their expectations.
You cannot compare the two consoles in the way you’re trying to. It’s like comparing a 1920s racing car to a 2020s F1 car and asking why they are so different. Simply put: because technology has advanced so much in that time it’s now possible to do stuff that wasn’t dreamt of before.
I don’t think there’s much to be gained in speculation about proprietary operating systems running on newer hardware though.
That was literally what I said :)
> shows their massive potential for constrained devices. Although certainly not for performance reasons
Games consoles are about as far removed from a constrained device as you could possibly get.
It seems to me that you always ignore low-end devices and very old devices.
If we are talking about a PS5, then yes, this and similar devices are not very constrained; even a full-blown Kubuntu might run on some.
But, again, there are low-end gaming devices with a tiny black and white screen for 10 dollars, and old gaming hardware with very tight constraints. The N64 is certainly one of those constrained old gaming devices.
PS5 would easily run Linux considering the PS3 had a few Linux distros ported to it (back when Sony endorsed running Linux on their hardware via the “Other OS” option, which they later removed). Linux is pretty lightweight by modern hardware standards anyways. It’s just not suitable for every domain (but what OS is?)
On the topic of consoles running Linux: I'm pretty sure I have a CD-R somewhere with Linux for the Dreamcast. That was the era when consoles really started to converge on a modern-looking software development approach.
I never did. I never mentioned current generation consoles not even implicitly. I always talked either about the N64 or (gaming) devices that are constrained and cannot run linux.
“I think that Nintendo might use Microkernels [in the Switch] according to reverse engeneering finds shows their massive potential for constrained devices.”
Maybe you hadn’t grokked that pjmlp was talking about the Switch (Nintendo’s current generation console) rather than the N64?
Either way, my other comment also applies:
> Nintendo actual devices use microkernel based designs. We are way past the usual FUD against microkernels.
Regarding pjmlp's post: yes, that's true; it wasn't clear (and still isn't) to me from his post that he speaks specifically about the Switch when referring to "devices".
I know it’s not there in your original comment; that’s why it was inside square brackets. That’s a standard way of including context in a quote that would otherwise lack said context. You’d see it in newspapers and other publications. This isn’t some weird markup I’ve just invented, and it’s definitely not a figment of my imagination, because the post you were replying to was about the Switch.
> Regarding pjmlp's post: yes, that's true; it wasn't clear (and still isn't) to me from his post that he speaks specifically about the Switch when referring to "devices".
You’re right, it wasn’t explicit. My apologies there.
That's very interesting, can you provide a source?
I am skeptical that this would be the preferred way to port emulators or graphical games. You’re not getting a budget SGI workstation out of this, because the OS kernel itself is only a small part.
The N64 is built around a chip called the Reality Coprocessor, or RCP. This contains the RSP, a stripped-down MIPS CPU core with a fixed-point SIMD vector unit, and the RDP, a rasterization engine that does simple trilinear texture interpolation and color blending. This is the hard part of programming the N64, and it’s not something that’s addressed by swapping out the OS kernel.
There are… a number of major challenges you will still have to face if you want to make homebrew software for the N64, unless you are okay with just having something on the CPU and writing to the framebuffer.
It will knock down an entry barrier for some people, though, and there's no harm in that at all.
Just trying to temper people’s expectations.
> It will knock down an entry barrier for some people, though, and there's no harm in that at all.
To be honest—I don’t think this is lowering the barrier of entry to N64 development much. Those are the expectations I’m trying to temper here. If you want to develop for N64, you’re going to go through a lot of fuss getting an EverDrive 64 or a similar alternative, setting up an accurate emulator like CEN64 (the popular emulators are not suitable for development), getting toolchains running on your development system, etc.
I think some people have equated “Linux has been ported to system X” as “development for system X is now solved”, when Linux is only a small part of the solution, and for smaller systems (like the N64, which has only 4 MB RAM base), Linux is probably not your kernel of choice anyway.
The Nintendo 64 development scene would definitely benefit from more people pitching in and doing tools development. This is a good time to do it, there are a lot of gaps ready to be filled, and the number of people doing N64 development has increased quite a bit over the past couple years.
On the other hand, this might massively improve the turnaround time on RSP programming (which is something I've actively been trying to learn), since it'll be that much easier to edit and run microcode without having to bake entirely new ROMs in the process. I'd just need a serial console (or even a framebuffer console on top of whatever the RSP's rendering) and an assembler (or, at the very least, something to turn hexadecimal input into binary data to DMA over; we're only talking 4K each of code and data here, after all).
Certainly a modern GUI a la KDE, Gnome and friends would be well outside of its abilities, but a functional GUI is possible on a shockingly small amount of memory!
The SGI Indy, which could be had with an R4000 CPU, came with 16MB base.
OTOH KnightOS² (not Linux) has a rudimentary (obviously not Xorg) GUI that IIUC runs on TI-73 series graphing calculators with 25 KB of RAM.
IIRC the Haiku folks had a format for that.
And Windows 95 only required 4MB, so it should be within the realm of possibility.
Berkeley Systems, of course, is known for BSD.
There are significantly less bloated desktop environments, but people like their bling, and bling by default includes an accompanying increase in resource consumption.
Look at Raspbian. It looks pretty good, but uses far fewer resources than GNOME and KDE. You can add to it if you want it to be as bloated as the typical desktop environments. But it's a choice.
Personally I'm in love with tiling desktops, which use even fewer resources. And they're blazingly fast.
A lot of it is merely feature creep. Remember when OS X used to have animations (like applications minimizing like a genie getting sucked back into its lamp) as a feature you could turn on if you wanted? Now it's pretty much the default, and most people don't know you can turn that stuff off. And that's how things get sluggish. Sexy new "advanced" features become default, and you always need faster computers and more memory to keep up.
If you insist on using Gnome or KDE (obviously bloated, because everything has extra features enabled by default), then shut those resource hogs off. You'll start to get back to fast-desktop days again. Some people will miss the bling, but you can't have one without the other. You can't expect all that sweet, sweet bling without the resources being tied up to make it happen.
The rest, the part you're complaining about being bloated, is the GNU part. I'm not complaining about that GNU part, btw; I've lived in it since the early '90s. But the Linux kernel can be as slim and responsive as you want it to be if you're willing to compile it yourself. You can even compile it as an RTOS, and you'll never convince me that would be too sluggish.
What I got voted down for was pointing out that Linux-proper wasn't the slow part of that blend. It was the desktop. And that part is GNU, not Linux. Period.
I actually had a gig with SLS (Softlanding Linux System) back in the day. It wasn't unrelated to the stuff I was doing with UUCP at the time. That was important then, but totally not at all today, lol.
I am assuming you are making a bit of a jest, because things have moved along in the past many decades.
You are right. Linux has evolved tremendously over the past ~30 years. Those early times were fun though...
(It won't be running Gtk4 or Qt6 of course.)
> It's also noted that Linux on the Nintendo 64 is still a bit buggy and "constantly flirting with [out of memory]."
If you're running out of RAM with just a shell, that definitely doesn't leave much left. And there's no storage device, so you can't swap.
IIRC, there used to be a GBALinux as well. That's what, 1/8th of the memory?
That this port is OOM-ing is just it being buggy.
There's the cartridge. And yes, while technically that's supposed to be ROM, flashcarts like the EverDrive are able to get creative with that, and I can see that being a viable pathway to achieving something approximately resembling swapping.
There's a reason a lot of open source in recent years has avoided the GPL like the plague and opted for BSD, MIT, or Apache licenses.
Linux stayed on GPL2 instead of upgrading to GPL3 precisely to allow others to freely use Linux in their commercial devices. In other words: they made the intentional choice of making it legally easy to embed Linux in proprietary hardware products (aka Tivoization, which GPL2 allows but GPL3 forbids).
GPL2 is better for commercialization than GPL3 is. However, BSD, MIT, Apache are much much better than any GPL version, including AGPL.
Chromium uses a BSD license, and browsers based on Chromium, including Chrome, Microsoft Edge, and Opera, run everywhere.
BSD is much more prevalent than GPL, and software written under BSD-style licenses will carry on into the future as even fewer people will be willing to touch anything GPL.
"But why... Most importantly, because I can."
How does a Linux port make it easier to port emulators or console games?
Is this a step towards getting an N64 on a modern (?) stack like qemu+linux?
It helps emulator development by giving you something to work with inside of your emulator that actually has some tooling to let you look around and test out the system from the inside, unlike trying to get a game working. If Linux-N64 can boot, you can expect that a lot of your code is working properly.
There were some games that manually implemented their own virtual memory paging system to map the cartridge ROM to RAM address space.
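As a rough sketch of the idea (all names here are invented for illustration, not taken from any actual game's code), such a scheme boils down to treating a few RAM pages as a software-managed cache of the much larger cartridge ROM:

```c
#include <stdint.h>
#include <string.h>

/* Toy demand-paging sketch: a small pool of RAM "slots" caches
 * fixed-size pages of a large cartridge ROM, so code can read any
 * ROM offset as if the whole cartridge were mapped into RAM. */
#define PAGE_SIZE 4096
#define NUM_SLOTS 4
#define ROM_SIZE  (64 * PAGE_SIZE)

static uint8_t rom[ROM_SIZE];               /* stand-in for cartridge ROM */
static uint8_t slots[NUM_SLOTS][PAGE_SIZE]; /* the RAM page pool */
static int slot_page[NUM_SLOTS] = { -1, -1, -1, -1 }; /* ROM page per slot */
static int next_evict;                      /* round-robin eviction cursor */

static uint8_t rom_read(uint32_t off)
{
    int page = (int)(off / PAGE_SIZE);
    for (int i = 0; i < NUM_SLOTS; i++)
        if (slot_page[i] == page)
            return slots[i][off % PAGE_SIZE];   /* hit: page already cached */

    int s = next_evict;                         /* miss: our "page fault" */
    next_evict = (next_evict + 1) % NUM_SLOTS;
    /* On real hardware this copy would be a DMA from the cartridge. */
    memcpy(slots[s], rom + (size_t)page * PAGE_SIZE, PAGE_SIZE);
    slot_page[s] = page;
    return slots[s][off % PAGE_SIZE];
}
```

A real implementation would hook the TLB miss handler instead of routing every access through a function call, but the bookkeeping is the same shape.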
But, in general a feature that distinguished the N64 from the PS1 was that it had a single, unified block of RAM for all of the hardware to share however you like.
> Differing memory countings are due to the 9th bit only being available to the RCP for tasks such as anti-aliasing or Z-buffering.
AFAIK there is no way of manually using this RAM; it's hardwired to fixed-function pipeline stages of the RCP.
But because of that you might be able to use it for extra storage, sort of like how the Factor 5 GC games would spill out to ARAM.
A lot of work for 4K of RAM, though.
It seems that this actually requires using the Expansion Pak as system RAM to even run.
The texture memory would be difficult to use as RAM though since it's not directly addressable.
That's one of the major reasons the N64 has a reputation for being "blurry". 4K gives you a max of (with mipmaps) one 32x32 16-bit texture at a time.
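The arithmetic checks out: a quick back-of-the-envelope helper (hypothetical, just to make the budget concrete; a full mipmap chain adds roughly a third on top of the base level):

```c
/* Rough TMEM budget math. The N64's texture memory is 4 KB. */
#define TMEM_BYTES 4096

/* Bytes for a WxH texture at `bpp` bits per texel. */
static int texture_bytes(int w, int h, int bpp)
{
    return w * h * bpp / 8;
}

/* Base level plus a full mipmap chain (approximately +1/3). */
static int with_mipmaps(int base_bytes)
{
    return base_bytes + base_bytes / 3;
}
```

A 32x32 16-bit texture is 2048 bytes, about 2730 with mipmaps, which squeaks in under the 4096-byte limit; a 64x64 16-bit texture (8192 bytes) doesn't fit even without mipmaps.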
Everyone here is saying 8MB is too small; well, just add more RAM.
No idea how realistic it'd be to get it working with Linux.
See here: https://www.linux-mips.org/wiki/Linux/MIPS_Porting_Guide
I was bored one evening and ported Linux to run on the Wii's IO/security processor (not the main CPU, that's Wii Linux). It only took a few hours to get the kernel booting to userspace.
Until now, however:
Command and Conquer Remastered
And so on
What a time to be alive
And Disco Elysium, though that is a bit more RPG-y (no combat, but it has stats and "character progression").