Hacker News
Alan Cox announces Fuzix OS (plus.google.com)
322 points by socialized on Nov 2, 2014 | 91 comments

For those not familiar with the Linux kernel contributors, Alan Cox wrote large parts of the networking stack, was the maintainer of the 2.2 branch, and was commonly considered the "second in command" to Linus Torvalds at one point: http://en.wikipedia.org/wiki/Alan_Cox

He released what people considered to be the stable branch of the Linux kernel (the -ac branches). Linus was moving quickly and Alan would package up the releases that everyone would use.

I just learned, from reading that Wikipedia page, about the falling out he and Linus had.

And, interestingly enough, he's a senior maintainer who quit because he got sick of Linus' behaviour.

That's not true at all. Alan Cox stated it himself:

> I'm leaving the Linux world and Intel for a bit for family reasons. I'm aware that "family reasons" is usually management speak for "I think the boss is an asshole" but I'd like to assure everyone that while I frequently think Linus is an asshole (and therefore very good as kernel dictator) I am departing quite genuinely for family reasons and not because I've fallen out with Linus or Intel or anyone else. Far from it I've had great fun working there.


This is a much more recent incident. The time he abruptly resigned as maintainer of the tty code was clearly a "fuck this, if you don't like the way I'm doing it, fix it yourself" resignation in direct response to some classic (if mild) Linus-in-your-face criticism.

Aren't they the same incident? His G+ post talks about resigning from tty duties among other things -- but also states there was no beef between him and any other kernel maintainers.

January 2013: "I'm leaving the Linux world and Intel for a bit for family reasons."

July 2009: "I've had enough. If you think that problem is easy to fix you fix it. Have fun. I've zapped the tty merge queue so anyone with patches for the tty layer can send them to the new maintainer."

No, they're not the same incident.

Not being familiar with this case -- but from an outside perspective, reading the mailing list now, it seems Alan introduced a patch that broke userland code -- which is the biggest no-no in kernel hacking, i.e. the number 1 rule is don't break userland. Alan appeared to argue that the userland code was broken, not his patch, which just made Linus mad (as expected). The kernel strives to never break userland, even when userland is relying on legacy and/or broken pieces of kernel behaviour (the idea is to code around the broken parts and provide compatibility until userland changes/fixes the problem -- i.e. no kernel change should ever mass-break userland code).

> code around the broken parts and provide compatibility until userland changes/fixes their problem

read: provide compatibility indefinitely.

Yes, that's the idea. In fact, it was only last year (or maybe 2012) that Intel 386 (an almost 30-year-old CPU) support was finally dropped. This is how binaries from 20 years ago will still run on today's hardware and kernel with zero modifications. That is a good thing, especially for enterprise.

It's not that they leave security vulnerabilities in; it's that they build compatibility for any software that may expect something to work a certain way, while simultaneously fixing the underlying problem. Going forward, software shouldn't have to care which kernel it's running on.

Yes, I'm familiar with the incident. I was reading LKML when it happened. It's a counterpoint to the usual "Linus's style doesn't cause any problems."

It may be a counterpoint, but it is also one of the milder rants from Linus: no expletives, no colourful descriptions.

Yes, it's critical, but is the criticism unwarranted? He is questioning why one of the most senior maintainers is directly contradicting one of the most central "edicts" of Linus' kernel development: don't break userland. In the message, Linus directly quotes Alan as arguing that breaking userland is ok.

Of course Alan was/is free to disagree, but he should have known very well that Linus would never let that fly. Not least because Linus had told him it wouldn't, and he kept pressing for it.

What was the alternative? From the outside, it looks like Alan repeatedly avoided doing what Linus told him needed doing. Linus could not have backed off without sacrificing the guarantee of not breaking userland.

If you read the whole thread Linus was wrong. He was constantly confusing two different bugs (despite Alan pointing this out to him multiple times), and the fixes he was yelling at Alan for not applying would have caused other things to break.

Ooof. If I'm reading that thread correctly, Linus wanted to leave in a bug that would probably allow a local attacker (or maybe even a remote attacker) to execute arbitrary code in the kernel, just to avoid the risk that fixing it would break userland code that did questionable things which happened to work before.

So reading this thread, I keep asking "could Linus have said what he needed to say in nicer terms and gotten better results?" I think so. In every Linus rant I encounter he goes on and on about the problem, then the person causing the problem, and typically attacks the developer. It's one thing to rule with an iron fist, it's another to target individuals and not the behavior.

> It's one thing to rule with an iron fist, it's another to target individuals and not the behavior.

Quite honestly, I don't see any of this in the thread in question.

Where he's practically yelling "WHY?". Here's how it could be re-written to be a more effective email:

"Hey Alan,

The problem is that while you make pty.c have the correct behavior, lots of legacy userland code relies on the current implementation. Since, as kernel developers, we must avoid breaking userland code at all costs, a better solution is to change the behavior but implement a compatibility layer. Once enough userland code has been fixed, we can remove the compatibility layer. This has been our standard approach for these types of issues, and I think it would work here. While I understand how frustrating it is to not be able to fix this once and for all, I think we need to address our users' needs first.

Thanks, Linus"

Shorter, nicer, more to the point, and with much less time wasted on emotional outbursts.

The bottom line is that a kernel developer, a senior kernel developer at that, should know this already, should never violate the golden rule of kernel development, and if they somehow did, should not argue against reverting their userland-breaking changes. Seems to me, given the circumstances, Linus was much more reserved than he could/should have been. A great way to make people think twice before doing something stupid is to call them on the carpet when they do something stupid. There was no name calling, no profanity, no personal attacks here.

That's not to mention your snippet above would have received the same exact pushback (maybe more) from Alan at the time since Alan believed he was in the right.

I agree, it's strange that Alan knowingly broke userland, and then argued that he was right. Something more must have been going on.

At the same time, do you really believe that the only way to get people to see the error of their ways is to be as harsh as Linus usually is and to resort to personal attacks (referring to your line about how he could and should have been harsher)? This is indeed Linus's usual style. The reality is that maintaining some basic level of respect with your colleagues would mean not losing great kernel developers to Linus's outbursts. Alan leaving was a big deal and was widely publicized. How many others have left to never return after receiving a nastygram from Linus?

Let's take it further: if your boss publicly called you a "fucking idiot" every time you screwed up, as Linus is known to do, would you want to continue working for her? Chances are, most people would resign on the spot.

Lastly, I disagree. Even if Alan still thought he was right after receiving my version of the email, he would have a much harder time arguing with a more reasoned version. He would certainly be less likely to throw his hands up in the air and leave the team. This whole "Linus needs to be an asshole sociopath to get shit done" is simply not true: most effective project managers don't resort to this type of behavior because they have better ways to get things done. Linus has fallen into this MO, and it's working for him, but I argue that if he were a better manager and a nicer person he would get more stuff done, not less.

> Something more must have been going on.

I think we can both agree on this point.

Typically, when I've noticed Linus explode at someone on the mailing list, he is usually the last person to put his 2 cents in. As in, many others on the mailing list and bug trackers are the first to point out the problem(s), yet the maintainer/patch-submitter argues back and refuses any changes. Usually, after a long while, someone pings Linus and asks for his involvement (the kernel is far too large for any single person to be paying great attention to all components). Linus typically comes in as the last line and just unleashes at someone who has already made a much larger problem than it ought to have been -- sort of the "buck stops here" thing.

The most prominent example I can think of off the top of my head is the Kay Sievers fiasco. People had been going back and forth with Kay for weeks/months before Linus finally weighed in. It resulted in Linus banning any PRs from Kay.

I don't condone all of Linus' outbursts -- but we do need to remember it's the internet and more importantly a select group of people on the LKML... it's not an office building where cordiality trumps directness. Sometimes being direct is the best approach, even in "real life".

Agreed on most points except last.

Linus is never direct. He takes the scenic route, describing someone's ancestry, mental capacity, and personal hygiene (figuratively speaking), and only after he is done taking down the individual does he get to the actual behavior. His exchange with Kay could have been much shorter too:

"Kay, this behavior is unacceptable here. I read through the threads and it looks like you are causing real problems. Because of this, I am blocking any future PRs from you."

Simpler, more direct, addresses the behavior, and lets everyone know that the buck stops with him. And this probably would have taken much less time than his actual rant.

> Linus is never direct.

He was direct in the discussion with Alan Cox that this thread started with. He didn't act the way you described in this case.

It takes a lot to get Linus to the point of a rant of the kind you describe, and they are exceedingly rare compared to his usual behaviour.

> Yes, I'm familiar with the incident. I was reading LKML when it happened.

Probably the link was posted for the benefit of others.

Yes, thanks. I should have mentioned that.

I half expected him to throw in something like "Just a hobby, won't be anything big and professional like Linux."

For those not familiar, this is a reference to Linus' quote when announcing commencement of work on Linux:

>just a hobby, won't be big and professional like gnu


(Sorry for the Google Groups link...)

The sardonic introduction and the fact that I initially misread his name as Alan Kay made this all the better for me, but then I came to my senses.

Alan Cox is just as interesting a figure, though, and this is certainly a cool project. One might ask why we need another toy Unix, but I personally like having diverse itches scratched. Not to mention this might be an easier introduction to low-level OS hacking than having to deal with all the cognitive overhead of contributing to larger projects like Linux and the BSDs (and a non-x86 arch is always nice).

> Not to mention this might be an easier introduction to low-level OS hacking than having to deal with all the cognitive overhead of contributing to larger projects like Linux and the BSDs

Do you have an opinion on Minix, in this respect?

I'd really love to see a microkernel with just enough drivers to run on modern hardware, then tied to a higher-level platform. I know it may seem really weird, but I would love to see node running on something like this. I know there's been some work on getting something like node working with embedded systems via translation or communication channels; it'd just be nice to see this micro approach applied to something larger, removing the extra overhead in said system.

Again, just a weird, warped thought here... Hell, even getting golang or rust would be cool. Something you can run some code on without a lot of baggage that you aren't using. I love where coreos is heading, and would love to see something even lighter.

Look at rumpkernel, OSv, and the x-on-Xen solutions, e.g. Mirage.

When I first read this, I thought it was meant to be a sarcastic statement on SystemD haters, but this is for real? I would assume this is all geared toward scratching some kind of retro computing itch?

It could also be an exercise in open computing, hailing back to an era before legal attacks and ubiquitous surveillance.

Released in 1976, the Z80 architecture is well known and unencumbered by patents. It's also simple, with fixed instruction timing, meaning it can be well tested, leaving few places for a back door to hide. The original hardware probably predates any surveillance programs (edit: and the silicon is being publicly reverse engineered by enthusiasts). There's a satisfying feeling of control when in charge of a computer that is simple enough to understand in its entirety.

A Z80 won't be the fastest computer, but it might be useful for some tasks. Updates to the Z80, starting with FPGA cores, will be faster than the original, and might form a basis for enthusiasts to develop further. Let's face it, ARM's roots are in Z80 era processors.

If that was the goal, basing it on something like the Motorola 680x0 family would've been a better choice:

* The architecture is already 32 bit.

* There are FPGA reimplementations, including a range of boards from different people, and open source designs (the Minimig) that people have produced working machines from.

* There's been a variety of work on producing more advanced versions of the cores, employing more modern design features.

* The M68k family has MMU support.

* There are Linux ports for M68k, as well as a number of other OS's.

Looking at his Google Plus posts, which include various ancient hardware as well as a variety of old emulators, I think the retro appeal is more relevant.

Not claiming to have much understanding about any of this stuff, but here's his comment from the link:

"+retrotails prower I'm sure +Dirk Hohndel would like an Atari ST port but once you get past 286 there are better OS to run. For m68k you have to deal with the lack of segmentation or banking in most cases (so vfork not fork), but if you can do that then given you've usually got real DMA I would have thought 2.11BSD/RetroBSD a far far better place to start, or even ucLinux. Many of the 68K platforms are also blessed with other open but non-Unix OS's especially the Atari where basically all the underlying OS code you need is nowdays available in a free form (eg FreeMiNT/XaAes - which is even uses rpm)

Some of the key assumptions in UZI and thus in Fuzix really stop making sense if you have a 32bit system or you have good offloaded I/O. I'm not entirely sure those assumptions don't break down by 286 to be honest. However 286 protected mode is so wonderfully demented and feature filled it has to be done."

Everything he writes there makes sense. Nobody would bother with a UZI derivative on an M68k system, because you can run far more advanced OS's on them. But that's exactly why M68k would make far more sense if his OS was an "exercise in open computing".

But as a fun retro project, on the other hand, UZI / Fuzix seems like an awesome thing to play with (though I'd prefer to see a C64/C128 version...)

The original ARM was inspired by and designed on the 6502 (in the sense that they visited Western Digital and realized that designing a CPU was actually feasible for a small team, then emulated the design on a BBC Micro with a 2nd processor), but IIRC Sophie Wilson stated they have no actual technology in common.

I was very confused here. Western Design Centre, not Western Digital. WDC is still around [1] (or at least the website is up - it was last updated two years ago according to the front page).

For those unaware, the 6502 design was largely the product of two people: Chuck Peddle and Bill Mensch.

Chuck Peddle was an ex-Motorola guy that was on the 6800 team but was dissatisfied that Motorola was unwilling to bet on a much cheaper chip aimed at the mass market (the 6800 was around the $250 level before Peddle left; the 6502 debuted at $25 I believe).

Bill Mensch was/is by pretty much all accounts a near super-human layout guy. At this time chips were still laid out without much, if any, computer assistance, and prototype turnaround was slow and expensive, so he was a major asset in cutting the number of prototypes needed (the legend - no idea if it is true - is that he had an amazing run of 9 or so designs that came back working on the first try; far simpler designs than these days, of course, but for any to come back working on the first try was uncommon).

Jack Tramiel at Commodore (which bought MOS) was always penny-pinching, and so Mensch was able to retain rights to produce his own 6502 derivatives. He founded Western Design Centre and has continued designing and selling related cores ever since.

[1] http://www.westerndesigncenter.com/wdc/

Yes, you are correct. The team from Acorn, as they were then, visited and realized that they too could do this; it didn't need the resources of a giant firm. Of course, they already had an excellent track record.

There's an interesting interview about this (amongst other things - it's basically Wilson talking about her career - interesting throughout) here: http://www.computerhistory.org/collections/catalog/102746190

Very informative thanks. My dad still has a hefty collection of Micros (A, B, B+ and Master) and even has a second processor tucked away in his loft.

The creators of Intel's 4004 (arguably the beginning of the microprocessor era) and the 8080 are the same men who left and founded Zilog. I would argue that the heart (brain?) of the work that made these microprocessors so great is Masatoshi Shima. Of course there were other important people involved, like Federico Faggin, who worked closely with Shima and brought him in to start Zilog; Yoshio Kojima, who recruited Shima to Busicom to start working on the calculators; and Tadashi Sasaki (Sharp), who in collaboration with Robert Noyce (Intel) invested in Kojima's company (Busicom).


Unfortunately, widespread surveillance has been around for a lot longer than the 70s, though it started to become broader in scope around then.


People used to learn programming by learning assembly language. Here's one example for the 68000:

First page of contents: http://imgur.com/1XCLrsQ

Example instruction description: http://imgur.com/MV6Le34

Hello world: http://imgur.com/5yKJlQ9

Off topic but if anyone is ever in Cheltenham UK (or is prepared to pay postage) I have some old books I want to get rid of. http://imgur.com/AWV9HLn

For anyone who wants non-paper versions of stuff like this (focused on Amiga and 8-bit Commodore computers, but there's also a bunch of more generic stuff, and 6502/68000 material): http://www.bombjack.org/commodore/

Also, the M68k programming manuals should still be downloadable from Freescale (what used to be Motorola's semiconductor division). At least they were when I last looked, a year ago or so. They are wonderfully clear and detailed.

For sure! I really got my programming feet wet writing programs in Z80 assembly for my TI-83 graphing calculator as a middle schooler. A few years later I dabbled a bit in 68k when I got a TI-89.

Those books look good. I've emailed you about a few of them.

Lattice C! I haven't seen that book in ages.

People used to walk to school uphill both ways in the snow.

Hmmm, bet international postage would be a killer

Well, there are some people out there who have valid concerns about SystemD and, in general, about Linux as a big, bloated, unmaintainable, insecure monolithic kernel. Is that a valid concern? I am not sure. Is that going to be fixed by Alan Cox and Fuzix? Also not sure.

I like the idea of Minix, and that would be a good direction to move in, but the big companies contributing the majority[1] of the current Linux work are too invested in Linux.

There are other projects targeting the same area, different approach though.

1. http://arstechnica.com/information-technology/2013/09/google...

2. http://osv.io/

> Is that going to be fixed by Alan Cox and Fuzix? Also not sure.

The beginning of the OP is obviously a joke.

This is a retro computing hobby project for 1970s era microprocessors. It's obviously not intended to be any kind of replacement for Linux or any other OS on modern computers.

Well, in certain aspects Linux started as a joke. :)

Cox seems to be on some kind of emulation and retro computing binge as of late. His G+ postings are all about Z80 and the computers that used that CPU.

Well, it does not appear to be actually trying to target large-scale deployments, nor the same areas that Linux targets currently. Seems to be mostly about emulating/reliving old-school computing devices and their systems. So, from that perspective, seems to be more of a "pet project" than a serious one. It's doubtful people will dump their OS of choice for this one unless there is a specific reason.

I'm a computer science major in my senior year and I've always been interested in contributing to a big open-source low-level software project like this. However, I lack experience working on operating systems, so I have struggled with getting started. Would this be a good place to try getting involved?

Only if you have some existing interest in the hardware in question. This is a hobby retro-computing project, for a chip that was popular 30 years ago.

If you want to get into a low-level, open source project that is relevant today, I would try RISC-V (http://riscv.org/). It's an ISA and family of processor cores designed to be competitive with the niche that ARM usually fills these days, but fully open source, with a freely implementable ISA. Now, it's pretty new and you can't buy RISC-V chips yet, but it's done by a team led by David Patterson, who is one of the fathers of the RISC architecture, and it looks pretty promising as a new open ISA and family of processor cores.

Or if you want to work on hardware that is actually available today, ARM would probably be your best bet. Maybe try porting Linux, or an RTOS like NuttX (http://www.nuttx.org/), to an ARM SOC that it doesn't yet run on.

This is a great project, but it is most definitely retro-computing. If you're not interested in that area, then probably not.

xv6 is probably the smallest and simplest kernel you can study: http://pdos.csail.mit.edu/6.828/2014/xv6.html

40k? Why is everything so bloated these days? The last 6502 machine I had only had 32k in total.

Maybe he hasn't optimized yet?

Semi-OT: what is it with all the old-school Linux hackers and their obsession with Google+?

Network effects.

I was wondering what he was up to; he was working on Z80 support for pcc[1]. It is maturing quite well; it can build a lot of the NetBSD kernel, for example.

[1] http://pcc.ludd.ltu.se/

For anyone who thinks this is a joke, or that the Z80 can't do anything useful: http://www.symbos.de/

The old processors are commonly underestimated. I remember the times of the first versions of Turbo Pascal on CP/M and 8086. It had a very basic user interface (just keystrokes), but the language was very convenient, and compilation was incredibly fast, almost instant.

I also had an Atari ST with an 8MHz 68000 (no math copro) with Tempus Editor (ASCII) and Tempus Word (word processor). Both were written in assembler and also incredibly fast -- faster than MS Word on a PC today. Other people used their Atari ST to write their dissertation with Signum, and others published professional newspapers with it.

What does a PC really need? A convenient assembler, a few compilers, a screen editor (vi), a simple database, a word processor (TeX), simple TCP/IP and other very basic things. Just the things which Alan focuses on. Unix on a Z80 or on enhanced FPGA cores sounds really interesting.

I have been a happy Linux user for decades, but I am seriously concerned about the future of Linux. On one hand, Linux will probably soon be kept out of hard-locked UEFI/Secure Boot systems; on the other hand, modern PCs cannot be trusted anymore in terms of security anyway. Also, many Linux distros follow the questionable systemd path, which makes me wonder if Linux will soon be bloated up like Windows. The Linux kernel runs wonderfully so far, but it already has several million lines of code, and systemd will add a significant level of complexity.

These things are reasons why I consider Alan's "back to the roots" approach the right way and very promising. Not only is the software open source, but the hardware requirements are so low that many people could build their own System V Unix Z80 systems at home. Cheap microcontrollers like the Parallax Propeller could be added to provide VGA output and parallel I/O.

> What does a PC really need?

I'd like a large address space and some way to do read and write barriers (for real-time GC).

> These things are reasons why I consider Alan's approach of "back to the roots" the right way and very promising.

What do you think about OpenBSD? The only reason I switched from OpenBSD to Lubuntu on my personal laptop is because of Adobe Flash.

> The only reason I switched from OpenBSD to Lubuntu on my personal laptop is because of Adobe Flash.




I wrote 2 games in assembly for the Gameboy and Gameboy Color (running Z80s at 4 and 8 MHz respectively) back in the day and they were great to program on. Loved the simplicity of the machine.

I have many Z80 machines in my 'museum'; happy to see people making software for them still. I am working on applications for Symbos[1] which is really a show of force on these rather slow machines[2].

[1] http://symbos.org [2] https://www.youtube.com/watch?v=2-oBNh0UkQc

Now I really need to finish building my Z80 machine!

Sometimes I think more Z80 computers are available in 2014 than in 1994 or 2004.

I have a


almost ready to bring up in my infinite spare time. And a console IO board and memory board.

Nice, just a few weeks ago I finished my port of socz80 to DE0-nano (https://github.com/slp/socz80-de0_nano).

Apparently, the original socz80 is already supported on Fuzix, so it should run here too. More retro-fun!

Wow, a new OS for my Amstrad CPC. If only I knew how to transfer data onto it.

Floppy (you can use a PC floppy drive), or grab a board to add SD or IDE to your CPC.

direct link to the Github project, though the parent post on G+ does have good Q&A :)


   "The time has come," the Walrus said,
   "To talk of many things:
    Of QR codes and profile pics
    Of Cox'es and their hackings"


I wonder how difficult, or even possible (there's no security architecture at the processor level), it would be to port NetBSD to the Z80.

Maybe from an old BSD but not from NetBSD.

There's a fairly hard cut-off between the design of early OSs with no memory protection (e.g. legacy Mac OS, W95, Amiga, DOS, Locomotive BASIC), and operating systems which ration memory out to processes (NT, BeOS, Linux, BSD, Solaris). The latter group depends on hardware features that the Z80 didn't have.

Version 7 Unix is in the first camp, whereas BSD had paged virtual memory well before the IP became open. Also, I think NetBSD requires at least a 32-bit word size.

There was a Unix-like in the Z80 era called Coherent. The Wikipedia page says, "There was no support for virtual memory or demand paging." I remember it being advertised in the magazines but never got to play with it; I would be interested to hear stories.

[multiple edits, had fun thinking about this]

Retrobsd is a good starting point.

No, same problems. 32-bit, memory protection.

Ah, I had forgotten. Well, you can run the NetBSD rump kernel without memory protection, which gives you much of Unix (no mmap, obviously), but the 32-bitness is probably still an issue. Although it might not be: you would have to implement 32- and 64-bit types, but you could try to do it...

The main problem with the Z80 is the lack of a good C compiler. SDCC is not enough.

Any relation to Russ Cox (unixv6, plan9, golang) ?

Come on, this wasn't trolling .. (also, I googled before asking)

I read the first line and I thought it was going to be a proper Linux distro without all that new systemd junk. But then I saw it was for the Z80. Very disappointed :(

Unlike Linus, Alan has an MBA.

This is pretty exciting!

Joke's on you, Lennart just announced 8-bit systemd.

Yes to all your questions, Mr. Cox.

