I just learned, from reading that wikipedia page, about the falling out he and Linus had.
> I'm leaving the Linux world and Intel for a bit for family reasons. I'm aware that "family reasons" is usually management speak for "I think the boss is an asshole" but I'd like to assure everyone that while I frequently think Linus is an asshole (and therefore very good as kernel dictator) I am departing quite genuinely for family reasons and not because I've fallen out with Linus or Intel or anyone else. Far from it I've had great fun working there.
July 2009: "I've had enough. If you think that problem is easy to fix you fix it. Have fun. I've zapped the tty merge queue so anyone with patches for the tty layer can send them to the new maintainer."
No, they're not the same incident.
read: provide compatibility indefinitely.
It's not that they leave security vulnerabilities in; it's that they build compatibility for any software that may expect the old behavior, while simultaneously fixing the underlying problem. Going forward, software shouldn't have to care which kernel it's running on.
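For illustration, that pattern -- fix the real behavior but keep a switchable legacy path -- can be sketched in a few lines of C. Everything here (the names, the flag, the fake pty semantics) is hypothetical, not actual kernel code:

```c
#include <stdbool.h>

/* Hypothetical sketch of the "fix it, but keep a compat layer" pattern.
 * None of this is real kernel code; names and semantics are invented. */

static bool legacy_compat = true;   /* stays on until userland catches up */

/* Corrected semantics: report no data once the other side is gone. */
static int pty_read_fixed(bool peer_open) {
    return peer_open ? 1 : 0;
}

/* Old semantics some programs accidentally depend on. */
static int pty_read_legacy(bool peer_open) {
    (void)peer_open;                /* the legacy path ignored peer state */
    return 1;
}

/* The entry point userland sees: one switch, two behaviors. */
int pty_read(bool peer_open) {
    return legacy_compat ? pty_read_legacy(peer_open)
                         : pty_read_fixed(peer_open);
}
```

Once enough userland is fixed, the flag and the legacy function can be deleted, leaving only the correct behavior.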
Yes, it's critical, but is the criticism unwarranted? He is questioning why one of the most senior maintainers is directly contradicting one of the most central "edicts" from Linus on the kernel development: Don't break user-land. In the message, Linus directly quotes Alan as arguing that breaking userland is ok.
Of course Alan was/is free to disagree, but he should have known very well that Linus would never let that fly. Not least because Linus had told him it wouldn't, and he kept pressing for it.
What was the alternative? From the outside, it looks like Alan repeatedly avoided doing what Linus told him needed doing. Linus could not have backed off without sacrificing the guarantee of not breaking userland.
Quite honestly, I don't see any of this in the thread in question.
The problem is that while you make pty.c have the correct behavior, lots of legacy userland code relies on the current implementation. Since as kernel developers, we must avoid breaking userland code at all costs, a better solution is to change the behavior, but implement a compatibility layer. Once enough userland code has been fixed, we can remove the compatibility layer. This has been our standard approach for these types of issues, and I think it would work here. While I understand how frustrating it is to not be able to fix this once and for all, I think we need to address our users' needs first.
Shorter, nicer, more to the point, and with much less time wasted on emotional outbursts.
Not to mention that your snippet above would have received exactly the same pushback (maybe more) from Alan at the time, since Alan believed he was in the right.
At the same time, do you really believe that the only way to get people to see the error of their ways is to be as harsh as Linus usually is and to resort to personal attacks (referring to your line about how he could and should have been harsher)? This is indeed Linus's usual style. The reality is that maintaining some basic level of respect with your colleagues would mean not losing great kernel developers to Linus's outbursts. Alan leaving was a big deal and was widely publicized. How many others have left to never return after receiving a nastygram from Linus?
Let's take it further: if your boss publicly called you a "fucking idiot", as Linus is known to do, every time you screwed up, would you want to continue working for her? Chances are, most people would resign on the spot.
Lastly, I disagree. Even if Alan still thought he was right after receiving my version of the email, he would have a much harder time arguing with a more reasoned version. He would certainly be less likely to throw his hands up in the air and leave the team. This whole "Linus needs to be an asshole sociopath to get shit done" is simply not true: most effective project managers don't resort to this type of behavior because they have better ways to get things done. Linus has fallen into this MO, and it's working for him, but I argue that if he was a better manager and a nicer person he would get more stuff done, not less.
I think we can both agree on this point.
Typically, when I've noticed Linus explode at someone on the mailing list, he is usually the last person to put his two cents in. As in, many others on the mailing list and bug trackers are the first to point out the problem(s), yet the maintainer/patch-submitter argues back and refuses any changes. Usually, after a long while, someone pings Linus and asks for his involvement (the kernel is far too large for any single person to pay close attention to all components). Linus typically comes in as the last line and just unleashes at someone who has already made a much larger problem than it ought to have been -- sort of the "buck stops here" thing.
The most prominent example I can think of off the top of my head is the Kay Sievers fiasco. People had been going back and forth with Kay for weeks/months before Linus finally weighed in. It resulted in Linus banning any PR's from Kay.
I don't condone all of Linus' outbursts -- but we do need to remember it's the internet and more importantly a select group of people on the LKML... it's not an office building where cordiality trumps directness. Sometimes being direct is the best approach, even in "real life".
Linus is never direct. He takes the scenic route, describing someone's ancestry, mental capacity, and personal hygiene (figuratively speaking), and only after he is done taking down the individual does he get to the actual behavior. His exchange with Kay could have been much shorter too:
"Kay, this behavior is unacceptable here. I read through the threads and it looks like you are causing real problems. Because of this, I am blocking any future PRs from you."
Simpler, more direct, addresses the behavior, and lets everyone know that the buck stops with him. And it probably would have taken much less time than his actual rant.
He was direct in the discussion with Alan Cox that this thread started with. He didn't act the way you described in this case.
It takes a lot to get Linus to the point of a rant of the kind you describe, and they are exceedingly rare compared to his usual behaviour.
Probably the link was posted for the benefit of others.
>just a hobby, won't be big and professional like gnu
(Sorry for the Google Groups link...)
Alan Cox is just as interesting a figure, though, and this is certainly a cool project. One might ask why we need yet another toy Unix, but I personally like having diverse itches scratched. Not to mention this might be an easier introduction to low-level OS hacking than dealing with all the cognitive overhead of contributing to larger projects like Linux and the BSDs (and a non-x86 arch is always nice).
Do you have an opinion on Minix, in this respect?
Again, just a weird, warped thought here... Hell, even getting golang or rust would be cool. Something you can run some code on without a lot of baggage that you aren't using. I love where CoreOS is heading, and would love to see something even lighter.
Being released in 1976, the Z80 architecture is well known and unencumbered by patents. It's also simple with fixed instruction timing, meaning it can be well tested, leaving few places for a back door to hide. The original hardware probably predates any surveillance programs (edit: and the silicon is being publicly reverse engineered by enthusiasts). There's a satisfying feeling of control, when in charge of a computer that is simple enough to understand in its entirety.
A Z80 won't be the fastest computer, but it might be useful for some tasks. Updates to the Z80, starting with FPGA cores, will be faster than the original, and might form a basis for enthusiasts to develop further. Let's face it, ARM's roots are in Z80 era processors.
* The architecture is already 32 bit.
* There are FPGA reimplementations, including a range of boards from different people, and open source designs (the Minimig) that people have produced working machines from.
* There's been a variety of work on producing more advanced versions of the cores, employing more modern design features.
* The M68k family has MMU support.
* There are Linux ports for M68k, as well as a number of other OS's.
Looking at his Google Plus posts, which cover various ancient hardware as well as a variety of old emulators, I think the retro appeal is more relevant.
"+retrotails prower I'm sure +Dirk Hohndel would like an Atari ST port but once you get past 286 there are better OS to run. For m68k you have to deal with the lack of segmentation or banking in most cases (so vfork not fork), but if you can do that then given you've usually got real DMA I would have thought 2.11BSD/RetroBSD a far far better place to start, or even ucLinux. Many of the 68K platforms are also blessed with other open but non-Unix OS's especially the Atari where basically all the underlying OS code you need is nowdays available in a free form (eg FreeMiNT/XaAes - which is even uses rpm)
Some of the key assumptions in UZI and thus in Fuzix really stop making sense if you have a 32bit system or you have good offloaded I/O. I'm not entirely sure those assumptions don't break down by 286 to be honest. However 286 protected mode is so wonderfully demented and feature filled it has to be done."
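The "vfork not fork" point in that quote is the standard MMU-less idiom: fork() has to duplicate the parent's address space, which is painful without paging, segmentation, or banking, while vfork() lets the child borrow the parent's memory until it execs. A minimal portable sketch of the pattern (plain POSIX, not Fuzix code):

```c
#include <sys/wait.h>
#include <unistd.h>

/* Spawn a program the vfork() way: the child shares the parent's
 * memory, so the only safe things it may do are exec or _exit(). */
int spawn_wait(const char *path, char *const argv[]) {
    pid_t pid = vfork();
    if (pid < 0)
        return -1;                  /* vfork failed */
    if (pid == 0) {
        execv(path, argv);          /* replaces the borrowed image */
        _exit(127);                 /* exec failed; plain exit() is unsafe here */
    }
    int status;
    if (waitpid(pid, &status, 0) < 0)
        return -1;
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

On hardware without memory banking this never needs two live copies of the parent's address space, which is exactly why small-machine Unixes lean on vfork semantics.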
But as a fun retro project, on the other hand, UZI / Fuzix seems like an awesome thing to play with (though I'd prefer to see a C64/C128 version...)
For those unaware, the 6502 design was largely the product of two people: Chuck Peddle and Bill Mensch.
Chuck Peddle was an ex-Motorola guy that was on the 6800 team but was dissatisfied that Motorola was unwilling to bet on a much cheaper chip aimed at the mass market (the 6800 was around the $250 level before Peddle left; the 6502 debuted at $25 I believe).
Bill Mensch was/is by pretty much all accounts a near-superhuman layout guy. At the time, chips were still laid out without much, if any, computer assistance, and prototype turnaround was slow and expensive, so he was a major asset in cutting the number of prototypes needed (the legend -- no idea if it is true -- is that he had an amazing run of 9 or so designs that came back working on the first try; far simpler designs than these days, of course, but having any come back working on the first try was uncommon).
Jack Tramiel at Commodore (which bought MOS) was always penny-pinching, so Mensch was able to retain the rights to produce his own 6502 derivatives. He founded the Western Design Center and has continued designing and selling related cores ever since.
First page of contents: http://imgur.com/1XCLrsQ
Example instruction description: http://imgur.com/MV6Le34
Hello world: http://imgur.com/5yKJlQ9
Off topic but if anyone is ever in Cheltenham UK (or is prepared to pay postage) I have some old books I want to get rid of. http://imgur.com/AWV9HLn
Also, the M68k programming manuals should still be downloadable from Freescale (what used to be Motorola's semiconductor division). At least they were when I last looked, a year or so ago. They are wonderfully clear and detailed.
I like the idea of Minix, and that would be a good direction to move on, but the big companies who are contributing the majority of the current Linux work are vested too much in Linux.
There are other projects targeting the same area, different approach though.
The beginning of the OP is obviously a joke.
This is a retro computing hobby project for 1970s era microprocessors. It's obviously not intended to be any kind of replacement for Linux or any other OS on modern computers.
If you want to get into a low-level, open source project that is relevant today, I would try RISC-V (http://riscv.org/). It's an ISA and family of processor cores designed to be competitive in the niche that ARM usually fills these days, but fully open source, with a freely implementable ISA. Now, it's pretty new and you can't buy RISC-V chips yet, but it's done by a team led by David Patterson, who is one of the fathers of the RISC architecture, and it looks pretty promising as a new open ISA and family of processor cores.
Or if you want to work on hardware that is actually available today, ARM would probably be your best bet. Maybe try porting Linux, or an RTOS like NuttX (http://www.nuttx.org/), to an ARM SOC that it doesn't yet run on.
I also had an Atari ST with an 8MHz 68000 (no math coprocessor) with Tempus Editor (ASCII) and Tempus Word (word processor). Both were written in assembler and were incredibly fast -- faster than MS Word on a PC today. Other people used their Atari ST to write their dissertations with Signum, and others published professional newspapers with it.
What does a PC really need? A convenient assembler, a few compilers, a screen editor (vi), a simple database, a word processor (TeX), simple TCP/IP and other very basic things. Just the things which Alan focuses on. Unix on a Z80 or on enhanced FPGA cores sounds really interesting.
I have been a happy Linux user for decades, but I am seriously concerned about the future of Linux. On one hand, Linux will probably soon be locked out of hard-locked UEFI/Secure Boot systems; on the other hand, modern PCs cannot be trusted security-wise anymore anyway. Also, many Linux distros follow the questionable systemd way, which makes me wonder if Linux will soon be bloated up like Windows. The Linux kernel runs wonderfully so far, but it already has several million lines of code, and systemd will add a significant level of complexity.
These things are why I consider Alan's "back to the roots" approach the right way and very promising. Not only is the software open source, but the hardware requirements are so low that many people could build their own System V Unix Z80 systems at home. Cheap microcontrollers like the Parallax Propeller could be added to provide VGA output and parallel I/O.
I'd like a large address space and some way to do read and write barriers (for real-time GC).
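For context, a write barrier here just means a small hook on every pointer store so the collector knows which memory changed. A toy card-marking sketch in C (the sizes, names, and flat heap are all illustrative, not from any real collector):

```c
#include <stdint.h>
#include <string.h>

/* Toy card-marking write barrier: the heap is divided into fixed-size
 * "cards", and every pointer store marks the card it touched so a
 * real-time GC can rescan only the dirty cards instead of everything. */

#define HEAP_SIZE  4096
#define CARD_SHIFT 7                          /* 128-byte cards */
#define NUM_CARDS  (HEAP_SIZE >> CARD_SHIFT)

static uint8_t heap[HEAP_SIZE];
static uint8_t card_table[NUM_CARDS];

static void write_barrier(const void *slot) {
    uintptr_t off = (uintptr_t)((const uint8_t *)slot - heap);
    card_table[off >> CARD_SHIFT] = 1;        /* mark the card dirty */
}

/* All mutator pointer stores go through this one choke point. */
void store_ptr(void *slot, uintptr_t value) {
    memcpy(slot, &value, sizeof value);       /* handles unaligned slots */
    write_barrier(slot);
}
```

On a machine with an MMU the same effect can be had by write-protecting pages and catching the fault; on simpler hardware an explicit software barrier like this is the only option.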
> These things are reasons why I consider Alan's approach of "back to the roots" the right way and very promising.
What do you think about OpenBSD? The only reason I switched from OpenBSD to Lubuntu on my personal laptop is because of Adobe Flash.
I have a
almost ready to bring up in my infinite spare time. And a console IO board and memory board.
Apparently, the original socz80 is already supported on Fuzix, so it should run here too. More retro-fun!
"The time has come," the Walrus said,
"To talk of many things:
Of QR codes and profile pics
Of Cox'es and their hackings"
There's a fairly hard cut-off between the design of early OSs with no memory protection (e.g. legacy Mac OS, W95, Amiga, DOS, Locomotive basic), and operating systems which ration memory out to processes (NT, BeOS, Linux, BSD, Solaris). The later group depends on hardware features that the z80 didn't have.
Version 7 unix is in the first camp. Whereas BSD had paged virtual memory well before the IP became open. Also, I think NetBSD requires at least a 32-bit word size.
There was a Unix-like in the Z80 era called Coherent. Its Wikipedia page says, "There was no support for virtual memory or demand paging." I remember it being advertised in the magazines but never got to play with it; I'd be interested to hear stories.
[multiple edits, had fun thinking about this]