Hacker News
The Tanenbaum-Torvalds Debate (1992) (oreilly.com)
72 points by theoutlander on Aug 8, 2014 | 47 comments



"To put this discussion into perspective, when it occurred in 1992, the 386 was the dominating chip and the 486 had not come out on the market. Microsoft was still a small company selling DOS and Word for DOS. Lotus 123 ruled the spreadsheet space and WordPerfect the word processing market. DBASE was the dominant database vendor and many companies that are household names today--Netscape, Yahoo, Excite--simply did not exist."

Funny how another 15 years changes things. I wonder how many people will remember Netscape in a few years. I had never heard of Excite before this. Yahoo is just about the furthest thing from people's mind nowadays when talking about technology companies.


> Yahoo is just about the furthest thing from people's mind nowadays when talking about technology companies.

Yahoo is actually very, very dominant here in Japan. There's even a (popular) Yahoo ISP.

I probably know more Japanese people with Yahoo e-mail addresses than I do Japanese people with gmail addresses.


That's a joint venture dominated by Softbank (which is also a cell phone operator and the owner of Sprint).

Yahoo Inc. only holds 35% of it and licenses its name to this separate company.

User accounts are separate too: if you create an account anywhere in the world except Japan, it's a @yahoo.com account, but in Japan it's @yahoo.co.jp.


Yes Yahoo Japan is very big, but it is mostly separate from Yahoo.com.


To add a small quibble here, Microsoft was not a "small company" in 1992. They were one of the most well known and most successful software companies in the world. But they were not nearly as large nor as dominant as they would become in the late '90s, though the entire industry grew enormously in that time frame as well.


To be slightly more specific, MSFT had a market cap of $23 billion and had just shipped Windows 3.1. It was so much in the zeitgeist that Douglas Coupland would set a novel there a few years later.

In any case, at the time there seemed to be every reason to see microkernels as the only viable way forward, especially given the multitude of competing Unix variants that had grown into unwieldy, multi-mega-SLOC monsters. The future seemed to belong to lightweight microkernel OSes like QNX, the UNICOS update, HURD (seriously, it was a thing back then!), and improved versions of Mach and Chorus. The idea of writing a Unix-like as anything other than a toy seemed absurd.

History is a very hard thing to predict in advance.


Microsoft was a huge, powerful, feared company in 1992.


Choice quotes from ast:

> 5 years from now [1992 -> 1997] everyone will be running free GNU on their 200 MIPS, 64M SPARCstation-5

> My point is that writing a new operating system that is closely tied to any particular piece of hardware, especially a weird one like the Intel line, is basically wrong.


The second quote continues with "An OS itself should be easily portable to new hardware platforms".

The 1.0 release in 1994 only supported i386 machines. Only after considerable effort and changes to the code was Linux ported to other platforms with 1.2, and AFAIK even the 68k and PowerPC ports for 2.0 needed quite some work.

So, history proved him right. ;-) Of course, Linus didn't intend to write a portable OS, only one for AT clones.

Maybe Tanenbaum didn't foresee Linux jumping to as many different platforms as it eventually did, but with his experience he probably already saw the likely need to port it to newer machines if Linux turned out to be useful to some people.


Tanenbaum is an academic. The ivory tower got (and gets) the microkernel debate wrong. Microkernels are a "better architecture", and to them that justifies abysmal performance. It took them decades to get one working in the first place, and again this is not seen as a problem.

This is how academia works. For a long time:

microkernels were universally perceived as better

despite nobody ever having seen a working one

despite the few attempts at a working one being unable to run basic programs, never mind a decent interface

despite every successful kernel ever being monolithic (the exception, to some extent, being QNX)

and of course these messages were posted from machines that ran ... monolithic kernels

And of course: they admitted to using their power to force people "the right way" in this debate. Both ast and tanenbaum point out that they'd deduct points/fail students who took the wrong stance. (Presumably, of course, they'd also fire any PhD students who did the same.)

Please keep this in mind next time you hear that academics widely support idea X.

Think of academia as a hell of a lot of climate debates, except with the protagonists' positions usually not very well supported by, or outright contradicted by, the real world. Or even worse: the position is a choice, with no real argument one way or the other, with a strict hierarchy between people, and the higher-ups very willing to abuse their power.

It took me a long time to get out of academia, and out of this madness. I wrote papers and dissertations, and here's how that plays out: "Good boy! +1 (not in pay, of course). Publish. Copy to the library. Now put it in the trash and work on something else." (I had failed papers too, of course.)

Let's just say that this is far from the only debate that works this way. There are many other examples, like in programming languages: (extreme) static typing, (pure) functional programming, dependent typing, garbage collection, RPC interfaces, object databases, machine learning nonsense, ...

Barring other information, I have realized, a good bet is this: if >70% of academic researchers are on the same side of any particular claim, bet against that claim, for that reason alone. And get out of academia, of course. Google, Facebook, Yahoo, Microsoft, ... all are building large machine learning teams with actual resources, by far the best researchers in the world, people who demand actual results on some problems (yes, that's a plus), and continuity.

There is an extreme obsession with learning algorithms in academia. The whole thing is a worthless waste of time, for one simple reason: feature engineering can make k-means beat the crap out of the most advanced algorithm anyone's ever come up with. But new learning algorithms are much easier to write, much smaller to prove things about, and thus much, much easier to get a PhD dissertation through a committee with. In practice, "deep learning" is by far the most successful algorithm (which is another way of saying: running a 1963 algorithm at scale).
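The feature-engineering claim is easy to demonstrate with a toy example. Here's a minimal sketch in plain Python (the hand-rolled 1-D k-means and the concentric-rings dataset are my own illustrative choices, not anything from the thread): on raw (x, y) coordinates k-means can't separate two concentric rings, since both share the same center, but after engineering a single radius feature the separation is trivial.

```python
import math
import random

def kmeans_1d(values, iters=20):
    """Tiny k=2 k-means on scalars: returns a cluster index per value."""
    centers = [min(values), max(values)]
    assign = [0] * len(values)
    for _ in range(iters):
        assign = [0 if abs(v - centers[0]) <= abs(v - centers[1]) else 1
                  for v in values]
        for c in (0, 1):
            members = [v for v, a in zip(values, assign) if a == c]
            if members:
                centers[c] = sum(members) / len(members)
    return assign

random.seed(0)
points, labels = [], []
for ring, radius in enumerate([1.0, 5.0]):      # two concentric rings
    for _ in range(100):
        t = random.uniform(0, 2 * math.pi)
        points.append((radius * math.cos(t), radius * math.sin(t)))
        labels.append(ring)

# Raw (x, y) coordinates are useless to k-means here; one engineered
# feature -- the radius -- makes the problem one-dimensional and trivial.
radii = [math.hypot(x, y) for x, y in points]
assign = kmeans_1d(radii)

agreement = sum(a == l for a, l in zip(assign, labels)) / len(labels)
print(agreement)    # 1.0: perfect separation on the engineered feature
```

The algorithm didn't get any smarter; the representation did all the work.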


> Both ast and tanenbaum point out that they'd deduct points/fail

I agree with your post completely, but you do realize ast is Tanenbaum, right?


That first quote is funny when you look at this: http://en.wikipedia.org/wiki/Instructions_per_second#Timelin...

> Intel Pentium Pro 541 MIPS at 200 MHz ... 1996

Also, this list of benchmarks suggests a SPARCstation 5 was either slower than or roughly the same performance as the first-generation Pentiums (which came out in '93, only a year after that post): http://www.vintage-computer.com/vcforum/showthread.php?11079...

But this was definitely a time when betting on RISC-based desktops seemed like a good idea.


Low-end Sun hardware wasn't exactly fast. When the Pentium came out, the gap between RISC workstations and commodity Intel desktops really started to shrink.


Even better is that x86 started translating its instructions into an internal RISC-like set, so RISC kind of won anyway.


RISC didn't "kind of" win; it really won, due to everybody walking around with tiny RISC computers in their pockets. Running a variety of Linux.


The important difference between traditional RISC and what we have now in either x86 or ARM is the instruction encoding. You have to have a compact instruction encoding or you waste memory bandwidth. That's part of why x86 eventually beat the old RISC designs, and that's why ARM has the Thumb encodings.
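A back-of-the-envelope illustration of the density point. The x86 byte counts below are for these specific encodings only (x86 instructions range from 1 to 15 bytes), and the instruction sequence is just a made-up sample:

```python
# Illustrative byte counts for one small instruction sequence.  Classic
# ARM (A32) uses fixed 32-bit words; Thumb is mostly 16-bit (Thumb-2
# mixes 16- and 32-bit encodings, so 2 bytes each is the best case).
x86 = {
    "push rbp":     1,   # 55
    "mov eax, 1":   5,   # B8 01 00 00 00
    "add eax, ebx": 2,   # 01 D8
    "ret":          1,   # C3
}
arm_a32   = {insn: 4 for insn in x86}
arm_thumb = {insn: 2 for insn in x86}

for name, enc in [("x86", x86), ("ARM A32", arm_a32), ("Thumb", arm_thumb)]:
    print(f"{name}: {sum(enc.values())} bytes")
# x86: 9 bytes, ARM A32: 16 bytes, Thumb: 8 bytes -- a variable-length
# encoding buys real density, i.e. instruction-fetch bandwidth.
```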


I don't think it's that clear of a victory; ARM is everywhere (and so is MIPS, to a lesser extent) and is commonly mentioned, that much is true, but there are far more ISAs that most people don't hear about in mundane embedded applications (of which there are many) - 8051, 6502, PIC, etc. which are definitely CISCs.

Also, x86 being "RISC-like" internally is a bit of a misnomer: x86 uops are much wider (118 bits for the P6, even wider in later models) than most RISCs' fixed 32-bit instructions. Modern RISCs like the ARM Cortex series also decode instructions into wider uops.


> 6502 … which are definitely CISCs

It seems the "6502 risc or cisc" debate was never really solved, as a Google search for that phrase will show.


Even the tiny Windows computers are RISC machines. Windows NT also dates back to somewhere around '92 and made hardware independence a big deal. Dave Cutler had to fight tooth and nail to keep it in while lots of people were telling him to just build it for Intel.


In the 1980s there was often an assumption that "all the world's a VAX" - in the 1990s there was often an equivalent assumption that "all the world's a Sun".


Just a few years ago, it was "all the world's an Intel/AMD". We are just starting on the "all the world's an ARM" mindset.

Some things never change.


Can't help but chuckle at his apology / signature around the halfway point:

> Linus "my first, and hopefully last flamefest" Torvalds

Somehow I never noticed that before.


> In fact I have sent out feelers about some "linux-kernel" mailing list which would make the decisions about releases, as I expect I cannot fully support all the features that will /have/ to be added: SCSI etc, that I don't have the hardware for. The response has been non-existant: people don't seem to be that eager to change yet.

I guess interest materialized after a while...


Small correction, the 80486 was on the market in 1989.


Partially related: if anybody is interested, The Linux Foundation, together with the edX MOOC platform, has been offering a free Linux course since a few days ago[1]. It seems to be quite basic (I've just started it, knowing already something-but-never-enough of Linux), but considering that it's provided by The Linux Foundation, "sponsored" by Linus Torvalds, and was normally taught in a real/virtual classroom for $2,400, it's probably worth doing ;)

[1] https://www.edx.org/course/linuxfoundationx/linuxfoundationx...


It's a trip re-reading this stuff. It was almost painful to read Linus' apology for going off on ast, only because I remember all too well being just that same young hothead just a little bit earlier (about a half decade or so).


Before reading this, I had no idea that Ken Thompson used to work at Georgia Tech


Discussed just a few weeks ago - https://news.ycombinator.com/item?id=8010719


So that took place on bulletin boards? What's the equivalent nowadays? Is there a place where the action takes place among today's big thinkers?


Google Groups is where all those newsgroups are accessible now. I'm sure you can use other clients as well, but GG seems very convenient!

You can find the original thread here: https://groups.google.com/forum/#!topic/comp.os.minix/wlhw16...


It's called Usenet, and it still exists, though it's not as active as it used to be.

Google Groups is decent for searching old Usenet posts (though they've removed the "Advanced Search" feature), but horrible for posting. There are a number of NNTP clients (including Firefox) and free or cheap servers (I use news.eternal-september.org).

(To be clear, Google Groups includes both an interface to Usenet newsgroups and Google's own non-Usenet groups. It's probably ok for the latter, but they seriously botched the Usenet interface.)


They're asking what replaced newsgroups not where to get more old newsgroups.

Unless everyone is still using newsgroups for something besides piracy, which is the only context in which I've really heard them used recently.


The question was "What's the equivalent to it nowadays?"... unless there's something else I'm missing, I think Google Groups is where such discussions take place, and that's where I've participated for the last decade or so.

I don't think newsgroups are only used for piracy. Most open source products have a Google Group associated with them, unless they use discussions on GitHub or Stack Overflow chat / IRC.


To be fair, Linux has evolved a lot since then. It's not as monolithic as it used to be: it has layers, modules, subsystems (VFS, USB), interfaces.


It's still fundamentally monolithic though, a crash in a module or subsystem still brings the entire system down. The point of a microkernel is to bring the subsystems into user space, where a crash can be tolerated.

Linux is essentially made stable by the sign-off/code-review/subsystem-maintainer process, which keeps the code quality high.
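The isolation argument can be sketched in a few lines of Python (a toy illustration, not real kernel code): run a "driver" in its own process behind a pipe, let it crash, and restart it without the rest of the "system" going down.

```python
import subprocess
import sys

# A toy "device driver" run as a separate user-space process, the way a
# microkernel would host it.  It speaks a trivial request/reply protocol
# over pipes and has a deliberate bug we can trigger.
DRIVER = r"""
import sys
for line in sys.stdin:
    req = line.strip()
    if req == "crash":
        raise RuntimeError("driver fault")   # simulated driver bug
    print(req.upper(), flush=True)
"""

def start_driver():
    return subprocess.Popen([sys.executable, "-c", DRIVER],
                            stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                            stderr=subprocess.DEVNULL, text=True)

def call(drv, req):
    drv.stdin.write(req + "\n")
    drv.stdin.flush()
    return drv.stdout.readline().strip()

drv = start_driver()
first = call(drv, "read")       # normal request: the driver answers

call(drv, "crash")              # the driver dies on this one...
drv.wait()

drv = start_driver()            # ...so a supervisor restarts it, and
second = call(drv, "read")      # clients carry on; the "kernel" survived
print(first, second)            # READ READ
```

In a monolithic kernel the equivalent fault happens in kernel address space, so there's nothing left standing to do the restart.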


It really depends what you mean by "monolithic".

Properly, I would think it should refer to program structure: how well independent pieces of code are (logically) separated and able to tolerate failures in each other.

Here it's just being used to mean "single unprotected address space", where a particular class of bugs (memory corruption) can cause arbitrary problems in completely unrelated code. Would this mean that a kernel written in some managed memory-safe language (I think msft has an experimental C# one) cannot possibly be monolithic?


I mean monolithic in terms of what the argument was about. Monolithic kernels generally can't be written in a managed memory-safe language, because it's the kernel that provides the memory-safe part. With a microkernel, parts can be written in memory-safe languages; that's why Singularity has to use one: http://en.wikipedia.org/wiki/Singularity_%28operating_system...


So has MINIX (now called MINIX 3)[1], although more slowly than Linux. It is open source now, and still not monolithic.

[1] http://minix3.org/


It's actually kind of cool. Seems to be very reliability-focused. Does anyone have any experience running production software on it?


Not sure about MINIX 3, but I remember from Andy's book that MINIX in general has been widely used for production code, typically embedded systems.


It is a piece of history.

It is also a great story. The young boy outshines the great scholar.

I still think that Tanenbaum might have been right. But the forces of the open source movement were too great, beyond our wildest dreams, and were able to overcome any design limitations. Also, Linus did an epic job, both technically and in controlling the forces unleashed.


I never understood the significance of this. what's the big deal about it?

Someone made me read it when I was learning Linux for the first time 15 years ago. I'm not sure what I was supposed to get from it ...


It's a discussion similar in some ways to Richard P. Gabriel's "Lisp: Good News, Bad News, How to Win Big". It's similar to Eric Ries's ideas about a minimum viable product.

Lisp Machines running Lisp programs edited in Emacs might be the theoretically superior way to do things. Microkernels which don't crash when some device code fails might be a superior way to do things. But in a competitive market with things moving forward, people don't always have the time to fiddle with something until it's perfect.

By the time the Lisp people got together and hashed out Common Lisp (which Gabriel says was in many ways a poor compromise), or simplified everything with Scheme, the Unix/C people had already won. By the time things like Hurd finally became viable, OSes like Linux were already everywhere. People will take the poor substitute now and make it the default worldwide, living with the backwards-compatibility problems, rather than wait years for people to come down from ivory towers with the perfect solution.

You can go on and on with the examples: which chip had the better instruction set, MIPS or the Intel chip with all its legacy stuff? It doesn't matter; Intel already had a toehold, and people would rather stick with it than switch.


It's a pointed discussion about the best way to go about making operating systems. Consequently, it contained several predictions about the future of both hardware and operating systems. As it turned out, Linus's line of reasoning was substantially right, and his technological bets and decisions paid off, resulting in Linux becoming one of the most popular operating systems on the planet, while GNU Hurd failed to even ship, let alone prove its superiority on superior hardware (neither of which came to pass).


It's amazing how similar the "microservices vs monolithic" debate is today.

History doesn't repeat itself, but it rhymes.


Yup, this has been observed recently: "Data Centers are microkernels done accidentally"

http://scholar.google.com/scholar?cluster=674259428780315955...

The microservices / web services thing is basically a microkernel architecture, but done over multiple machines.

And actually, Linus was prescient enough to state this. His argument against microkernels was that they make algorithms more complicated -- you end up developing distributed algorithms for everything. He says OS algorithms are easier with shared data structures.

He explicitly said that microkernels could make sense when you have real distributed hardware and no shared memory. That seems to be what has happened: Linux has turned into a single-node OS running the microkernel-ish components of a distributed OS.
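The "shared data structures vs. distributed algorithms" point can be sketched with a toy counter (illustrative Python; a thread stands in for the server process): the shared-structure version is one line, while the message-passing version needs a server loop and a request/reply protocol just to do the same increment.

```python
import queue
import threading

# Monolithic style: one shared structure, touched directly.  Trivial.
shared = {"count": 0}

def bump_shared():
    shared["count"] += 1        # (a real kernel would take a lock here)

# Microkernel / microservice style: the structure lives inside a server;
# everyone else must speak a request/reply protocol to touch it.
requests = queue.Queue()

def counter_server():
    count = 0                   # private state, reachable only via messages
    while True:
        _op, reply = requests.get()
        count += 1
        reply.put(count)

def bump_remote():
    reply = queue.Queue()
    requests.put(("bump", reply))   # marshal the request and send it...
    return reply.get()              # ...then block for the reply

threading.Thread(target=counter_server, daemon=True).start()

for _ in range(3):
    bump_shared()
    bump_remote()

last = bump_remote()
print(shared["count"], last)    # 3 4
```

Same end result, but every operation in the second style is a little distributed protocol, which is exactly the complexity Linus was objecting to on a single machine, and exactly what you accept anyway once the hardware really is distributed.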


I love this. We are truly living in great times.



