Funny how another 15 years changes things. I wonder how many people will remember Netscape in a few years. I had never heard of Excite before this. Yahoo is just about the furthest thing from people's minds nowadays when talking about technology companies.
Yahoo is actually very, very dominant here in Japan. There's even a (popular) Yahoo ISP.
I probably know more Japanese people with Yahoo e-mail addresses than I do Japanese people with gmail addresses.
Yahoo Inc. only holds 35% of it and licenses its name to this separate company.
User accounts are separate too: create an account anywhere in the world but Japan and it's a @yahoo.com account, but in Japan it's @yahoo.co.jp.
In any case, there wasn't any reason not to see microkernels as the only viable way forward, especially given the multitude of competing Unix variants that had grown into unwieldy, multi-mega-SLOC monsters. The future seemed to belong to lightweight microkernel OSes like QNX, the UNICOS update, HURD (seriously, it was a thing back then!), and improved versions of Mach and Chorus. The idea of writing a new monolithic Unix-like as anything other than a toy seemed absurd.
History is a very hard thing to predict in advance.
> 5 years from now [1992 -> 1997] everyone will be running free GNU on their 200 MIPS, 64M SPARCstation-5
> My point is that writing a new operating system that is closely tied to any particular piece of hardware, especially a weird one like the Intel line, is basically wrong.
The 1.0 release in 1994 only supported i386 machines. Only after considerable effort and changes to the code was Linux ported to other platforms with 1.2, and AFAIK even the 68k and PowerPC ports for 2.0 needed quite some work again.
So, history proved him right. ;-) Of course, Linus didn't intend to write a portable OS but only one for AT clones.
Maybe Tanenbaum didn't foresee that Linux would jump to as many different platforms as it eventually did, but with his experience he probably already saw the possible need to port it to newer machines if Linux proved useful to some people.
This is how academia works. For a long time:
microkernels were universally perceived as better
despite nobody ever having seen a working one
despite the few attempts at a working kernel being unable to run basic programs, never mind a decent interface
despite every successful kernel ever being monolithic (the exception, to some extent, being QNX)
and of course these messages were posted from machines that ran ... monolithic kernels
And of course: they admitted to using their power to force people "the right way" in this debate. Both ast and tanenbaum point out that they'd deduct points from, or fail, students who took the wrong stance. (Presumably, of course, they'd also fire any PhD students who did the same.)
Please keep this in mind next time you hear that academics widely support idea X.
Think of academia as a hell of a lot of climate debates, except with the protagonists' positions usually not very well supported, or outright contradicted, by the real world. Or even worse: the position is a choice, with no real argument one way or the other, a strict hierarchy between people, and higher-ups very willing to abuse their power.
It took me a long time to get out of academia, and out of this madness. I wrote papers and dissertations, and here's how that plays out: "Good boy! +1 (not in pay, of course). Publish. Copy to the library. Now put it in the trash and work on something else." (I had failed papers too, of course.)
Let's just say that this is far from the only debate that works this way. There are many other examples: programming languages, (extreme) static typing, (pure) functional programming, dependent typing, garbage collection, RPC interfaces, object databases, machine learning nonsense, ...
Barring other information, I have realized, a good bet is this: if more than 70% of academic researchers are on the same side of any particular claim, bet against that claim, for that reason alone. And get out of academia, of course. Google, Facebook, Yahoo, Microsoft, ... are all building large machine-learning teams with actual resources, by far the best researchers in the world, people who demand actual results on some problems (yes, that's a plus), and continuity.
There is an extreme obsession with learning algorithms in academia. The whole thing is a worthless waste of time, for one simple reason: feature engineering can make k-means beat the crap out of the most advanced algorithm anyone's ever come up with. But new learning algorithms are much easier to write, much smaller to prove things about, and thus much, much easier to get a PhD dissertation through a committee with. In practice, "deep learning" is by far the most successful algorithm (which is another way of saying: running a 1963 algorithm at scale).
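To make the feature-engineering point concrete, here's a toy sketch (entirely my own, with invented data, not anything from the thread): two concentric rings of points. Plain k-means on the raw (x, y) coordinates can never separate them, because a two-cluster k-means split is always linearly separable; one engineered feature, the radius r = sqrt(x^2 + y^2), makes the separation trivial.

    /* Toy, self-contained example (all names and data invented):
     * two concentric rings. k-means on raw (x, y) cannot separate
     * them; k-means on the engineered feature r = sqrt(x^2 + y^2)
     * separates them perfectly. Build: cc kmeans.c -lm */
    #include <math.h>
    #include <stdio.h>

    #define N     400            /* points per ring */
    #define K     2              /* clusters */
    #define ITERS 50
    #define PI    3.14159265358979323846

    /* Plain Lloyd's algorithm on n d-dimensional points (d <= 2 here). */
    static void kmeans(const double *pts, int n, int d, int *label)
    {
        double cent[K * 2];
        for (int k = 0; k < K; k++)          /* init: one point per ring */
            for (int j = 0; j < d; j++)
                cent[k * d + j] = pts[(k * n / K) * d + j];

        for (int it = 0; it < ITERS; it++) {
            for (int i = 0; i < n; i++) {    /* assignment step */
                double best = 1e300;
                for (int k = 0; k < K; k++) {
                    double dist = 0;
                    for (int j = 0; j < d; j++) {
                        double diff = pts[i * d + j] - cent[k * d + j];
                        dist += diff * diff;
                    }
                    if (dist < best) { best = dist; label[i] = k; }
                }
            }
            for (int k = 0; k < K; k++) {    /* update step */
                double sum[2] = { 0, 0 };
                int cnt = 0;
                for (int i = 0; i < n; i++) {
                    if (label[i] != k) continue;
                    cnt++;
                    for (int j = 0; j < d; j++) sum[j] += pts[i * d + j];
                }
                if (cnt)
                    for (int j = 0; j < d; j++)
                        cent[k * d + j] = sum[j] / cnt;
            }
        }
    }

    static int mismatches(const int *label)  /* rough error count,
                                                using point 0's cluster
                                                as "inner" */
    {
        int wrong = 0;
        for (int i = 0; i < 2 * N; i++)
            wrong += (label[i] != label[0]) != (i >= N);
        return wrong;
    }

    int main(void)
    {
        static double xy[2 * N * 2], r[2 * N];
        int label[2 * N];

        for (int i = 0; i < 2 * N; i++) {
            double rad = (i < N) ? 1.0 : 5.0;   /* inner vs outer ring */
            double th  = 2 * PI * (i % N) / N;
            xy[i * 2]     = rad * cos(th);
            xy[i * 2 + 1] = rad * sin(th);
            r[i] = sqrt(xy[i * 2] * xy[i * 2] +
                        xy[i * 2 + 1] * xy[i * 2 + 1]);
        }

        kmeans(xy, 2 * N, 2, label);  /* raw coords: just bisects the plane */
        printf("raw (x,y) misclustered: %d / %d\n", mismatches(label), 2 * N);

        kmeans(r, 2 * N, 1, label);   /* engineered feature: exact split */
        printf("feature r misclustered: %d / %d\n", mismatches(label), 2 * N);
        return 0;
    }

Same dumb algorithm both times; the only thing that changed is the feature. That's the whole argument in 70 lines.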
I agree with your post completely, but you do realize ast is Tanenbaum, right?
> Intel Pentium Pro 541 MIPS at 200 MHz ... 1996
Also, this list of benchmarks suggests a SPARCstation 5 was either slower than or roughly the same performance as the first-generation Pentiums (which came out in '93, only a year after that post):
But this was definitely a time when betting on RISC-based desktops seemed like a good idea.
Also, the x86 being "RISC-like" internally is a bit of a misnomer: x86 uops are much wider (118 bits for the P6, even wider in later models) than most RISCs' fixed 32-bit instructions. Modern RISCs like the ARM Cortex series also decode instructions into wider uops.
It seems the "6502 risc or cisc" debate was never really solved, as a Google search for that phrase will show.
Some things never change.
> Linus "my first, and hopefully last flamefest" Torvalds
Somehow I never noticed that before.
I guess interest materialized after a while...
You can find the original thread here: https://groups.google.com/forum/#!topic/comp.os.minix/wlhw16...
Google Groups is decent for searching old Usenet posts (though they've removed the "Advanced Search" feature), but horrible for posting. There are a number of NNTP clients (including Firefox) and free or cheap servers (I use news.eternal-september.org).
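For the curious, there isn't much magic behind an NNTP client; the protocol is plain text over a socket. Here's a minimal sketch in C (my own, error handling mostly skipped; note eternal-september requires registration, so the GROUP command may come back as a 480 authentication error rather than a 211):

    /* Minimal NNTP (RFC 3977) hello: connect, read the greeting, ask
     * about comp.os.minix, and quit. */
    #include <netdb.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static void show(int fd)               /* read and print one reply */
    {
        char buf[4096];
        ssize_t n = read(fd, buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; fputs(buf, stdout); }
    }

    int main(void)
    {
        struct addrinfo hints = { .ai_socktype = SOCK_STREAM }, *res;
        if (getaddrinfo("news.eternal-september.org", "119", &hints, &res))
            return 1;
        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen))
            return 1;

        show(fd);                                     /* 200 greeting */
        const char *cmd = "GROUP comp.os.minix\r\n";  /* this thread's group */
        write(fd, cmd, strlen(cmd));
        show(fd);                                     /* 211 ... or 480 auth */
        write(fd, "QUIT\r\n", 6);
        close(fd);
        freeaddrinfo(res);
        return 0;
    }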
(To be clear, Google Groups includes both an interface to Usenet newsgroups and Google's own non-Usenet groups. It's probably ok for the latter, but they seriously botched the Usenet interface.)
Unless everyone is still using newsgroups for something besides piracy, which is the only context I've really heard them used in recently.
I don't think newsgroups are only used for piracy. Most open source projects have a Google Group associated with them, unless they handle discussion on GitHub, Stack Overflow chat, or IRC.
Linux is essentially made stable by the sign-off/code-review/subsystem-maintainer process, which keeps the code quality high.
Properly, I would think it should refer to program structure: how well independent pieces of code are (logically) separated and able to tolerate failures in each other.
Here it's just being used to mean "single unprotected address space", where a particular class of bugs (memory corruption) can cause arbitrary problems in completely unrelated code. Would that mean a kernel written in some managed memory-safe language (I think Microsoft has an experimental C# one) cannot possibly be monolithic?
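To illustrate what "single unprotected address space" means in practice, here's a contrived sketch (all names invented, not real kernel code). A length bug in a "driver" freely scribbles over state the "filesystem" owns, because nothing enforces the boundary. The two live in one struct here so the collision is deterministic; in a real kernel the layout is less predictable, but the lack of protection is the same:

    #include <stdio.h>
    #include <string.h>

    /* Two unrelated "subsystems" sharing one address space. The struct
     * guarantees the adjacency that a real kernel heap merely permits. */
    static struct {
        char nic_rx_buffer[8];        /* owned by the network driver */
        char fs_superblock_magic[8];  /* owned by the filesystem */
    } kernel = { .fs_superblock_magic = "EXT2FS" };

    /* Buggy driver: trusts the caller's length, no bounds check. */
    static void nic_receive(const char *pkt, size_t len)
    {
        memcpy(kernel.nic_rx_buffer, pkt, len);  /* len > 8 overruns */
    }

    int main(void)
    {
        printf("fs magic before: %s\n", kernel.fs_superblock_magic);
        nic_receive("AAAAAAAAGARBAGE", 16);      /* driver bug fires */
        printf("fs magic after:  %s\n", kernel.fs_superblock_magic);
        return 0;
    }

And yes: in a memory-safe language this particular failure mode goes away even with everything in one address space, which is exactly why "monolithic" and "unprotected" aren't really the same axis.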
It is also a great story. The young boy outshines the great scholar.
I still think that Tanenbaum might have been right. But the forces of the open-source movement were too great, beyond our wildest dreams, and were able to overcome any design limitations. Also, Linus did an epic job, both technically and in controlling the forces he unleashed.
Someone made me read it when I was learning Linux for the first time 15 years ago. I'm not sure what I was supposed to get from it ...
Lisp Machines running Lisp programs edited in Emacs might be the theoretically superior way to do things. Microkernels which don't crash when some device code fails might be a superior way to do things. But in a competitive market with things moving forward, people don't always have the time to fiddle with something until it's perfect.

By the time the Lisp people get together and hash out Common Lisp (which Gabriel says was in many ways a poor compromise), or simplify everything with Scheme, the Unix/C people have already won. By the time things like Hurd finally become viable, OSes like Linux are already everywhere. People will take the poor substitute now and make it the default worldwide, and live with the backwards-compatibility problems, rather than wait years for people to come down from ivory towers with the perfect solution.

You can go on and on with the examples: which chip had a better instruction set, MIPS or the Intel chip with all its legacy stuff? It doesn't matter; Intel already had a toehold, and people would rather stick with it than switch.
history doesn't repeat itself, but it rhymes
The microservices / web services thing is basically a microkernel architecture, but done over multiple machines.
And actually, Linus was prescient enough to state this. His argument against microkernels was that they make algorithms more complicated: you end up developing distributed algorithms for everything. He said OS algorithms are easier with shared data structures.
He explicitly said that microkernels could make sense when you have real distributed hardware and no shared memory. That seems to be what has happened: Linux has turned into a single-node OS running the microkernel-ish components of a distributed OS.
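A minimal sketch of that trade-off (my framing and invented names, not Linus's code): bumping a refcount monolithic-style is one locked line on a shared structure, while microkernel-style the same operation becomes a request/reply protocol to a server thread that exclusively owns the state.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    /* --- (a) monolithic: shared data structure + lock ----------------- */
    static pthread_mutex_t inode_lock = PTHREAD_MUTEX_INITIALIZER;
    static int inode_refcount = 0;

    static void iget_monolithic(void)
    {
        pthread_mutex_lock(&inode_lock);
        inode_refcount++;               /* one line of "algorithm" */
        pthread_mutex_unlock(&inode_lock);
    }

    /* --- (b) microkernel-ish: message passing to a server ------------- */
    struct msg { int op; int reply_fd; };
    enum { OP_IGET = 1 };

    static int server_fd[2];            /* requests travel over this pipe */

    static void *fs_server(void *arg)   /* server owns the state */
    {
        (void)arg;
        struct msg m;
        int count = 0;
        while (read(server_fd[0], &m, sizeof m) == sizeof m) {
            if (m.op == OP_IGET)
                count++;
            write(m.reply_fd, &count, sizeof count);   /* send reply */
        }
        return NULL;
    }

    static int iget_microkernel(void)
    {
        int reply[2], result;
        pipe(reply);                              /* per-request channel */
        struct msg m = { OP_IGET, reply[1] };
        write(server_fd[1], &m, sizeof m);        /* marshal + send */
        read(reply[0], &result, sizeof result);   /* block for reply */
        close(reply[0]); close(reply[1]);
        return result;
    }

    int main(void)
    {
        pipe(server_fd);
        pthread_t t;
        pthread_create(&t, NULL, fs_server, NULL);

        iget_monolithic();
        printf("monolithic refcount:  %d\n", inode_refcount);
        printf("microkernel refcount: %d\n", iget_microkernel());

        close(server_fd[1]);            /* shut the server loop down */
        pthread_join(t, NULL);
        return 0;
    }

Multiply (b) by every kernel subsystem and you get Linus's "distributed algorithms for everything" complaint; multiply it across machines and you get microservices.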