It reminds me of my OS class in my undergrad days. We skipped a few bits by using the JVM to get a few things for free, but we still had to write (as groups of 3-5 people) an entire unix-like OS, complete with user-space programs, messaging systems, etc.
Definitely one of the most valuable classes I took for my degree.
Also met Doug, he's a great guy, old school C hacker.
Here is the first version
I did not really spend enough time reading the source to understand what's going on but it's interesting regardless.
I only wish that there were an add-on tcp/ip stack for that version; then one could really have fun with it. Yes, I know, the IP stack add-on is called BSD.
I also tweaked the scheduler to halt if there's nothing to schedule, per tankenmate's suggestion.
A side-effect of this change is the generate-source-code-printout targets were rather severely broken, so they're simply removed for the moment.
Around the time of the PII people (especially overclockers) started noticing that the CPU would stay cooler when halted, and thus started a little industry of "CPU cooling software"... there's an interesting historical page on that here:
A modern processor, on the other hand, takes a few watts halted and a hundred or more when active, and transitions between these states many many times per second. How much things have changed...
    if (p == &ptable.proc[NPROC])

placed after the sti(), with p initialized to 0 beforehand.
It'd be fun to add some scheduler stats and poke around with things a bit. Some really trivial cprintf() tracing in scheduler() implies that single processes can ping-pong between CPUs, but the printing itself is slow and invasive and could possibly be the cause.
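As a rough illustration of the idle check, the scan amounts to something like this (a simplified user-space sketch with stand-in types, not the actual proc.c patch; in the kernel the -1 branch is where sti(); hlt(); would go):

```c
#include <stddef.h>

#define NPROC 64
enum procstate { UNUSED, SLEEPING, RUNNABLE, RUNNING };
struct proc { enum procstate state; };
static struct proc ptable_proc[NPROC];

/* Walk the process table looking for something RUNNABLE, as the
 * xv6 scheduler does.  Returns the index of a runnable process,
 * or -1 when the scan falls off the end -- the
 * p == &ptable.proc[NPROC] case, where the patched kernel would
 * sti() and hlt() until the next interrupt instead of spinning. */
static int pick_runnable(void) {
  struct proc *p;
  for (p = ptable_proc; p < &ptable_proc[NPROC]; p++) {
    if (p->state == RUNNABLE)
      return (int)(p - ptable_proc);
  }
  return -1;  /* nothing to schedule: halt here in the kernel */
}
```

Counting how often the -1 branch is taken per second would be one cheap way to get the scheduler stats mentioned below without cprintf() in the hot path.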
>Frans Kaashoek (firstname.lastname@example.org)
>Robert Morris (email@example.com)
sh$%t, this has got to be good!
Cloning the repo and surfing the source code, just for fun, right now :)
Linux and *BSD may be the modern standard for unix-like OSes, but they're often too complex and opaque for beginners to learn from. It's super easy to read and toy with xv6's code if you want to learn more about OS internals, though. For example, I was able to implement a simple log structured file system: https://github.com/Jonanin/xv6-lfs
Note: if you're looking for companion reading material, check out this OS book: http://pages.cs.wisc.edu/~remzi/OSTEP/ (IMO it's super approachable and understandable for undergrad-level learning)
The chapter by Torvalds in the same book is also worth reading:
The claim that microkernels improve portability makes no sense to me, since Linux itself is very portable, as are the BSDs (especially NetBSD and OpenBSD).
Hopefully this is useful to somebody and isn't too blatantly link whorey :)
Also, it looks like they mostly run it in QEMU.
The more annoying thing is likely to be that it's 32-bit only. Are there any common machines that do x86-32 UEFI boot? Alternatively, does a 64-bit grub.efi know how to exit long mode and go into protected mode, to multiboot a 32-bit kernel?
(I was actually talking the other day with some folks about whether a 6.828 project involving a UEFI port of JOS, the other teaching OS, would make sense, but the 64-bit thing seemed to make it less interesting unless you also plan to do a 64-bit port of JOS. Which has been done before in a final project, admittedly...)
I haven't done x86 systems stuff since 64-bit became common, so it might be a fun way to learn about what's different.
It should be:
But GAS also accepts Intel syntax with a directive (.intel_syntax), and its manual even admits "almost all 80386 documents use Intel syntax".
As for user acceptance, a poll on the xkcd forums shows that the vast majority of users prefer Intel syntax.
That takes care of the political/network effects arguments. As far as the actual technical arguments, the case against AT&T syntax is obvious and overwhelming -- the unnaturalness of src,dest operand order; the laziness of the parser's authors at the expense of its users by requiring percents or dollar signs to indicate token types; the usually-redundant size suffix; and the utter incompatibility with basically all non-GNU x86 documentation.
So while I don't disagree with your technical reasoning, I don't think that it's a significant pedagogical barrier. The puckish part of me wants to suggest that having to figure something out when the documents are a) insane b) backwards c) written in Martian d) all of the above is great preparation for commercial work. ;)
xv6 itself has about 5000 lines of .c and 364 lines of .S in the kernel. Another 1500 lines in vectors.S, but that's machine-generated. There's a ton of stuff one could add to xv6 (drivers, syscalls, services, etc), almost all of which one would likely just write in C.
In practice, I've found assembly mostly used for some hand-tuned inner loops (codecs, memcpy, etc) and little snippets of glue code like entry.S, swtch.S, etc.
In a high-level language, the destination comes first:

    x := 5

Intel syntax keeps that order:

    mov x, 5

AT&T syntax reverses it, so it reads like the backwards assignment "5 = x":

    mov 5, x

Likewise for arithmetic, Intel's

    sub x, y

reads naturally as:

    x = x - y

Most programming languages have dst, src order. AT&T's reversed order also makes comparisons/subtractions and conditional jumps very confusing to read.
Have you ever:
- ... Wondered how a filesystem can survive a power outage?
- ... Wondered how to organize C code?
- ... Wondered how memory allocation works?
- ... Wondered how memory paging works?
- ... Wondered about the difference between a kernel function and a userspace function?
- ... Wondered how a shell works? (How it parses your commands, or how to write your own, etc)
- ... Wondered how a mutex can be implemented? Or how to have multiple threads executing safely?
- ... Wondered how multiple processes are scheduled by the OS? Priority, etc?
- ... Wondered how permissions are enforced by the OS? Security model? Why Unix won while Multics didn't (simplicity)?
- ... Wondered how piping works? Stdin/stdout and how to compose them together to build complicated systems without drowning in complexity?
- So much more!
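On the mutex item: xv6's own spinlock (spinlock.c) answers it in about a dozen lines, built on the x86 xchg instruction. Here's the same idea sketched portably with C11 atomics -- an illustration of the technique, not xv6's actual code:

```c
#include <stdatomic.h>

/* Spinlock in the style of xv6's acquire()/release(): atomically
 * swap 1 into the lock word.  If the old value was 0 we now hold
 * the lock; otherwise someone else does, so keep spinning. */
struct slock { atomic_int locked; };

static void slock_acquire(struct slock *lk) {
  while (atomic_exchange(&lk->locked, 1) != 0)
    ;  /* spin -- xv6 does this swap with the xchg instruction */
}

static void slock_release(struct slock *lk) {
  atomic_store(&lk->locked, 0);  /* let the next spinner through */
}
```

The whole trick is that the swap is atomic: two CPUs can race to exchange 1 in, but only one of them gets 0 back.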
I credit studying xv6 as one of the most important decisions I've made; up there with learning vim or emacs, or touch typing. This is foundational knowledge which will serve you the rest of your life in a thousand ways, both subtle and overt. Spend a weekend or two dissecting xv6 and you'll thank yourself for it later. (Be sure to study the book, not just the source code. It's freely available online. The source code is also distributed as a PDF, which seems strange till you start reading the book. The two PDFs are meant to be read side by side, rather than each alone.)
Considering how many not-so-simple systems have succeeded in the market, I suspect that the real reasons for the failure of Multics are more nuanced.
How do you figure?
Saying it was a matter of "simplicity" glosses over too many of the details to be useful. Also consider that many of the things early UNIX didn't take from Multics got added back later (memory-mapped files, dynamic linking, symlinks, ...), so by that measure it was maybe too simple.
A few points to consider:
* Multics was a joint project, and each participant had different levels of commitment and priorities for the project. This wasn't a recipe for success (see also: Taligent, OS/2)
* Hardware. UNIX moved to the PDP-11 soon after that machine became available. That machine went on to become the smash hit of the decade, especially in academia. The GE and Honeywell mainframes that Multics ran on continued their decades-long loss to IBM.
I've often wondered how different history would be if Bell Labs had bought any other minicomputer for the project. Probably today UNIX would only be known as a bunch of research papers. By skill or by luck they landed on the platform that would let their creation spread.
* Language choice. When the Multics project kicked off, using PL/I seemed like a progressive choice. After all, it had been designed by all the right committees! However, PL/I only had a moment in the sun.
To me, the single most impressive thing about the original UNIX team was that they also built C at the same time. This created a virtuous cycle where language-geeks wanted to use C (and thus wanted UNIX) and OS geeks wanted UNIX (and ended up learning C). If PL/I had lived up to its billing of being the true successor to Algol then C might not have had a void to fill and this cycle wouldn't be possible.
* Being born to the right (corporate) parent. Building even a small OS at that time cost quite a bit of money: not only to pay bright people for years to work on it, but to buy the minicomputers they needed. Bell Labs had the resources and interest to do this but, uniquely, couldn't meaningfully productize it (due to their status as the telephone monopoly at the time). This meant that in the early days it was near-free, source code included. If UNIX had instead been developed inside of a computer company it would have been turned into an exclusive product and probably wouldn't have spread so far in its early days. Bell Labs was one of the few organizations outside academia that would have done this.
* Just plain talent. The UNIX guys were, of course, flat out brilliant. Take pipes for example. Multics had the ability for processes to be chained, but it wasn't recognized as a first-class feature; they were still thinking of programs as usually being separate beasts. The flash of insight was "this would be super-useful if all programs behaved this way and if the shell made it easy to construct." It's not a matter of "simplicity", but that they simply saw the problem in a different way than everyone who came before. In short, UNIX was just damn good.
Certainly UNIX started out as a simple system, and that was part of its success. However, other things have been simple and never went anywhere. The point I'm trying to make is that to understand why UNIX won you need to consider a lot more of the history.
For what it's worth, I was able to learn xv6 just by working hard to understand everything fully. Effort seems to matter more than prerequisites. Plus it was a fun challenge.
Also, please feel free to email me (sillysaurus2 at gmail) if I can be of assistance to anyone, e.g. by talking about some nuance of unix architecture or about anything at all. Intensive self-study can sometimes feel overwhelming, so feel free to message me for encouragement.
A bit off topic, but you really know how to sell things :D
One of the features of UNIX was that it was designed by taking a few very simple ideas, and more importantly simple constructs, and then building more complex constructs out of them. It made it possible for a handful of researchers to write all the bits of an OS needed to get to a multi-user prompt, at a time when comparable systems had giant teams of engineers associated with them. That made the system easy to understand at a fundamental level, and you could reason about the effects various changes would have on it should you make them.
One of my favorite books is "Operating System Concepts", which has a great survey of the features of any good operating system.
So is being Unix-like better? Yes if you want a simple, robust operating system. No if you want a hard real-time operating system. Yes if you want a system you can flexibly add devices to. No if you want to build a security model based on capabilities. Yes if you want it to work on a small architecture with stack support. No if your processor doesn't support stacks or the C model of frame pointers.
In general though understanding what an Operating system has to do is priceless for a CS person to have in their toolkit.
If you need a yacht that can carry many people on the high seas through bad weather at speed, unix is solid. But there hasn't been a great racing dinghy since the 80s.
Maybe there is a demoscene/hacking tradition out there, waiting to be discovered.
Consider the debates that we don't have:
* Hardware abstraction. Casual audio programming is harder now than it was twenty years ago. Also - why doesn't hardware self-describe?
* High-level languages. Chuck Moore's ideas about code bloat are on a totally different level from the mainstream. He's onto something.
* Protected memory. Does it matter on a machine designed for fun? How much more hackable could your OS be if you got rid of stuff like this?
Stability doesn't matter so much in fun systems. If you're in a dinghy and not capsizing, you're not sailing hard enough.
Also, there was something about Arthur Whitney looking to build a stand-alone OS in/under K recently.
Thanks - looks awesome. I particularly like his approach to HTML.
UNIX-like includes philosophies such as "almost everything is a file" and "use human-readable (and writable!) plain text formats by default" and "there will be a stdin, stdout, and stderr; use them all". More than anything else, I think that a UNIX is defined by the idea that any end-user might decide to become a programmer at any moment, and this is OK and perhaps even desirable.
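The stdin/stdout point is concrete, not just philosophy: the only kernel primitive behind "a | b" is a pair of connected file descriptors. A minimal POSIX sketch of that primitive (error handling mostly omitted; a real shell adds fork() and dup2() so the write end becomes one program's stdout and the read end the other's stdin):

```c
#include <string.h>
#include <unistd.h>

/* Push a message through a pipe and read it back out.
 * Returns the number of bytes read, or -1 on failure. */
static int pipe_roundtrip(const char *msg, char *out, int cap) {
  int fds[2];
  if (pipe(fds) < 0)
    return -1;
  write(fds[1], msg, strlen(msg));   /* producer side */
  close(fds[1]);                     /* reader will see EOF */
  int n = (int)read(fds[0], out, (size_t)(cap - 1));
  close(fds[0]);
  if (n < 0)
    return -1;
  out[n] = '\0';
  return n;
}
```

Programs composed this way never know whether the other end is a terminal, a file, or another program -- which is exactly what makes any end user's one-liner pipeline possible.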
Very, very desirable in my view; in contrast to the raging trend of simplification/hiding going on these days, trying to turn computers into sealed-box appliances "you're not supposed to know how it works" that give little in the way of things that can get people interested in learning more about them, I find classic UNIX to be very open in encouraging users to learn more about the system.
(Myself, I moved to FreeBSD a while ago).