An Experimental Course on Operating Systems (stanford.edu)
967 points by jzoch on Jan 12, 2018 | 243 comments



If any Stanford students are reading this wondering if they should take this class:

Engler was one of my favorite professors at Stanford. He's kind, sharp, and well-prepared, and he consistently delivered great lectures.

One of the things that made him stand out to me was the insight he provided beyond what's in the papers we read in CS240. You can read the papers yourself, but the papers won't tell you things like "I think the reason this paper was accepted is different from the reason the author likely thinks it was accepted."

He's awesome, take his class. Find a good team, and, um, don't underestimate how much time CS140 takes. :)


> "I think the reason this paper was accepted is different from the reason the author likely thinks it was accepted."

That's a great perspective to have! I wonder if opinions like these are available online - I'm thinking in a format like Adrian Colyer's The Morning Paper. I would love to read opinionated, yet highly educated material.


Is the goal of this class to teach students how to rewrite the Pi 3's kernel/drivers in Rust?

It is not that clear from the course syllabus.

"This course covers the following topics:

Disks, File systems, I/O, Threads & Processes, Scheduling, Virtual Memory, Protection & Security, Interrupts, Concurrency & Synchronization."

In theory, if those are the goals, one doesn't need Rust as part of the first week's lecture.

Rewriting the kernel in Rust sure makes for a very interesting (and very tough) way to learn OSes in 3 months.


It's not writing a new kernel in Rust, it's an OS course, where they touch on all the fundamentals. Instead of assignments being in C, they're in Rust.


Dawson Engler, you're talking about?


Correct, one of the people listed as teaching this course.


Wow I saw this page first without realising it's him. I remember being blown away by his research, and the fact that they're in separate areas (symbolic execution and exokernels). It seemed like he'd stopped publishing a few years ago - do you know why?


If anyone has looked at web frameworks in Rust, they'll have noticed one known as Rocket (http://rocket.rs). The Rocket project is authored by none other than Sergio Benitez, who is teaching Stanford CS140e.
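
For the unfamiliar, a Rocket route is just an attributed function. This is from memory of the Rocket 0.3-era getting-started docs (so double-check there; it also required nightly Rust at the time):

    #![feature(plugin)]
    #![plugin(rocket_codegen)]

    extern crate rocket;

    // The codegen plugin checks at compile time that the path
    // parameters and the function's argument types line up.
    #[get("/hello/<name>")]
    fn hello(name: String) -> String {
        format!("Hello, {}!", name)
    }

    fn main() {
        rocket::ignite().mount("/", routes![hello]).launch();
    }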


I really should not have read that resume. Really doesn't help the overall inadequacy problem.


On the other hand:

At Google I worked with the Gmail performance team to decrease server downtime by detecting anomalous behavior before server failure. I designed and implemented an original anomaly detection algorithm based on local outlier factors.

Does anyone have any info about anything related to this?

I want to write a program to detect anomalous behavior in my systems too. (Mostly so I can completely ignore the useful information, but still.)


I can talk about my experience writing such a tool during my internship at a bank that rhymes with Oldman Bachs. Bear in mind, this wasn't my original project; since I had finished my first project, I was allowed to work on something else I found interesting. So it's probably not very robust, but it worked.

The tool would run on trading engines and monitor different performance metrics. CPU usage, while allowed to burst, shouldn't be sustained at 100% for a long time. Typically, these machines are monstrous and have hundreds of gigs of memory; therefore, they shouldn't come anywhere near 100% memory usage either. There are a bunch of other proprietary internal metrics that I can't really explain (stuff like the latency of a particular kind of response).

Anyway, since I was most comfortable with writing the tool in Python (also, time was a constraint as I started working on the project towards the end of my internship), I wrote a modular tool with pluggable metrics. The reason why it wasn't a monolithic block: if the company decides to add a new metric they want to measure and follow, they should just have to write the logic necessary to retrieve that metric. The rest should be handled by my framework: anomaly detection, alerting relevant parties, and so on.

So I banged out the framework and had just enough time to write two modules: CPU usage and memory usage (probably the easiest ones). With a simple moving-window average and standard deviation, you can identify outliers: I marked everything outside 2*sigma as an outlier, as it's better to have false positives than false negatives (but not excessively many false positives, as that would be counter-productive and make humans ignore reports).
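
In case it's useful to anyone, the core of that outlier test is tiny. A minimal Rust sketch of the same idea (my tool was Python, but the logic is identical; the window size and the 2*sigma threshold are the tuning knobs):

    /// Indices of samples more than 2 sigma away from the mean of the
    /// trailing window. Toy version: no warm-up handling, no decay.
    fn outliers(samples: &[f64], window: usize) -> Vec<usize> {
        let mut flagged = Vec::new();
        for i in window..samples.len() {
            let w = &samples[i - window..i];
            let mean = w.iter().sum::<f64>() / window as f64;
            let var = w.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / window as f64;
            if (samples[i] - mean).abs() > 2.0 * var.sqrt() {
                flagged.push(i);
            }
        }
        flagged
    }

    fn main() {
        // Synthetic CPU-usage series with one spike at the end.
        let mut cpu: Vec<f64> = (0..60).map(|i| 40.0 + (i % 5) as f64).collect();
        cpu.push(99.0);
        println!("anomalous indices: {:?}", outliers(&cpu, 20));
    }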

The alerting part was fairly straightforward, as there was a built-in tool to do that. I ran a few tests with historical data and it was pretty good at detecting the anomalies that I would mark myself as a human. It could be better if someone smarter than me came up with a machine learning algo to do the same thing more accurately.

(I didn't productionize it as I didn't have the time.)


The first thing that comes to mind is tracing system calls with strace (Linux), truss (BSD), or DTrace (recommended).

A standard probabilistic model such as a Markov chain could be used as a baseline (a quick sketch follows the list below):

Markov Chains http://cucis.ece.northwestern.edu/projects/DMS/publications/...

There is no limit to what this model could be applied to, but a few that come to mind, given the context, might be:

Load Generation or emulation at the filesystem, network or application level.

iowait, vmstat counters, memory leaks (why am I unable to account for N percent of memory?), cache hits vs. misses,

Network stats, heuristics, buffer overflows, denial of service (listen/accept queue overflow), TCP RTT, NTP Jitter, Response Time, etc…
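
To make the Markov chain suggestion concrete, here is a toy Rust sketch (event names invented; a real tool would pick states and smoothing per the paper above): learn transition frequencies from a "normal" trace, then flag traces whose average per-transition log-probability is unusually low.

    use std::collections::HashMap;

    /// First-order Markov model over discrete events (e.g. syscall names).
    struct Markov {
        counts: HashMap<(String, String), f64>, // transition counts
        totals: HashMap<String, f64>,           // outgoing counts per state
    }

    impl Markov {
        fn train(seq: &[&str]) -> Self {
            let (mut counts, mut totals) = (HashMap::new(), HashMap::new());
            for w in seq.windows(2) {
                *counts.entry((w[0].to_string(), w[1].to_string())).or_insert(0.0) += 1.0;
                *totals.entry(w[0].to_string()).or_insert(0.0) += 1.0;
            }
            Markov { counts, totals }
        }

        /// Average log-probability per transition; very negative = anomalous.
        fn score(&self, seq: &[&str]) -> f64 {
            let mut logp = 0.0;
            for w in seq.windows(2) {
                let c = self.counts.get(&(w[0].to_string(), w[1].to_string())).copied().unwrap_or(0.0);
                let t = self.totals.get(w[0]).copied().unwrap_or(0.0);
                // Crude smoothing so unseen transitions score low but finite.
                logp += ((c + 1.0) / (t + 2.0)).ln();
            }
            logp / (seq.len().max(2) - 1) as f64
        }
    }

    fn main() {
        let normal = ["open", "read", "read", "close", "open", "read", "close"];
        let model = Markov::train(&normal);
        println!("normal:  {:.2}", model.score(&["open", "read", "close"]));
        println!("unusual: {:.2}", model.score(&["close", "open", "open"]));
    }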



You might also be interested in this older project from etsy

https://github.com/etsy/skyline

It has a fairly good mixture-of-models approach to anomaly detection, even if it's no longer maintained.


Thanks for that call-out for Rocket. I skimmed a Rust book over the holidays so I could experiment with the Exonum blockchain platform. I had tentatively decided to not pursue an interest in Rust but looking through the docs for Rocket has changed my mind.

Looking through the CS140e class description made me a little envious of students who are both able to take the class live and also have 20 hours a week or so to really get the most out of the class.


How is Rocket, compared to the others? (I don't know that the one I used before is still going.)


There's a really good talk from Strange Loop about it [0]; I highly recommend checking it out. I don't write Rust, but it made me want to.

[0] https://www.youtube.com/watch?v=QS8mrbAPLJc


I have made a list of all the materials needed for this class with amazon links. Hope this helps people who would like to pursue this course from outside Stanford.

1. 1 Raspberry Pi 3 https://www.amazon.com/Raspberry-Pi-RASPBERRYPI3-MODB-1GB-Mo...

2. 1 1⁄2-sized breadboard https://www.amazon.com/Qunqi-point-Experiment-Breadboard-5-5...

3. 1 4GiB microSD card https://www.amazon.com/Samsung-MicroSD-Adapter-MB-ME32GA-AM/...

4. 1 microSD card USB adapter https://www.amazon.com/Adapter-Standard-Connector-Smartphone...

5. 1 CP2102 USB TTL adapter w/4 jumper cables https://www.amazon.com/KEDSUM-CP2102-Module-Download-Convert...

6. 10 multicolored LEDs https://www.amazon.com/Multicolor-Flashing-Changing-Electron...

7. 4 100 ohm resistors https://www.amazon.com/dp/B0185FCR66/

8. 4 1k ohm resistors https://www.amazon.com/dp/B00CVZ46FM/

9. 10 male-male DuPont jumper cables, 10 female-male DuPont jumper cables

https://www.amazon.com/Haitronic-Multicolored-Breadboard-Ard...

Edit: edited for solderless breadboard link rather than a solderable one.


[Report of incorrect link deleted, as it has been corrected]

For the 100 and 1k resistors, I'd recommend something like this: https://www.amazon.com/PIXNOR-Resistor-1ohm-10Mohm-Resistors...

That includes 20 1k and 20 100 ohm resistors, plus 20 each of 54 other assorted resistor values, and it costs slightly less than just buying the 100 ohm and 1k 10 packs that you suggested.


No, that is not the correct one; we are using a solderless board.


Thanks. I have updated the breadboard to a solderless one.


I am quite surprised that resistors, LEDs, jumper cables and solder are required for this class. May I ask what they are required for?


The LEDs you linked to have just two pins and automatically cycle through the three colors in a fixed pattern - is this really what is required? Or did you intend to link to a four pin variant with common anode [1] or common cathode [2]?

[1] https://www.amazon.com/dp/B01C19ENFK/

[2] https://www.amazon.com/dp/B01C19ENDM/


The image of an LED used in assignment 0 has 2 pins. That is why I put those links. But I am not totally sure; I could not find any specification.


Would it be possible to use the much cheaper Raspberry Pi Zero W instead of the 3?


To quote from the slides (Jan 8): https://web.stanford.edu/class/cs140e/notes/lec1/slides.pdf Slide 3:

"A modern approach

- Multicore, 64-bit ARMv8 platform.

- Modern programming languages and tools."

The Raspberry Pi Zero W does not support 64 bit mode, since it uses a BCM2835 instead of a BCM2837. Even more: the BCM2835 does not even support ARMv7 instructions (only ARMv6).


I saved some money by buying a kit that included most of the parts. I confirmed that this setup works and have already completed assignment 0 with it.

1. Raspberry Pi 3: https://www.amazon.com/gp/product/B01CD5VC92

2. CP2102 USB TTL adapter: https://www.amazon.com/gp/product/B072K3Z3TL

3. Breadboard/Cables/LEDs/Resistors kit: https://www.amazon.com/gp/product/B01IH4VJRI

4. microSD card: https://www.amazon.com/gp/product/B001B1AR50

Note that I didn't need a microSD adapter since my laptop already has an SD card port, and the card linked has a microSD-to-SD adapter. Overall, this was $64 vs $98 for the above parts, and it all had free one-day shipping.
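
For anyone following along at home, the LED-blink phase boils down to poking a few BCM2837 GPIO registers. Here's a rough bare-metal Rust sketch of the idea, not the course's actual skeleton (the pin choice, the `kmain` entry symbol, and the crude delay loop are my assumptions; the boot stub, linker script, and target config are omitted):

    #![no_std]
    #![no_main]

    use core::ptr::{read_volatile, write_volatile};

    // BCM2837 GPIO registers; the Pi 3 peripheral base is 0x3F000000.
    const GPIO_FSEL1: *mut u32 = 0x3F20_0004 as *mut u32; // function select, pins 10-19
    const GPIO_SET0: *mut u32 = 0x3F20_001C as *mut u32;  // drive a pin high
    const GPIO_CLR0: *mut u32 = 0x3F20_0028 as *mut u32;  // drive a pin low

    // Crude busy-wait; real code would use the system timer.
    fn spin(n: u32) {
        for _ in 0..n {
            core::hint::spin_loop();
        }
    }

    #[no_mangle]
    pub unsafe extern "C" fn kmain() -> ! {
        // Make GPIO 16 an output: its FSEL field is bits 18..=20 of GPFSEL1.
        let f = read_volatile(GPIO_FSEL1);
        write_volatile(GPIO_FSEL1, (f & !(0b111 << 18)) | (0b001 << 18));
        loop {
            write_volatile(GPIO_SET0, 1 << 16); // LED on
            spin(500_000);
            write_volatile(GPIO_CLR0, 1 << 16); // LED off
            spin(500_000);
        }
    }

    #[panic_handler]
    fn panic(_info: &core::panic::PanicInfo) -> ! {
        loop {}
    }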


Would you mind answering a couple of questions for me via email or Signal about assignment 0? Specifically related to phase 3/phase 4.


Does anyone else think it's time for a new and promising operating system? The hegemony of OS X/Windows/Linux has basically gone on for a generation.

Shout-out to BeOS (the old geeks will know of it) which was the last promising new OS I encountered... and that was many moons ago


As someone who worked on Plan 9 for over a decade, it would be incredibly difficult. The first question out of everyone's mouth, even back in the early 2000s: "So can you run [Mozilla/Firefox] on it?" No, we couldn't and that was with a very POSIX-like system; the browser is the killer app today and it's also an operating system all its own, meaning it's one of the hardest things to port. We had enough of a basic browser that you could read HTML pages, but otherwise you're stuck with 'linuxemu' which only worked up to a certain (old) version of Debian because the Linux kernel changed shit. If you decide POSIX is a bad paradigm, you're going to have an even harder time getting a browser running.

Of course, most of the shit we do with junky web apps today could just be presented as a 9P service with maybe a couple shell scripts in front of it, but the junky web apps already exist and are in use.


Anyone have links to the plan9 propaganda pictures? Some of them were hilarious. https://imgur.com/a/Q4aMc

"There is no fork."

Apparently they're from 9front, a fork of plan9: https://news.ycombinator.com/item?id=12617036

http://9front.org/img/ is filled with a bunch of random stuff. The diagram of plan9 at http://9front.org/img/fs.png seems useful.

"Plan9 has been forked": https://news.ycombinator.com/item?id=2772718

Note the discussion on rc, plan9's scripting language: https://news.ycombinator.com/item?id=2773275

Maybe someone can borrow some ideas for their next OS.

EDIT: Aha: http://9front.org/propaganda/


This is fascinating!


I'll give you a lot of credit for working on Plan9 that long... but yeah, you're correct, a browser port would require a fairly large and well-funded team at this point, on any new OS


I think this is kind of backwards: yes, a new user OS must run a browser, but that's a much simpler situation than "must run Win32 software".


Yep. I’m on 9fans, and agree wholeheartedly. Porting a browser would be the _one_ thing to do to be able to run Plan9 as a personal desktop, but it’s a monumental task that will never happen.


Here is another thought. One has to realize that nobody wants an operating system. People only use them because it is the only way to run the applications they actually want to use. If you could run an office suite, a browser and a mail client without Windows or Linux, that is what people would do. At best an operating system is absolutely invisible; realistically it causes additional pain on top of that inflicted by the applications. I cannot really imagine how this could work, but a new non-OS would probably be a better idea than a new OS.


Application management, window management, resource management, document management, etc. OSs are more than just the applications they run; they provide the user with ways to manage their workflow.

On the other hand, you know what OS was practically invisible? DOS. And DOS was pretty great, so maybe you're on to something.


MS-DOS wasn't that great. Oh, don't get me wrong, it's great at being a single-user, non-reentrant interrupt handler on a painful-to-program architecture [1]. A lot of applications skipped using device drivers entirely for things like the keyboard, video and serial I/O (using such made the program run more slowly).

Do you know how much work it was to redirect the input/output of an MS-DOS application? It can get pretty insane [2].

[1] You had to pick a memory model to compile against for one thing---tiny, small, medium, compact, large or huge. You had to deal with FAR and NEAR pointers, and it really drove home the point that sizeof(int) != sizeof(long) != sizeof(void *) when writing C on that system.

[2] http://boston.conman.org/2015/12/13.1 More info here: https://github.com/spc476/NaNoGenMo-2015


I wrote a fair number of DOS programs. The memory models weren't that bad; at least you had choices... I also wrote code on machines that had 64k address spaces, with soft banking. That was worse... What I remember being a huge PITA was memory management on classic Mac OS in the System 5/6 timeframe. I vaguely remember struggling with locking/unlocking handles. Heck, googling it turns up a whole Wikipedia article which spends a bit of time describing it in general.

https://en.wikipedia.org/wiki/Mac_OS_memory_management

Anyway, in DOS, most little utilities and such worked just fine in tiny (and you could use .com, which simplified everything even more). Medium and compact were sort of pointless, because if you were writing anything you expected to grow beyond 64k then you used an HLL and let the compiler deal with most of the fiddly stuff. IIRC one of the compilers (may have been Turbo C) had a project/link option which let you cycle through all the memory models, and with a quick rebuild you could check executable sizes or run little benchmarks. I did a fair amount of programming in Turbo Pascal, and I just checked the manual: it basically hid the entire memory model argument, even going so far as to have that "overlay" functionality, which was basically a software paging unit for your program that would load groups of procedures from disk as necessary.

Yah, and I wrote TSRs too... and hacks to take over BIOS/DOS INT calls, etc etc etc. These days it seems any software project that has been around for a few years is orders of magnitude more difficult than the stuff people claim was hard about DOS. Heck, there are _network_ drivers in Linux that are orders of magnitude more code (and more complex) than DOS or probably the vast majority of DOS applications. People simply didn't write 100-million-line DOS programs, so dealing with little hardware oddities is simple vs trying to ferret out VM barrier bugs in Linux (or whatever).


All of those things can be done by applications and services. You're just listing the kinds of applications and services OSes tend to bundle in order to bootstrap an application and service market.


> Here is another thought. One has to realize that nobody wants an operating system. People only use them because it is the only way to run the applications they actually want to use.

That reduces things to almost an absurd level, and literally tries to ignore reality.

---

I want X!

Ok, we need this and that to get there...

I don't want this and that, I want X!

But you need this and that to get to X...

I don't want this and that, I want X!

---

While it's true in essence, and people should always keep the end user and lofty end goals in mind, we should never lose sight of the ground, because that's where we exist.

Besides, different users want their operating system to do different things.

> At best an operating system is absolutely invisible ... but a new non-OS would probably be a better idea than a new OS.

This train of thought can be applied to just about anything. The best product for X would just get out of the way, and assist you seamlessly to do X.


It’s not absurd. If you want Y to get X and you would be perfectly happy to get X without Y then you don’t really want Y.


Linux works exactly like this. Most people who use Linux never directly interact with the kernel. They interact with applications.

The graphical shell is an application - KDE(plasmashell) Gnome(gnome-shell) etc. the console shell is an application - bash,zsh,fish etc. The WM is an application - (KWin, Mutter, i3, xmonad etc.) The file manager is an application - (nautilus,dolphin,ranger, etc.) The package manager is an application - (apt,portage,pacman,nix etc.)

You can choose not to use any one of these (or replace them with alternatives). They exist because they do something essential, and the kernel exists because it provides them with a coherent view of the user's computer. It is their interface to the hardware.


You miss the point: people want to use vim, not GNOME Shell.


And there is nothing stopping you from doing that - just don't start gnome shell or X on startup, and type vim on the tty.


I think you've hit the nail on the head here. I would even take this thought further and say people want the features of the apps - but not necessarily segregated into distinct apps. Consider how poor the composition of various apps is today. The prevalent conception of 'operating system' and what it should provide can only result in another plethora of silo-ed apps. We need some major whole-system redesign.


If you're interested in operating systems along these lines, I'd recommend checking out Haiku OS. The OS provides a series of "kits" that form the basic building blocks of different types of application.

https://www.haiku-os.org/


Yeah, Haiku is definitely interesting and I'll take another look. I think the datatypes concept (originally from AmigaOS, and I assume also in Haiku?) is very powerful and mainstream operating systems haven't caught up yet. I am also interested in questioning some core ideas such as file systems and processes (do we need them?), binary compilation and binding (can we maximize work in a higher-level language?), etc.


Easy, that is how we used to program in the old days on systems where the language runtime was the OS.


Hard Disagree. I am glad Android has GCM so that every application isn't draining my battery maintaining a separate notification channel. I'm glad Android provides a Share facility so that I can easily send deep links to people and the mobile community didn't need to invent and evangelize some IPC based system for achieving this. I'm glad Android provides Accessibility Services. I'm glad Android provides a pluggable keyboard system so that apps don't need to implement their own. Etc.

OSs are not perfect, and they are facing headwinds (as well as some tailwinds like power constraints) but pretending they don't provide any benefits is just not thinking hard enough.


I'd just go with ChromeOS.


That is correct. You build the operating system that can woo the kind of developers who you want to do business with.


Shout out to Plan 9 and Inferno. Two operating systems that were decades ahead of their times and whose distributed partitioning architecture and ideas are desperately needed in a networked world of ever-increasing complexity, bloat, and protocols.

And I was a BeOS fan back in the day. Just stumbled across my old R5 box after cleaning out the attic. Very clever design for its time but didn't bring anything to the table in terms of networking, multi-user or distribution. It was a multimedia oriented, multi-threading speed demon and not much else.


Was always fascinated by Plan9 and Inferno... from a distance. Way ahead of its time. I’m glad some of its ideas (probably notably UTF-8) made their way to more conventional OS’es.


UTF-8 is probably the least of them. The really good ideas: the virtual file system, its networked counterpart 9P, everything as a file including shared libraries (there is no dynamic linking in Plan 9), per-process namespace partitioning (each process gets its own file system and you control what it sees), no root user, and modular kernel and system components like CPU, disk and auth services which can be run on different systems. Too many to name. Plan 9 was so far ahead of its time that I honestly think people just couldn't understand the need for such a powerful OS circa 1990, when the PC world was still on DOS, Apple was a niche, and Unix was something a student or business professional used.


Absolutely. But every single new operating system project falls into the trap of following all the same old patterns. It seems to be very difficult to approach problems and realize that the constraints that guided all of the old design choices simply no longer exist. Today different challenges exist, and people would rather keep re-creating the limitations of the past and solving those nicely understood problems than solve the new ones.

For instance, today we don't have text terminals. We have high-dpi displays. So why is the first thing you reach for a text buffer? Stop it. Vector-based fonts should be step zero. Raster graphics ought to come along pretty far down the line when you get to texture surfaces. We don't run with video units with basic memory-mapped framebuffers any more. Our storage also isn't seek-bound in most cases, we have RAM that can bust the limits of even old filesystem designs made for bulk storage. We have more now and just like the physicists say, More Is Different.

But the next revolutionary OS will probably come along and announce their goal of POSIX compliance and people will still trick themselves into thinking its 'new'. I'd rather see an OS that realizes the things the Internet and web have taught us and integrates it. What people want is search and instant-on application functionality. They don't care if it's "an app" or its "a webpage" or whatever. They want it to work, they don't want to "install", etc. And if you can murder the browser and get back to something native and cut the eye out of every company monetizing themselves through user behavior data sales, all the better!


> For instance, today we don't have text terminals. We have high-dpi displays. So why is the first thing you reach for a text buffer? Stop it.

+1. The familiar ideas and approaches are 'easy' but not necessarily the best (or even good) today. Let's start with the notion of an operating system and ask why it's needed. What purpose belongs within the OS vs outside? Some fundamental research along these lines would go a long way.


The problem is probably going to be that if you really want to be innovative, and not just reimplement decades-old ideas, then you will break a lot of existing applications. There are certainly hundreds or thousands of nice and well-researched ideas out there, but if you fundamentally change how you handle address spaces, perform inter-process communication, isolate processes, store files or whatnot, you will break applications and have to reimplement or at least adapt them, too. You will also need drivers for hundreds and thousands of components. And thousands of protocols and standards. A billion dollars and a decade of work will probably not be enough to seriously compete with existing operating systems. Maybe you could build a POSIX layer for compatibility, but now you are already building two operating systems.


So, after typing this I realized that it is all related to Desktop OSs, not servers, so keep that in mind when reading.

I don't think applications are really the big problem these days with virtualization and all. If your OS can virtualize other OSs, that can act as your compatibility layer. Windows 7(?) did it for XP, MacOS X did something kinda like it for classic, etc.

The real problem is you have to offer a compelling reason for people to A) want to use it over alternatives, and B) develop for it over alternatives. Personally I think if you can make it really developer and power user friendly a lot of applications might show up just because the system is a joy to work in for the kinds of people who make things. You'll note that this is pretty much the opposite of where Desktop OSs have been heading.

Also, drivers. Not sure how to crack that nut.


If you virtualize applications then there will be no real benefit to using your new operating system; it just adds complexity and degrades performance compared to running natively, and it will hardly be able to make use of the features that set your operating system apart. Maybe you could really attract developers, but that would not change anything fundamentally; normal users wouldn't come until the developers have native and better replacements for the applications users want to use. It just shifts the burden a bit, from you building the operating system and the applications to you building the operating system and other developers hopefully building the applications. It won't change the amount of time and money required, and may even increase both because it is no longer a single concerted effort.


He's saying that if you built a new innovative OS with cutting-edge OS research, the OS itself would be great to use, fast, and full of features, etc., and you'd use virtualization for the other things that aren't compatible.


Right. The virtualization is just a way to allow you to use the legacy applications you need while they are phased out in favor of the new hotness.

Of course, that'll get you nowhere unless the new hotness really is so much better that cost/benefit of running it, plus oldOS in a VM, is greater than that of just running oldOS.


Redox is interesting and shows promise https://github.com/redox-os/redox

...but it’s in no way a departure from the status quo, even referring to itself as kernel + GNU/BSD style ecosystem.

I wonder if we will see a new paradigm anytime soon. The Hurd?


Building an OS that doesn't rely heavily on some existing userspace and windowing paradigm is like building a new land vehicle that can't use roads.

In isolation it might be superior and have laudable attributes, but you're maybe a millionth of the way towards competing with current automobiles for a general purpose solution in any fairly densely populated country.


We'll make it use railroads then :-)


> I wonder if we will see a new paradigm anytime soon.

I'd like to see something like a self-hosting library OS. The entire OS is a single giant application, software is installed by adding the source code and recompiling the whole thing, and security is done using language-level and package management constructs.


BTW, you might also find PhantomOS interesting: https://en.wikipedia.org/wiki/Phantom_OS

Specifically 'Managed code: Memory protection on object level, rather than on process level'


See https://mirage.io

MirageOS is a library operating system ...


The way a library OS normally works is that you compile your OS+App on one machine and install it on another (virtual or otherwise). What I'm suggesting is that you compile on one machine and then reinstall your OS on the same machine, and the result is still able to do this trick.


I like the idea of library operating systems and the unikernel architecture. But a good idea would be having a multi-threaded operating system only as a hub for multiple virtualization boxes. Then we can leverage the idea of having an OS image bundled with the application and deploying it there. That would completely change how applications are managed/installed.


This sounds similar to Project Atomic[1]. The base OS is CentOS or Fedora, system files are managed by OSTree[2], and applications are run as Docker containers. You could probably also use something like Flatpak[3] for desktop apps, which is the approach used by Endless OS[4].

What I find interesting is these two projects are using a similar approach but targeting opposite ends of the spectrum. Atomic is intended as a cutting edge server OS for enterprises that want an integrated stack for running containerized infrastructure (they also started publicizing Atomic Workstation as of the last release, although I found it painful to use due to a lack of docs). Endless is targeting emerging markets, composed primarily of first-time computer users. OSTree+Flatpak allows the desktop to run similarly to a mobile OS, and there’s minimal chance of users breaking the system itself.

[1] https://www.projectatomic.io [2] https://ostree.readthedocs.io [3] https://flatpak.org [4] https://endlessos.com/for-developers


You mean the web browser, now with wasm? ;-)


Or any other platform that uses JIT



If you're interested in different paradigms see https://github.com/dzavalishin/phantomuserland/blob/master/R...


Yeah, Redox has precisely one interesting idea as far as I can see: it evolves the "everything is a file (except when it isn't)" UNIX paradigm with "everything is a URI". Other than that it's just any old UNIX clone with worse hardware support and Rust hype.


The best answer to "Rust hype" I've ever seen.

https://www.viva64.com/en/b/0324/


Discussed heavily when it was posted: https://news.ycombinator.com/item?id=9531822

and then again a year ago https://news.ycombinator.com/item?id=12232385


Absolutely. For clarification, I've tried and liked Rust and would probably use it on the right project. I still feel like there's some hype around it though.


The amount of evangelism around Redox, especially from jackpot51 himself, while it's still really little more than a toy, leaves me nervous about backing such claims.


Well, it's the first OS that isn't built on C/C++ (unsafe languages). That alone is a huge step forward for security and correctness. And it seems to improve on Linux in most ways, which is nice. Sure, besides Rust there aren't any big new ideas going on in Redox, but is there anything better out there? Rust + microkernel sounds like state-of-the-art to me.


> Well, it's the first OS that isn't built on C/C++ (unsafe languages)...

This isn't even remotely true, but we're in the age of short memories. There have been several operating systems built in Java within our (careers') lifetimes.

And dozens of operating systems that predate C. https://en.wikipedia.org/wiki/Timeline_of_operating_systems

There's nothing preventing other operating systems from being written in a safe way -- they just aren't. There are also multiple definitions of the word "safe". As in, "BSD or Linux are safe to bet the software architecture of my multi-million systems budget".

No new ideas mean no reason for me to even consider adoption until it reaches critical mass. Maybe in a decade I'll give it a look.

Edit: Multics was written in PL/I, which itself just had a new stable release this past September...

Edit2: Redox isn't really that safe even. It's pretty cavalier and leaves a lot up to the application. It uses the unsafe parts of Rust a bit. An application can still malloc and forget to free -- so aren't you right where you started? Really, every time this comes up, I would like to posit that "people evangelizing Redox don't even know what the f* they're talking about", but I honestly don't have time outside of my real work to dig into the thing. jackpot51 is fantastic at PR for his project though, and I don't want to discourage anyone from working on it. It's good that it has an active community, I just don't need to hear about it.

Edit3: Checking Redox' source shows that about 1.5% of the code invokes 'unsafe' and they cite this in their documentation, but what I really want to know is how frequently those lines of code are called.


>Edit3: Checking Redox' source shows that about 1.5% of the code invokes 'unsafe' and they cite this in their documentation, but what I really want to know is how frequently those lines of code are called.

I never used Rust but looked at it several times over the past years. Doesn't the 'unsafe' classification allow for future automated tools/theorem provers to challenge those specific lines? Looking for vulnerabilities/bugs in only 1.5% of the code instead of the entire code base sounds like a nice improvement.


Not today, but we hope tomorrow. We're working on it.


> An application can still malloc and forget to free -- so aren't you right where you started?

The upside of explicit unsafe is that you were able to find out that 1.5% of the code is unsafe, and you are able to focus in on it.

Put another way, it makes the haystack significantly smaller.


Only if you create a 'safe' API on top of the unsafe code; otherwise the bug could be anywhere. I remember an SSL bug which was really an API bug: it isn't always easy to create a 'safe' API.


> Well, it's the first OS that isn't built on C/C++ (unsafe languages).

That was in 1961 at Burroughs, using ESPOL.

There were plenty of OSes being built in Algol and PL/I variants outside AT&T's walls from the '60s onward.

History is full of OSes written in languages other than C or C++, including Apple's Lisa OS and the first editions of Mac OS.


Yes, and more than that we need a good, better thought-out modern successor to POSIX-type interfaces. For instance: I think a process ought to be able to have more than one current working directory and possibly more than one user-ID at a time. It should have the option to insert data in the middle of a file without having to manually shift the rest down. Shell scripts should be able to interact with the filesystem via transactions that can be rolled back if anything fails. Programs should be able to have typed input and output, checked by the shell and/or OS, which could also enable command-line tab-completion to search installed programs for any that match a desired type.
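
To make the middle-of-file gripe concrete, this is roughly what an application has to do today with POSIX-style file APIs, as a toy Rust sketch (Linux does have the block-aligned, filesystem-specific FALLOC_FL_INSERT_RANGE fallocate flag, but nothing general; a robust version would write to a temp file and rename):

    use std::fs::OpenOptions;
    use std::io::{self, Read, Seek, SeekFrom, Write};

    /// Insert `data` at `offset` by reading the tail and rewriting it shifted.
    fn insert_at(path: &str, offset: u64, data: &[u8]) -> io::Result<()> {
        let mut f = OpenOptions::new().read(true).write(true).open(path)?;
        let mut tail = Vec::new();
        f.seek(SeekFrom::Start(offset))?;
        f.read_to_end(&mut tail)?; // everything after the insertion point
        f.seek(SeekFrom::Start(offset))?;
        f.write_all(data)?;
        f.write_all(&tail)?; // shift the old tail down
        Ok(())
    }

    fn main() -> io::Result<()> {
        std::fs::write("demo.txt", "hello world")?;
        insert_at("demo.txt", 5, b",")?;
        println!("{}", std::fs::read_to_string("demo.txt")?); // "hello, world"
        Ok(())
    }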

I have a bunch of other random gripes with POSIX-style OS interfaces and find it a bit frustrating that these interfaces haven't changed much in decades and seem to have attained a lot of inertia of the "we do it this way because we've always done it this way" kind.


Forget the fallacious appeals to tradition: are there any competing (possibly superior) specs to POSIX?


Well, native Windows definitely isn't POSIX. I might go against the opinions on this board, but I personally believe that MS got a lot more right about OS APIs in Windows than POSIX provides with its lowest common denominator. Frequently the POSIX APIs have tons of under-specified behaviors that aren't obvious at first reading, particularly when viewed through a multithreaded/async paradigm. Many of the nonstandard Linux/BSD/etc. APIs are there to address major shortcomings in POSIX which one frequently discovers aren't problems with the Windows equivalents. Sure, CreateFile() has a lot more parameters than open(), but that frequently turns out to be useful functionality if you're writing more than introductory applications.


Well, there is Synthesis OS [1], where the kernel can rewrite itself on the fly to provide synthesized system calls (think of a specialized `read()` call that can return a file line-by-line, for just a simple example). But even that is (was?) from almost 30 years ago.

[1] http://valerieaurora.org/synthesis/SynthesisOS/


This OS was much superior from a latency POV, but if I remember the article correctly it had no security, so...


s/specs/PoCs/

Write one.


Yep, luckily there are definitely a good handful of interesting OS projects around. HelenOS[0] stands out to me the most due to its component-based design. Though I'm also looking forward to seeing how seL4[1] and Genode[2] might be put to use.

[0] http://www.helenos.org/

[1] https://sel4.systems/

[2] https://www.genode.org/


Not sure about the others, but seL4 is a microkernel, so it likely won't be adopted as a user-facing OS.


I'm interested. Please elaborate on your reasoning.


From my understanding, it was designed to be a small, verifiable kernel intended for use in applications with high-end reliability requirements (e.g., military, aerospace).

In the case of seL4, verifiability comes from a formal specification of its key operations that is proven using a theorem prover. The spec is implemented in multiple languages including Haskell and of course C.

As it stands, formal verification is limited to very small codebases: seL4 itself is on the order of 10k lines of C. Extending the proof further outside the current boundary to cover additional functions that are typically found in OS kernels would be prohibitive to say the least!


>From my understanding, it was designed to be a small, verifiable kernel intended for use in applications with high-end reliability requirements (e.g., military, aerospace).

It's immediately and very obviously useful for these fields. It is small indeed, because it follows the principle of minimality, introduced by Liedtke's L4. But do not get this wrong: It is a general purpose µkernel.

This principle of minimality (the supervisor mode µkernel should only do something if it can't be done by a service in user mode) is the difference between 1st and 2nd generation µkernels. seL4 is a 3rd generation µkernel: It's designed around capabilities.

Genode is building a framework to create OSs based on µkernels. They do support many µkernels, but there's been some serious effort put into their seL4 support. Genode isn't quite there yet for this purpose, but does have ambitions of being the base for a general purpose OS.

Here's a 2014 update on lessons learned from L4.

https://www.youtube.com/watch?v=RdoaFc5-1Rk

If at any point you've thought "sure, but doing things like this is slow", then I suggest reading this.

http://blog.darknedgy.net/technology/2016/01/01/0/


What about EROS and its relatives?

Or Genera?

Or even TempleOS?

Then there are some older systems worth looking at (e.g. ITS, MCP, VME/B, Oberon).

Some ideas: The system language should be the operating system. No need for files if it's image-based, just persistent objects. Atomic system calls. Capabilities instead of access control.


I'd love to see KeyKOS on modern hardware. http://www.cap-lore.com/CapTheory/upenn/ for docs, http://www.cap-lore.com/CapTheory/KK/Apridos/ for code for old hardware.


Yes. We need new requirements more than we need new implementations of the same.


> Does anyone else think it's time for a new and promising operating system?

iOS is the new and promising operating system.

It challenged and made us review many of the assumptions that everyone considers to be universal for an OS, even though they have no reason to be:

- The need for a user-accessible "file system."

- The need for multiple, customizable windows on the screen.

iOS dared to go against the grain and people rallied against it.

Even though the vision seems to have gotten somewhat muddled post-Jobs, with some of the trademark stubbornness giving way under Tim Cook, and despite the annoying bugs and inconsistencies lately, iOS has actually worked surprisingly well; the iPad is the first computing device that many of the older people in my family can comfortably and confidently use.

It still has some way to go, but if you give it an honest chance, you'll see that an iPad [+ Smart Keyboard + Pencil] can easily perform 70%-90% of the tasks that most people ever need to do on a computer, without many of the jags associated with traditional desktop operating systems.

With the recent rumors about unifying iOS and macOS while still keeping desktop and mobile user interfaces distinct, I'd like to believe that Apple have some exciting plans ahead for a future OS made up of the best features from iOS+macOS.


The original PalmOS didn't have a filesystem. Apps just sort of took care of any data they had; there was never a "save" or "load file" option in those apps. There are other systems that are similar in that regard (the AS/400 is somewhat similar, and much older, and MVS doesn't really have a filesystem either).

And multiple overlapping windows were the big breakthrough with the original Mac OS. Even then it was quite restricted (only desk accessories could overlap) until System 6/MultiFinder added the ability to run more than one application at a time.

So, while I agree the iPad model works well for a lot of things, I think the dedicated tablet model is sort of dying now that ~6" phones are everywhere. When the iPad broke, no one missed it.


I remember reading about EROS (Extremely Reliable Operating System) [1] years ago. The killer feature was that you could shut the computer off at any time, and it would always recover a consistent state of everything that was running when you restarted. It also had an advanced security system.

[1] https://en.wikipedia.org/wiki/EROS_(microkernel)


Now that even Microsoft is introducing WSL, I fear the POSIXification of OS architectures.

However, there is still hope when we look at the ongoing userspace transitions on mobile OSes.

I mean, the kernel might still be UNIX- or Windows-like, but the userspace is quite different.


Minix is going to take over any day, just you wait and see!


Apparently it already did. Wasn't it said to be running the management engine on all Intel chips?


I think that was sarcasm.


I think he knows that.


I think too.


Haiku, Minoca, SPIN, Fuchsia, etc. If you're actually interested in alternative computing, choices abound.


Haiku is just a rebranded BeOS, no?

Minoca is for embedded devices... not really GP

SPIN looks... very rough/early-stage

Fuchsia looks... interesting, I guess, even if it's going to be subject to the same exact mutation-related bugs and security holes that have plagued C-based OS'es for years


Well, Fuchsia has more C++, Rust, Go and Dart code than C.


It's time for the Hurd to gain more developers. It is a very elegant and flexible system, which de-emphasizes super users and gives more power to regular users.

Thanks to NetBSD/rump kernel and the POSIX persona of the Hurd, it can get a whole bunch of device drivers with relatively little effort.


There has to be some market need to make a new OS. Right now there isn't much of one. What will it practically accomplish that the current infrastructure doesn't?

The future of OSes is stuff like Android, iOS, Qubes (security) or MirageOS (simplicity for security).


And NixOS, a fantastic functional way to administer machines. And ready for prime time.


That's the vision of the future that makes me want to give up computing and live in a cave.


The thing is, the benefits are not big enough for the costs compared to other, more exciting things like self-driving cars and machine learning.

Also, "make a new OS" is much like "rewrite vs. improve in place". Most mature engineers improve in place. New systems research innovations come through improve-in-place operations: change a filesystem, change that subsystem, and eventually you have a new way of doing things.


There's likely a market need, but it is hard to get people to pay for software licenses.


Oh, they are willing to pay for it, if it can be delivered properly. What do you think Symantec and a million other enterprise security and IT management companies are part of?


Haiku OS now boots on modern hardware (like my 2015 ThinkPad), and aside from the usual niggles like trackpad support and some browser weirdness it “works”.

Of course, it's nowhere near as polished as macOS or Windows for day-to-day use.


I do, but I'm not confident that the Linux monopoly will go anywhere.

It's a network effects thing. Everyone demands POSIX compatibility, and it's getting to the point where software doesn't even try to be cross-platform anymore; it just assumes you've got POSIX and a GNU environment. Possibly even specifically Linux -- Docker will run on other OSes, but it does so by launching a Linux VM to run all the containers in.


So what happens now that systemd is dragging Linux away from POSIX compatibility?


POSIX itself doesn't matter (it really hasn't mattered since the days when there were competing UNIXes and portability was important).


POSIX is an interface contract between users/programmers and the operating system, and very much matters.


Not really; nowadays Linux compatibility seems more relevant than plain old POSIX, and even then only when coding in C or C++.

"POSIX Abstractions in Modern Operating Systems: The Old, the New, and the Missing"

http://www.cs.columbia.edu/nsl/papers/2016/posix.eurosys16.p...


It's true that web applications are very Linux-centric; however, there are a lot of business solutions that run on Windows. ELO, DATEV, and SAP come to mind.


I used to use BeOS... got through college using it. I wrote a web server on it and ported it to Linux for the one CS class I ever took in college, and was appalled by green threads relative to bthreads. Extra credit was "make it multithreaded"... well, I already had that done.

I think something interesting would be an OS based on BEAM as an underlying threading and memory management model, if you could put in security guarantees.


I think rather than reinvent the wheel and make an OS from scratch, just get involved with one already being developed.

Haiku is an open source free rewrite of BeOS and they are trying to make an OS that can run BeOS apps. https://www.haiku-os.org/

AROS is an open source and free AmigaOS 3.X rewrite, it can run old Amiga apps.

http://aros.sourceforge.net/

OSFree is an open source and free OS/2 clone. http://www.osfree.org/

ReactOS is an open source and free OS that tries to be like XP/2003, and it has made some progress recently.

https://reactos.org/

If I was in charge of Microsoft or Apple, I'd look into one of those OSes to fund and exchange code with instead of creating one from scratch.

While they are not ready for prime time, if some billion-dollar company invested a million USD in just one of them, it could get out of the alpha stage and go beta or retail in a few years or so.


It'd certainly be nice, but writing a kernel is easy compared to having working device drivers; that's hard enough on Windows/Linux/macOS...


Is it the writing of device drivers that's the difficult part, or just the fact that there are so many devices to write drivers for?


It depends on what device you are writing drivers for. I've done a few in userspace, and it can take anywhere from 60 minutes (with simulation for testing) to a year for a few engineers.

That’s ignoring stuff like filesystems, network protocols, shader compilers, or whatever else you need to get basic functionality out of the device.

Also, a given chip might require anywhere from 2-3 drivers (for something simple like a power controller) to hundreds, plus a stand-alone OS (for things with an embedded microprocessor, like a drive or some FPGAs).

It is tedious work, but I’d love to see the BSD’s band together with a hardware manufacturer to produce a high end consumer-grade laptop with a 4-5 year production lifespan, and well-supported hardware.

There haven’t been any noteworthy hardware improvements in that space in the last 5 years (arguably a 2013 MBP is better than a 2018 one in most ways), so this isn’t as crazy as it used to be.

These laptops could be similar to Windows laptops also sold by the same vendor, which reduces engineering costs. Dell sort of does this for Linux, but they churn the models too quickly for the BSD’s to catch up, in practice.

The vendor could rotate a new model in every 2-3 years (maybe rotate form factors each time), and make it a sustainable business.

There is precedent for this in the embedded space. Look at PC Engines, for example.


Beyond the negative side effects within Linux of not having a stable kernel API, there's the fact that third-party OSes have a really hard time leveraging the open-source work companies do for Linux. If the Linux driver model were stable enough that a Linux driver book stayed accurate for more than 3 months after it was published, it would be possible for people to write API compatibility layers for certain classes of drivers and leverage that work in other OSes.


Both but I think #1 is harder to solve/workaround.

Many modern devices (GPUs are most famous for that but there’re others, e.g. printers) include their microcode/firmware/binary blobs as a part of their device driver. Technically you can reuse these binaries for different OS, but legally, at least if you’re on American soil, you can’t.


Both.


When it is time, someone or a group of people will step up. Then the issue will become one of adoption. Enough people will have to see it through the early and rough stages. Prospective users will have to avoid getting turned off by negative, hastily written, or outdated opinion pieces.

Most people would not be willing to give a chance to what may be a great product in 5-10 years. So the next great operating system will most likely sneak in through a Trojan horse. My bet would be, containers, something, something, something, operating system.


The L4 ecosystem is interesting, and there's a lot of activity going on. If you're just interested in something "not OS X/Windows/Linux", there's the IBM mainframe world (you can use the Hercules emulator to run the software) or the DEC (now HP) VMS world (there's a free hobbyist license). Way out, you can run Symbolics Genera in a VM and see how the world would have been if Lisp had won.


Weirdly, I've been dreaming about this recently (it all started with imagining a relational language, then it got bigger and bigger): a new CPU architecture, memory, etc. (with full sci-fi capabilities).

The main problem is compatibility. The solution is to virtualize: let's say we at least provide headless Linux (Docker?). That's enough to start. Imagine that Linux runs not too badly there (i.e., fast enough), just to keep dreaming.

The next question is: why? Why build for this OS if I can "just" run Linux on it?

The conventional narrative is that the OS is "boring", "invisible", etc. I think that's only true because most OSes are boring. If we think of the OS as the master integration point, then it's more interesting.

Consider the possibility of turning Unix text streams/pipelines into the actual UI paradigm (like React), with the ability to connect not only apps but also components inside them. For example, if I'm in my mail client and want to crop the image I just pasted, today I need to take a long detour through Photoshop and then return to mail.

Why?

I think it should be possible to say "this is an image; any app/component that can operate on it may do so". So, in short, a UI manifestation of:

MailApp.currentMail.images | PhotoShop.crop > returnBack

So, like in a REST API, apps publish their URIs with Accept headers like "json", "xml", "img", etc., and this allows them to be matched to other apps that can operate on them.

Apps would also run in Docker-like containers by default (fully isolated), yet communicate like Erlang actors (sending messages) with any other app that accepts the format, letting the user express what we currently do in the terminal.
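
A toy Rust sketch of that type-checked wiring idea (all names invented for illustration; real "apps" would be separate, isolated processes exchanging messages, not structs in one binary):

    #[derive(Clone, Copy, PartialEq, Debug)]
    enum Media { Text, Image }

    // Each endpoint declares what it accepts and produces, like an
    // Accept header; the "shell" refuses to wire up mismatched types.
    trait Endpoint {
        fn accepts(&self) -> Media;
        fn produces(&self) -> Media;
        fn run(&self, input: &str) -> String;
    }

    struct ListImages;
    impl Endpoint for ListImages {
        fn accepts(&self) -> Media { Media::Text }  // e.g. a mail id
        fn produces(&self) -> Media { Media::Image }
        fn run(&self, input: &str) -> String { format!("images-of({})", input) }
    }

    struct Crop;
    impl Endpoint for Crop {
        fn accepts(&self) -> Media { Media::Image }
        fn produces(&self) -> Media { Media::Image }
        fn run(&self, input: &str) -> String { format!("cropped({})", input) }
    }

    fn pipe(a: &dyn Endpoint, b: &dyn Endpoint, input: &str) -> Result<String, String> {
        if a.produces() != b.accepts() {
            return Err(format!("type mismatch: {:?} -> {:?}", a.produces(), b.accepts()));
        }
        Ok(b.run(&a.run(input)))
    }

    fn main() {
        println!("{:?}", pipe(&ListImages, &Crop, "current-mail"));
    }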

---

Another thing that totally needs a revamp is file exploration. It's just not right that Google answers faster than a local disk search.

Also, why not isolate in containers the files for SYSTEM, APP + CONFIGS + TEMP/CACHES, and USER_DATA, so if I want to do a backup I can just copy the USER_DATA container and be worry-free?

These are just some random ideas. It's about time to think of the OS as an app too, and start doing cool things :)

--

In the area of security there's also a lot to do. A minimal step is to acknowledge that most users perform multiple roles on a system; the case where a dedicated user exists to be admin or DBA is just a special case.

So, reverse the user/role relationship.

I'm jhon. Now I change hats to be an admin, then change to be a normal user, then change to be a developer. I don't need three user accounts for all of this.

Because I'm all of this.


> It's just not right that Google answers faster than a local disk search.

A while back, I had just bought a new desktop. I couldn't just use the hodgepodge of ~1280 MB worth of RAM from my old desktop because it was a different technology, but I also couldn't afford a GB of RAM that would work with the new desktop.

I kept my old desktop on the network, made a big ramdisk out of 1 GB of its RAM, and exposed it to the network as a network block device (the kernel driver was called 'nbd'). I mounted that device on my new desktop and used it for swap, because it was orders of magnitude faster, even after eating all the transmission, TCP, etc. delays, than hitting disk platters.

.... which is _really not all that different_ from why Google can return search results faster than you can search your local hard drive.


Regarding piping between apps: this is the role the clipboard serves today. When you copy something, the clipboard stores the app you copied from and a list of formats. When you paste something, the app selects a format from the list in the clipboard, then receives the contents of the clipboard in that format from the app you copied from.

Regarding your filesystem suggestions: this is already commonly done on Linux. Generally, separate filesystems are created on separate partitions for each of /mnt, /home, and / (everything else). If you wanted to back up all user data, you could find the partition, then run `dd if=/dev/home of=$BACKUP`.

Regarding your third point. Windows UAC already allows a user to jump into being an admin when needed. I'm unsure why you would need a separate user account for other roles. Users exist primarily to protect your stuff from other people on the computer. You shouldn't need to protect your files from yourself.


I understand that some hacky, underused, and not-well-designed ways to do some of these ideas exist :)

> Regarding piping between apps: this is the role the clipboard serves today

This is more a "scratch" area than truly piping. If you can't do:

map(PhotoShop.listImages, zipToFile) |> map(sendEmail)

naturally, then it's not the same idea.

> this is already commonly done on linux.

In fact this one is almost there. But think, for example, of all the trash/caches/temp files stored in HOME.

The problem is that the whole thing relies on ad-hoc convention; let's imagine developers and users will respect the filesystem layout...

You can totally store your photos in /bin. You can't trust that ONLY your personal data is in HOME.

> Windows UAC already allows a user to jump into being an admin when needed. I'm unsure why you would need a separate user account for other roles.

Not, is the opposite, is not to create more accounts, but to switch roles per context.

I'm jhon/admin when managing the computer, jhon/user when browsing, and jhon/developer when compiling.


It turns out all those things can be done in userland; you don't need a new kind of kernel to support them. Likewise, most of the interesting things going on in systems research today (e.g. distributed computing) don't really touch the kernel either.


Reusing the kernel and some of the low-level code is clearly the way to go. But userland is truly where the game is.

The important thing to note is that the OS shows the way, similar to how iOS changed everything.


The magic of iOS was in its design and human interface components. That Apple was able to turn OS X so quickly into iOS by reusing all the low level components shows us how little is happening in that space ATM.


OSes aren't that interesting anymore. We take them for granted, and can't imagine them doing anything more than they do now that can't be done well enough in user space. BeOS was exciting because it was doing something that other OSes weren't doing yet; now it would be old hat.


I'm confused; BeOS and Linux aren't that far apart, maybe 4 years at most.


If BCIs become mainstream, we might need new operating systems anyway.


Sure, but how would you solve the hardware drivers problem?


Sounds cool!

Mildly related: I found "Nand To Tetris: The Elements of Computing Systems" to be an amazing, bottom-up, hands-on approach for learning about the fundamental layers of computer architecture, from hardware to assembly to OSs.


A wonderful complement (albeit with significant overlap) is Charles Petzold's Code. It's written as prose, so it doesn't include exercises, but the book is infused throughout with the mastery attained by its author. His synthesis of different ideas and the historical context he provides give a very nice perspective on computing. While nand2tetris makes for a good textbook to work through, Code makes for a better subway read.


I am currently taking the class. It's an embedded systems + OS class that takes on the challenge of writing an operating system for a Raspberry Pi 3 in Rust. The lectures aren't recorded, sadly, but it's the first offering of something like this, and both Sergio and Dawson are fantastic professors!


I guess it's "experimental" as in you will be in a lab setting with experimental (Rust!?) code for traditional operations on an RPi.

Not that you will be experimenting with new OS concepts.

Which is sort of a shame, because it seems much of OS research might be turning back to concepts that haven't been explored since the "RISC/Unix" revolution of the late '80s and early '90s, when multiple-privilege-level machines, capabilities, fully ACL-controlled operations, message-passing kernels, and dozens of other concepts were proclaimed not "fast" or fell by the wayside because the RISC and traditional Unix model couldn't support them, while we continue to pay a huge hidden tax for the flat-address/paged memory model...


Note that the "e" is inspired by CS107e [1], the experimental version of "Introduction to Computer Systems". 107e also uses a Raspberry Pi to help students incrementally build up a working knowledge of basic system components, including the processor, memory, and peripherals.

[1] https://cs107e.github.io/


Looks similar to "Baking Pi" from Cambridge. https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/


I'm afraid I disagree. An OS course needs to at least cover virtual memory, processes and filesystems. Doesn't look like "Baking Pi" does.


That depends on what you mean by "operating system". Lots of real-time operating systems don't have virtual memory or file systems. But I would agree with you that the "Baking Pi" course covers little of what one would typically associate with an operating system; it is much more a course on low-level ARM assembly and interfacing with a frame buffer.


That one is in straight Assembly, but actually quite interesting to follow.

Thanks for the link.


Will this become the Rust era's Minix (which encouraged a certain student to write an improved version)?


That would be Redox.

https://www.redox-os.org/


Redox looks very cool!

It is very funny that their "Screenshots" tab is full of pictures of real computer screens:

https://www.redox-os.org/screens/


To me that looks like they are just trying to show their software running on real machines. Which is why they show the whole laptop in the picture. A screenshot saying "Here it is running on a Thinkpad" doesn't have the same effect you know?


When HDMI was new, one of the things that went around gamer circles was the fact that it was possible to do video capture on a second computer by daisy-chaining them together. That way your game isn't compromised (as much; you probably still have higher latency) by trying to capture video.

I don't know if any modern video cards can still do that, but it might be good for these guys to look into it.


You can still get HDMI capture cards, as the more popular streamers use them so their main computer is unencumbered by video mixing.

Most video cards these days have built-in support for capturing video and encoding it on the fly in hardware, with a <5% impact on performance. This wouldn't work for these guys, as it's probably a feature enabled by the driver.


It looks like you need to be a Stanford student to be able to watch the lectures. Can anybody here recommend a good OS course(video preferably) for beginners?

I am a self-taught developer and would love to learn it in my free time.


You don't really need the lectures - I took the main version (CS140) with Ousterhout. His notes are condensed and give you a good summary of the basics. They're also open to the public:

https://web.stanford.edu/~ouster/cgi-bin/cs140-spring14/lect...



Berkeley's CS 162 is great, and recordings are available on internet archive. There are some decent undergraduate textbooks these days too: OSTEP is particularly accessible and free online, and the Tom Anderson _Operating Systems: Principles and Practice_ book is solid.


I hope they turn this course into a MOOC.


Sadly, Stanford isn't as committed as MIT to opening up their courses. Only a handful of them can be found on Coursera/Lagunita.


That's unfortunate. The first class I took on Coursera (and it might even be before it was Coursera) was a Stanford class. It was "Introduction to Databases" by Prof. Jennifer Widom and was very good.


Do you know why?


Don't they have their own edx platform?


It's a fork of edX's platform called "Stanford Online – Lagunita".


I remember something in that vein, yes. I also did their DB course and it was a very nice one (the tone was a nice blend of simple but serious, with interesting exercises but no torture).


I'd love to follow along, but I don't see a parts list to get the matching parts.


Assignment zero lists the following parts:

    1 Raspberry Pi 3 (Model B)
    1 1⁄2-sized breadboard
    1 4GiB microSD card
    1 microSD card USB adapter
    1 CP2102 USB TTL adapter w/4 jumper cables
    10 multicolored LEDs
    4 100 ohm resistors
    4 1k ohm resistors
    10 male-male DuPont jumper cables
    10 female-male DuPont jumper cables


Oh hey, so it's actually a course in operating systems combined with embedded programming? That's really neat!


I'm not sure I'd consider a system running full blown Linux with self hosted development tools, graphical environment and so on to be a typical embedded system.

Edit: I suppose they may not be running Linux on the Pi, certainly not the graphical variants on a 4GB card.


Not sure where you are getting that info... but Xfce comes in at just above 2 GB. You could easily run a graphical desktop on that.


Yes, I saw that, but it'd be nice to get the exact same components if possible. It'd be even nicer if the same pack the students got was available from their source.


What do you mean by exact same components? Cables are pretty much cables, LEDs are LEDs, resistors are resistors, and a Raspberry Pi is a Raspberry Pi. Other than the Pi, brand names don't really matter too much in this space.


That list is a bit under-specified for an absolute newbie to electronics. A link to Digi-Key or Mouser would be helpful. A resistor isn't just a resistor, for example: I imagine they really mean a 1/4-watt, 5%-tolerance through-hole resistor for use on the breadboard. Someone who orders a surface-mount 0402 1k resistor is in for trouble...


I haven't done any electronics stuff before, so I didn't know that.


Sweet! I have been looking for a good opportunity to build something with Rust in a domain I already know, and this looks like the right mix of everything!


See also: http://rust-class.org/

(Note though that this uses Rust as it was in 2014.)


Yeah, with the caveat that Rust in 2014 was a very different language; we had a runtime back then!

http://rust-class.org/0/pages/final-survey.html was very instructive to read at the time.


Is this supposed to be a replacement for using PINTOS to build an OS in C? I remember the PINTOS projects as one of the most rewarding things I worked on in my college career.

https://web.stanford.edu/class/cs140/projects/pintos/pintos_...


It's an alternative. Rather than using a virtualized environment, you build everything yourself for a Raspberry Pi 3. No scaffolding, nothing. And it's in Rust!


Interesting. Can anyone take this course or Stanford students only?


Well, the lecture notes and assignments seem to be available online, so one can follow along at one's own pace.

Assignment 0: https://web.stanford.edu/class/cs140e/assignments/0-blinky/


That's awesome. It's very well written. In just five minutes of reading, I learned what registers and memory-mapped I/O are and how they apply to the Pi. Old news to CS students but for this guy with a liberal arts degree, super helpful. I'm grateful that the document took the time to explain this kind of thing.


Did you complete assignment 0 yet?


Seconded. Following a class syllabus on your own time, without grading or feedback beyond knowing whether you did it, is underrated :-)


If you follow the pre-registration survey link, it asks for a SUNet ID, and if you try to register via Piazza, it asks for an @stanford.edu email address. So it seems likely the course proper is just students.


FWIW I created a subreddit [0] for folks (not from Stanford) wanting a forum to study, follow, and discuss this course.

[0] http://www.reddit.com/r/cs140e


It seems the two biggest hurdles to creating a new OS (aside from designing and actually writing the damn thing) are software support and driver support...

Silly idea, but how about a "super" terminal OS... that does a few primary things:

1. Responsive clients for various UIs (X, VNC, RDP, SSH, PowerShell, etc.) - having a full UI to use my smartphone would be awesome.
2. Run VMs (which can run Docker, etc.)
3. Uses some mechanism (wave hands here) on the remote machines to facilitate storage, computation?, graphics, audio, clipboard, and printing on the local terminal.
4. Strong support for a few devices. Pick a few network cards and graphics cards to support, perhaps a few other basics, and leave it at that...

I know it would still take years of writing a LOT of code, but this could reduce the lack of software support. Drivers are still a sticky issue...

(Yeah, like I said, a silly idea!)


Plan 9 sorta does #3. It can execute on one device, display on another, play sound on two others, and store wherever. This was always my pipe dream w/ that OS.


I’ve been moving towards this for a couple of years. Right now a minimal Linux with Firefox and an RDP client will do 90% of what I need, and I would run the whole thing off a Raspberry Pi 3 if it supported dual monitors.

Ironically, Windows Home also qualifies, and new low-end fanless PCs can drive 4K displays, so I’ll likely be running a setup like that by Summer, with zero local data except remote connection settings.


Have a look at the cheat sheet, it's extremely illustrative for intermediate Rust programmers:

https://web.stanford.edu/class/cs140e/notes/lec3/cheat-sheet...


Here is the original link – it was made by Raph Levien:

https://docs.google.com/presentation/d/1q-c7UAyrUlM-eZyTo1pd...


A quick note: while this is a good way to get a grasp on how these types work, it also exposes/implies things that are implementation details and not guaranteed. This is why we haven't put it in the docs, as awesome a resource as it is.


My feelings are hurt!

Nah, just kidding. An example of the kind of thing that might change: the current stdlib mutex implementation depends on pthread mutex objects, which can't be safely moved, so there's an additional allocation to box them. The parking_lot crate is an alternative implementation that interfaces with the system at a lower level, so it can avoid this allocation.
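
For instance, the user-facing difference looks like this (a sketch assuming parking_lot is added as a dependency; the API shown is its documented surface, though the internals are exactly the kind of detail that changes):

    use parking_lot::Mutex as PlMutex;
    use std::sync::Mutex as StdMutex;

    fn main() {
        // std's Mutex (at the time of this thread) boxed a pthread mutex
        // and tracks poisoning, so lock() returns a Result.
        let a = StdMutex::new(0);
        *a.lock().unwrap() += 1;

        // parking_lot's Mutex keeps its state inline (no extra allocation)
        // and has no poisoning, so lock() returns the guard directly.
        let b = PlMutex::new(0);
        *b.lock() += 1;
    }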

I found I learned a lot about Rust's low level implementation details when making the chart, so can definitely see how it would be useful for such a class, and am very happy it's being used in this way.


<3

By the way, I'm extremely sad to be missing your talk at the Recurse Center; I'm in Canada until Monday :(


Is there any possibility of adopting this with an explicit caveat that the details are subject to change? I would have found it incredibly useful when first learning about these types; it feels like a missed opportunity.


For anyone interested in following along with the course, I have created a cs140e subreddit [0], as I think it will be useful for those interested. Feel free to join.

[0] http://www.reddit.com/r/cs140e


Wow, I would have loved to take this class in my undergrad. This looks really cool!


Could someone explain to me how Golang would compare to Rust for this purpose? I'm a Golang newbie and I always wondered if it would be possible to write an OS in it (+ some assembly).


Golang isn't generally suited to real-time or near-real-time systems (like operating systems) because of its garbage collector. It will pause (not for very long) whenever it likes to tidy up, and that can cause problems when interacting with hardware where timings are potentially critical.


Thanks, makes sense!


I'm pumped to see the new OS architectures Rust will allow for.


Same as C. Rust is basically C with hardcoded checks, enforced and inserted by the compiler, that good programmers already know to put in.


Please don't post flamebait to HN. We don't want programming language flamewars, or any flamewars.

https://news.ycombinator.com/newsguidelines.html


Given all the vulnerabilities that have happened with C code over the years, I think making it harder for programmers to screw up is a win.


While it's true that Rust has the potential to decrease the frequency and impact of certain types of bugs, I think the GP was saying that Rust's borrow checker is unlikely to revolutionize the way operating systems are designed and the way userspace interacts with the kernel.

The GP fails to acknowledge Rust's other features, such as its type system, but I'm not convinced those will revolutionize operating system architecture either.

That doesn't mean I'm not excited about the potential of a new open-source general-purpose operating system kernel written in Rust entering the same space as the Linux kernel. It seems like a good fit for the language and an area where modernization is past due.


Fair enough, I definitely agree with everything you're saying. I took a little issue with the statement that "good programmers" know how to properly write safe C, though, as it is not only unnecessarily condescending but also glosses over the fact that even good programmers write code with bugs, and "knowing how" to do something properly doesn't mean you'll do it properly 100% of the time.


You're missing some things. For example, the APIs you can expose in Rust can encode significantly more information than C-based ones can, given the strength of its type system.
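
A contrived sketch of the kind of contract a Rust signature can carry that a C prototype can't (all names invented):

    // In C: void submit(buf_t *buf); -- the prototype can't say whether the callee
    // frees buf, whether buf may be NULL, or whether the caller may reuse it after.

    struct Buffer(Vec<u8>);

    // Takes ownership: after this call, the caller provably can't touch the buffer.
    fn submit(buf: Buffer) {
        drop(buf); // e.g. hand off to a device queue, then release
    }

    // Borrows immutably: the buffer is guaranteed present and unmodified.
    fn checksum(buf: &Buffer) -> u32 {
        buf.0.iter().map(|&b| b as u32).sum()
    }

    fn main() {
        let buf = Buffer(vec![1, 2, 3]);
        let _sum = checksum(&buf); // borrow ends here
        submit(buf);               // ownership moves into submit
        // checksum(&buf);         // would no longer compile: buf was moved
    }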

Of course, Rust doesn't have a stable ABI yet, but then again, neither do many OSes. We'll get there!


I was thinking something similar. Maybe I can pass a userspace closure into the kernel? Maybe I don't need the memory manager to be as beefy? Maybe the API can be fully reactive (idk if this is good or not).


One thing, and one that's relevant to an OS class, that I've been doing with my own toy kernel (which I haven't had any time for lately) is using Cargo workspaces to make it extremely modular. You can of course do this in C, but it's even easier with Cargo. In a class context, you could do something like "here's the entire OS except the scheduler package; implement it and make all the tests pass."
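
A minimal sketch of what that seam might look like (entirely hypothetical names, and using std collections for brevity even though a real toy kernel would likely be no_std):

    use std::collections::VecDeque;

    // In the `kernel` crate: the interface the missing `scheduler` crate must satisfy.
    pub type TaskId = usize;

    pub trait Scheduler {
        fn add(&mut self, task: TaskId);
        fn next(&mut self) -> Option<TaskId>;
    }

    // In the student's crate: one possible implementation the tests could accept.
    pub struct RoundRobin {
        queue: VecDeque<TaskId>,
    }

    impl Scheduler for RoundRobin {
        fn add(&mut self, task: TaskId) {
            self.queue.push_back(task);
        }
        fn next(&mut self) -> Option<TaskId> {
            // Rotate: the task that just ran goes to the back of the line.
            let task = self.queue.pop_front()?;
            self.queue.push_back(task);
            Some(task)
        }
    }

    fn main() {
        let mut s = RoundRobin { queue: VecDeque::new() };
        s.add(1);
        s.add(2);
        assert_eq!(s.next(), Some(1));
        assert_eq!(s.next(), Some(2));
        assert_eq!(s.next(), Some(1)); // wrapped around
    }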


This is true. I wonder if we'll see an OS where each of these modules has some sort of monadic (or maybe something else) API so that you can really decouple the implementation from the API.

This exists: https://ocw.mit.edu/courses/mathematics/18-s996-category-the...


Actually, it is more like "Same as Ada", but with curly braces (oversimplifying here).


> Rust is basically C with hardcoded checks enforced and put in by compiler that good programmers already know to put in.

I have a feeling you know absolutely nothing about Rust.

Even the best programmers and programs in C have issues with the things the borrow checker catches. Moreover, the expressiveness of the type system is worlds above and beyond any other mainstream (so not Haskell or OCaml) language. Saying it's C with checks is like saying French and Hungarian are the same, but with some different words.


> I have a feeling you know absolutely nothing about rust.

Please edit uncivil swipes out of your comments here. The rest of what you wrote is fine.


I think you are correct about Rust being substantially different from C, but that's a nasty way of saying it ("you know absolutely nothing about..."). Is it really necessary to speak that way?


It wasn't that nasty. It might just be because I have no sympathy for people who state things as fact yet don't know what they're talking about.


Except the context of the quote was OS architecture so dmitrygr was correct -- Rust will not provide different OS architectures.

Moreover, it was mean-spirited. Probably because they were reacting to the second part of dmitrygr's quote.


On a fundamental level, sure, a new language doesn't inherently make a new architecture possible. But you have to think of it from an affordance perspective: what architecture does Rust make easier or better?

Consider the Stylo project in Firefox, for example. Yes, Mozilla could have done the parallelization in C++ instead of Rust. They even tried! Twice! But it failed both times. That doesn't mean that it's impossible.

In my experience, this is what people mean when they say things like this.


Sure, but you're moving the goalposts :)

Look, I'm a big fan of Rust - I want to see it succeed. I go to Rust meetups in my city, I've advocated for its adoption at my company, yadda yadda. But I don't think these vague half-truths are good for the language or the community. In fact, I think they'll be harmful to the community over time as the language fails to live up to expectations. When I started programming, Java was in the position Rust is in now. It was given so many vague platitudes that it experienced pushback a few years later as developers realized it didn't fix all their problems.

I don't mean for this post to be mean or condescending. Tone is hard to transmit over the internet.


I don’t think it’s moving the goalposts. It’s where they were set in the first place. Furthermore, nobody is making promises of solving all the problems, the OP said they were interested to see what might happen, which is a very different statement than “this will happen” or “this will happen and fix everything.”

That is, I 100% agree with your comment, but I don’t see it happening in this thread.


How is that a vague half-truth? Just because it's not concrete doesn't mean it's either vague or only half true.


Well, that's not very nice.

I took the "Rust is like C" line to mean that you can use Rust anywhere you can use C, running at C speeds (the whole zero-cost abstraction/no runtime/unsafe blocks story), something that can't be said about OCaml/F#/Haskell (TTBOMK).

EDIT

Moreover, if you look at the context, you'll see that his point is that when it comes to OS design, a Rust-based OS will look mostly like a C-based OS.


Rust + RPi interests me and I might want to follow along. Are they using an available kit from an online retailer? Getting/shipping individual parts is a pain where I'm from.


I wonder: is there room for a new OS to truly surpass Linux/Windows?

I would love to see more commercial success for micro/nano-kernels (vs. the staple monolithic kernels)


Can someone point to a reliable website where I can buy the required materials? I think this is going to be a big question for beginners.


Welp. I don't even go to Stanford, but I applied anyway 'cause it looked good. Wish me luck getting in :)


I'm glad they transitioned to Rust. The OS classes I've taken before were nowhere near this cool.


UVA actually had a Rust-based OS class four years ago! The language was at something like 0.7, so I'm really excited to have this course with up-to-date Rust.


Rust + OS design seems like the sweet spot for teaching future systems programmers. Nice idea!


Seems the chances of a non-Stanford student getting into this class are pretty slim. Super lame.


Can anyone from any corner of the world register for this class?


Why was the title changed? It used to be "Stanford CS140e: Writing a Raspberry Pi OS in Rust"


HN usually prefers the <title> of the webpage; in this case, that's "Stanford CS140e - Operating Systems". However, it's already a link at stanford.edu, so the first half is redundant, and "Operating Systems" alone is a zero-information title, so "An Experimental Course on Operating Systems" was used instead, as it's the subtitle of the page.

I'm not a mod, so I can't say for sure, but I bet that was the thought process.


Proving again that heuristics don't solve problems, they just make them less obvious.


Yeah, I figured the original title I put was more informative, but oh well. Hopefully people click it and see it's a pretty modern and interesting take on an OS class!



