Engler was one of my favorite professors at Stanford. He's kind, sharp, well-prepared, and consistently delivered great lectures.
One of the things that made him stand out to me is the insights he provided beyond what's in the papers we read in CS240. You can read the papers yourself, but the papers won't tell you things like "I think the reason this paper was accepted is different from the reason the author likely thinks it was accepted."
He's awesome, take his class. Find a good team, and, um, don't underestimate how much time CS140 takes. :)
That's a great perspective to have! I wonder if opinions like these are available online - I'm thinking in a format like Adrian Coyler's morning paper. I would love to read opinionated, yet highly educated material.
It is not that clear from the course syllabus.
"This course covers the following topics:
Disks, File systems, I/O, Threads & Processes, Scheduling, Virtual Memory, Protection & Security, Interrupts, Concurrency & Synchronization."
In theory, if those are the goals, one doesn't need Rust as part of the first week's lecture.
Rewriting the kernel in Rust sure makes for a very interesting (and very tough) way to learn OS concepts in 3 months.
At Google I worked with the Gmail performance team to decrease server downtime by detecting anomalous behavior before server failure. I designed and implemented an original anomaly detection algorithm based on local outlier factors.
Does anyone have any info about anything related to this?
I want to write a program to detect anomalous behavior in my systems too. (Mostly so I can completely ignore the useful information, but still.)
The tool would run in trading engines and monitor different metrics of performance. CPU usage, while allowed to burst, shouldn't be sustained at 100% for a long time. Typically, these machines are monstrous and have hundreds of gigs of memory; therefore, they shouldn't come anywhere near 100% memory usage either. There's a bunch of other proprietary internal metrics that I can't really explain (stuff like latency of a particular kind of response).
Anyway, since I was most comfortable with writing the tool in Python (also, time was a constraint as I started working on the project towards the end of my internship), I wrote a modular tool with pluggable metrics. The reason why it wasn't a monolithic block: if the company decides to add a new metric they want to measure and follow, they should just have to write the logic necessary to retrieve that metric. The rest should be handled by my framework: anomaly detection, alerting relevant parties, and so on.
So I banged out the framework and had just enough time to write two modules: CPU usage and memory usage (probably the easiest ones). With a simple moving-window average and standard deviation, you can identify outliers: I marked everything outside 2*sigma as an outlier, as it's better to have false positives than false negatives (but not excessively many false positives, as that would be counterproductive and make humans ignore reports).
The alerting part was fairly straightforward as there was a built-in tool to do that. I ran a few tests with historical data and it was pretty good at detecting the anomalies that I would mark myself as a human. It could be better if someone smarter than me could come up with a machine learning algo to do the same thing more accurately.
(I didn't productionize it as I didn't have the time.)
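The moving-window 2-sigma rule described above can be sketched in a few lines. This is only an illustration, not the original tool: the window size, metric values, and function name are all made up here.

```python
from collections import deque
from math import sqrt

def detect_anomalies(samples, window=30, n_sigma=2.0):
    """Flag samples more than n_sigma standard deviations away from the
    mean of the preceding `window` samples. Returns flagged indices."""
    history = deque(maxlen=window)  # sliding window of recent samples
    anomalies = []
    for i, x in enumerate(samples):
        if len(history) == window:
            mean = sum(history) / window
            var = sum((h - mean) ** 2 for h in history) / window
            sigma = sqrt(var)
            # Guard against sigma == 0 (perfectly flat history).
            if sigma > 0 and abs(x - mean) > n_sigma * sigma:
                anomalies.append(i)
        history.append(x)
    return anomalies

# Steady CPU usage around 40%, with one spike to 100%.
cpu = [40.0 + (i % 5) for i in range(60)]
cpu[45] = 100.0
print(detect_anomalies(cpu))  # → [45]
```

Tightening `n_sigma` trades false negatives for false positives, which is exactly the balance the comment describes: you want to catch real trouble without training humans to ignore the alerts.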
A common probability-theory model such as Markov chains could be used as a reference.
There is no limit to the items this model could be applied to, but a few that come to mind, given the context, might be:
Load Generation or emulation at the filesystem, network or application level.
iowait, vmstats, memory leaks (why am I unable to account for N percent of memory?), cache hits vs. misses,
network stats, heuristics, buffer overflows, denial of service (listen/accept queue overflow), TCP RTT, NTP jitter, response time, etc.
It has a fairly good mixture-of-models approach to anomaly detection, even if it's not maintained.
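The Markov-chain idea amounts to: discretize the metric into states, learn transition probabilities from normal data, and flag transitions that were rarely or never observed. A toy sketch, where the bucket size, threshold, and function names are all invented for illustration:

```python
from collections import defaultdict

def to_state(value, bucket=25):
    """Discretize a 0-100 metric into coarse states (0, 25, 50, 75, 100)."""
    return min(int(value // bucket) * bucket, 100)

def learn_transitions(series):
    """Estimate P(next state | current state) from a training series."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(series, series[1:]):
        counts[to_state(a)][to_state(b)] += 1
    return {s: {t: c / sum(nexts.values()) for t, c in nexts.items()}
            for s, nexts in counts.items()}

def anomalous_transitions(series, probs, threshold=0.05):
    """Return (index, from_state, to_state) for improbable transitions."""
    flagged = []
    for i, (a, b) in enumerate(zip(series, series[1:])):
        s, t = to_state(a), to_state(b)
        if probs.get(s, {}).get(t, 0.0) < threshold:
            flagged.append((i + 1, s, t))
    return flagged

# Train on calm CPU usage, then test a series with a sudden jump to 99%.
train = [30, 35, 32, 38, 31, 36, 33, 37, 34, 30] * 20
probs = learn_transitions(train)
print(anomalous_transitions([33, 35, 99, 34], probs))
# → [(2, 25, 75), (3, 75, 25)]: both the jump and the drop back are flagged
```

The same machinery applies to any of the metrics listed above (iowait, queue depths, RTT, and so on) once they are bucketed into states.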
Looking through the CS140e class description made me a little envious of students who are both able to take the class live and also have 20 hours a week or so to really get the most out of the class.
1. 1 Raspberry Pi 3
2. 1 1⁄2-sized breadboard
3. 1 4GiB microSD card
4. 1 microSD card USB adapter
5. 1 CP2102 USB TTL adapter w/4 jumper cables
6. 10 multicolored LEDs
7. 4 100 ohm resistors
8. 4 1k ohm resistors
9. 10 male-male DuPont jumper cables, 10 female-male DuPont jumper cables
Edit: edited for solderless breadboard link rather than a solderable one.
For the 100 and 1k resistors, I'd recommend something like this: https://www.amazon.com/PIXNOR-Resistor-1ohm-10Mohm-Resistors...
That includes 20 1k and 20 100 ohm resistors, plus 20 each of 54 other assorted resistor values, and it costs slightly less than just buying the 100 ohm and 1k 10 packs that you suggested.
"A modern approach
- Multicore, 64-bit ARMv8 platform.
- Modern programming languages and tools."
The Raspberry Pi Zero W does not support 64-bit mode, since it uses a BCM2835 instead of a BCM2837. What's more, the BCM2835 does not even support ARMv7 instructions (only ARMv6).
1. Raspberry Pi 3: https://www.amazon.com/gp/product/B01CD5VC92
2. CP2102 USB TTL adapter: https://www.amazon.com/gp/product/B072K3Z3TL
3. Breadboard/Cables/LEDs/Resistors kit: https://www.amazon.com/gp/product/B01IH4VJRI
4. microSD card: https://www.amazon.com/gp/product/B001B1AR50
Note that I didn't need a microSD adapter since my laptop already has an SD card port, and the card linked has a microSD-to-SD adapter. Overall, this was $64 vs $98 for the above parts, and it all had free one-day shipping.
Shout-out to BeOS (the old geeks will know of it) which was the last promising new OS I encountered... and that was many moons ago
Of course, most of the shit we do with junky web apps today could just be presented as a 9P service with maybe a couple shell scripts in front of it, but the junky web apps already exist and are in use.
"There is no fork."
Apparently they're from 9front, a fork of plan9: https://news.ycombinator.com/item?id=12617036
http://9front.org/img/ is filled with a bunch of random stuff. The diagram of plan9 at http://9front.org/img/fs.png seems useful.
"Plan9 has been forked": https://news.ycombinator.com/item?id=2772718
Note the discussion on rc, plan9's scripting language: https://news.ycombinator.com/item?id=2773275
Maybe someone can borrow some ideas for their next OS.
EDIT: Aha: http://9front.org/propaganda/
On the other hand, you know what OS was practically invisible? DOS. And DOS was pretty great, so maybe you're on to something.
Do you know how much work it was to redirect the input/output of an MS-DOS application? It can get pretty insane.
You had to pick a memory model to compile against, for one thing: tiny, small, medium, compact, large, or huge. You had to deal with FAR and NEAR pointers, and it really drove home the point that sizeof(int) != sizeof(long) != sizeof(void *) when writing C on that system.
More info here: http://boston.conman.org/2015/12/13.1 and https://github.com/spc476/NaNoGenMo-2015
Anyway, in DOS, most little utilities and such worked just fine in tiny (and you could use .com, which simplified everything even more). Medium and compact were sort of pointless, because if you were writing anything you expected to grow beyond 64k, you used an HLL and let the compiler deal with most of the fiddly stuff. IIRC one of the compilers (may have been Turbo C) had a project/link option which let you cycle through all the memory models, and with a quick rebuild you could check executable sizes or run little benchmarks. I did a fair amount of programming in Turbo Pascal, and I just checked the manual: it basically hid the entire memory model argument, even going so far as to have that "overlay" functionality, which was basically a software paging unit for your program that would load groups of methods from disk as necessary.
Yah, and I wrote TSRs too... and hacks to take over BIOS/DOS INT calls, etc etc etc. These days it seems any software project that has been around for a few years is orders of magnitude more difficult than the stuff people claim was hard about DOS. Heck, there are _network_ drivers in Linux that are orders of magnitude more code (and more complex) than DOS or probably the vast majority of DOS applications. People simply didn't write 100-million-line DOS programs, so dealing with little hardware oddities is simple vs. trying to ferret out VM barrier bugs in Linux (or whatever).
That reduces things to almost an absurd level, and literally tries to ignore reality.
I want X!
Ok, we need this and that to get there...
I don't want this and that, I want X!
But you need this and that to get to X...
While it's true in essence, and people should always keep the end user and lofty end goals in mind, we should never lose sight of the ground, because that's where we exist.
Besides, different users want their operating system to do different things.
> At best an operating system is absolutely invisible ... but a new non-OS would probably be a better idea than a new OS.
This train of thought can be applied to just about anything. The best product for X would just get out of the way, and assist you seamlessly to do X.
The graphical shell is an application - KDE (plasmashell), GNOME (gnome-shell), etc.
The console shell is an application - bash, zsh, fish, etc.
The WM is an application - KWin, Mutter, i3, xmonad, etc.
The file manager is an application - nautilus, dolphin, ranger, etc.
The package manager is an application - apt, portage, pacman, nix, etc.
You can choose not to use any one of these (or replace them with alternatives). They exist because they do something essential - and the kernel exists because it provides them with a coherent view of the user's computer. It is their interface with the hardware.
OSs are not perfect, and they are facing headwinds (as well as some tailwinds like power constraints) but pretending they don't provide any benefits is just not thinking hard enough.
And I was a BeOS fan back in the day. Just stumbled across my old R5 box after cleaning out the attic. Very clever design for its time but didn't bring anything to the table in terms of networking, multi-user or distribution. It was a multimedia oriented, multi-threading speed demon and not much else.
For instance, today we don't have text terminals. We have high-dpi displays. So why is the first thing you reach for a text buffer? Stop it. Vector-based fonts should be step zero. Raster graphics ought to come along pretty far down the line when you get to texture surfaces. We don't run with video units with basic memory-mapped framebuffers any more. Our storage also isn't seek-bound in most cases, we have RAM that can bust the limits of even old filesystem designs made for bulk storage. We have more now and just like the physicists say, More Is Different.
But the next revolutionary OS will probably come along and announce their goal of POSIX compliance, and people will still trick themselves into thinking it's 'new'. I'd rather see an OS that realizes the things the Internet and web have taught us and integrates them. What people want is search and instant-on application functionality. They don't care if it's "an app" or it's "a webpage" or whatever. They want it to work; they don't want to "install", etc. And if you can murder the browser and get back to something native, and cut the eye out of every company monetizing itself through user behavior data sales, all the better!
+1. The familiar ideas and approaches are 'easy' but not necessarily the best (or even good) today. Let's start with the notion of an operating system and why it's needed. What purpose belongs within the OS vs. outside? Some fundamental research along these lines would go a long way.
I don't think applications are really the big problem these days with virtualization and all. If your OS can virtualize other OSs, that can act as your compatibility layer. Windows 7(?) did it for XP, MacOS X did something kinda like it for classic, etc.
The real problem is you have to offer a compelling reason for people to A) want to use it over alternatives, and B) develop for it over alternatives. Personally I think if you can make it really developer and power user friendly a lot of applications might show up just because the system is a joy to work in for the kinds of people who make things. You'll note that this is pretty much the opposite of where Desktop OSs have been heading.
Also, drivers. Not sure how to crack that nut.
Of course, that'll get you nowhere unless the new hotness really is so much better that cost/benefit of running it, plus oldOS in a VM, is greater than that of just running oldOS.
...but it’s in no way a departure from the status quo, even referring to itself as kernel + GNU/BSD style ecosystem.
I wonder if we will see a new paradigm anytime soon. The Hurd?
In isolation it might be superior and have laudable attributes, but you're maybe a millionth the way towards competing with current automobiles for a general purpose solution in any fairly densely populated country.
I'd like to see something like a self-hosting library OS. The entire OS is a single giant application, software is installed by adding the source code and recompiling the whole thing, and security is done using language-level and package management constructs.
Specifically 'Managed code: Memory protection on object level, rather than on process level'
MirageOS is a library operating system ...
What I find interesting is these two projects are using a similar approach but targeting opposite ends of the spectrum. Atomic is intended as a cutting edge server OS for enterprises that want an integrated stack for running containerized infrastructure (they also started publicizing Atomic Workstation as of the last release, although I found it painful to use due to a lack of docs). Endless is targeting emerging markets, composed primarily of first-time computer users. OSTree+Flatpak allows the desktop to run similarly to a mobile OS, and there’s minimal chance of users breaking the system itself.
and then again a year ago https://news.ycombinator.com/item?id=12232385
This isn't even remotely true, but we're in the age of short memories. There have been several operating systems built in Java within our (careers') lifetimes.
And dozens of operating systems that predate C. https://en.wikipedia.org/wiki/Timeline_of_operating_systems
There's nothing preventing other operating systems from being written in a safe way -- they just aren't. There are also multiple definitions of the word "safe". As in, "BSD or Linux are safe to bet the software architecture of my multi-million systems budget".
No new ideas mean no reason for me to even consider adoption until it reaches critical mass. Maybe in a decade I'll give it a look.
Edit: Multics was written in PL/I, which itself just had a new stable release this past September...
Edit2: Redox isn't really that safe even. It's pretty cavalier and leaves a lot up to the application. It uses the unsafe parts of Rust a bit. An application can still malloc and forget to free -- so aren't you right where you started? Really, every time this comes up, I would like to posit that "people evangelizing Redox don't even know what the f* they're talking about", but I honestly don't have time outside of my real work to dig into the thing. jackpot51 is fantastic at PR for his project though, and I don't want to discourage anyone from working on it. It's good that it has an active community, I just don't need to hear about it.
Edit3: Checking Redox's source shows that about 1.5% of the code invokes 'unsafe', and they cite this in their documentation, but what I really want to know is how frequently those lines of code are called.
I've never used Rust, but have looked at it several times over the past years. Doesn't the 'unsafe' classification allow future automated tools/theorem provers to challenge those specific lines? Looking for vulnerabilities/bugs in only 1.5% of the code instead of the entire code base sounds like a nice improvement.
The upside of marking code unsafe is that you can find out that 1.5% of the code is unsafe and focus in on it.
Put another way, it makes the haystack significantly smaller.
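A crude version of that 1.5% measurement can be sketched in a few lines. This simply counts source lines that mention `unsafe`, which overcounts comments and undercounts the bodies of unsafe blocks; it's only an illustration of how the metric shrinks the audit surface, not how Redox actually measures it.

```python
import re

def unsafe_fraction(source: str) -> float:
    """Rough estimate: fraction of non-blank lines mentioning `unsafe`.
    A real audit would parse the code and track unsafe block extents."""
    lines = [l for l in source.splitlines() if l.strip()]
    if not lines:
        return 0.0
    flagged = [l for l in lines if re.search(r'\bunsafe\b', l)]
    return len(flagged) / len(lines)

# A hypothetical Rust snippet: one of five non-blank lines is unsafe.
sample = """
fn main() {
    let x = 5;
    let p = &x as *const i32;
    unsafe { println!("{}", *p); }
}
"""
print(f"{unsafe_fraction(sample):.1%}")  # → 20.0%
```

The static count is only half the story, as the comment notes: a line inside a hot loop matters far more than one in an init path, so pairing this with runtime profiling would answer the "how frequently are those lines called" question.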
That was in 1961 at Burroughs, using ESPOL.
There were plenty of OS being built in Algol and PL/I variants outside AT&T walls since the 60's.
History is full of OS written in other languages that weren't either C or C++, including Apple's Lisa OS and the first editions of Mac OS.
I have a bunch of other random gripes with POSIX-style OS interfaces and find it a bit frustrating that these interfaces haven't changed much in decades and seem to have attained a lot of inertia of the "we do it this way because we've always done it this way" kind.
In the case of seL4, verifiability comes from a formal specification of its key operations that is proven using a theorem prover. The spec is implemented in multiple languages including Haskell and of course C.
As it stands, formal verification is limited to very small codebases: seL4 itself is on the order of 10k lines of C. Extending the proof further outside the current boundary to cover additional functions that are typically found in OS kernels would be prohibitive to say the least!
It's immediately and very obviously useful for these fields. It is small indeed, because it follows the principle of minimality, introduced by Liedtke's L4. But do not get this wrong: It is a general purpose µkernel.
This principle of minimality (the supervisor mode µkernel should only do something if it can't be done by a service in user mode) is the difference between 1st and 2nd generation µkernels. seL4 is a 3rd generation µkernel: It's designed around capabilities.
Genode is building a framework to create OSs based on µkernels. They do support many µkernels, but there's been some serious effort put into their seL4 support. Genode isn't quite there yet for this purpose, but does have ambitions of being the base for a general purpose OS.
Here's a 2014 update on lessons learned from L4.
If at any point you've thought "sure, but doing things like this is slow", then I suggest reading this.
Or even TempleOS?
Then there are some older systems worth looking at (e.g. ITS, MCP, VME/B, Oberon).
Some ideas: The system language should be the operating system. No need for files if it's image-based, just persistent objects. Atomic system calls. Capabilities instead of access control.
iOS is the new and promising operating system.
It challenged and made us review many of the assumptions that everyone considers to be universal for an OS, even though they have no reason to be:
- The need for a user-accessible "file system."
- The need for multiple, customizable windows on the screen.
iOS dared to go against the grain and people rallied against it.
Even though the vision seems to have gotten somewhat muddled post-Jobs, with some of the trademark stubbornness giving way under Tim Cook, and despite the annoying bugs and inconsistencies lately, iOS has actually worked surprisingly well; the iPad is the first computing device that many of the older people in my family can comfortably and confidently use.
It still has some way to go, but if you give it an honest chance, you'll see that an iPad [+ Smart Keyboard + Pencil] can easily perform 70%-90% of the tasks that most people ever need to do on a computer, without many of the jags associated with traditional desktop operating systems.
With the recent rumors about unifying iOS and macOS while still keeping desktop and mobile user interfaces distinct, I'd like to believe that Apple have some exciting plans ahead for a future OS made up of the best features from iOS+macOS.
And multiple overlapping windows were the big breakthrough with the original Mac OS. Even then it was quite restricted (only desk accessories could overlap) until System 6/MultiFinder added the ability to run more than one application at a time.
So, while I agree the iPad model works well for a lot of things, I think the dedicated tablet model is sort of dying now that ~6" phones are everywhere. When the iPad broke, no one missed it.
However there is still hope, when we look at the ongoing userspace transitions on mobile OSes.
I mean, the kernel might still be UNIX or Windows like, but the userspace is quite different.
Minoca is for embedded devices... not really GP
SPIN looks... very rough/early-stage
Fuchsia looks... interesting, I guess, even if it's going to be subject to the same exact mutation-related bugs and security holes that have plagued C-based OSes for years
Thanks to NetBSD/rump kernel and the POSIX persona of the Hurd, it can get a whole bunch of device drivers with relatively little effort.
The future of OSes is stuff like android, iOS, Qubes (security) or MirageOS (simplicity for security)
Also 'make a new os' is much like 'rewrite vs improve in place'. Most mature engineers do improve in place. New systems research innovations come through improve in place operations. Change a filesystem, change that subsystem and eventually you have a new way of doing things.
Of course, it’s nowhere near polished enough for day-to-day use as macOS or Windows.
It's a network effects thing. Everyone demands POSIX compatibility, and it's getting to the point where software doesn't even try to be cross-platform anymore; it just assumes you've got POSIX and a GNU environment. Possibly even specifically Linux -- Docker will run on other OSes, but it does so by launching a Linux VM to run all the containers in.
"POSIX Abstractions in Modern Operating Systems: The Old, the New, and the Missing"
I think something interesting would be an OS based on BEAM as an underlying threading and memory management model, if you could put in security guarantees.
Haiku is an open source free rewrite of BeOS and they are trying to make an OS that can run BeOS apps.
AROS is an open source and free AmigaOS 3.X rewrite, it can run old Amiga apps.
OSFree is an open source and free OS/2 clone.
ReactOS is an open source and free OS to try and be like XP/2003 in a way and has made some progress recently.
If I was in charge of Microsoft or Apple, I'd look into one of those OSes to fund and exchange code with instead of creating one from scratch.
While they are not ready for prime time, if some billion-dollar company invested one million USD in just one of them, it could get out of the alpha stage and go beta or retail in a few years or so.
That’s ignoring stuff like filesystems, network protocols, shader compilers, or whatever else you need to get basic functionality out of the device.
Also, a given chip might require anywhere from 2-3 drivers (for something simple like a power controller) to hundreds plus a stand-alone OS (for things with an embedded microprocessor, like a drive or some FPGAs)
It is tedious work, but I’d love to see the BSD’s band together with a hardware manufacturer to produce a high end consumer-grade laptop with a 4-5 year production lifespan, and well-supported hardware.
There haven’t been any noteworthy hardware improvements in that space in the last 5 years (arguably a 2013 MBP is better than a 2018 one in most ways), so this isn’t as crazy as it used to be.
These laptops could be similar to Windows laptops also sold by the same vendor, which reduces engineering costs. Dell sort of does this for Linux, but they churn the models too quickly for the BSD’s to catch up, in practice.
The vendor could rotate a new model in every 2-3 years (maybe rotate form factors each time), and make it a sustainable business.
There is precedent for this in the embedded space. Look at PC Engines, for example.
Many modern devices (GPUs are most famous for that but there’re others, e.g. printers) include their microcode/firmware/binary blobs as a part of their device driver. Technically you can reuse these binaries for different OS, but legally, at least if you’re on American soil, you can’t.
Most people would not be willing to give a chance to what may be a great product in 5-10 years. So the next great operating system will most likely sneak in through a Trojan horse. My bet would be, containers, something, something, something, operating system.
The main problem is compatibility. The solution is virtualization. Say it at least provides headless Linux (Docker?). That's enough to start. Imagine that Linux runs not too badly here (i.e., fast enough), just to keep dreaming.
The next question is: why? Why build for this OS if I can "just" run Linux here?
The conventional narrative is that the OS is "boring", "invisible", etc. I think instead that this is only true because most OSes are boring. If we think of the OS as the master integration point, then it's much cooler.
Consider the possibility of turning Unix text streams/pipelines into the actual UI paradigm (like React), with the ability to connect not only apps, but components inside them. For example, if I'm in Mail and want to crop the image I just put here, I need to detour through Photoshop and then return to Mail.
I think it's possible to say "This is an image. Any app/component that can operate on it may do so." So, in short, it's a UI manifestation of:
MailApp.currentMail.images | PhotoShop.crop > returnBack
So, like in a REST API, the apps publish their URIs with Accept headers like "json", "xml", "img", etc., and this allows them to be matched to other apps that can operate on them.
The apps would also run in "docker-like" containers by default (fully isolated), yet communicate like Erlang actors (sending messages) with any other app that declares "Accept: FORMAT", letting users express what we now do on the terminal.
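The Accept-header-style matching could be sketched as a registry where each component declares what it consumes and produces, and the shell only composes type-compatible pipelines. Every name below (the registry, `photoshop.crop`, `mail.attach`) is invented purely for illustration:

```python
# Hypothetical component registry: each component declares the content
# type it accepts and the type it produces, REST-style.
REGISTRY = {}

def component(name, accepts, produces):
    def wrap(fn):
        REGISTRY[name] = {"fn": fn, "accepts": accepts, "produces": produces}
        return fn
    return wrap

@component("photoshop.crop", accepts="image", produces="image")
def crop(img):
    return f"cropped({img})"

@component("mail.attach", accepts="image", produces="mail")
def attach(img):
    return f"mail-with({img})"

def pipeline(data, data_type, *names):
    """Run data through named components, rejecting type mismatches."""
    for name in names:
        comp = REGISTRY[name]
        if comp["accepts"] != data_type:
            raise TypeError(f"{name} accepts {comp['accepts']}, got {data_type}")
        data = comp["fn"](data)
        data_type = comp["produces"]
    return data

print(pipeline("IMG_01.png", "image", "photoshop.crop", "mail.attach"))
# → mail-with(cropped(IMG_01.png))
```

The type check is what would let the shell offer only sensible completions: any component whose `accepts` matches the current data type is a valid next stage, which is the "any app/component that can operate on it may do so" idea in executable form.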
Another thing that totally needs a revamp is file exploration. It's just not right that Google answers faster than a local disk search.
Also, why not isolate the files of SYSTEM, APP + CONFIGS + TEMPS/CACHES, and USER_DATA in containers, so if I want to do a backup I just copy the USER_DATA container and am worry-free?
These are just some random ideas. It's about time to think of the OS as an app too, and start doing cool things :)
In the area of security there is also a lot to do. A minimal step is to acknowledge that most users perform multiple roles on a system, and the case where a dedicated user exists to be admin or DBA is just a special case.
So, reverse the roles/users.
I'm jhon. Now I change hats to be admin. Then I change to be a normal user, then change to be a developer. I don't need 3 user accounts to be all of this.
Because I'm all of this.
A while back, I had just bought a new desktop. I couldn't just use the hodgepodge of ~1280 MB worth of RAM from my old desktop because it was a different technology, but I also couldn't afford a GB of ram that would work with the new desktop.
I kept my old desktop on the network, made a big ramdisk out of 1GB of its RAM, and exposed it to the network as a network block device (the kernel driver was called 'nbd'). I mounted that device on my new desktop and used it for swap, because it was orders of magnitude faster to eat all the transmission, TCP, etc. delays than to hit disk platters.
.... which is _really not all that different_ from why google can return search results faster than you can search your local hard drive.
Regarding your filesystem suggestions: this is already commonly done on Linux. Generally, separate filesystems are created on separate partitions for each of /mnt, /home, and / (everything else). If you wanted to back up all user data, you could find the partition, then run `dd if=/dev/home of=$BACKUP`.
Regarding your third point. Windows UAC already allows a user to jump into being an admin when needed. I'm unsure why you would need a separate user account for other roles. Users exist primarily to protect your stuff from other people on the computer. You shouldn't need to protect your files from yourself.
>Regarding piping between app: this is the role the clipboard serves today
This is more a "scratch" area than truly piping. If you can't do:
map(PhotoShop.listImages, zipToFile) |> map(sendEmail)
naturally, then it's not the idea.
> this is already commonly done on linux.
In fact this one is almost too close. But think, for example, of all the trash/caches/temps stored in HOME.
The problem is that the whole thing relies on ad-hoc convention; let's imagine developers and users will respect the filesystem layout...
You can totally store your photos in /bin. You can't trust that ONLY your personal data is in HOME.
> Windows UAC already allows a user to jump into being an admin when needed. I'm unsure why you would need a separate user account for other roles.
No, it's the opposite: the idea is not to create more accounts, but to switch roles per context.
I'm jhon/admin when managing computer
I'm jhon/user when browsing
I'm jhon/developer when compiling
The important thing to note is that the OS shows the way, similar to how iOS changed everything.
Mildly related: I found "Nand To Tetris: The Elements of Computing Systems" to be an amazing, bottom-up, hands-on approach for learning about the fundamental layers of computer architecture, from hardware to assembly to OSs.
Not that you will be experimenting with new OS concepts.
Which is sort of a shame, because it seems much of OS research might be turning back to concepts that haven't been explored since the "RISC/Unix" revolution of the late 80's and early 90's proclaimed that multiple-privilege-level machines, capabilities, fully ACL-controlled operations, message-passing kernels, and dozens of other concepts weren't "fast", or fell by the wayside because the RISC and traditional Unix model couldn't support them, while we continue to pay a huge hidden tax for the flat-address/paged memory model...
Thanks for the link.
It is very funny that their "Screenshots" tab is full of pictures of real computer screens:
I don't know if any modern video cards can still do that, but it might be good for these guys to look into it.
Most video cards these days have built-in support for capturing video and encoding it on the fly in hardware, with a <5% impact on performance. This wouldn't work for these guys, as it's probably a feature enabled by the driver.
I am a self-taught developer and would love to learn it in my free time.
1 Raspberry Pi 3 (Model B)
1 1⁄2-sized breadboard
1 4GiB microSD card
1 microSD card USB adapter
1 CP2102 USB TTL adapter w/4 jumper cables
10 multicolored LEDs
4 100 ohm resistors
4 1k ohm resistors
10 male-male DuPont jumper cables
10 female-male DuPont jumper cables
Edit: I suppose they may not be running Linux on the Pi, certainly not the graphical variants on a 4GB card.
(Note though that this uses Rust as it was in 2014.)
http://rust-class.org/0/pages/final-survey.html was very instructive to read at the time.
Assignment 0: https://web.stanford.edu/class/cs140e/assignments/0-blinky/
Silly idea, but how about a "super" terminal OS... that does a few primary things:
1. Responsive clients for various UIs (X, VNC, RDP, SSH, Powershell, etc.) - Having a full UI to use my smart phone would be awesome.
2. Run VMs (which can run docker, etc.)
3. Uses some mechanism (wave hands here) on the remote machines to facilitate storage, computation?, graphics, audio, clipboard and printing on the local terminal.
4. Strong support for a few devices. Pick a few network cards and graphic cards to support, perhaps a few other basics, and leave it at that...
I know it would still take years of writing a LOT of code, but this could reduce the lack of software support. Drivers are still a sticky issue...
(Yeah, like I said, a silly idea!)
Ironically, Windows Home also qualifies, and new low-end fanless PCs can drive 4K displays, so I’ll likely be running a setup like that by Summer, with zero local data except remote connection settings.
Nah, just kidding. An example of the kind of thing that might change: the current stdlib mutex implementation depends on pthread mutex objects, which can't be safely moved, so there's an additional allocation to box them. The parking_lot crate is an alternative implementation that interfaces with the system at a lower level, so it can avoid this allocation.
I found I learned a lot about Rust's low level implementation details when making the chart, so can definitely see how it would be useful for such a class, and am very happy it's being used in this way.
By the way, I'm extremely sad to be missing your talk at the Recurse Center; I'm in Canada until Monday :(
the GP fails to acknowledge Rust's other features such as its type system, but i'm not convinced those will revolutionize operating system architecture either.
that doesn't mean i'm not excited about the potential of a new open-source general purpose operating system kernel written in Rust and entering the same space as the Linux kernel. it seems like a good fit for the language and an area where modernization is past due.
Of course, Rust doesn't have a stable ABI yet, but then again, neither do many OSes. We'll get there!
This exists: https://ocw.mit.edu/courses/mathematics/18-s996-category-the...
I have a feeling you know absolutely nothing about rust.
Even the best programmers and programs in C have issues with what the borrow checker will catch. Moreover, the expressiveness of the type system is worlds above and beyond any other mainstream (so not Haskell or OCaml) language. Saying it's C with checks is like saying French and Hungarian are the same, but with some different words.
Please edit uncivil swipes out of your comments here. The rest of what you wrote is fine.
Moreover, it was mean-spirited. Probably because they were reacting to the second part of dmitrygr's quote.
Consider the Stylo project in Firefox, for example. Yes, Mozilla could have done the parallelization in C++ instead of Rust. They even tried! Twice! But it failed both times. That doesn't mean that it's impossible.
In my experience, this is what people mean when they say things like this.
Look, I'm a big fan of Rust - I want to see it succeed. I go to Rust meetups in my city, I've advocated for its adoption in my company, yadda yadda. But I don't think these vague half-truths are good for the language or the community. In fact, I think they'll be harmful to the community over time as the language fails to live up to expectations. When I started programming, Java was in the position Rust is in now. It was being given so many vague platitudes that it experienced pushback a few years later as developers realized it didn't fix all their problems.
I don't mean for this post to be mean or condescending. Tone is hard to transmit over the internet.
That is, I 100% agree with your comment, but I don’t see it happening in this thread.
I took the "Rust is like C" line to mean that you can use Rust anywhere you can use C, running at C speeds (the whole zero-cost abstraction/no runtime/unsafe blocks thing), something that can't be said about OCaml/F#/Haskell (TTBOMK).
Moreover, if you look at context, you'll see that his point is that when it comes to OS design, a rust based OS will look mostly like a C based OS.
I would love to see more commercial success for micro/nano-kernels (vs. the staple monolithic kernels)
I'm not a mod, so I can't say for sure, but I bet that was the thought process.