I had heard of Inferno and Plan9, but aside from some YouTube videos and links, I never really looked it over or took it for a drive. I'll have to check this out; after all, it's amazing what nuggets are buried in the recent past of CS.
Here are some timing tests done in various languages (C, Java, Perl, Tcl, and others) in 1998, compared with Inferno. Brian W. Kernighan of K&R fame was one of the authors at Bell Labs!
And here's a comparison with Arthur Whitney's K language, made not long afterward I believe, using the same code to compare with K on a 100 MHz Pentium. K is significantly faster and shorter in LOC.
The tests were mainly loops, text, and stdio. APL-derived languages like A+, K, J, or Q don't usually do loops; they are better with binary and memory-mapped files. Even so, K did very well in the benchmarks.
There are good reasons why a dev team might not want to switch over to K for its operations (or switch languages at all, really), but having seen K in action, I have to respectfully disagree that readability and maintainability are among them.
First, it's much easier to learn k/q to a good-enough level than it is to learn C. Second, this apparent barrier has a nice effect on the salary you can command for it.
It's a good system and is used in a couple of commercial systems I know of.
It has also been ported to IBM's Blue Gene and is used in the research programme there.
I've been really interested in playing with it since my college years, but every time I poked around, I felt it wouldn't be (in my very uneducated opinion) fit for production use.
However, the idea of the OS is very appealing to me, so I would love to hear more about how it fares in the real world and what a compelling use case would be.
I worked for a startup trying to build a phone. I'm under NDA on that, so I can't say much about it, except that we incorporated SQLite into it easily enough. That's the beauty of an OS in C: tack on a 9P interface and you don't need drivers or headers in your Limbo code.
In fact, that's the greatest takeaway from Plan 9 and Inferno for me. Add 9P to your server and you never need native drivers again; it's a bit like REST. Once you can mount 9P, you can compose all sorts of stuff.
The canonical example is TCP. If I can mount your /net/tcp on my file system, I can use your network stack, even if I am only connected to you by a serial cable. And it is per process, so I can have two shell windows importing network stacks from two different remote machines and use two separate networks.
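Part of why bolting 9P onto things is so cheap is that the wire protocol itself is tiny. As a rough sketch (message layout per the published 9P2000 spec; the function name and defaults here are my own, just to illustrate the framing), building the Tversion message a client sends before it can attach and mount a remote namespace looks like this:

```python
import struct

# Minimal sketch of a 9P2000 Tversion message, the first thing a client
# sends before it can attach and mount a remote namespace like /net.
# Layout per the 9P2000 spec: size[4] type[1] tag[2] msize[4] version[s],
# all little-endian; a string is a 2-byte length followed by the bytes.

NOTAG = 0xFFFF      # Tversion must carry the special NOTAG tag
TVERSION = 100      # message type number for Tversion

def tversion(msize=8192, version=b"9P2000"):
    body = struct.pack("<IH", msize, len(version)) + version
    header = struct.pack("<IBH", 4 + 1 + 2 + len(body), TVERSION, NOTAG)
    return header + body

msg = tversion()
# the size field counts the whole message, including itself
assert struct.unpack("<I", msg[:4])[0] == len(msg)
```

The whole protocol is just a handful of fixed-layout messages like this (walk, open, read, write, clunk), which is why adding a 9P interface to a device or server is so little work.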
Thanks for the details!
Perhaps not with the purity that the Plan 9 developers hoped for, but in the end we use computers to get work done.
OS choice is a market dominance issue.
It's worth paying attention to this if you are ever thinking about going to work at Google, because it is possible to have extensive conversations ahead of time about the sort of work one is and is not interested in doing, and still end up completely failing to communicate. "Library", "tools", and "file system" are other terms which have Google-specific meanings.
This wouldn't be so much of a problem if you could just apply for a job directly, instead of having to be hired by Google generally and then getting assigned wherever they think you'll fit. (Is that still the practice, or have they improved things?)
On the FOSS front, I don't expect much innovation in that area, as the alternatives seem to be just more POSIX clones, with the exception of unikernel research.
Quite a few features have moved into the mainstream, though often not as consistently as before, and there's actually plenty more in the making over the past decade. The comments here would make you think there's no innovation in CompSci, but there is. JX is a nice, recent example: they do a capability/isolation-like architecture, implement it via language-based security in a VM, make it faster than many microkernels, put half the drivers in the VM, and run that and the other parts of the drivers on a microkernel for lower risk. They already have a web server to show its usefulness. Clever stuff that might be good in appliances, maybe with MINIX 3-like self-healing.
For old ones, Burroughs stays at the top of my list, given they scientifically designed a HW/SW combo from a language report (ALGOL) to be the ideal machine for it. You usually worked closer to the algorithm level instead of the low level, with good reliability and security. KeyKOS had fine-grained isolation of everything plus persistence, where your data... your whole running system... would likely survive a crash. VMS's clustering and central support for clustered apps was so good that you basically just configured the boxes, told the apps to use the OS functions, set policy, and you were good to go: 17 years of uptime on the best one. Convergent's CTOS and Tanenbaum's Amoeba both made a collection of workstations act like one mainframe or minicomputer OS with all resources shared for maximum utilization. We have Grid Computing now, but without that clean integration.
The most interesting stuff for developers and hackers was in the LISP machines, especially Genera. Features from LISP that should go into every programming language: interactively running commands for testing; incremental, per-function compilation; live updates of that to the running app for instant results. Combined with safe typing, this would let you crank out code like lightning. The OS itself was implemented in this high-level, highly debuggable, live-updating language, and it came with source. So you could trace a problem in your app from it down to a system call, seeing the actual source code of the OS along with the current, running state of how it was being [mis]used. You could then correct the system while keeping it running, if you chose. Live updates and modification of your own running system, in the same language, from the apps all the way to the bottom of the OS, is just unheard of today. Needless to say, they rarely crashed outside of hardware failure. :)
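For readers who never used such a system: the per-function, live-update workflow is easy to approximate in any dynamic language. Here's a tiny Python sketch (my own illustration, nowhere near real Genera) of recompiling a single function into a live namespace while the program's state survives:

```python
# Crude approximation of Lisp-machine style incremental recompilation:
# recompile one function into a live namespace while the rest of the
# "running system" (here, just the namespace and its data) stays up.

live = {"requests_served": 41}          # pretend this is long-lived state

# First definition, with a deliberate bug (uses 3 instead of pi)
exec("def area(r): return 3 * r * r", live)
buggy = live["area"](2.0)               # 12.0

# "Fix the bug live": recompile only this function; state is untouched
exec("import math\ndef area(r): return math.pi * r * r", live)
fixed = live["area"](2.0)               # ~12.566

live["requests_served"] += 1            # long-lived state survived the update
```

The difference, of course, is that Genera applied this from the application all the way down through the OS itself, with full source and running state visible at every level.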
As far as mobile goes, most of the work is going into isolation and virtualization mechanisms, some with TrustZone, etc. OKL4 was the best result there. I don't follow mobile as much, and search engines are often clogged, but some Google wrangling got me these innovations.
Reflex addresses heterogeneous programming of fast and low-power CPUs, normally a manual, tedious process. They applied the old concept of software DSM to make it easy. I've been a DSM fan since my supercomputer days, so that's awesome.
Alright, I've thrown a few others in above that illustrate the different directions OS research is going. The last two are modern grid OSes, with one quite innovative in that it works on ad hoc networks whose assumptions are as bad as a WAN's. The second one has lots of references to interesting systems. That should give you a taste of the clever stuff going on in mobile and grid that you might have missed.
Agreed, but how much of that made it into the mainstream computing we are aware of and can touch and fiddle with? I don't consider a deeply embedded baseband processor to be mainstream outside the handful of radio engineers.
It's crazy how Burroughs (B-5000) solved many issues elegantly and Intel's team with i960 lost the internal challenge to the x86 group. It may have been before its time to some extent, but worse isn't better when it concerns the underpinnings of computing.
Barrelfish is very nice but purely research. OKL4 used the Dresden design, took efficient microkernels to another level, and managed to deploy into millions of devices, and that's great, but the only mainstream microkernel'ish remnant I could point to that is active and modern is NT's userspace graphics and audio drivers. There are similar things on Linux, but not quite the same. I find it telling when network gear switches from VxWorks or QNX to Linux with extra kernel modules. Market forces, and maybe the illusion of a larger pool of Linux developers, might be the reason, but it's not too smart.
The success of UNIX in the 80s led to the unsatisfactory OS architectures we're using today. It also led to the rise of C and the introduction of trivial attack vectors all over the place. Even plain Pascal was much safer and used on Apple systems above ASM.
Many things made it into the mainstream: use of high-level languages with GC and automated checks; the mainframe model and many of its techs reinvented in the cloud market and Xeon CPUs; clustering; microkernel and hardware-up approaches dominating mobile research and products; today's highly concurrent desktops anticipated by the likes of BeOS. I'm still waking up, but this is what I remember off the top of my head.
"I don't consider a deeply embedded baseband processor to be mainstream outside the handful of radio engineers."
Damn, my sleepy ass might have read it wrong. Still, it might have been right to cite it, given that things like Samsung Exynos, with big cores and little cores, are getting more prevalent. lowRISC has a main core plus tiny cores. Even embedded systems are mixing cores of different strengths. What it gets you is a significant boost to concurrency with lower power and cost. On the other end, you can emulate Channel I/O from mainframes to get great throughput.
"It's crazy how Burroughs (B-5000) solved many issues elegantly and Intel's team with i960 lost the internal challenge to the x86 group. It may have been before its time to some extent, but worse isn't better when it concerns the underpinnings of computing."
My thoughts exactly. Burroughs is being reinvented under crash-safe.org with functional programming on top, and Draper is integrating the HW enforcer with RISC-V, with open-source plans. Some hope. The i960 was clever. The lesson learned is to ensure compatibility with a language or whatever else is popular, to piggyback on its popularity; CHERI is doing that. Otherwise, you might die off.
"but the only mainstream microkernel'ish remnant I could point to"
Microsoft is taking the lead on integrating cutting-edge tech like driver mitigations. Unfortunately, mainstream desktops, etc. mostly ignore this. However, in embedded, there are lots of microkernels in use. One mainstream product line, BlackBerry OS and the PlayBook, used one of the best ones, QNX Neutrino: speed, responsiveness, and reliability are excellent. Too bad about the network switches, as Linux isn't going to beat QNX on reliability. The reason might be that they're usually deployed in an HA configuration that survives crashes anyway. As long as it's not a Byzantine failure, that works well enough for corporate environments.
"The success of UNIX in the 80s led to the unsatisfactory OS architectures we're using today. It also led to the rise of C and the introduction of trivial attack vectors all over the place. Even plain Pascal was much safer and used on Apple systems above ASM."
The success of System/360 + COBOL, CP/M + DOS + x86, UNIX + C, and recently Mach + UNIX + Objective-C. It was a team effort. :) As far as Pascal goes, you might find the design and assurance sections of the kernel below interesting. It was written in Pascal for safety and was the first secure kernel.
Btw, if you want, I can try to dig out an interview with Dr. Schell that I found, which traces his invention of the INFOSEC field from the start. It's a long interview but was very worth it. There were many surprises along the way, like corporate types wanting high assurance but the IT industry fighting it, and Burroughs's legacy going further than you'd think... even into Intel x86 CPUs. Not everyone wants to spend 30 minutes reading an interview, though, so I understand if not.
Unfortunately, the devices BlackBerry created didn't sell enough, and now they're trying to be just another Android manufacturer. It's sad, but a kernel alone doesn't make a user-facing device.
> interview with Dr Schell that I found that traces his invention of INFOSEC field from the start
> Burroughs's legacy going further than you'd think... even into Intel x86 CPUs
How so? Do you mean the half-hearted attempts by Intel like their transactional memory extensions or MPX? The B-5000 was great for writing implementations of GC'ed languages.
I'm not going to spoil it. What I will say is that most thought Schell and Karger invented the security stuff, with Anderson of the Anderson Report being some suit or manager. We also thought the Burroughs stuff happened in parallel with, but separately from, what led to the Orange Book and Intel's security extensions. I always thought Burroughs B5000 tech was too good to be isolated in history. We also thought businesses, as is often said, saw no value in real INFOSEC and instead wanted checkboxes.
The interview will counter all of that. I especially liked learning how the first certified system and demonstrator, the SCOMP, was built. All this time I thought the government funded it out of a belief in INFOSEC. Wrong again, with another surprising answer.
Enjoy the 150 pages haha.
But, they took the easy way out.
There is also an ActiveX version, i.e. hosted in Internet Explorer, but I'm not sure if it still works.
They certainly haven't updated it to match the modern world…
Did anyone make any headway porting/redoing the browser plugin to Mozilla? There were some mumblings about a GSoC project for that not long ago.