Anyway, Oberon is a memory-safe and garbage-collected language without the pointer trickery that makes C so dangerous. So how does Oberon implement the necessary low-level parts? Fortunately, some version of the source is easily browsable online: https://people.inf.ethz.ch/wirth/ProjectOberon/index.html
Here is for example Kernel.Mod: https://people.inf.ethz.ch/wirth/ProjectOberon/Sources/Kerne...
Low-level operations are done with magical functions that simply perform an operation on a memory address identified with an integer, e.g. `SYSTEM.PUT(q0+8, q1)`. These are then presumably handled directly by the compiler (the System.Mod file does not define any PUT procedure). This is syntactically more awkward than the equivalent in C (`*(int*)(q0+8) = q1`), but perhaps things that are unchecked and potentially dangerous should be syntactically awkward. It's not like a lot of code will be written using these facilities.
>There were some spiritual predecessors where the compiler itself was part of the OS
Sounds very much like Forth or Lisp to me (and perhaps even Erlang etc.). Any language that's implemented like a virtual machine internally is suited to being an "integrated" operating system.
I think there are tremendous advantages to doing this as well: it reduces the levels of abstraction between the hardware and the user applications. Fewer layers of abstraction mean a snappier experience for the users, and also make it easier for the developer to integrate things vertically.
http://tunes.org touches on this type of thing as well
I wonder if there is a lesson in the fact that all of the tightly language-integrated operating systems are dead. While Unix is clearly tied to C at an API level, the actual ABI is entirely based on machine code, and can be accessed through any language that can produce a binary. The downside may be that tight integration hinders language innovation, in that new languages have to target the "native language", which is awkward if it is high-level. Genera (the Symbolics Lisp OS) did host some C and FORTRAN compilers ("Zeta-C", you can get it online), but I am not sure how fast it ran. Unix is often disparaged for providing "mechanism" instead of "policy", but maybe that has serious evolutionary advantages.
I see your point, but the Oberon system does have an ABI, it's not necessarily coupled to the Oberon language but it is coupled to the linker of the system. I have created binaries for Oberon in a rudimentary assembler, it's not hard to have them work together with the native libraries (modules). In fact it's less convoluted than ELF for example.
In Oberon an 'executable' has one or more commands, somewhat similar to how git has subcommands like pull, push & commit.
But everything is reusable: you don't need a separate libgit if you need git support in another application. Neither do you need to invoke a command in a process and capture its output.
In Oberon, git and libgit would be the same thing. Everything you execute as a command is directly reusable as a native function call from other apps.
In Oberon not only is every module an application, the exported procedures are available as commands on the REPL and callable via mouse actions as well.
However, in modern Windows you try to achieve a similar experience via what PowerShell allows for (.NET, COM and DLL entry points) and OLE Automation.
Still, it isn't as easy and painless as the whole Oberon OS made it.
I think the world is ready, or almost ready, for a comeback in this kind of design. Given that we have so many open, compatible standards (JSON, Office Documents/XML, hell even TCP/IP, etc, etc), the "traditional" problems of esoteric systems and their incompatibilities with each other, like we had back in the late 80s and 90s, seem to melt away. We are at a point where we can and should be experimenting with completely new systems from the ground up. Otherwise we will never get anything new, and everything will just be some iteration of Unix for the rest of time -- like medieval scholastics endlessly debating Aristotle and not discovering anything truly novel.
In many ways many Forth systems were simultaneously the programming language and the operating system, especially on small systems. Having a single integrated system meant you didn't need a lot of space, which was important when you were trying to develop software on a 1MHz system with 8KiB RAM.
There have been many operating systems written tightly integrated to a language. Oberon is one, there were several that were based on Lisp, and many Forth systems count. The early Smalltalk systems probably count as well.
I agree with you that tightly language-integrated operating systems are dead. You're absolutely right that tight integration makes it hard to change things. The tight integration could potentially save space (memory and storage), which is why I think it was more popular years ago, but that's much less relevant today. If you have tight memory constraints, you'll probably develop on a beefier system and then transmit the result to the tiny system instead, and that approach doesn't require a tightly language-integrated operating system.
1. Is it free, or does it reduce costs in some way? UNIX on minicomputers is automatically going to get adopted if its design is useful, just because of the massive reduction in equipment costs.
3. Is it familiar? Do they understand the concepts? And do the developers, picky about syntax, see a similar syntax? This boosted C++, Java and C# over the likes of Lisp, Prolog, or Haskell.
Also, it's not a big loss anyway, given that only a handful of these integrated OSes were attempted in a big way. Over 90% of efforts fail to go anywhere. If anything, they might still be ahead of the odds in the long run. They just gotta stop creating unnecessary obstacles for themselves.
Similar to how they handled the whole Longhorn effort before.
At least .NET Native, async/await, span and low level memory management landed in official .NET runtimes later.
(a) wouldn't happen for most customers (esp big spenders)
(b) would throw away billions of dollars
Integrating Midori-derived technologies into Windows and using them in non-Windows applications is the best move. They could still push it as a new thing they sell in parallel. Something cutting edge. They're just too afraid of losing Windows revenue.
Symbolics had their own C compiler, which was unrelated to Zeta-C. One could use it, for example, to compile the C-based X11 server.
[Edit: Clearer wording.]
Oberon the operating system is written in Oberon the language, but the compiler compiles native code, there is no interpreter.
Just exactly like in C! What's the difference besides the particular syntax?
Indeed! To use the unsafe (predefined) procedures you must import the pseudo-module SYSTEM.
On that part none. But Oberon is "memory-safe and garbage-collected", so you don't use that aspect for 99% of the program (whereas in C everything is like that).
Enjoy a screenshot tour: https://www.progtools.org/article.php?name=oberon&section=co...
Well, you could also point them to VMS, which was deliberately designed to avoid language lock-in.
I immediately thought of "POKE" on the built in BASIC interpreters for 80s home computers.
It was indeed an eye opener on how many different things there can still be. Especially the mouse key combinations that you had to press were pretty unique.
Never used it ever since that first year, but it's hard to forget because it was so different.
I have fond memories of learning oberon. It gave me a deeper understanding not just of programming but the almost arbitrary conventions of popular OS’s, in where to draw the line between code, documents and applications (in oberon they were all the same thing), and how to use keyboard and mouse to manipulate items on a screen.
I've never gotten around to being comfortable with mouse chording, which makes me think that it was a rather odd idea.
That said, Oberon probably makes a very fine teaching language.
Note that lots of Nim syntax and features were inspired by Wirth's languages including Oberon and Modula 3 (https://nim-lang.org/faq.html#what-have-been-the-major-influ...)
But Modula 3 was a project at DEC's SRC with Luca Cardelli as the primary author.
Both Modula 2 and 3 are worth looking at, and both are fully capable systems programming languages (ie. you can write an OS using them).
As a teen I learned Modula 2 before learning C. I bought a Modula 2 compiler for my Atari ST instead of a C compiler. In retrospect this may have been a mistake; the ST's OS was designed with C calling conventions in mind, with a lot of void casting, etc. All the documentation also implied this. Using it from Modula 2 was a pain.
And then later when I learned C, I found many aspects very ... disturbing... after coming from the Wirth language world.
C is really primitive compared to Modula-2's modules. It just feels (and I guess is) hacked instead of a well thought out design.
With all the latest C# 7.x and 8.0 improvements, it feels really close to what Modula-3 allowed for.
If you first thought on reading that was "But Python's exception-handling system looks like everyone else's, how can it be lifted from this obscure language?", that's because everyone else lifted it from Modula-3 as well. That's right, C++ also borrowed Modula-3's system, and Java and C# built on top of that. Pretty much the whole mainstream concept of exception handling was invented by Modula-3.
However what many that bash Java's checked exceptions aren't aware of, is that they actually came from Modula-3. Not sure if this is what you mean.
I like how go gives access to the cool parts of Oberon within a relatively popular language.
Go is the secret child of Oberon-the-language and C. Before Go, Plan 9 was conceived as the secret child of Oberon-the-system and Unix.
No longer maintained or developed but an interesting piece of history nonetheless.
Also, the Project Oberon book is a magnificent tome detailing a complete, self-contained system that was used "in production" at the University. Highly entertaining and educational.
Click here to run it in your browser on emulated hardware (no Gadgets though, too bad.): https://schierlm.github.io/OberonEmulator/emu.html?image=Ful...
The book Project Oberon is a masterpiece as are the language and system it describes.
I wonder how the system deals with user errors…
Whereas Inferno's kernel is implemented in Plan9's C variant, and Limbo uses the DisVM.
EDIT: I can't believe I said anything that warrants downvoting. Can't we share experiences?
That's quite a statement, knowing the history of those languages.
Oberon (Operating System)
I think it's very natural to mention Modula-3 in the context of Oberon.
To me it read like someone posting about Xv6 (Operating System) and a comment mentioning Objective-C and how to get its toolchain working ;-)
Those versions were already quite close to something like NeXT, but with Oberon variants.
Nowadays what is left are random ISOs that don't always boot properly on VMs, and it isn't easy to compile those OSes, even with some source still floating around on GitHub.
EDIT: Some System 3 and AOS links from here might still be useable, https://en.wikibooks.org/wiki/Oberon#System_Variants
These things like Oberon and Inferno are from a more innocent era of computing.
If we keep the chip features, the OS will have to wrap them in one way or another.
edit: my personal preference would be to go full-Terry, but I know that's a pipe dream.
Where do you draw the line? Do you throw away Linux and write your own OS, which is bound to grow to the same level of complexity because it needs to deal with hardware complexity? Or do you throw away the existing hardware as well and start from silicon? Maybe even reboot the computing stack on an entirely different type of hardware?
iOS has a lot going for it, but it’s definitely not small, clean or simple.
(Ditto for the chips, which build on ARM, originally Acorn)
Ultimately I don’t think either argument is particularly useful, as all they demonstrate is that good technology is an evolutionary process that stands on the shoulders of other pieces of good technology.
Steve Jobs went from CEO of NeXT to CEO of Apple, and promptly started a project to scrap the existing "System" series of Macintosh operating systems in favor of a Unix derived from NeXTSTEP.
It’s a similar problem with creating new web browser rendering engines.
It's just a massive amount of work, with literally millions of lines of code required, whereas a basic operating system is easily under 100k SLOC.
Plan9 and Inferno suffer from this.
Haiku suffers from this.
And Oberon has the same issue.
As for the desktop market, BeOS had a browser and still failed. SkyOS had a browser and failed. AFAIK Haiku has had a Firefox port for several years as well (albeit I make no statement about how stable or bug-free it might be).
When you have Microsoft and Apple heavily promoting their platforms, even going so far as to give educational institutions massive discounts knowing they’re indoctrinating future customers, it strikes me that the only way to achieve household success with anything new is with massive corporate backing and a decent chunk of good luck too. So personally I’d define Linux as an anomaly.
I also think you’re not making a fair comparison where you compare lines of code in a “basic” operating system to a fully featured modern browser. But I go into more details on that in another post further down.
The world has moved on since the 90's. Some evolutionary lines had features that were lost.
And, given the forces you describe, the ecosystem probably had room for at most one anomaly.
But parsing the different standards of HTML and dealing with invalid HTML in the "appropriate" way must be a lot of work. And implementing CSS to work with existing websites, and making it efficient, must be a PITA.
Much of that complexity was created in the name of discoverability. Given the amount of money that was spent on search and looking at the quality of search results nowadays, I'm not so sure.
In general the idea is to separate structure from style, and to allow developers to specify style more declaratively to enable them to make sites quickly. But I'm not positive that that worthwhile goal implies that the logic should be implemented in the browser. IMO it should be implemented in downloadable library code.
There was once a programmer who was attached to the court of the warlord of Wu. The warlord asked the programmer: “Which is easier to design: an accounting package or an operating system?”
“An operating system,” replied the programmer.
The warlord uttered an exclamation of disbelief.
“Surely an accounting package is trivial next to the complexity of an operating system,” he said.
“Not so,” said the programmer, “when designing an accounting package, the programmer operates as a mediator between people having different ideas: how it must operate, how its reports must appear, and how it must conform to tax laws.
By contrast, an operating system is not limited by outward appearances. When designing an operating system, the programmer seeks the simplest harmony between machine and ideas. This is why an operating system is easier to design.”
The warlord of Wu nodded and smiled. “That is all good and well,” he said, “but which is easier to debug?”
The programmer made no reply.
— The Tao of Programming, Geoffrey James, 1987
Much like building a kernel + CLI shell is easier than building a fully multi-tasking OS with GPU accelerated GUI compositing, stable ABIs, modular driver model, and full support for 99% of common hardware.
https://en.m.wikipedia.org/wiki/Wikipedia:NPOV_dispute may be relevant as well.