Oberon Operating System (wikipedia.org)
196 points by JNRowe 23 days ago | 93 comments



When people ask "how would you write an operating system without C?", Oberon is what I point them to. It's a very different style than Unix, in that the Oberon language is a relatively integrated part of the operating system, rather than communication being through a machine-level ABI. There were some spiritual predecessors where the compiler itself was part of the OS, and there was no other way to communicate with it than through the language semantics. I'm not completely sure whether Oberon works like that, but some Concurrent Pascal systems did. It's a very different view of what an "operating system" means.

Anyway, Oberon is a memory-safe and garbage-collected language without the pointer trickery that makes C so dangerous. So how does Oberon implement the necessary low-level parts? Fortunately, some version of the source is easily browsable online: https://people.inf.ethz.ch/wirth/ProjectOberon/index.html

Here is for example Kernel.Mod: https://people.inf.ethz.ch/wirth/ProjectOberon/Sources/Kerne...

Low-level operations are done with magical functions that simply perform an operation on a memory address identified with an integer, e.g. `SYSTEM.PUT(q0+8, q1)`. These are then presumably handled directly by the compiler (the System.Mod file does not define any PUT function). This is syntactically more awkward than the equivalent in C (`q0[2] = q1`, assuming this is an `int` pointer), but perhaps things that are unchecked and potentially dangerous should be syntactically awkward. It's not like a lot of code will be written using these facilities.
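
To give a flavour of what that looks like in source, here is a minimal sketch of a module using SYSTEM (the module and procedure names are made up for illustration, not taken from Kernel.Mod):

    MODULE MemDemo;  (* illustrative sketch only, not part of the Project Oberon sources *)
      IMPORT SYSTEM;

      (* unchecked store: write val into the word at address adr *)
      PROCEDURE Poke*(adr, val: INTEGER);
      BEGIN SYSTEM.PUT(adr, val)
      END Poke;

      (* unchecked load: read the word at address adr *)
      PROCEDURE Peek*(adr: INTEGER): INTEGER;
        VAR x: INTEGER;
      BEGIN SYSTEM.GET(adr, x);
        RETURN x
      END Peek;

    END MemDemo.

Because SYSTEM has to be named explicitly in the import list, it is easy to spot which modules contain unchecked code at all.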


>in that the Oberon language is a relatively integrated part of the operating system

>There were some spiritual predecessors where the compiler itself was part of the OS

Sounds very much like Forth or Lisp to me (and perhaps even Erlang, etc.). Any language that's implemented like a virtual machine internally is suited to being an "integrated" operating system.

I think there are tremendous advantages to doing this as well: it reduces the levels of abstraction between the hardware and the user applications. A reduced level of abstraction means a snappier experience for the users, and also makes it easier for the developer to integrate things vertically.

http://tunes.org touches on this type of thing as well


Lisp machines, definitely. With respect to Forth, I'm not sure. Has there ever been a Forth operating system that provides the facilities we would expect of a modern operating system (and using Forth as the interface)? There are lots of Forth applications on bare hardware, and I see no reason why it should not be possible (except that you'd need some kind of extension for concurrency), I just don't know whether it has been done.

I wonder if there is a lesson in the fact that all of the tightly language-integrated operating systems are dead. While Unix is clearly tied to C at an API level, the actual ABI is entirely based on machine code, and can be accessed through any language that can produce a binary. The downside may be that tight integration hinders language innovation, in that new languages have to target the "native language", which is awkward if it is high-level. Genera (the Symbolics Lisp OS) did host some C and FORTRAN compilers ("Zeta-C", you can get it online[0]), but I am not sure how fast it ran. Unix is often disparaged for providing "mechanism" instead of "policy", but maybe that has serious evolutionary advantages.

[0]: http://www.bitsavers.org/bits/TI/Explorer/zeta-c/


> While Unix is clearly tied to C at an API level, the actual ABI is entirely based on machine code, and can be accessed through any language that can produce a binary.

I see your point, but the Oberon system does have an ABI; it's not necessarily coupled to the Oberon language, but it is coupled to the system's linker. I have created binaries for Oberon in a rudimentary assembler, and it's not hard to have them work together with the native libraries (modules). In fact it's less convoluted than ELF, for example.


What I like about Oberon's module system is that there is no distinction between a library and an app or command.

In Oberon an 'executable' has 1 or more commands, somewhat similar to how git has multiple commands like pull, push & commit.

But everything is reusable: you don't need a separate libgit if at some point you need git support in another application. Neither do you need to invoke a command in a process and capture its output.

In Oberon, git and libgit would be the same thing. Everything you execute as a command is directly reusable as native function calls from other apps.
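
To make that concrete, here is (roughly) the classic "hello world" command module from the Project Oberon book. The exported, parameterless procedure is invokable as the command Hello.World by middle-clicking that text anywhere on screen, and the very same compiled module can be imported by other modules, which then call World like any other procedure:

    MODULE Hello;
      IMPORT Texts, Oberon;
      VAR W: Texts.Writer;

      (* exported (note the asterisk) and parameterless, so it is usable as the command Hello.World *)
      PROCEDURE World*;
      BEGIN Texts.WriteString(W, "Hello, World!"); Texts.WriteLn(W);
        Texts.Append(Oberon.Log, W.buf)
      END World;

    BEGIN Texts.OpenWriter(W)  (* module body runs once, when the module is loaded *)
    END Hello.

There is no separate "main" artifact: the compiled Hello module is both the command and the library.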


Interestingly, the same applied AFAIK to Multics - generally, as an end user, you had one process, and everything you ran in it was loadable, run-time linkable modules.


One can do this quite easily in Windows as well, via RUNDLL32. Not sure if Linux has a direct equivalent, however.


Not in the way Oberon does.

In Oberon not only is every module an application, the exported procedures are available as commands on the REPL and callable via mouse actions as well.

However in modern Windows you try to achieve a similar experience via what PowerShell allows for (.NET, COM and DLL entry points) and OLE Automation.

Still, it isn't as easy and painless as what the whole Oberon OS allowed for.


That's still just going one way (doing a library call in this case). In Oberon there is no distinction between a .DLL and an .EXE so to speak. A compiled module is (or can be) both.


Oberon is a gift to the computing world. It is one of the few examples of a top-to-bottom design of a full computing system (The Smalltalk Blue Book is also like this, sans the hardware).

I think the world is ready, or almost ready, for a comeback in this kind of design. Given that we have so many open, compatible standards (JSON, Office Documents/XML, hell even TCP/IP, etc, etc), the "traditional" problems of esoteric systems and their incompatibilities with each other, like we had back in the late 80s and 90s, seem to melt away. We are at a point where we can and should be experimenting with completely new systems from the ground up. Otherwise we will never get anything new, and everything will just be some iteration of Unix for the rest of time -- like medieval scholastics endlessly debating Aristotle and not discovering anything truly novel.


4os was an operating system written in Forth. It was discussed on Hacker News in 2016: https://news.ycombinator.com/item?id=12709802

In many ways many Forth systems were simultaneously the programming language and the operating system, especially on small systems. Having a single integrated system meant you didn't need a lot of space, which was important when you were trying to develop software on a 1MHz system with 8KiB RAM.

There have been many operating systems written tightly integrated to a language. Oberon is one, there were several that were based on Lisp, and many Forth systems count. The early Smalltalk systems probably count as well.

I agree with you that tightly language-integrated operating systems are dead. You're absolutely right that tight integration makes it hard to change things. The tight integration could potentially save space (memory and storage), which is why I think it was more popular years ago, but that's much less relevant today. If you have tight memory constraints, you'll probably develop on a beefier system and then transmit the result to the tiny system instead, and that approach doesn't require a tightly language-integrated operating system.


I liked your comment on Oberon as an example of what non-C/UNIX might look like. Your statement on language-integrated OS's implies there is a technical reason they didn't make it. It's a common mistake technologists make, thinking of everything in technical terms. Business (i.e. Marketing) and social effects are a far, far more prevalent reason for success or failure of technologies in terms of adoption. Here are four factors that will answer lots of your questions by themselves:

1. Is it free or reduces costs in some way? UNIX on minicomputers is automatically going to get adopted if its design is useful just because of massive reductions in equipment costs.

2. Is it backwards compatible and/or does it easily integrate with the established ecosystem? IBM, Microsoft, and Intel used backwards compatibility to grab markets, followed by lock-in. You could say Linux/BSD eventually did too, with them piling more stuff on the old code. OpenBSD was the exception, with them ripping stuff out. An easy example of the second is languages compiling to and/or using libraries from established ecosystems like C, Java, .NET, and Javascript.

3. Is it familiar? Do they understand the concepts? And do the developers, picky about syntax, see a similar syntax? This boosted C++, Java and C# over the likes of Lisp, Prolog, or Haskell.

4. Big, dominant company pushes for X to be the language for their platform. That explains most of the big ones that are compiled and Javascript. The other factors helped, though.

These four factors are enough to absolutely kill Lisp and Forth machines in both commercial market and FOSS adoption. They don't make them impossible. They're just an uphill battle with a lot more work to do. We see communities such as Erlang and Haskell getting stronger despite being so different from established languages without being able to easily integrate with their ecosystems. Clojure [wisely] cheated by building the weird language on top of a massive ecosystem (Java platform). There's also projects aimed at Javascript trying to re-create the advantages of Lisp and Smalltalk. So, there's hope the integrated systems can make it either with lots of effort or (more wisely) just working with those factors instead of against them.

Also, it's not a big loss anyway given only a few of these integrated OS's were attempted in a big way. Over 90% of efforts fail to go anywhere. Only a handful of attempts were made. If anything, they might still be ahead of the odds in the long run. They just gotta stop creating unnecessary obstacles for themselves.


I read somewhere that one of the political reasons why Midori failed was that they tried to reboot the world and the Windows team didn't like it for one second.

Similarly to how they took the whole Longhorn effort before.

At least .NET Native, async/await, span and low level memory management landed in official .NET runtimes later.


I 100% believe that because replacing Windows:

(a) wouldn't happen for most customers (esp big spenders)

(b) would throw away billions of dollars

Integrating Midori-derived technologies into Windows and using them in non-Windows applications is the best move. They could still push it as a new thing they sell in parallel. Something cutting edge. They're just too afraid of losing Windows revenue.


> Genera (the Symbolics Lisp OS) did host some C

Symbolics had their own C compiler, which was unrelated to Zeta-C. One could use it, for example, to compile the C-based X11 server.


In addition to what nickpsecurity said, I think there's another factor. Pick a language - any language. There's a lot of software that is not written in that language. I'd like to run some of it on this new OS, without having to re-write it in this OS's chosen language. If this new OS makes that difficult, well, I need that software more than I need that particular OS.

[Edit: Clearer wording.]


> Sounds very much like Forth or Lisp to me (and perhaps even Erlang, etc.). Any language that's implemented like a virtual machine internally is suited to being an "integrated" operating system.

Oberon the operating system is written in Oberon the language, but the compiler compiles native code, there is no interpreter.


> Low-level operations are done with magical functions that simply perform an operation on a memory address identified with an integer,

just exactly like in C! What's the difference besides the particular syntax?


None, but in C you use the same language mechanism for following an object reference (safe in the absence of manual memory management), indexing an array, and doing crazy address arithmetic. In Oberon, safe and unsafe operations are clearly distinguished.


> In Oberon, safe and unsafe operations are clearly distinguished.

Indeed! To use the unsafe (predefined) procedures you must import the pseudo-module SYSTEM.
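
A tiny sketch of how that distinction reads in practice (names are invented for illustration): the checked operations, pointer allocation and dereference, need nothing special, while the unchecked store is only reachable through the imported SYSTEM pseudo-module:

    MODULE SafeVsUnsafe;  (* illustrative only *)
      IMPORT SYSTEM;

      TYPE Node* = POINTER TO NodeDesc;
        NodeDesc* = RECORD val*: INTEGER; next*: Node END;

      (* safe: garbage-collected allocation, checked dereference *)
      PROCEDURE MakeNode*(v: INTEGER): Node;
        VAR n: Node;
      BEGIN NEW(n); n.val := v; n.next := NIL;
        RETURN n
      END MakeNode;

      (* unsafe: raw store to an arbitrary address; only possible because SYSTEM is imported *)
      PROCEDURE Clobber*(adr: INTEGER);
      BEGIN SYSTEM.PUT(adr, 0)
      END Clobber;

    END SafeVsUnsafe.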


>just exactly like in C! What's the difference besides the particular syntax?

On that part none. But Oberon is "memory-safe and garbage-collected", so you don't use that aspect for 99% of the program (whereas in C everything is like that).


On top of that, the latest generations, System 3 with its Gadgets and Active Oberon, were quite good, and yes, they even supported audio and video players, which made as much use of inline Assembly (an Active Oberon extension) as any Windows 3.0/MS-DOS alternative back then.

Enjoy a screenshot tour, https://www.progtools.org/article.php?name=oberon&section=co...


> When people ask "how would you write an operating system without C?", Oberon is what I point them to.

Well, you could also point them to VMS, which was deliberately designed to avoid language lock-in.


Intel/Siemens had a joint venture developing the BiiN hardware. The operating system, applications, development tools, and so on were written exclusively in Ada. However, a lot of that software only worked on that particular chip, so when it floundered commercially that was the end of it.


"SYSTEM.PUT(q0+8, q1)"

I immediately thought of "POKE" on the built in BASIC interpreters for 80s home computers.


At the University of Antwerp they taught Oberon as the first programming language for Computer Science (back in 1997). The professor's reasoning was that this way everyone started from scratch, because nobody would know it already.

It was indeed an eye opener on how many different things there can still be. Especially the mouse key combinations that you had to press were pretty unique.

Never used it ever since that first year, but it's hard to forget because it was so different.


I was in that same class. I recall that same professor also used smurf analogies to explain programming concepts. A peculiar man.

I have fond memories of learning Oberon. It gave me a deeper understanding not just of programming but of the almost arbitrary conventions of popular OS's: where to draw the line between code, documents and applications (in Oberon they were all the same thing), and how to use keyboard and mouse to manipulate items on a screen.


> eye opener

I've never gotten around to being comfortable with mouse chording, which makes me think that it was a rather odd idea.


Was also taught in Computer Science 178 at Stellenbosch University in 1997.


The Oberon programming language is also a very nice alternative to C - smaller, safer and easier to learn. It can be used without the Oberon operating system.


I think Oberon doesn't have the community to thrive though. A language nowadays is much more than the core, it's also all the libraries, tooling and tutorials around it.

That said, Oberon probably makes a very fine teaching language.

Note that lots of Nim syntax and features were inspired by Wirth's languages including Oberon and Modula 3 (https://nim-lang.org/faq.html#what-have-been-the-major-influ...)


Modula, and then Modula 2, were languages primarily written by Wirth.

But Modula 3 was a project at DEC's SRC with Luca Cardelli as the primary author.

Both Modula 2 and 3 are worth looking at, and both are fully capable systems programming languages (i.e. you can write an OS using them).


I still have a soft spot for Modula 2. Very nice language.

As a teen I learned Modula 2 before learning C. I bought a Modula 2 compiler for my Atari ST instead of a C compiler. In retrospect this may have been a mistake; the ST's OS was designed with C calling conventions in mind, with a lot of void casting, etc. All the documentation also implied this. Using it from Modula 2 was a pain.

And then later when I learned C, I found many aspects very ... disturbing... after coming from the Wirth language world.


University of North Dakota replaced Pascal with Modula-2 for their CompSci classes starting in the Summer 1988 semester. It was taught on their IBM 370. I found Modula-2 to be great for some things, but its I/O was horrible. I don't think that was a function of the 370.

C is really primitive compared to Modula-2's modules. It just feels (and I guess is) hacked together instead of a well-thought-out design.


Same here, strangely C++ provided a kind of welcome home to some of us.


Both C# and Java have had quite a few influences from Modula-3.

With all the latest C# 7.x and 8.0 improvements, it feels really close to what Modula-3 allowed for.


Python as well. The module system and the exception-handling system were mostly lifted from Modula-3, and the class/object system is a hybrid of Modula-3 and C++ (something that's still explicitly stated in Python's documentation to this day [0]).

If your first thought on reading that was "But Python's exception-handling system looks like everyone else's, how can it be lifted from this obscure language?", that's because everyone else lifted it from Modula-3 as well. That's right, C++ also borrowed Modula-3's system, and Java and C# built on top of that. Pretty much the whole mainstream concept of exception handling was invented by Modula-3.

[0] https://docs.python.org/3/tutorial/classes.html


Actually CLU and Mesa were the ones introducing exceptions. :)

However, what many who bash Java's checked exceptions aren't aware of is that they actually came from Modula-3. Not sure if this is what you mean.


Go is Oberon's and C's secret child. They met thanks to Robert Griesemer, who worked under Wirth's supervision, AFAIK, during his PhD.

I like how Go gives access to the cool parts of Oberon within a relatively popular language.


This is true, but Rob Pike has his own independent connections to Oberon. Acme and other parts of Plan 9's mouse-driven UI were greatly "inspired" by the Oberon system.

Go is the secret child of Oberon-the-language and C. Before Go, Plan 9 was conceived as the secret child of Oberon-the-system and Unix.


That is more true of Inferno and Limbo than Plan 9.


What I like about Niklaus Wirth is that he stays true to his design philosophy and never sells out. Oberon is not a crowd-pleasing language.


Oberon is very well suited for experiments; the language is so minimal that it is quite easy to write a compiler. Here is e.g. a front end for LuaJIT: https://github.com/rochus-keller/Oberon (i.e. Oberon is used as an alternative to Lua with LuaJIT as a backend; here is more information about the project: https://medium.com/@rochus.keller/implementing-call-by-refer...).


Another interesting Oberon project I found a while ago: XOberon, an RTOS for robotics http://www.ifr.mavt.ethz.ch/research/xoberon/

No longer maintained or developed but an interesting piece of history nonetheless.


Oh man, blast from the past! Sjur the spin doctor and Roberto the mastermind... We had a collaboration where their robot was used as the platform for a mid-size league RoboCup participant (wannabe :-})... That realtime Oberon on PowerPC was really cool!


Far as embedded, there's also Astrobe IDE for Oberon on ARM Cortex:

https://astrobe.com/Oberon.htm


To date, IMO, nothing has beaten OberonOS TUI+Gadgets for UI smoothness. (See Jef Raskin's "The Humane Interface".)

Also, the Project Oberon book is a magnificent tome detailing a complete, self-contained system that was used "in production" at the University. Highly entertaining and educational.

Click here to run it in your browser on emulated hardware (no Gadgets though, too bad.): https://schierlm.github.io/OberonEmulator/emu.html?image=Ful...

https://schierlm.github.io/OberonEmulator/


Oberon+Gadgets was a glimpse into a different better world. I'm glad I got to use it enough to realize that the UI's we have are not the UI's we could have had.

The book Project Oberon is a masterpiece as are the language and system it describes.


> No questions are asked: this is a deliberate design decision, which needs getting used to. Most editors ask the user when closing a modified text: this is not the case in the Oberon System.

I wonder how the system deals with user errors…


For the case of accidentally closing a Viewer: The command System.Recall restores the most recently closed Viewer. Just find or type that command anywhere on the screen and then execute it by clicking it with the middle mouse button.


I prefer Inferno but Oberon is interesting. The papers aren't a bad read either.


I find it sad that many people focus too much on Plan9 and overlook Inferno, which was actually their last stop, where the Plan 9 designers actually implemented some of the ideas they had originally for Plan9, like a memory-safe GC userspace (e.g. Alef).


It would indeed be nice to see a comparison between Inferno and Oberon. From a distance, Alef and Limbo (the system languages used in Inferno) look like obvious predecessors to Go.


A big difference is that Oberon OS is fully implemented in Oberon, compiled to native code, with some variants doing JIT on module load.

Whereas Inferno's kernel is implemented in Plan9's C variant, and Limbo uses the DisVM.


Unrelated but related, I had a lot of fun programming in Modula-3 but it seems getting the toolchain working in 2019 is more of a DIY project?

EDIT: I can't believe I said anything that warrants downvoting. Can't we share experiences?


Mentioning Modula-3 and how to get the toolchain working isn't even remotely related to the subject. Of course we can share experiences, but it would be probably at the bottom of this thread, because it's off topic.


> isn't even remotely related to the subject

That's quite a statement, knowing the history of those languages.


I am happy to be proven wrong. What does Modula-3 have to do with the subject:

Oberon (Operating System)


The languages Oberon and Oberon-2 were explicitly designed for the task of creating the operating system Oberon. When Wirth started working on the OS he realized that it would be too hard to write a nice OS with just Modula-2 (at least that's what I understood from the interviews he gave to various journals). Modula-3 was an evolution of Modula-2 by DEC (not by Wirth), but it was influenced by Oberon and they all belong to the same "Pascal language family" in terms of design philosophy.

I think it's very natural to mention Modula-3 in the context of Oberon.


I guess it's unfortunate that Oberon the OS and Oberon the language share the same name.

To me it read like someone posting about Xv6 (Operating System) and a comment mentioning Objective-C and how to get its toolchain working ;-)


If you want to try out the Oberon language, there are several Oberon implementations for different platforms, including embedded systems, native windows executables, and the Java virtual machine: http://oberon07.com/compilers.xhtml


I feel that not having anything in a runnable state from System 3 or AOS doesn't do proper justice to Oberon.

Those versions were already quite close to something like NeXT, but with Oberon variants.

Nowadays what is left are random ISOs that don't always boot properly on VMs, and it isn't easy to compile those OSes, even with some source still floating around on GitHub.

EDIT: Some System 3 and AOS links from here might still be useable, https://en.wikibooks.org/wiki/Oberon#System_Variants


I love the idea of smaller more logical operating systems. I am surprised that in the era of giant tech companies with top programmers and a lot of resources the most commercially viable strategy is still "try and paper over the complexities of linux" instead of starting something smaller and more modern.


When the CPU's developer manual is 2198 pages and still growing, any option other than starting with Linux and trying to keep up (i.e. VT, SGX, TPM, GPU, etc) will be extremely costly.

https://www.intel.com/content/dam/www/public/us/en/documents...

These things like Oberon and Inferno are from a more innocent era of computing.


I think the source of Intel's CPU complexity (and that of ARM and, for that matter, RISC-V CPUs) is that they are designed to speed up existing software. Software that was written 50 years ago on simpler machines. Starting from scratch is not as complex as you might think if running existing software at "native" speed isn't a requirement.


Are we going to give up GPUs, virtualization extensions, SGX enclaves, memory barriers, upcoming AI-acceleration, etc? To my knowledge, there is no software concept that will performantly render these functional units obsolete.

If we keep the chip features, the OS will have to wrap them in one way or another.

edit: my personal preference would be to go full-Terry, but I know that's a pipe dream.


Are there any concepts?


> starting something smaller and more modern

Where do you draw the line? Do you throw away Linux and write your own OS, which is bound to grow to the same level of complexity because it needs to deal with hardware complexity? Or do you throw away the existing hardware as well and start from silicon? Maybe even reboot the computing stack on an entirely different type of hardware?


I would be so down for a RISC-V laptop running Redox


Why not? Apple runs their own chips in the iPad and iPhone, with their own OS. Seems to be working out for them.


They didn’t start from scratch, though. iOS is derived from macOS, which is derived from NeXT (and FreeBSD if I recall right), etc etc.

iOS has a lot going for it, but it’s definitely not small, clean or simple.

(Ditto for the chips, which build on ARM, originally Acorn)


And they have a somewhat continuous team working on that OS since before Linux was a word. Same with Microsoft. Is it really relevant to any other company?


Apple haven't been working on iOS longer than Linux has been around. It's true that iOS is based on macOS which is based on NEXT (of various capitalisations), FreeBSD and a few other platforms. But Apple's direct involvement starts at macOS and there's very little FreeBSD/NEXT/etc in iOS compared to original code. Likewise you could argue that "Linux" predates Linux if you include the GNU user land and start tracing things backwards that way (I'm assuming "Linux" in the context of this discussion is GNU/Linux because it wouldn't be fair to compare a kernel to a full desktop or mobile OS).

Ultimately I don't think either argument is particularly useful, as all they demonstrate is that good technology is an evolutionary process that stands on the shoulders of other pieces of good technology.


I'm fond of the jocular statement that NeXT acquired Apple for -$429 million dollars.

Steve Jobs went from CEO of NeXT to CEO of Apple, and promptly started a project to scrap the existing "System" series of Macintosh operating systems in favor of a Unix derived from NeXTSTEP.


Sure, I didn't mean Linux isn't standing on the (partially the same) shoulders. What I meant is macOS/iOS and Windows teams are unique in shipping general-purpose OSs AND doing so and accreting code and expertise tied to that code since ancient times when "try and paper over the complexities of linux" wasn't an option at all.


Whether it's 10 years, 20 years or 30, I don't think it makes much difference after 10 in terms of the level of relevant expertise you'd expect, particularly when you factor in staff leaving, getting promoted, etc. and new engineers joining. Where Linux differs isn't its age but rather its decentralised development model. FreeBSD might be a more comparable example given the context you're describing.


The problem of starting again becomes harder with each passing year because you have more hardware to support, more complicated software that needs porting and higher expectations from users about what a modern OS should behave like.

It’s a similar problem with creating new web browser rendering engines.


A browser engine is, I think, the main impediment to most non-mainstream operating systems being "useful" in a day-to-day sense.

It's just a massive amount of work, with literally millions of lines of code required, whereas a basic operating system is easily under 100k SLOC.

Plan9 and Inferno suffer from this. Haiku suffers from this. And Oberon has the same issue.


I don’t think a lack of a browser was an impediment for Plan9 nor Oberon when you look at when they were released. It was more likely the momentum was already behind UNIX and similar platforms. Enterprises already had solutions to hard problems and weren’t willing to take a gamble on something new.

As for the desktop market, BeOS had a browser and it still failed. SkyOS had a browser and failed. AFAIK Haiku has had a Firefox port for several years as well (albeit I make no statement about how stable or bug-free it might be).

When you have Microsoft and Apple heavily promoting their platforms, even going so far as to give educational institutions massive discounts knowing they’re indoctrinating future customers, it strikes me that the only way to achieve household success with anything new is with massive corporate backing and a decent chunk of good luck too. So personally I’d define Linux as an anomaly.

I also think you’re not making a fair comparison where you compare lines of code in a “basic” operating system to a fully featured modern browser. But I go into more details on that in another post further down.


I agree -- a browser wasn't needed at the time. But without one, they're not practical for use as a desktop/laptop/phone OS today. And I also agree that a browser is not sufficient (as you cited, BeOS/Haiku, etc) but it is necessary.

As a related example, NewtonOS (Apple's MessagePad/eMate OS), did have a couple of HTTP 1.1 / HTML 2.0 browsers. But it wasn't able to support SSL/TLS, nor JavaScript, and as a result using one today is impractical.

The world has moved on since the 90's. Some evolutionary lines had features that were lost.


> So personally I’d define Linux as an anomaly.

And, given the forces you describe, the ecosystem probably had room for at most one anomaly.


I think you're right, but I don't understand why creating a browser engine is such a monumental task. Is page layout really so complicated?


Page layout need not be complicated if you write it yourself in simple imperative code, for your own website.

But parsing the different standards of HTML and dealing with invalid HTML in the "appropriate" way must be a lot of work. And implementing CSS to work with existing websites, and making it efficient, must be a PITA.

Much of that complexity was created in the name of discoverability. Given the amount of money that was spent on search and looking at the quality of search results nowadays, I'm not so sure.

In general the idea is to separate structure from style, and to allow developers to specify style more declaratively to enable them to make sites quickly. But I'm not positive that that worthwhile goal implies that the logic should be implemented in the browser. IMO it should be implemented in downloadable library code.


It strikes me as completely insane that it's more complicated to write a browser rendering engine than a whole OS. But I suppose the proof is in the pudding and the fact that there are so many more hobby OSs than hobby browsers must be testament to this fact.


> It strikes me as completely insane that it's more complicated to write a browser rendering engine than a whole OS.

There was once a programmer who was attached to the court of the warlord of Wu. The warlord asked the programmer: “Which is easier to design: an accounting package or an operating system?”

“An operating system,” replied the programmer.

The warlord uttered an exclamation of disbelief.

“Surely an accounting package is trivial next to the complexity of an operating system,” he said.

“Not so,” said the programmer, “when designing an accounting package, the programmer operates as a mediator between people having different ideas: how it must operate, how its reports must appear, and how it must conform to tax laws.

By contrast, an operating system is not limited by outward appearances. When designing an operating system, the programmer seeks the simplest harmony between machine and ideas. This is why an operating system is easier to design.”

The warlord of Wu nodded and smiled. “That is all good and well,” he said, “but which is easier to debug?”

The programmer made no reply.

The Tao of Programming, Geoffrey James, 1987


It depends where your benchmark is.

Writing a browser that passes acid tests and supports all the standards (old and new) as well as the sites that aren’t standards compliant but users might still expect it to work; that is an insane hard task. But building a console browser that supports a subset of standards and has no Javascript support is a much easier job (read: “easier” as in relative to the former task).

Much like building a kernel + CLI shell is easier than building a fully multi-tasking OS with GPU accelerated GUI compositing, stable ABIs, modular driver model, and full support for 99% of common hardware.


The biggest problem is that browsers are actually virtual machines nowadays, so you end up implementing two OSes.


Oberon actually has a browser. At least the Bluebottle (https://en.wikipedia.org/wiki/Bluebottle_OS) variant does: https://bbos.org/xref/WebBrowser.Mod.html


I've been considering doing this for Snapdragon SOCs. I'd probably start with the 855 or the 8cx and not worry about any backwards compatibility with anything other than the latest and greatest hardware.


Oberon! The king of the faeries!

https://en.wikipedia.org/wiki/Oberon


If I remember correctly, Wirth mentioned the moon Oberon and the fact that the name starts with an "O" (like "object oriented") as the source for the name of his programming language.


Somewhat off topic: this wikipedia page reads like a sales pitch. Does anyone know how to flag such pages for attention?


Comment on the associated talk page, or edit it yourself to make it better?

https://en.m.wikipedia.org/wiki/Wikipedia:NPOV_dispute may be relevant as well.



