The Taos Operating System (1991) (dickpountain.co.uk)
110 points by vezzy-fnord on June 30, 2015 | 93 comments

Hello --- ex-Tao employee here!

So Tao was my first startup. And then it was my first that-awkward-stage-when-you're-too-old-to-be-a-startup-but-still-don't-have-proper-cashflow. Then it laid off 50% of the staff, then it went bankrupt, then it came back from the dead, then it went bankrupt again.

The lessons I learnt from Tao were:

1. Your company needs to be called something which people can pronounce. ('Where do you work?' 'Tao.' 'Dell?' 'Er... no.')

2. An OS which requires people to code for it in a language which only exists in-house is not going to get much external traction.

3. People don't buy advanced technology and then figure out what to do with it. They figure out what they want to do and then buy the technology.

4. When the paycheques stop showing up, leave.

The technology was great. I joined just as TAOS was being migrated to intent. Both worked in the same sort of way; code was written in a custom portable assembly language, which was then translated into native machine code when it was loaded into the machine. The TAOS language was VP1, the intent one was VP2, and the first VP2 systems were hosted on TAOS, so it actually translated VP2 into VP1 and then into native. Translation was very, very fast; as both VPs were pretty dense it was possible to load and translate VP faster than loading native code, on certain combinations of platforms.

VP1 had 16 registers and a pretty simple instruction set. The original incarnation was an assembler macro package. VP2 was way more complicated. It had five register banks (int32, int64, pointer, float, double) of up to 65535 registers each. It was actually a pretty nice system to program in, especially when the assembler got strong typing and structured programming support... yeah, I know what you're thinking.

The OS itself was way ahead of its time: it was an asymmetric multiprocessor system, where you could have any number of nodes of different CPU types hooked together via a message-passing system. Code loading was transparent, and devices and filesystems could be attached to any node on the system. It had a blisteringly fast compositing GUI, cutting edge audio synthesis, hardware accelerated OpenGL, and eventually ran on, deep breath: 386 x86 ARM MIPS PowerPC MCore ColdFire Transputer ShBoom and a V, um, V840? Or something?

Our standard development procedure for new hardware was to have intent running (inside a Linux process) on a desktop PC; we'd bootstrap a development board up to the point of having serial support and the VP loader; then we'd hook them together. Suddenly, we have a two-node system. We could start a terminal on the desktop and see that node #0 was a Pentium and node #1 was, say, a ColdFire. You could then spawn a process on node #1 and it would run transparently on the dev board with full access to the desktop's filesystem and device drivers, all running, quite slowly, over the serial link.

We had a TAOS-era demo box which was a 9-node system consisting of an elderly Pentium and eight transputer nodes; one of the demos was to draw a Mandelbrot, farming each scanline out to a different node. Alas, by the time the transputers had finished drawing their scanline, the Pentium had drawn the entire rest of the image...

We never, ever sold the multicore system to anyone. Back in those days it was too novel for customers to know what to do with. Instead we sold it as an embedded OS, which it did really well at. We had a JVM for it, which translated Java bytecodes into VP2, which meant we got, pretty much for free, a Java JIT to native machine code which didn't require any additional porting; any system intent supported also supported Java. (See that list of architectures above? We had a Java JIT that targeted the frickin' TRANSPUTER.) For a while we were the de-facto standard Windows Mobile JVM. Even now there are still copies floating around.

But you can't make money selling JVMs; the Sun licensing fees and the large team required for conformance testing soak up any margins. We were asked whether we wanted to become the standard Blu-Ray JVM. We said no --- we didn't see how we could make any money at it. In hindsight that was probably a bad idea, because if we'd said yes then maybe someone would have cared about the company surviving.

C and C++ were supported, eventually; we had a gcc port which produced VP2. This was back in the gcc 2.95 days. I'm not sure you realise how horrible gcc 2.95 was, but it was very horrible. C wasn't a good match to the intent way of doing things and we didn't put as much effort into it as we should have; eventually we got traditional programming models complete with shared libraries and dlopen() but it was way too late to matter. Even right at the very end the vast bulk of the system was written in VP2 assembly.

We had a fling with Amiga pretty late on, and were the technology behind Amiga Anywhere, which nobody ever cared about. (Although we did actually get a few external users. The sheer novelty of this was hard to handle.)

If you actually want to get your hands on an intent development kit, there are some legal copies floating around: we bankrolled a short-lived 'magazine' (read, advertisement) called Digital Magazine, and one issue had a complete PC dev kit on the cover CD. Here's the advert, and yes, the cover picture horribly embarrassed us: http://www.amigahistory.plus.com/amiga_active/digital4press.... But I don't know if there's a copy online. (If you find one, let me know!) You may find this OSNews post about it amusing; drink every time you see a commenter say 'But what is it?'. http://www.osnews.com/comments/743

Eventually the company ran out of money for the last time and completely went under. I regret it deeply; the people were amazing and very, very smart, and the technology was deeply cool. Back in the 90s we worked with Motorola to produce a modern-style touchscreen smartphone; the software stack was in Java, running on an 11MHz ARM. It was fast and responsive and would run full-screen games. Did I mention it ran on an 11MHz ARM? IT RAN ON AN 11MHz ARM. But Motorola canned it shortly before launch. If they'd actually launched it, then Tao, and the entire smartphone world, would be very different.

Our CEO said that his ambition was for Tao to be bigger than Acorn. Well, he got his wish when Acorn folded shortly before we did. Sigh.

Oh! Oh! I totally forgot about this!

I have an actual program for intent. It's crap, but: http://cowlark.com/foo-fighter/index.html

It's in C, unfortunately, so there's nothing there of very much technological interest, but you can look at the APIs and marvel at my 13-year old code. I appear to have left a compiled binary with debugging information in it. It may even run on the Amiga Anywhere runtime or the intent ADK, if you can find one.

Update: Here's a screenshot of an editor with some VP2 assembly in it: http://mobile.osnews.com/img/vp.jpg

This is part of a program which displays images on the GUI. It's VP2 using partially-typed assembly (we went through several iterations). 'tool' defines a loadable module. qcall calls another loadable module, which is loaded dynamically or statically depending on flags. ncall makes a method call on an object (using a blisteringly fast hash table lookup, so you get dynamic method resolution in about four instructions). Calls can have multiple inputs and multiple outputs; registers are saved for you automatically. gp is a special register pointing at the app's globals.

Update update: And here's the GUI in action (it was called the AVE). http://mobile.osnews.com/img/quake.jpg It's running hosted on Windows. It's all composited --- note the transparent windows (unheard of in those days)! The viewflm window in the background is running an animation, which you can see through the transparent overlays while two Quake games run.

This is all done in software, using simple but very, very careful code written by Chris Hinsley. It was amazingly fast.

That's really cool. I can see why C wasn't a high priority - it just doesn't map that well to object-oriented assembly...

Maybe I'm crazy, but this level of abstraction is somehow to my taste, and it looks like Tao would be fun to program in. Shame that it didn't make any impact. Had it been open sourced in 1998, things might have been quite different...

What happened to the Amiga deal? You alluded to it in another post. That sounds like another really interesting story.

I wasn't really involved in that, being a mere foot soldier, but --- from memory --- we licensed intent to them and then they rebranded and extended it to become Amiga Anywhere. The idea was that AA games could be written using our tooling and run on a runtime based on intent. So, they'd run anywhere with our runtime. I think they were trying to exploit the Amiga brand to leverage synergies, or something.

I don't know why it failed; we didn't have anything to do with Amiga's operations, apart from offering support. (I don't think any of us were Amiga people.) My impression was that the general feeling inside Tao was that the Amiga of the time was cursed, and we didn't want anything to do with it. Tao itself wasn't doing well then and we were all a bit superstitious. It didn't help that the main person we dealt with was called Fleecy Moss.

I did get one perk out of it: I own a copy of the Amiga comeback album. Lucky me. https://www.youtube.com/watch?v=szMGxqwfxiI

Here's a terrible video of someone from Amiga demoing it. God, those iPaqs. I had one on my desk with a PCMCIA hard drive in it for doing the ARM Linux port. Horrible, horrible things. https://www.youtube.com/watch?v=HfHcwpzxSdk

I'd be interested in playing with a copy of AA if I could get my hands on a version which would run on a modern machine.

> I don't know why it failed

At the time, the Amiga name didn't have a lot of mainstream recognition anymore. And the people who still cared about Amiga slammed AA every chance they got on amiga.org & amigaworld.net because it wasn't classic AmigaOS running on a cell phone. There was a lot of resentment that the Amiga name & logos were being slapped on something completely unrelated.

Wow. I remember Amiga Anywhere. I was a big Amiga fan back in the day, and that was their big hope to be able to maintain relevance. Alas, it seems to be the way of all cool tech to eventually get discarded in favor of the less capable mainstream.

Kind of obvious question that hasn't been asked yet - what happened to the IP? Any chance it could be open sourced? (Would it even be pointful to open source it today?)

It vanished into a lawyer's basement, and so passed out of history. 10+ years of work, gone. NOT THAT I'M BITTER OR ANYTHING.

Open sourcing it's probably not useful: it was based almost entirely on tribal knowledge, passed from developer to developer, and the learning curve was pretty steep. I worked on our translator for a while, porting it to new CPU backends, and... yuck.

However, please try; somewhere in that lawyer's basement is a short story which I wrote in my lunch break, emailed to a friend, shortly before the company went bust... and I forgot to send a copy to my personal account. And my friend lost it. I want that back, dammit.

That's just revolting.

How large was the core Taos kernel and basic userland? What kind of effort do you estimate would be needed to bootstrap a libre clone? Have any people ever considered doing this before?

TAOS was tiny; but it had no features. It had the VP1 loader, memory allocator, filesystem interface, a really horrible shell, and that was about it. It may have had threads. It certainly didn't have TCP/IP. It'd fit comfortably inside a 64kB DOS .COM file, because that's how we booted it.

Elate/intent was way, way bigger. It had a proper filesystem, with device drivers (object-oriented and named and mounted in the VFS), and modules and components and a package manager (which was actually pretty awesome) and Posix and a huge standard library. Even the translator was big, by which I mean double-digit kilobytes of translated code.

I would say it's not worth cloning. State of the art in JITs has moved on so much that intent's fairly crude binary translation's not worth much. Instead I'd use something like LuaJIT as a JIT core, and build up from there; it's fast, tiny by modern standards (although still way bigger than the intent core), and you get binary portability by pushing Lua bytecode around instead. Compiling C into FFI-heavy LuaJIT shouldn't be too terrible and should give decent performance, while keeping portability.


Incidentally, despite my previous message, it may be possible that Amiga Anywhere still has an intent license. Which means they might have source code (because it's not like they're going to get builds from Tao any more). Does anyone know if Amiga Anywhere is still a thing?

You say it's not worth cloning, but what about the touchscreen phone UI running on an 11 MHz ARM processor? What was the secret sauce that made that work? Or was it just that a phone UI back then wasn't expected to do as much as one does today?

Partly lowered expectations; it was a monochrome stylus screen with about four pixels. Partly it was the hand-tooled and very lightweight libraries. We could run the translator in both online and offline mode, so most of the Java runtime was pretranslated into machine code and in ROM, which minimised startup time (although the generated code was the same, so there was no difference in performance). Partly it was an earlier, more^H^H^H^Hless elegant age of Java; this was the MIDP era.

Ah, here it is:


Mmm. I don't remember it being quite that brick-like.

It definitely was Dave :)

Chris Hinsley

Seconded - I have one :)

IP lawyer by the name of Peter Ritz, IIRC. He tried to recruit Andy Henson, Andy Stout and me to revive it.

Incredible story. Thanks for sharing it! I bet the architecture would work even better on today's systems given they're often multi-core and distributed while built on OS's not designed for that. That IBM noticed this is probably why they built their K42 operating system for clusters which used a microkernel and message-passing.

Running a clean roomed P-Java stack on top of our VP translator on an 11MHz Arm and getting the footprint under 1MB was a serious achievement.

And yep, the history of Smartphones would have been very different if Moto had shipped it !

The history of Java would have been different too, because this was before all the K-Java stuff that was a response to getting Java into smaller footprints !


Thanks for sharing. Taos was one of the OSes I was really interested in as an undergrad student.

There's more about this operating system, which apparently was renamed to Elate, and then Intent, at http://mobile.osnews.com/printer.php?news_id=157 and http://c2.com/cgi/wiki?TaoIntentOs.

Our naming sucked.

The company was called Tao, which nobody could pronounce.

The first OS was TAOS.

The second OS, which succeeded it, was Elate. Except, when Elate was running hosted on another OS, it was called int<b>e</b>nt. Yes, the bold e was part of the name. Except, when Amiga had it, it was called Amiga Anywhere.

Most comments of the era run along the lines of 'but what the hell is this?'. And for good reason.

I worked on the gcc back-end for TAOS. One requirement was a special output format called a "tool" which was like a single subroutine shared library. It was specified with the standard "-f" gcc extension syntax: "-ftaos-tool".

I heard about the name change to "Elate" second hand. "taos" had to change to "elate" everywhere -- even the options.

The naming may or may not have sucked, but it definitely did blow having to tell the compiler to "-felate-tool".

LMAO! That's the best command fail I've ever read!

Oh my gosh. And you also made a pun.

> called Tao, which nobody could pronounce.

The pronunciation skills (or absence thereof) of native English speakers constantly surprise me. But this case goes to absurd levels...

And yet they've pretty much evaporated since. References to this system are very rare and it seems like only a small circle of people ever truly experienced it. No public copy from what I can see.


A photo of a package for Acorn Archimedes refers to a 'developer edition'. Archimedes computers could host transputers.


Right at the bottom - labelled as 'never released'


Looks like Acorn were considering TAOS OS just before retiring from the general computer business.

EDIT: A later development, after they went into embedded, with some kind of Java-based runtime


Here's some more info: http://www.uruk.org/emu/Taos.html, via https://groups.google.com/forum/#!topic/comp.lang.forth/Cj_6....

According to Wikipedia the company was sold in 2007 (https://en.wikipedia.org/wiki/Tao_Group). That's a pretty long run from the early 90s.

I've tried emailing Chris Hinsley to see if he wants to answer questions on HN. An old email address, but maybe he'll see it at some point.

The uruk.org opinion article was the one I encountered first, though the BYTE mag one is of higher quality.

The wiki article lists quite a convoluted history, not to mention all the rebrandings. It seems like they never really focused on attracting researchers or considering any FOSS presence, which is a shame because it's now practically lost by this point.

If Hinsley answers, that'd be great.

I wasn't a founder, but I spent ages campaigning for a free development kit to try and get homebrew momentum. Management was rigidly against it. The reasons basically boiled down to (a) our APIs were trade secrets; (b) support costs would be way too high.

(a) was obviously dumb, but (b) had a point. We would have gotten people asking us questions. With a tiny staff we'd have had to blow off anyone who wasn't paying for a support contract, which would have gotten us bad press; and given how weird intent was, we would have got questions. Part of the reason for the Amiga deal was that they'd do this for us. Well, that went well...

I agree with Dave, I was involved in arguing for an open version, as the CTO of Tao, so people could get their hands on it, but it wasn't to be. :(

With hindsight Tao should have done this.


I agree. Heaps of experimentation and first-class work were lost because they happened before open source became mainstream. Especially in industry, where licensing models sealed work off from community adoption.

I'm really starting to wonder if these sealed off models promoted innovation. Have you seen much crazy innovation since open source became mainstream?

Open source was basically the default in the beginning, then declined in the 80s and 90s which prompted FSF and GNU to emerge, finally picking back up quickly in the 21st century.

This sounds like a massive case of post hoc ergo propter hoc.

> Open source was basically the default in the beginning,

Prior to the rise of the GPL and other formal F/OSS licensing models, there was some explicitly-dedicated-to-the-public-domain software, and lots of lax enforcement of copyrights in software, and, especially prior to 1974, some doubt about the copyright status of software. Before automatic copyright in 1978, and especially before 1974, lots of software was probably not copyrighted (because it took an active step to do so and because there was doubt about whether doing so would have any legal effect.)

But I don't think open source was ever the norm for software once it was clear that copyright protection was available, and certainly not once it became automatic.

I got the email !

Chris Hinsley

Do you have any more insight onto the state of the Taos IP? (i.e. any chance of getting it open sourced?).

I'm afraid it really is gone :( Rusting on a shelf somewhere never to see the light of day again.

However I did start an experimental project a while back to do a simple Taos-like kernel over BSD/Linux on x86_64: VP macros, process load balancing, point-to-point link network simulation and function-level dynamic binding.


Look for the Asm-Kernel project amongst my other stuff.

Was just starting to work on a GUI, AVE MK 6 :) But then got into a new job and it's currently sat at that stage till I go back to it.

Regards to all, was great to read this and bring back such wonderful memories of the great team we had at Tao, some of the brightest tech people I have ever had the pleasure to learn from.

Chris Hinsley

I was very interested in this way back when BYTE ran its article on it.

Would love to see it (what's left of it!) open sourced.


> TAOS stores objects onto mass storage media like hard disks via the mailbox system, in objects called 'filefolders'. A filefolder has some similarities to a DOS or Unix directory, but rather than being a passive storage structure it is an active object (in fact a control object). The tools attached to every filefolder are responsible for storing and retrieving objects from the folder and for negotiating with the hardware device drivers that are necessary for this transfer. Because filefolders are actually instances of a broader category of TAOS object called 'filters' they may also process the data they transfer, for example compressing and expanding it transparently to the user.

Emphasis added. Oh, it seems they were so close here. The biggest central problem with Unix mount, apart from synchronous/trusted I/O, is that the things that are exposed to the OS through mounting are only files, and never processes ...

> The biggest central problem with Unix mount, apart from synchonous/trusted I/O, is that the things that are exposed to the OS through mounting are only files, and never processes ...

With plan9port https://swtch.com/plan9port/ you can mount processes on any UNIX.

Sorry, I'm not as familiar with Plan 9 as I should be. I know that it has (and emphasises) user-space mount, but it is my understanding that the entities exposed to the OS through mounting are files, not processes. (Including through /proc, which exposes a directory tree of files containing data about processes, not processes themselves.)

Sounds a bit like Hurd translators. Is that what you were getting at?

It seems closer, though I'm mainly thinking about unpublished research. ;)

Useful link, several scanned articles on Taos at the bottom of the page.


Chris Hinsley

Ah Taos! One of several very interesting OSes from the late 1980s and early 1990s. Grasshopper, Jaguar, Oberon are a few others from the same era.

It's probably my age and not keeping up with the research, but it feels like there was more diversity and cool things going on in OS research and R&D back then.

Though the development of VMs and OSes for IoT and mobile devices is probably a good counterpoint.

Oh, I'm so glad someone resurrected this old article. I kept talking about it, but docs on the Internet about Taos and Elate became quite sparse. It was full of great ideas.

A demo of Amiga branded Taos was released on a CD or DVD with a game magazine. I played with it. It worked.

Yes, I remember it getting coverage in EDGE way back when, and thinking it looked interesting.

Then again, I'm one of the few who actually bought a copy of BeOS. RIP, isComputerOn/isComputerOnFire...

Quite a few companies toyed with using it as the basis for their next big thing, but it never quite happened. Pity really as a lot of good people worked at Tao.

I remember reading this and getting excited. Still today we haven't quite managed to parallelize all the things, even across CPU cores, much less nodes.

Looks quite similar to Phantom OS, though it seems frozen/dead since 2012.



I'm wondering what stopped this development... Technical issues or financial/environmental? What strikes me is the organic-like structure and functioning of this idea. Which I believe is where things need to go to avoid boring stagnation.

I love the list of architectures mentioned:

"Inmos T800 transputer, Motorola 680x0, PgC7600 (see article in this issue), Intel 80386/80486, Acorn ARM and Sun SPARC"

Of these, x86 and ARM dominate the world today.

I've said this before, but it's worth saying it again. It's disappointing the two most popular OSs are a Unix derivative and the bastard child of VMS.

The problem isn't so much Unix, but rather that we have largely failed to evolve it since the days of AT&T System III. The Bell Labs people kept working on Research Unix up to V9 with plenty of great features (like V9 IPC streams) that were never replicated anywhere else, before moving on to the ultimate culmination of these ideas in the form of Plan 9 and Inferno.

Meanwhile Linux has always been a boring SysV Unix clone that only occasionally ripped some features (and in a questionable manner) like procfs, sysfs, epoll, signalfd so you don't have to do the self-pipe trick, inotify, cgroups and namespaces. Stuff that competing systems often did much better.

The BSDs have been quicker to do more novel things, but by far the only two modern Unices to get the memo are DragonFly BSD and Solaris/illumos.

It is an interesting side note that Richard Miller, who hacks on Plan 9, was part of the Taos OS scene.

Even more interesting that he was on the frontpage just last night! https://news.ycombinator.com/item?id=9801745

Ah, I bet that explains it. People follow the subject and post the things they find.

I wonder how long it will be before the turtle pokes its head out?


I was waxing lyrical at the Plan9 conference to Richard, whom I have known for a few years, about how amazing the turtle and Logo is/was for classrooms: it could engage a wider range of children than straight programming instruction, because it stimulated those maybe interested in programming, art, robotics, languages and maths, with teamwork and communication skills, and even just watching things happen. Imagine my pleasure when he said "I'm glad you appreciate it" and went on to tell me the story of developing it, when I had no idea!

Wait, what?! Can you illuminate the intervening breadcrumbs?

Edit: wow, he's everywhere.

I was updating with my personal breadcrumbs when you replied. Richard was a colleague of mine - the plan9 scene leads very quickly to luminaries - I have had the privilege of meeting some very talented programmers who are humble so one day you will be on a webpage and go "gosh, I know that guy"

I'd love to chat more if you'd be willing to drop me a line. My email address is in my profile. Too bad I'm too late for the plan9 scene. I'm fascinated to find this long-running nexus between OS hacking and teaching programming, because that's kinda what I've been up to lately and I've worried some that it bespeaks a lack of focus.

I sent you an email

Thanks! Hope you received my response.

It feels nice to be, at least in part, responsible for this exchange. :-)

Go look at Genode, one of the most exciting things in OS development I've seen for a long time:


It's a secure microkernelly capability-based OS where each process is basically a virtualised hypervisor; it can run a conventional OS as a user process, but it's also got a Unix-like native personality if you want it. I believe it's nearly self-hosting.

It could be worse; it wasn't very long ago when the dominant "operating system" was DOS, for example.

I personally would love to have a modern Lisp machine. Alas, having lived through the 90's, I'm glad that we at least have Unix. Perhaps someday everything will be unikernels.

> it wasn't very long ago when the dominant "operating system" was DOS, for example.

The bastard child of CP/M...

The bastard child of TOPS-10...

It is. Even worse, Microsoft got a designer of an OS (VMS) that practically never failed to build the new one that failed all the time (sighs). At least they learned a few things from VMS...

I think we've discovered over time that OS Kernels aren't the place for Grand Abstractions.

Grand Abstractions are cool and powerful when you are happy to live within their universe. But they tend to be incompatible with other Grand Abstractions.

That's why modern kernels tend to focus on Boring Abstractions (threads, processes, locks, memory maps, files, cgroups, etc.) Then the interesting and more opinionated stuff can happen in user-space (Docker, JIT-compiling VMs, web browsers, mobile apps, etc.)

Disagree. Those things are already GAs, just ones more familiar to programmers today. So we call them boring.

Change the way you think about code, and change the world. And it starts with the application environment, often called the OS.

> Those things are already GAs, just ones more familiar with programmers today.

Ask yourself this: could you efficiently implement Taos abstractions on top of POSIX? (probably yes) Could you efficiently implement POSIX abstractions on top of Taos? (probably no) That is the difference between a Grand Abstraction and a Boring Abstraction.

> Change the way you think about code, and change the world.

I agree that Grand Abstractions can be powerful.

> And it starts with the application environment, often called the OS.

Sure, just leave the GAs out of the kernel. It doesn't mean you can't have them. Just put them in user-space. Then GAs can operate in a competitive market where you use them because you want to, not because you have to.

intent was actually largely Posix compliant, towards the end. As it ran in a flat address space, we never managed to implement fork(), but I believe someone was working on exploiting the Java garbage collector to search for all pointers belonging to the process and rewriting them so as to allow copying the process somewhere else in memory... or something. Possibly the company folding was a mercy.

You could imagine an OS with no threads. E.g. dataflow, or message-processors working queues, or something else. Perhaps instead of processes we'd use just virtual machines. Maybe instead of managing memory, we manage only data abstractions e.g. capabilities. There are strong reasons to want safety and security guarantees around the lifetimes of those things.

All of those should properly be managed by a kernel, yes?

> You could imagine an OS with no threads. E.g. dataflow, or message-processors working queues, or something else.

Sort of like Erlang? Which can run on POSIX.

> Perhaps instead of processes we'd use just virtual machines.

Sort of like the JVM, or Docker, or VirtualBox, or any number of VM technologies that run on POSIX?

> There are strong reasons to want safety and security guarantees around the lifetimes of those things.

Why do any of these things need to live in the kernel to give safety/security guarantees? If anything, code that runs in the kernel is more vulnerable, not less.

btw. I don't think POSIX is the be-all end-all or anything. But if I imagine something better than POSIX, its abstractions are even less opinionated than POSIX, not more.

Getting rid of POSIX is the only way of achieving safer OSes. POSIX is nothing more than the C runtime vs what other languages with richer runtimes offer. The standards bodies just decided to split the runtime between ANSI and the Open Group.

I only care about OS POSIX support every time I have to go down to C or C++. Otherwise the OS can have whatever abstraction model it feels like.

Putting things in a kernel controls failure modes, security, and performance at the cost of rigid APIs. So I understand wanting to avoid committing to a kernel. But it's harder to make strong guarantees without one.

Actually I think library OS like Oberon, Singularity, MirageOS, Drawbridge are the way to go.

We are currently stuck with VMS influenced OS and UNIX clones everywhere.

This is why I follow the MacOS X, Android and WinRT changes as their stacks could completely replace the old idioms if they didn't care about backwards compatibility.

A GPU driver is a pretty grand abstraction.

How so? Yes it provides complex abstractions, but these are not inventions of the software, it is just exporting the abstractions supported by the hardware.

And in fact the kernel space interface doesn't abstract much at all. All of this Vulkan/Metal/DX12 stuff really looks like someone decided to simply export the kernel level interface in a cleaner way.

I think it's very relevant to mention "The Future of Programming", a recent talk roleplayed as if it were 1973, with a lot of excitement over the recent developments in computer science, and discussing how the next 40 years would be... only to realize that, 40 years later, we haven't caught up with many of these innovations.

Watch it, it's short, funny, and food for thought: http://worrydream.com/dbx/

Repost of the article that you already posted 15 hours ago: https://news.ycombinator.com/item?id=9802379

We sometimes invite people to repost articles that look good and didn't get much attention. 11 points is perhaps borderline for "much attention", but we thought the article deserved a discussion, so we invited vezzy-fnord to repost it. (Edit: It's astonishing that, to judge by HN Search, this operating system has never appeared on HN before, either as a story or even in a comment.)

In general, a small number of reposts is ok if an article hasn't had significant attention in the last year or so [1]. Letting good stories have a few cracks at making the front page helps mitigate the randomness of what gets traction here.

1. Please see https://news.ycombinator.com/newsfaq.html.

Off topic, but I just wanted to say how much I've appreciated the modding of late. It's been prompt, reasonable, and transparent. Thanks a lot.

Actually, it was just mentioned two weeks ago (in reference to Apple's bitcode):


So it was—good catch!

Looks like "taos os" was the thing to search on: https://hn.algolia.com/?query=taos%20os&sort=byDate&prefix&p....

Ah, sorry. I did not know that reposts are explicitly allowed even in this timeframe.

It'd probably be better to wait a few days, at least for non-topical stories, but our invite system isn't that sophisticated.

not the poster of this article, but when dang sends you an e-mail to repost a story, I tend to listen

Parallels to Apple's Bitcode? Could OS X go the way of Taos VM?

LLVM bitcode is a moving target, not a firm spec.

That could well change.
