Domain/OS Design Principles (1989) [pdf] (bitsavers.org)
55 points by todsacerdoti 16 days ago | 34 comments



Apollo Domain/OS was great, innovative, and opinionated. It wasn't just another BSD or System V, and they rethought a lot of things.

Apollo's heyday was a bit before my time, but my first real job was working on technical software at a company that was a big-ticket technical software developer on Apollos (I even once saw a marketing brochure with our badge on an Apollo workstation). As an intern, I set up a pair of them for porting (though we preferred SPARCstations), and later, when I moved to headquarters, HQ was still doing the master SCM and (what's now called) CI, for all platforms, on Apollo DN10k pedestals. In a new-tech R&D group I was in, we got the DSEE descendant Atria ClearCase on non-Apollo workstations. I bought a couple of retired Apollo workstations just to play with them at home.

Apollo did a lot of innovative stuff in Domain, and it's one of the few platforms I'm sometimes tempted to buy again, just to play with it and understand more of how they approached things.

When it had been years since I'd seen or heard of Apollo anywhere, I bumped into someone from there, who mentioned that Boeing had done some documentation using Apollos, and part of their very serious configuration management process involved them physically archiving an entire Domain network. (I'm guessing they used the very nice Interleaf software, which seemed to be popular on Apollo, and, by that time, had long also been available on other platforms.) It was appealing to think of an Apollo Domain network preserved in stasis, should humanity ever need to call up Apollo for duty again.


Domain/IX, later subsumed into Domain/OS (we were told the "IX" meant "9", but weren't fooled; everyone called it "domainix"), inherited from Aegis a powerfully useful feature still missing in Unix and Linux: environment variable references could appear in symbolic link text and be expanded when the link was followed.

Dragonfly BSD has adopted a similar but rather clumsier version of the feature.

People always think at first that this would introduce security problems, but I have not heard of a plausible one.
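From userland the mechanism looked roughly like this. A sketch, assuming the $(SYSTYPE) spelling Apollo used for variant links; plain POSIX will happily store this link text but, of course, never expands it:

    /* The link *text* carries a variable reference that Domain/OS
       expanded at path-resolution time, so /bin could resolve to
       /bsd4.3/bin or /sys5.3/bin depending on the environment. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        if (symlink("/$(SYSTYPE)/bin", "bin") != 0)
            perror("symlink");
        char buf[256];
        ssize_t n = readlink("bin", buf, sizeof buf - 1);
        if (n >= 0) { buf[n] = '\0'; printf("link text: %s\n", buf); }
        return 0;
    }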

(As an aside, corporate marketing departments were always coming up with reasons why "their" unix mustn't be called unix. People not fooled just stuck "-ix" on the end. E.g., Sun promoted their "Solaris", thus "solarix". That is why Apple's thing is still called "macosix" by those of us who know better; similarly, Windows is pronounced "windos".)


AT&T didn't share Unix as freely as modern commercial open-source convention would have it, but there was also a culture of non-commercial sharing at the time. So, besides not wanting to step on the trademark or copyrights and hear from lawyers, there was also a sense of fighting back against AT&T, even in marketing. (Berkeley counterculture might've helped too, I suppose.)

GNU = GNU's Not Unix

mt Xinu <= UNIX(tm)

https://ia601002.us.archive.org/3/items/Mt_Xinu_Mach_386_920...


"It wasn't just another BSD or System V, and they rethought a lot of things."

It was also a huge PITA to get any 3rd-party Unix software to compile on Domain/OS. I went to a college that had exclusively Apollo workstations, and you had to get really creative with Makefiles, #ifdef, #include, and hand-written compatibility layers to make it play in the SunOS/Solaris/SCO world of the time.

Eventually we started getting "HP Apollo" HPUX systems, which weren't that great either, but at least they were closer to mainstream Unix.


It was a huge PITA to get any 3rd party Unix software to build anywhere, what with BSDs, SysV, SunOS, Ultrix, AIX, Irix, Xenix, A/UX, ... We ended up with "configure" scripts, and then autoconf scripts to generate configure, and automake to generate makefiles, and libtool and libiberty. (Xenix actually had "#define remove unlink" in a system header file!)
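To make the Xenix horror concrete: a #define in a system header rewrites every identifier spelled "remove", not just calls to the stdio function, so perfectly legal code changes meaning behind your back (a hypothetical victim, assuming such a header):

    /* Pretend this came from a Xenix system header: */
    #define remove unlink

    /* An innocent program that happens to use the name itself: */
    struct item { int n; };
    int remove(struct item *list);   /* silently declares unlink()! */
    int remove_count;                /* untouched: different token   */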

Those configure scripts were horrendous, and are now unnecessary, but we have CMake and it is getting to be a problem of its own. Things now mostly just need to build on linux, windos, and macosix, and everything is little-endian.


Exactly. There was a time, circa 1990, when it seemed the majority of open source software on the Internet (though we didn't call it that) built easiest on a SunOS 4 SPARC, and that was one reason to prefer that platform.

I did write code portable to all of the workstation platforms (and VAXstations running VMS), and my approach was not to use separate scripts, but to write in a particular, slightly subsetted K&R C (with portability macros to reintroduce some features), deal with compiler/architecture/library differences, and go through the docs and `cpp` output looking for preprocessor symbols that could help distinguish a particular platform (and you couldn't even use `#if`; it had to be `#ifdef`). For some purposes, cpp alone wasn't enough, and my employer, whose product ran on all the workstations, had to develop its own portable graphics and GUI layers, IPC abstractions, etc. Makefiles were still heck for varying behavior between platforms, and you couldn't even use all the GNU Make features (though, in hindsight, maybe we should've, even when we had to keep using the vendor compiler toolchain), so complicated projects wouldn't necessarily be able to use a single Makefile alone.
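The preprocessor style was roughly this. A sketch; the predefined symbols varied by vendor and compiler version, and these particular spellings are from memory:

    /* Old-school platform dispatch: only #ifdef, never #if, and only
       symbols the vendor's cpp happened to predefine. */
    #ifdef apollo
    #  define OS_NAME "Domain/OS"
    #endif
    #ifdef sun
    #  define OS_NAME "SunOS"
    #endif
    #ifdef ultrix
    #  define OS_NAME "Ultrix"
    #endif
    #ifndef OS_NAME
    #  define OS_NAME "unknown"   /* fall through; complain at build time */
    #endif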


There is an emulator built into MESS in case you ever want to play with it again. You do need to find a disk image somewhere.



These branches of computing history are really interesting.

> Domain/OS uses a single-level storage mechanism, whereby a program gains access to an object by mapping object pages directly into the process's address space.

It sounds similar in that respect to IBM i, and seems like an evolutionary branch that died off. Whatever happened to this paradigm?


> It sounds similar in that respect to IBM i, and seems like an evolutionary branch that died off

Even on IBM i it is in decline. Originally everything ran in the single-level store address space, but then they introduced additional non-single-level address spaces (teraspaces). One of the major things teraspaces are used for is to run PASE, which is IBM i's AIX binary compatibility subsystem, and IBM appears to prefer shipping new stuff in PASE. The single-level store environment is still used by "classic" apps (such as RPG and COBOL), but newer stuff – especially anything written in newer languages such as Java, Python, etc. – runs outside of the single-level store in a PASE teraspace.


But is that just to accommodate the newer, more mainstream stuff, or because it's actually technically better?


I think it is mainly about making it easier to port code from more mainstream platforms (AIX/Unix/Linux), which reduces engineering costs. Porting open source code is a low cost way to get new functionality and features, and makes the environment seem more familiar and modern to newcomers who are familiar with Linux – and commercial Unixes such as AIX are pretty close to Linux. When detractors call it a "legacy" platform, their sales team can now respond "it's not legacy, it runs node.js!"

But one thing I think it demonstrates is a problem with non-mainstream operating system architectures. Even if a non-mainstream operating system architecture is technically superior, sooner or later you want to port software to it from a mainstream operating system, which means you need a compatibility layer implementing a more mainstream operating system architecture. And before you know it most of the code is running in the compatibility layer, because that's where all the new applications are coming from and there is no way you can keep up with that pace yourself. And then you have to ask what is the point of the innovative non-mainstream architecture if so much of the software you run doesn't actually use it. So eventually it leads you to moving off the non-mainstream architecture and on to a more mainstream one.

Is IBM i technically superior? It is a weird mixture of (a) advanced concepts like single-level store, an object-oriented operating system and bytecode virtual machine (b) legacy crud like EBCDIC, RPG, block mode terminals, 10 character limit on object names and a single-level filesystem (c) a severe lack of extensibility and openness in which a lot of OS concepts (e.g object types) are closed up so only IBM engineering can extend them (or possibly ISVs who pay big $$$$ for NDA manuals) (d) the completely different worlds of POSIX/AIX/Java grafted on the side, and increasingly taking over the rest. I grant that (a) could be said to be technically superior, but (b) and (c) clearly are not.


But that's entirely my point, yeah. I don't know if a single-level store address space is better, but if the reason for its decline on IBM i is merely that mainstream software doesn't mesh well with it, I feel like it doesn't tell me much about the paradigm itself.

By the way, I'd argue about whether all of (b) is technically inferior or not. Object name limits certainly are, but I got to really know data entry with block mode terminals long, long after their heyday (I'd certainly come across them back then, but I was rarely a user). I feel that they can be enormously efficient for data entry and maintenance tasks. Many a person who had to move from intensive use of a block mode data entry terminal to performing the same tasks with a web app got quite annoyed at the clumsiness of it all.

The web was not created for "business apps" but for hypertext document retrieval; the other uses got bolted on, and it still very much shows. It's sad, because proper terminal emulation used to be a ubiquitous feature of the Internet, before browsers took over almost entirely.


> but I got to really know data entry with block mode terminals long, long after their heyday (I'd certainly come across them back then, but I was rarely a user)

I don't think block mode terminals are necessarily inferior. I see some big problems with 5250 though. The biggest is EBCDIC.

Another big problem is character-at-a-time interfaces let you build things like text editors (vim and emacs), spreadsheets (like Lotus 123), etc. Sure you can build a text editor for a block mode terminal (SEU on IBM i, XEDIT on z/VM, ISPF EDIT on z/OS) but there are just certain features and interaction styles that vim and emacs support that block mode terminals can't do as nicely (example: interactive search). Lotus 123 was actually ported to 3270 (to run under MVS and VM/CMS), I've never used it (I would love it if someone could find a copy so I could!) but from what I've heard it was pretty clunky compared to the MS-DOS / PC version.

Sometimes I think that block mode terminals could have exposed some kind of byte code to enable running some interactivity in the client. Actually real 3270s and 5250s generally had some kind of CPU in them (like an 8080) so I can't see why they couldn't have done that. And of course terminal emulators could do that. Then you could have these more flexible interaction styles that character mode terminals support even in a block mode terminal.


> I don't think block mode terminals are necessarily inferior. I see some big problems with 5250 though. The biggest is EBCDIC.

Oh yeah I agree, the actual implementation details in this case are icky.

> Another big problem is character-at-a-time interfaces let you build things like text editors (vim and emacs), spreadsheets (like Lotus 123)

That's true, but at the same time block mode allows for highly standardized and always latency free data entry and manipulation. I wonder if this is just a case for different technologies for different use cases.

> Sometimes I think that block mode terminals could have exposed some kind of byte code to enable running some interactivity in the client.

Hmm, it helps to preserve the zero latency aspect (if done correctly), but at the same time opens up the door for shoddy implementation and non-standard UX.

And then I'm sure people would come up with all sorts of "UI libraries" for terminals that they think are very clever, but just make everything fragmented and clumsy again, just like I often wish that a web site was just a plain old HTML page with maybe a standard web form, instead of whatever crazy js-backed UI the web framework du jour came up with...


> That's true, but at the same time block mode allows for highly standardized and always latency free data entry and manipulation. I wonder if this is just a case for different technologies for different use cases.

Other vendors – such as DEC and HP – had dual-mode ASCII terminals that normally operated in character-at-a-time mode, but had an escape sequence you could use to switch them into block mode. Maybe that's the best of both worlds. However, in practice, few apps used the block mode, even "data entry" style apps which could use it often didn't. Part of that was that using block mode basically tied you to a single brand of terminal, whereas manually generating forms using character mode was more portable. A lot of clone terminals and emulators emulate DEC VT terminals but few of those clones and emulators included the block mode functions.


Ah, I can totally imagine that being the case, yeah. Sigh, looks like there's no way out, we'll keep inventing ourselves into half-baked solutions on top of existing things.


> at the same time block mode allows for highly standardized and always latency free data entry and manipulation. ... block mode terminals could have exposed some kind of byte code to enable running some interactivity in the client. ... it helps to preserve the zero latency aspect (if done correctly), but at the same time opens up the door for shoddy implementation and non-standard UX.

I've often had the same thought: wasn't it a terrible waste of an Intel 8080 to build a stupid VT-100 around it? An 8080 could run CP/M and Turbo Pascal and SuperCalc! Wouldn't it have been great if some computer company had had the foresight to take their terminals in that direction instead?

And it turns out that actually happened. Sort of.

My closest brush with this direction of evolution was the HP 2640 and 2645 terminals normally used on HP 3000s; although they supported a block mode, they were commonly used in a CLI sort of way, but with scrollback and local editing. So you could, as I understand it, tell the line-mode editor to spit out, say, ten lines, which it did with line numbers attached; then you could use the terminal's cursor keys to go up and edit those lines, and hitting RETURN would send the modified line to the editor, complete with the line number, and then the editor would replace the line with the edited version. And of course this also gave you the equivalent of less(1) (with a limited buffer) and the ability to edit and resend previous commands (but without tab-completion). To achieve these feats, the 2640, introduced in 01974, used the Intel 8008, a slower one-chip clone of the Datapoint 2200 terminal's CPU board, and the 2645 used its successor, the same 8080 the VT-100 would use.

The Datapoint corporation (originally CTC) had been selling "programmable terminals" the whole time, starting in 01971, and unlike the 2640 or the IBM 5250, it was user-programmable, with either assembly https://history-computer.com/Library/2200_Programmers_Man_Au... or PL/B (see below). (It also had tape drives, a source-code editor, an assembler, and a primitive OS.) In 01981 they sold US$450 million of terminals, which I guess must have been about 100,000 terminals, making it a Fortune 500 company.

Datapoint's terminals had a bytecode interpreter for PL/B, which was what passed for a high-level programming language at the time. https://en.wikipedia.org/wiki/Programming_Language_for_Busin...

But what about spreadsheets? Because obviously you could do useful calculations with an 8080 or even an 8008. Also, graphics! Well, so in 01978 HP shipped the 2647A terminal, which had a BASIC interpreter, so you could do calculations, and it had graphics so you could plot functions. But spreadsheets as we know them hadn't been invented yet.

In the HP 3000 division that made the 2640, there was an HP employee who'd previously built the Breakout machine for Atari when his day job was designing HP scientific calculators. When the calculator division moved to Oregon, he switched over to the HP 3000 division, but at home, he'd already built a cheaper video terminal. Then he'd added a 6502 microprocessor to it, and wrote a BASIC for it based on the HP Basic manual he read at work. His name was Steve Wozniak, and that was the Apple I. He was selling it for about 20% of the price of a 2647A, in 01976, two years before the 2647A. http://www.foundersatwork.com/steve-wozniak.html. And the Apple had graphics, too! In fact, even before the 2647A shipped, Wozniak had started selling the "Apple ][" with the Atari employee who'd stolen his Breakout bonus, a Transcendental Meditation instructor and scam artist named Steve Jobs.

Apple's BASIC, although it was basically a command-line system, had the same screen-editing feature as the HP 2640 terminal: you could move the cursor up to a line of BASIC and edit it with the arrow keys, and on hitting RETURN it would change the program in memory. (I know AppleSoft BASIC did this; I think Wozniak's Integer BASIC did too, but I never used it, so I'm not sure. Microsoft later cloned the feature in their BASICs for the IBM PC.)

So, getting back to spreadsheets, what we know today as the spreadsheet was invented by Bricklin and Frankston as VisiCalc, shipped on the Apple ][ in 01979. As Wozniak said in the article I linked above:

> In the Homebrew Computer Club, we felt it was going to affect every home in the country. But we felt it for the wrong reasons. We felt that everybody was technical enough to really use it and write their own programs and solve their problems that way. Even when we started Apple, we had very mistaken ideas about where the market was going to be to be that big. We didn't foresee the VisiCalc spreadsheet.

Frankston and Bricklin originally thought about implementing it on the DEC Programmable Data Terminal, which embedded a PDP-11 (LSI-11) into an Intel-8080-driven VT-100 terminal. The PDT was introduced in 01978, and in 01981 the PDT had shipped over 2600 units, with a base price of US$4800: http://www.bitsavers.org/pdf/datapro/programmable_terminals/... Fortunately, they ended up on the Apple. Frankston credits the highly usable user interface they ended up with to the rapid feedback loop of experimenting with prototypes in Wozniak's Integer BASIC on the Apple ][: https://rmf.vc/implementingvisicalc

Datapoint, as I said, was selling tens or hundreds of thousands of terminals a year by 01980—but then, for reasons I don't understand, it collapsed by about 01984. I suspect the high prices (https://oldcomputers.net/datapoint-2200.html gives the 01972 price as US$7800, three times the price of a 2640 and about US$50k today, and I imagine this continued to affect their sales channels until their death) allowed them to be eclipsed by Apple (1 million units sold in 01983, 6 million total of the Apple II series) and Commodore (about 15 million 64s sold, 2 million per year around 01983). But maybe having to program them in PL/B or 8008 assembly was a big disadvantage compared to BASIC, 6502 assembly, Z80 assembly, or especially 8086 assembly. I've never seen a Datapoint terminal in real life.

In the 01980s it became commonplace to replace both block-mode terminals like the 5250 and character-mode terminals like the VT-100 with IBM-compatible PCs, which were inspired by the 01970s personal computer hobbyists like Wozniak. Typically the PCs were running entire database applications talking to a fileserver, instead of sending blocks or forms to an application server.

So I think that's the way it shook out: there was a slippery slope from "running some interactivity in the client" to "running the whole application in the client, where it could be fully interactive, relegating the server to file storage". Full of, yes, shoddy implementation and non-standard UX. I think this slippery slope is because it's kind of a pain to split an interactive application into two parts running on different computers, requiring careful attention to protocol design and the Fallacies of Distributed Systems. So terminals grew up into PCs. It's kind of a Planet of the Apes ending.

But then the internet started to take off...


Why did single-level stores die off? It's an interesting question, and I'm not sure I know the answer. That's also how Multics worked, but I think what happened was it turned out that Unix was better.

It's not wholly coincidental, or intentional, that Unix didn't have mmap. The PDP-11 and the PDP-7 didn't have paging hardware, so early Unix couldn't implement mmap at all. And it was common to access files bigger than the virtual address space, and doing that with mmap requires you to sequentially map, then unmap, different parts of the file—basically what you have to do with read() and write(). So, early Unix couldn't implement mmap because it was designed to run on cheap hardware.
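For concreteness, here is the window-sliding loop that mmap forces on you for a file bigger than the address space, sketched in modern POSIX terms (which, again, didn't exist then):

    #include <sys/types.h>
    #include <sys/mman.h>

    /* Scan a huge file one aligned window at a time: map, consume,
       unmap, advance. Morally the same loop you'd write with read(). */
    void scan(int fd, off_t filesize) {
        const off_t WINDOW = 1 << 20;   /* 1 MiB, a multiple of the page size */
        for (off_t off = 0; off < filesize; off += WINDOW) {
            size_t len = (size_t)(filesize - off < WINDOW ? filesize - off
                                                          : WINDOW);
            char *p = mmap(0, len, PROT_READ, MAP_PRIVATE, fd, off);
            if (p == MAP_FAILED)
                return;                 /* error handling elided */
            /* ... use p[0 .. len-1] ... */
            munmap(p, len);
        }
    }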

Also, though, if a program is reading from a file by memory-mapping it, you can't replace the file with a pipe unless you change the program. (If you lseek() on a pipe, it croaks with an ESPIPE, now called "Illegal seek".) Unix got enormously better composability and scriptability than other contemporary OSes by virtue of pipes, to the point where the Unix group ported their pipe-based toolkit of "software tools" to other operating systems in the mid-1970s in order to have a more comfortable working environment. Then, of course, the world started to revolve around TCP, which gives you a byte-stream between two machines, like a magtape, not a random-access collection of pages like a disk. (There are lots of networked applications that really prefer a remote-disk model; Acrobat Reader and Microsoft Access come to mind. But that wasn't where TCP/IP was in the 01980s and early 01990s, Sun's WebNFS aside.)
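That lseek() failure is trivial to demonstrate, by the way (a minimal sketch):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        int fds[2];
        if (pipe(fds) != 0) { perror("pipe"); return 1; }
        /* Seeking a pipe (and hence mapping one) can't work: */
        if (lseek(fds[0], 0, SEEK_SET) == (off_t)-1)
            printf("lseek on a pipe: %s\n", strerror(errno));
        return 0;   /* prints "lseek on a pipe: Illegal seek" */
    }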

Another problem is that, when a program is mutating a shared mutable resource like a disk sector, there are times when the resource is in an inconsistent state. Usually, we think of this as a problem for concurrent access, and the solution is to keep any other thread from observing the inconsistent state, for example with a mutex. But it's also a problem for failure recovery: if your program crashes before restoring consistency, now you have data corruption to recover from.

In Unix, the mutable shared resource was usually the filesystem, so this was mostly only a problem if the kernel crashed, perhaps due to a power failure; ordinary user programs mostly created new files, so if they crashed during execution, the worst that could happen was that their output file would be incomplete. Then the user could delete it and try again. So, even though Unix wasn't a fault-tolerant OS like Tandem's Guardian, it did tend to limit the impact of faults. (The occasional exceptions to this rule, such as Berkeley mbox files, were a continuous source of new bugs.)
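That discipline also gives you the classic crash-safe update idiom almost for free. A sketch (the function name is mine):

    #include <stdio.h>
    #include <unistd.h>

    /* Write the new contents to a temp file, force them to disk, then
       atomically rename() over the old file: a crash at any point
       leaves either the complete old state or the complete new one. */
    int replace_file(const char *path, const char *data, size_t len) {
        char tmp[4096];
        snprintf(tmp, sizeof tmp, "%s.tmp.%ld", path, (long)getpid());
        FILE *f = fopen(tmp, "w");
        if (!f) return -1;
        int ok = fwrite(data, 1, len, f) == len
              && fflush(f) == 0
              && fsync(fileno(f)) == 0;
        ok = (fclose(f) == 0) && ok;
        if (!ok) { unlink(tmp); return -1; }
        return rename(tmp, path);
    }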

The easiest way to handle this kind of problem is with atomic transactions, so that if a program crashes halfway through an update, the old state remains the current state, and there is no data corruption problem to worry about. As I understand the situation, this is how IMS and DB2 have handled this problem since the 01960s and 01970s, respectively, and of course today we build lots of applications on top of transaction systems like Postgres, Kafka, ZODB, Git, MariaDB, and especially SQLite.

But none of those systems existed in the 01980s, except for IMS, DB2, and Postgres, and none of those ran on Domain/OS. I don't have any experience with Domain/OS but I imagine that this was a source of bugs for Domain/OS applications as well.

There's another, arguably distinct, fault-related problem that pops up in current use with mmap(). If you try to read() from a file, copying data into your address space, this may succeed or fail, or it may succeed partially, for example if you hit the end of the file. All of these conditions arise at the readily identifiable point in your program where it invokes read(), and so you can look at the code to see if you forgot to handle one of them at that point. Moreover, you can be sure that neither of those two problems will arise later while you're using the data you've read, possibly while you have some other shared mutable resource in a temporarily inconsistent state.

By contrast, with mmap(), such a failure can arise any time you access the mapped memory, in most cases. For example, someone else may have truncated the file since you mapped it, as in http://canonical.org/~kragen/sw/dev3/mmapcrash.c, where as soon as the array index strays onto the now-nonexistent page, the program dies with a bus error. This makes it more difficult to write programs that handle failures correctly.
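In the spirit of that linked program, a minimal sketch of the failure mode:

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("victim", O_RDWR | O_CREAT, 0600);
        if (fd < 0) return 1;
        ftruncate(fd, 8192);                 /* two 4 KiB pages of file    */
        char *p = mmap(0, 8192, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) return 1;
        ftruncate(fd, 0);                    /* "someone" shrinks the file */
        return p[4096];                      /* dies with SIGBUS, not EOF  */
    }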

Relatedly, there's a performance issue: although memory-mapping a page and then reading it means the kernel doesn't have to copy its contents into your address space, which often increases performance, it does still have to read the page from disk. But it has much less information about your access patterns than when you're using read() and lseek(). This sometimes reduces performance, because prefetching pages before userland requests them makes a big performance difference—in the 01980s, we're talking about 30000 microseconds to wait for the disk, versus 1 microsecond to handle a page fault or 2 microseconds to handle a small read(), if the data is prefetched. It doesn't take a whole lot of extra prefetch failures to make mmap() slower, potentially by orders of magnitude.
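These days you can hand some of that access-pattern information back to the kernel with madvise(), though nothing of the sort existed in the Apollo years (a sketch):

    #include <stddef.h>
    #include <sys/mman.h>

    /* Tell the kernel how a mapping p[0..len) will be accessed, so it
       can prefetch the way read()'s sequential pattern lets it do
       implicitly. */
    void hint_scan(void *p, size_t len) {
        madvise(p, len, MADV_SEQUENTIAL);   /* read ahead aggressively  */
        madvise(p, len, MADV_WILLNEED);     /* start faulting pages now */
    }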

With modern NVDIMMs and NVMe Flash, and especially new memory architectures like 3D XPoint, the performance advantages of memory-mapping might become much more important again. If it takes 300 ns to call and return from read() or write(), plus 700 ns to copy 4096 bytes into or out of userspace, then spending 100 ns to read a random cache line from 3D XPoint memory (is that about how long it takes?) might be greatly preferable to spending 1000 ns to read a page of data from it through the syscall interface. But this was not a possibility in the Apollo years.

One final minor issue with the Multics segment-mapping approach, at least when realized with paging hardware instead of segmentation hardware, is slack space at the ends of files. If the fundamental fixed-size unit a file consists of, such as a byte of text, is not a multiple of the page size, then there will be times when the file's natural size is not a whole number of pages. In CP/M, for example, files consist of 128-byte "sectors", thus saving 7 precious bits per directory entry. Your application program needs some kind of application-specific logic to tell whether the last page of the file has unused space in it and, if so, how much.

So in CP/M, for example, some applications would place a ^Z after the last legitimate byte of a text file, and others would fill the rest of the sector with up to 127 ^Z characters. As you can imagine, this kind of thing is fertile ground not only for application bugs (you can't reliably store ^Z in a text file, and never as the last byte) but also subtle application incompatibilities. If you want to write a Unix "cat" program for CP/M, it needs to have an opinion about which of these conventions to use, and also what to do if it finds a ^Z that isn't in the last sector.
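So a Unix-style "cat" for CP/M ends up looking something like this, sketched with the stop-at-the-first-^Z opinion baked in:

    #include <stdio.h>

    /* CP/M-convention text reader: the file is a whole number of
       128-byte sectors, and 0x1A (^Z) marks the true end of text. */
    void cat_cpm(FILE *f) {
        int c;
        while ((c = getc(f)) != EOF && c != 0x1A)
            putchar(c);
    }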

Again, I never used Domain/OS, so I don't know how it handled text files or other files that commonly had a non-page-aligned EOF. The Apollo engineers were brilliant and produced a stunning system that was much better than Unix in many ways. So maybe they had a good solution to this problem, like a universally-used text-file-handling library that didn't use a brain-dead encoding like the CP/M one. I'm just saying it's a problem that crops up in userspace with the single-level store approach (on paging hardware), while the Unix approach relegates it to the filesystem driver.


There was also KeyKOS and EROS. One problem with such systems is that data corruption can live forever if you're not careful (of course the same can happen with mmap which is our watered-down version of single-level storage).


This was how Multics was designed too. It was one of the important features left out when Unix was written, because it was very hard to do on a PDP-7.


This is one of the old OSes that I wish had been open sourced. I would really like to see the Pascal source code, and it would be neat to see something that wasn't a UNIX.


> This is one of the old OSes that I wish had been open sourced

Maybe one of these days, someone will convince the powers-that-be at HPE to do it. It has basically zero remaining commercial value. Open sourcing it would have PR benefits for HPE.


One could always look at the Domain Engineering manuals and the data structures described therein and the rest of the APIs and docs and reimplement it. That’s basically what Comer did with Xinu, Tanenbaum did with Minix, Linus did with Linux, etc., just with Unix—and it’s mostly how Aegis (the original name for Domain/OS) came about too.


Not that I have the ability to do that, but where are these Domain Engineering manuals?


A paper describing DSEE - an interesting distributed programming environment built on Domain/OS:

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.575....

Excerpts:

"DSEE is implemented as one program, with instances running at various nodes in the network."

On the history manager:

"DSEE can create a shell in which all programs executed in that shell window transparently read the exact version of an element requested in the user's configuration thread. The History Manager, Configuration Manager, and extensible streams mechanism (described above) work together in this way to provide a "time machine" that can place a user back in a environment that corresponds to a previous release. In this environment, users can print the version of a file used for a prior release, and can display a readonly copy of it. In addition, the compilers can use the "include" files as they were, and the source llne debugger can use old binaries and old sources during debug sessions. All of this is done without making copies of any of the elements."

On the configuration manager (sounds like system-wide build artifact caching):

"The CM maintains a derived object pool which holds several version of each object that was produced as the result of building a component named In the system model (e.g., binaries). Each derived object in the pool is associated with the ECT used to build it. When asked to build, the CM determines a "desired" BCT by binding the system model to the versions requested by the user's current CT. The CM then looks in the derived object pool to see If there ls a BCT that exactly matches the one desired. If a match is found, the derived object associated with that BCT is used. Otherwise, the component is rebuilt In accordance with the desired BCT, and the new derived object and BCT are written to the pool. In all cases, the user is given exactly what he asked for."


If you like DSEE, you should try Vesta, a version-tracking system which provides most of the things that were good about DSEE, but is free software on Linux: http://www.vestasys.org/


I really miss my DN3500 (68030 box with EISA bus and ESDI disks, from memory). Domain/OS (and Aegis before it) was a surprisingly fun OS, switching between Aegis, BSD, and SysV userlands, and it came with enough compiler tools for me to write an AutoCAD driver for a Summagraphics tablet I picked up around the same time. The windowing UI was surprisingly pretty too, and Apollo Token Ring with its whacko RAM-over-network architecture worked surprisingly well. It's a real shame HP swallowed them, but perhaps the moral of the story is not to try to ship 3 different OSes at the same time on the same hardware (and not to maintain a huge evolutionary fork like Aegis).

Edit 1: The DN3500 also came with a hardcopy of this PDF :)

Edit 2: From memory, it also came with a surprisingly good super-early guide to TCP/IP and the Internet, complete with a hardcopy of the HOSTS file in case you couldn't get it via FTP from Internic ;)


Back when I started uni, they had a whole room full of old Apollo Domain/OS machines. I think they were previously used for CAD or some EE stuff, but even then (late '90s) they were pretty much only used by people to do their email (pine, mostly).

Anyone got some actual war stories about the soft- and hardware?


I spent a lot of time in the computer lab with all sorts of Apollo Domain/OS machines.

You could always spot the newbs because they grabbed the first available workstation instead of the nice DN3500 or other 68030 based machines. And then the HP 425t machines started showing up and it was amazing.

I did mostly Spice and other text based "work" (https://en.wikipedia.org/wiki/ISCABBS), but there were some decent EDA and CAD graphical programs.


We bought a bunch of Apollo Domain machines for board-level CAD/CAE in the early 90s. At that time there was a Unix compatibility mode (like Cygwin) so I didn't need to mess with Domain/OS much, except for curiosity purposes and to run magic commands (like PowerShell). It reminded me of VMS.


I had an HP Apollo 425e, 68040 @ 25 MHz.

I believe it was one of the last models that could run Domain/OS, but it could also run HPUX, which is what I had installed...so I never experienced Domain/OS. I did get NetBSD running on it, but it didn't support the framebuffer, so no X11.


The HP-Apollo 9000-425e was in fact the last “true” Apollo in that it was the last system released that could run Domain/OS. If you still have the system, you can emulate an Apollo keyboard and mouse now using an Arduino or something, and install Domain/OS to try it out.

It also fully supports X11 on NetBSD now.

I have a few HP 9000-400 systems (425t, 433s, and 425e) and they run current NetBSD beautifully, even operating fully over the network. Only compiling large stuff from pkgsrc goes slowly due to paging over 10Base-T, as these systems max out at 128MB of RAM. (At least I can use a RAM disk on my server as a swap device at wire speed… Did anyone make a 100Base-T or gigabit card for EISA?)


Lots of EISA 100Base-T cards. Maybe the HP A4308A or 3COM 3C597 would have NetBSD drivers?



