It started life as EPOC32, the operating system for "palmtop" devices made by UK-based Psion in the mid-90s; then Nokia and Ericsson decided they needed a smartphone OS that wasn't Microsoft's and bought into EPOC, creating Symbian sometime around 1998. It took a while for Symbian to get off the ground because the hardware wasn't there yet and the partners kept bickering, but by 2006 it was shipping on a hundred million devices and was perceived as very successful against Microsoft's Windows Mobile. A few years later, Symbian would be trounced into oblivion by iOS and Android.
The interesting thing about Symbian was that it had a rich native framework but wasn't even close to POSIX or Windows. You didn't even get the C standard library; instead the system was built on an idiosyncratic embedded-centric dialect of '90s C++. This focus on optimization at the expense of programmer convenience turned out to be a total dead-end once PC-style operating systems became viable on phones -- but at least it made for a distinctive development experience.
I'm surprised nobody has even attempted to pick it up, AFAICT. A very good OS in its day.
I think a lot of people start mashing years together when it comes to BeOS. Apple announced the NeXT acquisition in December 1996.
In December 1996, BeOS was still a developer preview with the first "real" release still about two years away. NeXTStep was already 8 years old.
In 1996 BeOS was at a completely different state of readiness and maturity than it was with R5 (which everyone remembers fondly).
USENET is full of posts, '96-'98, of individual developers triumphantly announcing ports of UNIX core utilities to the BeOS developer releases -- utilities that were already present and mature in NeXT/OpenStep.
Mac OS X Server 1.0 was released about two years after the deal was finalized and I imagine it would have taken about two years of really hard work just to make BeOS multiuser, nevermind porting all of the core utilities.
It took a couple more years (it was a somewhat jerky transition for many users) before a "usable" OS X was released, but there was a wide variety of software available for it from the beginning, easily ported from NS/OS, and features like Display PostScript, Objective-C, a mature set of APIs, and most importantly of all Project Builder were there from day one.
The software scene on BeOS in 1996 was... sparse.
BeOS was also optimized (probably over-optimized) for first impressions; the immaturity and architectural mistakes were hidden behind a facade of "OMG it's so fast and pretty". The BeOS GUI with its cartoony 32x32 icons was also a lot closer to MacOS than NeXTSTEP with its huge high-DPI GUI, so BeOS looked like "Mac done right" while NeXT was a foreign culture.
So yeah, NeXT was better but most Mac users had no way of knowing that at the time.
However, BeOS was far more stable/mature than its 'developer release' status implied. I remember how blown away people were at the dev release in 1996 by how solid it was, not just by how forward thinking its features were.
I don't think multiuser would have been a showstopper for release if BeOS had been acquired by Apple. They could have put out 2-3 versions of a single-user OS and few would have cared.
But Be, Inc. and BeOS really strike me as the kind of design, company, compromises, focus and leadership/team that I've come to associate with startups that are actually designed to be acquired by the executive's former employer after some time.
I would love to know if this is true, though in my experience, that is usually only known to the founders and executives except in the really obvious cases.
This is very common for "startups" founded by executives who come from a company they know is failing to execute on something critical.
Be smells like this.
Or by “someone else”, did you mean someone other than PalmSource?
Let’s be honest, though: most new OSes don’t enjoy much success or longevity. The only successful new arrival I can remember recently was Android, and it had the backing of Google, and was aimed at a relatively new market. It also borrowed heavily from established tech: Linux kernel, JDK and JVM.
I doubt there's more than a handful of companies that could have turned BeOS into something big, and of those, Apple was probably their best bet.
* BeOS was evaluated by Apple as a Copland replacement (along with Solaris, NT, and NeXTSTEP).
* MacOS X discarded DPS for Quartz so it didn't turn out to be much of a feature. But very few applications touched that directly so it wasn't much of a porting issue.
* There was more UNIX and AppKit-based software available for NeXTSTEP than for BeOS but nobody in the Mac world cared about that. Quark were the only company that really blew the transition, but they blew it badly.
Source: I worked on Pink/Taligent for six years. Prior to that I supported A/UX and MPW in MacDTS.
But the books, the books were straight up crazy and people reacted to them in the same unfortunate way that they reacted to "Design Patterns."
Other crazy OS projects that deserve mention on the list of failures are the Newton and Magic Cap.
Taligent also had quite a bit of influence on the field of unit testing, which I'm proud to have had a hand in. I've written about this in the past: https://shebanator.com/2007/08/21/a-brief-history-of-test-fr...
I'm a bit of a student of failure. If we want to really dive into deep failure, someone mentioned Workplace OS, which was not just a terrible project but a terrible idea (in the same way that NT's original concept of multiple-OS-personalities was wrt: OS/2 16 bit, for example, and POSIX, just taken to an entirely more crazy level).
For truly fun crazy, one has to step away from OS projects to things like graphics (for example, Fahrenheit and Talisman).
They all died except COM, which remains the basis for most APIs that Microsoft releases these days (even after a brief stumble with .NET where, if I remember correctly, they tried to kill COM). It's a great technology, and it's a shame that Microsoft didn't open it up and turn it into a truly cross-platform technology.
What didn't take over the world was this notion of object-oriented documents, which was what OpenDoc and Taligent were all about. This idea that content came with behaviour; you could embed an object from one app into another, e.g. a piece of an Excel table into a Word document, and the table would be live-updateable within Word, with all your interactions basically going between processes as COM calls, with the OO behaviour following the embedded content as it moved around, even when copy-pasted between documents. Very powerful, but super brittle. I used embedding a lot in the 1990s, trying to achieve what the PR told me should be feasible and easy, but it invariably ended with app crashes.
And really, the web/REST wound up being the object oriented document framework we were all looking for. It was terribly inefficient at first (and is still) but was architecturally simpler. The main issue is no one thought it would be possible to replace Windows with a cross platform GUI, and no one thought hypertext/hypermedia - a mostly academic concept at the time beyond HyperCard - would be that GUI.
And I would argue that CFPlugIn is only really inspired by COM; it's not the real thing. It's just IUnknown and the same class layout:
> The CFPlugIn model is compatible with the basics of Microsoft's COM architecture. What this means is that CFPlugIn Interfaces are laid out according to the COM guidelines and that all Interfaces must inherit from COM's IUnknown Interface. These are the only things that CFPlugIn shares with COM. Other COM concepts such as the IClassFactory Interface, aggregation, out-of-process servers, the Windows registry, etc... are not mapped.
And OpenDoc was developed more or less simultaneously with Pink, IIRC. But it's been a long time and I never worked on OpenDoc, so I might have the timing off a bit on this one.
From the Wikipedia page https://en.wikipedia.org/wiki/Workplace_OS: A University of California case study described the Workplace OS project as "one of the most significant operating systems software investments of all time" and "one of the largest operating system failures in modern times".
The other OSes you mentioned are all desktop OSes, and that market was already buttoned up by the 90s, as it largely still is today. They were up against much steeper odds.
The gnu project worked on a kernel - Hurd. It has been "in active development" since 1990. It's "just the missing piece."
Mach and Hurd were existing microkernels/projects before Linux development began.
Linux dominated because it (partly) worked and was accessible and a community formed around it. Hurd had none of those. It's the canonical proof of Richard Gabriel's prescient "Worse is Better (1991)" http://dreamsongs.com/WIB.html - "The good news is that in 1995 we will have a good operating system and programming language; the bad news is that they will be Unix and C++."
This is one of the few things my feeble ageing memory recalls fairly clearly :-) I desperately wanted to work on the Hurd, but they rejected me because I was a noob. Then I was desperately waiting for BSD to pass through the legal gauntlet. Then a colleague said, "What about Linux? They just got X working. It seems pretty good". It really came out of nowhere -- not necessarily due to good development (though there was lots of that), but rather because key early contributors were encouraged and enabled to participate by Linus. We were all desperately waiting, and many talented people just leaped at the opportunity to do something (not me, unfortunately -- I got sidetracked with work :-P).
In comparison to the rest of GNU, it was just a piece. A pretty big piece, but just a piece nonetheless. Keep in mind the amount of work that went into GCC and glibc, which at the time were comparable projects. Without those pieces, Linus would have had nowhere to start. Then all the userland tools -- again, without those I don't think anybody would have joined in. They would have waited for BSD. And those userland tools were really good. The first thing I did when I installed a new Unix box was install GNU. GNU was important in Unix land before anyone had ever heard of Linux. Sometimes I think younger people have no real concept of how much code was part of GNU. The goal of GNU was to give you a fully functioning POSIX system. One of the reasons we even have POSIX is because of the work of the guys working on GNU.
When we say that it's Linux with GNU, that's really to distinguish it from, say, Linux with Android. Or Linux with BSD (if anyone does that). I'm not actually sure how much GNU is regularly used in a Linux distro these days, but I still prefer it to alternatives (maybe I'm just old). Again, we're really talking about having a full POSIX compliant system and the kernel is just a part of that. Linux is super fantastic and I actually choose Linux over other possible kernels because of how good it is. But I'm never going to run Android on my desktop, no matter that it runs a Linux kernel. I'd rather not run Android on my phone, if I had any choice in the matter!
To be even more fair, I always thought we should have given X a bit more air time. Especially these days, it's important to me whether I've got X or Wayland running. But it was always a bit daft to think that people were going to be saying GNU/Linux. It's even more daft to think that people would say GNU/X/Linux or GNU/Wayland/Linux. It really doesn't roll off the tongue ;-)
People are criticizing Linus for lack of social skills but somehow he managed to start a big and loyal community. That's not a small feat.
If we're looking at userland, the only successful general-purpose operating systems in the last 30 years that started from scratch with no apps at all are iOS and Android. (Even there, arguably their killer app was backwards compatibility with the desktop web.)
All others, including Windows, OS X, and Linux, had solid backwards compatibility facilities supporting software written for DOS, Mac OS 9, and Unix, respectively.
In the 70s we went from "not good enough" to "good enough" in price, but once we got to the PC clone wars of the mid-80s it was hard to go much lower. Then in the 80s and early 90s we went from "not good enough" to "good enough" in user experience, with MacOS 7 and Win 95. The 90s OSes were all attempting to take a "good enough" user experience and make it excellent, and that's where they failed - most mainstream consumers don't care enough about excellence to make it worth learning a new OS. Instead the late 90s and early 00s took us from "not good enough" to "good enough" in information, with a big cost in user experience. The web sucked as a UI and still does, but it opened up literally billions of sites worth of content that a desktop user could only dream about. Now the web has created this whole new problem of trust, which cryptocurrencies solve, but at the cost of regressing 30 years in performance and usability.
For example, you can view it as the desktop stopped evolving, or that we moved to the browser as a virtual machine hypervisor running whatever environment you choose (particularly true with WebAssembly).
The 90s were exciting because the desktop was seen as the center of the computing universe; these days the browser is the center and the desktop is a "me too!" paradigm.
One could argue we got "stuck" or one could argue that evolution shifted to an internet first paradigm with more security in mind.
Again, there are some major points in favor of the current internet/web world. For any "program" you will use rarely, it isn't worth the cost to install it.
We have the browser. And we have the app store.
Arguably, for better or worse, the desktop shifts to a combination of these and probably converges with mobile for mainstream users.
Which I believe is the only operating system advertised in the Super Bowl.
IBM's Windows replacement wasn't so bad in my very limited usage of it. (I used it because the only drivers we had for a scanner at IBM were for an OS/2 Warp machine...)
That said, I can buy Windows NT as sufficiently distinct to be a unique 1990s OS. But Linux is clearly a *nix, and OS X is as well--UI and integration notwithstanding. Sure, you can pick NeXTstep and Linux and go "90s!" but they're clearly part of a much earlier tree.
And, if you bring in mobile, Android clearly derives from Linux. I don't know enough about iOS internals to identify where it sits in the OS tree.
And NeXTstep isn't a 1990s OS. It's a 1980s OS. v0.8 first demoed 12 October 1988 when the NeXT cube was launched; v1.0 shipped 18 September 1989.
NeXTstep influenced the Windows 95 UI in some ways -- the shaded 3D look, the idea of a fixed panel across the screen which could be both an app launcher and an app switcher.
But NeXT probably got that from Acorn RISC OS, which was shipping before NeXTstep was ever shown.
I've written about that, too: https://www.theregister.co.uk/Print/2013/06/03/thank_microso...
Amusingly, classic Mac OS ended up sort of close to that if you squint. It had a "nanokernel" that ran tasks preemptively and even supported multiple CPUs, but like with Copland the entire UI ran as a single "blue" task. The main difference from Copland as far as I can tell is that non-UI tasks were heavily restricted in the OS APIs that they could call; for unsupported APIs they had to send a message to the blue task and wait for a response. More details at http://mirror.informatimago.com/next/developer.apple.com/tec....
Also, how successful other OSes were or weren’t at attracting an install base during the 1990s is very tangentially related to why Copland failed.
Slightly related: I’ve always wondered what Apple might be like today if Gassée had succeeded in selling BeOS to Apple instead of Jobs with NeXTSTEP. Probably a nice headstone in the Silicon Valley graveyard next to Sun, SGI etc, but interesting to consider.
But who knows, maybe in that alternate timeline Sun succeeds in making a Solaris for the masses, and Linux becomes an obscure footnote. Some flavor of BSD ultimately becomes the oddball alternative to Windows (which dominates) and Solaris which is the runner-up developer favorite.
Apple almost died the first time mostly because Gassée went for margins instead of market share when he was at Apple.
The original pre-Google Android was a Blackberry-killer, and I don't reckon that was a good way to go.
You can see some early prototype devices here:
As usual for Ars, it has a far more detailed history, with pre-1.0 screenshots:
You can see that, before the iPhone, it started out as a Blackberry clone, or something like it. And that, IMHO, was far too geeky a toy to change the world as the iPhone did.
I think the answer to why some platforms failed and others didn't is a much simpler one: partly aggressive business tactics from their CEOs, and partly luck.
Linux is an outlier there, but I think you can substitute the GPL for the CEO effect; however, the luck element is still relevant.
If there is one thing the history of computing teaches us, it’s that a better product doesn’t mean a more successful one.
These guys: https://en.wikipedia.org/wiki/Access_(company)
Still alive, to quote GLaDOS. It provides the Kindle web browser, for instance.
Second, BeOS has been reborn as Haiku, which is also very much alive, and includes what Be code it legally can (the Tracker, mainly).
Haiku is IMHO the most complete/most interesting desktop FOSS out there. Haiku is now self-hosting and recently entered beta, after a long gestation. There's very little manpower behind it, so progress is slow, but it is moving.
So, significant influence, I think it's fair to say. BeOS shipped, it sold, I reviewed v5 and it remains my favourite x86 OS ever written. (Yeah, I'm biased. Sue me.)
Haiku is an interesting one. I'd put it in the same category as ReactOS. It's successful in the sense that it's an open source project that's under active development. However, they're still essentially hobbyist platforms, so I wouldn't even rank them successful compared to Linux on the desktop (e.g. Ubuntu, Fedora, etc.), let alone compared to commercial platforms like Windows or OS X.
It really is a great shame BeOS wasn't more successful though; it was an amazing platform (even without framing it in the context of the shit that was around at the time: Windows 9x and Mac OS 9). Sadly for Haiku, desktop computing has moved on and I just don't think there's any need for a classic BeOS desktop any more.
Linux was long the go-to OS for cheap servers running on cheap hardware.
When the dominant computing paradigm shifted to large arrays of cheap boxes (map-reduce -> Hadoop -> cloud) Linux was in the right place at the right time.
Without wanting to start a flamewar, I honestly think it was the GPL licensing that made Linux what it is. BSD was more mature, arguably better designed, and already in use and battle tested. But the GPL forced collaboration a little more, and I think that really appealed to hackers.
(I’m not arguing that GPL is better nor worse than BSD/MIT/whatever. I have no strong allegiances with either side of the camp)
Some were ported to Intel. Others (HP-UX) became dedicated big-iron operating systems. HP-UX and Solaris are still being updated and used.
Solaris isn't dead. Oracle released version 11.4 in August this year:
The article you linked to doesn't say Solaris is gone either. It mentions large numbers of layoffs, but acknowledges that there is still a Solaris dev team in place (even if a significantly smaller one).
(Disclosure: Former Oracle employee, although I never worked on Solaris.)
I realize I really don't know much about A/UX at all. I didn't even know it could run classic MacOS apps. Does anyone have a link to more about the OS? I always assumed it was just a clone of System V, but if it could run classic MacOS apps, that meant it was more than just that.
The Mac compatibility stuff (done at Apple) essentially ran in a single Unix thread - really a VM for the Mac OS 7 world - it ran in user mode (Mac OS apps usually ran in kernel mode) and emulated exceptions.
I’m working on a new Ethernet card for the Mac SE/30, and I’d love to be able to get it working with A/UX 3.
If you had any pointers on how one could write an Ethernet driver for A/UX I’d be very appreciative! :-)
Sadly it's not as functional as we would like. I actually wrote it for UniSoft as a way we could sell drivers without building kernels every time. Someone at Apple saw it and not only demanded we include it in A/UX but also demanded that they own it... so I wrote another one for Apple; it worked differently and was barely functional. Apple could have had the original, better one for free if they'd dropped the demand that they owned it.
I've got the card sending & receiving packets properly now, but having a few issues with the CPLD, making the machine crash sometimes when accessing the card. Hopefully once that's fixed I can make a rev2 of the board and release some schematics & drivers.
I could see it being a case of wanting something shiny and new. Thanks for your response!
After we handed it over, the group who did the UI work were pretty small within Apple - I think it was mostly politics. My view of Apple in those days (and the few years after) was that everything was politics; I remember the firewire guys coming around and shilling for supportive developer comments to try and keep their project afloat at one point. I'd guess that 2/3 of every cool thing designed at Apple got shelved, and people who had poured several years of their lives in would walk.
A/UX died more slowly; switching to the PPC killed it. Apple decided not to do a Unix port (we probably could have done one faster than getting the MacOS working).
That said, it is remarkable how the two sides work together. It's not a perfect union because X apps still have to come up in a dedicated X server which runs on the Mac side, so even graphical apps don't come together seamlessly. But even within its limitations it presents a compelling illusion of a unified whole and Commando is a great way to discover command line options.
You need to track down roms and install media.
I actually have an A/UX coffee mug on my shelf complete with a phone number to call for more information (no URL or email :-))
It's a repurposed Quora answer; the original question might clarify what I was answering: https://www.quora.com/Why-was-Apple-unable-to-complete-Copla...
I recently learned that the ribbon used in Office is proprietary to Office itself and wasn't/isn't used in other products - not even in other Microsoft products, which use a different ribbon, which even behaves differently.
From Nathan Lineback's screenshot gallery:
This is pretty much the answer. NeXT was Jobs' baby and he was happy to deploy all the tech through Apple when he came back. It worked out really well for them. Copland dev was also lagging and pre-Jobs (return) Apple had a decided lack of ability to ship.
I've never seen the innards of the above technologies, but to the extent that this passage gives the impression that the technologies that were cut (and one could add QuickDraw 3D and QuickDraw GX to the list) were the least modern and future proof, I think that's exactly backward. It's largely the most modern technologies that were cut, and it's the crufty ancient APIs that made it into Carbon.
Something like OpenDoc would probably have been reasonably portable, given that it was based on IBM technologies. OpenTransport was based on System V streams, GameSprockets was based on a QuickTime stack which largely survived for some time.
Presumably those decisions were made because the new APIs, gorgeous as they were, didn't have major adoption yet, and Apple desperately needed to focus.
An end of 1995 article:
"APPLE'S COPLAND: NEW! IMPROVED! NOT HERE YET!"
"Says one recently departed Apple engineer: ``There's no way in hell Copland ships next year. I just hope it ships in 1997.''"
One year later:
"December 20, 1996: Apple Computer buys NeXT, the computer company Steve Jobs founded after leaving Cupertino a decade earlier."
A little more than two years after that, already 1999:
"Mac OS X Server 1.0, released on March 16, 1999, is the first operating system released into the retail market by Apple Computer based on NeXT technology."
At one of the Apple developer conferences, people booed and criticized the new OS's unimpressive capabilities during a demo/slideshow, prompting Amelio to come back on stage and promise to "tack on" symmetric multiprocessing. For an OS that was supposedly mere months away from release...
Essentially they were promising something comparable to NT 4, from an organization with a fraction of the team Microsoft used to deliver it.
Atari MiNT! It was an attempt to bolt UNIX semantics on top of TOS, which itself was already a weird mashup of CP/M and DOS. It ran on 68k ST boxes, and was about as bonkers as you'd expect, in ways that I can summarize with the pathname "U:\DEV\NULL".
Strategy feels similar - make things compatible enough and force apps to adapt somewhat. Of course this is a gross oversimplification, but who knows.
* Since Classic MacOS (OS 9 and below) didn't have a command line, it had GUIs for tweaking system settings. Better yet, it had a budget for preventing user interface issues in the first place. The user experience on Classic MacOS was simply better than anything we have today, on any platform (including iOS - and yes I realize this is subjective). The flip side is that the platform evolved faster until the late 2000s because developers could tinker more freely. Since the vast majority of users are not programmers, I don't think this was a win. To me, something priceless was lost, that may never be regained even with the incubator of the web pushing the envelope.
* I often wish that Apple had made a Linux compatibility layer. That entire ecosystem of software is simply not in the Mac fanbase's radar. This isn't such a huge issue now with containerization, but set everything back perhaps 10-20 years. Apple did little to improve NeXT (to make it something more like BeOS, or the Amiga). We really needed an advanced, modern platform like Copland or A/UX like the article said. But in the end, Steve Jobs knew that didn't really matter to like 99% of users, and he was probably right. Still, I'm in that lucky 1% that sees the crushing burden of console tool incompatibilities and an utter lack of progress in UNIX since the mid 90s.
* Much of the macOS GUI runs in a custom Apple layer above FreeBSD (rather than using something like X11). I'm not really convinced that the windowing system is that optimized, because it used to use a representation similar to PDF. So for example, I saw weird artifacts and screen redraws back when I was doing Carbon/Cocoa game programming, especially around the time OpenGL was taking off. Quartz is powerful but I wouldn't say it's performant. A 350 MHz blue & white iMac running OS X had roughly the same Finder speed as an 8 MHz Mac Plus running System 7 or a 33 MHz 386DX running Windows 95. Does anyone know if the windowing system is open source?
I could go on, in deeper detail, but it's futile. I think that's what I truly miss most about Classic MacOS. If you ever watch a show like Halt and Catch Fire, there was a feeling back then that regular folks could write a desktop publishing application or a system extension (heck whole games like Lunatic Fringe ran in a screensaver) and you could get Apple's attention and they might even buy you out. But today it's all locked down, we're all just users and consumers.
I still love the Mac I guess, and always come back to it after using the various runner ups. But I can't get over this feeling that it stopped evolving sometime just after OS X came out, almost 20 years ago. There is this gaping void where a far-reaching, visionary GUI running on top of a truly modern architecture should be. All we have now is a sea of loose approximations of what that could be. I wish I knew how to articulate this better. Sorry about that.
See https://en.wikipedia.org/wiki/OS/2#1990:_Breakup for more information about how they diverged over time.
"The family link between OS/2 and Windows NT"
"Follow-up: the family links between DOS, OS/2, NT and VMS"
There's a citation in that 2nd blog post. Well, there are lots, but the IT Pro Mag one goes into detail about the OS/2 and VMS connection.