Hacker News
Be: From Concept to Near Death (mondaynote.com)
96 points by WoodenChair 40 days ago | 76 comments

In 1996, I bought a Power Computing PowerTower 166 (a Mac clone, the second most powerful Mac one could buy at the time) and ran System 7.5 and then 7.6 on it. But I got bored and figured out how to install BeOS on a second partition. Holy cow! It was sooooo much faster. It had an OpenGL demo app, with spinning cubes, and you could drag and drop movies onto it and they’d play without stuttering as the cube spun around. It was a spare, simple, beautiful OS.

I say this often: watching a BeOS demo (from '97?) these days brings tears to your eyes. First, because it's so nicely designed it's beautiful. Second, because even today my laptop can't really do that.

Gotta say, my Power Computing was one of my favorite machines I've ever owned

Power Computing had, if I remember correctly, a one-hour long sale that doubled the amount of RAM you ordered (back when RAM was really expensive). I bought a great tower, pulled half the RAM and redistributed to my other desktop. Good times!

Not just the 604, but also the 66 MHz bus and the faaaast 7200 RPM Micropolis 2 GB SCSI hard drive... /swoon

I do kind of wonder whether I'd have learned Unix faster if I'd bought an SGI O2.

I never knew they had used the Amiga as their reference platform. I had so much hope in the mid and late 90s that Be might be the logical next platform and OS. So much potential. So of course they made little impact. :(

Windows was a remarkable disaster area that I could barely grow to tolerate, let alone love. Everything seemed to be a canonical example of how not to implement something.

> "taking the helm of Commodore"

Oh, if only. Anything to get rid of the idiot that ran them into the ground.

Be courted Amiga developers quite heavily. They would show up at events and declare loudly that the Amiga was a major inspiration and that Be was a modern Amiga in spirit. There was even a slogan floating around, something like "Amiga 1985 - Be 1995".

So many things were done on the Amiga before becoming products elsewhere...

> I never knew they had used the Amiga as their reference platform. I had so much hope in the mid and late 90s that Be might be the logical next platform and OS. So much potential. So of course they made little impact. :(

There were articles and possibly demos of Be stuff in the Amiga magazines my family subscribed to IIRC (I was about 10).

It was discussed as a possible replacement, seeing as Amiga was going nowhere under Commodore/Escom/Gateway/etc. QNX was also discussed in a similar way, and I remember getting a demo of QNX for PC on a bootable floppy.

Linux turned out to be a more popular replacement (for me, and for Future Publishing, who seemed to turn Amiga Format into Linux Format; at least many of the staff seemed to be the same).

It may already have been too late to save Commodore by 1990.

I don't think so. The factor that really killed C= beyond any hope of recovery was Motorola abandoning development of the 68k line of processors in favor of the PowerPC initiative, a few years later. And a LOT of stuff died as a result of that, not just the C= Amiga line. Though with Gassee at the helm, they might have been perceptive enough to pivot to the nascent ARM architecture, partnering with Acorn and picking up their abandoned Archimedes line, and thus owning the low-power gaming/multimedia space. (Keep in mind that the 3DO was in fact ARM-based, so the technical affordances were there.) But it's a long shot.

But they had already lost their performance edge by 1990. In 1985 the Amiga was unprecedented in its multimedia abilities and audio-visual performance. It was years ahead of the competition.

And then Commodore more or less just sat on it, making very minor incremental improvements, until by 1990 a 386 PC with VGA and a SoundBlaster pretty much closed the gap with a similarly priced and specced Amiga 3000.

That's why (well one of the reasons anyway) the next generation of games like Doom originated on the PC and not on the Amiga.

> That's why (well one of the reasons anyway) the next generation of games like Doom originated on the PC and not on the Amiga.

Well, for the specific example of Doom, the Amiga's main problem was its planar graphics: great for scrolling platformers and SEUs, terrible for raycasters, since plotting a single 1-byte pixel takes 8 writes (one per bitplane) in a naive implementation. There were a lot of clever hacks to address this, but none of them could quite close the gap.
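The cost gap is easy to see in a toy model in C (not actual Amiga chip-level code; the 320x200, 8-bitplane layout is just illustrative):

```c
#include <stdint.h>

#define WIDTH    320
#define HEIGHT   200
#define PLANES   8
#define ROWBYTES (WIDTH / 8)

/* Planar (Amiga-style): the 8-bit colour index of one pixel is spread
 * across 8 separate bitplanes, one bit per plane, so a naive plot is
 * 8 read-modify-write operations. */
static void plot_planar(uint8_t planes[PLANES][HEIGHT * ROWBYTES],
                        int x, int y, uint8_t colour)
{
    int     offset = y * ROWBYTES + x / 8;
    uint8_t mask   = (uint8_t)(0x80 >> (x % 8));

    for (int p = 0; p < PLANES; p++) {
        if (colour & (1 << p))
            planes[p][offset] |= mask;
        else
            planes[p][offset] &= (uint8_t)~mask;
    }
}

/* Chunky (VGA mode 13h style): one byte per pixel, a single write. */
static void plot_chunky(uint8_t *fb, int x, int y, uint8_t colour)
{
    fb[y * WIDTH + x] = colour;
}
```

A raycaster redraws essentially every pixel every frame, so that 8x write amplification (plus the read-modify-write) lands exactly where the Amiga could least afford it.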

SEU in this context refers to Shoot 'em Up (or shmups), 2D scrolling shooter games like Gradius or TwinBee.

That was down to management though. If they'd been bought by someone with more interest in what engineering were prototyping, I think it could have been saved, even that late. Even prospered with an engineering dept that had a budget.

Dave Haynie has written extensively on the C= suicide (as he describes it); I think there are even a few posts on Quora. Lots of projects that might have kept the edge were started, even prototyped and announced to developers.

Of course, once developers start migrating away, it takes far more effort to get them back, if you can at all.

> But they have already lost their performance edge by 1990.

Sure, but they had a robust community of gaming and software developers on their side. Hence why they should have pivoted to the lower-power niche of the market - that's precisely where their pre-existing community could still be a decisive factor. High-end systems (x86, PPC) were a whole other ballgame, even in the mid-to-late 1990s. The console space is even more proof of this - mass-market consoles in the 1990s and to some extent the early-2000s were a lot closer to the 3DO than to anything based on x86 or PPC.

Also, nitpicking, but Doom itself "originated" on the NeXT, not on x86, and the NeXT architecture was quite comparable to the Amiga both technically and in its focus on the multimedia space. If anything, that makes a better case for the Amiga having lost its technical "edge" at some point.

To continue the nitpicking: although Doom "originated" on NeXT, the original NeXT implementation was only meant as a debugging aid; it did graphics output by essentially emulating VGA in user space and didn't support sound at all.

Also, the NeXT is a more or less normal Unix workstation with an m68k CPU and a framebuffer; it does not have the convoluted architecture of tightly coupled multimedia accelerators that the Amiga has. And while the Amiga's architecture was a good way to get impressive performance in the '80s, it is also the reason the Amiga could not keep up in the '90s: removing all the NTSC-video-related stuff would have broken backward compatibility.

I was in university when the BeBox was released. One of my better-funded colleagues got one of them shipped to his dorm room. All the nerds went crazy over this thing. It was incredibly fast, had physical CPU activity lights (it was a dual-PowerPC machine), and offered a plethora of hardware connectors, including the "GeekPort", a set of GPIOs that made it really easy to interface with external hardware.

I still have my BeBox, and love it.

For the blinky lights, a MagicShifter suffices:


(I did the OS)

Lots of modern desktop PCs have case LEDs that could easily find use as system activity indicators if the Linux kernel (or a userspace HAL) supported them. Unfortunately, support seems to be really patchy.

I believe there is some work on standardizing the RGB LED interface going on as we speak:


Linux already has an LED interface; it's just that currently you can only give an LED a single brightness value from 0 to 255.

So an RGB LED would have to be represented by three entries in /sys/class/leds/

That's kind of inconvenient when those three entries are actually one RGB LED.
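Driving an LED through that interface is just a decimal write to a sysfs file. A minimal C sketch (the node paths an RGB LED would use are hypothetical, and the matching driver must be loaded):

```c
#include <stdio.h>

/* Write a brightness value (0-255) to a sysfs LED node such as
 * /sys/class/leds/<name>/brightness. Returns 0 on success, -1 on error. */
static int set_brightness(const char *node, int value)
{
    FILE *f = fopen(node, "w");
    if (f == NULL)
        return -1;

    int rc = (fprintf(f, "%d\n", value) < 0) ? -1 : 0;
    if (fclose(f) != 0)
        rc = -1;
    return rc;
}

/* With today's interface, one RGB LED means three separate nodes
 * and three separate writes, with no way to update them atomically. */
static int set_rgb(const char *r_node, const char *g_node,
                   const char *b_node, int r, int g, int b)
{
    if (set_brightness(r_node, r) != 0) return -1;
    if (set_brightness(g_node, g) != 0) return -1;
    return set_brightness(b_node, b);
}
```

The three-writes-per-colour-change shape of `set_rgb` is exactly the inconvenience being described.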

This is only how the LEDs are represented to user space, you still need a driver for the chips the LEDs are connected to.

The dual-PowerPC BeBox[1] I bought with savings from my campus job in college was the machine that made me "fall in love" with computers.

At the time I was so incredibly jazzed that I didn't need to "telnet" to the campus workstations to complete my Unix-based programming projects, since the machine came with a full standard Unix API and an IDE (CodeWarrior, IIRC). And compared to any other computer, the UI was mind-blowingly responsive under multitasking. It was also the platform I used for my first side project (a graphical IMAP email client).

I thought I was quite the tech rebel at the time, thinking I was on board in the nascent stages of this OS, but it seems that by the time I became a devotee, it had already suffered a major internal setback with AT&T's cancellation of the Hobbit CPU. I had always assumed (mistakenly) that they switched to PowerPC because it was "better".

[1] https://en.wikipedia.org/wiki/BeBox

Why is the big main hero screenshot Windows with a Be theme rather than actual BeOS?

Haiku still exists, and is actually really good now. I was running it for a while.

True, but is there any point to Haiku now that the Linux PREEMPT_RT patchset is just around the corner and will probably give us soft-realtime capabilities on a par with the old 16-bit computer OS's (Amiga, Atari ST, etc.) and with BeOS/Haiku itself? Sure, the default UIs (GNOME-Shell, Plasma) are nothing to write home about from a performance POV, but lightweight alternatives do exist and are still highly usable. I find that LXDE or Xfce are no trouble at all even on hardware that's a decade old or perhaps more (the one thing they don't really address is touch-screen focused use, where GNOME itself has a big, if serendipitous, headstart).

The big thing about BeOS was that everything was non-blocking - simply adding a kernel patch won’t fix all of the software which locks up any time I/O blocks, whereas the BeOS UI on 90s hardware was more responsive than, say, Windows 10 is today.

> simply adding a kernel patch won’t fix all of the software

That's tautologically true, but something like the RT preemption kernel patch is still a basic requirement for a truly responsive system. Once that's merged, you then have buy-in for that use case and can start fixing all the things that are broken in user-space, as well.

Additionally, there's a lot that can be fixed by simple tuning. There's no excuse for UI input locking up when a system is low on RAM, for example. Any tasks that are critical to input on the system's console-like channel (whatever that is - it might be a serial console, UI graphics, a network connection or even an audio input subsystem plus some speech-recognition AI) should be running with elevated priority, so that (1) they're always responsive, no matter what, and (2) when the system starts thrashing, they can still grab enough of whatever system resources are available (RAM, CPU, device I/O), so that the user can survey the situation and kill the offending process. Linux distributors are dropping the ball here - this sort of system-wide tuning is their job!
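A sketch of what that elevation looks like on Linux, using the standard sched_setscheduler(2) API (the SCHED_FIFO priority of 50 in the usage note is an arbitrary illustrative value, and real-time policies require CAP_SYS_NICE):

```c
#include <sched.h>

/* Put a console-critical process (compositor, input handler, etc.)
 * into the given scheduling class so it keeps getting CPU time even
 * when the rest of the system is overloaded. pid 0 means "this process". */
static int elevate(pid_t pid, int policy, int priority)
{
    struct sched_param sp = { .sched_priority = priority };
    return sched_setscheduler(pid, policy, &sp);
}
```

A distributor could apply this at boot, e.g. `elevate(compositor_pid, SCHED_FIFO, 50)`; pairing it with mlockall(2) would also keep those tasks from being paged out when the system starts thrashing.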

Developers could have used asynchronous interfaces for decades now and it would make the user experience substantially better with no kernel changes at all. There are reasons why they haven't but I don't see any reason to believe those will change with yet another real-time kernel patch.

> Any tasks that are critical to input on the system's console-like channel (whatever that is - it might be a serial console, UI graphics, a network connection or even an audio input subsystem plus some speech-recognition AI) should be running with elevated priority

This approach is prone to priority inversion, and it doesn't help with the underlying problem: programmers have written tons of synchronous I/O all over the place, and the call chains are often highly non-obvious. For example, most Linux desktop environments will freeze if you use an authentication provider that isn't instantly responsive, and that prevents things like interacting with menus or closing windows, because various low-level libraries check settings, which at some point means getting your list of groups, which is normally very fast because it's cached. I got some patches into pam_ldap years ago to set socket-level timeouts so it would recover from something like a single socket error, but the better solution would be fewer dependencies, and interfaces that don't have tons of different things happening on a single thread.
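The kind of socket-level timeout described can be sketched with standard setsockopt(2) options (a sketch of the general approach, not the actual pam_ldap patch; the two-second bound is illustrative):

```c
#include <sys/socket.h>
#include <sys/time.h>

/* Bound every recv()/send() on this socket so a hung server makes the
 * call fail with EAGAIN/EWOULDBLOCK after `seconds` instead of blocking
 * the caller (and whatever UI sits above it) indefinitely. */
static int set_io_timeouts(int fd, long seconds)
{
    struct timeval tv = { .tv_sec = seconds, .tv_usec = 0 };

    if (setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv) < 0)
        return -1;
    return setsockopt(fd, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof tv);
}
```

It only contains the damage, though: a timed-out lookup still stalls its thread for those two seconds, which is why fewer synchronous dependencies remains the better fix.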

> This approach is prone to priority-inversion

Priority inversion is one of the main things that are addressed by the PREEMPT_RT patchset - without a fix to the priority inversion problem, you don't really have a RT system. Aside from that, I agree that we should move towards having async interfaces as the default whenever slow I/O is a realistic possibility. I'm not sure how many people actually use something like PAM-LDAP, though.

Also the framework experience, which isn't really there in Linux, even with KDE, as it isn't the full stack like in Be or other desktop oriented OSes.

Look, I don't even like frameworks. A framework adds incredible amounts of complexity if you're trying to develop for the system, and acts as a single-point-of-failure from a social POV. (That's the issue many people have with systemd on Linux, for example - it addresses a number of issues that probably need to be addressed, but does so in an inevitably hackish way that ends up looking like a big ball of mud. Hopefully it'll eventually reach some degree of maturity and we'll be able to replace it with something that's more decoupled and more geared towards expressing mechanisms, not policies.) What does the Haiku "framework" experience provide that other OS's don't?

An integrated developer experience from top to bottom, just like using Kits on macOS/iOS, Frameworks on Android, .NET on Windows.

Not something where each computer is a special snowflake, with a fragmented developer experience around which libraries are even available on the system, and a lack of integrated tooling, forcing the trend of just bundling a web browser with the application.

Note that I included "desktop oriented OSes" in the same group as Be.

As for systemd, the very fact that Linux was late to the party of init replacements that commercial UNIXes already had says it all: it's a need that a large majority actually wants.

> Not something where each computer is a special snowflake with a developer fragmented experience about which libraries are even available on the system

AIUI, these days you can just develop for Flatpak (if you're making a proprietary application) and declare your dependencies as needed. But do note that Flatpak is not a kitchen-sink "framework" or "platform": it's just something that addresses a number of fairly well-understood issues with the prevailing, mostly FLOSS-oriented, development model. (One reason the issues are well understood by now is that other "solutions", such as the LSB effort, have been tried before and found wanting.) This is key to decreasing complexity and making the system as a whole more comprehensible and surveyable. The commercial UNIXes you point to are far closer to your view of "each platform being a special snowflake", and big frameworks make this issue worse; they don't ameliorate it.

Sure, the year of the GNU/Linux desktop is just around the corner; meanwhile, don't complain if your beautiful lack of frameworks only gives you Electron-based stuff.

Isn't most software on Haiku a Linux port anyway?

Not really. Most of the included desktop apps are specific to Haiku (or BeOS) AFAICT, though third-party apps do indeed more often than not have Linux versions. The userland looks like it might be GNU (lots of GNU components in there). The drivers IIRC are either written specifically for Haiku or ported from FreeBSD.

The UI appears to be a complete reimplementation of the BeOS UI, and so has nothing to do with GNOME/KDE. A great many of the underlying packages are from Linux, however.

> more responsive than, say, Windows 10 is today

Setting the bar kinda low aren't we?

It's a bit more than just a kernel. The whole UI is very light and responsive. I'd urge you to try it and see for yourself, you can run it under QEMU.

You can also run it off a LiveCD. Much better experience (please report if it doesn't boot).

I remember having a PC with BeOS loaded on it in the early 2000s (aside from an IBM PC 5150, we were not, uhh... with the times), and while I agree that it was quick, that could be chalked up to comparison with the Windows of the time, which already had a lot of legacy to contend with (and frankly was plenty fast anyway, at least when it came to basic UI).

Haiku is pretty rad but not so useful without hardware acceleration. I'd contribute code but I'm a hack who only knows higher level languages.

> Haiku is pretty rad but not so useful without hardware acceleration. I'd contribute code but I'm a hack who only knows higher level languages.

It can be hard to find the time to get productive in a new programming language. But if you have some spare time, this project could provide the motivation!

Are you involved with Haiku? I'm somewhat interested in helping.

Hello! Current Haiku developer here. A lot of us hang around #haiku on Freenode pretty regularly, and we also have haiku@ and haiku-development@ lists at Freelists. Please don't hesitate to introduce yourself and get your feet wet :)


I'm not involved in Haiku, but I imagine it's not hard to find a good way to contact the dev team.

I think Haiku's current status supports the "Near Death" assertion.

How so? It has an active community and has loads of important POSIX/Linux applications ported. There was a talk at FOSDEM about it only last weekend.

Because hardly anyone is actually using it for its (or Be's) stated purpose? Because it has a fraction of the users Be did (by percentage)? Hell, HN is pretty up on tech stuff, and yet whenever an article about Haiku is posted there are still a lot of people who have no idea what it is. And it has been in active development for over a decade.

If that isn't "Near Death", then what is?

Dead means no development is being done on it, so it cannot possibly work on modern computers. The original BeOS presumably hasn't seen any development for decades, but Haiku is actively developed and so very much alive. That means it works on current PCs and has the potential to become more widely used in the future (which BeOS does not).

"Nearly Dead" != "Dead". It is possible Haiku could start gaining ground and become something widely used in the future; however, it is more likely that the project languishes in relative obscurity before eventually being abandoned.

I've been following Haiku for a bit, it looks like development is pretty active.

> hardly anyone is actually using it


Kinda hard to prove a negative.

I've always wondered (but never read about) what the key people at Microsoft, Apple and IBM thought of the Amiga in the late '80s. Were they worried, powerless, did they dare dream that C= would make so many missteps? If anyone has any reading links on how the competition viewed the Amiga, I'd love to read about it.

IBM licensed Amiga tech for use in the OS/2 2.0 (and later) desktop. In fact, that's how we got ARexx; as part of the licensing arrangement, IBM granted Commodore-Amiga a license to use their REXX language.

So... IBM definitely thought there was something of value there.

What Amiga tech is that?

ARexx started as retail software that Wishful Thinking Development sold (like WShell). Commodore acquired it years later.

MS didn't want to collaborate on OS/2 any more. So IBM licensed some things for the GUI from Amiga that became the Workplace Shell. Possibly included BOOPSI. A REXX licence was indeed part of the deal, from what little I remember.


That's an extremely thinly sourced description. The WPS and BOOPSI were both object-based systems written in C, yes, but the WPS originated from the SOM work at IBM Austin: http://collaboration.cmc.ec.gc.ca/science/rpn/biblio/ddj/Web...

I developed for the Amiga and used OS/2, even on PowerPC machines. They did not look like each other. Although BOOPSI was object-based, it was still something bolted on top of the 1.x APIs. For example, BOOPSI had base classes for gadgets and images, but windows were an entity that lived outside altogether. OS/2 had the same problem: application windows were handles. But inside WPS, everything was a SOM object.

In the Amiga Workbench, everything was ad-hoc, even if some semblance of object-orientedness appeared with things like AppIcons, which required cooperation from the applications. They were still not objects: I am fairly confident that AppIcons were implemented through calls to workbench.library, not by creating a new BOOPSI object. I remember this stuff because I had lots of arguments with some very vocal OS/2 developers.

I still don't see what IBM would have needed from Commodore. It was actually the other way around: IBM extorted patent fees and cross licensing from Commodore. They had more patent lawyers in Boca Raton than C= had engineers in West Chester (my source: Dave Haynie, who dealt with such lawyers and had probably the most useful Commodore patents, the Zorro ones).

And as I mentioned, ARexx came out in 1987, in the 1.x days, as a commercial product, years before Commodore adopted it and before the OS/2 events mentioned. It had no license from IBM (what for?), since Will Hawes wrote it himself for the Amiga's architecture.

I don't quite see what they might have needed either; BOOPSI was a pure guess on my part after seeing how that link described it. My only contact with OS/2 was one PS/2 tower at work. I used it a little, and never encountered an API or compiler. In those days my experience was 95% Amiga, and just enough Windows to ensure that after the Amiga my career went back to Unix. After Intuition and BOOPSI, even ARexx, Windows just seemed too primitive and too much like hard work to develop for.

I do have some vague memory of IBM licensing something for OS/2 from the time. That weak reference was all my search turned up. Quite a few links of people talking about it, though, including a near duplicate of the conversation we're having on The Register. So if it's an urban legend, it definitely gained some legs somewhere. My CATS newsgroup archives, DevCon handouts and other stuff from that era are long gone.

That's not very helpful, sorry. :)

Commodore never had a competitive offering for the business market.

The one really unambiguously great thing that C= did was the custom chipset for graphics and sound. That was the area they should have continued to progress in; perhaps they would have ended up out-competing SGI and eventually inventing the 3D accelerator.

Commodore and Apple were the same size for a while, but Apple had a better PR spin. Steve Jobs always talked about IBM vs Apple, ignoring Commodore et al. And he was right to do so, it turns out.

While AT&T have always been jerks, having Steve Sakoman melt down because they lost a key supplier is totally beyond the pale.

Startups are, almost by definition, exercises in turning disasters into opportunities.

MIPS and ARM were quite well established by that time, and TI DSPs similarly. Redoing the hardware was certainly an option.

"Redoing the hardware was certainly an option."

And that's exactly what Be did, switching to dual PowerPC processors instead.

Gassée is one of the few people I have subscribed to on Medium. His rants on Apple/tech can be a little off, but he's a great writer and has lots of interesting stories.

Even though I know how this ends up (they move to PowerPC), I felt left hanging when AT&T canceled the Hobbit processor and they weren't sure what to do next. It was like the feeling you get at the end of a chapter of a great novel.

If you like Gassée's writing about Be, there's plenty more available in his articles for the Be Newsletter:


From that link, Gassé writing in 1997:

When Jobs hired Sculley, the company was on the way up. Today, it's losing customers, developers, money, and market share. [..]

Apple doesn't need a CEO, they need a messiah (or a crash test dummy). And any problem that requires walking on water as a solution is, you'll grant me, a problem ill-stated. Still, there may be a way to make the search for a new CEO easier

Only got to use the downloadable / bootable BeOS on CD / installable. Had a lot of fun on a P166 MMX in my dorm, and a P2-400 in greyscale at work (Nvidia TNT wasn't supported in color initially).

Sad to have watched BeOS go away. It was and still is pretty amazing.

I wonder what strategy might have succeeded in Be getting traction against Windows back then?

Free from day one?

The big problem for any competitor to Windows was that nothing had the driver support that Windows had. Steve Jobs solved that by supporting only his own hardware. Drivers were a huge issue.

I remember reading about Hitachi being prepared to preinstall Be as a dual-boot system - but then Microsoft threatened to pull their Windows licences. The anti-trust stuff wasn't just about browsers.

Interested to hear if Gassee talks about that in this series.

IIRC the big problem was Microsoft refusing to allow OEMs to ship machines with other operating systems.

> Free from day one?

That's great in theory, but how do you then pay the developers?

The income from selling BeOS wasn't ever covering a significant fraction of R&D costs so who cares?

Be was losing about $10M per year from the outset. It never came anywhere close to breaking even. The idea was, either this takes off (in which case you can make money a lot of different ways even if you give the core product away) or the threat that it might forces Apple to buy the whole thing, either way you get your money back and JLG gets what he wants, which is and has always been to be "right" even when he's monumentally and obviously wrong.

[ JLG led Apple product development when it was failing, then Be, which failed, and then the Palm spin-out that made an OS so lousy even _Palm_ never bothered using it in any products. The man is often cited as a "guru", presumably in the same sense as when some random guy is brought in by gullible actors or musicians who've suddenly got a lot of money ]

In the late 1990s it looked as though it would probably close its doors and the investors would get nothing (technically they'd get the source code, branding and so on, but those are basically worthless), which happens all the time with that sort of business.

But the dot com boom saved them. Not Be, the investors. An Offering was written which under normal circumstances would get laughed out of the room. Ha ha ha, you have a product that nobody uses, and you want a nine figure sum of money for a business with a failed Apple executive and a bunch of hackers, No. But at the time you could write blah blah blah "Internet" blah blah and people wouldn't bother reading past the bottom of the first page of your IPO because they already had their wallets open.

So now the original investors had their money back, and gradually the big institutional hitters (e.g. pension companies) could exit too: you take $50,000 of BEOS shares and sell them to a thousand Be fans at $50 a time, and when those shares turn into $4 of cash five years later, you're fine and the fans learn a valuable lesson about investment.

Be got a little bit of money left over. The idea was, pivot into making software for "Internet appliances". You know, the computer next to your stove, or that multi-purpose device in your gym which... oh right, yeah, no, those aren't a thing, the idea went nowhere and Be's business went down in flames. JLG got to go ruin Palm instead.

> The idea was, pivot into making software for "Internet appliances". You know, the computer next to your stove, or that multi-purpose device in your gym which... oh right, yeah, no, those aren't a thing

Well, they weren't a thing at the time, and couldn't be a thing given the severe lack of wireless, always-on, Internet connections. They definitely are a thing today, but too late to save Be.

