BeOS: The Alternate Universe's Mac OS X (hackaday.com)





Ah, memories. Used to use BeOS as my primary OS for a year or two. I think it's the only OS I ever found to be truly intuitive and pleasant to work with.

That said, I don't think the world would be in a better place had Apple chosen Be over NeXT. The elephant in the room is security: NeXTSTEP, being Unix-based, has some amount of security baked in from the ground up. BeOS didn't; it was more akin to Windows 95 or classic Mac OS on that front. Consequently, I doubt it could have made it far into the 21st century. It would have died unceremoniously in a steaming pile of AYBABTU. Taking Apple with it, presumably.


I used to work for Be in Menlo Park in a previous life and I can confirm that the code base quality would have made for a very bad outcome for Apple. Security was the least of the numerous serious issues. That said, BeOS somewhat still exists in spirit, as a lot of folks from Be went on to build/contribute to Android.

> a lot of folks from Be went to build/contribute to Android.

Does that include the quality and security perspective as well? ;-) j/k

Having never crossed paths with a former Be employee before, __thank you so much__ for your contribution. BeOS was so instrumental to my perspective on computing and operating systems (and potentially the conception of my disdain for what Microsoft did to the world of operating systems around the turn of the century).

From a user perspective, BeOS was nearly perfect. Great UI and utilities, POSIX command line, so fast and responsive. The "install to Windows" option was amazing for trying things out. BeFS was wonderful (it's nice to see Mr. Giampaolo's work continue in macOS).


> a lot of folks from Be went to build/contribute to Android.

That's correct: the IPC in AOSP, Binder, is basically borrowed from BeOS.


I too used to work at Be (Hi!) as well as developing applications for BeOS. I also worked at Apple on various releases of OS X. NextStep was far ahead of BeOS on multiple fronts. BeOS was a lot of fun to work on, but it only scratched the surface of what was needed for a truly commercial general-purpose OS. If Apple had acquired Be instead of NeXT, who knows what the world would be like today. Apple ended up with a large number of former Be employees as well (some directly and others via Eazel).

I can never let a thread about BeOS go by without adding my two cents, because I also worked at Be in Menlo Park, back in the day. (I went down with the ship and got laid off as they went out of business.)

I was sore about it at the time, but I agree that Apple made the right decision by choosing NextStep over BeOS. If for no other reason, because that's what brought Jobs back. It's hard to imagine Apple making their stunning comeback without him.


Thanks a lot! I ran BeOS full-time for a few years (R3/4/5) and I'm looking at a BeOS "the Media OS" poster on my wall here. Fond memories!

Care to share where you got the poster?

It was not in the box. Back then, it was still quite difficult to get hold of an actual R3 box here in Europe. There was one official reseller here in the Netherlands and I actually bought their official demo machine: the famous first dual-processor Abit BP6 with 2x 400MHz Celeron processors. When picking it up in their office I spotted the poster and asked if I could have it. Still got a T-shirt and a hat too ;-).

I vaguely remember it being in the box (bought R4, R4.5, and R5).

And apparently a couple Amiga gurus made their way to Be (see: Fred Fish).

I’d always heard that after Amiga (and Be) many decided to opt for Linux for philosophical reasons.


Which is ironic, given that I have yet to see a GNU/Linux-based hardware setup that matches the experience, hence why I went back to macOS/Windows, which offer a much closer multimedia experience.

Wow! Thanks so much for working on BeOS. This was a super fun OS to use.

I'm curious what sort of issues you have in mind. I was never very familiar with BeOS, but from what I understood the issue with it was more that its responsiveness came from very heavy use of multi-threading, which also made it very hard to write robust apps for it since, in effect, all app code had to be thread-safe. App devs found that condition too hard to handle.

Can I assume that the quality issues were somewhat related to that? BeOS devs found it no easier to write thread safe code in C++ than app devs did?
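
For a sense of what "all app code had to be thread safe" meant in practice, here is a minimal sketch of the locking discipline the Be API imposed, written from memory of that API rather than checked against a real build: every BWindow ran its own message-loop thread, so any other thread touching the window's views had to take the window's lock first.

    // Hedged sketch of the BeOS pattern described above: each BWindow runs in
    // its own thread, so code running elsewhere must lock the window before
    // touching its child views. Names are from the Be API as I recall it.
    #include <Window.h>
    #include <StringView.h>

    void UpdateStatus(BWindow* window, BStringView* label, const char* text)
    {
        if (window->Lock()) {        // block until we own the window's looper lock
            label->SetText(text);    // safe only while the lock is held
            window->Unlock();        // let the window's own thread run again
        }
    }

Forgetting a Lock()/Unlock() pair like this (or holding the lock too long) was an easy way to deadlock or corrupt the UI, which is the difficulty being described.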


I’m the guy who left a case of champagne at the office one weekend, to celebrate an early release.

Thanks for the memories.


“That said, I don't think the world would be in a better place had Apple chosen Be over NeXT.”

Yes. Except that it wasn’t acquiring NeXTSTEP that saved Apple’s skin; it was acquiring Steven P Jobs.

True, version 1 had been rough and flakey as hell, and honestly really didn’t work all that well.

But Steve 2.0? Damn, that one could sell.


NeXTSTEP pretty directly evolved into iOS, though, so it was certainly a significant asset in the acquisition, too.

True, but a technology is only a means to an end, not an end itself. What sells is product.

You may have the finest tech on the planet—and that means precisely squat. What counts is putting bums on seats. Your seats. And keeping them there. Lumps of tech are just a vehicle for that; to be used, abused, chewed up, and/or discarded on the road(s) to that end.

Apple could have done better; they certainly did plenty worse (Copland, Taligent, the first Mac OS).

As it turned out, NeXTSTEP proved it was indeed “good enough” to fit a pressing need at the time; and the rest was just hammering till it looked lickable enough for consumers to bite. All it needed was a salesman to shift it—and Steve 2.0 proved to be one of the greatest salesmen in modern marketing history.

That’s what made the difference between selling a tech to a million dyed-in-the-wirewool nerds, and selling tech to a billion everyday consumers. And then up-selling all of those customers to completely new worlds of products and services invented just for the purpose.

..

Want to create a whole new device? Hire Steve Wozniak.

Want to create a whole new world? Oh, but that is the real trick.

And Steve Jobs gave us the masterclass.

..

Had Steve started Be and Jean-Louis built NeXT, we would still be in the exact same situation today, and the only difference would be chunks of BeOS as the iPhone’s bones instead. Funny old world, eh? :)


I'm not sure I've ever encountered someone so invested in the "great man" theory of history.

Jobs was obviously talented, but assuming no matter where he went he would have had the same level of success is discounting a lot of luck in how everything lined up, and who was available to help bring to market all the things Jobs is famous for. There's no guarantee the hundreds or thousands of people that were also essential to the major successes of Apple would have been around Jobs had he stayed at NeXT. Those people deserve respect and recognition too.


You forgot that his family ended up the largest shareholder of Disney, and not because Steve got Apple. He was VERY successful, to the point that he even gave up getting anything but a private jet. That is billions of course, but that is not success. What is?

And unlike v1, v2 seems better on a human level as well. We do not need a saint. He still parked in spaces reserved for the handicapped, I guess. But let us admit, it is not just one for all. But all for one.


ISTR a tale of Legal keeping a large slush fund from which to pay off all the ex-Apple-employees that Steve 2.0 would straight tell to their face to fuck off. Just because that is what worked best for him†. :)

“But let us admit, it is not just one for all. But all for one.”

Damn straight. Epically focused leadership.

--

(† For all others who aspire to build their own businesses, there is HR procedure and askamanager.org—and do not for the life of you ever bypass either!)


> Epically focused leadership.

Just to support that, I remember hearing a story told by Larry Ellison (they were apparently neighbours for a while), where he would pop over to see Steve and would be subjected to the 100th viewing of Toy Story, with Jobs obsessively pointing out every new tiny improvement they'd made in the story or graphics.

Epically focused indeed.


“Those people deserve respect and recognition too.”

ORLY? Name them.

--

Not “great man”. Great vision.

Geeks tend massively to overrate the importance of technical aptitude, which is what they’re good at, and underrate everything else—business experience, sales skills, market savvy, and other soft skills—which they’re not.

Contrast that with someone like Jobs, who understood the technical side well enough to be able to surround himself with high-quality technical people and communicate effectively with them. But make no mistake: they were there to deliver his vision, not their own.

Tech-exclusive geeks are a useful resource, but they have to be kept on a zero-length leash lest they start thinking that they should be the ones in charge since they know more about tech than anyone else. The moment they’re allowed to get away with it, you end up with the tail-wagging-the-dog internecine malfunction that plagued Sculley’s Apple in the 90s and has to some extent resurfaced under Cook.

Lots of things happened under Jobs 2.0. That was NEVER one of them.

..

Case in point: just take the endless gushing geek love for Cook-Apple’s Swift language. And then look at how little the iOS platform itself has moved forward over the 10 years it’s taken to [partly] replace ObjC with the only incrementally improved Swift. When NeXT created what is now AppKit, it was 20 years ahead of its time. Now it’s a good ten behind, and massively devalued to boot by the rotten impedance mismatch between ObjC/Cocoa’s Smalltalk-inspired model and Swift’s C++-like semantics.

Had Jobs not passed, I seriously doubt Lattner’s pet project would ever have advanced to the point of daylight. Steve would’ve looked at it and asked: How can it add to Apple’s existing investments? And then told Lattner to chuck it, and create an “Objective-C 3.0”; that is, the smallest delta between what they already had (ObjC 2.0) and the modern, safe, easy-to-use (type-inferred, no-nonsense) language they so pressingly needed.

..

Look, I don’t doubt eventually Apple will migrate all but the large legacy productivity apps like Office and CC away from AppKit and ObjC and onto Swift and SwiftUI. But whose interest does that really serve? The ten million geeks who get paid for writing and rewriting all that code, and have huge fun squandering millions of development-hours doing so? Or the billion users, who for years see minimal progress or improvement in their iOS app experience?

Not to put too fine a point on it: if Google Android is failing to capitalize on iPhone’s Swift-induced stall-out by charging ahead in that time, it’s only because it has the same geek-serving internal dysfunction undermining its own ability to innovate and advance the USER product experience.

--

TL;DR: I’ve launched a tech startup, [mis]run it, and cratered it. And that was with a genuinely unique, groundbreaking, and already working tech with the product potential to revolutionize a major chunk of a trillion-dollar global industry, saving and generating customers billions of dollars a year.

It’s an experience that has given me a whole new appreciation for what another nobody person starting out of his garage, and with his own false starts and failures, was ultimately able to build.

And I would trade 20 years of programming process for just one day of salesmanship from Steve Jobs’ left toe, and know I’d got the best deal by far. Like I say, this is not about a person. It is about having the larger vision and having the skills to deliver it.


Jobs was far more of a "tech guy" than either Sculley or Cook. He understood the technology very well, even if he wasn't writing code.

I would also say, Jobs had a far, far higher regard for technical talent than you do. He was absolutely obsessed with finding the absolute best engineering and technical people to work for him so he could deliver his vision. He recognized the value of Woz's talents more than Woz himself. He gathered the original Mac team. If he had, say, a random group of Microsoft or IBM developers, the Mac never would have happened. Same with Next, many of whom were still around to deliver iOS and the iPhone.

Your take is like a professional sports manager saying having good athletes isn't important, the quality of the manager's managing is the only thing that matters.


“Your take is like a professional sports manager saying having good athletes isn't important, the quality of the manager's managing is the only thing that matters.”

Postscript: You misread me. I understand where Jobs was coming from better than you think. But maybe I’m not explaining myself well.

..

When my old man retired, he was an executive manager for a national power company, overseeing the distribution network. Senior leadership. But he started out as a junior line engineer freshly qualified from EE school, and over the following three decades worked his way up from that.

(I still remember those early Christmas callouts: all the lights’d go out; and off into the night he would go, like Batman.:)

And as he later always said to engineers under him, his job was to know enough engineering to manage them effectively, and their job was to be the experts at all the details and to always keep him right. And his engineers loved him for it. Not least ’cos that was a job where mistakes don’t just upset business and shut down chunks of the country, they cause closed-coffin funerals and legal inquests too.

--

i.e. My old man was a bloody great manager because he was a damn good engineer to begin with. And while he could’ve been a happy engineer doing happy engineering things all his life he was determined to be far more, and worked his arse off to achieve it too.

And that’s the kind of geek Steve Jobs was. Someone who could’ve easily lived within comfortable geeky limitations, but utterly refused to do so.

’Cos he wanted to shape the world.

I doff my cap at that.


“Jobs was far more of a "tech guy" than either Sculley or Cook.”

Very true. “Renaissance Man” is such a cliché, but Steve Jobs really was one. Having those tech skills and interests under his belt is what made him such a fabulous tech leader and tech salesman; without that mix he’d have just been one more Swiss Tony bullshit artist in an ocean of bums. (Like many here I’ve worked with that sort, and the old joke about the salesman, the developer, and the bear is frighteningly on the nose.)

But whereas someone like Woz loved and built tech for its own sake, and was perfectly happy doing that and nothing else all his life, Jobs always saw tech as just the means to his own ends: which wasn’t even inventing revolutionary new products so much as inventing revolutionary new markets to sell those products into. The idea that personal computers should be Consumer Devices that “Just Work”; that was absolutely Jobs.

And yeah, Jobs always used the very best tech talent he could find, because the man’s own standards started far above the level that most geeks declare “utterly impossible; can’t be done”, and he had ZERO tolerance for that. And of course, with the very best tools in hand, he wrangled that “impossible” right out of them; and the rest is history.

Woz made tech. Jobs made markets.

As for Sculley, he made a hash. And while Cook may be raking in cash right now, he’s really made a hash of it too: for he’s not made a single new market† in a decade, while Apple’s rivals—Amazon and Google—are stealing the long-term lead that Jobs’s pre-Cook Apple had worked so hard to build up.

--

(† And no, things like earpods and TV programming do not count, because they’re only add-ons, not standalone products, and so can only sell as well as the iPhone sells. And the moment iPhone sales drop off a cliff, Cook’s whole undiversified house of cards collapses, and they might as well shut up shop and give the money back to the shareholders.)


I hear you, I do, but here's another perspective: Jobs without Wozniak wound up being California's third-best Mercedes salesman.

And neither of them would've mattered a jot if they were born in the Democratic Republic of the Congo, or if they were medieval peasants, or if Jobs hadn't been adopted, or or or ...

Luck is enormously influential. There are thousands of Jobsalikes per Jobs. Necessity isn't sufficiency.


I think Steve Jobs The Marketing and Sales Genius is an incorrect myth.

Jobs was an outstanding product manager who sweated all the details for his products. And in contrast to Tim Cook, Jobs was a passionate user of actual desktop and laptop computers. He sweated the details of the iPhone too, but his daily driver was a mac, not an iPad. Cook is less into the product aspect, and it really really shows. Cook is a numbers and logistics guy, but not really into the product.

That's a thing I think Apple has fixed recently with some reshuffling and putting a product person (Jeff Williams) in the COO role. The COO role is also a signal that he'll be the next CEO when Tim Cook retires.

To be clear, I don't disagree that Jobs was a great marketer. But that stemmed from his own personal involvement with the product design of the mac--and later the iOS devices--rather than some weirdly prodigious knack for marketing.


> You may have the finest tech on the planet—and that means precisely squat.

You shouldn't talk about Sun like that.


NeXTSTEP appears to have first gotten incorporated thoroughly into the OS X codebase. Browse through the Foundation library for the Mac - https://developer.apple.com/documentation/foundation/ . Everything that starts with NS was part of NextStep.

It didn't get 'incorporated'.

OSX/macOS/iOS is the latest evolution of NeXTStep/Mach which originated in the Aleph (and other) academic kernels.

(of course OS's evolve pretty far in a few decades...)

(https://en.wikipedia.org/wiki/Mach_(kernel))


My understanding was always that NeXTSTEP served as the foundation of OS X, and while it certainly got a new desktop environment and compatibility with MacOS's legacy Carbon APIs, it was essentially still NeXTSTEP under the hood.

Yes. That is all that the NS... prefix meant.

I always thought that, too.

It's wrong.

Original NeXT classes were prefixed NX. Then NeXT worked with Sun to make a portable version of the GUI that could run on top of other OSes -- primarily targeting Solaris, of course, but also Windows NT.

That was called OpenStep, and it is the source of classes with the prefix NS -- standing for NeXT/Sun.

https://en.wikipedia.org/wiki/OpenStep#History

This is why Sun bought NeXT software vendor Lighthouse, whose CEO, Jonathan Schwartz, later became Sun's CEO.

Unfortunately for NeXT (and ultimately for Sun), right after this, Sun changed course and backed Java instead.


> Everything that starts with NS was part of NextStep.

Not quite. Everything in Foundation gets the NS prefix because it's in Foundation; only a fraction of it came directly from NeXT.


Yeah, Rhapsody > Mac OS X Server 1.0 > Mac OS X > iOS which was literally described as being "OS X" when it first launched.

Security from what? Do user accounts really provide much benefit in the personal computing space? Where the median user count is 1?

Neither OS had the kind of security that is really useful today for this usecase, which is per-application.


But a bunch of the methods we have for securing, say, mobile phones, grew out of user accounts.

Personally I don't know Android innards deeply, but when I was trying to back up and restore a rooted phone I did notice that every app's files have a different owner uid/gid, and the apps typically won't launch without that set up correctly. So it would seem they implemented per-app separation in this instance by having a uid per app.
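
As a rough illustration of that observation (not Android source code; the package path below is a hypothetical example), the per-app separation is visible as ordinary Unix file ownership:

    // Hedged illustration: on a rooted device each app's data directory is
    // owned by that app's own Linux uid/gid, which is what the per-app
    // sandbox rests on. The package path is a made-up example.
    #include <sys/stat.h>
    #include <cstdio>

    int main()
    {
        struct stat st;
        if (stat("/data/data/com.example.app", &st) == 0) {
            // Typically prints an "app uid" like 10123 rather than the uid
            // of the person holding the phone.
            std::printf("owner uid=%u gid=%u\n",
                        (unsigned)st.st_uid, (unsigned)st.st_gid);
        }
        return 0;
    }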

Imagine a world where Google had chosen to build on a kernel that had spent many decades with no filesystem permissions at all. Perhaps they'd have to pay the same app compatibility costs that Microsoft did going from 9x to NT kernel, or changing the default filesystem to ACL'd-down NTFS.


Then you'd maybe get something like iOS, where the POSIX uid practically does not matter at all, and the strong security and separation is provided by other mechanisms like entitlements...

Someone else pointed out that BeOS allegedly had "quality and security" problems in general (I myself have no idea), so that may indeed have led to problems down the line, whereas BSD was pretty solid. But I agree with the OP and don't think POSIX security in particular is much of a factor today.


Yeah. Funny enough, if Apple had skipped OS X and gone directly to iOS, BeOS would have been a superior foundation. No uselessly mismatched security model or crusty legacy API baggage to clog up the new revolution in single-user always-online low-powered mobile devices.

Of course, that was in back in the days when an entire platform from hardware to userland could be exclusively optimized to utterly and comprehensively smash it in just one very specific and precisely targeted market. Which is, of course, exactly what the iPhone was.

Just as the first Apple Macintosh a decade earlier eschewed not only multi-user, multi-process, and even a kernel; every single bit and cycle its being being exclusive dedicated to delivering a revolutionary consumer UI experience instead!

In comparison, NeXTSTEP, which ultimately became iOS, is just one great huge glorious bodge. “Worse is Better” indeed!

..

Honesly, poor Be was just really unlucky in timing: a few years too late to usurp SGI; a few too early to take the vast online rich-content-streaming world all for its own. Just imagine… a BeOS-based smartphone hitting the global market in 2000, complete with live streaming AV media and conferencing from launch! And Oh!, how every Mac OS and Windows neckbeards would’ve screamed at that!:)


On a similar note, I've often wondered what Commodore's OS would have turned into. Not out of some misplaced nostalgia, just curiousity about the Could Have Been.

My guess is that by now in 2020, it would have at some point had an OSX moment where Commodore would have had to chuck it out, since both Apple and Microsoft have effectively done exactly that since then. Still, I'd love to peek into Amiga OS 9 descended from continual usage.


I think AmigaOS 3 could be a nice kernel as it is. And to make it more Unix-y, memory protection could be introduced, but only for new userland processes with more traditional syscalls.

It's a bit like what DragonflyBSD is slowly converging toward.


Amiga OS 9 would have looked very different from the Amiga OS that we know (I am talking from a developer's point of view, not about the GUI).

Since inter-process communication in Amiga OS was based on message passing with memory-sharing, it was impossible to add MMU-based memory protection later. As far as I know, even Amiga OS 4 (which runs on PowerPC platforms) is not able to provide full memory protection.

There was also only minimal support for resource tracking (although it was originally planned for the user interface). If a process crashed, its windows etc. would stay open. And nobody prevented a process from passing pointers to allocated system resources (e.g. a window) to other processes.

The API was incomplete and tied to the hardware, especially for everything concerning graphics. This encouraged programmers to directly access the hardware and the internal data structures of the OS. This situation was greatly improved in Amiga OS 3, though of course far too late; Amiga OS 3 was basically two or three years too late. As far as I know, Apple provided much cleaner APIs, which later greatly simplified the evolution of their OS without breaking all existing programs.

Finally, the entire OS was designed for single-core CPUs. At several places in the OS, it is assumed that only one process can run at a time. This doesn't sound like a big issue (could be fixed, right?) but so far nobody has managed to port Amiga OS to multi-core CPUs (Amiga OS4 runs on multi-core CPUs, but it can only use one core).

I have been the owner of an Amiga 500 and Amiga 1200, but to be brutally honest, I see Amiga as a one-hit wonder. After the initial design in the mid-1980s, development of the OS and the hardware basically stopped.


> Since inter-process communication in Amiga OS was based on message passing with memory-sharing, it was impossible to add MMU-based memory protection later.

Why can't you do shared memory message passing with MMU protection? There is no reason an application in a modern memory protected OS can't voluntarily share pages when the use case is appropriate. This happens today. You can mmap the same pages, you can use posix shm, X has the shm extension...
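
A minimal POSIX sketch of that point (this is generic POSIX, not AmigaOS code, and the segment name "/demo_msg" is a made-up example): two cooperating processes can share pages deliberately while the MMU keeps everything else private.

    // One process creates a named shared segment and writes a message into it;
    // any cooperating process that maps "/demo_msg" sees the same page, while
    // the rest of each process's memory stays protected.
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <cstring>

    int main()
    {
        const size_t kSize = 4096;
        int fd = shm_open("/demo_msg", O_CREAT | O_RDWR, 0600);
        if (fd < 0 || ftruncate(fd, kSize) != 0)
            return 1;
        void* mem = mmap(nullptr, kSize, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (mem == MAP_FAILED)
            return 1;
        std::strcpy(static_cast<char*>(mem), "hello from process A");  // visible to other mappers
        munmap(mem, kSize);
        close(fd);
        // shm_unlink("/demo_msg") would remove the name once every reader is done.
        return 0;
    }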


Or just take a Docker-like approach, where each app thinks it is the only user and inter-app communication is where you put the security functionality.

But the predecessor to containers were features like having daemons chroot into somewhere else and drop their uid to something that can't do much. That very much grew out of the Unix solutions. If Unix daemons were written for decades assuming all processes have equal privilege maybe we wouldn't see that.

I think this sort of thing is a capabilities ladder in an arms race.

If you never evolved account based security, you never built the infra for even evaluating application permissions in the first place.


“Security” is a bit of a misnomer in this context: I think what you actually meant was “multi-user architecture” which, as remarked elsewhere, undergirds the whole notion of keeping processes from promiscuously sharing any and all resources.

Yeah, I think of it more as multi-tenant safety.

Yes, in short - users & groups serve as a rudimentary implementation of capabilities. Best example is Android. But there's more to it.

Separating admin user from non admin user always has advantages and I do it even on Windows.


The best counterexample to their point is iOS, though, where POSIX permissions don't play much of a role in securing the system and separating applications.

I do like that you have to “sudo” a program to allow it to access certain files. Even if I am the only user, it stops malicious programs from modifying certain files without me noticing.

Obligatory related xkcd: https://xkcd.com/1200/

Posting links to XKCD like this is generally considered to be a low quality post, hence the downvotes. I’m not one of the downvoters, but thought I’d share the reason as nobody else did.

Edit: gotta love HN! I try to be helpful to someone else that was downvoted to heck with an explanation of why that was the case (based on past replies I’ve seen) and now my post is the one with a negative score. Cheers y’all!


First rule about downvotes is we don't talk about downvotes.

Under the hood, though, there are multiple accounts which different applications use; the user might only log in with one, but applications are isolated from each other and from the system because of it.

Security from malicious programs or exploits, from accidentally altering system files, and from other device users.

We used to be able to trust intentionally installed programs not to exfiltrate data. It's sad that we still can't.


Wouldn't the median need to be over 1? I get your point but am feeling pedantic today.

If more than 50% of personal computers have 1 or 0 users, then the median would be 1, assuming 0 users is less common than 1, regardless of how many users the remaining computers had.

If more than (or equal to) half of computers are used by only one person, then the median user count is 1, no?

If you have 3 PCs in the world, one with 0 users, one with 1 user and one with 23 users the median is 1.

Median is literally the middle, like a highway.


They're just stating that having more than half of all computers with just 1 user guarantees that the median is 1.

No.

For example, suppose five computers have 1 user, 1 user, 1 user, 3 users, 300 users. The median is 1 user.

The claim of "median 1 user" just means at least half of computers have a single user (or fewer).


> Used to use BeOS as my primary OS for a year or two. I think it's the only OS I ever found to be truly intuitive and pleasant to work with.

I love everything I've read about BeOS but to be honest I must mention I couldn't understand how to use Haiku (I've never used the original BeOS) once I tried it - it didn't feel intuitive at all. And I'm not really stupid, I've been using different flavors of Linux as a primary OS for over a decade.

> That said, I don't think the world would be in a better place had Apple chosen Be over NeXT. The elephant in the room is security: NeXTSTEP, being Unix-based, has some amount of security baked in from the ground up. BeOS didn't; it was more akin to Windows 95 or classic Mac OS on that front.

Sometimes I miss the days of Windows 95 so much. I wish desktop OSes could be more simple, i.e. without multi-user support and file access rights. When it's my own personal computer, all I want of it from the security perspective is to prevent others from unlocking it or recovering data from it, and to prevent any network communication except what I've authorized. Sadly Linux still doesn't even have a decent implementation of the latter (Mac has Little Snitch).

Windows 9x did pretty well for me - I never caught a virus, never corrupted a system file, and it was easy to fix for others who did.


> I wish desktop OSes could be more simple, i.e. without multi-user and file access rights.

Have a look into Oberon and its successor A2/Bluebottle.

http://ignorethecode.net/blog/2009/04/22/oberon/

https://liam-on-linux.livejournal.com/46523.html


Security, networking, multiuser, i18n, print (don't even begin to underestimate this, plus Quartz and Display PostScript): BeOS was an RTOS with a neat UI. It was still fun, but there was a gigantic pile of work before it could do what System 7 did, let alone what NeXT did.

Additionally, NeXTStep had been in use in production on investment bank trading floors, in scientific institutions, and in military/intelligence agencies. It wasn't widely used, but it was used.

So while it might not have been quite ready for the median System 7 user's expectations, it was pretty solid.


Maybe so. But I have Mac OS 1.0 running on my MacBook. It is so slow and really doesn't work that well. Unlike Mac OS 9, it is not that smooth. Luckily he found the iPod ... even the colour one is very slow.

Also, the familial relations of MacOS and Linux made it possible to share code fairly seamlessly between both (provided we're not talking about hardware integration). In a world where there were three separate universes: Windows, BeOS, and Linux, it's possible Linux would've become more isolated.

BeOS had a regular Unix like (even posix IIRC) dev environment.

I was able to do most of the CS coursework projects normally done on my university's Sun workstations on BeOS instead. Most of these were data structures, algorithms, compilers, etc. projects in C, and not things that required platform-specific APIs.

But arguably, BeOS' overall model - a single-user desktop OS built on top of, but hiding, its modern OS underpinnings like memory protection and preemptive multitasking - is far more similar to what eventually became Mac OS X than to Linux. Which isn't so surprising, since it was built by ex-Apple folks. Remember that consumer OSes before this point had no memory protection or preemptive multitasking.

Linux, though it had the same modern OS features, was far more closely aligned in spirit with the timeshared, modern, multi-user Unix OSes like what ran the aforementioned Sun workstations (it's "Linus' Unix", after all).


BeOS had a POSIX-compliant layer, but under the hood it was totally different from a UNIX.

Also, let’s keep in mind that Windows 95 (released that same year) featured preemptive multitasking on a desktop user OS (albeit without a strong memory protection model), and Windows NT had been available for a couple of years by then (having first shipped in 1993, if memory serves) and was a fully ‘modern’ OS (indeed it serves as the basis for later Windows), albeit with a comparatively large footprint.

I was an avid BeOS user (and coincidentally a NeXT user too) and I was enthralled by its capabilities, but in terms of system architecture it was a dead end.


IIRC the Unix compatibility layer had some pretty grotty warts. Porting Unix applications virtually always required fiddling to get them working, especially the network code.

Unfortunately this meant BeOS was perpetually behind the curve on stuff like the World Wide Web. I had a native FreeBSD build of Netscape long before Be managed to get a decent browser.


The Amiga had preemptive multitasking in the 80's. (No memory protection though.)

So did the Lisa even earlier (and Xenix, which was a derivative of Unix Version 7, anecdotally also seen on the Lisa).

Is that true? I see contradictory information about Lisa OS. Some posts claim it was cooperative, like the original Mac System. Example: https://macintoshgarden.org/apps/lisa-os-2-and-3

(A bit of research later:) It's actually a bit of a mixed bag. The "Operating System Reference Manual for the Lisa" [0] reads on pp. 1-3/1-4:

> Several processes can exist at one time, and they appear to run simultaneously because the CPU is multiplexed among them. The scheduler decides what process should use the CPU at any one time. It uses a generally non-preemptive scheduling algorithm. This means that a process will not lose the CPU unless it blocks. (…)

> A process can lose the CPU when one of the following happens:

> • The process calls an Operating System procedure or function.

> • The process references one of its code segments that is not currently in memory.

> If neither of these occur, the process will not lose the CPU.

In other words, non-preemptive, unless the OS becomes the foreground process, in which case it may block the active process in favor of another one currently in ready or blocked state.

[0] https://lisa.sunder.net/LOS_Reference.pdf


BeOS was about as UNIX-like as the Amiga was.

Sure, it had a CLI, UNIX-like directory navigation, and a couple of UNIX-like command-line utilities.

But good luck porting UNIX CLI software expecting a full POSIX environment.

If I am not mistaken, Haiku has done most of the work regarding POSIX support.


It had a bash shell, and used glibc, and partially implemented POSIX.

I was also able to get most my CS homework done in BeOS. But I definitely needed to keep FreeBSD around for when I hit a wall.


It was OK. Back when I ran BeOS as my primary OS (2001 or so) I built half a C++ web application on BeOS and the other half on an HP-UX server, logged in through an X terminal, using FTP to sync between the two. There wasn't much support in the wider *nix ecosystem though, so anything big would often fail to build.

I regretted having to move away from BeOS, it was by far the most pleasant OS I’ve used, but the lack of hardware and software support killed it.


In college I wrote a web server on BeOS and ported it back to Linux, learning pthreads along the way. The bonus achievement was making it multithreaded, which I basically got for free, since BeOS makes you think architecturally multithreaded-first.

AmigaOS was not UNIX-like in the least. Amiga UNIX, which shipped on a couple models, was directly System V UNIX, though.

That was the point I was trying to convey regarding BeOS.

Having a shell that looks like UNIX, and a couple of command line utilities similar to the UNIX ones, does not make an OS UNIX.


Ah. I gotcha now.

Hmm, possibly. But it could also have been to Linux's benefit. It would be alone among these in having the advantage of Unix heritage.

Yes, I remember so many software developers switching from Linux to OSX in the 2000's because "it's a Unix too, but it's shiny".

Bounced between Windows and OS/2; never really used BeOS as an OS, mostly just as a toy for fun. The one thing I remember is that I could play a video that, for the time, looked amazing, without issue. I want to say I even played Quake on it, in a window!

Funny you should mention Windows 95. The company that sold that ended up doing pretty well.

Sure, but at the time Windows 95 was released, they already had a few Windows NT releases (3.1, 3.5, and 3.51). Windows NT was a different, more modern operating system than the Windows 95/98/ME line. So they did not have to evolve Windows 95 into a modern operating system. After ME, they 'just' switched their user base to another operating system and made this possible through API/ABI compatibility (which is quite a feat by itself).

The company that sold classic Mac OS did, too.

But you have to consider what else was going on at the time: Microsoft was actively moving away from the DOS lineage. OS/2 had been in development since the mid-1980s, and, while that project came to an ugly end, they had also released the first version of Windows NT in the early '90s, and, by the late '90s, they were purposefully moving toward building their next-gen consumer OS on top of it.

Apple needed to be making similarly strong moves toward a multi-user OS with concerns like security baked in deeply. BeOS had the memory protection and the pre-emptive multitasking, which were definitely steps forward, but I don't think they would have taken Apple far enough to allow them to keep up with Microsoft. Which, in turn, would have allowed Microsoft to rest on its laurels, probably to the detriment of the Windows ecosystem.


Really? Most people I talk with these days seem to agree that the proprietary OS is a liability.

I’ve never heard anyone say Windows is a problem because it’s proprietary. I have heard that having to pay to upgrade is a pain because you (the company) have to budget for it. Even then, you would also need to budget for the downtime and time to verify that it works before deploying the update, and both those have to be done on Linux too (it’s why LTS releases are a thing).

Anyways, Windows 10 may have its problems, but Microsoft the company is doing pretty well. Their stock is up about 50% this year (200% over the past 5). And that’s not to mention the fact that they’ve open sourced .NET among many other things.


I interpreted them as saying it was a liability to Microsoft.

Outside HN and Reddit talks, most people I know don't even care about FOSS OSes existence, they just want something that they buy at the shopping mall and can use right away.

In fairness, I don't think most people care about the OS at all, FOSS or otherwise; they care that the UI is something they can use, and that their apps work. If you perfected WINE overnight, I'll bet you could sit 80% of the population down at a lightly-skinned FreeBSD box and they'd never know.

I don't even think you'd need that for most of the population: it's been quite some time since the median user cared about desktop software[1]. I switched my parents over to a Linux Mint install a decade ago when I went away to college, and it lowered my over-the-phone tech support burden to zero overnight.

I also had (non-CS but very smart) friends who switched to (ie dual-booted) Linux on their own after seeing how much better my system was than a Windows box. A decade later, one of them is getting her PhD in veterinary pathology and still dual boots, firing Windows up only when she feels like gaming.

[1] My impression is that committed PC gamers aren't a large portion of the desktop user population, but I may be wrong.


I know a decent number of people who have That One Program that they've been using for 20 years and can't/won't leave. It probably varies by population group.

It didn't kill them, though, which was my only point. I guess HN didn't think it was as funny as I did.

AYBABTU = All Your Base Are Belong to Us, a mangled or broken English translation of a Japanese phrase from the Japanese game `Zero Wing` [1]

[1] https://en.wikipedia.org/wiki/All_your_base_are_belong_to_us

Edit: removed the extra A in the acronym


Got an extra A in there

The anti competitive business practices of Apple make it hard to imagine the world could be worse.

Instead of competition, Apple survives off marketing medium quality products at high prices.

I'm not sure how that's good for anyone unless they own Apple stock.


You don't get to where Apple is (large market cap, high customer satisfaction scores, high reviews in the tech press, etc.) because of marketing. If it were that easy, companies would just copy their marketing or load up on marketing and they would be successful.

And a huge part of Apple's current success is based on the tech and expertise they got from NeXT. That work underpins not just laptops and desktops but phones, tablets, set-top boxes, and more.


Perhaps you only get to where Apple is with world-class marketing.

Apple's iPod wasn't the first mp3 player, and it for damn sure wasn't technically superior.

The iPhone was not the first smartphone, nor the first phone with a touchscreen, nor the first phone with a web browser, nor the first phone with an App Store. It arguably had a better UX than incumbents, but better UX doesn't win markets just by dint of existing.

The iMac was a cute computer that couldn't run prevalent Windows software and didn't have a floppy drive.

Recent MacBook Pros have an awful keyboard, not just aesthetically but with known hardware problems. I understand at long last they're reverting to an older, better design.

Tech and expertise don't win just because they exist.


You've left out the part where Apple makes products that have user experiences that are miles ahead of whatever existed at the time.

I'm as reflexively inclined as many technical people to be dismissive of marketing, but I don't think you're right here. You can't "just copy" marketing in the way you can't "just copy" anything else that a company is world-class in, and good marketing can indeed build market dominance (do you think Coca-Cola is really a vastly superior technical innovation over Pepsi?)

The fact that it isn't a net good for users in most cases doesn't mean that it's trivial to do.


> If it were that easy, companies would just copy their marketing or load up on marketing and they would be successful.

Maybe good marketing is really hard and you can't just "copy Apple"?


If people willingly exchange currency for products from a company and are satisfied with the value that they get out of it to the point that they become repeat customers, then how can you judge that no one except stockholders are benefitting?

Because Apple obviously sucks. I don't understand how hard it is for all their happy customers to understand that they suck. /s

Network/lock-in effects and negative externalities can easily have that result.

> negative externalities

This is very true. macOS and the iPhone, for me, went from being "obviously the very best of the best" to "the lesser of all evils".

When my 2015 rMBP finally gives up the ghost and / or when 10.13 loses compatibility with the applications I use, I have no idea what I'm going to do - probably buy another working 2015 rMBP used and pray that the Linux drivers are livable by then.

I know it's ridiculous, but it helps me fall asleep at night sometimes.


You don’t agree on the 16” being the spiritual successor of the mid 2015 15”?

I feel like it's a huge step in the right direction, but for my own personal use:

- I still have mostly USB 2.0 peripherals. I don't see that changing anytime soon.

- I'm still hung up on the MagSafe adapter.

- I love the form factor. The 13" display is the perfect size, for me. I could've switched to a 15" 2015 rMBP with better specs, but I hated how big it was.

- I have no interest in using any version of macOS beyond 10.13, at present.

I'm really glad that they brought the Esc key back, especially as a pretty serious vim user. I don't know, maybe I'm stuck in the past. I'm certain that many, many people are really enjoying the new Macbook Pro 16; I just really, really like this laptop. It's the best computer I've ever owned.


I'm in the same boat as the sibling poster (albeit with a 15" machine) and I'll add this:

- The TouchBar is terrible

I hope they'll bring back a non-TouchBar configuration when they release the "new" keyboard on a 13" MacBook Pro. I could live with both a 13" or 15" laptop, but right now the list of drawbacks is still 1-2 items too long.


Can? Sure. I would commend anyone that can make the case that this is the best explanation for Apple’s success as a whole though.

> make it hard to imagine the world could be worse

This seems like a failure of imagination.

I'm not a huge Apple fan, but I lived through the Bad Old Microsoft of the '90s, and grew up on stories of IBM of the '80s.

Apple is nothing like them.


I was another former Be "power user." And I think that was probably accurate -- if you weren't in the "BeOS lifestyle" during the admittedly short window that it was possible, it's hard to understand how much promise it looked like it had. When I tell people I ran it full-time for over a year, they wonder how I managed to get anything done, but...

- Pe was a great GUI text editor, competitive with BBEdit on the Mac

- GoBe Productive was comparable to AppleWorks, but maybe a little better at being compatible with Microsoft Office

- SoundPlay was a great MP3 player that could do crazy things that I still don't see anything doing 20 years later (it had speed control for files, including playing backwards, and could mix files that were queued up for playback; it didn't have any library management, but BeOS's file system let you expose arbitrary metadata -- like MP3 song/artist/etc. tags! -- right in file windows; see the attribute sketch after this list)

- Mail-It was the second-best email client I ever used, behind the now also sadly-defunct Mailsmith

- e-Picture was an object-based bitmapped graphics editor similar in spirit and functionality to Macromedia's Fireworks, and was something I genuinely missed for years after leaving BeOS
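
On the BFS metadata point in the SoundPlay item above, here is a small sketch of how a program exposed such tags, using the attribute calls as I remember them from the Be/Haiku fs_attr.h header; the file name and attribute name are illustrative, not taken from SoundPlay itself.

    // Hedged sketch: write an extended attribute onto a file on BFS. The
    // Tracker could then show "Audio:Artist" as a sortable, queryable column.
    #include <fs_attr.h>
    #include <TypeConstants.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstring>

    int main()
    {
        int fd = open("song.mp3", O_RDWR);
        if (fd < 0)
            return 1;
        const char* artist = "The Be Team";
        fs_write_attr(fd, "Audio:Artist", B_STRING_TYPE, 0,
                      artist, std::strlen(artist) + 1);  // tag stored as file metadata
        close(fd);
        return 0;
    }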

And there were other programs that were amazing, even though I didn't use them: Adamation's video editor (videoElements? something like that), their audio editor audioElements, Steinberg's Nuendo, objektSynth, and two programs which are incredibly still being sold today: Lost Marble's Moho animation program, now sold by Smith Micro for Mac and PC, and the radio automation package TuneTracker (incredibly now being sold as a turnkey bundle with Haiku). Also, for years, there was a professional-grade theatre light/audio control system, LCS CueStation, that ran on BeOS -- and it actually ran Broadway and Las Vegas productions. I remember seeing it running at the Cirque du Soleil permanent installation at Disney World in Orlando.

At the time Apple bought Next rather than Be, I thought they'd made a horrible mistake. Given Apple's trajectory afterward, of course, it's hard to say that looking back. It's very possible that if they'd bought Be, they'd have gone under, although I think that would have less to do with technology than with the management they'd have ended up with (or more accurately, stayed with). But it's still an interesting "what if."


I actually toyed with the idea of starting a radio station based on the BeOS MP3 player + file system. The thought was to have a system without human DJs that used a simple web interface to gather "votes" for songs/genres, and to use the metadata in the file system to queue up the songs. If I remember correctly, BeOS also had a macro programming interface (like ARexx) that could be used to glue things together.

This made BeOS (and BeBox) a great product in my mind; the ability to use it in unexpected ways.


You may have hinted at it, but I think Apple's subsequent turnaround after acquiring Next was mainly due to their founder, Steve Jobs, coming back to Apple.

Jobs helped enormously, of course, but if Apple was still trying to sell classic MacOS in 2005 I'm not sure even Steve Jobs could have kept them afloat long enough to ship an iPhone.

But the choice wasn't Next vs MacOS, it was BeOS vs Next.

That's true, but most people keep forgetting that even before his comeback, Apple was very close to filing for bankruptcy, and who knows what would have happened without the intervention from Gates. Microsoft was the only juggernaut whose fate was never doubted in the 1980s - 1990s.

NeXT's hardware also failed, but NeXT was the rightful choice for Apple over Be, due to getting NeXTSTEP and Jobs again. But now that the war is over, we're all generals.


When was Sun close to collapsing? I want to learn more about this history.

Absolutely. Even though I honestly think Gil Amelio gets dumped on more than he deserves, I doubt he could have really saved them in the long run.

NetPositive was much better at Internet Explorer compatibility than Netscape was, back when that used to matter.

> Pe was a great GUI text editor, competitive with BBEdit on the Mac

Why wasn't it called BeBeEdit?!


wow, this list is bringing back all the memories. You're right, there was a short wave of enthusiasm, and some apps which were actually very innovative for the time. I remembered that it was actually used in some pro audio and lighting stuff, but I'd forgotten most of those apps. I remember playing around with Moho.

What's the name of the 2D illustration software that modelled real wet paint brushes and textured pencils? That was unlike anything I'd seen at the time; I remember putting quite a lot of effort into finding a compatible Wacom.


Gosh. I remember the illustration program you're talking about but can't remember its name, either. :) I was surprised that it seemed to take so long for that concept to show up on other platforms, though -- other than Fractal Design Painter, it didn't seem like anyone on the Mac or Windows was really trying for that same kind of "real ink and paper" approach.

One of my favorite anecdotes about BeOS was that it had a CPU usage meter[1], and on the CPU meter there were on/off switches for each core. If you had two cores and turned one off, your computer would run at half speed. If you turned both off, your computer would crash. Someone once told me that this was filed as a bug against the OS and the response was "Works As Intended" and that it was the expected behavior.

(These are fuzzy memories from ~25 years ago. It would be nice if someone could confirm this story or tell me if it's just my imagination.)

[1]: http://www.birdhouse.org/beos/8way/8way-1.jpg


The CPU monitor program was called Pulse and early versions allowed you to turn all the processors off and crash the machine. I think it was fixed in 3.something or 4.0.

The 8-way PIII Xeon was a Compaq someone tested BeOS on before it went into production. I remember it being posted on some BeOS news site. There should be another screenshot or two with 25 AVI files playing and a crap load of CPU-hungry programs running at once. Impressive feat circa 2000. Edit: browse the screenshot directory for the other two. Amazing they survived time, internet, bit rot and my memory: http://birdhouse.org/beos/8way/

The BeOS scheduler prioritized the GUI and media programs so you could load the machine down to 100% and the GUI never stuttered and windows could be smoothly moved, maximized and minimized at 100% CPU. Rather, your programs would stutter. And everything was given a fair chance at CPU time.

Very nice design; the OS was built from the ground up for multimedia and for threading across SMP. It was a real nice attempt at building a next-generation desktop OS. It had no security, even though it had basic POSIX compatibility and a bash shell. The security bits meant nothing.


I remember circa 2000 being able to simultaneously compile Mozilla, transfer DV video from a camcorder into an editor, check email, and surf the web on a dual Pentium Pro system with no hint of UI stutter or dropped frames in the firewire video transfer. It was at least another decade before SSDs and kernel improvements made that possible on Linux, Windows, or OS X.

The tradeoff was the throughput of your compilation was terrible. BeOS wasn't magic, it just prioritized the UI over all else. That's not advanced, it's just one possible choice.

MacOS prior to OS X had the same property: literally nothing else could happen at the same time if the user was moving the mouse, which is why you had to take the ball out of the mouse before burning a CD-R on that operating system.


Oh, sure, it was obviously limiting the other tasks. The point was that this is almost always the right choice for a general purpose operating system: no user wants to have their music skip, UI go unresponsive, file transfers to fail, etc. because the OS devoted resources to a batch process.

You’re only partially correct about classic macOS: you could definitely hang the OS by holding down the mouse button but this wasn’t a problem for playing music, burning CD-Rs, etc. in normal usage unless you had the cheapest of low-end setups because a small buffer would usually suffice. I worked with a bunch of graphic designers back then and they didn’t get coasters at a significant rate or more than their Windows-using counterparts, and they burned a lot of them since our clients couldn’t get large files over a network connection faster than weeks.


You can downplay it all you want, but it was a really nice OS for its time. Its smooth GUI was very competitive with the other, clunkier windowing systems of the time. The advanced part was that threading and SMP support were woven into the system API, making SMP development a first-class programming concept. Other operating systems felt like threading was bolted on and clunky. And thanks to the SMP support, prioritizing the GUI made 100% sense. And I believe there were some soft real-time abilities in the scheduler, so processes with high priority ran reliably.

Thanks for this. I remember being at MacWorld and watching a movie play while holding down menu items. On Classic Mac, which I was used to, this would block the entire OS (almost). BeOS seemed space-age.

Oops, probably too late but the memorial day videos were included with BeOS. It was a bunch of the Be employees tossing a few broken monitors off the roof of their office building. https://www.youtube.com/results?search_query=beos+memorial+d...

Reminds me of a game called NieR:Automata. You play as an android and the skill/attribute-system is designed as a couple of slots for chips. There were chips for things like the minimap and various other overlays along with general attributes, so if you decided you want to exchange your experience gauge for another chip with more attack speed, you could totally do that.

Among these chips was one called "OS chip" you had from the very beginning. If you'd try to replace that or simply exchange it for another one you "died" instantly and were greeted by the end-credits.


It also has an `IsComputerOn` system call.

int IsComputerOn() { return 1; }

??


if i recall correctly, if the computer is off the return value is undefined.

When I started university in 2000, I had a quad-boot system: Win98, Win2000, BeOS 5, and Slackware Linux (using the BeOS bootloader as my primary because it had the prettiest colors). I mostly used Slackware and Win98 (for games), but BeOS was really neat. It had support for the old Brooktree video capture cards, could capture video without dropping frames the way VirtualDub often did, and it even had support for disabling a CPU on multi-CPU systems (I only saw videos of this; never ran BeOS on an SMP system).

I wish we had more options today. On modern x86 hardware, you pretty much just have Windows, Linux and maybe FreeBSD/OpenBSD if you replace your Wi-Fi card with an older one (or MacOS if you're feeling Hackintoshy .. or just buy Apple hardware). I guess three is kinda the limit you're going to hit when it comes to broad support.


I think BeOS was the only OS that allowed smooth playback of videos while you worked at the same time, something Windows was only capable of 5 years later, and Linux 10 years later :D

Bluebottle OS (written in Active Oberon, a GC enabled systems programming language) was also capable of it.

http://www.ocp.inf.ethz.ch/wiki/Documentation/WindowManager?...


Ok, so this is the big BeOS thing I've heard.

What technically enabled this on such limited hardware? Was it the lack of security/containerization/sandboxing that made OS calls much faster and context switches cheaper?


Other people mentioned the real preemptive scheduling and the generally better worst-case latency, but another factor was the clean design. The other operating systems tended to be okay in the absence of I/O contention, but once you hit your hard drive's I/O capacity you would find out that e.g. clicking a menu in your X11 app did a bunch of small file I/O in the background which would normally be cached but had been pushed out, etc. A common mitigation in that era was having separate drives for the operating system, home directory, and data so you could at least avoid contention for the few hundred IOPS a drive could sustain.

Yes. This always amazed me with BeOS. It would play 6 movies simultaneously, making my PC very slow but still responsive. It was as if the framerate just went down.

Bear in mind that resolutions back then were much lower than now, and not all computers had 24 bit color frame buffers. Video cards ran one monitor for the most part, with no others attached.

Be had well-written multithreading and preemptive multitasking implemented on a clean slate - no compatibility hacks required. That meant it worked well and was quick/responsive. There were still limits, and the OS didn't have many of the security protections that would be written in today.


I mean maybe. I was running 1600x1200 on my monitor back then.

Some people were, but it wasn't too common. Workstations had far higher resolutions long before this, but home PCs running non 3d accelerated hardware were still mostly 1024x768-ish.

The BeBox itself was vastly different hardware than a standard PC as well, so it could break a lot of rules as far as smooth concurrency and multitasking... kinda like the Amiga did.


Also we had actual 32bit colour, not this 24 + 8 alpha. You could actually look at a rainbow gradient without banding.

2048x1536 was already a thing as well.

Yup, had a 22” Mitsubishi monitor that could do that resolution in ~2002. Everyone would pick on me about the text being so small, but I’d let them sit at my desk squinting and I’d stand ten feet back and read the screen with ease as they struggled. The monitor was a beast though, around 70lbs if memory serves.

Edit:

Pretty sure this is the monitor: https://www.necdisplay.com/documents/ColorBrochures/RDF225WG...


1280x1024, not so far from 1920x1080.

That was more the exception than the rule. Besides, 1080p is about 58% more pixels per frame than 1280x1024, and likely at a higher frame rate. Big difference in hardware load.

I think it was their thread/process scheduler. It had a band of priorities that got hard real-time scheduling, while lower-priority stuff got more "traditional" scheduling. (Alas, I don't know too much about thread/process scheduling, so the details elude me.) That way the playback threads (and also other UI threads, such as the window system) got the timeslices they needed.
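From the API side it looked roughly like this; the constants and signature are from memory of the Kernel Kit docs, so treat it as a sketch rather than gospel:

    #include <OS.h>   // BeOS Kernel Kit

    static int32 playback_loop(void *data)
    {
        // feed decoded audio/video buffers to the hardware here...
        return 0;
    }

    void start_playback()
    {
        // Threads at or above B_REAL_TIME_DISPLAY_PRIORITY fell into the
        // "hard" band of the scheduler; normal app threads ran around
        // B_NORMAL_PRIORITY and yielded to them.
        thread_id tid = spawn_thread(playback_loop, "audio feeder",
                                     B_REAL_TIME_PRIORITY, NULL);
        resume_thread(tid);
    }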

Isn't giving near real-time scheduling priority to audio/video how Windows handles things these days? I think I read that somewhere last week in a discussion of Linux kernel scheduler responsiveness.

Real pre-emptive multitasking? I seem to recall that was one of the huge differentiating factors of it against Mac / Windows.

Amiga did this in 1985. It's just that for compatibility reasons Apple couldn't do this. Even funnier: the fastest hardware to run old MacOS (68k version) on: an Amiga computer.

A non-Apple computer? Was "Hackintosh" (presumably under a different name) a thing back then?

You didn't need to build a Hackintosh. You could buy legal, officially-licensed Mac clones back then.

Ah yeah I still have my PowerComputing PowerTower Pro! At the time it was a current model, its 512mb of RAM was insane and my friends & classmates were jealous! hahah :)

Check out this video[0], basically an Amiga with an accelerator card potentially makes for the fastest environment to run 68k-based Mac OS (System 7) ...

[0] https://www.youtube.com/watch?v=Jph0gxzL3UI


Whoa, the Amiga is faster even though it's running in a VM!

Well, it's more akin to something like Wine where it's not exactly a virtual machine, since the processor instructions are the same. Tho that's about the extent of my understanding.. haha

I sometimes used my Atari ST with an emulator called Aladin. "Cracked" to work without Mac ROMs. But wasn't really useful to me because of lack of applications (at the time).

IIRC there were solutions like this for the Amiga too.


BeOS would also let you do that while playing them backwards; useless, but a nice demo of the multimedia capabilities of the OS.

This would be challenging with modern codecs that use delta frames. The only way I can see it working is precomputing all frames from the preceding keyframe. Doable, but a decent amount of effort for a fairly obscure feature.
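A rough sketch of that approach (the Decoder interface below is a made-up stand-in, not any real codec API): step backwards by re-decoding the group of pictures from the last keyframe and caching what you'll need for the next steps.

    #include <vector>

    struct Frame { /* decoded pixels */ };

    struct Decoder {
        int   keyframe_before(int n);   // index of the last keyframe <= n
        void  seek_to(int frame);       // position decoder at that keyframe
        Frame decode_next();            // decode one frame, advance by one
    };

    // Decode everything from the preceding keyframe up to `target`.
    // gop.back() is the frame we want; the earlier entries serve the
    // following backward steps for free.
    std::vector<Frame> decode_back_to(Decoder &dec, int target)
    {
        int key = dec.keyframe_before(target);
        dec.seek_to(key);
        std::vector<Frame> gop;
        for (int i = key; i <= target; ++i)
            gop.push_back(dec.decode_next());
        return gop;
    }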

But did video formats back then use delta frames?


I never saw BeOS do that with video, but I heard it do it with MP3 files. SoundPlay was a kind of crazy bananas MP3 player -- it could basically act as a mixer, letting you not only "queue up" multiple files but play each of them simultaneously at different volume levels and even different speeds. I've still never seen anything like it outside of DJ software.

Linux could do that in 2001 just fine, and without crashing like Windows. Xv was amazing. So was MPlayer.

> without crashing like Windows

That depended _very_ heavily on your graphics card at the time. In 2001, I could get X to crash on my work computer if I shook my mouse too fast. At home on my matrox card, yes, it was rock stable.


Nvidia TNT2/Geforce2 MX later.

EDIT: Also, Slackware was rock solid and it crashed far less than SuSE/Mandrake.


High definition playback is still not as smooth as it could be in browsers on Linux (or if your CPU is fast enough, it will drain your battery more quickly), because most browsers only have experimental support for video acceleration.

https://wiki.archlinux.org/index.php/Hardware_video_accelera...


Pretty much any CPU released in the past decade should be capable of decoding 1080P video as well as a GPU (though yes, will use slightly more power). The only exceptions I can think of are early generation Atom processors, which were terribly slow.

> Pretty much any CPU released in the past decade should be capable of decoding 1080P video as well as a GPU (though yes, will use slightly more power).

The point is that modern GPUs have hardware decoding for common codecs, and will use far less power than CPU decoding. But the major browsers on Linux (Firefox and Chrome) disable hardware decoding on Linux, because $PROBLEMS.

So, you end up with battery draining CPU-based 1080p decoding. And even more battery draining or choppy 4k decoding.


I found Wayland more capable than X11 in this regard

In 2001 I could still hang X Windows as much as I liked.

And it still happens occasionally, while my Windows 10 userspace drivers just reinitialize instead of locking the OS.


I think most hangs on Linux today are caused by low memory pressure, see this post: https://news.ycombinator.com/item?id=20620545

No swap, some swap, huge swap, swappiness=0, zram - nothing helped me.


Linux could do that only if your system was lightly loaded. Once you started to have I/O contention, none of the available kernel schedulers could reliably avoid stuttering.

I had this experience too, my video card was so shitty that I wasn't able to watch 700mb divx videos in windows, I had to boot into linux and use mplayer.

Don't forget about xanim

Windows 2000 was rock solid stable, with uptime measured in "how long do I keep running before I get bored of bragging about my uptime?"

No comment in regards to the 9x line. ;)


The thing also booted up faster than you could blink.

Of course that might have changed if they added more system services, but from POST screen to a usable desktop was easily under 15 seconds.


> you pretty much just have Windows, Linux and maybe FreeBSD/OpenBSD [...]

That sounds just as good? Compared to quad-booting Win98/Win2000/BeOS5/Slackware, today you could quad-boot Win10/FreeBSD/OpenBSD/Ubuntu. Actually, depending on what you count as different systems and what exact hardware you have, you could have 2 laptops sitting on your desk: a pinebook running your choice of netbsd, openbsd, freebsd, or some linux (https://forum.pine64.org/forumdisplay.php?fid=107), and an x86 laptop multibooting Windows 10, Android, Ubuntu GNU/Linux, Alpine Busybox/Linux, FreeBSD, OpenBSD, NetBSD, and Redox (https://www.redox-os.org/screens/). That's 2 processor families in 2 machines running what I would count as 4 and 8 operating systems each.

I think we're doing fine.


There also used to be other CPU architectures--though even at the time, enough people complained about "Wintel" that maybe it was obvious that the alternatives weren't ever going to catch on.

People complained about "Wintel" because the 32-bit x86 chips were so fast and cheap they destroyed the market for RISC designs and killed existing RISC workstation and server architectures, like SPARC and HPPA and MIPS.

By the time the Pentium came around, the future looked like a completely monotonous stretch of Windows NT on x86 for ever and ever, amen. No serious hardware competition, other than Intel being smart enough to not kill AMD outright for fear of antitrust litigation, and no software competition on the desktop, with OSS OSes being barely usable then (due to an infinite backlog of shitty hardware like Winmodems and consumer-grade printers) and Apple in a permanent funk.

We were perpetually a bit afraid that Microsoft/Intel would pull something like Palladium/Trustworthy Computing [1] and lock down PC hardware but good, finally killing the Rebel Alliance of Linux/BSD, but somehow the hammer never quite fell. It did in the cell phone world, though, albeit in an inconsistent fashion.

[1] https://en.wikipedia.org/wiki/Next-Generation_Secure_Computi...

Microsoft and Intel really seemed permanent back then. I wonder if that's how people felt about IBM back in the 1950s.


Microsoft went out of their way to prohibit computer manufacturers, namely Dell, from bundling BeOS:

https://www.osnews.com/story/681/be-inc-files-suit-against-m...


To Microsoft's credit, the early Windows NT versions were multiplatform. I remember that my Windows NT 4.0 install CD had x86, Alpha, PowerPC, and MIPS support.

The other thing people forget, which is still a bit incomprehensible to me, is that multiple Unix vendors were saying they'd migrate to Windows NT on IA-64.

I don't know if it's true or not, but I've long blamed Microsoft for killing SGI (Silicon Graphics Inc).

MS worked with SGI on a project known as Fahrenheit - to unify Direct3D and OpenGL:

https://en.wikipedia.org/wiki/Fahrenheit_(graphics_API)

...well, we all know what happened - but I've often thought that Microsoft hastened their demise.

Somewhere in there, of course, was also the whole SGI moving away from IRIX (SGI's unix variant) to Windows NT (IIRC, this was on the Octane platform) - there being some upset over it by the SGI community. Maybe that was part of the "last gasp"? I'm sure some here have better info about those times; I merely watched from the sidelines, because I certainly didn't have any access to SGI hardware, nor any means to purchase some myself - waaaaay out of my price range then and now.

Of course - had SGI not gone belly up, I'm not sure we'd have NVidia today...? So maybe there's a silver lining there at least?


They couldn't afford to compete with Intel on processors... they just didn't have the volumes and every generation kept getting more expensive. For Intel, it was getting relatively cheaper thanks to economies of scale since their unit volumes were exploding throughout the 90's. Also, Intel's dominance in manufacturing process kept leapfrogging their progress on the CPU architecture front.

Perhaps Digital UNIX and HP-UX, but HP/Compaq was a collaborator on IA-64. I don't think I heard of Sun or IBM saying that.

SGI is another prominent example cited elsewhere in the thread.

Indeed, it probably killed SGI.

I think there was some token POSIX compatibility in Windows NT back then. Probably for some government contracts.

It actually worked pretty nicely - if anything, better back in those days when software expected to run on different Unixes, before the Linux monoculture of today.

It also did in PC hardware, unless you failed to notice the trend toward laptops and 2-in-1 hybrids.

Desktops are now a niche product, where buying expansion cards mostly has to be done online, with most malls having only laptops and 2-in-1s on display.

Servers are away in some cloud, running virtualized OSes on top of a level-1 hypervisor.


> We were perpetually a bit afraid that Microsoft/Intel would pull something like Palladium/Trustworthy Computing [1] and lock down PC hardware but good, finally killing the Rebel Alliance of Linux/BSD, but somehow the hammer never quite fell. It did in the cell phone world, though, albeit in an inconsistent fashion.

I agree that phones are more locked down than desktops/laptops nowadays, but it's worth pointing out that neither Microsoft nor Intel are really winners in this area. They both are still doing fairly well in the desktop/laptop space in terms of market share, though.


I honestly think it was less any type of Wintel conspiracy and more that platforms have network effects. Between Palladium not working out and Microsoft actually making Windows NT for some RISC ISA's, there wasn't actually an Intel/Microsoft conspiracy to dominate the industry together. They each wanted to separately dominate their part of the industry and both largely succeeded, but MS would have been just as happy selling Windows NT for SPARC/Alpha/PowerPC workstations and Intel would have been just as happy to have Macs or BeBoxes using their chips.

> I honestly think it was less any type of Wintel conspiracy and more that platforms have network effects.

True. I've always regarded "Wintel" as more descriptive than accusatory. It's just a handy shorthand to refer to one specific monoculture.

> Between Palladium not working out and Microsoft actually making Windows NT for some RISC ISA's, there wasn't actually an Intel/Microsoft conspiracy to dominate the industry together.

Right. They both happened to rise and converge, and it's humanity's need to see patterns which turns that into a conspiracy to take over the world. They both owe IBM a huge debt, and IBM did what it did with no intention of being knocked down by the companies it did business with.


OS X was around in the days of XP and Linux was perfectly usable on the desktop.

A few years earlier things were a little more bleak.


> OS X was around in the days of XP and Linux was perfectly usable on the desktop.

> A few years earlier things were a little more bleak.

I admit I was unclear on the time I was talking about, and probably inadvertently mangled a few things.

As for Linux in the XP era, I was using it, yes, but I wouldn't recommend it to others back then because it still had pretty hard sticking points with regards to what hardware it could use. As I said, Winmodems (cheap sound cards with a phone jack instead of a speaker/microphone jack, which shove all of the modem functionality onto the CPU) were one issue, and then there was WiFi on laptops, and NTFS support wasn't there yet, either. I remember USB and the move away from dial-up as being big helps in hardware compatibility.


Yeah Wifi on Linux sucked in those days. For me that was the biggest pain point about desktop Linux. In fact I seem to recall having fewer issues with WiFi on FreeBSD than I did on Linux -- that's pure anecdata of course. I remember the first time I managed to get this one laptop's WiFi working without an external dongle and to do that I had to run Windows drivers on Linux via some wrapper-tool (not WINE). To this day I have no idea how that ever worked.

> I remember the first time I managed to get this one laptop's WiFi working without an external dongle and to do that I had to run Windows drivers on Linux via some wrapper-tool (not WINE). To this day I have no idea how that ever worked.

ndiswrapper. It's almost a shibboleth among people who were using Linux on laptops Way Back When.

https://en.wikipedia.org/wiki/NDISwrapper

> NDISwrapper is a free software driver wrapper that enables the use of Windows XP network device drivers (for devices such as PCI cards, USB modems, and routers) on Linux operating systems. NDISwrapper works by implementing the Windows kernel and NDIS APIs and dynamically linking Windows network drivers to this implementation. As a result, it only works on systems based on the instruction set architectures supported by Windows, namely IA-32 and x86-64.

[snip]

> When a Linux application calls a device which is registered on Linux as an NDISwrapper device, the NDISwrapper determines which Windows driver is targeted. It then converts the Linux query into Windows parlance, it calls the Windows driver, waits for the result and translates it into Linux parlance then sends the result back to the Linux application. It's possible from a Linux driver (NDISwrapper is a Linux driver) to call a Windows driver because they both execute in the same address space (the same as the Linux kernel). If the Windows driver is composed of layered drivers (for example one for Ethernet above one for USB) it's the upper layer driver which is called, and this upper layer will create new calls (IRP in Windows parlance) by calling the "mini ntoskrnl". So the "mini ntoskrnl" must know there are other drivers, it must have registered them in its internal database a priori by reading the Windows ".inf" files.

It's kind of amazing it worked as well as it did. It wasn't exactly fun setting it up, but I never had any actual problems with it as I recall.


Yeah, I know what ndiswrapper is (though admittedly I had forgotten its name). I should have been clearer in that I meant I was constantly amazed that such a tool existed in the first place, and doubly amazed that it was reliable enough for day-to-day use.

Oh man! I first tried BeOS Personal Edition when it came on a CD with Maximum PC magazine. (Referring to the same demo CD, though the poster is not me: https://arstechnica.com/civis/viewtopic.php?f=14&t=1067159&s.... Also, how crazy is it that Ars Technica’s forums have two-decade-old posts? In 2000, that would be like seeing forum posts from 1980.) I remember being so happy when we got SDSL, and I could get online from BeOS. (Before that, my computer had a winmodem.)

BeOS was always a very tasteful design. And well documented! I learned so much about low-level programming from the Be Newsletters: https://www.haiku-os.org/legacy-docs/benewsletter/index.html. The BONE article is a great introduction to how a network stack works: https://www.haiku-os.org/legacy-docs/benewsletter/Issue5-5.h.... I still have a copy of Dominic Giampaolo’s BeFS book somewhere.

BeOS was very much a product of its time. (Microkernel, use of C++, etc.) What would a modern BeOS look like? My thought: use of a memory and thread safe language like Rust for the main app-level APIs. (Thread safety in BeOS applications, where every window ran in its own thread, was not trivial.) Probably more exokernel than microkernel, with direct access to GPUs and NICs and maybe even storage facilitated by hardware multiplexing. What else?
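For anyone who never wrote against the Be API, a rough sketch of that per-window threading from memory (class names and the locking protocol as I recall them from the Be Book, so hedge accordingly): each BWindow ran its own message loop in its own thread once shown, and touching it from any other thread meant bracketing the calls with Lock()/Unlock().

    #include <Application.h>
    #include <Window.h>

    class DemoWindow : public BWindow {
    public:
        DemoWindow()
            : BWindow(BRect(100, 100, 500, 400), "Demo",
                      B_TITLED_WINDOW, 0) {}
    };

    int main()
    {
        BApplication app("application/x-vnd.example-demo");  // made-up signature
        DemoWindow *win = new DemoWindow();
        win->Show();                 // the window now runs in its own thread

        // From any other thread, lock before poking at it:
        if (win->Lock()) {
            win->SetTitle("Updated from another thread");
            win->Unlock();
        }

        app.Run();
        return 0;
    }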


> What would a modern BeOS look like?

Haiku. (0)

But if you change your question to "What would a modern OS look like?"

Fuchsia. (1)

The only relationship between them is that a kernel engineer named Travis Geiselbrecht designed NewOS (which Haiku's kernel is based on) and Zircon (Fuchsia's microkernel).

[0] https://haiku-os.org

[1] https://fuchsia.dev


> In 2000, that would be like seeing forum posts from 1980.

That's the premise of Jason Scott's project launched in 1998 :) http://textfiles.com/


He also livestreams on Twitch; recently he's been streaming the archival of Apple II floppies.

https://www.twitch.tv/textfilesdotcom


> What would a modern BeOS look like?

There's a bit of BeOS in Android. Binder IPC is much like BMessage. And nowadays everyone puts stuff like graphics and media in a separate user space daemons, which was unusual for the time. Pervasive multithreading basically happened in the form of pervasive multiprocessing.
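For flavor, sending a BMessage looked roughly like this (the 'play' code and the target signature are invented for the example; the BMessage/BMessenger calls are from memory of the Be Book):

    #include <Message.h>
    #include <Messenger.h>

    void send_play_request()
    {
        // A BMessage is a typed bag of named fields...
        BMessage msg('play');                        // arbitrary 4-char code
        msg.AddString("file", "/boot/home/clip.avi");
        msg.AddInt32("start_frame", 0);

        // ...delivered into another app's message loop by app signature.
        BMessenger target("application/x-vnd.example-player");
        target.SendMessage(&msg);
    }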



BeOS did not use a microkernel; it was a straight-up, boring monolithic kernel.

People at the time described the kernel as a microkernel, including the O'Reilly book: https://www.oreilly.com/openbook/beosprog/book/ch01.pdf. It ran networking, for example, in user space.

Haiku OS still uses C++.

I installed BeOS a long time ago on a PC. It was something ahead of the times.

I still remember how incredible the rotating cube demo was, where you could drag and drop images and videos onto the cube faces... it worked without a glitch on my Pentium.

Just found the demo video; it shows the application with a GL wave surface playing a video over it: https://youtu.be/BsVydyC8ZGQ?t=1074


Agreed, I remember trying BeOS in the late 90s and I felt the way Tesla fans report feeling about their cars - "it just feels like the future".

The responsiveness of the UI was like nothing I'd ever seen before. Unfortunately BeOS fell by the wayside, but I have such fond memories I keep meaning to give Haiku a shot.


By all the stars in heaven, that was an impressive demo.

It's about making a virtue of a necessity.

When Be wrote that demo the situation is that the other operating systems you might plausibly choose all have working video acceleration. Even Linux has basic capabilities in this area by that point. BeOS doesn't have that and doesn't have a road map to get it soon.

So, any of the other platforms can play full-resolution video captured from a DVD, for example (a use case actual people have), on a fairly cheap machine, and BeOS won't be able to do that without a beast of a CPU because it doesn't even have hardware colour transform acceleration or chromakey.

But - 1990s hardware video acceleration can only play one video at a time, because "I want to play three videos" isn't a top ask from actual users. So, Be's demo deliberately shows several different postage stamp videos instead of one higher resolution video, as the acceleration is no help to competitors there.

And then since you're doing it all in software, not rendering to a rectangle in hardware, the transform to have this low res video render as one side of a cube or textured onto a surface makes it only very slightly slower, rather than being impossible.

Audiences come away remembering they saw BeOS render videos on a 3D surface, and not conscious that it can't do full resolution video on the cheap hardware everybody has. Mission success.


BeOS R4.5 did have hardware-accelerated OpenGL for 3dfx Voodoo cards. I played Quake 2 in 1999 with HW OpenGL acceleration. For R5, Be Inc. wanted to redo their OpenGL stack, and the initial prototypes seeded to testers actually had more FPS on BeOS than under Windows.

Eh, multithreaded decoding could help a lot. And by the time DVD video was common on computers (and the PS2), most people had a Pentium III 450MHz at home, which was more than enough for DVD video with an ASM-optimized video player such as MPlayer and a good 2D video card.

2D acceleration was more than enough.

http://rudolfs-place.nl/BeOS/NVdriver/3dnews.html

On Linux you didn't need OpenGL, just Xv.

Source: I was there, with an Athlon. Playing DVD's.


Impressive. But it makes me think how far we've come; it's long been possible to do a rotating cube with video using pure HTML and CSS.

Remember when the Amiga bouncing ball demo was impressive? Ironically 3D graphics ended up being the Amiga's specific achilles heel once Doom and co came on the scene.

That's curious to me. Doom is specifically not 3D. Was it a publishing issue (that Doom and co weren't produced for the Amiga), or a power issue, or something else?

The Amiga had planar graphics modes, while the PC/VGA cards had chunky mode (in 320x200x256 color mode).

It means that, to set the color of a single pixel on the Amiga, you had to manipulate bits at multiple locations in memory (5 locations in 32-colour mode), while on the PC each pixel was just one memory location; in chunky mode you could just do something like videomem[320*y+x]=158 to set the pixel at (x,y) to color 158, where videomem would point directly to the graphics memory (at address 0xa0000) -- it really was the simplest graphics mode to work with!

If you just copied 2D graphics (without scaling/rotating) the Amiga could do it quite well using the blitter/processor, but 3D texture mapping was more challenging because you constantly read and write individual pixels (each pixel potentially requiring 5 memory reads/writes on the Amiga vs. 1 on the PC).
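In code, the difference looks roughly like this (layout simplified; treat it as a sketch of the idea rather than exact hardware programming):

    // VGA mode 13h, 320x200x256 "chunky": one byte per pixel, one write.
    void put_pixel_chunky(unsigned char *videomem, int x, int y,
                          unsigned char color)
    {
        videomem[320 * y + x] = color;
    }

    // Amiga 32-colour planar mode: 5 bitplanes, one bit of the colour
    // index per plane, so a single pixel touches 5 memory locations.
    void put_pixel_planar(unsigned char *planes[5], int x, int y,
                          int bytes_per_row, unsigned char color)
    {
        int offset = y * bytes_per_row + (x >> 3);
        unsigned char mask = 0x80 >> (x & 7);
        for (int p = 0; p < 5; p++) {
            if (color & (1 << p))
                planes[p][offset] |= mask;    // set this plane's bit
            else
                planes[p][offset] &= ~mask;   // clear this plane's bit
        }
    }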

Doom's wall texture mapping was affine, which basically means scaling+rotation operations were involved. The sprites were also scaled. Both operations a problem to the Amiga.

As software based 3D texture mapping games became the new hot thing in 1993-1997, the Amiga was left behind. Probably wouldn't have been a problem if the Amiga has survived until the 3D accelerators in the late 90s.

This is quite well described elsewhere. Google is your friend if you want to know more! :-)


Also Amiga didn’t have hardware floating point whereas DX series of PCs in the 90s did. Essential for all those tricky 3D calculations and texture maps.

No. Hardware floating point was _Quake_

Quake has full software 3D, which runs appallingly if you can't do fast FP; it targeted the new Pentium CPUs, which all have fast FPUs onboard. It runs OK on a fast 486DX, but it flies on a cheap Pentium.

Doom is just integer calculations; it's fixed-point math.
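A rough sketch of what fixed point means here - Doom's fixed_t really was a 16.16 integer, though the helper names below are just illustrative:

    #include <cstdint>

    typedef int32_t fixed_t;          // 16 integer bits, 16 fractional bits
    const int     FRACBITS = 16;
    const fixed_t FRACUNIT = 1 << FRACBITS;

    // Multiply in a wider register, then shift back down.
    fixed_t fixed_mul(fixed_t a, fixed_t b)
    {
        return (fixed_t)(((int64_t)a * b) >> FRACBITS);
    }

    fixed_t fixed_div(fixed_t a, fixed_t b)
    {
        return (fixed_t)(((int64_t)a << FRACBITS) / b);
    }

    // e.g. 1.5 * 2.25: fixed_mul(3 * FRACUNIT / 2, 9 * FRACUNIT / 4)
    // gives 3.375 * FRACUNIT, all with integer instructions, no FPU.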


The Duke3D Build engine did use the FPU for slopes :O http://fabiensanglard.net/duke3d/build_engine_internals.php Luckily you already needed at least a DX2-66 to play the game comfortably, so not many people stumbled onto this.

I didn't know Doom was all integer ... quite a feat.

In the general sense though, the lack of floating point, as well as of flat video addressing, seriously hampered the Amiga in the 3D ahem space.

EDIT: I just remembered there is definitely at least one routine I know of that performs calculations based on IEEE 754 - "fast inverse square root" or something. That could be at the root [badum] of my confusion vis-a-vis Doom ...


The famous "fast inverse square root" was in Quake 3.

Doom didn't use polygons, but it very much was 3D in any practical sense of the term.

No, it was "distorted" 2D, like cardboards put in perspective. Not 3D.

You are still getting confused by polygons. It was a 3D space that you could move around in. The matter of how it was rendered is an implementation detail.

Doom was a 2D space that looked like a 3D space due to rendering tricks. You could never move along the Z-axis though because the engine doesn't represent, calculate, or store one. That's why you can't jump, and there are no overlapping areas of the maps.
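To make that concrete, the original map data is essentially 2D; the field names below are illustrative rather than the actual engine structs, but the missing z coordinate on things is real:

    #include <cstdint>

    struct Vertex { int16_t x, y; };                      // geometry is 2D points
    struct Sector { int16_t floor_h, ceiling_h; };        // height is a per-sector attribute
    struct Thing  { int16_t x, y, angle, type, flags; };  // no z in the map format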

Regardless of the “technicalities”, my point was that this and other 3D games were something the Amiga could not do well - whether 3D or “simulated 3D”.

It really wasn't. Doom's gameplay almost entirely took place in a 2D maze with one-way walls. It was rendered to look 3D, and as you said, that's an implementation detail.

You couldn't look up and down, and neither could you in DN3D.

I am not confused, the opposite. I grew up with that.


You can look up/down in Duke3D; it's under the Home/End keys. It doesn't look pretty or correct, but you can do it.

I grew up with it too ... I disagree with your categorical boundaries. The distinctions you draw are purely technical.

Purely technical? You can't go above or below anything; no two objects can exist at the same X/Y; height doesn't exist in any true fashion (the attribute is used purely for rendering --- there is no axis!). How is the absence of a third axis in a supposedly 3D environment purely technical?

With only two axes, it is literally a 2D space that gives some illusion of 3D as an implementation detail --- not the other way around.


It isn't "literally" a 2D space. It is "topologically" a 2D space in that you could represent it as a 2D space without loosing information. It doesn't provide 6 degrees of freedom but it is very much experienced as a 3D game environment.

EDIT: Also, using the term "literally" to talk about 3Dness when it is all rendered onto a 2D screen is fairly precarious. No matter how many degrees of freedom, or how it is rendered, it will never be "literally" 3D, in the literal sense of the term.


No free look meant no perspective distortions in Doom.

Doom's 3Dness or lack thereof only mattered to programmers. Players didn't care, to them Doom looked entirely 3D.

Curious. As a player, I certainly cared. There's a world of difference between Doom and Quake...

Players didn't have to aim up to shoot something above them

One of the last lines threw me off...

"Would Tim Berners-Lee have used a BeBox to run the world’s first web server instead?"

The BeBox didn't ship until 1995. Tim Berners-Lee wrote the first version of the web in 1991. So nope, that wouldn't have happened.


He used a NeXT computer for that IIRC

> What’s left for us now is to wonder, how different would the desktop computer ecosystem look today if all those years ago, back in 1997, Apple decided to buy Be Inc. instead of NeXT? Would Tim Berners-Lee have used a BeBox to run the world’s first web server instead?

For this hypothetical scenario to ever have been possible, BeOS would’ve had to time travel, as TBL wrote WorldWideWeb on a NeXT machine in 1990[0]. BeOS development started in 1991 per Wikipedia[1], and the initial public release of BeOS wasn’t until 1995.

[0] https://www.w3.org/People/Berners-Lee/FAQ.html#browser

[1] https://en.wikipedia.org/wiki/BeOS


I used BeOS for most of the 2nd half of the '90s and I guess in my mind at least the regrettable, messy, and unethical end of BeOS in 2001-2002 is emblematic of the Dot Com collapse.

Crushed by Microsoft's anti-competitive business practices and sold for scrap to a failing company who was unable to actually do anything with parts they wound up with but who never the less made damn sure that no one else could either.


PSA: nevertheless is written as a single word

BeOS was really a glimpse of what the future 'could' almost have been. Too bad it was killed by better competitors. But then I think it's fair to look at its successor 'Haiku' for lessons that many other OSes could learn from:

From what I can see after using Haiku for a bit, it has the bazaar community element of open-source culture, with package management and a ports system like Linux and BSD, whilst being conservative in the design of its apps, UI, and SDK, like macOS. Although I have tried it and it's surprisingly "usable", the driver story is still a bit lacking. But from a GUI usability point of view it feels very consistent, unlike the countless confusing interfaces coming from many Linux distros.

Perhaps BeOS lives on in the Haiku project, but what's more interesting is that the real contender that learned from its failure is the OS whose kernel is named 'Zircon'.


I installed BeOS when it first came out. To me it was a cool tech demo, but it was fairly useless as it didn't have a usable browser (NetPositive was half baked at best), couldn't play a lot of video codecs and couldn't connect to a Windows network share.

I feel like if they launched a better experience for existing Windows users, it would have done much better.


There was a version of Opera 3.62 for BeOS around 2000. At the time, Opera was a great browser.

I actually licensed that version :)

- VLC existed for Be

- So did Opera


> the driver story is still a bit lacking

That's a hell of an understatement right there. It still doesn't have any capability for accelerated video, does it?

Unfortunately that's the story for any OS these days that isn't already firmly established. Which is a huge shame since they all suck in their own ways.


> Unfortunately that's the story for any OS these days that isn't already firmly established.

Maybe because we're coming at this from the wrong perspective?

I love the theoretical idea that I could build a generic x86 box that can boot into any OS I feel like using, but has that ever truly been the case? We certainly don't pick software this way—if you're running Linux, you're not going to buy a copy of Final Cut and expect it to work.

Well-established software will of course work almost everywhere, but niche projects don't have the ability. Unless you use something based on Java or Electron, which is equivalent to using Virtualbox (or ESXi) in this comparison.

It's long been said that one of Apple's major advantages with macOS is they don't need to support any hardware under the sun. Non-coincidentally, the recommended way to make a Hackintosh is to custom build a PC and explicitly select Mac-compatible hardware.

Now, if an OS doesn't for instance have support for any model GPUs at all, cherry picking hardware won't help. But perhaps this is where projects like BeOS need to focus their resources.


> The "correct" way to go about things is to choose the OS first, and then select compatible hardware.

Yeah, wouldn't it be nice if we weren't constrained by real world requirements? If I were to write an OS today, the hardware I'm targeting may become quite rare and/or expensive tomorrow. Or it may just go out of fashion. Regardless, very few people are going to buy new hardware just to try out an OS they're not even sure they want to use yet.


> very few people are going to buy new hardware just to try out an OS

We do have VM's and emulators, but yes, the cost of switching OS's is huge. That's true with or without broad hardware compatibility.

My point is this: I don't think the idea of OS-agnostic hardware ever really existed. The fact that most Windows PC's can also run Linux is an exceptional accomplishment, and not something other projects can be expected to replicate. You might get other OS's to boot, but not with full functionality.


> That's a hell of an understatement right there. It still doesn't have any capability for accelerated video, does it?

We do not. But that (and proper power management) is basically all that's missing at this point; the rest are "just bugs".

That is to say: WiFi, ethernet, USB, SSDs, EFI, etc. should all work on the majority of hardware, both current and past.


That's the case. I can't use Haiku til the video is sorted, and it looks like that's a long way out. I'd love to help but I don't know C++ and I don't have time to dive into something like that.

Well, it wasn't as simple as "killed off by better competitors". It was actually much better than both Windows 98 and Mac OS at the time.

But ultimately the deathblow came from Apple, which, after struggling with low sales and poor-quality software, almost chose to buy Be Inc.'s tech but dropped it so they could bring in Steve Jobs. So it was more a case of vendor lock-in (Windows) and corporate deals (Apple), as well as failing partners (Palm).


Apple also dropped it because they couldn't come together on price, partly because BeOS was in a fairly unfinished state:

> Apple's due diligence placed the value of Be at about $50 million and in early November it responded with a cash bid "well south of $100 million," according to Gassée. Be felt that Apple desperately needed its technology and Gassée's expertise. Apple noted that only $20 million had been invested in Be so far, and its offer represented a windfall, especially in light of the fact that the BeOS still needed three years of additional expensive development before it could ship (it didn't have any printer drivers, didn't support file sharing, wasn't available in languages other than English, and didn't run existing Mac applications). Direct talks between Amelio and Gassée broke down over price just after the Fall Comdex trade show, when Apple offered $125 million. Be's investors were said to be holding out for no less than $200 million, a figure Amelio considered "outrageous."

> ...With Be playing hard to get, Apple decided to play hardball and began investigating other options.

http://macspeedzone.com/archive/art/con/be.shtml


Yeah, I feel saying "better competitors" really does Be a disservice. They largely failed due to an unfair playing field and very shady practices.
