A Testing Fairy Tale
Two test engineers were in a crunch. The floppy drive they were currently testing would work all day while they ran a variety of stress tests, but the exact same tests would run for only eight hours at night. After a few days of double-checking the hardware, the testing procedure, and the recording devices, they decided to stay the night and watch what happened. For eight hours they stared at the floppy drive and drank espresso. The long dark night slowly turned into day and the sun shone in the window. The angled sunlight triggered the write-protection mechanism, which caused a write failure. A new casing was designed and the problem was solved. Who knew?
How does that even happen?
So, with a non-write-protected disk (no open through-hole), the drive's own sensor light couldn't make it through to the detector... but sunlight sneaking in through the drive casing and under the disk still could.
Many open source projects, and plenty of commercial ones too, tend to be much more inclusive about which features they pick. The end result often becomes something like the car Homer Simpson designed: an accumulation of "Oh, that's neat!"
BeOS was like the Transmeta Crusoe CPUs, the Segway, and other products that gave journalists something to write about even though adoption was never going to happen.
The circus aspect of the media wanting to write something (as they have pages to fill and eyeballs to feed) must be quite costly to small companies wanting to get a legit product out there. I can still remember the name of Jean-Louis Gassée from his frequent soundbites in the tech press about the wonders of what BeOS was going to be. Did he actually do anything in between conferences, interviews and scheduling media appointments? If so, how did he get the time?
The 'look what we could have won' tech retrospectives are also part of this media cycle: a version of garbage-in, garbage-out applies, where the companies that got hyped get recycled as article-worthy a decade on...
Perhaps too true?
HN has its bents and focuses, but it's at least more conversationally balanced than reddit (perhaps https://news.ycombinator.com/item?id=15965536 partially explains some of the reasons why). The news media is of course pure bias and content/distribution/agenda masquerading as fact-based objectivity (eg, https://youtu.be/hWLjYJ4BzvI).
Where else can I look for... the kind of apolitical, discussion-focused information sharing that eg used to happen on blogs circa 2007, but in discussion community form, and within a community framework that is bias-resistant?
BeOS has the equivalent of POSIX O_LARGEFILE permanently enabled. Historically, 32-bit Unix systems assumed files were no larger than 2^31 - 1 bytes for the purposes of random access, and a flag labelled O_LARGEFILE says that your program wants 64-bit offset values instead, so it can seek inside a larger file. On a 64-bit OS this makes no difference to anything, and in modern programs O_LARGEFILE is effectively always set for 32-bit builds too. This isn't about the filesystem at all, on BeOS or any other Unix-like system.
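To see what the flag's effect looks like in practice, here is a minimal sketch (assuming a modern 64-bit system, where 64-bit offsets are the default) that seeks past the old 2^31 - 1 byte ceiling. The file stays sparse, so no real 2 GiB of disk is consumed:

```python
import os
import tempfile

# The old 32-bit off_t ceiling: 2 GiB minus one byte.
OLD_LIMIT = 2**31 - 1

fd, path = tempfile.mkstemp()
try:
    # Seek well past the 32-bit boundary and write one byte.
    # On a 32-bit Unix without O_LARGEFILE semantics this lseek
    # would fail; with 64-bit offsets it just works.
    os.lseek(fd, OLD_LIMIT + 1, os.SEEK_SET)
    os.write(fd, b"\x00")
    size = os.fstat(fd).st_size
    print(size > OLD_LIMIT)  # True: the file extends past 2 GiB
finally:
    os.close(fd)
    os.unlink(path)
```

The same program compiled for 32-bit Unix would need `open(..., O_LARGEFILE)` (or `_FILE_OFFSET_BITS=64` in C) to get the wide `off_t` that BeOS simply made the only option.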
BeFS does, like a lot of modern filesystems, theoretically support larger storage than is ever likely to be in the hands of ordinary end users. Many petabytes of storage in a single filesystem are in principle possible with BeFS. But that's not strongly related, in either direction, to the use of a 64-bit integer for file seeking.
Beyond that, BFS has lots of annoying problems, which are very understandable in the context of it being rushed into use over such a short period of time, with really only one key person doing most of the work, but they don't vanish just because they have an excuse:
The metadata indices are clearly aimed at end-user operations like "Where's that file with the client's name in it?" or "What songs do I have by Michael Jackson?" but they're designed in a way that wastes a lot of space and yet also performs poorly for exactly such queries, because they're case-sensitive for no good reason. They also incur a LOT of extra I/O, so if you don't need the feature you'd really want to switch it off, but you can only do that at filesystem creation time.
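A toy sketch (not BFS code; the attribute names are made up) of why case sensitivity hurts this kind of end-user query: an exact-match index keyed on raw bytes misses any query typed in the "wrong" case, unless the keys are case-folded at insert and lookup time:

```python
from collections import defaultdict

# Exact-match attribute index: attribute value -> list of files.
index = defaultdict(list)

def add(value, filename):
    index[value].append(filename)

add("Michael Jackson", "thriller.mp3")

# A user query in lowercase finds nothing, because the stored key
# is byte-for-byte "Michael Jackson".
print(index.get("michael jackson", []))   # []
print(index.get("Michael Jackson", []))   # ['thriller.mp3']

# Case-folding keys at insert and query time fixes the end-user
# query at the cost of one normalization step per operation.
folded = defaultdict(list)

def add_folded(value, filename):
    folded[value.casefold()].append(filename)

add_folded("Michael Jackson", "thriller.mp3")
print(folded.get("michael jackson".casefold(), []))  # ['thriller.mp3']
```

A real on-disk index would be a B+tree rather than a dict, but the case-sensitivity problem is identical: without normalized keys, a case-insensitive query degenerates into a full scan.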
Fragmentation is a really nasty problem. This is an extent-based filesystem, so that's somewhat inevitable, but BeFS almost seems to go out of its way to make it worse, and provides no tools whatsoever to help you fix it. It's actually possible to get a "disk full" type error when trying to append to a file which is badly fragmented, even though there is plenty of disk space.
Unix files often have an existence that transcends the mere name on the disk, but BeFS takes that a step further, allowing application software to identify a file without knowing its name at all. There are a few scenarios where this is quite clever, but if you ever want to retro-fit actual privilege separation to the OS (which has been a long term ambition for Haiku for more than a decade) this produces a Gordian knot - permissions are associated with names, but software can simply obtain (or guess!) the anonymous number for the file and sidestep such permissions altogether.
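The "anonymous number" idea can be illustrated with a runnable sketch of the plain-Unix half of it: a file's inode number is a name-independent identity that survives renames. POSIX deliberately offers no portable open-by-inode call, whereas BeOS's node-based APIs (and, on Linux, the privileged `open_by_handle_at()`) do allow opening by that identity, which is exactly the permissions hole described above:

```python
import os
import tempfile

d = tempfile.mkdtemp()
old = os.path.join(d, "secret.txt")
with open(old, "w") as f:
    f.write("data")

# The inode number identifies the file without reference to its name.
ino = os.stat(old).st_ino

# Rename the file: the name changes, the identity does not.
new = os.path.join(d, "renamed.txt")
os.rename(old, new)
print(os.stat(new).st_ino == ino)   # True: same file, new name
```

Under Unix, permission checks hang off the path lookup that turns a name into that number; an API that accepts the number directly (or lets software guess it) walks around those checks entirely.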
The journalling has a nasty hole in it. There's no way to safely delete files using the provided metadata journalling and recover the freed space, so, in practice if you crash while deleting files some of the space is just mysteriously gone until you run a "check" tool (remember the scorn for such tools in the article...) to find and recover it.
A great deal of mindshare has been eaten up by Linux. And Linux has largely "grown up" and has incorporated many ideas (and in some cases code) from the commercial Unixes. But I do wonder how many people are out there who have broad perspective and ability to push things forward at the OS level.
Also, back in the 90s I feel like systems programming was cooler and had more cachet. Even if people weren't actively writing code for their OS of choice, the icons of programming were the folks working on the guts. Nowadays, I think the cool stuff is all happening pretty far up the stack, and that there are relatively few people working on the substrate.
That's just my impression though. I'd be delighted for someone to look at kernel and system contributions to Linux, illumos, and the BSDs and convince me that there is sustainable mindshare to go around, and that all these OSes have a bright future.
For instance, the BSDs didn't have 64-bit block offsets until UFS2, which wasn't the default until FreeBSD 5 in 2003. On other systems, HFS didn't get 64-bit file lengths until HFS+ in 1998. NTFS started out fully 64-bit, but you couldn't really get it on a consumer system until Windows XP (maybe Windows 2000 for the prosumer market).
Nope, the change to off_t is literally all that's behind this claim. It's not Be's claim, explicitly, although they were of course in no hurry to clear it up.
BFS is specifically _not_ itself suitable for huge files. Last time I looked the default Haiku BFS settings won't allow files to grow beyond about 140GB. Which is fine for an ordinary file of course, but you certainly won't be happy if you expected to store a filesystem image _inside_ a file.
This happens because it's a rush job. For example, there's lots of special corner-case code needed for extent handling in the multiple-indirection case. I can easily imagine a week or two of work just to develop, let alone test, that part of the system. So instead, although BFS labels these "extents", it still just uses fixed-size blocks so that the code stays simpler. That's not a problem at all for small to medium-size files on a defragmented or largely empty disk. But as files get bigger and disks get fragmented... oops.
NTFS dates back to 1993; it takes some real dodging and diving to conclude that NT 3.1 (in 1993) wasn't a "consumer system" and yet BeOS, available only to people who swore they were software developers wanting to write software for Be's new BeBox hardware, somehow was.
I think the mail application and the media player were largely built this way because Be needed to build a handful of demonstration applications quickly and cheaply. On the other hand, I remember thinking how nice it was that email and multimedia behaved like the file browser with all its search and filtering features.
iTunes always felt like the exact antithesis of the BeOS model. It was a heavyweight application that had its own interaction model and idiosyncrasies. It might as well have been its own OS by comparison.
The .vol directory: http://www.westwind.com/reference/os-x/invisibles.html
Does anyone know what functionality the anonymous developer is referring to here?
It also suffers greatly from the lack of engineers who worked on it: some parts have a lot of comments, others none whatsoever. Some parts look very well done, some less so.
Back in the day, I saw the code (those leaks were real). One zip was able to fully compile (this is where the PowerPC version of post-R5.03 came from; I used to have it compiling on my old BeBox, long gone now), and the other was a much later version, but the zip was corrupt and half the files were garbled.
DISCLAIMER: No, I don't have the code anymore (not that I would share it if I did), and no, I have never worked for or contributed to the Haiku project.
Most likely it contains some third-party code under non-open licenses (e.g. in device drivers, but not only there), and maybe some subsystems were licensed to others in some exclusive form. They might also want to double-check that the code doesn't "accidentally" look like somebody else's (e.g. the ext2 code might be inspired by Linux's). All this means a review of old contracts and code while serving no business purpose (besides marketing to a niche of nerds).
Access owns it. Even though they aren't doing anything with it, if they just released the code they would probably open themselves up to lawsuits over possible third-party code. So they would have to pay someone to go through the code and strip out the third-party parts. That would take money with no tangible benefit to them, so they won't do it.
"The only BeOS code that has made it into Haiku are Tracker and the Deskbar (the file manager and the equivalent of the start menu/taskbar, respectively)."
No. They rewrote the whole thing (minus the Deskbar/launcher). If it looks to you like they're just keeping the old system stable, that means Haiku has been very successful in recreating it as open source.
They're close to 1.0beta1, which is defined as feature-complete for 1.0. It's after they do get 1.0 that they plan on really taking the system's design forward.
Not that it doesn't already have a few fundamental improvements, such as the integrated kernel debugger, support for newer architectures, and the package manager.