The grand irony is that Larry was one of the earliest advocates of open sourcing the operating system at Sun[1] -- and believed that by the time Sun finally collectively figured it out and made it happen (in 2005), it was a decade or more too late.[2] So on the one hand, you can view the story of BitKeeper with respect to open source as almost Greek in its tragic scope: every reason that Larry outlined for "sourceware"[3] for Sun applied just as much to BK as it did to SunOS -- with even the same technologist (Torvalds) leading the open source alternative! And you can say to BK and Larry now that it's "too late", just as Larry told Sun in 2005, but I also think this represents a forced dichotomy of "winners" and "losers." To the contrary, I would like to believe that the ongoing innovation in the illumos communities (SmartOS, OmniOS, etc.) proves that it's never too late to open source software -- that open source communities (like cities) can be small yet vibrant, serving a critical role to their constituencies. In an alternate universe, might we be running BK on SunOS instead of git on Linux? Sure -- but being able to run an open source BK on an open source illumos is also pretty great; the future of two innovative systems has been assured, even if it took a little longer than everyone might like.
So congratulations to Larry and crew -- and damn, were you ever right in 1993! ;)
Yeah, this irony is not lost on me. But in both cases, the companies acted in self-interest. Neither had the guts to walk away from its existing revenue stream. It's hard to say what would have happened.
It's been an interesting ride, and if nothing else, BK was the inspiration for Git and Hg; that's a contribution to the field. And maybe, just maybe, people will look at the SCCS weave and realize that Tichy pulled the wool over our eyes. SCCS is profoundly better.
Sun's TeamWare [1] was probably the first real distributed version control system. It worked on top of SCCS. Larry McVoy, BitKeeper's creator, was involved in its development. I believe BitKeeper also uses parts of SCCS internally.
That is my understanding, yes -- but the NSE and (McVoy-authored) NSElite chapters of the saga pre-date me at Sun. Before my "Fork Yeah!" talk[1][2], from which this is drawn, I confirmed this the best I could, but it was all based only on recollections of the engineers who were there (including Larry). I haven't found anything written down about (for example) NSElite, though I would love to get Larry on the record to formalize that important history...
The NSE was Sun's attempt at a grand SCM system, and it was miserably slow (a single-threaded, FUSE-like COW filesystem implemented in user space). I did performance work back then, sort of a jack of all trades (filesystem, VM system, networking, you name it), so Sun asked me to look at it. I did, and recoiled in horror; it wasn't well thought out for performance.
My buddies in the kernel group were actually starting to quit because they were forced to use the NSE and it made them dramatically less productive. Nerds hate being slowed down.
Once the whole SCM thing crossed my radar screen I was hooked. Someone had a design for how you could have two SCCS files with a common ancestry and they could be put back together. I wrote something called smoosh that basically zippered them together.
Nobody cared. So I looked harder at the NSE and realized it was SCCS under the covers. I built a pile of perl that gave birth to the clone/pull/push model (though I bundled all of that into one command called resync). It wasn't truly distributed in that the "protocol" was NFS (I just didn't do that part), but the model was the git model you are used to now, minus changesets.
I made all that work with the NSE: you could bridge in and out, and one by one the kernel guys gave up on NSE and moved to nselite. This was during the Solaris 5.0 bringup.
I was forced to stop developing nselite by the VP of the tools group, because by this time Sun knew that nselite had won and NSE had lost, so they ramped up an 8-person team to rewrite my perl in C++ (Evan later wrote a paper basically saying that was an awful idea). They took smoosh.c and never modified it, just stripped my history off (yeah, some bad blood).
Their stuff wasn't ready, so I kept working, but that made them look bad: one guy with some perl scripts outpacing 8 people with a supposedly better language. So their VP came over and said "Larry, this went all the way up to Scooter; if you do one more release you're fired," and set back SCM development almost a decade. That was ~1991, and I didn't start BitKeeper until 1998. There is no doubt in my mind that if they had left me alone, Sun would have had the first DVCS.
Fun times, I went off and did clusters in the hardware part of the company.
Wow, jackpot -- thank you! That 2000th resync is a who's who of Sun's old guard; many great technologies have been invented and many terrific companies built by the folks on that list! I would love to see the nselite man pages that the README refers to (i.e., resync(1) and resolve(1)); do you happen to still have those?
Also, even if privately... you need to name that VP. ;)
You weren't the only one to do SCCS over NFS. The real-time computer division of Harris did it too. That version control system was already considered strange and old by 2004 when I encountered it.
One man using perl can outpace eight using C++. Who would've thought? /sarcasm. But seriously, I think this is one of the classic instances of what is now quite common knowledge about dynamic scripting languages: they let you get things done MUCH faster. I think the tools group learned the wrong lesson from this, but OTOH, who would want to start developing all of their new software in perl? And given that Python hadn't caught on yet, there really wasn't much else out there in the field.
Firstly, the mention of LISP would have probably sent most of Sun screaming and running for the hills at the time. Secondly, LISP has never been very good at OS integration, one of the most important things for many software projects.
And for the record, Evan was somewhat justified in not saying I had anything to do with Teamware since I made his team look like idiots, ran circles around them. On the other hand, taking smoosh.c and removing my name from the history was dishonest and a douche move. Especially since not one person on that team was capable of rewriting it.
The fact remains that Teamware is just a productized version of NSElite, which was written entirely by me.
If I sound grumpy, I am. Politics shouldn't mess with history but they always do.
I really doubt it. There's not enough demand to make them competitive in price-performance. The server chips have stayed badass but are still a niche market. There's even open-source SPARC HW, with reference boards for sale from Gaisler. That didn't take off.
I liked the desktops, but there's no money in them. The market always rejects them. So does FOSS, despite SPARC being the only open ISA with mainstream, high-performance implementations.
> There's not enough demand to make them competitive in price-performance.
There does not need to be demand: Steve Jobs (in)famously said that where there was no market, "create one".
I for one would absolutely love to be able to buy an illumos-powered A4-sized tablet which ran a SPARC V9 instruction set, plugged into a docking station, and worked with a wireless keyboard and mouse to be used as a workstation when I'm not walking around with my UNIX server in hand. Very much akin to Apple Computer's iPad Pro (or whatever they call it, I don't remember, nor is that really relevant).
But the most important point was, and still is, and always will be: it has to cost as much as the competition, or less. Sun Microsystems would just not accept that, no matter how much I tried to explain and reason with people there: "talk to the pricing committee". What does that even mean?!? Was the pricing committee composed of mute, deaf and blind people who were not capable of seeing that PC-buckets were eating Sun's lunch, or what?
"There does not need to be demand: Steve Jobs (in)famously said that where there was no market, "create one"."
What people forget is that Steve Jobs was a repeated failure at doing that, got fired, did soul-searching, succeeded with NeXT, got acquired, and then started doing what you describe. Even he failed more than he succeeded at that stuff. A startup trying to one-off create a market just for a non-competitive chip is going to face the dreaded 90+% failure rate.
"But the most important point was, and still is, and always will be: it has to cost as much as the competition, or less."
That's why the high-security stuff never makes it. It takes at least a 30% premium on average per component. I totally believe your words fell on deaf ears at Sun. I'd have bought SunBlades myself if I could have afforded them. I could afford nice PCs. So, I bought nice PCs. Amazing that the echo chamber was so loud in there that they couldn't make that connection.
"I for one would absolutely love to be able to buy an illumos-powered A4-sized tablet which ran a SPARC V9 instruction set"
That's actually feasible, given that the one I promote is a 4-core, 1+ GHz embedded chip that should be low-power on a decent process node.
The main issue is the ecosystem and components like browsers with JITs that must be ported to SPARC. One company managed to port Android to MIPS, but that was a lot of work. Such things could probably be done for SPARC as well. The trick is implementing the ASIC, implementing the product, porting critical software, and then charging enough to recover all that, but not more than the competition, whose work is already done for them. Tricky, tricky.
Raptor's Talos Workstation, if people buy it, will provide one model for how this might happen. You could get ASICs on 45-65nm really quickly, use SMP given the per-chip cost is $10-30, port Solaris/Linux with containers, put in a shitload of RAM, and sell it for $3,000-6,000 for VM-based use and development. It would still take thousands of units to recover the cost. Might need government sales.
The problem is that the number of people who are into Illumos and want a portable Unix server is insignificant. All products come with fixed overheads (e.g. the cost of tooling to start production), which have to be divided over the likely customer base. Small customer base == each customer pays a bigger share of the fixed overheads.
Basically, you want something to suit you and a small number of other people, but you won't pay the cost of having something that's "tailor made". You will only pay for the high-volume, lower-cost, more general product. So... that's all you get.
The problems were Sun's failure to recognize that cheap IBM PC clones would disrupt them like they disrupted mainframes, and Sun not trying hard enough to overcome Wintel's network effect. Sun needed to die-shrink old designs to get something that they could fabricate at low cost and compete on price. Such a thing would have cannibalized Sun workstation sales, which might be why they never did it.
You again! (:-) You know that the VHDL code for UltraSPARC T1 and T2 has been open sourced? If I had enough knowledge about synthesizing code inside of an FPGA, I would be building my own SPARC-based servers like there is no tomorrow!
As long as the code for those processors remains free, and a license to implement a SPARC ISA compliant processor only costs $50, the SPARC will never really, truly be gone, especially not for those people capable of synthesizing their own FPGA's, or even building their own hardware.
Some people did exactly that, a while back. Too bad they didn't turn their designs into ready-to-buy servers.
" You know that the VHDL code for UltraSPARC T1 and T2 has been open sourced?"
That was exciting. It could do well even on a 200MHz FPGA given its threading performance. Then, use eASIC's Nextreme to convert it to a structured ASIC for better speed, power usage, and security from dynamic attacks on the FPGA. That it's Oracle and they're sue-happy concerns me. I'd read that "GPL" license very carefully just in case they tweaked it. If it's safe, then drop one of those bad boys (yes, the T2) on the best node we can afford with key I/O. We can use microcontrollers on the board for the rest, as they're dirt cheap. Same model as Raptor's, as I explained in another comment.
Alternatively, use Gaisler as Leon3 is GPL and designed for customization. Simple, too. Leon4 is probably inexpensive compared to ARM, etc.
"SPARC ISA compliant processor only costs $50, the SPARC will never really, truly be gone, especially not for those people capable of synthesizing their own FPGA's,"
Yep.
"Too bad they didn't turn their designs into ready-to-buy servers."
Not quite a server but available and illustrates your point:
No, they are magic: arbitrary hardware designs run without the cost of chip fabrication. Two non-profit FPGA's, one for performance at 28nm and one for embedded at 28SLP, would totally address the custom hardware and subversion problem, given we could just keep checking on that one. The PPC cores, and soon Intel Xeons, already show what a good CPU plus FPGA acceleration with local memory can do for applications.
Yeah, buddy, they're like magic hardware. Even if they can't compete with the best ASICs, they're still magic, with a market share and diverse applications that show it. :)
In more ways than you'd know. They're already pre-backdoored, like almost all other chips, for debugging purposes, in what's called Design for Test or scan chains or scan probes or something. Much hardware hacking involves getting to those suckers to see what the chip is doing.
Now, for remote attacks, you can embed RF circuitry in them that listens to any of that. You can embed circuits that receive an incoming command, then dump the SRAM contents. You might modify the I/O circuitry to recognize a trapdoor command that runs incoming data as privileged instructions. You can put a microcontroller in there, connected to PCI, to do the same for host-PC attacks. I know, that would be the first option, but I was having too much fun at the RTL and transistor level. :)
You make that sound easy. The POWER and SPARC T/M chips are amazing. Yet Intel Xeon still dominates, to the point that they can charge less and invest more. That's with one hell of a head start from Sun and IBM. The others... Alpha, MIPS, and PA-RISC... folded in the server market, with Itanium soon to follow.
You can't just compete directly in that market: you have to convince buyers that yours is worth buying despite less performance at a higher price and wattage. Itanium tried with reliability and security advantages. Failed. Fortunately, Dover is about to try with RISC-V combined with the SAFE architecture (crash-safe.org) for embedded stuff. We'll see what happens there.
Yeah, that. I think he got a PhD for RCS, and what he should have gotten was shown the door. RCS sucks; SCCS is far, far better.
Just as an example, RCS could have been faster than SCCS if they had stored, at the top of the file, the offset to where the tip revision starts. You read the first block, seek past all the stuff you don't need, and start reading where the tip is.
But RCS doesn't do that; it reads all the data until it gets to the tip. Which means it is reading as much data as SCCS but only has sorta-good performance for the tip. SCCS is more compact and is as fast or faster for any rev.
And BK blows them both away; we lz4-compress everything, which means we do less I/O.
RCS sucked but had good marketing. We're here to say that SCCS was a better design.
RCS is patch-based: the most recent version is kept in clear text, the previous version is stored as a reverse patch, and so on back to the first version. So getting the most recent version could be fast (it isn't), but the farther back you go in history, the more time it takes. And branches are even worse: you have to patch backwards to the branch point and then forwards to the tip of the branch.
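(A toy Python sketch of that cost model -- illustrative only, not RCS's actual ed-script delta format: the tip is stored whole, and each step back in history costs one more full patch pass.)

def checkout(tip, reverse_patches, steps_back):
    # tip: list of lines for the newest revision.
    # reverse_patches[i] turns revision N-i into revision N-i-1, as a
    # list of ("del", index) / ("ins", index, text) edits.
    lines = list(tip)
    for patch in reverse_patches[:steps_back]:   # one pass per step back
        for edit in patch:
            if edit[0] == "del":
                del lines[edit[1]]               # drop a line the newer rev added
            else:
                lines.insert(edit[1], edit[2])   # restore a line the newer rev removed
    return lines

# Tip is version 3; applying two reverse patches reaches version 1.
v3 = ["line one", "line two (added in v3)"]
patches = [[("del", 1)],                                  # v3 -> v2
           [("ins", 1, "old line two (removed in v2)")]]  # v2 -> v1
print(checkout(v3, patches, 2))   # ['line one', 'old line two (removed in v2)']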
SCCS is a "weave". The time to get the tip is the same as the time to get the first version or any version. The file format looks like
^AI 1
this is the first line in the first version.
^AE 1
That's "insert in version 1" data "end of insert for version one".
Now let's say you added another line in version 2:
^AI 1
this is the first line in the first version.
^AE 1
^AI 2
this is the line that was added in the second version
^AE 2
So how do you get a particular version? You build up the set of versions that are in that version. In version 1, that's just "1"; in version 2, it's "1, 2". So if you want version 1, you sweep through the file and print anything that's in your set. You print the first line, get to the ^AI 2, look to see if 2 is in your set, it isn't, so you skip until you get to the ^AE 2.
So any version takes the same time. And that time is fast: the largest file in our source base is slib.c, 18K lines, and it checks out in 20 milliseconds.
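(To make that sweep concrete, here's a minimal Python sketch of the extraction just described. It's a simplification: the "I n" / "E n" strings stand in for SCCS's ^AI / ^AE control lines, and real SCCS also has ^AD delete blocks, which this toy ignores.)

def extract(weave, serials):
    # weave: list of lines; control lines "I n" / "E n" stand in for
    # SCCS's ^AI n / ^AE n. serials: the set of versions making up the
    # revision being checked out, e.g. {1} for v1, {1, 2} for v2.
    active = [True]          # stack: is the enclosing insert block wanted?
    out = []
    for line in weave:
        if line.startswith("I "):
            active.append(active[-1] and int(line[2:]) in serials)
        elif line.startswith("E "):
            active.pop()
        elif active[-1]:     # body line inside a wanted insert block
            out.append(line)
    return out

weave = ["I 1",
         "this is the first line in the first version.",
         "E 1",
         "I 2",
         "this is the line that was added in the second version",
         "E 2"]
print(extract(weave, {1}))     # just the first line
print(extract(weave, {1, 2}))  # both lines; one pass, same cost either way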
I had... much too extensive experience both with SCCS weaves and with hacking them way back in the day; I even wrote something which sounds very like your smoosh, only I called it 'fuse'. However, I wrote 'fuse' as a side-effect of something else, 'fission', which split a shorter history out of an SCCS file by wholesale discarding of irrelevant, er, strands and of the history relating to them. I did this because the weave is utterly terrible as soon as you start recording anything which isn't plain text or which has many changes in each version, and we were recording multimegabyte binary files in it by uuencoding them first (yes, I know, the decision was made way above my pay grade by people who had no idea how terrible an idea it was).
Where RCS or indeed git would have handled this reasonably well (indeed the xdelta used for git packfiles would have eaten it for lunch with no trouble), in SCCS, or anything weave-based, it was an utter disaster. Every checkin doubled the number of weaves in the file, an exponential growth without end which soon led to multigigabyte files which xdelta could have represented as megabytes at most. Every one-byte addition or removal doubled up everything from that point on.
And here's where the terribleness of the 'every version takes the same time' decision becomes clear. In a version control system, you want the history of later versions (or of tips of branches) overwhelmingly often: anything that optimizes access time for things elsewhere in the history at the expense of this is the wrong decision.
When I left, years before someone more courageous than me transitioned the whole appalling mess to git, our largest file was 14GiB and took more than half an hour to check out.
The SCCS weave is terrible. (It's exactly as good a format as you'd expect for the time, since it is essentially an ed script with different characters. It was a sensible decision for back then, but we really should put the bloody thing out of its misery, and ours.)
Yeah. I suspect the answer is 'store all binary data in BAM', which then uses some different encoding for the binary stuff -- but that then makes my gittish soul wonder why not just use that encoding for everything. (It works for git packfiles... though 'git gc' on large repos is a total memory and CPU hog, one presumes that whatever delta encoding BAM uses is not.)
We support the uuencode horror for compat (and for smaller binaries that don't change), but the answer for binaries is BAM; there is no data in the weave for BAM files.
I don't agree that the weave is horrible; it's fantastic for text. Try git blame on a file in a repo with a lot of history, then try the same thing in BK. Orders and orders of magnitude faster.
And go understand smerge.c and the weave lightbulb will come on.
Yeah, that's the problem; it's optimizing for the wrong thing. It speeds up blame at the expense of absolutely every other operation you ever need to carry out; the only thing which avoids reading (or, for checkins, writing) the whole file is a simple log. Blame is a relatively rare operation: its needs should not dominate the representation.
The fact that the largest file you mention is frankly tiny shows why your performance was good: we had ~50,000 line text files (yeah, I know, damn copy-and-paste coders) with a thousand-odd revisions and a resulting SCCS filesize exceeding three million lines, and every one of those lines had to be read on every checkout: dozens to hundreds of megabytes, and of course the cache would hardly ever be hot where that much data was concerned, so it all had to come off the disk and/or across NFS, taking tens of seconds or more in many cases. RCS could have avoided reading all but 50,000 of them in the common case of checkouts of most recent changes. (git would have reduced read volume even more because although it is deltified the chains are of finite length, unlike the weave, and all the data is compressed.)
I'd do that if I was still working there. I can probably still get hold of a horror case but it'll take negotiation :)
(And yes, optimizing merge matters too, indeed it was a huge part of git's raison d'etre -- but, again, one usually merges with the stuff at the tip of tree: merging against something you did five years ago is rare, even if it's at a branch tip, and even rarer otherwise. Having to rewrite all the unmodified ancient stuff in the weave merely because of a merge at the tip seems wrong.)
(Now I'm tempted to go and import the Linux kernel or all of the GCC SVN repo into SCCS just to see how big the largest weave is. I have clearly gone insane from the summer heat. Stop me before I ci again!)
Our busiest file is 400K checked out and about 1MB for the history file lz4 compressed. Uncompressed is 2.2M and the weave is 1.7M of that.
Doesn't seem bad to me. The weave is big for binaries, we imported 20 years of Solaris stuff once and the history was 1.1x the size of the checked out files.
Interesting.
"^AI Spec" where Spec feeds into a predicate f(Spec, Version) to control printing a particular Version?
Looks like you could drop the ^AE lines.
Probably missing something. Both work on one file at a time and have some form of changeset. One adds from back to front (kind of), the other from front to back (RCS). Not sure where the reduction of work comes from.
bzr got more than that from BK; it got one of my favorite things, per-file checkin comments. I liken those to regression tests: when you start out you don't really value them, but over time the value builds up. The fact that Git doesn't have them bugs me to no end. bzr was smart enough to copy that feature, and that's why MySQL chose bzr when they left BK.
The thing bzr didn't care about, sadly, is performance. An engineer at Intel once said to me, firmly, "Performance is a feature".
Git's attitude, AFAIK, is that if you want per-file comments, make each file its own checkin. There are pros and cons to this.
Performance as a feature, OTOH, is one of Linus's three tenets of VCS. To quote him, "If you aren't distributed, you're not worth using. If you're not fast, you're not worth using. And if you can't guarantee that the bits I get out are the exact same bits I put in, you're not worth using."
If I remember the history correctly, per-file commit messages were actually a feature that was quickly hacked in to get MySQL on board. bzr did not have it before those MySQL talks, and I don't think it was very popular after.
Performance indeed killed bzr. Git was good enough and much faster, so people just got used to its weirdness.
> Git was good enough and much faster, so people just got used to its weirdness.
And boy is git weird! In Mercurial, I can mess with the file all day long after scheduling it for a commit, but one can forget that in git: marking a file for addition actually snapshots a file at addition time, and I have read that that is actually considered a feature. It's like I already committed the file, except that I didn't. This is the #1 reason why I haven't migrated from Mercurial to git yet, and now with Bitkeeper free and open source, chances are good I never will have to move to git. W00t!!!
I just do not get it... what exactly does snapshotting a file before a commit buy me?
It's probably the same idea as the one behind committing once in Mercurial and then using commit --amend repeatedly as you refine the changes. Git's method sounds like it avoids a pitfall in that method by holding your last changeset in a special area rather than dumping it into a commit so that you can't accidentally push it.
I often amend my latest commit as a way to build a set of changes without losing my latest functional change.
I always do a hg diff before I commit. If in spite of that I still screw up, I do a hg rollback, and if I already pushed, I either roll back on all the nodes, or I open a bug, and simply commit a bug fix with a reference to the bug in the bug tracking system. I've been using Mercurial since 2007 and I've yet to use --amend.
> In Mercurial, I can mess with the file all day long after scheduling it for a commit
OTOH, I find that behavior weird as I regularly add files to the index as I work. If a test breaks and I fix it, I can review the changes via git diff (compares the index to the working copy) and then the changes in total via git diff HEAD (compares the HEAD commit to the working copy).
A 10 or even 20% performance gain is not a feature. But when tools or features get a few times faster or more, their usage model changes - which means they become different features.
In short, RCS maintains a clean copy of the head revision, and a set of reverse patches to be applied to recreate older revisions. SCCS maintains a sequence of blocks of lines that were added or deleted at the same time, and any revision can be extracted in the same amount of time by scanning the blocks and retaining those that are pertinent.
Really old school revision control systems, like CDC's MODIFY and Cray's clone UPDATE, were kind of like SCCS. Each line (actually card image!) was tagged with the ids of the mods that created and (if no longer active) deleted it.
I've read that "sourceware" article before, in the distant past when it was still a roughly accurate picture of the market (maybe 1995 or 1996). It's weird to read it again now, in a world that is so remarkably changed. Linux, the scrappy little upstart with a million or so users at the time of the paper, is now the most popular OS (or at least kernel) on the planet, powering billions of phones and servers. NT was viewed as the unfortunate but inevitable future of server operating systems.
I remember looking at IT jobs back then, and seeing a business world covered in Windows NT machines; I even got my MCSE (alongside some UNIX certifications that I was more excited about), because of it. Looking at jobs now, the difference is remarkable, to say the least. Nearly every core technology a system administrator needs to know is Open Source and almost certainly running on Linux.
And, the funny thing is that the general prescription (make a great Open Source UNIX) is exactly what it took to save UNIX. It just didn't involve any of the big UNIX vendors in a significant way (the ones spending a gazillion dollars on UNIX development at the time). Linux got better faster than Sun got smarter, and ate everybody's lunch, including Microsoft. Innovator's Dilemma strikes again.
Apple is an interesting blip on the UNIX history radar, too...though, they're likely to lose to the same market forces in the end, as phones become commodities. I'm a bit concerned that it's going to be Android, however, that wins the mobile world since Android is nowhere near the ideal OS from an Open Source and ethical perspective; but, I guess they got the bits right that Larry was suggesting needed to be right.
Anyway, it was a weird flashback to read that article. Things change, and on a scale that seems slow until you look back on it and see it's "only" been a couple of decades. In the grand scheme of things, and compared to the pace of technology prior to the 1900s, that's a blink of an eye.
Eh. Linux does have a LOT of problems. Well, so does everything, but it's not like it's "great." More like "good."
But yeah, it was weird that everybody thought NT was going to be the future. And now, MS has opensourced a good deal of infrastructure, is working with Node, has announced an integrated POSIX environment for Windows. And since it's in corporate, it might even be able to fix the fork(2) performance problems.
Great is a relative term. But, can you name an OS, especially a UNIX, that is better in the general case? By "general case", I mean, good for just about anything, even if there's something better for some niche or role. Also, take into account the world we live in: More computing happens on servers and phones than on desktops and laptops; and judge the OS based on how it's doing in those roles.
I sincerely consider Linux a great UNIX. Probably the best UNIX that's existed, thus far. There are warts, sure. Technically, Solaris had (and still has, in IllumOS and SmartOS) a small handful of superior features and capabilities (at this point one can list them on one hand, and one could also list some "almost-there" similar techs on Linux). But, I assume you've used Solaris (or some other commercial UNIX) enough to have an opinion...can you honestly say you enjoyed working on it more than Linux? The userspace on Solaris was always drastically worse than Linux, unless you installed a ton of GNU utilities, a better desktop, etc. But, Linux brought us a UNIX we could realistically use anywhere, and at a price anyone could afford. That's a miracle for a kid that grew up lusting after an Amiga 3000UX (because it was the closest thing to an SGI Indy I could imagine being able to afford).
Fair 'nuff. And no, I haven't used commercial UNIXes all that much, but I have experienced plenty of Linux's warts. I do agree with a lot of those points, but containers on Linux just aren't there, systemd is a mess that's going to get pushed in no matter what we say, and there are plenty of issues to be had, although the latter is true of any UNIX. If you want to know what the rest of the issues are, just start googling. And while you're at it, listen to some of Bryan Cantrill's talks. They are biased (obviously), but they're entertaining, and they do point out some things that I think are real problems (POSIX conformance (MADV_DONTNEED) and epoll semantics, mainly).
Oh, and don't flame me for speaking in ignorance. I've been a Linux user for half a decade at least now, and I CAN say I see problems with it. I can also say, as a person who is a programmer, that some of the things that Cantrill pointed out are actually evil. Note, however, that I don't claim Solaris, or any other OS, is better. Every UNIX is utterly fucked in some respect. I just know Linux's flaws the best.
By the way, I've been trying to get Amiga emulation working for a while. It basically works at this point, but the *UAEs are a misery on UNIX systems. Without any kind of loader, you have to spend 10 minutes editing the config every time you want to play a game. But if you're in any way interested in the history you lived through, check out youtube.com/watch?v=Tv6aJRGpz_A
Those issues seem so trivial with the benefit of hindsight and a memory of what it was like to deploy an application to multiple UNIX variants. Having one standard (we can call that standard "it ain't quite POSIX, but it runs great on Linux") is so superior to the mine field that was all of the UNIXen in 199x, that I don't even register it as a problem. Shoot, until you've had to use autotools or custom build a makefile for a half dozen different C compilers, kernels, libc, and so on, you don't know from POSIX "standards" pain.
But now my beard is showing and I'm ranting. My point is this: it took something from completely outside of the commercial UNIX ecosystem, so far out in left field that none of the UNIX bosses (or Microsoft) saw it as a threat until it was far too late...and it took something that was good, really good in at least some regards, that it would have passionate fans even very early in. Linux did that. And, compared to everything else (pretty much everything else that's ever existed, IMHO), it's great.
And, I'm on board the retro computing bandwagon. I have a real live Commodore 64 and an Atari 130xe. I'd like to one day find an Amiga 1200 in good shape, but because I live in an RV and travel fulltime, I don't have a lot of room to spare. But I do like to tinker and reminisce.
> until you've had to use autotools or custom build a makefile for a half dozen different C compilers, kernels, libc, and so on, you don't know from POSIX "standards" pain.
Truer words have ne'er been spoken. My first big boy job involved building and maintaining a large open source stack on top of AIX. These days I occasionally experience hiccups related to OpenBSD not being Linux. Problems aren't even in the same league. That said, the thrill of getting stuff to work on AIX was certainly greater (and purchased with more human suffering).
> Great is a relative term. But, can you name an OS, especially a UNIX, that is better in the general case? By "general case", I mean, good for just about anything, even if there's something better for some niche or role. Also, take into account the world we live in: More computing happens on servers and phones than on desktops and laptops; and judge the OS based on how it's doing in those roles.
Okay then, SmartOS. Why is an exercise left for the reader, because it would just take too much and too long to list and explain all the things it does better, faster and cheaper than Linux in server space; that's material rife for an entire book.
> can you honestly say you enjoyed working on it more than Linux?
Enjoyed it?!? Love it; I love working with Solaris 10 and SmartOS! It's such a joy having an OS that isn't broken, one which actually does what it is supposed to do (runs fast, is efficient, protects my data, is designed to be correct). When I am forced to work with Linux (which I am, at work, 100% of the time, and I hate it), it feels like I am on an operating system from the past century: ext3 / ext4 (no XFS for us yet, and even that is ancient compared to ZFS!), memory overcommit, data corruption, no backward compatibility, navigating the minefield of GNU libc and userland misfeatures and "enhancements". It's horrible. I hate it.
> The userspace on Solaris was always drastically worse than Linux,
Are you kidding me? System V is great, it's grep -R and tar -z that I hate, because it only works on GNU! Horrid!!!
> But, Linux brought us a UNIX we could realistically use anywhere, and at a price anyone could afford.
You do realize that if you take an illumos derived OS like SmartOS and Linux, and run the same workload on the same cheap intel hardware, SmartOS is usually going to be faster, and if you are virtualizing, more efficient too, because it uses zones, right? Right?
It's like this: when I run SmartOS, it's like I'm gliding around in an ultramodern, powerful, economical mazda6 diesel (the 175 HP / 6 speed Euro sportwagon version); I slam the gas pedal and I'm doing 220 km/h without even feeling it and look, I'm in Salzburg already! When I'm on Linux, I'm in that idiotic Prius abomination again: not only do I not have any power, but I end up using more fuel too, even though it's a hybrid, and I'm on I-80 somewhere in Iowa. That's how I'd compare SmartOS to Linux.
> It's like this: when I run SmartOS, it's like I'm gliding around in an ultramodern, powerful, economical mazda6 diesel (the 175 HP / 6 speed Euro sportwagon version);
Nothing would make me happier professionally than to have the opportunity to work with Bryan (sadly, we've never met, although I did work in Silicon Valley for a while). For instance, those times when I wouldn't be writing C, I could finally have an orgy of AWK one-liners and somebody would appreciate it without me having to defend why I used AWK.
I'm struck by how much this sounds like a Linux fan ranting back in 1995, when Windows and "real" UNIX was king. The underdog rants were rampant back then (I'm sure I penned a few of them myself).
I think the assumption you're making is that people choose Linux out of ignorance (and, I think the ignorance goes both ways; folks using Solaris have been so accustomed to Zones, ZFS, and dtrace being the unique characteristic of Solaris for so long that they aren't aware of Linux' progress in all of those areas). But, there are folks who know Solaris (and its children) who still choose Linux based on its merits. We support zones in our products/projects (because Sun paid for the support, and Joyent supported us in making Solaris-related enhancements), and until a few years ago it was, hands-down, the best container technology going.
Linux has a reasonable container story now; the fact that you don't like how some people are using it (I think Docker is a mess, and I assume you agree) doesn't mean Linux doesn't have the technology for doing it well built in. LXC can be used extremely similarly to Zones, and there's a wide variety of tools out there to make it easy to manage (I work on a GUI that treats Zones and LXC very similarly, and you can do roughly the same things in the same ways).
"Are you kidding me? System V is great, it's grep -R and tar -z that I hate, because it only works on GNU! Horrid!!!"
Are you really complaining about being able to gzip and tar something in one command? Is that a thing that's actually happening in this conversation?
I'll just say I've never sat down at a production Sun system that didn't already have the GNU utilities installed by some prior administrator. It's been a while since I've sat down at a Sun system, but it was standard practice in the 90s to install GNU from the get go. Free compiler that worked on every OS and for building everything? Hell yes. Better grep? Sure, bring it. People went out of their way to install GNU because it was better than the system standard, and because it opened doors to a whole world of free, source-available, software.
"You do realize that if you take an illumos derived OS like SmartOS and Linux, and run the same workload on the same cheap intel hardware, SmartOS is usually going to be faster"
Citation needed. Some workloads will be faster on SmartOS. Others will be faster on Linux. Given that almost everything is developed and deployed on Linux first and most frequently, I wouldn't be surprised to see Linux perform better in the majority of cases; but, I doubt it's more than a few percent difference in any common case. The cost of having or training staff to handle two operating systems (because you're going to have to have some Linux boxes, no matter what) probably outweighs buying an extra server or two.
"and if you are virtualizing, more efficient too, because it uses zones, right? Right?"
Citation needed, again. Zones are great. I like Zones a lot. But, Linux has containers; LXC is not virtualization, it is a container, just like Zones. Zones has some smarts for interacting with ZFS filesystems and that's cool and all, but a lot of the same capabilities exist with LVM and LXC.
I feel like you're arguing against a straw man in a lot of cases here.
Why do you believe LXC (or other namespace-based containers on Linux) are inherently inefficient, compared to Zones, which uses a very similar technique to implement?
And, it's not Linux' fault the systems you manage are stuck on ext4. There are other filesystems for Linux; XFS+LVM is great. A little more complex to manage than ZFS, but not by a horrifying amount. So, you have to read two manpages instead of one. Not a big deal. And, there's valid reasons the volume management and filesystem features are kept independent in the kernel (I dunno if you remember the discussions about ZFS inclusion in Linux; separate VM and FS was a decision made many years ago, based on a lot of discussion). Almost any filesystem on Linux has LVM, so, filesystems on Linux get snapshots and tons of other features practically for free. That's pretty neat.
Anyway, I think SmartOS is cool. I tinker with it every now and then, and have even considered doing something serious with it. But, I just don't find it compellingly superior to Linux. Certainly not enough to give up all of the benefits Linux provides that SmartOS does not (better package management, vast ecosystem and community, better userland even now, vastly better hardware support even on servers, etc.).
I predate the Solaris stuff, I'm not a fan. I liked SunOS, which was a bugfixed and enhanced BSD. When I wrote the sourceware paper I was talking about SunOS. (I lied, I overlapped with Solaris, but I try and block that out.)
All that said, Sun had an ethos of caring. In the early days of BitMover, Amy had some quote about the Sun man pages versus the Linux man pages; if someone can find that, it's awesome. We keep a Sun machine in our cluster just so we can go read sane man pages about sed or ed or awk or whatever. Linux man pages suck.
Sun got shoved into having to care about System V and it sucked. I hated it and left; so did a bunch of other people. But Sun carried on, and the ethos of caring carried on, and Bryan and crew were a big part of that. My _guess_ is that Solaris and its follow-ons are actually pleasant. I'll be pissed if I install it and it doesn't have all the GNU goodness. If that's the case then you are right, they don't get it.
What I expect to see is goodness plus careful curating. That's the Sun way.
I agree that GNU man pages suck. At least the atrocity that was "this man page is a stub, use info for the real docs" is gone now (I don't know if GNU stopped trying to force me to use info, or if distros fix it downstream). I have always hated info and the persistent nagging that GNU docs used to try to make people use it.
SmartOS is nice. I've always thought so and I have a lot of respect for the folks working on it. But, it isn't nice enough to overcome the negatives of being a tiny niche system. Linux has orders of magnitude more people working on it (and many of those people are also very smart). That's hard to beat.
So, why isn't any major hosting provider offering a multi-tenant Linux container hosting service directly on bare-metal Linux servers, whereas Joyent is providing Docker-compatible container hosting on SmartOS using LX-branded zones? Does anyone trust the security of Linux namespaces and cgroups in a multi-tenant environment? That's the one thing that SmartOS really seems to have going for it.
> Citation needed, again. Zones are great. I like Zones a lot. But, Linux has containers; LXC is not virtualization, it is a container, just like Zones. Zones has some smarts for interacting with ZFS filesystems and that's cool and all, but a lot of the same capabilities exist with LVM and LXC.
How about simple logic instead? I know zones work, because they have been in use in enterprises since 2006, and they are easy to work with and reason about. If I have the same body of software available on a system with the original lightweight virtualization as I do on Linux, and my goal is data integrity, self-healing, and operational stability, what is my incentive to run LXC, a conceptual knock-off of zones? To me, the choice is obvious: design the solution on top of the tried and tested, original substrate, rather than use a knock-off, especially since the acquisition cost of both is zero, and I already know from experience that investing in zones pays profit and dividends down the road, because I ran them in production environments before. I like profits, and the only thing I like better than engineering profits is engineering profits with dividends. That, and sleeping through my nights without being pulled into emergency conference calls about some idiotic priority 1 incident -- an incident which could have easily been avoided altogether if I had been running on SmartOS with ZFS and zones. Based on multiple true stories, and don't even get me started on the dismal redhat "support", where redhat support often ends up in a shootout with customers[1] rather than fixing customers' problems, or being honest and admitting they do not have a clue what is broken where, nor how to fix it.
> And, it's not Linux' fault the systems you manage are stuck on ext4. There are other filesystems for Linux; XFS+LVM is great.
Did you know that LVM is an incomplete knock-off of HP-UX's LVM, which in turn is a licensed fork of Veritas' VxVM? Again, why would I waste my precious time, and run up financial and engineering costs, running a knock-off, when I can just run SmartOS and have ZFS built in? The logic does not check out, and the financial aspects even less so.
On top of that, did you know that not all versions of the Linux kernel provide LVM write barrier support? And did you know that not all versions of the Linux kernel provide XFS write barrier support (XFS at least will report that, while LVM will do nothing and lie that the I/O made it to stable storage, when it might still be in transit)? And did you know that to have both XFS and LVM support write barriers, one needs a particular kernel version, which is not supported in all versions of RHEL? And did you know that not all versions of LVM correctly support mirroring, and that for versions which do not require a separate logging device, the log is in memory, so if the kernel crashes, one experiences data corruption? And did you know that XFS, as awesome as it is, does not provide data integrity checksums?
And we haven't even touched upon systemd, a knock-off of SMF; nor have we touched upon the lack of a fault management architecture, nor how insane bonding of interfaces is on Linux, nor how easy it is to create virtual switches, routers and aggregations (trunks in Cisco parlance) using Crossbow in Solaris/illumos/SmartOS... when I wrote that there is enough material for a book, I was not trying to be funny.
> I'm struck by how much this sounds like a Linux fan ranting back in 1995, when Windows and "real" UNIX was king. The underdog rants were rampant back then (I'm sure I penned a few of them myself).
It sounds like a Linux fan ranting circa 1995 because that is precisely what it is: first came the rants. Then a small, underdog company named "redhat" started providing regular builds and support, while Linux was easily accessible, and subversively smuggled into enterprises. Almost 20 years later, Linux is now everywhere.
Where once there was Linux, there is now SmartOS; where once there was redhat, there is now Joyent. Where once one had to download and install Linux to run it, one now has but to plug in a USB stick, or boot SmartOS off of the network, without installing anything. Recognize the patterns?
One thing is different: while Linux has not matured yet, as evidenced, for example, by GNU libc, or by GNU binutils, or the startup subsystem perturbations, SmartOS is based on a 37-year-old code base which matured and reached operational stability about 15 years ago. The engineering required for running the code base in the biggest enterprises and government organizations has been conditioned by large and very large customers having problems running massive, mission-critical infrastructure. That is why, for instance, there are extensive, comprehensive post-mortem analysis and debugging tools, and the mentality permeates the system design: for example, ctfconvert runs on every single binary and injects extra type and debugging information during the build; no performance penalty, but if you are running massive real-time trading, a database, or a web cloud, when the going gets tough, one appreciates having the tools and the telemetry. For Linux, that level of system introspection is utter science fiction, 20 years later, in enterprise environments, in spite of attempts to the contrary. (Try SystemTap or DTrace on Linux; try doing a post-mortem debug on the kernel, or landing in a crashed kernel, inspecting system state, patching it on the fly, and continuing execution; go ahead. I'll wait.) All that engineering that went into Solaris and then illumos and now SmartOS has passed the worst trials by fire at the biggest enterprises, and I should know, because I was there, at ground zero, and lived through it all.
All that hard, up-front engineering work that has been put into it since the early '90s is now paying off, with a big fat dividend on top of the profits: it is trivial to pull down a pre-made image with imgadm(1M), feed a .JSON file to vmadm(1M), and have a fully working yet completely isolated UNIX server running at the speed of bare metal, in 25 seconds or less. Also, let us not forget the almost 14,000 software packages available, most of which are the exact same software available on Linux[1]. If writing shell code and the command line isn't your cup of tea, there is always Joyent's free, open source SmartDC web application for running the entire cloud from a GUI.
Therefore, my hope is that it will take SmartOS less than the 18 years it took Linux to become king, especially since the cloud is the new reality, and SmartOS has been designed from the ground up to power massive cloud infrastructure.
> I think the assumption you're making is that people choose Linux out of ignorance
That is not an assumption, but rather my very painful and frustrating experience for the last 20 years. Most of those would-be system administrators came from Windows and lack the mentoring and UNIX insights.
> (and, I think the ignorance goes both ways; folks using Solaris have been so accustomed to Zones, ZFS, and dtrace being the unique characteristic of Solaris for so long that they aren't aware of Linux' progress in all of those areas).
I actually did lots and lots of system engineering on Linux (RHEL and CentOS, to be precise), and I am acutely aware of its limitations compared to what Solaris-based operating systems like SmartOS can do: neither the latest and greatest CentOS nor RHEL can even guarantee me basic data integrity, let alone backwards compatibility. Were we in the '80s right now, I would be understanding, but if after 20 years a massive, massive army of would-be developers is incapable of getting basic things like data integrity, scheduling, or the startup/shutdown (init) subsystem working correctly, in the 21st century, I have zero understanding and zero mercy. After all, my time as a programmer and as an engineer is valuable, and there is also the financial cost involved, that not being negligible either.
> Linux has a reasonable container story now; the fact that you don't like how some people are using it (I think Docker is a mess, and I assume you agree)
Yes, I agree. The way I see it, and I've deployed very large datacenters where the focus was operational stability and data correctness, Docker is a web 2.0 developer's attempt to solve those problems, and they are flailing. Dumping files into pre-made images does not compensate for lack of experience in lifecycle management, or lack of experience in process design. No technology can compensate for the lack of a good process, and a good process requires experience working in very large datacenters where operational stability and data integrity are primary goals. Working in the financial industry, where tons of money are at stake by the second, can be incredibly instructive and insightful when it comes to designing operationally correct, data-protecting, highly available and secure cloud-based applications, but the other way around does not hold.
> Are you really complaining about being able to gzip and tar something in one command? Is that a thing that's actually happening in this conversation?
Let's talk system engineering:
gzip -dc archive.tar.gz | tar xf -
will work everywhere; I do not have to think whether I am on GNU/Linux, or HP-UX, or Solaris, or SmartOS, and if I have the above non-GNU invocation in my code, I can guarantee you, in writing, that it will work everywhere without modification. If on the other hand I use:
tar xzf archive.tar.gz
I cannot guarantee that it will work on every UNIX-like system, and I know from experience I would have to fix the code to use the first method. Therefore, only one of these methods is correct and portable, and the other one is a really bad idea. If I understand this, then why do I need GNU? I do not need it, nor do I want it. Except for a few very specific cases like GNU Make, GNU tools are actually a liability. This is on GNU/Linux, to wit:
% gcc -g hello.c -o hello
% gdb hello hello.c
GNU gdb (GDB) Red Hat Enterprise Linux (7.0.1-45.el5)
...
...
...
Dwarf Error: wrong version in compilation unit header (is 4, should be 2) [in module /home/user/hello]
"/home/user/hello.c" is not a core dump: File format not recognized
(gdb)
Now, why did that happen? Because the debugger as delivered by the OS doesn't know what to do with what the OS's own compiler produced. Something like that is unimaginable on illumos, and by extension, SmartOS. illumos engineers would rather drop dead than cause something like this to happen.
On top of that, on HP-UX and Solaris I have POSIX tools, so for example POSIX extended regular expressions are guaranteed to work, and the behavior of POSIX-compliant tools is well documented, well understood, and guaranteed. When one is engineering a system, especially a large distributed system which must provide data integrity and operational stability, such concerns become paramount, not to mention that the non-GNU approach is cheaper because no code must be fixed afterwards.
So it seems that my innocent suggestion that Linux isn't perfect may have spawned a tiny... massive holy war. Great. You know what? SmartOS is fantastic. It's great. But Linux isn't terrible. They both have their flaws. Like SmartOS not having a large binary footprint, and not having the excellent package repositories. And the fact that every partisan of one hates /proc on the other. And the fact that KVM is from Linux. And that the Docker image is actually a pretty good idea. Or the fact that SMF and launchd were some of the inspirations for systemd. Okay, now I'm just tossing fuel on the fire, by the bucketload.
Personally, I run Linux on my desktop. Insane, I know, but I can't afford a Mac, and the OS X POSIX environment just keeps getting worse. Jeez. At this rate, Cygwin and the forthcoming POSIX environment from MS will be better. But anyway, I'm not switching my system to BSD or Illumos anytime soon, despite thinking that they are Pretty Cool (TM). Why? The binary footprint. Mostly Steam. Okay. Pretty much just Steam. Insane, I know, but I'm not running (just) a server.
So in summary, all software sucks, and some may suck more than others, but I'm not gonna care until Illumos and BSD get some love from NVIDIA.
And why do you care what a crazy person thinks? Oh, you don't. By all means continue the holy war. Grab some performance statistics, and hook in the BSDs. I'll be over heating up the popcorn...
Actually, it's great that it's from Linux, because one major point of embarrassment for Linux is that KVM runs faster and better on SmartOS than it does on Linux, because Joyent engineers systematically used DTrace during the porting effort:
Once you're able to afford one, you won't care about the desktop ever again, because your desktop will JustWork(SM).
> but I'm not gonna care until Illumos and BSD get some love from NVIDIA.
NVIDIA provides regular driver updates for both Solaris and BSD. I bought an NVidia GX980TX, downloaded the latest SVR4 package for my Solaris 10 desktop, and one pkgrm && pkgadd later, I was running accelerated 3D graphics on the then latest-and-greatest accelerator NVIDIA had for sale.
> By all means continue the holy war. Grab some performance statistics, and hook in the BSDs.
They take our code, and we take theirs; they help us with our bugs, and we help them with theirs. BSD's are actually great. BSD's have smart, capable, and competent engineers who care about engineering correct systems and writing high quality code. We love our BSD brethren.
...Annnd the BSDs are involved. Now all we need is some actual fans, and the Linux hackers should start to retaliate...
But good to know that the Solaris and BSD NVIDIA drivers work. If they work with lx-branding, I might actually consider running the thing.
>Once you're able to afford one, you won't care about the desktop ever again, because your desktop will JustWork(SM).
Yeah, no. It seems like OS X is making increasingly radical changes that make it increasingly hard for applications expecting standard POSIX to run. By the time I get the cash, nothing will work right.
"I'm a bit concerned that it's going to be Android, however, that wins the mobile world since Android is nowhere near the ideal OS from an Open Source and ethical perspective; but, I guess they got the bits right that Larry was suggesting needed to be right."
They've already won; Apple isn't coming back (did they "go thermonuclear" in the end or did that nonsense die with Mr Magical Thinking?).
Don't confuse Android with Google; you can grab the source and do what you want with it, like CyanogenMod has, or like millions of hobbyists are doing themselves. It's all available under bog-standard open source licenses -- no need to worry about ethics.
The point about small communities is great. I've never seen it spelled out like that, but it captures what I was thinking perfectly. People get caught up in popularity as the only meaningful metric for success, but it really isn't.
> And you can say to BK and Larry now that it's "too late", just as Larry told Sun in 2005,
In the case of software, it is never too late: as you once put it so eloquently, software does not suddenly stop working and does not have an expiration date.
If this software works and works well, then Paul Graham's revolutionary idea applies: when you choose technology, you have to ignore what other people are doing, and consider only what will work best. (Common sense, really, but apparently not to the rest of our industry.)
If this software works best, and does exactly what I want and need it to do, I have enough experience to know not to care that everyone else runs something like git just because it is trendy right now. (A lesson appreciated by those who run SmartOS because it is the best available technology for virtualization, cloud, and performance, instead of running Linux and Docker.)
Sun is also no more... and it's not at all clear they would've survived had Solaris been open sourced a decade earlier.
Yes, you're absolutely right that there are a ton of startups built on OpenSolaris (who have proprietary code they haven't given back and don't intend to ever give back to the community), and there is SmartOS/OmniOS/illumos as well. But none of those projects would have in any way contributed to the health of Sun Microsystems, nor provided the funding to get Solaris to where it is today. ZFS may have never seen the light of day if Solaris were open sourced in 1995.
> ZFS may have never seen the light of day if Solaris were open sourced in 1995.
It depends on how that would have affected Jeff Bonwick. If it had kept him from deciding that Sun ought to develop a new filesystem, from promising Matthew Ahrens a job out of college writing one, and from working together with Matt on it, ZFS would never have existed.
Some history. I was at Sun, and Bob Hagmann was teaching at Stanford and got me to be a TA there. He retired and Stanford asked me if I would teach CS240B, so I did. Jeff Bonwick was a student and I recognized his ability and recruited him to Sun. He said "I have no experience programming in C" and I said "You are smart. I can teach you C, I can't teach you smart".
I also told him that he'd go way farther at Sun than I did, and I was right; I think he made DE, I didn't. He played the game better. Smart guy. Him, Bryan, Bill Moore -- those guys were the new Sun in my mind.
I actually posted before I saw your comment. I guess I was right. So are you happy to finally have an open source bring-over-modify-merge VCS whose command set makes sense?
Except that we've been around for 18 years and made payroll without fail that entire time. Supported a team of 10-15 people every year. That's something many, many companies in the valley, including many that open sourced everything, have not done as well.
You may have done more by open sourcing whatever it is that you have done; if so, congrats.
My remark was intended to point out how much farther BitKeeper could have gone had it been open source from the start, rather than to belittle what BitKeeper accomplished.
At work, many of my newer colleagues have backgrounds in closed source software development. We are developing software that has no exact analog among existing software, and we hope that it will have a big impact. If it becomes as important as we think it could be, then BitKeeper vs. git is a fantastic example of why our work should be open source from the start.
Except you're not Linus, and you probably don't have a cult-like following for any work you produce.
Linus brought DVCS to the masses, but to pretend there wasn't more at play than simply open sourcing a project and hoping it all works out is complete rubbish. People have families to feed. Closed source is not inherently evil.
It takes a unique situation to produce something like git, whose impact is beyond the sum of the project itself.
I wrote enough patches to ZFSOnLinux that I have the distinction of being number 2 by commit count. It was a hobby for me at first, and quite frankly, I never expected it to make a difference for more than a few hundred people. Now ZoL is on millions of systems through Ubuntu, in part because of my work, and there are far more places using it than I can count.
Open sourcing those patches rather than keeping them to myself made a difference that was greater than anything I imagined. Similarly, the impact of making ZFS open source far surpassed the expectations of the original team at Sun. I think that making any worthwhile piece of software open source will lead to adoption beyond the scope of what its authors envisioned. All it takes is people looking for something better than what previously existed.
As for closed source being inherently evil (your words, not mine), how do you fix bugs in closed source software that a vendor is not willing to fix? How do you catch things like a hard coded password that gives root privileges? How do you know that the software is really as good as they say? It is far easier with open source software than with closed source software. Closed source software is a bad idea.
I still think you need to look at it from the point of view of an employer. Like me. I'm weird -- I really care about my people; our company is more like a cooperative than anything else.
I grew this to a place where I could pay salaries. Doing so was super super hard. I had a lot of scary nights where I thought I couldn't make payroll. Just building up to a place where the next payroll was OK was a big deal for us.
So open source it? When you finally got to the point where you can pay people without worrying all night?
I get that you see that open source is the answer, and it is for some stuff. For me, jumping on that years ago was asking too much.
I have no idea why people are downvoting this. In an alternate universe, we would all be using BitKeeper. The reason we are not is mainly because Larry McVoy ceded the market to git and mercurial because he was afraid of disrupting his existing business. git and mercurial would never have existed had he practiced at BitMover what he preached at Sun.
[1] Seriously, read this: http://www.landley.net/history/mirror/unix/srcos.html
[2] The citation here is, in that greatest of all academic euphemisms, "Personal communication."
[3] "Sourceware" because [1] predates the term "open source"