
25+ years ago, our company used Clearcase for version control, and its clearmake had distributed build capability. Clearcase used a multi-version file system (MVFS) and had build auditing, so clearmake knew exactly which versions of source files were used in each build step. It could distribute build requests to any machine that could render the same "view" of the FS.

Even without distributed builds, clearmake could re-use .o files built by other people if the input dependencies were identical. On a large multi-person project this meant that you would typically only need to build a very small percentage of the code base and the rest would be "winked in".

If you wanted to force a full build, you could farm it out across a dozen machines and get circa 10x speedup.
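The "wink-in" behaviour is basically a shared, content-keyed cache of derived objects. A minimal sketch of the idea in Python (not how clearmake actually worked internally; it keyed on the exact element versions recorded by MVFS build auditing, and the paths and names here are made up):

    import hashlib, os, shutil, subprocess

    CACHE_DIR = "/net/shared/derived-objects"   # hypothetical shared cache location

    def build_key(sources, compile_cmd):
        """Key a build step by the exact content of its inputs plus the command.
        Clearmake got the input list from build auditing; here the caller
        just tells us which files the step depends on."""
        h = hashlib.sha256(compile_cmd.encode())
        for path in sorted(sources):
            with open(path, "rb") as f:
                h.update(f.read())
        return h.hexdigest()

    def compile_with_winkin(sources, compile_cmd, output):
        key = build_key(sources, compile_cmd)
        cached = os.path.join(CACHE_DIR, key)
        if os.path.exists(cached):
            shutil.copy(cached, output)          # "wink in" someone else's .o
            return "winked-in"
        subprocess.run(compile_cmd, shell=True, check=True)
        os.makedirs(CACHE_DIR, exist_ok=True)
        shutil.copy(output, cached)              # publish it for the next person
        return "built"

    # e.g. compile_with_winkin(["foo.c", "foo.h"], "cc -c foo.c -o foo.o", "foo.o")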

Clearcase lost the edge with the arrival of cheaper disks and multi-core CPUs. I'd say it set the gold standard for version control and dependency tracking, and nothing today comes close to it.




Clearcase was utter crap. 6-hour code checkouts and 2 weeks to set up a new developer is a freaking joke. I literally did a conversion from Clearcase to git and reduced the setup time to 15 minutes, and that was for a code base older than Clearcase itself.

Not to mention the awful design for handling merge conflicts (punting to a human if more than one person touched a file, seriously???)


If you're talking about Clearcase snapshot views, I agree they were garbage. And IIRC merging in a Clearcase snapshot view was also a hot mess. Snapshot views were a bolt-on that we were forced to use in later years. TBH the migration to other VCSs was already underway at our company by then, but snapshot views were the last straw for us.

On the other hand Clearcase dynamic views were pretty awesome. You just needed to edit your view config spec and the view FS would render the right file versions immediately. No checkout required. There was even view extended naming to let you open multiple versions of the same file directly from any editor.
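For anyone who never saw one, a dynamic view's config spec was just a short list of selection rules, roughly like this (from memory, and the branch name is made up, so treat the exact syntax as approximate):

    element * CHECKEDOUT
    element * .../my_dev/LATEST
    element * /main/LATEST -mkbranch my_dev

And the extended naming was the @@ syntax, e.g. opening foo.c@@/main/27 to look at an old revision without checking anything out (again from memory, so the details may be slightly off).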

As for merging, Clearcase set the gold standard for three-way automatic merges and conflict resolution that wasn't matched until git came along. It's still superior in one important way: Clearcase had versioned directories as well as files, so you could commit a file move, and someone else's changes to the same "element" at the original file location would be merged correctly after the move. No heuristics, just versioned elements with versioned directory-tree entries. Backporting fixes was a breeze, even with a significant refactor in between.
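To make that concrete, here's a tiny toy model (my own sketch, not Clearcase code) of why merging by stable element identity handles a rename on one side plus an edit on the other cleanly:

    # Toy model: a directory snapshot is a mapping of path -> element id, and
    # file contents are versioned per element id. The three-way rule is applied
    # to the element's location and to its content independently.

    base_dirs   = {"src/util.c": "elem42"}            # common ancestor
    ours_dirs   = {"src/helpers/util.c": "elem42"}    # we moved the file
    theirs_dirs = {"src/util.c": "elem42"}            # they left it in place

    base_text   = {"elem42": "int add(int a, int b);"}
    ours_text   = {"elem42": "int add(int a, int b);"}                    # unchanged by us
    theirs_text = {"elem42": "int add(int a, int b) { return a + b; }"}   # they edited it

    def merge_value(base, ours, theirs):
        """Classic three-way rule for a single value."""
        if ours == theirs:
            return ours
        if ours == base:
            return theirs   # only they changed it
        if theirs == base:
            return ours     # only we changed it
        raise ValueError("real conflict, punt to a human")

    def path_of(dirs):
        # Where does elem42 live in this directory snapshot?
        return next(p for p, e in dirs.items() if e == "elem42")

    merged_path = merge_value(path_of(base_dirs), path_of(ours_dirs), path_of(theirs_dirs))
    merged_text = merge_value(base_text["elem42"], ours_text["elem42"], theirs_text["elem42"])

    print(merged_path)  # src/helpers/util.c  (our move wins)
    print(merged_text)  # their edit wins, and it lands at the new path

A purely path-based merge sees a delete at one path and an add at another and has to rely on rename-detection heuristics to connect the two; with versioned elements there is nothing to guess.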

Git more or less wins hands down today because it is fast, distributed, and works on changesets. But something with git's storage model, a multi-version file system, and Clearcase's directory versioning would be an awesome VCS.


We got a bit closer with gitfs, but nobody has really merged all the parts into an "it just works" setup.

https://wiki.archlinux.org/title/Gitfs
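If memory serves, mounting is a one-liner along these lines (the URL, mount point, and options here are placeholders; check the gitfs docs for the real option names):

    # FUSE-mount a remote repo; gitfs commits and pushes changes made under the mount
    gitfs https://github.com/example/project.git /mnt/project -o repo_path=/var/lib/gitfs/project,branch=master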


Clearcase's (clearmake's) cloud building capability sounds nice, but I have to pile on to this:

> I'd say it set the gold standard for version control and dependency tracking, and nothing today comes close to it.

In my 2005-2011 experience with Clearcase, it was slow and required dedicated developers just to manage the versions, and I'm so happy its version control model has been an evolutionary dead-end in the greater developer community. The MVFS is an attractive trap. Giving people the ability to trivially access older versions means you've just outsourced the job of keeping everything working together to some poor SOB. It was very much an "enough rope to hang yourself" kind of design.

As I said, it was slow, because MVFS. The recommended solution from Clearcase/IBM was to break up the source tree into different volumes (or whatever Clearcase's "repo"-analogue was named), which just increased the pain of keeping things in sync.

Additionally, it was an "ask-permission" design for modifying files, where you had to check out a file before being able to modify it, and you couldn't check it out if someone else already had, which added a ton of painful overhead.
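For those who never used it, the day-to-day loop looked something like this (commands from memory, so the flags may be slightly off; the reserved checkout is what created the contention):

    # Reserve the element so nobody else can check it out on this branch
    cleartool checkout -reserved foo.c
    # ... edit foo.c ...
    cleartool checkin -c "fix null deref" foo.c

    # If someone already held the reserved checkout, your options were an
    # unreserved checkout (and a merge later) or walking over to their desk:
    cleartool checkout -unreserved foo.c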

I'll grant that my company/group didn't know what they were doing, but following IBM/Clearcase's guidance was not a good idea.

These days, I use Clearcase as a warning to the younger generation.


Around that time I went down the rabbit hole of "what if I wanted to write a filesystem for Windows NT?" It turns out that, aside from network redirectors, Microsoft didn't think this was something people ought to do very often. The people at Atria (which got bought by Rational) who were working on ClearCase were pioneers, and they bought a source license from Microsoft to be able to do it. The result was that there was a lot of information from them on a mailing list about how to implement a filesystem. However, it was mostly incomprehensible to me, because kernel programming on NT is quite different from what one would expect coming from other backgrounds, including Windows 95, OS/2, and Linux.


Brings back memories. Fresh out of college, I was given the additional job of being the Clearcase and Unix admin for my team. Not that I had any special skills, but others didn't know the few Unix commands (System V) that I did. But Clearcase was such a good product and was used in the telecom companies that I worked for (Motorola, Lucent, etc.). It was owned by Rational at that time and, if memory serves me right, was later acquired by IBM.

To this day, I find Clearcase's way of doing things the better way to do version control. Git, in comparison, feels kind of alien, and I could never really get the same level of comfort with it.


That was also the standard development workflow at Nokia Networks, back when NetAct was being developed for HP-UX.

Nowadays most of it has been ported to Java, assuming the efforts that were ongoing when I left finally managed to migrate everything away from C++, Perl, and CORBA.



