Hacker News

Yeah, this irony is not lost on me. But in both cases, the companies acted in self-interest. Neither had the guts to walk away from their existing revenue stream. It's hard to say what would have happened.

It's been an interesting ride and if nothing else, BK was the inspiration for Git and Hg, that's a contribution to the field. And maybe, just maybe, people will look at the SCCS weave and realize that Tichy pulled the wool over our eyes. SCCS is profoundly better.




Thank you for contributing to the development and evangelizing of DVCS, directly (BK) and indirectly (the ideas and inspiration for git, hg).


It's probably fair to say the DVCS model accelerated the growth of the entire software industry.

Was BitKeeper the first version control system to "think distributed" ?


Sun's TeamWare [1] was probably the first real distributed version control system. It worked on top of SCCS. Larry McVoy, BitKeeper's creator, was involved in its development. I believe BitKeeper also uses parts of SCCS internally.

[1] https://en.wikipedia.org/wiki/Sun_WorkShop_TeamWare


We did a clean room reimplementation of SCCS and added a pile of extensions.


So when Tridge "reverse engineered" BK, he basically reimplemented SCCS?

https://lwn.net/Articles/132938/


Nope, he did a clone and a pull. That amounted to rsync or tar; he had no awareness of the file format.


NSE begat NSE-lite begat TeamWare begat BK begat git. Or so says Cantrill.


That is my understanding, yes -- but the NSE and (McVoy-authored) NSElite chapters of the saga pre-date me at Sun. Before my "Fork Yeah!" talk[1][2], from which this is drawn, I confirmed this the best I could, but it was all based only on recollections of the engineers who were there (including Larry). I haven't found anything written down about (for example) NSElite, though I would love to get Larry on the record to formalize that important history...

[1] https://www.usenix.org/legacy/events/lisa11/tech/slides/cant...

[2] https://www.youtube.com/watch?v=-zRN7XLCRhc


The NSE was Sun's attempt at a grand SCM system, and it was miserably slow (a single-threaded, FUSE-like COW file system implemented in user space). I did performance work back then, sort of a jack of all trades (filesystem, VM system, networking, you name it), so Sun asked me to look at it. I did and recoiled in horror; it wasn't well thought out for performance.

My buddies in the kernel group were actually starting to quit because they were forced to use the NSE and it made them dramatically less productive. Nerds hate being slowed down.

Once the whole SCM thing crossed my radar screen I was hooked. Someone had a design for how you could have two SCCS files with a common ancestry and they could be put back together. I wrote something called smoosh that basically zippered them together.

Nobody cared. So I looked harder at the NSE and realized it was SCCS under the covers. I built a pile of perl that gave birth to the clone/pull/push model (though I bundled all of that into one command called resync). It wasn't truly distributed in that the "protocol" was NFS, I just didn't do that part, but the model was the git model you are used to now minus changesets.

I made all that work with the NSE, you could bridge in and out and one by one the kernel guys gave up on NSE and moved to nselite. This was during the Solaris 5.0 bringup.

I still have the readme here: http://mcvoy.com/lm/nselite/README and here are some stats from the 2000th resync inside of Sun: http://mcvoy.com/lm/nselite/2000.txt

I was forced to stop developing nselite by the VP of the tools group: by this time Sun knew that nselite had won and NSE had lost, so they ramped up an 8-person team to rewrite my perl in C++ (Evan later wrote a paper basically saying that was an awful idea). They took smoosh.c and never modified it, just stripped my history off (yeah, some bad blood).

Their stuff wasn't ready so I kept working, but that made them look bad: one guy with some perl scripts outpacing 8 people with a supposedly better language. So their VP came over and said "Larry, this went all the way up to Scooter, if you do one more release you're fired", and that set back SCM development almost a decade. That was ~1991, and I didn't start BitKeeper until 1998. There is no doubt in my mind that if they had left me alone Sun would have had the first DVCS.

Fun times, I went off and did clusters in the hardware part of the company.


Wow, jackpot -- thank you! That 2000th resync is a who's who of Sun's old guard; many great technologies have been invented and many terrific companies built by the folks on that list! I would love to see the nselite man pages that the README refers to (i.e., resync(1) and resolve(1)); do you happen to still have those?

Also, even if privately... you need to name that VP. ;)


There used to be a paper on smoosh here: http://www.bitmover.com/lm/papers/smoosh.ps but it's gone now. Do you mind putting it back? I'd like to read it again.



You weren't the only one to do SCCS over NFS. The real-time computer division of Harris did it too. That version control system was already considered strange and old by 2004 when I encountered it.


1 man using perl can outpace 8 on C++. Who would've thought? /sarcasm. But seriously, I think this is one of the classic instances of what is now quite common knowledge about dynamic scripting languages: They let you get things done MUCH faster. I think the tools group learned the wrong lesson from this, but OTOH, who would want to start developing all of their new software in perl? And given that python hadn't caught on yet, there really wasn't much else out there in the field.


> there really wasn't much else out there in the field.

What about shell scripts?


Shell scripts are even more miserable to write than perl, and they're missing a lot of features you want for most applications.


Or LISP.


Firstly, the mention of LISP would have probably sent most of Sun screaming and running for the hills at the time. Secondly, LISP has never been very good at OS integration, one of the most important things for many software projects.


>> Evan later wrote a paper basically saying that was an awful idea

Is this paper available online? Thanks.


https://www.usenix.org/legacy/publications/library/proceedin...

And for the record, Evan was somewhat justified in not saying I had anything to do with Teamware since I made his team look like idiots, ran circles around them. On the other hand, taking smoosh.c and removing my name from the history was dishonest and a douche move. Especially since not one person on that team was capable of rewriting it.

The fact remains that Teamware is just a productized version of NSElite which was written entirely by me.

If I sound grumpy, I am. Politics shouldn't mess with history but they always do.


Good to know I (probably) got it right.


BK and Monotone begat Git


There was a paper published on a DVCS using UUCP (!) in 1980: "A distributed version control system for wide area networks", B. O'Donovan, http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=tru...


The date of publication on Xplore is September 1990 though?


> Thank you for contributing to the development and evangelizing of DVCS, directly (BK) and indirectly (the ideas and inspiration for git, hg).

I concur.


> the companies acted in self interest

Sun could, at least, make a profit building workstations and servers and licensing chips.

It's actually very sad they don't build those SPARC desktops anymore.


I really doubt it. There's not enough demand to make them competitive in price-performance. The server chips have stayed badass but are still a niche market. There's even open-source SPARC hardware with reference boards for sale from Gaisler. That didn't take off.

I liked the desktops but there's no money in them. The market always rejects them. So does FOSS, despite SPARC being the only open ISA with mainstream, high-performance implementations.


> There's not enough demand to make them competitive in price-performance.

There does not need to be demand: Steve Jobs (in)famously said that where there was no market, "create one".

I for one would absolutely love to be able to buy an illumos-powered A4-sized tablet which ran a SPARC V9 instruction set, plugged into a docking station, and worked with a wireless keyboard and mouse to be used as a workstation when I'm not walking around with my UNIX server in hand. Very much akin to Apple Computer's iPad Pro (or whatever they call it, I don't remember, nor is that really relevant).

But the most important point was, and still is, and always will be: it has to cost as much as the competition, or less. Sun Microsystems would just not accept that, no matter how much I tried to explain and reason with people there: "talk to the pricing committee". What does that even mean?!? Was the pricing committee composed of mute, deaf and blind people who were not capable of seeing that PC-buckets were eating Sun's lunch, or what?


"There does not need to be demand: Steve Jobs (in)famously said that where there was no market, "create one"."

What people forget is that Steve Jobs was a repeated failure at doing that, got fired, did soul-searching, succeeded with NeXT, got acquired, and then started doing what you describe. Even he failed more than he succeeded at that stuff. A startup trying to one-off create a market just for a non-competitive chip is going to face the dreaded 90+% failure rate.

"But the most important point was, and still is, and always will be: it has to cost as much as the competition, or less."

That's why the high-security stuff never makes it. It takes at least 30% premium on average per component. I totally believe your words fell on deaf ears at Sun. I'd have bought SunBlades myself if I could afford them. I could afford nice PC's. So, I bought nice PC's. Amazing that echo chamber was so loud in there that they couldn't make that connection.

"I for one would absolutely love to be able to buy an illumos-powered A4-sized tablet which ran a SPARC V9 instruction set"

That's actually feasible given the one I promote is 4-core, 1+GHz embedded chip that should be low power on decent process node.

http://www.gaisler.com/index.php/products/processors/leon4?t...

The main issue is the ecosystem and components like browsers with JIT's that must be ported to SPARC. One company managed to port Android to MIPS but that was a lot of work. Such things could probably be done for SPARC as well. The trick is implementing the ASIC, implementing the product, porting critical software, and then charging enough to recover that but not more than competition whose work is already done for them. Tricky, tricky.

Raptor's Talos Workstation, if people buy it, will provide one model that this might happen. Could get ASIC's on 45-65nm really quick, use SMP given per-chip cost is $10-30, port Solaris/Linux w/ containers, put in a shitload of RAM, and sell it for $3,000-6,000 for VM-based use and development. It would still take thousands of units to recover cost. Might need government sales.


The problem is that the number of people who are into illumos and want a portable Unix server is insignificant. All products come with fixed overheads (e.g. the cost of tooling to start production), which have to be divided over the likely customer base. Small customer base == each customer pays a bigger share of the fixed overheads.

Basically, you want something to suit you and a small number of other people, but you won't pay for the cost of having something that "tailor made". You will only pay for the high-volume, lower-cost, more general product. So... that's all you get.


The problems were Sun's failure to recognize that cheap IBM PC clones would disrupt them like they disrupted mainframes, and Sun not trying hard enough to overcome Wintel's network effect. Sun needed to die-shrink old designs to get something they could fabricate at low cost and compete on price. Such a thing would have cannibalized Sun workstation sales, which might be why they never did it.


Could be. It's gone now, though.


You again! (:-) You know that the VHDL code for UltraSPARC T1 and T2 has been open sourced? If I had enough knowledge about synthesizing code inside of an FPGA, I would be building my own SPARC-based servers like there is no tomorrow!

As long as the code for those processors remains free, and a license to implement a SPARC ISA compliant processor only costs $50, the SPARC will never really, truly be gone, especially not for those people capable of synthesizing their own FPGA's, or even building their own hardware.

Some people did exactly that, a while back. Too bad they didn't turn their designs into ready-to-buy servers.


" You know that the VHDL code for UltraSPARC T1 and T2 has been open sourced?"

That was exciting. It could do well even on a 200MHz FPGA given threading performance. Then, use eASIC's Nextreme's to convert it to structured ASIC for better speed, power-usage, and security from dynamic attacks on FPGA. That it's Oracle and they're sue-happy concerns me. I'd read that "GPL" license very carefully just in case they tweaked it. If it's safe, then drop one of those badboys (yes, the T2) on the best node we can afford with key I/O. Can use microcontrollers on the board for the rest as they're dirt cheap. Same model as Raptor's as I explained in another comment.

Alternatively, use Gaisler as Leon3 is GPL and designed for customization. Simple, too. Leon4 is probably inexpensive compared to ARM, etc.

"SPARC ISA compliant processor only costs $50, the SPARC will never really, truly be gone, especially not for those people capable of synthesizing their own FPGA's,"

Yep.

"Too bad they didn't turn their designs into ready-to-buy servers."

Not quite a server but available and illustrates your point:

http://www.gaisler.com/index.php/products/systems/gr-rasta?t...

Btw, I found this accidentally while looking for a production version of OpenSPARC:

http://palms.ee.princeton.edu/node/381


FPGAs are not magic. A SPARC implemented on an FPGA will never be competitive with consumer-level x86s.


No, they are magic: arbitrary hardware designs run without the cost of chip fabrication. Two non-profit FPGA's, one for performance at 28nm & one for embedded at 28SLPnm, would totally address the custom hardware and subversion problem given we could just keep checking on that one. The PPC cores, and soon Intel Xeons, already show what a good CPU plus FPGA acceleration w/ local memory can do for applications.

Yeah, buddy, they're like magic hardware. Even if they aren't ASIC-competitive for the best ASIC's. Still magic with a market share and diverse applications that shows it. :)


I thought that was clear? Apparently not...

FPGA's are cheap and good enough for prototyping; once one has working VHDL / Verilog code, it's tapeout time.


Security-wise, an FPGA is superior to consumer-level x86 processors. How do you backdoor an FPGA?


In more ways than you'd know. They're already pre-backdoored like almost all other chips for debugging purposes in what's called Design for Test or scan chains or scan probes or something. Much hardware hacking involves getting to those suckers to see what chip is doing.

Now, for remote attacks, you can embed RF circuitry in them that listens to any of that. You can embed circuits that receive incoming command, then dump its SRAM contents. You might modify the I/O circuitry to recognize a trapdoor command that runs incoming data as privileged instructions. You can put a microcontroller in there connected to PCI to do the same for host PC attacks. I know, that would be first option but I was having too much fun with RTL and transistor level. :)


> There's not enough demand to make them competitive in price-performance

High-end chips only need to compete with other high-end chips. And low-end SPARC will not take off now that x86 has taken over.


You make that sound easy. The POWER and SPARC T/M chips are amazing. Yet Intel Xeon still dominates, to the point that Intel can charge less and invest more. That's with one hell of a head start from Sun and IBM. The others... Alpha, MIPS, and PA-RISC... folded in server markets, with Itanium soon to follow.

You can't just compete directly in that market: you have to convince them yours is worth buying for less performance at higher price and watts. Itanium tried with reliability & security advantages. Failed. Fortunately, Dover is about to try with RISC-V combined with SAFE architecture (crash-safe.org) for embedded stuff. We'll see what happens there.


They do, sort of. Intel were using SPARC cores in at least one of their "Management Engine" devices in chipsets recently anyway. ;)



Yeah, that. I think he got a PhD for RCS and what he should have gotten is shown the door. RCS sucks, SCCS is far, far better.

Just as an example, RCS could have been faster than SCCS if they had stored at the top of the file the offset to where the tip revision starts. You read the first block, then seek past all the stuff you don't need, start reading where the tip is.

But RCS doesn't do that; it reads all the data until it gets to the tip. Which means it is reading as much data as SCCS but only has sorta-good performance for the tip. SCCS is more compact and is as fast or faster for any rev.

And BK blows them both away; we lz4-compress everything, which means we do less I/O.

RCS sucked but had good marketing. We're here to say that SCCS was a better design.


> in both cases, the companies acted in self interest. Neither had the guts to walk away from their existing revenue stream.

Why does bitmover have the guts now?


he explained the move in another comment https://news.ycombinator.com/item?id=11668492


I thought SCCS had the same problems as RCS. What did it do differently?


RCS is patch-based: the most recent version is kept in clear text, the previous version is stored as a reverse patch, and so on back to the first version. So getting the most recent version could be fast (it isn't), but the farther back you go in history the more time it takes. And branches are even worse: you have to patch backwards to the branch point and then forwards to the tip of the branch.
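In code, the reverse-delta idea looks roughly like this. This is a toy sketch with invented data, not RCS's real file format; it just shows why checkout cost grows with distance from the tip:

```python
# Toy sketch of RCS-style reverse deltas (invented data, NOT the real
# RCS file format).  The tip is stored in clear text; each step back in
# history applies one more reverse patch, so older versions cost more.

TIP = ["line one", "line two v3"]          # version 3, stored whole

# REVERSE_PATCHES[i] turns version (3 - i) into version (3 - i - 1).
REVERSE_PATCHES = [
    lambda lines: [lines[0], "line two v2"],   # 3 -> 2
    lambda lines: [lines[0]],                  # 2 -> 1 (second line didn't exist yet)
]

def checkout(rev, latest=3):
    """Walk backwards from the tip, applying one reverse patch per step."""
    lines = list(TIP)
    for patch in REVERSE_PATCHES[: latest - rev]:
        lines = patch(lines)
    return lines

print(checkout(3))  # tip: zero patches applied
print(checkout(1))  # oldest version: every patch applied
```

Branches compound this, as described above: you would patch backwards to the branch point and then forwards along the branch.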

SCCS is a "weave". The time to get the tip is the same as the time to get the first version or any version. The file format looks like

  ^AI 1
  this is the first line in the first version.
  ^AE 1
That's "insert in version 1", data, "end of insert for version 1".

Now let's say you added another line in version 2:

  ^AI 1
  this is the first line in the first version.
  ^AE 1
  ^AI 2
  this is the line that was added in the second version
  ^AE 2
So how do you get a particular version? You build up the set of versions that are in that version. In version 1, that's just "1", in version 2, that's "1, 2". So if you wanted to get version 1 you sweep through the file and print anything that's in your set. So you print the first line, get to the ^AI 2 and look to see if that's in your set, it isn't, so you skip until you get to the ^AE 2.

So any version is the same time. And that time is fast, the largest file in our source base is slib.c, 18K lines, checks out in 20 milliseconds.
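That single-pass sweep can be sketched in a few lines of Python. This is an illustration, not the real SCCS implementation: tuples stand in for the ^A control lines, and the ancestor set is simplified to a linear history:

```python
# Sketch of the SCCS weave sweep described above.  ("I", n) / ("E", n)
# tuples stand in for the ^AI n / ^AE n control lines.
WEAVE = [
    ("I", 1),
    "this is the first line in the first version.",
    ("E", 1),
    ("I", 2),
    "this is the line that was added in the second version",
    ("E", 2),
]

def checkout(version):
    """One pass over the weave, same cost for every version: keep a line
    iff every insert block enclosing it is in the version's set."""
    ancestors = set(range(1, version + 1))  # linear history for simplicity
    active = []  # stack of booleans: is each enclosing insert block wanted?
    out = []
    for item in WEAVE:
        if isinstance(item, tuple):
            marker, rev = item
            if marker == "I":
                active.append(rev in ancestors)
            else:  # "E" closes the innermost insert block
                active.pop()
        elif all(active):
            out.append(item)
    return out

print(checkout(1))  # only the version-1 line
print(checkout(2))  # both lines
```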


I had... much too extensive experience both with SCCS weaves and with hacking them way back in the day; I even wrote something which sounds very like your smoosh, only I called it 'fuse'. However, I wrote 'fuse' as a side-effect of something else, 'fission', which split a shorter history out of an SCCS file by wholesale discarding of irrelevant, er, strands and of the history relating to them. I did this because the weave is utterly terrible as soon as you start recording anything which isn't plain text or which has many changes in each version, and we were recording multimegabyte binary files in it by uuencoding them first (yes, I know, the decision was made way above my pay grade by people who had no idea how terrible an idea it was).

Where RCS or indeed git would have handled this reasonably well (indeed the xdelta used for git packfiles would have eaten it for lunch with no trouble), in SCCS, or anything weave-based, it was an utter disaster. Every checkin doubled the number of weaves in the file, an exponential growth without end which soon led to multigigabyte files which xdelta could have represented as megabytes at most. Every one-byte addition or removal doubled up everything from that point on.

And here's where the terribleness of the 'every version takes the same time' decision becomes clear. In a version control system, you want the history of later versions (or of tips of branches) overwhelmingly often: anything that optimizes access time for things elsewhere in the history at the expense of this is the wrong decision.

When I left, years before someone more courageous than me transitioned the whole appalling mess to git, our largest file was 14GiB and took more than half an hour to check out.

The SCCS weave is terrible. (It's exactly as good a format as you'd expect for the time, since it is essentially an ed script with different characters. It was a sensible decision for back then, but we really should put the bloody thing out of its misery, and ours.)


Huh. Now I wonder how BK resolved this.


Yeah. I suspect the answer is 'store all binary data in BAM', which then uses some different encoding for the binary stuff -- but that then makes my gittish soul wonder why not just use that encoding for everything. (It works for git packfiles... though 'git gc' on large repos is a total memory and CPU hog, one presumes that whatever delta encoding BAM uses is not.)


We support the uuencode horror for compat (and for smaller binaries that don't change) but the answer for binaries is BAM, there is no data in the weave for BAM files.

I don't agree that the weave is horrible, it's fantastic for text. Try git blame on a file in a repo with a lot of history then try the same thing in BK. Orders and orders of magnitude faster.

And go understand smerge.c and the weave lightbulb will come on.


Yeah, that's the problem; it's optimizing for the wrong thing. It speeds up blame at the expense of absolutely every other operation you ever need to carry out; the only thing which avoids reading (or, for checkins, writing) the whole file is a simple log. Blame is a relatively rare operation: its needs should not dominate the representation.

The fact that the largest file you mention is frankly tiny shows why your performance was good: we had ~50,000 line text files (yeah, I know, damn copy-and-paste coders) with a thousand-odd revisions and a resulting SCCS filesize exceeding three million lines, and every one of those lines had to be read on every checkout: dozens to hundreds of megabytes, and of course the cache would hardly ever be hot where that much data was concerned, so it all had to come off the disk and/or across NFS, taking tens of seconds or more in many cases. RCS could have avoided reading all but 50,000 of them in the common case of checkouts of most recent changes. (git would have reduced read volume even more because although it is deltified the chains are of finite length, unlike the weave, and all the data is compressed.)


Give me a file that was slow and let's see how it is in BitKeeper. I bet you'll be impressed.

50K lines is not even 3x bigger than the file I mentioned. Which we check out in 20 milliseconds.

As for optimizing blame, you are missing the point: it's not blame, it's merge; it's copy by reference rather than copy by value.


I'd do that if I was still working there. I can probably still get hold of a horror case but it'll take negotiation :)

(And yes, optimizing merge matters too, indeed it was a huge part of git's raison d'etre -- but, again, one usually merges with the stuff at the tip of tree: merging against something you did five years ago is rare, even if it's at a branch tip, and even rarer otherwise. Having to rewrite all the unmodified ancient stuff in the weave merely because of a merge at the tip seems wrong.)

(Now I'm tempted to go and import the Linux kernel or all of the GCC SVN repo into SCCS just to see how big the largest weave is. I have clearly gone insane from the summer heat. Stop me before I ci again!)


Our busiest file is 400K checked out and about 1MB for the history file lz4 compressed. Uncompressed is 2.2M and the weave is 1.7M of that.

Doesn't seem bad to me. The weave is big for binaries, we imported 20 years of Solaris stuff once and the history was 1.1x the size of the checked out files.


Presumably if you then delete that first line in the third version, you get something like

  ^AI 1
  this is the first line in the first version.
  ^AE 1
  ^AD 3
  ^AI 2
  this is the line that was added in the second version
  ^AE 2

?


Close. By the way there is a bk _scat command (sccs cat, not poop) that dumps the ascii file format so you can try this and see.

The delete needs to be an envelope around the insert so you get

  ^AD 3
  ^AI 1
  this is the first line in the first version.
  ^AE 1
  ^AE 3
  ^AI 2
  this is the line that was added in the second version
  ^AE 2
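Extending the earlier sweep sketch to cover this example (again with invented markers, not the real control-character format), the delete envelope works by excluding any line whose enclosing D block is in the version's set:

```python
# Sketch of the delete-envelope example above: ("D", 3) wraps the
# version-1 insert, so version 3 and later no longer see that line.
WEAVE = [
    ("D", 3),
    ("I", 1),
    "this is the first line in the first version.",
    ("E", 1),
    ("E", 3),
    ("I", 2),
    "this is the line that was added in the second version",
    ("E", 2),
]

def checkout(version):
    ancestors = set(range(1, version + 1))  # linear history for simplicity
    open_blocks = []  # (marker, rev) for every enclosing I/D block
    out = []
    for item in WEAVE:
        if isinstance(item, tuple):
            marker, rev = item
            if marker in ("I", "D"):
                open_blocks.append((marker, rev))
            else:  # "E" closes the open block with the matching rev
                open_blocks.remove(next(b for b in open_blocks if b[1] == rev))
        else:
            inserted = any(m == "I" and r in ancestors for m, r in open_blocks)
            deleted = any(m == "D" and r in ancestors for m, r in open_blocks)
            if inserted and not deleted:
                out.append(item)
    return out

print(checkout(2))  # both lines
print(checkout(3))  # the deleted first line is skipped
```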
That whole weave thing is really cool. The only person outside of BK land that got it was Bram Cohen in Codeville, I think he had a weave.


It's sort of surprising then that a delete doesn't just put an end-version on the insert instead:

  ^AI 1..2
  this is the first line in the first version.
  ^AE 1..2
  ^AI 2
  this is the line that was added in the second version
  ^AE 2
This way the reconstruction process wouldn't need to track blocks-within-blocks.


Interesting. "^AI Spec" where Spec feeds into a predicate f(Spec, Version) to control printing a particular Version? Looks like you could drop the ^AE lines.


Sounds like equivalent representations no? Limit the scope of the I lines or wrap them in D lines.


Probably missing something. Both work on one file at a time and have some form of changeset. One adds from back to forward (kind of), the other from forward to back (RCS). Not sure where the reduction in work comes from.


Sun must like the scat names :) I used to use the scat tool for debugging core files.


That actually is pretty neat


Aha, so that's where bzr got it from. :-)


bzr got more than that from BK; it got one of my favorite things, per-file checkin comments. I liken those to regression tests: when you start out you don't really value them, but over time the value builds up. The fact that Git doesn't have them bugs me to no end. bzr was smart enough to copy that feature, and that's why MySQL chose bzr when they left BK.

The thing bzr didn't care about, sadly, is performance. An engineer at Intel once said to me, firmly, "Performance is a feature".


Git's attitude, AFAIK, is that if you want per-file comments, make each file its own checkin. There are pros and cons to this.

Performance as a feature, OTOH, is one of Linus's three tenets of VCS. To quote him, "If you aren't distributed, you're not worth using. If you're not fast, you're not worth using. And if you can't guarantee that the bits I get out are the exact same bits I put in, you're not worth using."


Big fan of `git commit -vp` here. Enables me to separate the commits according to concerns.


I suppose that in Git if you wanted to group a bunch of these commits together you could do so with a merge commit.


If I remember the history correctly, per-file commit messages were actually a feature that was quickly hacked in to get MySQL on board. bzr did not have that before those MySQL talks, and I don't think it was very popular after.

Performance indeed killed bzr. Git was good enough and much faster, so people just got used to its weirdness.


> Git was good enough and much faster, so people just got used to its weirdness.

And boy is git weird! In Mercurial, I can mess with the file all day long after scheduling it for a commit, but one can forget that in git: marking a file for addition actually snapshots a file at addition time, and I have read that that is actually considered a feature. It's like I already committed the file, except that I didn't. This is the #1 reason why I haven't migrated from Mercurial to git yet, and now with Bitkeeper free and open source, chances are good I never will have to move to git. W00t!!!

I just do not get it... what exactly does snapshotting a file before a commit buy me?


It's probably the same idea as the one behind committing once in Mercurial and then using commit --amend repeatedly as you refine the changes. Git's method sounds like it avoids a pitfall in that method by holding your last changeset in a special area rather than dumping it into a commit so that you can't accidentally push it.

I often amend my latest commit as a way to build a set of changes without losing my latest functional change.


I always do a hg diff before I commit. If in spite of that I still screw up, I do a hg rollback, and if I already pushed, I either roll back on all the nodes, or I open a bug, and simply commit a bug fix with a reference to the bug in the bug tracking system. I've been using Mercurial since 2007 and I've yet to use --amend.


> In Mercurial, I can mess with the file all day long after scheduling it for a commit

OTOH, I find that behavior weird as I regularly add files to the index as I work. If a test breaks and I fix it, I can review the changes via git diff (compares the index to the working copy) and then the changes in total via git diff HEAD (compares the HEAD commit to the working copy).


Did you know you can do 'git add -N'? That will actually just schedule the file to be added, but won't snapshot it.


Cool. I've used bzr but never knew about per-file comments.


10 or even 20% better performance is not a feature. But when tools or features get a few times faster or more, their usage model changes, which means they become different features.


In short, RCS maintains a clean copy of the head revision, and a set of reverse patches to be applied to recreate older revisions. SCCS maintains a sequence of blocks of lines that were added or deleted at the same time, and any revision can be extracted in the same amount of time by scanning the blocks and retaining those that are pertinent.

Really old school revision control systems, like CDC's MODIFY and Cray's clone UPDATE, were kind of like SCCS. Each line (actually card image!) was tagged with the ids of the mods that created and (if no longer active) deleted it.


> CDC's MODIFY and Cray's clone UPDATE, were kind of like SCCS

Do you have references? I've heard of these but haven't come across details after much creative searching since they are common words.



Thank you! A peek into (as far as I know) the root node of source control history.


I've heard that too. It comes from card readers somehow.



