
Git isn't very good for large files, unfortunately. In my experience, though, "large" means over 100MB or so.

I'd love to convince everyone to switch to git, but I want to note that SVN's branching/merging capabilities are on par with p4's, since at least 1.4 using svnmerge and natively since 1.5. (My current employer uses p4, and I switched my last workplace over to SVN from CVS.)

It does suck to go back to centralized SCM after using a DVCS though (I clone my work p4 checkout into git for minute-to-minute use). A possible solution: if your build artifacts are most of the large-file problem, consider storing them out of band and having workstations fetch them using Ant/Ivy or something similar?



I would love to have the build artifacts stored in a system for managing build artifacts, but I don't really know of any.

We could (theoretically) store the build artifacts in Subversion and the source in Git, but that would be very unpopular. No one in the company is interested in having multiple revision control systems to maintain.

Our main product is built in Visual Studio, but we use ant for the surrounding build tasks. I think of ant as a 'better make'; what does it have to do with grabbing remotely stored build artifacts?


Ivy is an add-on of sorts for ant that does dependency management (libraries and the like). It sort of jacks the Maven dependency resolution bits and leaves the rest. It's definitely not a 'system for managing build artifacts' (and it's pretty Java-focused), but you can set up your own repository and have a build target fetch artifacts, put them in the right place, etc.

I wouldn't necessarily advocate this since you'd inevitably end up having to build some system around it, but conceptually it seems like what you'd want. YMMV, I've only used it for pretty straightforward Java stuff.
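For what it's worth, here's a rough sketch of what the ant side might look like. The module names and the retrieve pattern are made up; this assumes you've published the prebuilt artifacts to your own Ivy repository and declared them in an ivy.xml next to the build file.

```xml
<!-- build.xml fragment (project/module names are hypothetical) -->
<project name="myapp" xmlns:ivy="antlib:org.apache.ivy.ant">
    <target name="fetch-artifacts">
        <!-- resolve the dependencies declared in ivy.xml against your repo -->
        <ivy:resolve/>
        <!-- copy the resolved artifacts into lib/, named artifact-revision.ext -->
        <ivy:retrieve pattern="lib/[artifact]-[revision].[ext]"/>
    </target>
</project>
```

Then a workstation just runs `ant fetch-artifacts` instead of pulling the binaries out of version control.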


None of the distributed VCSes handle large files well, because they load the whole file into memory when they diff it. Only Subversion seems to do well here currently.


By "handle large files" all I really want is for it to treat the file as an un-diffable blob and store it.
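You can at least tell Git not to try to diff (or delta-compress) particular paths via .gitattributes, though that doesn't fix the memory behavior mentioned above. A sketch, with made-up file extensions:

```
# .gitattributes: treat these paths as opaque binaries
*.bin   -diff -delta
*.iso   binary
```

Here `binary` is a built-in macro that expands to `-diff -merge -text`, and `-delta` disables delta compression for the path.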


Git handles symlinks, so if your devs were all on UNIX-like systems, you could just store symlinks to your un-diff-able blobs under your Git repository.

Sorry to offer a non-solution, but maybe someone else out there does have a UNIX dev platform, and could get some benefit from storing big binary files in a network share with versioned symlinks.
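Something like this, say. The share path and filenames are made up; the idea is that the blob is versioned by its filename on the share, and the repo only ever commits a small symlink to it:

```shell
# Sketch: big binaries live on a network share, versioned by filename;
# the Git repo only tracks a small symlink. All paths here are hypothetical.
SHARE=/tmp/artifact-share          # stand-in for the real network mount
mkdir -p "$SHARE" repo && cd repo
git init -q
git config user.email you@example.com
git config user.name you

# Publish version 1.2 of the blob to the share...
echo "big binary v1.2" > "$SHARE/installer-1.2.bin"
# ...and commit only a symlink pointing at it.
ln -sf "$SHARE/installer-1.2.bin" installer.bin
git add installer.bin
git commit -qm "Point installer at blob v1.2"
```

Bumping to a new blob version is then just re-pointing the symlink and committing, which diffs cleanly.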


I hadn't thought of that. You'd then be versioning the link, not the file itself.

Unfortunately we are tied pretty tightly to Windows dev environments.



