
Wow, this makes me think: do animation teams really not use a version control system to have multiple people contributing to the same movie at the same time? Does anybody know how that works?


Oh they 100% do use a version control system.

Your standard "shot" will contain 10s or hundreds of versions of animation, lighting, texturing, modelling & lighting. Not to mentions compositing. They are often interdependent too.

A movie like Shrek will have literally billions of versions of $things.

> Does anybody know how that works?

This varies by company, but everyone tends to have an asset database. This is a thing that (should) give a UUID to each thing an artist is working on, and a unique location where it lives (that was/is an NFS file share, although covid and WFH might have changed that).

Where it differs from git et al is that the folder tree is normally navigable by humans as well, so the path for your "asset" will look something like this:

/share/showname/shot_number/lighting/v33/

There are caveats: something like a model of a car, or anything else that gets re-used, is put somewhere else in the "showname" folder.

Now, this is all very well, but how do artists (yes, real, non-Linux-savvy artists) navigate and use this?

That's where the "Pipeline" comes in. It makes use of the Python API of the main programs used on the show. So when the coordinator assigns a shot to an artist, the artist presses a button saying "load shot" and the API will pull the correct paths, notify the coordination system (something like ftrack or Shotgun) and open up the software they normally use (Maya, ZBrush, Mari, Nuke, etc.) with the shot loaded.

Once the artist is happy, they'll hit publish.

The underlying system does the work of creating the new directory, copying the data and letting the rest of the artists know that there are new assets to be pulled into their scene as well.
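The publish step described above can be sketched in a few lines. This is an assumption of how such a tool might look, not any studio's actual code; the `notify` hook stands in for whatever ftrack/Shotgun integration the pipeline provides:

```python
import shutil
from pathlib import Path


def publish(work_dir: Path, asset_root: Path, notify=None) -> Path:
    """Copy the artist's working files into the next version directory
    under asset_root (v1, v2, ...), then tell the coordination system."""
    existing = [int(d.name[1:]) for d in asset_root.glob("v[0-9]*") if d.is_dir()]
    next_dir = asset_root / f"v{max(existing, default=0) + 1}"
    shutil.copytree(work_dir, next_dir)  # never overwrite an old version
    if notify is not None:
        notify(next_dir)  # hypothetical hook into ftrack/Shotgun
    return next_dir
```

Because every publish lands in a fresh directory, old versions stay immutable and other artists can keep referencing them until they choose to pull the update.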

Then there are backups. Again, this is company dependent. Some places rely on hardware to figure it out: they have a huge single Isilon cluster, hook the nearline and tape systems up to it, and say: every hour, sync changes to the nearline; every night, stream those to tape.

Others have wrapped /bin/rm to make sure that it just moves the directory, rather than actually deletes things.
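A wrapper like that boils down to "move it to a trash area instead of unlinking it". A minimal sketch (the trash location and naming scheme are my assumptions, not any particular studio's):

```python
import time
from pathlib import Path


def safe_rm(target: Path, trash: Path) -> Path:
    """Move `target` into a trash directory instead of deleting it.
    A periodic job can expire trash entries once backups have caught up."""
    trash.mkdir(parents=True, exist_ok=True)
    # Timestamp the entry so repeated deletes of the same name don't collide.
    dest = trash / f"{target.name}.{int(time.time())}"
    target.rename(dest)  # atomic, but only within the same filesystem
    return dest
```

The `rename` is effectively free on the same filesystem, which is why this trick is popular on big NFS shares: "deleting" a terabyte takes milliseconds, and restores are just a move back.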

Some companies have a massive nearline system that does a moving-window type sync, so you have 12 hourlies, 7 dailies and 1 monthly online at once. The rest is on tape. The bigger the company, the more often the fuckup, the better the backups are tested.
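That moving-window retention is just a bucketing exercise over snapshot timestamps. A rough sketch, assuming one policy among many (the exact bucketing rules vary per site):

```python
from datetime import datetime


def retention(snapshots, hourlies=12, dailies=7, monthlies=1):
    """Given snapshot datetimes, pick which stay online under a
    12-hourly / 7-daily / 1-monthly window; the rest go to tape."""
    snapshots = sorted(snapshots, reverse=True)  # newest first
    keep = list(snapshots[:hourlies])            # newest N are the hourlies
    seen_days, seen_months = set(), set()
    for s in snapshots[hourlies:]:
        day, month = s.date(), (s.year, s.month)
        if len(seen_days) < dailies and day not in seen_days:
            seen_days.add(day)                   # first snapshot seen per day
            keep.append(s)
        elif len(seen_months) < monthlies and month not in seen_months:
            seen_months.add(month)               # first snapshot seen per month
            keep.append(s)
    return keep
```

Everything `retention` doesn't return is a candidate for streaming to tape and dropping from the nearline.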


Not really. With multiple animators working on files simultaneously, it's kind of hard.

Sadly my animation knowledge has left me since I left the studio gig three years ago. But we did create content for Netflix, and many animators sheepishly came up because they had deleted a folder and needed it restored. It's not as uncommon as you think.

FWIW, the live archive/backup server was called Dumbo: 3x 4U Supermacho chassis with over 1.2PB in drives, served over 10Gbit to each workstation connected at 1Gbit, running CentOS 5. I once dropped a new chassis while racking it, which is partly why I lost my job :/


Can't edit my post, but for clarity: I'm wrong about the line "Not really. With multiple animators working on files simultaneously, it's kind of hard." See comments above.


Pixar used RCS at the time. Problem is, when you run `rm -rf /`, that deletes the RCS directories as well.


IIRC perforce is (was?) commonly used in teams with workflows involving large assets.


Perforce is still the de facto standard in the video game industry, where it's not uncommon for a game's source assets to run into the tens of terabytes.

That said, Toy Story 2 was developed in the late 90s, and while Perforce existed then I don't know how popular it was.



