The paper mentions that the accretion disc was rendered in Houdini.

It also mentions that ‘several hundred’ of their 1633 10-core, 156GB (weird number?) blade servers were used, but it doesn't seem to go into detail on data storage.

Is it possible you were working at the compositing phase, which would have been very heavy on random reads and writes, resulting in more wear on the disks?

> you were working at the compositing phase,

I was a systems engineer at the time. Compositing is actually close to ideal linear streaming: you read the images serially and tend to write them back serially, which for spinny hard disks is about as optimal as possible. Each frame tends to be between 20 and 150 megs (depending on resolution, layers, and bit depth), but they are rarely read in a random-IO way; you start at frame 1 and work through to the last frame.
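To make that access pattern concrete, here's a rough sketch of the frame-sequential loop (the paths and directory layout are made up for illustration, not the studio's actual pipeline; real compositors do the work in memory per frame, but the on-disk pattern is the same):

    # Minimal sketch of frame-sequential compositing IO.
    # Hypothetical paths; the point is the big serial reads/writes.
    import os

    in_dir  = "/shots/seq010/comp/in"    # assumed layout
    out_dir = "/shots/seq010/comp/out"

    for name in sorted(os.listdir(in_dir)):   # frame 0001, 0002, ...
        with open(os.path.join(in_dir, name), "rb") as f:
            frame = f.read()              # one large sequential read (~20-150 MB)
        result = frame                    # compositing work would happen here
        with open(os.path.join(out_dir, name), "wb") as f:
            f.write(result)               # one large sequential write

Each frame is touched once, front to back, so a spindle barely has to seek.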

There were 32 primary fileservers at the time, two of which were spare. Including the nearlines (which were _n_ Lustre machines, each with 4 RAID arrays attached by SAS rather than one), we'd normally expect to replace twoish disks a week.

They ran particle sims all the time; water, smoke, and explosions are all staples of VFX. There was just something weird about this particular sim.

My understanding was that the actual simulation was causing the disks to die, rather than the render. A render can be restarted; a sim, less so. Well, this sim at least.
