
Rendering scales fairly linearly with hardware. If I'm rendering with Arnold, for example, it saturates all my cores pretty well and gives back an almost linear reduction in render time.

Same with GPU renderers like Redshift. Alternatively, I can dedicate half the machine to rendering and the other half to continued work.
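
For a rough intuition about that near-linear scaling, here's a minimal sketch (plain Python with multiprocessing, nothing Arnold- or Redshift-specific; the tile count and per-tile workload are made up) that farms independent "tiles" out to a process pool and reports the speedup over a single worker:

    import time
    from multiprocessing import Pool, cpu_count

    def render_tile(seed):
        # Stand-in for tracing one tile: pure CPU-bound busywork.
        acc = 0.0
        for i in range(1, 200_000):
            acc += (seed * i) ** 0.5
        return acc

    def timed_render(workers, tiles=64):
        # Render all tiles with the given number of worker processes.
        start = time.perf_counter()
        with Pool(workers) as pool:
            pool.map(render_tile, range(tiles))
        return time.perf_counter() - start

    if __name__ == "__main__":
        baseline = timed_render(1)
        for n in (2, 4, cpu_count()):
            t = timed_render(n)
            print(f"{n:2d} workers: {t:6.2f}s  speedup {baseline / t:.2f}x")

Because the tiles are independent and share no state, the speedup tracks worker count closely; a real renderer gives back a little to scheduling, memory bandwidth, and whatever serial work the frame needs.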




Do 16+ threads actually improve render times with Redshift?

I know

Unless you’re using Houdini, I think you’ll get the same result using Redshift with 16 threads. Houdini is the only DCC that uses more than one thread to prep the scene.
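
That ceiling is basically Amdahl's law: if scene prep runs on a single thread, the serial fraction caps what extra threads can buy no matter how many you add. A back-of-the-envelope sketch, using a made-up 20% serial prep fraction (not a measured Redshift number):

    def amdahl_speedup(serial_fraction, threads):
        # Amdahl's law: overall speedup when only (1 - serial_fraction)
        # of the work can be spread across `threads`.
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / threads)

    # Assume 20% of the frame's CPU time is single-threaded scene prep.
    for n in (8, 16, 32):
        print(f"{n:2d} threads: {amdahl_speedup(0.20, n):.2f}x")
    # 8 threads: 3.33x, 16 threads: 4.00x, 32 threads: 4.44x --
    # doubling past 8 threads barely moves the needle.

Once the parallel part is already fast, the single-threaded prep dominates, which is why the extra threads stop showing up in render times.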



