

If it doesn't hurt to think about it, we're not going to try it - T-A
http://highfidelity.io/

======
erichocean
I do agree with them that voxels are where it'll all end up, particularly at
the high end of rendering.

I doubt they're ready to take the next leap and abandon Monte Carlo + HACKS,
which is currently the state of the art. Removing the hacks and going with
straight-up, individual-wavelength photon tracing without _any_ hacks (other
than using the adjoint formulation, which simply flips the direction of the
particles) is where it'll end up, IMO.
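
To make "flips the direction" concrete: the adjoint trick means the same
random-walk code that pushes photons out from a light can push "importance"
particles out from the eye. A toy sketch - the types, the one-line scatter
rule, and the albedo are all made up for illustration, not anyone's actual
renderer:

    // Toy sketch only: the same walk, two starting points.
    #include <cstdio>
    #include <random>

    struct Vec3 { double x, y, z; };

    std::mt19937 rng{42};
    std::uniform_real_distribution<double> uni(-0.5, 0.5);

    // One bounce: pick a toy random direction and advance the particle.
    // A real tracer would intersect scene geometry and sample a BRDF here.
    void scatter(Vec3& pos, Vec3& dir) {
        dir = {uni(rng), uni(rng), uni(rng)};
        pos = {pos.x + dir.x, pos.y + dir.y, pos.z + dir.z};
    }

    // The propagation loop is identical for both formulations; only the
    // starting point changes (light source vs. eye).
    double walk(Vec3 pos, Vec3 dir, int bounces) {
        double throughput = 1.0;
        for (int i = 0; i < bounces; ++i) {
            scatter(pos, dir);
            throughput *= 0.8;  // toy per-bounce albedo
        }
        return throughput;
    }

    int main() {
        Vec3 light{0, 0, 0}, eye{10, 0, 0};
        printf("photon (from light):   %f\n", walk(light, {1, 0, 0}, 8));
        printf("importance (from eye): %f\n", walk(eye, {-1, 0, 0}, 8));
    }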

The amount of data and computation needed is breathtaking, but it can be done
today with a large cluster: Intel E5s, SSDs, InfiniBand, and thousands and
thousands of low-end graphics cards to carry the math load. Individual scenes
are in the 1.5-petabyte range using voxels, but there's enough compute there
to turn off the Monte Carlo hacks and just get the damn answer, every frame,
like clockwork, with extremely low latency and with all geometric-optics
effects enabled.
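
For scale, here's one back-of-envelope way to land in that range; the
resolution, occupancy, and bytes-per-voxel constants are illustrative
assumptions, not measurements:

    // Back-of-envelope only; every constant here is an assumption.
    #include <cstdio>

    int main() {
        double res = 1048576;            // 2^20 voxels per axis
        double dense = res * res * res;  // ~1.15e18 voxels if dense
        double occupancy = 1e-4;         // sparse scene: 0.01% occupied
        double bytesPerVoxel = 16;       // say, color + normal + material id
        double total = dense * occupancy * bytesPerVoxel;
        printf("%.2f petabytes\n", total / 1e15);  // ~1.84 PB
    }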

The cluster hardware I spec'd to do the above clocked in around $2 million 12
months ago. I doubt it's changed much since then, maybe dropped another
20-30%? That's actually incredibly low—30 months ago, it was over $100 million
(and physically impossible due to space requirements). When it dropped down to
$16 million two years ago, I went after funding (and failed to raise it, thus
ending a 10+ year journey).

Now it's down to $2 million, but I've moved on to other work. I'm sure
someone, someday will take up the torch and make it happen—maybe even these
High Fidelity people. And if I ever end up with "fuck you" money from the
startups I'm CTOing for currently, I'll write a check and do it myself.

Best of luck!

------
JoeAltmaier
Latency is certainly the most important variable. If the world slews around
you when you turn your head, you get instant vertigo. Some people adapt;
others suffer from 'simulator sickness', which can induce vomiting, aphasia,
and headaches, and can last for days after the experience.

And latency is darned hard to control. Turn from staring at a wall to a window
with a vista stretching to the distance - compute requirements go from trivial
to astronomical in an instant.

I've always imagined this might be accomplished by modeling in reverse -
algorithmic objects that are defined fractally, beginning with basic size and
color and broken down iteratively into more detail. Isn't that something like
voxel rendering?

So initially the mountain in the distance is a blue triangle, then it resolves
into peaks and valleys, snow and blue-green treeline. If compute resources are
limited (which they are most of the time), it can take a while to resolve, yet
there's always something to display at the right position and size. So there's
no lag in positioning objects, just in bringing them 'into focus'.
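
A rough sketch of that refine-while-the-budget-allows idea (the tree, the node
contents, and the notion of 'budget' are all just illustrative):

    // Sketch: a coarse node can always be drawn immediately; it is refined
    // into children only while the per-frame budget holds.
    #include <cstdio>
    #include <vector>

    struct Node {
        const char* desc;   // e.g. "blue triangle" vs. "peaks and valleys"
        int level;
        std::vector<Node> children;
    };

    // Draw whatever resolution we already have; refine if budget allows.
    void render(Node& n, int& budget) {
        if (n.children.empty() || budget <= 0) {
            printf("draw level %d: %s\n", n.level, n.desc);
            return;
        }
        for (Node& c : n.children) {
            --budget;  // refining costs compute
            render(c, budget);
        }
    }

    int main() {
        Node mountain{"blue triangle", 0,
            {{"peaks and valleys", 1, {}}, {"snow and treeline", 1, {}}}};
        int tightBudget = 0, richBudget = 8;
        render(mountain, tightBudget);  // coarse: just the blue triangle
        render(mountain, richBudget);   // refined: peaks, snow, treeline
    }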

Not sure if it'd cure/avoid simulator sickness, but it'd be better than a
wobbly delayed image that screwed with your inner ear.

------
ollifi
People working with voxels consider VDB an alternative to octrees. It's a
sparse volume format built to handle very large numbers of voxels, from Ken
Museth & co. at DreamWorks Animation:
http://www.openvdb.org/documentation/doxygen/index.html
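
The core API is pleasantly small; a minimal sketch based on the docs linked
above:

    // Minimal OpenVDB usage: a sparse grid where only the voxels you touch
    // consume memory.
    #include <openvdb/openvdb.h>
    #include <iostream>

    int main() {
        openvdb::initialize();
        // Background value 0.0f is returned for all untouched voxels.
        openvdb::FloatGrid::Ptr grid = openvdb::FloatGrid::create(0.0f);
        auto acc = grid->getAccessor();
        acc.setValue(openvdb::Coord( 1000000, 0, 0), 1.0f);  // two voxels a
        acc.setValue(openvdb::Coord(-1000000, 0, 0), 1.0f);  // million apart
        std::cout << "active voxels: " << grid->activeVoxelCount() << "\n";
    }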

------
ballard
This sounds like building a dream rather than what people want. Validate the
market first, before building anything. This is a market segment that has had
failure after failure, so there's probably a good reason or two why this space
hasn't been conquered.

