“If you just render the geometry, it’s pretty, but it doesn’t look lush and furry,” says supervising technical director Bill Wise. “We wanted Spanish moss hanging from tree limbs, and clumps and hummocks of moss. The Highlands of Scotland were like another character in the film, a living backdrop for what was going on. We had never tackled as vast an outdoor landscape, but we were able to generate it using insane procedural geometry developed by Inigo Quilez. He’s a magician.” (http://www.cgw.com/Publications/CGW/2012/Volume-35-Issue-4-J..., section "Painting with Code")
He contributed some basic lessons about it to Khan Academy's "Pixar in a Box" section on Environment modeling (https://www.khanacademy.org/partner-content/pixar/environmen..., direct link to first video: https://www.youtube.com/watch?v=fuwUltMAdYQ).
Also worth checking out is "Elevated", one of the coolest 4066-byte programs you'll ever see: http://www.pouet.net/prod.php?which=52938.
Oh, no, sir. This wasn't built by a magician. This was built by a wizard. A man who can actually do what magicians pretend to do.
He does things that I wouldn't have dreamed possible with fragment shaders and implicit geometry. https://www.youtube.com/watch?v=yMpG6qEb8js
IQ is a wizard and really inspired my interest in programming and math.
...and you can see how it was made from basic shapes:
This page has a great cache of formulae for various primitives, patterns and blend functions: http://mercury.sexy/hg_sdf/
The boolean and domain-operator stuff there blew my mind. It's incredible how concise the descriptions are; that goes a long way toward explaining some of the more magical stuff to come out of the Shadertoy/demoscene world.
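For a flavor of how concise those building blocks are, here's a rough Python transcription (purely illustrative; hg_sdf itself is GLSL) of two primitives, the min/max booleans, and one common polynomial smooth-union blend:

```python
import math

# Signed distance from point p to a sphere of radius r at the origin
# (negative inside, positive outside).
def sd_sphere(p, r):
    return math.sqrt(p[0] ** 2 + p[1] ** 2 + p[2] ** 2) - r

# Signed distance to an axis-aligned box with half-extents b.
def sd_box(p, b):
    q = [abs(p[i]) - b[i] for i in range(3)]
    outside = math.sqrt(sum(max(c, 0.0) ** 2 for c in q))
    inside = min(max(q[0], q[1], q[2]), 0.0)
    return outside + inside

# Booleans are just min/max over distances.
def op_union(d1, d2):
    return min(d1, d2)

def op_intersect(d1, d2):
    return max(d1, d2)

def op_subtract(d1, d2):
    return max(d1, -d2)

# Polynomial smooth union: rounds the seam wherever the two
# fields come within k of each other.
def op_smooth_union(d1, d2, k):
    h = max(k - abs(d1 - d2), 0.0) / k
    return min(d1, d2) - h * h * k * 0.25
```

The smooth union is what gives raymarched scenes their characteristic melted-together look: far apart it behaves exactly like min, and within k of the seam it blends.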
Question: are many people using SDFs/ray marching in games outside of very niche uses (clouds, etc.)?
The performance benefits only seem to kick in for very specific applications, but I'd like to see whole levels/environments built like this. Maybe as the current generation of GPUs drops in price and becomes ubiquitous, it will become a more feasible approach all round.
PS: If anyone has access to a Rift (or a Vive via ReVive, with a bit of fiddling), try out https://github.com/jimbo00000/RiftRay
It's a lot of Shadertoy stuff in VR, and some of it is truly astonishing in a headset. (I know you can run the Shadertoy website itself in WebVR, but it's clunky as hell and performance isn't great.)
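For anyone unfamiliar with the core technique being discussed: sphere tracing marches each ray forward by the scene's distance value, so a single distance function is the whole scene. A minimal sketch, with Python standing in for a fragment shader and a hard-coded single-sphere scene of my own invention:

```python
import math

# Distance to a single sphere; in a real renderer this would be the
# composed SDF of the whole environment.
def scene(p):
    center, radius = (0.0, 0.0, 5.0), 1.0   # hard-coded for the sketch
    return math.dist(p, center) - radius

def raymarch(origin, direction, max_steps=128, max_dist=100.0, eps=1e-4):
    """Sphere tracing: the SDF value is a safe step size, so march
    the ray by it until we land on a surface or run out of budget."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(origin[i] + t * direction[i] for i in range(3))
        d = scene(p)
        if d < eps:
            return t        # hit: distance along the ray
        t += d
        if t > max_dist:
            break
    return None             # miss
```

In a real shader this loop runs per pixel, which is where the performance concerns in the question above come from: the cost scales with both resolution and the complexity of the scene SDF.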
For 'designed' things where you place a bunch of spheres manually, the complexity of the scene is still about linear in the complexity of the expression, and an 'eval' of the expression would probably be helped by spatial acceleration structures such as a k-d tree or the other structures usually used in scene graphs. The technique seems best suited to cases where the complexity emerges out of a much lower order of complexity in the expression itself.
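To make that concrete, here's a toy sketch (the scene, names, and the 1-D split are my own illustration): a 'designed' union of many hand-placed spheres costs one term per primitive to evaluate naively, while a k-d-tree-style partition lets the eval skip subtrees that provably can't contain the nearest surface:

```python
import math

# A "designed" scene: many hand-placed spheres, one expression term each.
spheres = [((float(i), 0.0, 0.0), 0.4) for i in range(1000)]
max_radius = max(r for _, r in spheres)

# Naive eval touches every primitive: cost is linear in expression size.
def sdf_naive(p):
    return min(math.dist(p, c) - r for c, r in spheres)

# k-d-tree-style partition (a 1-D split on x, for brevity).
def build(prims):
    if len(prims) <= 4:
        return prims                        # leaf: a plain list
    prims = sorted(prims, key=lambda s: s[0][0])
    mid = len(prims) // 2
    return (prims[mid][0][0], build(prims[:mid]), build(prims[mid:]))

def sdf_tree(node, p, best=float("inf")):
    if isinstance(node, list):              # leaf: brute force
        return min(best, min(math.dist(p, c) - r for c, r in node))
    split, lo, hi = node
    near, far = (lo, hi) if p[0] < split else (hi, lo)
    best = sdf_tree(near, p, best)
    # |p.x - split| - max_radius lower-bounds every far-side distance,
    # so the far subtree is visited only if it could still win.
    if abs(p[0] - split) - max_radius < best:
        best = sdf_tree(far, p, best)
    return best
```

Both return the same distances; the tree version just prunes most of the expression per query, which is the kind of help a scene structure would give an SDF eval.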
An analog (ha) can be made with how synth + sequenced music has two layers of complexity: an emergent one 'within' the sounds, and a designed one in the sequence.
Downloading now. Itch is becoming a very interesting platform for discovering experimental games and interactive experiences.
The game designs that SDFs allow haven't been explored fully, so I'm picking away at it game jam by game jam. ;)
I was disappointed to learn that OpenGL won't do it for us: we have to calculate it ourselves and send a third texture coordinate to each vertex for extra calculation in the shader.
Sadly, there isn't much useful info on this subject on the internet (I'm not talking about projective mapping, but bilinear interpolation).
I've found this: http://www.reedbeta.com/blog/quadrilateral-interpolation-par... that I think will work for my case.
My problem is that I also use normals to do "fake" parallel lighting on quads, which accentuates the affine errors even more on non-parallelogram quadrilaterals. It's easier to just subdivide the quad to minimize the affine artifacts.
If anyone knows a better way to solve it, I'd love to learn.
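In case it helps, here's a sketch of the inverse-bilinear solve that articles on quadrilateral interpolation typically build on, in Python for readability rather than shader code; the corner ordering and epsilon choices are my own assumptions. Given a quad's corners and a point, it recovers the (u, v) that a true bilinear mapping would assign, by solving the quadratic the bilinear map reduces to:

```python
import math

def cross2(a, b):
    # z component of the 2-D cross product
    return a[0] * b[1] - a[1] * b[0]

def inverse_bilinear(p, a, b, c, d):
    """Invert the bilinear map of quad a-b-c-d, where a maps to
    uv=(0,0), b=(1,0), c=(1,1), d=(0,1). Returns (u, v) or None."""
    e = (b[0] - a[0], b[1] - a[1])
    f = (d[0] - a[0], d[1] - a[1])
    g = (a[0] - b[0] + c[0] - d[0], a[1] - b[1] + c[1] - d[1])
    h = (p[0] - a[0], p[1] - a[1])

    # p = a + u*e + v*f + u*v*g  collapses to  k2*v^2 + k1*v + k0 = 0.
    k2 = cross2(g, f)
    k1 = cross2(e, f) + cross2(h, g)
    k0 = cross2(h, e)

    if abs(k2) < 1e-9:                      # parallelogram: linear case
        v = -k0 / k1
    else:
        disc = k1 * k1 - 4.0 * k2 * k0
        if disc < 0.0:
            return None                     # point not covered by the quad
        v = (-k1 - math.sqrt(disc)) / (2.0 * k2)
        if not 0.0 <= v <= 1.0:             # pick the root inside the quad
            v = (-k1 + math.sqrt(disc)) / (2.0 * k2)

    denom = e[0] + g[0] * v
    if abs(denom) > 1e-9:
        u = (h[0] - f[0] * v) / denom
    else:                                   # degenerate x: use y instead
        u = (h[1] - f[1] * v) / (e[1] + g[1] * v)
    return (u, v)
```

In practice this solve runs per fragment, fed by interpolated corner data, so it sidesteps the affine artifacts entirely instead of just hiding them by subdividing.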