
How Voxels Became ‘The Next Big Thing’ - mariuz
https://medium.com/@EightyLevel/how-voxels-became-the-next-big-thing-4eb9665cd13a
======
strictnein
I remember when Voxels were the "Next Big Thing", 20 years ago.

Delta Force:

[https://en.wikipedia.org/wiki/Delta_Force_(video_game)](https://en.wikipedia.org/wiki/Delta_Force_\(video_game\))

Which used the Voxel Space engine:

[https://en.wikipedia.org/wiki/Voxel_Space](https://en.wikipedia.org/wiki/Voxel_Space)

~~~
hardwaresofton
What was/is holding voxels back? The new programming model? Computation
density?

I guess Minecraft is an example of a voxel game (supposedly a voxel game
written in block style, according to a random post on the internet[0]) that
became really popular -- so I guess the answer could be "nothing", but it sure
seems like voxels are still niche (and don't want to be).

[0]: [https://forum.unity.com/threads/voxel-games-vs-block-games.286747/](https://forum.unity.com/threads/voxel-games-vs-block-games.286747/)

~~~
pandaman
In principle, "voxels" and "polygons" are the same algorithm - you project
some points on the surface to the screen and eliminate invisible ones with the
z-buffer. Polygons allow you to use sparser points and fill the space between
them with interpolation; voxels rely on point density or on blowing up the
point size.

Interpolation is quite complicated; voxels, on the other hand, can be arranged
so that they are very easy to draw on a traditional CPU. So one could write a
really fast voxel renderer on something like a Pentium 200 that would produce
better-looking graphics than a polygonal renderer at the same speed (it would
not be as versatile, but when you do a demo you are free to choose scenes
where voxels will do the job better).

On GPUs interpolation is not just cheap, it's the only way to exploit massive
parallelism, since you can draw all interpolated points simultaneously.
Drawing points, on the other hand, is sequential (because of the z-buffer, all
primitives have to be drawn in a fixed order) and is already super slow by
itself. GPUs are also optimized for drawing triangles, so they do a lot of
things with them that they cannot do for a list of points (which would be the
primitive you'd use to draw voxels). Of course you can do your voxels in
compute, but then you are not using the color/depth buffer hardware, so either
way you are at a disadvantage.

~~~
dkersten
It was my understanding that a large reason why early voxel engines had great
performance is that you only draw each pixel once (kinda like how deferred
lighting only calculates lights for each pixel once), eliminating overdraw and
therefore avoiding work that was never going to be visible.

But now GPUs can crunch that data super fast (as you say) and memory bandwidth
is often a bigger issue. So while voxels still get used in some places,
they’re avoided in favour of more compressed data formats and rendering
techniques.

At least, that’s what it sounded like to me, I’m no graphics expert.

~~~
pandaman
>because you only draw each pixel once

It depends on the algorithm. If you did a heightfield then yes, you could
easily eliminate invisible pixels without testing each of them against the
z-buffer. However, you could not do complex models with heightfields.
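
A minimal sketch of that heightfield occlusion trick (my own illustration, not code from any engine in this thread): march front-to-back along one screen column and remember the highest row drawn so far, so hidden samples are rejected without any per-pixel z-buffer test.

```python
def render_column(heights, colors, cam_height, horizon, scale, screen_h):
    """heights/colors: terrain samples along the ray, nearest first.
    Returns {screen_row: color} for one screen column (y grows downward)."""
    column = {}
    max_row = screen_h                       # lowest unfilled row so far
    for dist, (h, c) in enumerate(zip(heights, colors), start=1):
        # perspective-project the terrain height to a screen row
        row = max(int(horizon + (cam_height - h) * scale / dist), 0)
        if row < max_row:                    # only newly exposed rows get filled
            for y in range(row, max_row):
                column[y] = c
            max_row = row                    # everything below is now occluded
    return column

pixels = render_column(heights=[10, 30, 20, 50], colors=["a", "b", "c", "d"],
                       cam_height=60, horizon=20, scale=1, screen_h=100)
```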

A popular general voxel algorithm in the 90s was "chains", where an object was
represented as a sequence of voxels, with the position of each encoded as an
offset from the previous one; since the voxels were on a regular grid, there
were only 26 possible offsets. If you did not do perspective projection, you
could just project all 26 deltas into screen space and compute each voxel's
position with 3 additions. With the offset taking 5 bits, you could fit each
voxel into a 16-bit word together with some ramp-lit material for per-pixel
lighting. It was a very fast loop even on a CPU without division. On a modern
CPU or GPU, of course, this is painfully slow, since each position depends on
the previous one and enforces a strict order on the loop. Memory-bandwidth
wise this is quite nice because of the very compact input data - e.g. you
could put position and texture deltas into 8-bit voxels and have a more
compact representation for a small model than the popular 16/32 _byte_
vertices.
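
A sketch of the chain decoding loop described above, under my own assumptions about the layout (26 unit offsets on the grid, each voxel stored as an offset index plus a material). With a non-perspective projection the 26 deltas are projected to screen space once per object, after which each voxel's screen position really does cost three additions:

```python
# the 26 neighbor offsets on a regular grid
DELTAS = [(dx, dy, dz)
          for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)
          if (dx, dy, dz) != (0, 0, 0)]

def decode_chain(start, chain, project):
    """start: (x, y, depth) of the first voxel in screen space.
    chain: list of (offset_index, material) pairs.
    project: maps a 3D grid delta to a (dx_screen, dy_screen, ddepth) delta."""
    screen_deltas = [project(d) for d in DELTAS]   # done once per object
    x, y, z = start
    out = [(x, y, z, None)]
    for index, material in chain:
        sx, sy, sz = screen_deltas[index]
        x, y, z = x + sx, y + sy, z + sz           # 3 additions per voxel
        out.append((x, y, z, material))
    return out

# trivial axonometric-style projection, purely illustrative
proj = lambda d: (d[0] + d[2], d[1] - d[2], d[2])
voxels = decode_chain((100, 100, 0), [(DELTAS.index((1, 0, 0)), 7),
                                      (DELTAS.index((0, 0, 1)), 7)], proj)
```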

~~~
dkersten
Oh that’s interesting, thanks for the comment! Must read up on it more :)

Do you know how modern voxel algorithms work? Do they just blindly render
point clouds to make use of the GPU parallelism and let the z-buffer remove
invisible voxels (with a spatial index to render only the ones in view), or is
there more sophistication? I assume there are some clever parallel algorithms
available nowadays, or?

~~~
pandaman
I am not closely following voxel developments so I could be wrong, but my
impression is that the practical implementations focus on the representation
and its compression. They all render with some type of raytracing using
compute. There have been approaches to generate a triangle mesh from a voxel
representation, but I am not sure how far they got.

~~~
dkersten
Thanks!

------
audunw
I think the trojan horse for voxel engines is lighting systems.

There are already game engines that voxelize the world into a very rough voxel
representation to efficiently calculate indirect light propagation.

I think more and more lighting and physics simulations will be implemented in
a rough voxel representation of the game world. Then, to improve accuracy and
detail this representation will become finer and finer, until it makes more
sense to render this voxel representation directly, rather than rendering the
polygons.

What's missing is perhaps hardware support for rasterization of polygons to
voxels. It's hard to compete against the super efficient hardware
rasterization of polygons to pixels otherwise.

I think character models will be polygon based for a long time still, since
it's much more suited for character animation.

~~~
blackrack
You can use the GPU's hardware rasterizer to voxelize a scene.

[http://www.alexandre-pestana.com/voxelization-using-gpu-hardware-rasterizer/](http://www.alexandre-pestana.com/voxelization-using-gpu-hardware-rasterizer/)

This approach is used in most commercial games that support voxel-based global
illumination.

------
acou_nPlusOne_t
Voxels were always the next big thing. The problem is they don't just demand
n^3 resources across all dimensions - they demand n^6 if you want to go into
fine-grained detail depending on closeness to the actor.

Cube: Sauerbraten did go there, and it uses something like octrees to store
these tessellated, refined voxels.

And that is where the whole concept crawls onto the shore to die - the tooling
for this to work with everything that has been invented before is just not
efficient. To sculpt in voxels, you basically send marching cubes through
polygon models.

Personally, that was the point where I dropped it - because when I have polys
already, and don't need the destructible physics (they ruin gameplay &
performance most of the time anyway), then why bother?

It's one of those technically superior solutions which never seem to surpass
the legacy solutions (like Rust and C). My assumption is that one day the
polygon world will just eat the voxel world - having a
.toVoxel.ApplyPhysicalModel(Sand).AfterPhysicsRestDo(ToPoly), and that's what
will remain.

Very small cubes of dust.

~~~
teraflop
Where are you getting n^6 from?

A hierarchical level-of-detail scheme (e.g. the one used in Sauerbraten) is
still only n^3 in the worst case. If you store one full-resolution copy of
your world, then a copy at 1/2 the resolution, then 1/4 the resolution and so
on, you get a geometric series that converges to only a constant-factor
overhead.
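
A quick check of that geometric-series argument: storing a full mip pyramid of half-resolution copies costs only a constant factor (8/7) more than the full-resolution n^3 grid alone.

```python
def pyramid_voxels(n):
    """Total cells in an n^3 grid plus all half-resolution copies down to 1^3."""
    total = 0
    while n >= 1:
        total += n ** 3
        n //= 2
    return total

# 1024^3 * (1 + 1/8 + 1/64 + ...) -> the ratio approaches 8/7 ≈ 1.1429
overhead = pyramid_voxels(1024) / 1024 ** 3
```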

~~~
AstralStorm
You need a way to anisotropically filter the voxels. This means the mipmap
should be fully indexable. That is n^3 * m memory for the mipmap and n^4
filtering...

Isotropic filtering is easier, just 3*n. However, it will at times make
surfaces look... unreal, as if physics didn't form them.

You cannot wing the quality as easily in 3D though. Continuity is paramount as
is some degree of smoothness. Ringing is unacceptable.

------
shultays
Here is a very old voxel project that still amazes me

[http://www.advsys.net/ken/voxlap.htm](http://www.advsys.net/ken/voxlap.htm)

It has pretty cool-looking visuals, dynamic objects and some basic physics as
well. I think the most impressive part is that the developer was able to run
this on ~10-15 year old computers.

~~~
Nition
For those that don't know, Ken Silverman also created the Build engine that
runs Duke Nukem 3D and others.

------
kazinator
This site does something weird I haven't seen before. As you scroll around
the page, without clicking on anything, it adds entries to your stack of
visited pages! You then have to hit the back button N times to get out. It's
stealthily redirecting to itself or something like that.

You can monitor this: stay on the page, and every once in a while right-click
on the back button in Firefox (or however it's done in your browser to pop up
the forward/back list). See the growing number of repetitions of the page
title in the list.

~~~
andreareina
One of the reasons I've disabled js on Medium.

~~~
paulie_a
Just another reason to not even visit medium. That site has become complete
garbage.

------
sergiotapia
What happened to that Australian company that had millions in funding for
voxel tech? They seemed quite skittish and didn't want to show all their
information in their demos. Did anything come of it?

Euclideon - 7 years ago:
[https://www.youtube.com/watch?v=00gAbgBu8R4](https://www.youtube.com/watch?v=00gAbgBu8R4)

~~~
Animats
2018 Euclideon video.[1]

The big quadrillion-voxel-world and planet-sized-world claims seem to have
disappeared, but now they can do small-scale voxel projects and handle
deformation. No info as to how, and no products on their site for non-static
voxel environments.

Voxel systems are straightforward if everything is in memory, but how do they
scale?

[1]
[https://www.youtube.com/watch?v=nr5JqYYye3w](https://www.youtube.com/watch?v=nr5JqYYye3w)
[2] [https://developer.nvidia.com/content/basics-gpu-voxelization](https://developer.nvidia.com/content/basics-gpu-voxelization)

~~~
toxicFork
The video you linked to in your [1] is of Automontage. Automontage and
Euclideon are different teams.

Euclideon's channel is
[https://www.youtube.com/user/EuclideonOfficial](https://www.youtube.com/user/EuclideonOfficial)

I myself like Automontage better because they are waaay less "hype" and more
"here's what I have".

~~~
toxicFork
Typo: It should be Atomontage

------
asdfnionio
I don't think the recent surge in interest in voxels is from people with deep
knowledge of the technology. For the vast majority of people voxels are just
"that thing Minecraft uses." That's why every indie game made in the last five
years is made of huge blocks with low-res textures.

~~~
erikb
I was really surprised that this is the only comment mentioning the obvious.
That will account for 60% or more of the current revival.

Additionally, there are probably quite a few people who didn't know anything
about voxels but carried the "pixel art" hype into the 3D world, arriving at
the same end point.

I bet both these reasons together make 90% of the current interest.

------
amelius
Perhaps someone here can answer this: if a voxel is an aligned cube in a
rectangular grid, how could one represent a mirror (or shiny surface) that's
oriented at (say) 45 degrees w.r.t. the grid?

~~~
Ygg2
If all pixels are squares in a grid, how could one represent a curved line?

The answer is similar - approximation, and lots of tiny squares/cubes/voxels.

E.g.
[https://static1.squarespace.com/static/5aa5ba4b55b02ca2a2f3b...](https://static1.squarespace.com/static/5aa5ba4b55b02ca2a2f3b593/t/5aa72daf9140b794ec5e990f/1520905671050/voxbun2.gif?format=500w)

~~~
amelius
The problem is that no matter how small you make the voxels, the reflection
will never converge to something accurate.

You have this situation:

    
    
        *
        *
        *
        ****
           *         <=== incoming light
           *         ====> reflected light
           ****
              *
              *
              *
    

Versus:

    
    
        \  | reflected light
         \ |
          \|______ incoming light
           \
            \
             \
    

PS: This makes me wonder how nature does it. Is the fact that photons are (in
a sense) "bigger" than atoms responsible for the fact that we can have shiny
surfaces?

~~~
Ygg2
Sure, but then you don't use just voxels, you use voxels organized in an
octree. That way your voxels have information about their neighbors and can
compute lighting and reflection for each such voxel.

[http://lup.lub.lu.se/luur/download?func=downloadFile&recordO...](http://lup.lub.lu.se/luur/download?func=downloadFile&recordOId=5384546&fileOId=5384547)

~~~
Const-me
> That way your voxels have information on their neighbors and can compute
> lighting and reflection for each such Voxel.

How many neighbors do you need to fetch from the tree for every pixel to
reconstruct a surface normal vector with good precision, like 0.5-1% on each
axis?

Without these normals, it'll be very hard to implement stuff like specular
lighting, and reflections/refraction/environment mapping.

I think for this reason there’re no reflecting materials on the first video
from that article (the one with the tank), and there’s high temporal noise on
the second video “Voxel Surfaces with Materials”.

~~~
Ygg2
I can't really say. This particular area isn't really my specialty, but
cursory Google searches yielded some interesting links.

From what I remember, there are techniques to cull voxels that aren't visible.
So it's possible some of them don't even matter when doing calculation. See:
[https://tomcc.github.io/2014/08/31/visibility-1.html](https://tomcc.github.io/2014/08/31/visibility-1.html)
[https://tomcc.github.io/2014/08/31/visibility-2.html](https://tomcc.github.io/2014/08/31/visibility-2.html)

From what I gather, the biggest issue with voxels is the lack of proper
hardware support, forcing you to render more in software.

~~~
Const-me
> there are techniques to cull voxels that aren't visible

Culling invisible voxels is fine. But to reconstruct a normal with sufficient
precision, I'd say you need to sample quite a large area of nearby visible
voxels. IMO the RAM bandwidth costs are prohibitive.

As far as I understand it’s impossible to do in screen space because that
would introduce artifacts near the edges of the objects.

With traditional rasterizers, accurate per-pixel normals are very cheap to
compute, i.e. some computations in the VS, then interpolation within triangle
implemented in hardware, then a single lookup from the normal map in the PS.
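
One common answer to the neighbor-count question (my own sketch, not from the thread): treat the voxel grid as a density field and estimate the normal as the normalized negative gradient, using central differences. That is six neighbor fetches per pixel; the precision asked about above would need a wider filter kernel and correspondingly more fetches and bandwidth.

```python
import math

def normal_at(density, x, y, z):
    """Central-difference gradient of a scalar voxel density field,
    negated and normalized to give an outward surface normal estimate."""
    gx = density(x + 1, y, z) - density(x - 1, y, z)
    gy = density(x, y + 1, z) - density(x, y - 1, z)
    gz = density(x, y, z + 1) - density(x, y, z - 1)
    length = math.sqrt(gx * gx + gy * gy + gz * gz) or 1.0
    return (-gx / length, -gy / length, -gz / length)

# density increasing with x: a wall whose surface normal points along -x
plane = lambda x, y, z: float(x > 0)
n = normal_at(plane, 0, 5, 5)
```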

> lack of proper hardware support

What hardware support would you need for them? GPUs are very efficient at many
kinds of general-purpose computing. For the storage, modern GPUs support
volume tiled resources; at first sight this feature of D3D 11.3 and 12.0 looks
very suitable for these voxels: [https://msdn.microsoft.com/en-us/library/windows/desktop/dn914605(v=vs.85).aspx](https://msdn.microsoft.com/en-us/library/windows/desktop/dn914605\(v=vs.85\).aspx)

------
hartror
Waiting for one of these demos to include animated characters. Not holding my
breath.

~~~
milesvp
There's been a lot of work done in the last 5 years on animating voxel
characters. You can probably do a Google search and find some papers to your
satisfaction. It is something of a challenge, though, in that voxels used as
an atomic unit can't deform. I can't remember what tricks are used, but I
believe it requires perverting the voxel engine to add exceptions/complexity
to deal with animation.

------
DonHopkins
Check out what some mad genius did with The Sims 1's original "2D sprite +
z-buffer" artwork -- it's not perfect, but it's totally flabbergasting that it
works as well as it does!

The Sims 1 in 3D:
[https://www.youtube.com/watch?v=r5D7GPQDDUI](https://www.youtube.com/watch?v=r5D7GPQDDUI)

Here's how The Sims 1 sprites work:

Each object has a set of sprite bitmaps for four different rotations, at three
different scales. Symmetrical objects can re-use and flip sprites for
different rotations. Each set of sprite bitmaps includes a color bitmap, an
alpha bitmap, and a z-buffer bitmap. The smaller scales could be derived by
shrinking the largest scale, but Maxis rendered them all directly in the 3D
Studio Max sprite exporter, which looks nicer; players can export and import
them with Transmogrifier, and edit and touch them up with 2D tools like
Photoshop or Gimp (or even program 3D tools like Blender to export them).
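
A sketch (my reconstruction, not Maxis code) of why "2D sprite + z-buffer" composites correctly: each sprite pixel carries its own depth, so overlapping objects resolve per pixel just like real 3D geometry would.

```python
def composite(frame, depth, sprite, origin):
    """sprite: list of (dx, dy, color, alpha, z) pixels.
    frame/depth: dicts keyed by absolute (x, y) screen position."""
    ox, oy = origin
    for dx, dy, color, alpha, z in sprite:
        p = (ox + dx, oy + dy)
        # alpha masks the sprite's shape; the per-pixel z resolves overlap
        if alpha and z < depth.get(p, float("inf")):
            frame[p] = color
            depth[p] = z

frame, depth = {}, {}
composite(frame, depth, [(0, 0, "chair", 1, 5.0)], origin=(10, 10))
composite(frame, depth, [(0, 0, "table", 1, 7.0)], origin=(10, 10))  # behind
```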

Making a totally new object! By Bunny Wuffles.
[http://bwsost.woobsha.com/9firstnewobject/page1.html](http://bwsost.woobsha.com/9firstnewobject/page1.html)

Transmogrifier walk-through:
[http://wiki.thesimsresource.com/images/5/54/Steve-Using_Tmog_A_Simple_WalkThrough.pdf](http://wiki.thesimsresource.com/images/5/54/Steve-Using_Tmog_A_Simple_WalkThrough.pdf)

------
tjpnz
Despite all of the controversy I'm still in awe at what Hello Games were able
to pull off using this technique in No Man's Sky.

------
Bizarro
Atomontage demos are cool, but there's always been some controversy about what
their tech can do in the "real-world".

~~~
anfilt
Well, demo videos don't tell you a lot. They claim to have some secret sauce.

However, the demo videos make you question whether they have a secret sauce at
all. When you see them delete parts of the game world, you see how thin the
ground layer is where part of it is deleted.

There are big problems with voxels. The simplest is space: as the game world
increases in size, the number of voxels grows with the 3rd power.

My big problem is that no technical details have been published. They claim to
have a pending patent on their site. However, that may not really tell us
much.

I have played with voxels myself, and octrees and other algorithms related to
voxels. The only real use voxels have seen is medical imaging. If they have
figured something out, they should have a paper in a math journal. I doubt it
though, mainly since it would involve pushing things to the limit or beyond
the limits of information theory. The only other option for such small voxels
is a massive amount of data storage, or objects with relatively homogeneous
structure, at which point it's not much better than hollow polygons.

Not only is main memory limited, but even though SSDs are getting better, when
playing with high-density voxels with a fairly high amount of entropy you can
quickly exhaust main memory. So what I did instead was just request
exposed/surface voxels and arrange the octree data structure to be linear from
the outside in, to help with memory caches. Still, all that really mattered
was the exposed voxels, and that's not much different from a hollow polygon.
Honestly, after having played with voxels myself I would say they are cool,
but you're not making a whole game out of them. If anything, they're just a
tool to be used in some parts of a game.
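
The "only exposed voxels matter" point can be sketched in a few lines (my illustration, under the usual 6-connected-neighbor definition): a solid voxel is part of the surface iff at least one of its six face neighbors is empty, which pushes storage for solid models from O(n^3) toward the O(n^2) of a hollow shell.

```python
FACES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def exposed(solid):
    """solid: a set of filled (x, y, z) cells; returns the surface cells."""
    return {v for v in solid
            if any((v[0] + dx, v[1] + dy, v[2] + dz) not in solid
                   for dx, dy, dz in FACES)}

# a filled 10x10x10 cube: only the 10^3 - 8^3 = 488 shell voxels survive
cube = {(x, y, z) for x in range(10) for y in range(10) for z in range(10)}
shell = exposed(cube)
```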

I will add another thing that gives me doubt they have anything really new:
when they start destroying their models, you see mostly the same color inside
the model. Even the brick wall that got destroyed is pretty homogeneous.

~~~
Bizarro
Looks like they just formed a real company:
[https://www.atomontage.com/blog/2018/4/5/atomontage-inc-launches-to-bring-unprecedented-voxel-technology-to-market-announces-initial-founding-team-and-advisors](https://www.atomontage.com/blog/2018/4/5/atomontage-inc-launches-to-bring-unprecedented-voxel-technology-to-market-announces-initial-founding-team-and-advisors)

One of their investors is an MD specializing in imaging:
[https://www.atomontage.com/#team-section](https://www.atomontage.com/#team-section)
That'll be their bread and butter.

I always wondered if the community would pick up VoxelQuest
[https://www.voxelquest.com/](https://www.voxelquest.com/) and run with it,
but last I looked that wasn't the case.

I'm too old to do serious graphics programming, but might be interested in a
voxel engine written in C# with a good community.

~~~
fesoliveira
The issue with VoxelQuest is that, while it was an astonishing piece of
technology, it wasn't documented in any way a normal human can understand. I
spent several hours trying to read it, and while I'm impressed with the
results, the lack of design patterns and structured code (the code is
separated into just about 10 files and is almost 50k lines long, if not more)
makes it really difficult to work with.

I dabbled in voxels myself, even wrote a Minecraft clone and all that, but
VoxelQuest was the first real voxel engine I saw working in a really nice way.
It had proper lighting, shadows and even nice effects such as reflections. It
is kind of a bummer that the dev quit the project, so much potential wasted :(

~~~
gavanwoolery
I agree with you on the poor documentation - that stemmed from me almost never
working with others professionally, and either doing a lot of solo contract
work or work where my code was mostly isolated from other systems. I've gotten
a little better about standard practices after working for a company that uses
them; there, teamwork is super-critical, as small changes can potentially cost
$10k+ in damage per hour, so understanding the code and making it testable is
more important than producing new code.

Over the past year I have been working on Voxel Quest silently, because
progress is so incredibly slow that I still do not have much interesting to
show (I also dove to the deepest depths of feature creep and tried to build a
compiler/transpiler to address a lot of problems I was having with C++ and
slow turnaround times).

To be honest, over the past month or so I got frustrated with my slow progress
and took a break to work on a simple 2D game, just to see if I had any ability
to control what I consider my greatest weakness: failure to prioritize. So far
that experiment is successful, but now I'm asking myself if making a 2D game
is a waste of my knowledge - maybe that's true, or maybe it's just shiny
object syndrome. >_<

Overall the biggest problem is that my past two years have been almost
entirely consumed by my work, renovating my house, raising a kid, and other
stuff. I typically have less than 10 hours per week to concentrate on projects
(more often 5), so I am working at about 1/8 to 1/16 the speed I had when
working on VQ full time.

Anyhow, it would be nice to work on it in greater capacity, or at least on
something similar; I just don't know where that money would come from. I am
very hesitant to ask people for money at this point, given my track record, so
I feel like my next thing has to be self-funded or funded by someone who can
truly afford to lose the money. I could easily find people who would throw
more money at VQ, but I don't think it would be a sufficient amount to survive
off of, so it would be effectively wasted.

Not sure what to do in the short term, to be honest, other than work at my
current pace. :/ I'll share my current progress publicly once there is
something interesting to show.

------
noway421
Lighting, shadows and global illumination are still incredibly hard with
voxels. And this is true for both Atomontage and Euclideon. Their demos look
really over-HDR'ed.

~~~
caffeineviking
Not really, in fact it's the other way around.

While rendering triangle meshes using rasterization (the "classic" way) is
fast (because modern hardware is built for this), doing real-time light
transport is a pain in the ass and involves a lot of inelegant (but very
smart!) tricks (e.g. shadow mapping). On the other hand, we have ray tracing
on triangle meshes, which is elegant but really doesn't perform as well as
rasterization, because you need a lot of rays to converge to a non-noisy
result.

The state-of-the-art global illumination renderers all use a hybrid approach.
Either they reduce the triangle mesh so real-time radiosity becomes viable
(like in the Frostbite 2 engine), though this suffers from not being able to
simulate specular surfaces as well; or they discretize the mesh into a voxel
representation and do Voxel Cone Tracing (VCT) (which is like ray tracing, but
instead of shooting rays you shoot "thick" rays, and get the incoming
importance over an area), which produces non-noisy results and can be done
really fast. There is a reason why engines like Unreal Engine, Unity and Godot
have started to implement VCT, and why a lot of research is being geared
toward finding efficient ways to do scene voxelization (just look at the
DICE/SEED thesis work proposals and you'll see that this is indeed true).

The advantage of having an engine that only deals with voxels is that the
techniques I've described above can be done naturally: you can use a "pure"
voxel approach instead of a hybrid, with no need to convert a triangle mesh
scene to a voxel representation. My main worry with the method shown in the
article is that doing animation (the kind not generated procedurally) using
voxels is very hard. Also, the amount of memory needed to store a voxel scene
is several orders of magnitude larger than for a triangle mesh, even if you
compress it, and worst of all, you need to store the animation as well, which
depending on how that's done can be an absolute nightmare (but I'm not up-to-
date with the latest research there; maybe there are some fancy techniques
that solve this problem).
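
The cone-tracing idea can be boiled down to a heavily simplified 1D toy (my own illustration, not how any of the engines named above implement it): as the cone widens with distance, sample progressively coarser prefiltered mip levels of an occupancy field and accumulate opacity front-to-back, so one sample stands in for many rays.

```python
def build_mips(occupancy):
    """mip 0 is the raw per-cell opacity; each level averages pairs."""
    mips = [occupancy]
    while len(mips[-1]) > 1:
        prev = mips[-1]
        mips.append([(prev[i] + prev[i + 1]) / 2
                     for i in range(0, len(prev) - 1, 2)])
    return mips

def cone_trace(mips, max_level):
    acc = 0.0
    for level in range(max_level + 1):   # farther along the cone -> coarser mip
        sample = mips[level][min(level, len(mips[level]) - 1)]
        acc += (1.0 - acc) * sample      # front-to-back alpha compositing
    return acc

mips = build_mips([0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0])
occ = cone_trace(mips, max_level=2)
```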

------
cwyers
I'm not sure what exactly that Playstation 2-looking tank in the video is
supposed to do to convince me that voxels are the next big thing, but it's not
doing it.

~~~
lawlessone
you're looking at the wrong part.

------
LouisCastle
Fun! Yeah, we stored our data as slices for space and restricted rotation to
the Y axis. Both were optimizations: since each frame of an animation was a
full model, there was no need to rotate them. The renderer could render them
from any angle though, so I still consider them voxels - more like voxels lite
than voxels plus. We also used a lot of sprite cards with z-depth and a quick
normal hack for lighting. You had to cut corners where you could back then!!

------
firedev
Hope they'll make it, always liked voxels.

------
jzl
Want to play with Minecraft-style voxels? There's a really wonderful freeware
voxel editor called MagicaVoxel:

[https://ephtracy.github.io/](https://ephtracy.github.io/)

Pretty incredible what you can do with it.

------
red2awn
Does anyone know of a comprehensive article or tutorial for learning voxels?

------
5kyn3t
Is there a "hello world" somewhere with code showing how to create such a
voxel, in the same sense as this article suggests?

Are they just creating "boxes" or "cubes" for each voxel?

~~~
garmaine
Ray tracing is one oft-used method. Store the voxels in a k-d tree and do a
directed search to find the first intersection (the first "filled" cube
intersecting the ray).
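
The comment above uses a k-d tree; for a flat uniform grid, the same "first filled cube along the ray" query is classically done with a DDA-style traversal (in the spirit of Amanatides & Woo). A simplified sketch under my own assumptions (ray origins at cell centers):

```python
def first_hit(solid, origin, direction, max_steps=256):
    """solid: set of filled (x, y, z) cells. Steps cell-to-cell along the
    ray, always crossing the nearest cell boundary first."""
    x, y, z = (int(c) for c in origin)
    step = [1 if d > 0 else -1 for d in direction]
    # ray parameter to the next boundary on each axis, and per-cell increment
    t_max = [(0.5 / abs(d)) if d else float("inf") for d in direction]
    t_delta = [(1.0 / abs(d)) if d else float("inf") for d in direction]
    for _ in range(max_steps):
        if (x, y, z) in solid:
            return (x, y, z)
        axis = t_max.index(min(t_max))       # cross the nearest boundary
        t_max[axis] += t_delta[axis]
        if axis == 0:
            x += step[0]
        elif axis == 1:
            y += step[1]
        else:
            z += step[2]
    return None                              # ray escaped without a hit

wall = {(5, y, z) for y in range(10) for z in range(10)}
hit = first_hit(wall, origin=(0, 4, 4), direction=(1.0, 0.0, 0.0))
```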

------
deorder
Why do people call a textured 3D cube on a grid a voxel (Minecraft), but do
not call a textured 2D tile on a grid a pixel?

~~~
LoSboccacc
pixel was already used so they call it texel

~~~
deorder
A texel is an element of a texture:
[https://en.wikipedia.org/wiki/Texel_(graphics)](https://en.wikipedia.org/wiki/Texel_\(graphics\))

------
timavr
The more different approaches the better. It is always worth trying, even if
it takes a while.

------
alkonaut
So voxels make some things easier than polygons do.

What is HARD with voxels? Animation? Shadow rendering?

~~~
erikb
Without some tricks, the number of calculations increases drastically.

------
LouisCastle
For Blade Runner.

------
matte_black
Voxels will be key for further VR immersion.

In a VR game, being able to look at things very close, and seeing smaller and
smaller details almost infinitely, will help go a long way toward making
things feel more real.

Think taking a break in an FPS, and kneeling down to see the blades of grass,
and then closer to see dirt and pebbles, and even closer to maybe seeing ants.

~~~
chupasaurus
> being able to look at things very close, and seeing smaller and smaller
> details almost infinitely

That is called tessellation and it's already been in games for more than a
decade.

~~~
veli_joza
There's so much misunderstanding and wrong terminology surrounding voxels.
Voxels are a way to store world data in memory, basically 3D bitmaps. They
won't give you infinite detail; just the opposite - they can only give you a
single '3D resolution'.

To actually get infinite detail, you have to switch the underlying world
representation from static data (meshes, voxels) to mathematical descriptions
(Perlin noise, fractals). Here you make an important trade-off: a mathematical
representation is harder to make changes to, because each modification would
increase the complexity of the math formula used to describe the world. You
can see a lot of deformations in voxel demos, but they are global
transformations (scale, shear).

Minecraft uses a hybrid approach. The initial world is described by math
(called world generation), but it's then serialized (sampled) as data with a
specific voxel size, which you can easily manipulate.
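
A toy version of that hybrid (my own illustration; a cheap sine-based heightfield stands in for real Perlin noise): the mathematical world description is sampled once into a fixed-resolution voxel grid, after which edits are trivial data operations.

```python
import math

def height(x, z):
    """Purely illustrative 'world generation' formula."""
    return 4 + 2 * math.sin(x * 0.7) + 2 * math.cos(z * 0.5)

def generate_chunk(size):
    """Sample the formula into a set of filled (x, y, z) cells."""
    return {(x, y, z)
            for x in range(size) for z in range(size)
            for y in range(int(height(x, z)) + 1)}

chunk = generate_chunk(8)       # serialized: the world is now just data...
chunk.discard((0, 0, 0))        # ...so edits are plain set operations
```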

Edit: another, IMO far more impressive approach is Media Molecule's upcoming
Dreams software, which uses constructive solid geometry and sparse distance
fields to describe and manipulate 3D geometry. Here's a detailed tech talk:
[https://www.youtube.com/watch?v=u9KNtnCZDMI](https://www.youtube.com/watch?v=u9KNtnCZDMI)

