
Unlimited Detail Wants To Kill 3D Cards - lobo-tuerto
http://www.rockpapershotgun.com/2010/03/10/unlimited-detail-wants-to-kill-3d-cards/
======
henning
A big part of what video cards do is not focused on models. It's focused on
lighting and shading. That's what gives games realism, something completely
lacking in these demo scenes.

Make a tech demo that blows Crysis out of the water that runs in software on
commodity hardware and we'll talk.

Hire some fucking artists if you need to. Saying "this is just programmer art"
is a copout. It's like a slacker student who says "I could get straight As if
I studied more and did my homework".

I'm not unwilling to entertain radical ideas but you need to show something
more than flythroughs with lighting reminiscent of Quake II.

~~~
dirtbox
I'm itching to find out what the modelling tools are like. At this point I
doubt they would gel with many 3D artists. Programmer designed interfaces for
creative tools are generally a pretty ropey affair.

However, I'm certainly willing to attempt to do battle with the uncanny valley
if they let me.

~~~
windsurfer
I bet they would work great. If you could tell a 3D artist that he can use as
much detail as he wants for a game... well, you'll get what they show on
ZBrush Central. All those models have hundreds of times more polygons than
would ever be accepted in a real game.

~~~
dirtbox
I use ZBrush to create displacement maps using HD geometry, which doesn't use
actual polygons in the classic sense. A lot of the ultra high poly models you
see on the turntables and such use the same technique. You can easily make a
mesh well over a billion polys that doesn't impact the system.
<http://www.pixologic.com/docs/index.php/HD_Geometry>

I doubt they'd have anything anywhere near ZBrush's flexibility and depth.

~~~
windsurfer
You could totally take a displacement map and tessellate a model with it. In
fact, that's what OpenGL 4 and DirectX 11 do in hardware.
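A rough CPU-side sketch of what a tessellation + displacement stage does
(the function name and nearest-neighbour sampling are my own simplifications,
not how a driver actually implements it):

```python
import numpy as np

def tessellate_displaced(heightmap, n, scale=1.0):
    """Subdivide a unit quad into an n x n vertex grid and push each
    vertex along +Z by a sample of the heightmap -- roughly what a
    hardware tessellation + displacement stage does."""
    h, w = heightmap.shape
    u = np.linspace(0.0, 1.0, n)
    uu, vv = np.meshgrid(u, u)
    # nearest-neighbour sampling for brevity; hardware would filter
    xi = np.clip(np.round(uu * (w - 1)).astype(int), 0, w - 1)
    yi = np.clip(np.round(vv * (h - 1)).astype(int), 0, h - 1)
    zz = heightmap[yi, xi] * scale
    return np.stack([uu, vv, zz], axis=-1)  # (n, n, 3) vertex grid

verts = tessellate_displaced(np.random.rand(64, 64), n=32, scale=0.1)
print(verts.shape)  # (32, 32, 3)
```

The point is that the fine geometry never exists on disk; it's synthesized
from a coarse patch plus a 2D map, which is why it scales so well.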

------
samlittlewood
The contacts page lists "Greg Douglas" as CTO - sounds like it might be this
guy: [http://www.doolwind.com/blog/game-developer-spotlight-
greg-d...](http://www.doolwind.com/blog/game-developer-spotlight-greg-douglas-
game-engine-programmer/)

Also, the same name appears against:

\- A C++ R-Tree ( <http://en.wikipedia.org/wiki/R-tree> ) implementation:
<http://www.superliminal.com/sources/sources.htm> (towards end)

\- A contributor to the GameMonkey script engine.

I think it is legit, but the other shoe will drop when we see which knob got
turned to max at the expense of the others. My initial guess covers pretty
much what others have pointed out: materials, lighting, compression. The
example scenes look instanced to all hell (i.e. the scene is a DAG).

Edit: Also see:

Bruce Dell's comments (and one from his dad!) in:

[http://www.tkarena.com/Articles/tabid/59/ctl/ArticleView/mid...](http://www.tkarena.com/Articles/tabid/59/ctl/ArticleView/mid/382/articleId/38/Death-
of-the-GPU-as-we-Know-It.aspx)

Comments from a 'Greg' - I assume 'Greg Douglas' - who reports having seen
the inner loop:

[http://www.somedude.net/gamemonkey/forum/viewtopic.php?f=12&...](http://www.somedude.net/gamemonkey/forum/viewtopic.php?f=12&t=419)

------
noonespecial
It sounds a little to me like the old "once the polys get small enough,
everything's a particle system" approach. Sure, you save lots of GPU by
essentially doing away with surfaces and textures (by making everything a
floating colored dot), but you then have to contend with massive storage,
manipulation, and filtering problems.

They might be at the stage where they say "we'll just make everything a
point! This'll be cake! All our renderer will have to do is figure out which
points to show," and not yet at the phase where they come to realize that a
room full of monsters will require 100 gigs of RAM.
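That worry is easy to sanity-check with back-of-envelope arithmetic (the
per-point size and counts below are made-up but plausible figures, not
anything from the demo):

```python
# Back-of-envelope: unique (unshared) points with position + colour add up
# fast for a room full of detailed models.
bytes_per_point = 3 * 4 + 4        # float32 xyz + packed RGBA = 16 bytes
points_per_monster = 500_000_000   # a generously detailed scanned model
monsters_in_room = 12
total = bytes_per_point * points_per_monster * monsters_in_room
print(f"{total / 2**30:.0f} GiB")  # 89 GiB, before any index overhead
```

Heavy instancing or aggressive compression would change the picture, which
is exactly why those are the knobs people suspect got turned.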

~~~
mullr
Indeed. Throughout the many mentions of the word "unlimited", never once was
the word "storage" mentioned, or "cache" or "memory". How about the design
toolchain? The only solution I can think of is to store all the assets as ...
polygons. Unless there is toolchain support for a CSG/procedural approach.

~~~
mullr
Another thing I wonder about their "search" system: what kind of indexing is
required? How long does it take to build, and can you re-index on the fly?
Their demos have a conspicuous lack of any kind of movement at all, much less
dynamic geometry.

And what about shading? Shading typically requires surface normals, something
that's not readily available from a mess of points in 3D.

~~~
noonespecial
Shading in particle systems (when it's addressed at all) usually involves
making a topo-map like structure out of the points and then creating virtual
polys out of the contours. You can then apply shading to groups of points
contained in the virtual polys based on those surface normals. It takes lots
and lots of cpu. Decidedly not like the "it'll run on your cell phone without
a GPU" hype presented here.

My guess is that they created a small static particle system that looks like
a 3D figure, "rotated" it by selectively displaying particles, and got all
excited.

A classic case of needing to be an expert in a field before trying to push
its state of the art: it would have saved them the trouble of chasing a dead
end that most people in said field already know is infeasible.

------
mcormier
If the Duke Nukem Forever team still existed I bet they'd be jumping on this
technology right now.

------
hristov
I listened to their explanation and it sounds a bit shady. First they used
that old salesman trick of using a British accent to appear smart, so that was
already a red flag.

And then their explanation of the secret to their technology sounded rather
ridiculous. They said that their technology was like the google search engine
or like searching for the word "money" in an MS word document (the latter was
the lamest attempt at subliminal messaging you are likely to find outside of a
political ad). Needless to say that is a very silly explanation.

Of course a lot of graphics systems have methods where they determine what
elements are supposed to be visible and render only those elements. But that
is not something that will give you "unlimited detail".

So yeah .... shady.

~~~
Luc
Come on, there's a lot to be critical about in this video, but do you honestly
think he faked a British accent to appear smart? There's a few million of them
that talk like that, you know. And the word search for money being lame
subliminal messaging? Really?

~~~
hristov
Maybe I am being unfair, but that accent seemed a bit fake.

~~~
samlittlewood
Sounded a bit like Lloyd Grossman to me, and 'data' as 'darta'. From some
digging around, I think these guys are in Australia. There are a couple of
other vowel slips which would fit.

------
Raphael_Amiard
The problem here is that we essentially have snake oil and vaporware. As a
result, all the comments are angry and quite empty too. It seems like their
idea could be interesting, but they should really do a proper presentation and
tech demo before saying they buried traditional ways of rendering 3D
graphics.

------
dirtbox
From a modelers point of view, this type of technology would be perfect,
essentially rendering redundant most of the current workflow, from battling
with poly counts to playing tricks with displacement maps and baked occlusion.
The model swapping he mentioned is pretty outdated; most engines use dynamic
LOD, which automatically reduces a model's polycount depending on your view
distance, also removing parts of a model that are under a certain size.
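The distance test behind dynamic LOD can be sketched in a few lines (the
cutoffs and `fov_scale` below are made-up illustrative numbers, not from any
particular engine):

```python
def pick_lod(distance, lod_cutoffs_px=(200, 50, 10), fov_scale=1000.0):
    """Choose a level of detail from an object's projected size: the
    on-screen height of a fixed-size object falls off roughly as
    1/distance, so compare it against pixel cutoffs."""
    projected_px = fov_scale / max(distance, 1e-6)
    for lod, cutoff in enumerate(lod_cutoffs_px):
        if projected_px >= cutoff:
            return lod                 # 0 = full-detail mesh
    return len(lod_cutoffs_px)         # beyond last cutoff: billboard/cull

print([pick_lod(d) for d in (2, 10, 50, 500)])  # [0, 1, 2, 3]
```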

I'm not sold yet, it's making some incredibly bold claims. I'm aware of point
cloud data models from back when it was included in the 3D Mark '01 benchmark
(the rotating horse statue, the test was called point sprites
<http://www.ixbt.com/video/images/3dmark2001/gf3-sprites.jpg> ), and found it
an oddity at the time, not all that visually appealing and certainly not
believable as a modelling method. It's not raytracing, although it does bear
a lot of similarities.

I'll be keeping a sceptical eye and an open mind on this one.

Anyway, here's their website <http://unlimiteddetailtechnology.com/>

Edit: Thinking about it, its biggest downfall will occur when physics are
involved. For a static scene it's ideal, but to get the branches of the trees
to blow in the wind, or to achieve any kind of real time environmental change
beyond lighting would likely cause some serious difficulties.

------
shin_lao
Isn't real time ray tracing more interesting?

[http://blogs.intel.com/research/2007/10/real_time_raytracing...](http://blogs.intel.com/research/2007/10/real_time_raytracing_the_end_o.php)

~~~
snprbob86
Their "search" algorithm _is_ ray tracing. It must be, to some extend. They
claim they only process each pixel on the screen, so they must trace a ray and
find out which voxel/point/whatever to draw. The "unlimited" part breaks down
as soon as you do any sort of reflection, refraction, or diffuse lighting. You
can't just trace one ray for each screen pixel; you need to handle branching
and scattering and the number of rays to trace becomes exponential.
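This point can be made concrete with a little arithmetic (the branching
factors below are illustrative, not measured from anything):

```python
def rays_traced(pixels, branches_per_hit, max_depth):
    """One primary ray per pixel, but if every hit spawns
    `branches_per_hit` secondary rays (reflection, refraction,
    diffuse samples), the total is the sum of pixels * b**d
    over depths d = 0..max_depth."""
    return sum(pixels * branches_per_hit ** d for d in range(max_depth + 1))

px = 1920 * 1080
print(rays_traced(px, 1, 3) // px)  # 4  -- no branching stays linear
print(rays_traced(px, 4, 3) // px)  # 85 -- branching grows geometrically
```

So "one lookup per pixel" only holds for the flat, unlit case the demos show.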

------
aw3c2
Is it sparse voxel octrees? [http://s08.idav.ucdavis.edu/olick-current-and-
next-generatio...](http://s08.idav.ucdavis.edu/olick-current-and-next-
generation-parallelism-in-games.pdf) Is it GigaVoxels?
<http://artis.imag.fr/Membres/Cyril.Crassin/>
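For reference, the core of a sparse-voxel-octree lookup is just repeated
spatial subdivision; a toy sketch of that indexing step (my own, not anyone's
production traversal):

```python
def locate(point, depth, size=1.0):
    """Walk from the root of an octree covering [0, size)^3 down
    `depth` levels, recording which child octant (0-7) contains
    `point` at each level."""
    x, y, z = point
    path = []
    half = size / 2.0
    cx = cy = cz = half            # centre of the current cell
    for _ in range(depth):
        ox, oy, oz = x >= cx, y >= cy, z >= cz
        path.append(int(ox) | (int(oy) << 1) | (int(oz) << 2))
        half /= 2.0                # descend into the chosen child
        cx += half if ox else -half
        cy += half if oy else -half
        cz += half if oz else -half
    return path

print(locate((0.9, 0.1, 0.6), depth=3))  # [5, 1, 1]
```

A sparse tree only stores children for occupied octants, which is what makes
enormous mostly-empty scenes fit in memory.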

------
brandong
But what about model animation?

Very helpful reddit comment about this demo:
[http://www.reddit.com/r/gaming/comments/bbg9c/unlimited_deta...](http://www.reddit.com/r/gaming/comments/bbg9c/unlimited_detail_the_end_of_poligon_based/c0lxc1e)

------
Raphael
It's probably a lot easier to render when it's the same model over and over.

------
emankcin
It wouldn't take 100 gigs of RAM as previously stated... they could probably
do the trick with RAID SSDs. I mean, people will pay $500 for a video card,
and recently Newegg had great 40 GB SSDs available for $99. Three or four of
those still cost less than top-end video cards, and the prices are only
coming down. Plus, a lot of end users aren't doing much with their amazing
multicore i-series processors these days... mostly going to waste in a lot of
systems.
This is vaporware until proven otherwise, but I look forward to finding out
which one ;-D.

------
Apreche
I'll point out that there is no animation whatsoever in their demo.

~~~
ks
I saw another video on their site where the grass seemed to move. It could of
course be a side effect of the low quality video version.

------
justinsb
Given we're on a collective patent kick at the moment, this is surely the
perfect example of why we have software patents. If we assume this to be real,
who here would like to have spent years working on this, only for ATI and
NVIDIA to reap all the rewards?

~~~
jrockway
ATI and Nvidia would still have to write code to make it work. That's the hard
part, not coming up with the idea.

~~~
justinsb
Are you serious? Figuring out the algorithm is absolutely the hard bit here.
When was the last time you had trouble implementing an algorithm?

I feel like I'm feeding a troll here - I had to check your profile to be sure
I wasn't. I think you're letting your dislike of patents warp your normally
intelligent viewpoints.

~~~
ramchip
It's said sometimes on HN that "actually implementing it is the real problem,
not coming up with the idea". This is referring to startups; in this domain an
idea like "let's do a site just like myspace but with feature X!" is worth
nothing, but an actual product can be worth a lot.

I don't think it's trolling; he just translated this to a domain where it
makes no sense, which is a good reminder that web startups are not
representative of all programming/business/engineering problems. And that one
must be careful not to use a phrase like a meme, without thinking about its
implications.

------
aresant
As impressive as this sounds, it's interesting that the graphics they display
are Quake 2 generation . . .

Although they call it search technology, it sounds like a very efficient
graphics codec - blending pixels, focusing on rendering the frame vs. the
scene, etc.

------
calcnerd256
"searching" a "point cloud" sounds like a winner, reminds me of Seadragon's
claims about bandwidth/screen resolution, but in 3D (and complicated with a
search problem with probably lots and lots of parameters)

------
lutorm
How do you get specular lighting in a particle system? You need a surface
normal, but there aren't surfaces.

Plus the unlimited thing bugs me. How does any algorithm that is O(N^0) work?

------
Aegean
I knew something like this would come up from the day polygon-based 3D came
out. The nice graphical games we played suddenly turned into cold, rigid
polygonal characters (e.g. the difference between Diablo I and Diablo II, or
AOE vs. Age of Kings, anyone? I can't forget the moment Griswold the
Blacksmith came moving towards me as a polygon zombie in Diablo II, when he
looked like a decent Scottish lad in Diablo I). While polygons do give that
extra feeling of depth, the individual objects look annoyingly geometric
unless there is a swarm of very small polygons.

I think this will pick up if the right conditions get in place.

~~~
fh
You have no idea what you're even talking about. Both Diablo 1 and 2 used
pre-rendered sprites. Yes, technically they're "polygonic", but in the same
sense that a Pixar movie is polygon-based: the amount of detail is only
limited by the time they were willing to give their render farms. This page
has a nice animated comparison between D1's and D2's sprites for Diablo
himself; you'll see how small the difference actually is:
<http://diablo.wikia.com/wiki/Diablo> If anything, the models they used for
D2 were more detailed, better lit, and encoded with more colors.

------
gridspy
Sounds like a good implementation of Ray-Tracing. Lighting and animation are
going to be major hurdles to overcome, followed by tools.

~~~
hristov
But if you listened to their 8 minute video they said it was not ray tracing.

------
bitwize
Blast Processing!

