
Games company claims their graphics are 100,000x better - trog
http://www.ausgamers.com/news/read/3093969
======
coffeemug
I think what they're doing is great, but I see two problems with their
presentation. First, computer rendering techniques are extremely well
understood and well researched. We've picked the low hanging fruit, much of
the high hanging fruit, and everything in between. There is no "groundbreaking
new technology" to be invented. They're converting polygons into voxels
(although each voxel is probably a sphere for cheaper computation), and using
software ray-tracing to render in real time. Since ray-tracing is trivially
parallelizable, multicore technology is just about there now: a 12-core
machine will give you around 20 FPS. The reason they can get away with an
incredible amount of detail is that ray-tracing diffuse objects is fairly
independent of the number of visible polygons in the scene.
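To illustrate why ray-tracing parallelizes so trivially, here's a minimal sketch (a single diffuse sphere, orthographic camera, one light -- all toy assumptions, nothing to do with Euclideon's actual code). Each pixel is computed independently, so farming out scanlines to workers is a one-liner; a thread pool keeps the sketch short, though real renderers would use processes or GPU cores for the actual parallel speedup:

```python
import math
from concurrent.futures import ThreadPoolExecutor

# Minimal diffuse ray tracer: one sphere, one light, orthographic camera.
SPHERE_C = (0.0, 0.0, 3.0)          # sphere center (hypothetical scene)
SPHERE_R = 1.0                      # sphere radius
LIGHT = (0.577, 0.577, -0.577)      # direction towards the light

def shade(x, y):
    """Trace one orthographic ray from (x, y, 0) along +z; return a grey level."""
    ox, oy, oz = x - SPHERE_C[0], y - SPHERE_C[1], -SPHERE_C[2]
    b = oz                          # dot(origin - center, dir) with dir = (0, 0, 1)
    c = ox * ox + oy * oy + oz * oz - SPHERE_R ** 2
    disc = b * b - c
    if disc < 0:
        return 0.0                  # ray misses the sphere
    t = -b - math.sqrt(disc)        # nearest intersection distance
    nx, ny, nz = ox, oy, oz + t     # surface normal at the hit point
    inv = 1.0 / math.sqrt(nx * nx + ny * ny + nz * nz)
    # Lambertian (diffuse) term only -- no speculars, no extra lights.
    lam = nx * inv * LIGHT[0] + ny * inv * LIGHT[1] + nz * inv * LIGHT[2]
    return max(0.0, lam)

def render(width, height):
    def scanline(j):
        y = 1.0 - 2.0 * (j + 0.5) / height
        return [shade(2.0 * (i + 0.5) / width - 1.0, y) for i in range(width)]
    # Scanlines are independent: spread them across a pool of workers.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(scanline, range(height)))

image = render(16, 16)
```

Note how no pixel ever reads another pixel's result -- that independence is the whole reason core count translates almost linearly into frame rate.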

The second problem is that a 10^4x improvement in level of detail _does not_
mean a 10^4x more aesthetically pleasing result (or, in fact, a more
aesthetically pleasing one at all). Ray tracing gets very expensive the
moment you start adding multiple
lights, specular materials, partially translucent materials, etc. It is very,
very difficult to do _that_ in real-time even with standard geometry, let
alone with 10^4x more polygons. This is why their level doesn't look nearly as
good as modern games despite higher polygon count (compare it to the unreal
demo: <http://www.youtube.com/watch?v=ttx959sUORY>). They use only diffuse
lighting and a few lights. In terms of the aesthetic appeal of a rendered image,
lighting and textures are _everything_.

Furthermore, one of the biggest impacts on how aesthetically pleasing a
rendered image looks comes from global illumination. That's also something
that's _extremely_ difficult to do in real time with ray tracing, but is
possible on GPU hardware with tricks. The trouble is, these tricks look much
better than raw polygons.

Again, I love what they're doing. Real-time ray-tracing is without a doubt the
future of graphics, but it would be nice if they were a little less
sensational about the technology, and more open about the limitations and open
issues.

~~~
tintin
And there are no moving objects in this demo. Voxel animations are much harder
than poly animations.

Then there is memory. The elephant looks great, no doubt about that, but I
think you will need a lot of space for it. On a PC this might work, but I'm
not sure it can be used on consoles.

~~~
badmonkey0001
That, and I don't know of anyone (outside of using them as particles) who has
successfully created collision models with voxels in real time. The collision
calculations of just a few of their scanned rocks and a ground plane would be
plenty complex - a whole scene perhaps even a little insane given today's
specs.

They might have to resort to polygonal collision models in the same way that
polygonal games end up using low-poly collision models (with the same pitfalls
such as moonwalking, blocked projectiles or mystery-bouncing).

~~~
albertzeyer
What about this:
[http://www.youtube.com/watch?v=Gshc8GMTa1Y&feature=relat...](http://www.youtube.com/watch?v=Gshc8GMTa1Y&feature=related)

~~~
badmonkey0001
Sorry for the belated reply. Yeah, I saw Atomontage and it's quite
interesting. Despite the realism of the track marks left and suspension
movement, the truck still seems to exhibit some unnatural "floating" feeling.
These are the very pitfalls of dealing with lower-res poly collision that I
was speaking about. Kind of an uncanny barrier for movement.

I do feel that this author is a bit further along toward something shippable
than the Euclideon folks are, though.

------
prawn
John Carmack's response:

"Re Euclideon, no chance of a game on current gen systems, but maybe several
years from now. Production issues will be challenging."

[https://twitter.com/#!/ID_AA_Carmack/statuses/98127398683422...](https://twitter.com/#!/ID_AA_Carmack/statuses/98127398683422720)

~~~
jarin
I trust Carmack's response over pretty much anyone's in matters of rendering
engines. The guy is a god of coding them.

~~~
prawn
I saw a comment on Reddit about this 6-7 hours ago suggesting that Carmack
has a horse in this race and that we might do best to presume the very
slightest of bias in his response(s).

That said, I also view his perspective in this field as tending towards
unquestionable. He's always seemed to have a pretty sparkling reputation as
well as the obvious talent.

~~~
unconed
The tech basically has to be a variant of sparse voxel octrees. Carmack
mentioned it in an interview years ago. It's reasonable to assume he's built
his own prototype, as he does with pretty much anything else.
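For anyone unfamiliar with the structure, a toy sparse voxel octree fits in a few lines. This Python sketch is just the textbook idea (not Carmack's or Euclideon's implementation): child nodes are allocated only along paths to occupied voxels, which is what makes enormous, mostly-empty scenes affordable in memory.

```python
# Toy sparse voxel octree over an integer grid of side 2**depth.
class SVONode:
    __slots__ = ("children", "filled")
    def __init__(self):
        self.children = [None] * 8  # one slot per octant; None = empty space
        self.filled = False         # True only for occupied leaf voxels

class SparseVoxelOctree:
    def __init__(self, depth):
        self.depth = depth
        self.root = SVONode()

    @staticmethod
    def _octant(x, y, z, bit):
        # Pick the child octant by testing one bit of each coordinate.
        return ((x >> bit & 1) << 2) | ((y >> bit & 1) << 1) | (z >> bit & 1)

    def insert(self, x, y, z):
        node = self.root
        for bit in range(self.depth - 1, -1, -1):
            i = self._octant(x, y, z, bit)
            if node.children[i] is None:
                node.children[i] = SVONode()  # allocate only where needed
            node = node.children[i]
        node.filled = True

    def query(self, x, y, z):
        node = self.root
        for bit in range(self.depth - 1, -1, -1):
            node = node.children[self._octant(x, y, z, bit)]
            if node is None:
                return False        # absent subtree: the whole region is empty
        return node.filled
```

A depth of 10 addresses a 1024^3 grid, yet storage only grows with the occupied surface, and a ray marcher can skip an empty subtree in a single step.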

------
saulrh
I'll say the same thing now as I said last time these guys released a video:
I'll believe it when I see them make a single blade of grass move, or when
they place a single dynamic light source and cast a single dynamic shadow.
Until then, this technology is awesome, but more or less useless.

~~~
hristov
Also, I'd believe it if they actually made a scene with different things in
it. Having millions of objects in your scene is not that hard if they are all
copies of the same object. The old demonstration showed a bunch of repeated
copies of a single object. This one showed a lot of copies as well, although
not as obviously.

~~~
extension
It is hard in that you still have to draw every poly in every copy of the
object. Instancing allows you to reuse some per-vertex calculations but it
doesn't help at all with fragments, which is where most of the power is
typically used.

~~~
wlievens
Yeah, it "only" solves the memory use problem.

------
gavanwoolery
As someone who has worked with GPUs and software renderers for over a decade:

I am pretty sure that their tech depends on a few types of repeatable data,
which they are able to cache effectively based on rotation -- in other words,
they have come up with an efficient way of querying the front-facing voxels in
a large set of data based on the resulting view matrix. Where this falls flat
is if the data is not procedural or not diverse -- as you can see in the
video, there is a bunch of the same data copied over and over. However you
compress it, such detail is not free, and I am guessing it depends on a good
deal of memory/storage to work properly.

I am not so worried about animation or dynamic lights or textures as everyone
else is. If they can render it to a buffer and get the normals/depth/UV
coordinates, the rest of the rendering can be done in screen-space, including
SSAO, deferred lighting, and similar rasterization tricks. Animation can also
be rendered on top of the scene, and intersected with the former depth buffer.
The only thing I am worried about is the size of the data set and ability to
create more diverse landscapes.
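To make the screen-space point concrete, here is a sketch of a deferred (G-buffer) lighting pass -- the names and the tiny 2x2 buffer are invented for illustration. The point is that once the renderer has written per-pixel normals, the lighting pass never touches scene geometry at all, regardless of whether the geometry was polygons or voxels:

```python
# Deferred Lambert lighting over a G-buffer of per-pixel normals.
# gbuffer: 2D grid of (nx, ny, nz) unit normals, or None for background.
def deferred_lambert(gbuffer, light_dir):
    lx, ly, lz = light_dir
    out = []
    for row in gbuffer:
        out_row = []
        for texel in row:
            if texel is None:
                out_row.append(0.0)  # background pixel: nothing to shade
            else:
                nx, ny, nz = texel
                # Diffuse term: clamp the normal/light dot product at zero.
                out_row.append(max(0.0, nx * lx + ny * ly + nz * lz))
        out.append(out_row)
    return out

# A 2x2 G-buffer: one pixel faces the light, one faces away, two are empty.
gbuf = [[(0.0, 0.0, -1.0), None],
        [(0.0, 0.0,  1.0), None]]
lit = deferred_lambert(gbuf, (0.0, 0.0, -1.0))
```

SSAO and shadow tricks work the same way: they consume the depth/normal buffers, not the original model data, so any renderer that can fill those buffers can reuse the whole bag of rasterization tricks.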

~~~
gavanwoolery
Also, for those interested in seeing a LIVE demo of very similar technology,
there are many examples, but here is one:

<http://voxels.blogspot.com>

It is not likely to be exactly the same technique, but I am guessing they are
using similar methods.

------
greendestiny
It's no accident that there is that much repetition in the models. It's also
no accident that they are all nicely tiled in power-of-two, axis-aligned
bounding boxes. Clearly these things take up enormous amounts of memory and
need to sit in some big octree-like hierarchy - so while they can instantiate
these pretty impressive leaf nodes, they can't do things like place them on
uneven ground.

So much work left to do.

------
vrode
I'm sorry, but as much as I respect people both on Reddit and Hacker News, I
wonder where all the enthusiasm comes from, when:

* demos show nothing new from a technological perspective

* the presenter sounds like a door-to-door salesperson

* as it seems to me, the only purpose of the demo is to raise hype, and somehow (I still don't understand how) they succeeded

Euclideon got financed by the Australian government.

I really hope the board took a critical approach and relied on at least /some/
technical expertise before granting these people A$2m. If they made this
decision based on the demo alone - I'm moving to Australia at once, where I
will invent a technology you have never seen in your whole life before. Ever.

~~~
Maxious
The startup grant program was this one:
[http://www.commercialisationaustralia.gov.au/WhatWeOffer/Ear...](http://www.commercialisationaustralia.gov.au/WhatWeOffer/EarlyStageCommercialisation/Pages/default.aspx)

The terms of that program are that you have to match the government funding
1:1, so they must have (or raise within 2 years) $2mil to put against the
government's. They also have to pay it back "on success" (5% of revenue once
that reaches 100k total) and are monitored closely for "fast failure" (they
are expected to succeed and repay within 2 years; they have to repay even if
they fail after 5 years). Euclideon claims to have had a 2010 funding round,
so maybe that's how they got into this program. It also says this program is
not to be used to "Prove to the applicant that a certain technological problem
can be overcome (R&D projects)", so they must have shown it as a viable
product that just needs to be packaged up for sale.

What strikes me most is anything under a Commonwealth funding agreement has to
have the words "Funded by Australian Government through the XYZ Program. An
Australian Government Initiative" in all their promotional material. Yet the
shining star of Commercialisation Australia's portfolio forgot. Ouch.

------
mullr
The last time we talked about this
(<http://news.ycombinator.com/item?id=1179970>), the consensus was that it was
a lot of snake oil and not useful for most applications.

------
mambodog
Every time this comes up I like to point people to the Atomontage Engine[1],
which takes (what I think is) a more pragmatic approach, combining voxel and
polygon graphics. Voxels are used where appropriate (eg. landscape,
destructible buildings) and polygons can be used for dynamic objects.

[1] <http://www.youtube.com/watch?v=1sfWYUgxGBE>

~~~
palish
It would be nice if that demo was more exciting. Also, the detail on the
terrain (in the first minute) leaves much to be desired.

But it's a much higher quality demo than Euclideon IMO, from a technical
standpoint.

~~~
mambodog
I think the videos on the Atomontage channel show something I can actually see
being a game, for example it shows destructible environments and terrain
deformation, editing tools, and clever use of simulated DoF and a gameplay
perspective that plays to the strengths and hides some of the weaknesses of
the tech.

I could see it being great for a hybrid-RTS game like Ground Control, or some
kind of god-game.

------
yason
Anytime there's too much bragging--or any bragging at all--before the actual
product is finished, my bogus filter lights up. And it's really hard to turn
it off later.

------
AlfaWolph
I thought it was interesting that they predicted some sort of forthcoming
schism between 'real' scene objects and 'artificial' ones. At first I thought
he was talking about characters and scenes residing on one side of the uncanny
valley or the other, which I think is a valid thought. But he wasn't talking
about that at all; he posited instead that objects will either be scanned in
from the real world and placed in-game, or created by artists as assets. I
don't think this will be the case except in games that strive for realism. It
will be more like Photoshop, where real scanned-in assets still require
artists to perfect and stylize them for your game. Until high-resolution,
tactile VR arrives, you're still running into 'Ceci n'est pas une pipe'.

~~~
georgieporgie
I thought replicated and fabricated were more appropriate terms.

------
dkersten
IMHO polygon count is not nearly as important as texture, lighting (and
therefore also shadows) and animation quality.

Their polygon count is impressive and the object detail looks awesome, but, as
other people commented here, I wonder how well it will hold up when dynamic
objects, animation and dynamic lighting are added.

~~~
jerf
General consensus last time this rolled around, which I see no reason to
change, is that once you add "dynamic objects [or] animation", it _completely
stops working_, so... it probably won't hold up at all.

The most obvious comparison comes right at the 4:00 mark, where you can see
the "polygonal" grass waving gently in the wind; then at 4:22 their "unlimited
detail" grass appears to be carved out of some sort of grass-colored rock.
Also observe how at the end they are excited about their improvement in "the
lighting", by which they mean, _the_ lighting that you will get.

------
chime
I can understand that adding dynamic objects/shadows is difficult and their
videos do not show them being capable of doing that yet. However, why can't
90% of the objects be rendered in the new static way like they do (statues,
buildings, tree trunks) and dynamic objects be added on top of it using
whatever method game devs use right now? I don't really care if the cactus is
moving or not but I sure would like to see it in much higher detail.

Why can't we take the good from both and get better results?

~~~
palish
They can be, and that's likely how it will work. Triangle animated characters
+ voxel static world.

~~~
chime
Then why is everyone here complaining about the lack of dynamic objects?

~~~
corysama
Because they describe the tech as if it were the answer to everything. It can do
specific things very well and other things not at all. If they toned down the
hype and were clear about the limitations, they would get nothing but love.

------
llambda
This has been "announced" since 2010, all the while being only "a few months"
from release. So far nothing has materialized: vaporware. Also note this is
nothing but voxels plus an advanced search algorithm for resource
conservation.

~~~
Torn
Vaporware is exactly what this is; note also that they push investment
opportunities in their 'product'.

------
ja2ke
These guys are doing something similar and are demonstrating more dynamic
lighting and destruction tech: <http://www.atomontage.com>

------
jxcole
The most unrealistic things in video games for me are faces. While increasing
polygon counts and such will certainly help, I can't help but notice that
faces will never cross the uncanny valley unless they can do something about
the lighting.

Check out:

<http://graphics.ucsd.edu/~henrik/images/subsurf.html>

So, unless they can do all this AND ray trace it at the same time, it really
won't make my game experience 100,000 times better.

~~~
starwed
Surely a lot of that is due not to lighting/shadows/rendering, but to the
incredible subtlety required of the animation?

HL2 has some pretty amazingly convincing faces, and that's obviously not
because it has the best rendering engine.

~~~
sliverstorm
I wouldn't go as far as "amazingly convincing". I think they look better after
you've been staring at the HL2 world for a while, rather than the real world.

~~~
starwed
Nah, I had that impression from the very first scene the game loads, which is
the creepy guy talking to you.

(I don't mean convincing in a "mistake-for-real" way, but in a "lets-me-
suspend-disbelief" way.)

------
Havoc
I love how they mock other game dev companies for using skyboxes (they call
them cardboard cut-out buildings in the distance @04m52) and then proceed to
do exactly the same thing in their demo. Hell, they even managed to do it
wrong (their sky texture isn't stretched & compressed appropriately to hide
the fact that it's a cube @2m27).

Too many bold claims & deceptions in that vid. Colour me skeptical.

------
goalieca
Polygon engines work nice with physics and animation. I'm trying to figure out
how they could have a dynamic world based on particles.

~~~
shabble
make sure they're also waves?

~~~
fleitz
Very good solution to needing a polygon that can interfere with itself.

------
sycren
To all those talking about physics, interaction, lighting and shadows: what
do you think of the Atomontage engine, which uses similar technology?
<http://www.youtube.com/watch?v=_CCZIBDt1uM>

------
alexscheelmeyer
Regarding the technology used. In this video:
<http://www.youtube.com/watch?v=JWujsO2V2IA> you can see lots of artifacts and
also talk of point cloud data, so it is clearly not raytracing but rather
point data rendering. All the repetition seen is because of the memory
constraints. The point data is probably preprocessed and compressed in
numerous ways, which makes it very difficult to do animations. But as others
have mentioned, even as a last resort they should be able to just use this
technology to render terrain/background and then use polygons for
moving/animated objects. This would probably also utilize current technology
better, as the polygon pipeline would not just sit there unused.

------
hartror
Still no dynamic objects in these videos. A limitation they're not discussing
or working on?

------
virtualritz
I think these guys are on to it too. W/o govt. funding and "orders of
magnitude better" marketing babble:
<http://www.youtube.com/watch?v=1sfWYUgxGBE>

------
Nicknameless
If this technology is legit, without problems with animation, lighting, etc.,
what will it mean for the cost of game content creation? You still need
artists to create the models, textures etc, and higher detail generally takes
much longer with diminishing returns. I also find that environments with
highly detailed objects also need far more objects to be convincing which
compounds the issue.

In short, the tech sounds great, but will it result in better games or just
more expensive games (and in turn, fewer games with more innovative but risky
design)?

~~~
jarin
From what I can gather, 3D game artists tend to start with high-res models and
then reduce the poly count, so it may not be THAT much of a problem. I think
it's because a lot of the modeling takes place in more organic editors now.

------
hamoid
I think a good game and a game with good graphics are two unrelated things.

What would my mom say? "This game looks a bit better than this other game"?
Or "this game looks 100,000x better than this other one"? :)

------
Troll_Whisperer
Graphics don't improve linearly with the number of pixels or polygons added.
I'd say the improvement is closer to logarithmic. E.g. 500 polygons/second is
a step better than 50, 5000 is two steps better than 50, etc.

Improving the rate by 100,000 times is only a five-step improvement (100,000
= 10^5), and that's _if_ they haven't surpassed the limits of the human eye. I
much prefer 72
frames per second to 36, but giving me a thousand frames per second is a waste
since my optic nerve can't process most of them.
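For what it's worth, the step arithmetic is just a base-10 logarithm (assuming, as in the 50 -> 500 example, that each "step" is a tenfold increase in detail):

```python
import math

# If perceived quality grows with the log of raw detail, each tenfold
# increase in detail buys exactly one "step" of visible improvement.
def steps(ratio, base=10):
    return math.log(ratio, base)

improvement = steps(100_000)  # 10^5, i.e. five tenfold steps
```

By this measure a 100,000x jump in raw detail is only about five steps of perceived improvement, which is why the headline number oversells what you'd actually see.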

------
lobo_tuerto
Minecraft creator does the math and says:
<http://notch.tumblr.com/post/8386977075/its-a-scam>

------
monochromatic
Color me skeptical. However, I'd like to pose a question to HN:

If the claims from this video are legit, is this level of innovation deserving
of a patent? (Yes, a _software_ patent.)

~~~
stickfigure
Maybe; you'd have to be an expert in rendering engines to know. Too bad none
of those work for the patent office, or are ever likely to.

~~~
monochromatic
[Citation needed.]

------
jarin
20 FPS in software is pretty impressive. I wonder (if this is real and if it
takes off) how long it will take to see hardware voxel acceleration.

~~~
mambodog
They're here already. NVIDIA Research paper describing one approach:
<http://www.nvidia.com/docs/IO/88889/laine2010i3d_paper.pdf>

Open source implementation:
<http://code.google.com/p/efficient-sparse-voxel-octrees/>

------
Dramatize
I think their point was that if the same amount of research and money went
into improving their technology, then future games would be amazing.

------
plasma
I'm hoping they show a nicer looking demo (instead of 'programmer art') to get
a better idea of how much better it would be.

------
rektide
Graphics are only a very small part of the advantage a truly volumetric world
could present. A game that captured wind currents, scent, the EM spectrum...
these are just some of the attributes air normally carries that most games
ignore but that a volumetric system might be used to capture.

------
mikkom
It's kind of interesting until you realize that all their examples are static -
there is no animation anywhere and there is a very good reason for that.

Yes, they might implement some kind of a hybrid approach but then they will
lose all the things they hype about.

------
andrethegiant
I remember when the first video came out, they said their technology worked
"like Google" to find the appropriate pixels to display on the screen. No
further description of the technology. Seems too vague to be true.

~~~
siphr
I think it is true, but it is hype. Assuming it is based on voxels, the
earliest game engine to have used the technology (to my knowledge) is probably
Blade Runner, circa the late 1990s. The hardware of that era, of course,
dictated the quality of the tech. By extrapolation I would say this is very
much possible, but it may not be such a huge deal as it is made out to be. The
results, however, could be amazing if widely adopted by the industry.

------
keyle
I'm in Brisbane. Mmm I'm nearly tempted to apply for a job there.

~~~
dstein
I'm not sure if you can ask for a better job than doing computer gaming
research on the government's dime.

------
tlrobinson
Oh, is this for real? I came across the video somewhere earlier today, but
stopped watching after I got to "we give computer graphics _unlimited
power_".

------
zmonkeyz
The big problem that I see is that they don't show any animation. A camera
floating through a static scene can only get you so far in video games.

------
malkia
Honestly - it's quite ugly - it looks like a game from 8 years ago. I'm not
sure what they're drinking.

For sure their graphics are not 100,000x better.

------
leif
I just want to say that this is probably the best company name I've seen yet.

------
siphr
So when they say atoms do they mean voxels or is it something else?

------
Maro
I don't know much about current 3D graphics technology, but the fact that
they're trying to disrupt it is really cool. They remind me of a crazy
inventor; let's hope they've got something =)

------
cpeterso
But will the games be 100,000x more fun?

------
crizCraig
Poll: Do you think this will result in _real_ games with 100,000x better
graphics? [http://www.wepolls.com/p/1653402/Is-Euclideans-graphics-
tech...](http://www.wepolls.com/p/1653402/Is-Euclideans-graphics-technology-
really-going-to-allow-for-games-with-100,000-times-better-graphics)

------
WoundedMarlin
If this takes off, the whole graphics industry is going to get so much
better. I am talking about everything from movies to games. I mean, think if
FIFA started to use this in their games: you could see a shoelace move or
individual hair strands moving.

I hope they get this out to market sooner rather than later.

~~~
SoftwareMaven
Even if polygons weren't the limiting factor on the shoe laces or hair, the
cost of computing the physics will limit how detailed things get. Need more
cores...

~~~
palish
Not really. Physics is computed against low-resolution polygon
representations.

You don't want to compute physics against your art representation, because
it's typically over-described.

~~~
SoftwareMaven
Of course. My point is that polygons aren't the limiting factor today, so
having unlimited polygons wouldn't "fix" that. The GPU typically determines
polygon budget, with the CPU determining physics and AI.

Of course, you can dump physics onto the GPU, but that will cost you polygons.
I guess that could, in theory (if you really had _unlimited_ atoms), give you
more cycles for physics.

