
Epic Games shows jaw-dropping graphics for next-generation consoles - evo_9
http://venturebeat.com/2011/03/08/epic-games-shows-jaw-dropping-graphics-for-next-generation-consoles-video/
======
squidsoup
As beautiful as this demo is, I think the push for realism has harmed the game
industry more than it has moved it forward. One of the few developers in recent
memory that has managed to focus on both visual fidelity and gameplay is
Bioware with the Mass Effect series; most developers aren't this successful
and end up producing beautiful but lifeless, dull games.

Also consider that producing art assets for an engine like this is a huge
undertaking and adds a substantial cost to development. Publishers are less
likely to take risks on innovative concepts given film-like budgets,
increasing the tendency to stick to formulas that they know offer a good ROI.
The "hollywoodisation" of the games industry hasn't had much to offer me as a
gamer, and increasingly I find more enjoyment in wee indie offerings like
Minecraft over the AAA titles.

~~~
rorrr
I bet more of the art will be procedurally generated.

~~~
icefox
Can anyone in the game industry comment on this? What type of problems are game
companies paying for? I have seen at least one company that sells procedurally
generated trees and am curious how good an idea this is. From a
customer-oriented point of view, what are the problems game companies have today?

~~~
daeken
I'm not in the industry at all, but I've followed it for a long time and done
a good deal of random game dev. The biggest pain point is how much it costs to
produce a large world these days. As it stands, this is not a technical
problem, but a business one: artists are expensive and art for modern games is
insanely detailed.

Procedural generation is, IMO, the way to go here, but it's an unsolved
problem. I've been thinking about this for a long time and I've had two
specific ideas in mind for how to ease the cost of content development.
Please, steal these!

Idea one is for the creation of individual assets. You start with what is, in
effect, a block of clay. You (programmatically) cut away at it, apply
materials to it (which could, in turn, do things like cut a wood grain into
it), add on to it, etc. So let's say you want to build a sign generator for
shops in your town. You'd start with a block of clay roughly the size of your
sign, cut it into the (2d) shape you want, emboss the logo into it, then apply
a wood material to the sign, and paint the raised areas. This allows you to
quickly create unique assets in your world.
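The idea above can be sketched in a few lines. This is a toy illustration, not any real engine's API: the "clay" is a 2D grid of heights, and every name (`make_block`, `cut_outline`, `emboss`) is hypothetical.

```python
# Hypothetical sketch of the "block of clay" idea: an asset starts as a
# uniform slab, and operations carve, emboss, and shape it.

def make_block(width, height, depth=10):
    """Start with a uniform slab of 'clay' (a 2D grid of heights)."""
    return [[depth for _ in range(width)] for _ in range(height)]

def cut_outline(block, keep):
    """Cut the slab down to a 2D shape: keep(x, y) -> bool."""
    for y, row in enumerate(block):
        for x in range(len(row)):
            if not keep(x, y):
                row[x] = 0  # carved away entirely

def emboss(block, mask, amount=2):
    """Raise the surface wherever mask(x, y) is true (e.g. a logo)."""
    for y, row in enumerate(block):
        for x in range(len(row)):
            if row[x] > 0 and mask(x, y):
                row[x] += amount

# Build a 16x8 shop sign: knock off the four corners, then emboss a
# horizontal band standing in for the shop's logo.
sign = make_block(16, 8)
cut_outline(sign, lambda x, y: not ((x in (0, 15)) and (y in (0, 7))))
emboss(sign, lambda x, y: y == 3)  # raised horizontal band

raised = sum(h == 12 for row in sign for h in row)
print(raised)  # 16 cells along the y == 3 stripe are raised
```

A material pass (wood grain, paint on the raised areas) would just be further operations over the same grid; the point is that each asset is a short program rather than hand-sculpted geometry.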

Idea two is for the high-level creation of a world. You have an editor where
you can create a landscape. You start by setting the size of the area, drag
points up and down to affect terrain, etc -- all stuff that's been done
before. The key, though, is in world brushes. For instance, you select a
'forest' brush. This brush, when painted onto an area, creates trees, rocks,
bushes, etc as appropriate based on elevation, angle of the ground, etc. So
you paint part of your area with a forest brush, then go in and paint in a
river, paint in paths, etc. Once you've created the high level look, you can
go in and add features like signs to cities, unique quest/story-related
assets, etc. This allows you to create large areas with completely unique
geometry very quickly and cheaply, without sacrificing quality; after all,
you're starting off with unique objects and then adding specific touches.
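A minimal sketch of a "forest brush" along these lines, assuming a heightmap terrain. Everything here (the slope heuristic, the thresholds, the prop tuples) is made up for illustration:

```python
# Hypothetical "forest brush": painting a region scatters trees and rocks
# based on local elevation and slope, instead of instancing hand-placed copies.
import random

def slope(height, x, y):
    """Crude slope estimate from neighbouring heightmap samples."""
    h = height[y][x]
    nb = [height[j][i]
          for i, j in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1))
          if 0 <= i < len(height[0]) and 0 <= j < len(height)]
    return max(abs(h - n) for n in nb)

def forest_brush(height, region, seed=0):
    """Return generated props for every painted cell that qualifies."""
    rng = random.Random(seed)        # seeded: the same stroke yields the same forest
    props = []
    for x, y in region:
        if height[y][x] < 1:         # under water: nothing grows
            continue
        if slope(height, x, y) > 3:  # too steep for trees; drop a rock
            props.append(("rock", x, y))
        else:                        # each tree still gets unique parameters
            props.append(("tree", x, y, rng.uniform(0.5, 1.5)))
    return props

# A tiny 4x4 terrain: a flat meadow with one cliff cell (9) and one pond cell (0).
terrain = [[2, 2, 2, 2],
           [2, 2, 9, 2],
           [2, 2, 2, 2],
           [0, 2, 2, 2]]
painted = [(x, y) for y in range(4) for x in range(4)]
props = forest_brush(terrain, painted)
kinds = [p[0] for p in props]
```

Because the generator is seeded, the artist's brush stroke is reproducible, yet every tree carries its own parameters, which is exactly the "unique geometry, cheaply" property described above.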

The latter idea has been implemented to a certain extent in the past, but it's
been entirely via instanced geometry. The problem there? _Everything looks the
same_. Procedural generation lets you get a unique world, then you can go in
and actually make it feel real.

~~~
LiraNuna
You bring up good points, but you forget one thing - artists love control. Take
that away from them and they cry like babies who lost their candy.

Procedural art is really cool if you can think about your asset as a
programmer - but you need to remember artists do not think like us. If you are
willing to replace your artists with tech artists, though, that's a different
story (note: those people are normally rare, and tend to stay in one field,
programming shaders).

"The problem there? Everything looks the same." -- Yes, that's exactly the
consequence artists want, and as mentioned, they will cry if their
flexibility is revoked by tools they are not familiar with (in-house
model/texture generators).

While you as a programmer think of a texture of "noise and color", artists
think about it as brush strokes, materials and emotions.

~~~
wladimir
Procedural generation doesn't automatically mean that the artist has no
control. There will still be inputs guiding the generated content. Artists
will still have the final say as to what will get into the game.

You have to make the GUI artist-friendly, though; inputting some numbers and
hitting a button won't do. The tree generation company (SpeedTree) is actually
a good example. Their program is very easy to work with, even for artists, and
you can generate a large range of realistic-looking trees.

Also, with procedural generation you can achieve a _more_ varied look. For
example: if artists have to design every tree, you get a few specimens that
you place everywhere. With automatic generation, every tree can be different.

~~~
LiraNuna
"Procedural generation doesn't automatically mean that the artist has no
control" - It means they have _less_ control, and they always bitch and moan
about that.

~~~
wladimir
Sure, some people will always bitch and moan about change. But look at it
positively: it allows them to do more impressive things in less time. If they
can make a good-looking forest in a day instead of three months, they can do
more interesting things than hand-designing trees.

------
malkia
The problem is how to handle worst-case scenarios. With pre-rendered movies that
might or might not be a problem, and even if it is, it's a production one (e.g.
you can't predict for how many days you would be rendering your movie).

But for a game - this means dropping the frame rate, making it visually
unappealing.

That's why ray-casting would never pick up for real-time games (Unreal most
definitely is not doing any ray-casting, I'm just giving it as an extreme
example).

The other problem is content creation. Current games are 7-15 GB of compressed
data - half of it textures, the rest models, animations, etc. - and all of that
has to fit in memory (and be loaded into memory).

Even with the fastest drives, you end up spending a lot of time loading (or
streaming).

Then if the thing looks so real, you start feeling that something ain't right
if your gun can't really destroy every piece of a building... And allowing
that in the engine later makes things worse, as not much pre-processed data can
be reused (static lighting, BSP geometry, etc.). Or you do it (somehow) in
real time.

Also, this complicates the game "AI" - there is no real AI in an FPS - it's
simply too goddamn hard. Just think about the cover system, and how laughable
it would be, with destructible surfaces, to see the "AI" guy trying to hide
behind something he thinks is a fine cover point.

Less realism, more constrained world, and better gameplay are the key
components to good gaming... Not fancy graphics all the way :)

~~~
johngunderman
> That's why Ray-casting would never pick up for real-time games (Unreal most
> definitely is not doing any ray-casting, I'm just giving it as a an extreme
> example).

I'm not convinced that you are correct here. Although ray-tracing has not been
a good option in the past due to processor limitations, we're reaching the
point at which it may become feasible. Because ray-tracing scales linearly
with the number of cores you use, modern CPUs are fairly well equipped to work
with it, albeit at sub-optimal framerates. See the following (from 2007) for
reference: <http://www.q4rt.de/>
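The "scales linearly" claim follows from each pixel's ray being an independent computation. A toy sketch of that independence (a single sphere hit-test standing in for a real ray trace; all names here are made up):

```python
# Toy illustration of why ray tracing parallelizes so well: every pixel is an
# independent computation, so the image can be split freely across workers.
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT = 64, 48

def trace_pixel(x, y):
    """Stand-in for a real ray trace: hit-test one ray against one sphere."""
    # Orthographic ray through (x, y); sphere of radius 20 at the image centre.
    dx, dy = x - WIDTH / 2, y - HEIGHT / 2
    return 255 if dx * dx + dy * dy <= 20 * 20 else 0

def render_rows(rows):
    return [[trace_pixel(x, y) for x in range(WIDTH)] for y in rows]

# Serial render.
serial = render_rows(range(HEIGHT))

# Parallel render: split scanlines across 4 workers; no shared state needed.
chunks = [range(i, HEIGHT, 4) for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(render_rows, chunks))

# Reassemble the interleaved scanlines and check the images match exactly.
parallel = [None] * HEIGHT
for chunk, rows in zip(chunks, results):
    for y, row in zip(chunk, rows):
        parallel[y] = row
print(parallel == serial)  # the two renders are identical
```

Rasterization, by contrast, batches work per triangle rather than per pixel, which is part of why it maps onto today's GPUs so differently.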

Outside of this, consider that the only reason polygon-based graphics perform
at their current level is the existence of discrete graphics cards optimized
for the necessary matrix calculations. If we were to develop discrete cards
for ray-tracing, we could potentially see amazing results. For example, in
2009 IBM developed a computer that could run full-scene ray-tracing at 1080p
averaging 90 FPS. While I do not know how complex said scene was, I'm
optimistic about the future of ray-tracing.

~~~
malkia
Just because it scales with the number of cores doesn't mean it will keep the
fps in a reasonable range given a random camera position and orientation.

Take a look at Intel's raytracing demo: when you go close to the lamps
(shiny, bulby spherical objects with lots of reflections) you get a lot of
slowdown (using the same number of CPUs).

That's what I'm talking about. Yes, you can tweak it, but then it loses its
purpose.

~~~
Devilboy
That's easy to work around, you just stop computing once you hit your time
limit on that pixel. It'll be good enough.

~~~
cma
If every pixel takes too long you get nothing; plus you wasted computing power
to find out.

A different solution is to cast a sparser set of rays as things slow down, and
then use compressed sensing (which has a runtime complexity independent of
scene complexity) to integrate the full scene. Then as the scene gets more
complex, you get what (visually) amounts to a heavily compressed jpeg.
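A much-simplified sketch of the sparse-ray half of this idea. Real compressed sensing would solve a sparse optimization problem to reconstruct the image; here a crude nearest-sample fill stands in for that step, just to show the "heavily compressed" degradation pattern:

```python
# Crude stand-in for sparse ray casting: cast only a subset of rays, then
# reconstruct the missing pixels (here: nearest-sample fill, NOT real
# compressed sensing) so quality degrades gracefully instead of the frame
# rate collapsing.

def full_render(width):
    # Stand-in "scene": a smooth 1D gradient of pixel values.
    return [x * 255 // (width - 1) for x in range(width)]

def sparse_render(width, stride):
    """Cast only every stride-th ray; fill the rest from the nearest sample."""
    samples = {x: x * 255 // (width - 1) for x in range(0, width, stride)}
    return [samples[min(samples, key=lambda s: abs(s - x))] for x in range(width)]

exact = full_render(16)
approx = sparse_render(16, stride=4)   # 4x fewer rays cast
err = max(abs(a - b) for a, b in zip(exact, approx))
print(err)  # worst-case error stays bounded while casting a quarter of the rays
```

The trade is explicit: ray count (and thus cost) drops by the stride factor, and the error shows up as blockiness rather than dropped frames.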

------
mortenjorck
Too bad all these beautiful effects will be covered up by the foreshortened
perspective of a gun in one corner and a health bar in the other corner.

I kid, but honestly, seeing this doesn't excite me anywhere near as much as a
preview of whatever the team that made Mirror's Edge has in store next for
_current_ -generation consoles.

~~~
potatolicious
DICE made Mirror's Edge, but it was a commercial flop, so we are unlikely to
see a sequel anytime soon :( Personally I loved it.

Slightly OT but interesting: from what I hear from people at EA, it was a huge
internal political war that year between the people who wanted to build more
sequels and licensed franchises, and the people who thought that EA would die
if it didn't innovate on its own IP. The latter won and were given a chance to
prove their worth - the two main titles to come out of that were Dead Space
and Mirror's Edge. From what I hear management considered this direction a
flop (Dead Space, while popular, and spawning a sequel, was not the sort of
hyper-blockbuster it needed to be, and ME was an unquestionable flop) and now
EA is culturally back to the sequel-mill mentality.

A sad opportunity that didn't pan out :( There are precious few new IPs being
worked on at EA right now.

~~~
gamble
The economics of the games industry mean that you can't turn a profit on AAA
games without sequels. The new IP push was about reducing EA's dependence on
sports titles and externally-licensed IP. (Also, to a degree, an attempt to
reinvigorate EA's corporate culture) It was always in the game plan that the
successful new IP games would receive plenty of sequels.

I wouldn't say it was a _total_ failure. Mass Effect and Dragon Age are pretty
good. =)

~~~
maximilianburke
This is very true. The first title establishes buzz about the game; the
second (and third, etc.) often make more money because the brand has been
established and demand created. There are a large number of people who don't
want to spend the whole $60 when a new IP is released and will instead wait
for it to be discounted, or buy it used.

EA circa 2003/2004 was pumping out yearly sequels for titles that should have
long been put out to pasture. This has changed quite a bit in the last 5 or 6
years with IPs like Skate, Mass Effect, Dragon Age, Battlefield (and Bad
Company), and Dead Space, where the sequels aren't rushed out the door without
much thought, not to mention other new projects like Dante's Inferno and
Mirror's Edge.

------
flashingleds
I get the impression (albeit as somebody with nothing to do with the industry)
that studios aren't very excited about another generation of hardware, because
it already takes a huge amount of resources to make a current-generation
blockbuster, and profit is not always guaranteed. The demo is completely
amazing and I loved it, but if it "took about three months for 12 programmers
and artists to build" this 2-minute scene, surely this is beyond the limit of
what a studio will undertake for a full-length production? Or am I
underestimating the horsepower of the studios today?

~~~
gamble
It's safe to say that there's little appetite for a new console generation
right now.

As in the movie industry, the middle ground in the games industry has been
disappearing. Your best shot at turning a profit is either to make small bets
on mobile/indie games, or go full-bore for a AAA title with a budget in the
double or triple digit millions. Indies don't need another generation of
consoles; AAA developers don't want to rebuild all the tech they created for
the 360/PS3.

The assumption at the start of the current generation was that if you invested
in your tech fairly early on, you'd be able to get a trilogy out of it before
the subsequent generation of consoles made the tech obsolete. (This is why so
many 360/PS3 IPs were structured as trilogies) Now that it's obvious the
current consoles aren't going away anytime soon, developers are looking to
amortize their tech over a few more titles before the hardware changes again.

~~~
cageface
I'd like to see the current crop of consoles in a smaller, cooler, quieter
form factor.

------
bcrescimanno
Not to put too fine a "trolling point" on it--but after playing Final Fantasy
XIII I'm inclined to think game developers should take a step _back_ in terms
of their graphics aspirations in order to continue delivering quality games.
It's been speculated (and supposedly admitted to--though being at work I don't
have time to dig up a link) that the incredibly linear world of FFXIII was a
direct result of being overambitious with the graphics. There just wasn't time
to build a more "complete" world to the graphical standard they set.

I agree with much of what else is said here about it being a "perfect
conditions" demo that doesn't have to deal with any unknowns--and I also echo
that it looks exceedingly cool. All that said, great graphics does not a great
game make!

~~~
kevingadd
Final Fantasy 13's failings were mostly due to a horribly dysfunctional studio
environment and a broken development process. They released a surprisingly
honest postmortem of the project in a recent issue of Game Developer (you can
find a summary of it at
[http://www.gamasutra.com/view/news/30640/Exclusive_Behind_Th...](http://www.gamasutra.com/view/news/30640/Exclusive_Behind_The_Scenes_of__Square_Enixs_Final_Fantasy_XIII.php)
), and 'overambitious with the graphics' pales in comparison to the other
things that went wrong. My favorite tidbit: They went through a large portion
of the project with the game _not in a playable state_ until they realized
they needed to build a demo, and only then did they finally get the game
playable and start testing it.

Perhaps one might say that developers are aiming too high in every respect
(including graphics), and that's leading to flawed projects. But in this
specific case, I doubt the game would have turned out good even if the
graphics weren't given the effort they got. In many cases it's possible for a
studio to easily scale up development when it comes to graphics, because with
strong enough art direction, you can have 50 or 100 artists working on assets
for different parts of a game and have a reasonable chance of tying all that
art together at the end. Unfortunately, engineering teams don't scale nearly
that well...

~~~
archangel_one
Thanks for the link, interesting read. Reminds me of some similar comments
made by Warren Spector after Deus Ex - that it was a big wakeup call to them
once they had a single playable level, because you could suddenly see that
some aspects of the game didn't work as they were.

Possibly this suggests pg's minimum viable product + iteration strategy as
being just as applicable to game development too, although I imagine the
"minimum viable product" for FFXIII is a hugely significant amount of work in
itself.

~~~
icefox
For FF you have a number of different game "engines". The fighting engine, for
example, can be a standalone component; the same goes for running around in
the world / on the track. There is always a minimum viable product.

------
Semiapies
_"Willard said that one of the effects is called bouquet depth of field_ "

Yeesh. It's "bokeh".

<http://en.wikipedia.org/wiki/Bokeh>

------
aphyr
Realtime graphics are finally blurring into stylized film. I expect in another
ten years or so we will, for all intents and purposes, have the horsepower to
render photorealistic scenes. The bigger problem may be finding ways to create
this content without insane amounts of work. Cloth mechanics, particle/hair
systems, realtime physics, constrained IK/semirigid body solvers, fractal
elaboration, fluid dynamics, procedural character/animation generation... all
of these problems are going to be really interesting as we start hitting the
limits of what humans can imagine and express to a computer.

~~~
emehrkay
It seems like the more horsepower these consoles/video cards have, the more
opportunity there is to create tools that generate part of the graphical
experience - i.e. a toolset that handles creating realtime dynamic forests or
cities or hair, etc.

~~~
nathanlrivera
Auto-generated content is the future. Look at how good Minecraft is. The
terrain in that game is all procedurally generated.

------
almond
Yes, the skin rendering was very nice, and the lighting was great. Which
brought into sharp relief how little progress has been made on procedural
character animation. When the welder-guy walked across the roof and stopped on
the edge, you could see a painfully clear walking loop and outro animation
back to the standing position -- very familiar from the earlier Unreal engines
(except for the residual swinging of the arms). It is jarring to see such
mechanical movements in a demo where the quality of the graphics is so
realistic. Epic should use Euphoria or some other engine for procedural body
movements. The act of walking to the ledge of a roof to inspect a fight below
shouldn't look like any other kind of walk.

~~~
potatolicious
Agreed. There's a distinct lack of R&D in what is IMHO more important than
graphical fidelity - conveying emotion and motion in games. Valve made waves
with its (honestly somewhat primitive) facial animation system, and not much
has been done with it until now (LA Noire has done something really cool with
it and is coming out soonish). It's amazing how much effort we'd spend
crafting the perfect artillery-shell explosion while the CO yelling orders at
you moves like a mannequin.

We've made a lot of gameplay progress into action-RPGs (Mass Effect and the
like), and I for one would like to play a game where facial expressions and
body language actually _mean_ something (e.g., the character is lying, but
instead of smacking you over the head with it, the game can be subtle about
it).

------
jacques_chester
> The demo took about three months for 12 programmers and artists to build.

So using the new engine, the world-leading experts on Unreal Engine NG - Epic
- had to spend 1 person-year _per minute_ of video.

Like Hollywood, this can only make hit-driven game design more common on
consoles and PCs.

~~~
aphyr
Admittedly, this is a tech demo which was designed to pack in the maximum
amount of variation into the shortest possible time frame. Game development
typically reuses the same carefully-crafted assets (particle systems, models,
textures, shaders, even level geometry) in as many places as possible.

~~~
malkia
Yet... No GAME was actually shown here.

Yes, this was rendered with the engine, but with the same camera angles,
people, etc. you can do lots of tricks that you can't when doing interactive
stuff... ahem, GAMES:

- You can cull your geometry offline (you know where your camera goes in and out)

- You can prefetch certain calculations that are coming up soon (you know what's going to happen)

- You can implement a correct motion blur filter, since you know where you are moving

- There's probably no physics - they might as well prerecord it

- In fact you can capture where each vertex, color, and texture channel moves, and just replay it

- And probably many more.

I'm not saying it's not cool, and I love good cutscenes in games (Metal Gear
Solid), but people seem to think that

    CUTSCENE RENDERED is the same as GAME RENDERED. No, it's not.

------
Dn_Ab
Wow, I could actually read the emotions on the guy's face, and it didn't feel
too uncanny, to me at least.

But calling this real-time is, I think, stretching things a bit, even if it's
using the game engine and not prerendered. The demo and assets were probably
polished and optimized for that machine, angle, lighting, and scenario. And it
was heavily scripted. I think an engine flexible enough for more general
gameplay - with more calculations, interactions, assets and, erm, unmasked
NPCs - would be a lot less smooth and certainly less realistic looking. This
seems like one of those theoretical optimums whose requirements just don't
line up for most practical purposes.

Not trying to downplay how cool this is though. It will still be and is a
visually and computationally amazing feat.

~~~
patrickk
I remember seeing pictures around 2004/2005 of what new games would supposedly
look like on the old 'next gen' (read: current gen) consoles. The actual games
came nowhere near the hype (games like Madden and FIFA). I seriously doubt
that the next 'next gen' consoles will have graphics like this, for the
reasons you outline. I'd love to be wrong, though. You could easily imagine
this technology being used in movies - I thought Jeff Bridges' digitally
altered younger version in Tron Legacy had such a case of the "uncanny valley"
going on that it detracted from every scene he was in.

------
ENOTTY
Why is everybody complaining about how long it took Epic to create this demo?
This is a next-generation engine, so the team working directly on the demo was
probably working concurrently with engine development. They probably
encountered multiple engine bugs every day.

This engine isn't meant for release today; it's meant for release in a few
years. Any pronouncements you make about how this engine is too X for today's
Y are going to be invalid.

------
gamble
It's going to be a while before there's another console generation. (In a
sense, the Kinect and PS Move were an attempt to refresh the consoles without
changing the base hardware) I haven't even heard rumors of new console
development yet, so I'd guess they're at least 2-3 years out.

~~~
iaskwhy
Not sure if this means anything but Sony has been releasing a new generation
every 6 years (1994, 2000, 2006, 2012?).

~~~
w1ntermute
Well, when did we first start hearing news about the PS 1-3, relative to
their release dates? The PS3 was announced on May 16, 2005 (wow, that's so
long ago!) and was released in Japan on November 11, 2006. That means that if
a PS4 is coming out on schedule, we should be hearing news very soon.

~~~
Qz
Sony has made statements to the effect that they consider the PS3 a 10-year
console, so time to start waiting for an announcement would be 2015 or so.

~~~
Xarnon
Wasn't the PS2 also a '10-year console'? Perhaps there was no _official_
announcement that it was, but it technically nearly was/is! (The PS2 was
released in 2000 and was still going in 2009!)

------
MicahSeff
I've long been a supporter of the idea that gameplay comes before graphics.
Though it's certainly neat to see a demo looking so dazzling, it really
doesn't say all that much to me about the future of the games industry. It's
long been clear that over time games will continue to look progressively
better.

This is why I was actually so happy to see Nintendo abandon the pursuit of
ever-prettier graphics in the hope that gamers would be drawn in by the
innovative gameplay ideas that propelled the Wii to the top of the console
heap this generation. Though the Wii has somewhat fallen short of my
expectations, I am still impressed by Nintendo's decision.

This generation has been the death knell for countless (up-til-now) successful
companies. The higher development costs of making prettier and prettier games
have meant that a single flop can tank a company. It's exactly because of
this that I believe we are seeing the rise of mobile and social games: they
are cheap not only for the consumer to pick up, but also for the developer to
produce in the first place.

------
Qz
To give a bit of historical context, this is the very same company that put
out such classic shareware hits as Jill of the Jungle, Jazz Jackrabbit, and my
personal favorite One Must Fall: 2097.

~~~
johnyzee
Someone else knows One Must Fall! I played that for months with friends on a
386.

It is easy to grouch about how gameplay rules over shiny effects, but I am
quite impressed and uplifted that the company that made those games is still
around and on the cutting edge of games.

~~~
Qz
I used to rack up a bit of a phone bill secretly playing One Must Fall
multiplayer via some BBS. I wonder if I can find it and load it up in
DOSBox...

------
joshbert
<http://www.youtube.com/watch?v=RSXyztq_0uM>

Better version. I could appreciate it more.

------
narrator
Why do they always have to do these shots at night? Is it because rendering
far-away objects in the background is just too much for the engine, and they
have to compensate by setting these things at night so the background behind
the characters is black, or something close to it?

~~~
mustpax
It's actually the other way around. Night shots are harder to pull off well,
since the effects of individual light sources are much more pronounced. In
this video, they are trying to showcase a number of different light sources,
reflections, and diffusion effects, which require a dark setting to stand out.
Outdoor day shots get most of their "character" from a spot source at infinite
distance (parallel rays), which is much less computationally demanding.
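The cost difference can be sketched numerically with simple Lambert shading (a toy model, not any engine's actual lighting code): the sun's direction is a per-frame constant, while each point light needs a per-fragment direction, distance, and falloff term.

```python
# Toy comparison: one directional "sun" vs. many local point lights.
# The directional light's direction is precomputed once per frame; every
# point light adds per-fragment vector, sqrt, normalize, and falloff work.
import math

def shade_directional(normal, light_dir):
    # One dot product per fragment.
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

def shade_point_lights(pos, normal, lights):
    total = 0.0
    for lpos, power in lights:
        d = [l - p for l, p in zip(lpos, pos)]        # per-fragment vector
        dist = math.sqrt(sum(c * c for c in d))       # per-fragment sqrt
        ldir = [c / dist for c in d]                  # per-fragment normalize
        lambert = max(0.0, sum(n * l for n, l in zip(normal, ldir)))
        total += power * lambert / (dist * dist)      # inverse-square falloff
    return total

up = (0.0, 1.0, 0.0)
sun = shade_directional(up, (0.0, 1.0, 0.0))
lamps = shade_point_lights((0, 0, 0), up, [((0, 2, 0), 4.0), ((3, 4, 0), 25.0)])
```

The point-light loop runs once per light per fragment, so a night scene full of lamps multiplies the per-pixel work where a daylight scene pays for its sun once.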

------
AndreSegers
The problem with these tech demos is exactly that--they're just tech. In-game
graphics will never look this good, because of non-cinematic camera angles,
behind-the-shoulder viewpoints, and HUD elements mucking it up. It's
nice-looking, for sure, but I can get that from a Pixar movie.

~~~
Semiapies
They do suggest opportunities for low-cost CGI film-making.

------
barrkel
I would like to know more about this "Core i9" of which they speak.

~~~
aphyr
They probably mean the 6-core i7-9* series, distinguished from "regular old
i7" by their Gulftown architecture and 50% higher transistor count. But yeah,
I thought that term died a while back.

~~~
barrkel
Oh, that's what I understood it to mean, but that kind of sloppiness makes me
doubt the quality of the article.

------
exit
Too bad videogame marketers have cried "real-time" too many times for me to
believe any of it before I'm actually playing.

------
pragmatic
What Next-Gen Consoles?

Sony and MSFT didn't do that well with this generation.

> Collectively, that means that 2011, on the console side, is going to be
> relatively drab. Microsoft doesn't need a new console, Nintendo doesn't want
> to compete with the 3DS launch, and Sony can't afford a new console. So it
> seems like the only points of interest this year will be price cuts--
> certainly, Sony and Nintendo will have them.

[http://dubiousquality.blogspot.com/2011/01/console-post-of-week-december-npd-and.html](http://dubiousquality.blogspot.com/2011/01/console-post-of-week-december-npd-and.html)

There are no plans for a future generation, aside from Nintendo, which _might_
have a full HD console. (Please correct me if I'm wrong).

> The demo ran on a PC with an Intel Core i9 microprocessor with three Nvidia
> GeForce 580 GTX graphics cards connected through SLI technology. The demo
> took about three months for 12 programmers and artists to build.

It runs on a huge, hot, expensive PC now - how would the big 3 recoup their
investment in such a beast? Wouldn't they be better off trying to sell
portable units with cheaper-to-produce games?

------
rbanffy
I would be surprised if the next Xbox didn't represent a big leap in rendering
performance. Microsoft has a tendency to throw its near-infinite resources at
brute-forcing problems.

I, however, am not sure whether Sony will do the same or learn the lesson
Nintendo taught: that gameplay is more important than rendering performance.
The fact that Cell is more or less a dead end doesn't help much. The chip was
too expensive to develop, is too expensive to use, and is devilishly hard to
program effectively.

Higher rendering performance is inevitable - just as almost every netbook on
the market is now a multi-threaded 64-bit beast. I question whether every
console maker will emphasize it to the same degree.

------
bnastic
The Running Man, soon in reality near you!

On a serious note - take every tech demo with a trunkful of salt. The
infamous Unreal 3 demo from years ago hasn't really delivered on its promise
some 6 years later. Tech demo != game.

------
gnosis
I'd be much more excited about a game having impressive gameplay, plot,
characters, and concept rather than impressive graphics.

Way too often most everything else is sacrificed for graphics.

------
jaysonelliot
This may be an unfashionable point to raise, but as the line continues to blur
between CG and live action, we need to check the ethics of the content we're
producing.

When I play a racing game for several hours, I have to check myself before I
get behind the wheel of a real car.

What does it do to the mind of a twelve year old to not just see hundreds of
graphic acts of violence, but to control them over and over again?

~~~
chadgeidel
It's not only unfashionable, it's downright wrong. Studies have shown there is
no link between "entertainment" (like comic books, movies, and games) and
real-life behavior.

Please stop raising this old canard.

~~~
hackinthebochs
I'd like to see some of these studies you cite. I'd wager you're drawing a far
stronger conclusion than the studies warrant. Even a very tiny effect, while
inconsequential in an individual, can have an impact on the scale of an entire
culture.

------
sgoraya
It almost goes without saying that Epic develops great game engines; however,
IMHO, Crytek's CryEngine at GDC was more impressive and has done a better job
of rendering outdoor environments and foliage than Epic's Unreal engine.
Unreal still has the top spot in market share, but I think Crytek will make
significant gains in the next couple of years.

That said, Tim Sweeney is still a hero.

~~~
grogers
This has been true for over 5 years (Far Cry came out in 2004...). And yet
CryEngine has basically only been used in Crytek's own titles. If destructible
trees couldn't sell game engines, there's little new hotness in CryEngine 3
that will. I bet in another 5 years we will be saying pretty much the same
thing.

------
iuguy
If you just want to watch the video, it's here:
<http://www.youtube.com/watch?v=VOxnqNMwpW4>

It looks incredible, as you'd expect. I wonder how this will translate into
better gameplay, though - or will resources be diverted to the 'shiny shiny
make it all better' factor?

------
mambodog
I'm still waiting to hear about a Mac version of the UDK that was to be shown
at GDC[1].

Until Mac support is on the table, I'll be sticking with Unity.

[1] <http://twitter.com/MarkRein/status/42236794099466240#>

------
lallysingh
Finally a use for 3D. But I'd love to see some eye tracking (and some way to
determine each eye's focal length) to determine what we're focusing on -- Tron
gave me a headache.

But, a virtual blade runner in proper 3d would be a little fantastic.

------
TimothyBurgess
This is why I've always been a huge fan of Unreal technology. You could
definitely call me a fanboy, since I played the original Unreal (the game on
which UT was based) avidly for years... but for good reason. Like most males
my age, I've played tons of first-person shooters... but I've always felt
Unreal was way ahead of its time, not just with graphics but with gameplay as
well. The gameplay of the Unreal series has steadily declined since the
original, but even today it's hard to find a game on its level in terms of how
many different approaches you can take to being good at the game.

But back to the graphics: rendering these graphics in realtime at a high
framerate is definitely possible... just not presently feasible. The only
problem I've noticed with the Unreal series and its graphics is, as I just
said, that it's ahead of its time. Sure, there's hardware out there that can
handle it no problem... but the average consumer can't afford it. I have a
feeling that even when the next generation of consoles is released, they'll
still struggle to handle the Unreal Engine at its full potential. But I guess
_someone_ has to raise the bar.

I remember when UT3 came out... I was so stoked. My gaming PC was probably
only a year behind the top (affordable) components at the time... and I wasn't
about to go spend $500+ on a new video card, RAM, and all that. I figured I'd
be able to run it decently enough... but boy was I wrong. I got 30 fps tops
(and that was when nothing was going on!), and if you've played an FPS like
Unreal competitively, you know that to stand a chance at dodging rockets and
sniping people midair and all the other awesome stuff possible in the Unreal
series, you need upwards of 60 fps, consistently. So I only played the new
game for a couple of weeks before giving up. It's probably for the best, too,
because I needed to concentrate on my studies, haha... but the moral of the
story is that the entire game basically died within a few months, because only
a small handful of people could run it.

------
bonch
Note that these aren't graphics from a next-gen console. It's just a tech demo
of what Epic would like next-gen consoles to be able to support.

Personally, I think next-gen consoles are a ways off. Consumers aren't that
interested in another console generation. Good visuals are already a solved
problem with today's hardware, and people are exploring new, simpler types of
gaming on the web and mobile devices.

Basically, the 90s obsession with pushing for better graphics is long gone.
It's more about art style now.

