

Obvious Engine: a vision-based augmented reality engine for indie games  - tilt
http://obviousengine.com/

======
nickolai
This is really cool, but I'm not sure how it's supposed to be used in games.
Reality can't possibly be a better playground than specifically tailored VR.

I always saw AR as a convenient way to provide additional data on
surroundings. Point out landmarks using a HUD in a car. A nice way to present
information on a product (like an artificial "the company making this product
supports SOPA" sticker :) ).

But I am really at a loss as to how this would be used in games. Virtual pets,
maybe? But what's the fun of having to look at one through a 4-inch screen?
Hell, even his dancing foxes (or whatever they are) don't fit in the iPad
screen. The user sees the space right next to the iPad window, where they
should be, and there's nothing there. That's a huge immersion killer to me.
It's not the same as with VR, since VR doesn't have to pretend the illusion is
actually right behind the screen window. AR with full goggles ought to make up
for that, but the hardware isn't nearly there yet.

This _is_ technologically very impressive - but what can we really make of it?

~~~
Groxx
I dunno. I foresee a Pokemon game with creatures you have to hunt down, ones
that can be accurately hiding behind/near/in things at various times of the
day.

------
IvoDankolov
Now implement a raytracer that infers lighting location and reflectance (and
subsurface scattering) of the objects in the scene from the image and then we
can have a whole new level of realism.

In all seriousness, though, I do wonder what kind of processing power you
would need in a handheld device to be able to do that. Could we realistically
achieve it within 15-20 years? It's certainly one of those "gimmicks" with
extreme potential.

That little tangent about realism aside, the engine itself does look quite
remarkable in how smoothly it runs. I wonder how well it handles occlusion,
changes in lighting, and the other benchmarks for computer vision, as none of
that was demonstrated in the video. In fact, the presenter quite handily
avoided putting his hand between the camera and the soda can.

Other than that, I can't say much without trying the thing, but I do not own
an iOS device and don't plan to in the near future. If someone decides to try
out the framework, I'd be glad to read a more detailed analysis of it.

~~~
rmc
_we can have a whole new level of realism._

What would you be able to do that you can't do now?

~~~
IvoDankolov
As Geee stated, it's about imitating the lighting of the scene. There's all
sorts of subtle and not-so-subtle effects that come as a result of light
bouncing around.

While it may be obvious when a virtual object is missing a shadow (as in the
demonstration video), even adding one would not be enough to fool the human
brain completely; it isn't always obvious what the problem is, only a
subconscious nagging that something is off.

If you're interested in precisely which effects the real world has and
augmented reality generally doesn't, I'd say the biggest ones are shadows
(including soft shadows [0]), depth of field [1], ambient occlusion [2], and
indirect lighting.

You don't necessarily have to write a full-blown raytracer with global
illumination, casting billions of rays, to get a passable result. All of the
above-mentioned effects can more or less be approximated in some way, and most
modern game engines do so (I don't know of any phone games that do, mind, as
the calculations are still non-trivial, but I do remember an interesting tech
demo from Nvidia).
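To make "approximated in some way" concrete, here's a toy sketch: estimate a
dominant light direction from the camera frame's brightness gradients, then
Lambert-shade a virtual surface with it. The gradient-mean trick is my own
crude assumption, not anything this engine claims to do:

```python
import numpy as np

def estimate_light_direction(image):
    """Guess a 2D dominant light direction from brightness gradients.

    The intensity gradient points toward brighter pixels, and the lit
    side of a scene is (roughly) the side facing the light, so the mean
    gradient is a crude proxy for the light direction.
    """
    gy, gx = np.gradient(image.astype(float))
    d = np.array([gx.mean(), gy.mean()])
    n = np.linalg.norm(d)
    return d / n if n > 0 else np.array([0.0, -1.0])

def lambert_shade(normal, light_dir):
    """Simple Lambertian term for shading a virtual surface normal."""
    return max(0.0, float(np.dot(normal, light_dir)))

# Synthetic "camera frame": a brightness ramp lit from the left.
frame = np.tile(np.linspace(1.0, 0.0, 64), (64, 1))
light = estimate_light_direction(frame)  # points roughly toward (-1, 0)
```

Anything this naive falls apart on real footage, of course; it's just to show
that cheap approximations exist well short of a raytracer.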

The biggest problem, though, is that we can do these things in a controlled
setting, where you know the exact shape, texture and light reflective
properties on every object in the scene. As you can imagine, that is not so in
the case of augmented reality (as I quipped in my earlier post, you would have
to infer surface reflectance from the image in some way). Compared to the
problem of global illumination in a virtual scene - well, let's just say it's
orders of magnitude harder. I don't recall anyone actually having scanned a
scene with a single camera, and that would be an AI breakthrough people would
indeed talk about.
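To give a flavor of why the controlled setting is so much easier: when the
geometry and lighting are already known, "inferring reflectance" collapses
into dividing out the shading. This is a toy sketch of that ideal case, not
anything the engine does; the whole point is that in AR none of these inputs
are known:

```python
import numpy as np

def recover_albedo(image, normals, light_dir, eps=1e-6):
    """Toy inverse rendering: with known per-pixel normals and a known
    light direction, dividing out the Lambertian shading term leaves
    the surface albedo. Real AR has neither input, which is the hard part."""
    shading = np.clip(normals @ light_dir, 0.0, None)
    return image / (shading + eps)

# Ideal setup: a flat 8x8 patch whose normals all face the light,
# with a true albedo of 0.6 everywhere.
normals = np.tile(np.array([0.0, 0.0, 1.0]), (8, 8, 1))
light = np.array([0.0, 0.0, 1.0])
image = 0.6 * np.clip(normals @ light, 0.0, None)

albedo = recover_albedo(image, normals, light)  # ~0.6 everywhere
```

Remove the known normals and the known light and you're back to the
underdetermined problem I was describing.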

[0] : <http://en.wikipedia.org/wiki/Soft_shadows>
[1] : <http://en.wikipedia.org/wiki/Depth_of_field>
[2] : <http://en.wikipedia.org/wiki/Ambient_occlusion>
[*] : Also, Google Images

------
glimcat
Feature tracking on an object which stands out clearly against the background?
Using morphing filters on the segmented object? Not that innovative.
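In the simplest case, "tracking an object which stands out clearly against
the background" reduces to a threshold and a centroid. A toy sketch on a
synthetic frame (my own strawman of the easy case, not the engine's actual
pipeline):

```python
import numpy as np

def segment_and_track(frame, threshold=0.5):
    """Crude tracking of a high-contrast object: threshold the frame to
    segment it, then return the mask and the object's centroid (x, y)."""
    mask = frame > threshold
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return mask, None  # object not in view
    return mask, (xs.mean(), ys.mean())

# Synthetic frame: dark background with a bright 10x10 "can"
# occupying rows 20..29 and columns 40..49.
frame = np.zeros((64, 64))
frame[20:30, 40:50] = 1.0

mask, centroid = segment_and_track(frame)  # centroid near (44.5, 24.5)
```

The interesting engineering is everything past this: cluttered backgrounds,
partial occlusion, and doing it at frame rate on a phone.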

Now I do like the particle trick. You see this in some video games to
highlight objects of interest (e.g. World of Warcraft). It makes it much
easier for players to locate them and zero in on them against background
clutter. Probably useful for AR if you have a good application case.

However, the potential is heavily constrained until you get displays capable
of "non-interfering annotation" - which is to say, you need a good HMD and not
a phone. Not that you can go out and buy "a good HMD" by this measure, but
that's a big thorny mess out to the side of the software concerns.

------
Someone
Games? Imagine Google giving away augmented reality glasses (let's call them
Googgles) that help you find your way in the environment by allowing you to do
a local search ('where are my car keys?'), by adding name tags to people you
have seen before but do not know well, etc.

Now, that giveaway comes at a price. Frequently, when a Dr. Pepper can enters
your visual field, that can becomes an animated advert. Your glasses will know
how to time those adverts so that they stay (just) below the nuisance point,
because they will have read your Gmail and that of your friends, and will know
what you ate.

------
Qweef
How fun would it be if you could program it with facial recognition and
replace con attendants with their fursonas? :D

------
bsenftner
Quite nice! Kudos for what appears to be a significant setup for Augmented
Reality.

