
Meta: The Most Advanced Augmented Reality Interface - bensandcastle
http://www.kickstarter.com/projects/551975293/meta-the-most-advanced-augmented-reality-interface/
======
modeless
The videos there are misleading. The level of quality shown is nowhere near
what that hardware can achieve. For example, in the "video from the lab",
virtual objects are shown occluding someone's hands. That is not possible with
the display technology they're using. Your real experience with the device
will not be anything like the videos shown. The display will be more like a
ghostly, low resolution overlay, with significant latency.

The sad thing is that the hardware actually looks pretty neat. This device
should be cool enough that a realistic demo could easily sell it without
misleading people. I hope Kickstarter starts cracking down on projects using
pie-in-the-sky concept videos to raise expectations that they can't possibly
deliver on.

~~~
n3rdy
> The videos there are misleading.

I was thinking the same just based on the repeated "there is nothing like this
being developed", which I might have believed if I hadn't come across the
Oculus Rift just a few hours prior.

~~~
modeless
I think that claim is actually accurate. The Oculus Rift is much different:
it's virtual reality, whereas Meta is augmented reality. They are different
things, and augmented reality is much harder. The Oculus Rift completely
occludes your entire field of vision and replaces it with something else
entirely. Meta modifies the real world with virtual objects.

~~~
kybernetyk
Couldn't one mount a camera on top of the Oculus Rift (just like with Meta)
and stream the camera feed into the Rift + process the video stream to offer
augmented reality?

~~~
green7ea
This would work much better. The main reason is latency. The added image will
always lag behind the real world by some milliseconds (at least ~16 ms of
refresh latency at 60 Hz). There is also the image capture and processing
latency, which is considerable. This is true for both the Meta (hard augmented
reality - hard AR) and the Oculus Rift (virtual reality - VR).

The difference in experience is that with VR, the whole world lags, so an
added object will always be in the right place relative to it. With hard AR,
the added object will always be trying to catch up as the real image moves.

From what I gather about Google Glass, they mostly aim for soft AR. This is
where you add information on top of the real world but it doesn't map directly
to a location in the real world: if you move your head, the added image stays
the same. This is much more resistant to delays and could be usable for
extended periods of time without discomfort.

For a better and more technical writeup:

<http://blogs.valvesoftware.com/abrash/why-you-wont-see-hard-ar-anytime-soon/>
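
A back-of-envelope sketch of the registration-error arithmetic behind this comment. All numbers (head speed, latency, FOV, display width) are illustrative assumptions, not measured specs for either device:

```python
# How far (in pixels) a "world-locked" virtual object drifts from its
# real-world anchor during a head turn, given end-to-end latency.

def registration_error_px(latency_ms, head_speed_deg_s, fov_deg, width_px):
    """Angular error accumulated over the latency window, in screen pixels."""
    error_deg = head_speed_deg_s * (latency_ms / 1000.0)
    px_per_deg = width_px / fov_deg
    return error_deg * px_per_deg

# A casual head turn (~60 deg/s) with 50 ms of motion-to-photon latency
# on a hypothetical 23-degree, 960-px-wide display:
err = registration_error_px(latency_ms=50, head_speed_deg_s=60,
                            fov_deg=23, width_px=960)
print(round(err, 1))  # 125.2 -- about 125 px of drift
```

This is why soft AR (head-fixed overlays) tolerates latency so much better: the error above simply never applies to content that moves with your head.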

------
blhack
I think this stuff is cool, but I think it is going to be held back by a
romantic attachment to "real".

Why not use something like the Oculus Rift? Instead of projecting new objects
over the top of existing ones, replace the user's field of vision completely.

I'd love that.

One of the guys in our hackerspace (plug: HeatSync Labs, Phoenix AZ) got an
Oculus, and we've been talking about how cool it would be to build a "virtual
office" of sorts. Sit down with an Oculus and some noise-cancelling
headphones, and have an infinitely large workspace.

2 monitors? Or 1000 monitors? It doesn't matter, because your entire field of
vision (or your entire environment) is being rendered for you.

\--

I think people are very attached to the idea of your eyes seeing the "real"
world instead of a re-displayed one. I understand that, but I think that
ideology is going to hold AR back for a while.

~~~
angersock
My housemates and I have one coming in the mail.

My idea was to make a window compositor for X--the idea being that all open
windows would be scattered around the inside of a sphere or hemisphere or
perhaps hypercone with the user's head at the center.

You look at a window and slowly turn to face directly forward, and the window
blows up and moves into focus. A shake of the head dismisses the window, and a
sharp snap to either side cycles through open windows.

Shoot me an email if you'd like to talk more.

EDIT: X programming is fucking arcane. :(
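
The "windows on a sphere" layout described in this comment can be sketched in a few lines. This is a toy geometric model under assumed names and parameters, not compositor code: window centers are spaced on a ring around the head, and a window is "faced" when it is closest to the head's forward direction:

```python
import math

def ring_layout(n_windows, radius=2.0):
    """Evenly space window centers on a horizontal ring around the head."""
    positions = []
    for i in range(n_windows):
        yaw = 2 * math.pi * i / n_windows
        positions.append((radius * math.sin(yaw), 0.0, radius * math.cos(yaw)))
    return positions

def facing_window(positions, head_yaw):
    """Index of the window most aligned with the head's forward vector."""
    fwd = (math.sin(head_yaw), 0.0, math.cos(head_yaw))
    dots = [p[0] * fwd[0] + p[2] * fwd[2] for p in positions]
    return max(range(len(positions)), key=lambda i: dots[i])

wins = ring_layout(8)
print(facing_window(wins, 0.0))        # 0: looking straight ahead
print(facing_window(wins, math.pi/2))  # 2: a quarter turn to the right
```

The "blow up the faced window" and "shake to dismiss" gestures would then just be thresholds on how fast `head_yaw` changes over time.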

~~~
drivebyacct2
Skip X, jump to QML and QWayland.

<http://www.youtube.com/watch?v=_FjuPn7MXMs>

Take that and extrapolate. Doesn't seem that "crazy" really. Compiz and KWin
both already have the window animations you're describing. There was even a
Wii head-tracking plugin for Compiz that would give your desktop a third
perspective dimension (moving your head would reveal the edges/bits of the
other windows underneath higher ones).

It'd be a neat demo, but it sounds like a nightmare for daily usability, etc.

~~~
angersock
I'm quite happy doing it with X/GLX--I'm only interested in chasing one kind
of shiny at the moment.

(I reserve the right to eat my words and recant this misguided decision
later.)

------
clicks
Interesting to note that Meta is in fact YC-funded:
<http://allthingsd.com/20130517/meta-wants-to-become-the-next-augmented-reality-glasses-phenom/>

------
silverlight
Is it just me, or is there very little "real software" being shown here?
Everything in the video was just "artist's interpretations" of what it "could
look like", no?

~~~
devindotcom
Yeah, I'm seeing a lot of "what it could be like," in a video that probably
cost thousands to produce. The big round $100K number is suspicious to me as
well. They're making... a dev kit? For an environment, device, and service
platform that doesn't exist (nor do the technologies to enable it, just yet),
and their budget to create all this is exactly $100K? If so, that's not gonna
happen. If not, and this is just cherry-on-the-cake funding, I don't think
that's what Kickstarter is "for," as much as Kickstarter can be "for" one
thing and not another.

~~~
bensandcastle
Hey, you're right, $100k is nowhere near enough to build all the supporting
technology. Our software development is covered by investment, including YC.
The Kickstarter is purely about covering hardware costs, and the $100k
threshold is for minimum production runs.

Filming stereoscopic video through the glasses is very complicated which is
why we used concept visualisation to give an idea how it will feel. We're
working on a rig to film through the glasses, but right now there's no way
we'll be able to convey how impressive it is in person through live video.

~~~
silverlight
You should really consider being more up-front about that in the video. A
little disclaimer along the bottom of "Artist's rendering" or something would
go a long way.

~~~
prutschman
Has Kickstarter backed off from their recent stance? As of last September
their position was that "Kickstarter campaigns will be unable to use
simulations or design renders to illustrate what a completed product may look
like or how it may function. Instead, creators must provide photos or video of
prototypes as they exist at the time of posting." --
<http://www.kickstarter.com/blog/kickstarter-is-not-a-store>

------
z-e-r-o
This technology, as it is shown, is not going to be possible for at least the
next 5 years. Either they know it and are running a huge fake marketing
campaign (for a possible Google acquisition), or they haven't realized it yet
and will have a really hard moment when they do.

Couple of components which are not going to work:

1\. A see-through display with the field of view shown in the video just
doesn't exist today. The model they are going to use is more like a 'tiny TV
screen floating in your view' and not even close to the visualization they
created.

2\. Real-time 3D gesture recognition from point cloud data on ARM (+ overhead
for applications + games, all at low latency)

3\. Real-time 3D environment reconstruction from moving point cloud data
(requires something like a quad-core i7 + 32 GB RAM + desktop-class GPU
processing)

They want to achieve it on an ARM running from tiny batteries!!!

On top of this would come the whole application / game experience, something
they seem to be concentrating on instead of getting the basics right.

4\. Then there is latency, which is just not going to be solved for the next
5, more probably 10, years; just read Michael Abrash's blog about the reality
of augmented reality glasses (<http://blogs.valvesoftware.com/abrash/>).

To be clear, I'm not saying that they in particular won't be able to make what
they promise; I'm saying that no one, not even Google, will be able to achieve
it for at least 5 years, and everyone who is even a little bit into augmented
reality knows this.

So personally I find the Kickstarter campaign to be fake, and it is just
bending the rules of Kickstarter, which require a real-world hardware
prototype. So they made a glued-together prototype with a fake visualization,
with the whole campaign built around the video.

Nonetheless, the campaign has a chance of being a massive hit, because every
sci-fi fan has been dreaming about this for decades and is willing to back it
if they have the funds. In that case, it might become one of the biggest
Kickstarter failures of all time. The best case for them would be a quick
Google acquisition and integration into the Glass team.

------
cracker_jacks
I'm a little bit skeptical of some of the artist renderings here. As a
researcher in the computer vision field, I can say that rendering accurately
onto arbitrary surfaces is simply nowhere near this precise. It requires an
extraordinary amount of scene understanding. Factors like shape, surface
normals, illumination, reflectance, etc. all need to be separated. These
properties are deeply entangled, and state-of-the-art methods require a great
deal of computational power to do significantly worse than what's being shown
here.

------
needle0
I own an Epson Moverio, which the Meta 1 glasses appear to be based on. I have
a small bit of firsthand experience with hacking on it to do ARish things.

* The Moverio consists of two parts, the glasses and the control box. The two connect via a seemingly proprietary connector. The control box runs Android 2.2, archaic by today's standards. USB host mode was introduced in Android 3.1, so there would be no straightforward way to feed the depth camera's information into the control box.

* Unity3D, which Meta's software stack claims to be using, does run on the control box once you output your Unity project to an Android application. For the app to run, I had to tweak the build settings to support both ARMv6 and ARMv7 (The app failed to start when built for ARMv7 only). This was doable in Unity 3.5.x. However, Unity 4 removes support for ARMv6.

So I'm full of question marks:

* Did the Meta team somehow obtain/reverse-engineer the specifications for the Moverio glasses' connector, plug it into a more powerful device, and ditch the control box?

* Did the Meta team replace the Moverio control box's OS with a more modern version?

* Is Meta 1 stuck with the older Unity 3.5.x?

* Or am I doing it wrong, and is it indeed possible to run Unity 4-built apps on the Moverio control box?

Also, as others have mentioned, the field of view is disappointingly small
with this device - just a small window in the middle of your view.

Overall, confused.

~~~
bensandcastle
Thanks, the dev unit allows software to run on Windows x86. We're not building
for Android. Use the latest version of Unity that you wish.

~~~
needle0
Ah, I see. If so, how did you guys manage to connect the Moverio's glasses to
an x86 PC? IIRC the glasses use a proprietary connector, and no info on the
specs/protocol of the connector (nor the Aux connector on the back of the
control box) is publicly available.

Did you guys receive the specs from Epson directly, receive dev glasses which
doesn't use that connector, or reverse-engineer it yourselves? Is that
information confidential or under NDA?

I strongly feel that ancient control box is holding back the full potential of
the glasses and would love to connect different hardware to it. The connector
spec/protocol is the only thing preventing that. I'm not very hopeful but if
there's any information on it I would love to know.

------
shadowmint
To be fair, they do have a real video below the pitch video
(<http://www.youtube.com/watch?feature=player_embedded&v=oioSBP9XBNg>)
which is a much more honest and realistic look at what's being developed.

This is actually pretty exciting tech, but it's going to be absolutely nothing
like what they have to show in the pitch video.

------
BasDirks
Wow, what completely uninspiring consumerist bullshit. "Hackers, this one's
for you". Right.

~~~
ryderm
It's an SDK, not for the general public, so what's wrong with saying that?

------
slashedzero
Isn't the camera just Intel's 3D gesture cam? It also has an SDK that
integrates very well with Unity:
<http://software.intel.com/en-us/vcsource/tools/perceptual-computing-sdk>

The first screenshot from the second video shows exactly what their gesture
tracking looks like. When doing the Perceptual Computing Challenge, this was
mainly the stuff we were thinking of as applications for the hardware; funny
to see someone now taking it and simply mounting it on glasses.

------
Pxtl
What I'm not seeing is anything about positioning. I don't see the usual white
balls for the camera (or colored ones as used in Sony's Move). So is it
relying entirely on the 3D camera and dead reckoning with accelerometers to
figure out where the user is? Because that stuff inevitably fails the moment
you start walking around the room.

I'm mostly thinking about ARQuake and the like, where the AR objects are
walking around the room or hallways rather than being confined to a table in
front of you.

~~~
devindotcom
The white balls are for a color-separation process for location; the Kinect
uses an IR emitter.

The Kinect is perfectly capable of creating sets of objects based on its depth
map. Normally it's a "skeleton" in a whole body, but it can just as easily
make a skeleton of your hand, with the fingertips automatically tracked and
used as activation points.
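
A toy illustration of the depth-thresholding intuition in this comment. Real skeleton tracking (the Kinect SDK and similar) is far more involved; all values and names here are made up for the sketch, which just segments "hand range" pixels and uses the closest one as a crude fingertip proxy:

```python
def closest_hand_point(depth, max_range_mm=600):
    """Return (row, col) of the nearest pixel within hand range, or None."""
    best = None
    for r, row in enumerate(depth):
        for c, d in enumerate(row):
            # Ignore zero readings (no return) and anything past hand range.
            if 0 < d < max_range_mm and (best is None
                                         or d < depth[best[0]][best[1]]):
                best = (r, c)
    return best

# 4x4 depth map in mm: background wall at 2000, a "finger" poking in at 450.
frame = [
    [2000, 2000, 2000, 2000],
    [2000,  450,  500, 2000],
    [2000,  480,  520, 2000],
    [2000, 2000, 2000, 2000],
]
print(closest_hand_point(frame))  # (1, 1)
```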

~~~
Pxtl
Yes, but the kinect camera isn't moving around a space. That's what I'm
talking about. Figuring out the camera's position in an unknown environment
using a depth camera is much more difficult than figuring out the position of
objects moving through a static space with a depth camera.
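
The dead-reckoning failure mode mentioned upthread is easy to show numerically: position comes from double-integrating acceleration, so even a tiny constant sensor bias grows quadratically with time. The bias value below is an illustrative assumption:

```python
def drift_m(bias_m_s2, seconds, dt=0.01):
    """Double-integrate a constant accelerometer bias into position error."""
    velocity = position = 0.0
    for _ in range(int(seconds / dt)):
        velocity += bias_m_s2 * dt   # bias accumulates into velocity...
        position += velocity * dt    # ...and velocity accumulates into position
    return position

# A modest 0.1 m/s^2 bias after 10 seconds of walking around:
print(drift_m(0.1, 10.0))  # roughly 5 m of position error
```

That is why inside-out tracking systems re-anchor against visual features of the environment instead of trusting the IMU alone.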

------
namelesstrash01
Okay. Yet another hardware Kickstarter with egregious claims? No, thanks.

~~~
namelesstrash01
Because there's so much technical information about this project on the
campaign page... There's just _no way_ you'd be throwing your money into a
hole by backing it.

------
ryderm
Meron (founder) came to my computer vision class the other day to talk to us
about Meta. He took the same class at my uni and has since hired some
professors. Seemed like a great guy and a great product. I hope this takes off
and isn't eaten by Google.

------
dakrisht
I like it. But they're shooting for the stars here. I'm not sure they'll be
able to achieve the quality shown in the video (done with nice Hollywood VFX)
in real life. Like others mentioned, latency might be an issue, but generally
the augmented 3D interaction (i.e. spreading a pool apart in an architectural
setting) will be difficult to reproduce as shown. Misleading videos.
Kickstarter is like the Wild West these days.

------
petermelias
Backed. If they can deliver on this, the application potential is completely
worth the relatively low up-front risk.

~~~
muyuu
Backed as well. Even the prototype shown in the video below, rough as it may
be, is good enough to try a lot of things.

Looks really promising and they are not asking for much.

First thing I've ever backed on Kickstarter, too.

------
kybernetyk
I'm rather skeptical. No real-life videos, only unrealistic concepts. The bold
"there's nothing like this being built" statements, and then they use
off-the-shelf components.

Sounds like a great recipe for disappointment. They should have at least
posted a real-life video (or a realistic rendition).

------
timfrietas
Does Google Glass have stereo cameras? The specs suggest no:

<https://support.google.com/glass/answer/3064128?hl=en>

I have to feel v2 will, given the possibilities of applications as demoed
here.

~~~
DanBC
Google Glass only outputs to one eye, so 3D vision isn't possible.

3D input is still useful, but less fun.

~~~
devindotcom
Furthermore, Glass doesn't put itself 'over' the world, but out of the way -
it's not about augmented reality. That's part of why I think its approach is
ultimately going to be abandoned.

------
daeken
Totally just backed this. I haven't been so excited about a piece of tech in
_ages_. This is truly a game-changer if it works anywhere near as well as
they're showing.

~~~
timdorr
Sorry, but it's not going to work anywhere near as well as they showed.

Look at the quick "From the labs" video down the page. It's grainy, messy, and
raw. There's simply too much noise for it to look good. Without a significant
bump in sensor resolution (it's only 320x240!
<https://forum.libcinder.org/topic/future-is-here-time-of-flight-camera-for-150-from-intel-with-sdk>),
it's not possible to have both smooth and performant results.
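
To put the 320x240 figure in perspective, here is a quick sketch of the spatial resolution such a sensor gives you at arm's length. The 70-degree horizontal FOV is an assumed, typical value for depth cameras of this class, not a quoted spec for this device:

```python
import math

def depth_pixel_size_mm(distance_mm, fov_deg=70.0, width_px=320):
    """Approximate width of one depth pixel's footprint at a given distance."""
    view_width = 2 * distance_mm * math.tan(math.radians(fov_deg / 2))
    return view_width / width_px

size = depth_pixel_size_mm(600)  # hands held at ~60 cm
print(round(size, 1))  # 2.6 -- each depth pixel covers ~2.6 mm
```

At a few millimeters per pixel, plus sensor noise, fine fingertip tracking of the kind the concept video implies is a genuinely hard fit.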

~~~
petermelias
I am more optimistic. While they could have done a better job disclaiming the
early videos, I don't believe that they are dumb enough to make such exciting
claims to an educated audience if they cannot deliver.

I have also seen the Intel depth camera in action first-hand and that alone is
a very promising piece of hardware.

------
DanBC
Showing a pixel-art 3d bike is a good idea. It's going to appeal to the
Minecraft crowd.

Give me something like this that I can run through my MC worlds with and I'm
paying.

~~~
pazimzadeh
This reminds me of Minecraft Reality: <http://minecraftreality.com/>.

------
justncase80
Consider me skeptical. I predict the camera will not have low enough latency
to ever make this device usable.

------
bleachtree
Seriously one of the coolest things I've ever seen. Can't wait until this
vision is a reality.

------
DanBC
I'd love a 3D interface to HN.

Upvoted, busy, threads float nearer me.

Flame-fests shrink back from me.

------
aaronblohowiak
Meta II: a revolutionary metacircular compiler, circa 1964

