Meta: The Most Advanced Augmented Reality Interface (kickstarter.com)
128 points by bensandcastle 702 days ago | 83 comments



The videos there are misleading. The level of quality shown is nowhere near what that hardware can achieve. For example, in the "video from the lab", virtual objects are shown occluding someone's hands. That is not possible with the display technology they're using. Your real experience with the device will not be anything like the videos shown. The display will be more like a ghostly, low resolution overlay, with significant latency.

The sad thing is that the hardware actually looks pretty neat. This device should be cool enough that a realistic demo could easily sell it without misleading people. I hope Kickstarter starts cracking down on projects using pie-in-the-sky concept videos to raise expectations that they can't possibly deliver on.

-----


I got to try this out when he was showing it in Toronto. It was a couple of months ago, so what I remember might not be exact.

- The demo included glowing hands of ice and fire. I did see my own hand with ice and fire glowing from the fingertips. I'm sure I saw it as blended around my fingers, rather than an overlay on top.

- There was no latency at all; he let me type some stuff out on the virtual keyboard and the feedback was instant.

My only problem was it kept falling off my nose :)

-----


Not true.

Looks like it's an Intel depth camera (http://goo.gl/ivrwY) glued to some Epson glasses. With the depth map from the camera, occluding virtual objects is possible.
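
For what it's worth, that kind of occlusion (real hand in front of a virtual object) is just a per-pixel depth test against the sensor's depth map. A minimal sketch in Python/NumPy, with made-up resolutions and distances (nothing here is from Meta's actual stack):

    import numpy as np

    # Hypothetical inputs: a sensor depth map (metres per pixel) plus a rendered
    # virtual layer with its own per-pixel depth and RGBA colour.
    real_depth = np.full((240, 320), 2.0)          # background wall ~2 m away
    real_depth[100:140, 150:200] = 0.4             # a hand ~0.4 m from the camera
    virtual_rgba = np.zeros((240, 320, 4))
    virtual_depth = np.full((240, 320), np.inf)
    virtual_rgba[80:160, 120:220] = [0.2, 0.6, 1.0, 1.0]   # a virtual panel...
    virtual_depth[80:160, 120:220] = 0.8                    # ...placed ~0.8 m away

    # Only draw virtual pixels where nothing real is closer to the viewer; where
    # the hand (0.4 m) sits in front of the panel (0.8 m), the virtual colour is
    # suppressed, so the hand appears to occlude it.
    visible = virtual_depth < real_depth
    overlay = virtual_rgba.copy()
    overlay[~visible, 3] = 0.0                      # occluded virtual pixels go transparent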

-----


Occluding virtual objects (real hands in front of virtual sphere) is possible. Occluding real objects (real hands behind virtual sphere) is not.

-----


They're both possible and we'll have more videos very soon to show this. Filming through the glasses is really hard; we've been working on a rig for the last few days and will have videos up soon.

-----


Really? To occlude real objects, the glasses would need to have per-pixel adjustable transparency. They look like Epson Moverio glasses[1], and I don't see anything on Epson's site to suggest that they can selectively occlude light, but I'd be delighted to be wrong.

[1] http://www.epson.com/cgi-bin/Store/jsp/Product/Specification...

-----


No, it does not support selective occlusion. Much like Google Glass, it's a microprojector against your face, although this one projects to both eyes whereas Google's projects to one.

-----


I think this is "possible" but unlikely to work well anytime in the near future. There are some really big problems that need to be addressed for it to work properly. To do this convincingly and effectively, the glasses would need to incorporate eye tracking: not only because the relative position of the screens depends on the way the glasses are resting on your head, but most importantly because the placement of the screens in your view depends on the convergence of your eyes. If you're looking at something close (basically anything in the range of what you could manipulate with your hands), the relative positions of the screens must be shifted closer together, because otherwise you'll see double. It's really very difficult to do and, as far as I can see, still at least half a decade off.
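
To put rough numbers on the convergence problem (back-of-envelope, my figures, not anyone's spec):

    import math

    ipd = 0.063                          # typical interpupillary distance, ~63 mm
    for d in (0.3, 0.5, 2.0, 10.0):      # fixation distance in metres
        vergence = 2 * math.degrees(math.atan(ipd / (2 * d)))
        print("%4.1f m -> ~%.1f deg of convergence" % (d, vergence))

    # ~12 deg at 30 cm vs. ~0.4 deg at 10 m. Without eye tracking, the display
    # has no idea which of those offsets to render the stereo pair for, hence
    # the double images for close-up objects.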

-----


You're right. I misread your post and didn't realize they demo hands being 100% occluded by virtual objects. Can't believe they're that brazen.

-----


Not understanding the objections. These glasses are a display, right? There's a computer in between ultimately responsible for what the eyes see. With enough clever processing of the scene, why wouldn't you be able to achieve this?

-----


The glasses are not a normal display. Most displays are opaque, and can display pure black by simply not emitting any light. These glasses are transparent, so when they don't emit any light you see whatever's behind the display, not black. Displaying true black on a transparent display requires an extra element which can selectively block incoming light. These glasses don't have such an element, and cannot display black or dark colors unless you happen to be looking at a black object in the real world.
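
Put another way, the display is purely additive: what reaches your eye is roughly "real scene + emitted light", never less than the real scene. A toy model (illustrative numbers only):

    import numpy as np

    background = np.array([0.6, 0.6, 0.6])    # a mid-grey wall behind the glasses
    emitted    = np.array([0.0, 0.0, 0.0])    # display tries to draw "black"
    perceived  = np.clip(background + emitted, 0.0, 1.0)
    print(perceived)                          # -> [0.6 0.6 0.6]: you just see the wall

    # An opaque display could show [0, 0, 0]; a see-through one can only add
    # light, so dark virtual objects wash out unless the real scene behind
    # them happens to be dark already.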

-----


I don't see why it would be impossible; it'd just be pretty hard. What if you had another layer of glass that was like those old LCD calculator screens that have opaque numbers, but can otherwise be transparent? Maybe you could display things in front of those to have full opaqueness. That wouldn't help with variable opacity though.

-----


You're right, an extra LCD layer could definitely do it. I'm just saying that the specific glasses being used here don't have this capability, unless there's some enhancement Meta isn't talking about and isn't showing in their videos.

-----


Virtual objects occluding real objects is in fact nearly impossible. The extra LCD layer you're thinking of isn't actually a viable solution for this sort of display (or any sort of display I can come up with right now). The way glasses like these work, an LCD screen hidden in the side of the glasses is viewed through a beamsplitter that combines the image of the screen with the real world. The screen appears to be "at infinity" from the viewer's perspective. Unfortunately, to do occlusion on glasses like this, the light would have to be blocked between its source (the outside world) and the viewer's eyes. The light-blocking layer would always appear blurry from the user's perspective, since it would be so close to the eyes. You would also have problems aligning the light-blocking layer with the screen, as the apparent position of the screen in the glasses depends on the position of the glasses on your head.
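
The blurriness isn't a small effect, either. A back-of-envelope estimate (assumed numbers) of how smeared an occlusion mask sitting right in front of the eye would look while you're focused at arm's length or beyond:

    import math

    pupil = 0.004          # ~4 mm pupil diameter
    mask_distance = 0.02   # occlusion mask ~2 cm in front of the eye
    # The mask's edge gets smeared over roughly the angle the pupil subtends
    # as seen from the mask:
    blur_deg = math.degrees(pupil / mask_distance)
    print("edge blur ~%.0f degrees" % blur_deg)    # ~11 degrees of smear

    # For comparison, your thumb at arm's length covers only ~2 degrees, so a
    # crisp occlusion boundary from an element that close to the eye just
    # isn't going to happen.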

-----


Wow, you're totally right! Alignment problems are solvable with accurate positional tracking at least in theory, but I hadn't considered the blurriness problem. That seems like a show-stopper. It might be fixable if the light-blocking element was essentially a holographic display, capable of treating light differently depending on the incident angle, but such technology is going to be out of reach for a long, long time.

-----


There is some nascent technology that allows the eye to converge on two focal planes simultaneously, which looks like it could solve the blurriness problem[1], but I don't get the impression that Meta is using this. And it doesn't solve the issue of ensuring parallax alignment between the display screen and the background, which can only really be done if you're tracking the pupil and compensating accordingly...

[1]: http://www.bbc.co.uk/news/technology-17692256

-----


To be fair, it looks like all the demos that show a person actually wearing the glasses have slightly translucent objects. While it is technically impossible to occlude an object 100% with this technology, I believe the perceived opacity of the augmented-reality object is based on the difference between the intensity of the projected image and the real-world background.

Theoretically, it should be possible to overpower the background light and have an object appear to be totally opaque, which is similar to the principle behind one-way mirrors. That amount of light might be blinding in certain circumstances, though.
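
As a rough illustration of that (numbers made up), "opacity" on an additive display is really just a contrast ratio between the light you add and whatever is already coming from behind it:

    def apparent_contrast(display_nits, background_nits):
        # Viewer sees display light plus background light; the virtual object
        # only reads as "solid" when the added light dominates what leaks
        # through from behind it.
        return (display_nits + background_nits) / background_nits

    print(apparent_contrast(200, 50))      # dim indoor wall behind: 5:1, fairly solid
    print(apparent_contrast(200, 5000))    # bright window / outdoors: ~1.04:1, a faint ghost

    # Staying "solid" against daylight would take thousands of nits, which is
    # exactly the blinding-brightness problem above.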

-----


Theoretically you could render occluding white objects that are strictly brighter than anything in the real scene by overpowering the background light. Dark objects, shadows, or strongly colored objects could not be rendered in this way, as they require blocking incoming light of the wrong brightness or color.

In practice, overpowering background light is not possible with current displays. They're simply not bright enough.

-----


That is a really good point. I seem to remember a really interesting article posted to HN a while ago that went over all the pitfalls in depth. I'm really wishing I had bookmarked it now.

I think I'm going to have to rewatch the promo video in depth to see if they have any unrealistic demos with glasses-wearing users.

I still believe this technology would be really useful for doing Real Work. You could get around the shadow problem by working on a black surface. An interface combining a black drafting desk, a small Bluetooth keyboard, and a 3D-tracked pen with this AR display could be absolutely revolutionary for certain engineering disciplines.

-----


Agreed. 20% of what they're showing might work, the rest is nice Hollywood VFX.

-----


> The videos there are misleading.

I was thinking the same just based on the repeated "there is nothing like this being developed", which I might have believed if I hadn't come across the Oculus Rift just a few hours prior.

-----


I think that claim is actually accurate. The Oculus Rift is much different: it's virtual reality, whereas Meta is augmented reality. They are different things, and augmented reality is much harder. The Oculus Rift completely occludes your entire field of vision and replaces it with something else entirely. Meta modifies the real world with virtual objects.

-----


Couldn't one mount a camera on top of the Oculus Rift (just like with Meta), stream the camera feed into the Rift, and process the video stream to offer augmented reality?

-----


This would work much better. The main reason is latency. The added image will always lag behind the real world by some milliseconds (~16 ms minimum refresh latency at 60 Hz). There is also the image capture and processing latency, which is considerable. This is true for both the Meta (hard augmented reality - hard AR) and the Oculus Rift (virtual reality - VR).

The difference in experience is that for VR, the whole world lags, so an added object will always be in the right place. With hard AR, the added object will be trying to catch up as the image moves.

From what I gather about Google Glass, they mostly aim for soft AR. This is where you add information on top of the real world but it doesn't map directly to a location in the real world: if you move your head, the added image stays the same. This is much more resistant to delays and could be usable for extended periods of time without discomfort.

For a better and more technical writeup:

http://blogs.valvesoftware.com/abrash/why-you-wont-see-hard-...
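
For a rough feel of why the same latency is so much worse for hard AR than for VR (all numbers assumed, purely illustrative):

    head_speed_dps = 120     # a brisk but ordinary head turn, degrees per second
    latency_s = 0.050        # ~50 ms end to end (capture + track + render + display)
    error_deg = head_speed_dps * latency_s
    print(error_deg)         # 6 degrees of misregistration during the turn

    # On a display spanning roughly 23 degrees across ~960 horizontal pixels:
    pixels_per_deg = 960 / 23.0
    print(error_deg * pixels_per_deg)   # ~250 pixels of slip - very visible
    # In VR the whole scene lags together, so the same 6 degrees barely registers.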

-----


> the repeated "there is nothing like this being developed"

Maybe it's meant to include themselves?

-----


To support parent, a somewhat technical writeup:

http://blogs.valvesoftware.com/abrash/why-you-wont-see-hard-...

-----


This technique, as it is shown, is not going to be possible for at least the next 5 years. They either know it and are making a huge fake marketing campaign (for a possible Google acquisition), or they haven't realized it yet and will have a really hard moment when they do.

Couple of components which are not going to work:

1. A see-through glass with the field of view shown in the video just doesn't exist today. The model they are going to use is more like a ‘tiny TV screen floating in your view’ and not even close to the visualization they created.

2. Real-time 3D gesture recognition from point-cloud data on ARM (+ overhead for applications + games, all at low latency)

3. Real-time 3D environment reconstruction from moving point-cloud data (requires something like a quad-core i7 + 32 GB RAM + desktop-class GPU processing)

They want to achieve it on an ARM running from tiny batteries!!!

+ On top of this would come the whole application / game experience, something they seem to be concentrating on, instead of getting the basics right.

4. Then there is latency, which is just not going to be solved for the next 5, but more probably 10, years; just read Michael Abrash's blog about the reality of Augmented Reality glasses (http://blogs.valvesoftware.com/abrash/).

To be clear, I'm not just saying that they won't be able to make what they promise; I'm saying that no one, not even Google, will be able to come near achieving it for at least 5 years, and everyone who is even a little bit into augmented reality knows this.

So personally I find this to be a fake campaign that is just bending the rules of Kickstarter, which require a real-world hardware prototype. So they made a glued-together prototype with a fake visualization, with the whole campaign built around the video.

Nonetheless, the campaign has a chance of being a massive hit, because every sci-fi fan has been dreaming about this for decades and is willing to back it if he has the funds. In that case, it might become one of the biggest Kickstarter failures of all time. The best case for them would be a quick Google acquisition and integration into the Glass team.

-----


I think this stuff is cool, but I think it is going to be held back by a romantic attachment to "real".

Why not use something like the Oculus Rift? Instead of projecting new objects over the top of existing ones, replace the user's field of vision completely.

I'd love that.

One of the guys in our hackerspace (plug: HeatSync Labs, Phoenix AZ) got an Oculus, and we've been talking about how cool it would be to build a "virtual office" of sorts. Sit down with an Oculus and some noise-cancelling headphones, and have an infinitely large workspace.

2 monitors? Or 1000 monitors? It doesn't matter, because your entire field of vision (or your entire environment) is being rendered for you.

--

I think people are very attached to the idea of your eyes seeing the "real" world instead of a re-displayed one. I understand that, but I think that ideology is going to hold AR back for a while.

-----


It's not romantic attachment, it's pragmatism.

It will be a while before the VR experience becomes seamless. Resolution and dynamic range of VR still lag the real world by an order of magnitude. Latency is extremely touchy for scenes of any complexity, especially given hardware that is designed to be wearable. Even if these issues only manifest subconsciously, the differences add up: the human eye has ~100-megapixel resolution and hardwired motion tracking so detailed that the information has to be lossily compressed 100:1 before being shipped across the optic nerve (the first level of feature extraction happens in your eye). Those compression "algorithms" suffer under VR.

People will probably be willing to put up with the quality drop to play games because games are viscerally engaging in a way that allows you to push through sensory obstacles. Workplaces, however, lack that energy and are subject to the constraint of productivity, which I think will make them much harder to penetrate with VR: tiny psychological stresses quickly become intolerable if you're already frustrated by dependency management / tedious busywork / whatever, while the virtual world doesn't actually allow you to do all that much that you couldn't do before. Pro apps will probably take 5-10 years to adjust in a meaningful way, and I doubt that more than a handful of industries will actually be revolutionized by the difference.

In the meantime, AR dramatically helps the smoothness problem while still enabling almost all of the gee-whiz features of VR that would have relevance in a professional setting. Military applications are the obvious exception (full VR potentially offers protection from lasers/irritants/bullets to the face) but even military funding won't be enough to close the gap between VR and AR instantaneously.

In the long term, you're right, VR will win, but given that the "short term" will probably last for decades, I think you're jumping the gun on VR.

-----


> I think people are very attached to the idea of your eyes seeing the "real" world instead of a re-displayed one. I understand that, but I think that ideology is going to hold AR back for a while.

I don't think it's fair to call it "ideology" when the fact is that screens still suck, and they suck hard. Dynamic range, refresh rate, color reproduction and depth perception are all big shortcomings that haven't been solved yet.

Just look at digital cameras. If EVFs aren't even ready to replace conventional optical viewfinders yet, it's utopian to ask people to slap one on their faces 24/7.

-----


>refresh rate

In this age of nanotechnology I find it entirely ridiculous that manufacturers are cost-cutting by multiplexing column and row transistors. LCDs are such awesome tech that you don't even need a refresh rate. It might increase initial complexity, but in the long run it will be worth it.

-----


I do believe the retail version of the Oculus Rift will come with two cameras on the front, and I've heard rumors of a depth camera as well. We'll see. Exciting times we live in, that's for sure.

-----


I think the reason is that people like being able to move around environments they haven't memorized without tripping over things. Do I have to step out of my virtual office to go get a beer from the fridge, or can I get the real-life obstacles that are in the way projected into my virtual office?

Imagine a virtual office with a thousand monitors that is your apartment that has been plopped down in the middle of a forest with the walls knocked out.

-----


> Why not use something like the Oculus Rift?

While the Oculus Rift is cool, I think you're missing the point. VR has been around for quite some time, and while it has its applications, AR, IMHO, is more immediately applicable today. Even if you are selectively reproducing pieces of reality in your VR, that's going to take extra processing power and isn't going to be quite realistic (or real-time) for a long time. AR, on the other hand, is just rife with applications that we can use now. The classic one I always think of is helping someone with Alzheimer's get home by "painting" a path of footsteps on the ground. Try that with an Oculus Rift, or any other VR setup. Almost guaranteed to be cumbersome and dangerous, if it can even be used outside of a lab at all.

-----


Of course that could be used outside of a lab. Look at things like Word Lens.

-----


1) Your original post that I responded to was talking about the Oculus Rift; while it may be possible to use the Oculus Rift outdoors or untethered, it's definitely not currently being developed for that use case.

2) FWIW, Word Lens is described as augmented reality. Also, it has a much narrower FOV, probably doesn't update in real time (and probably doesn't need to), doesn't require wearing a bulky headset, doesn't occlude your entire FOV, and in any case, is not connected to an Oculus Rift.

-----


Word Lens does update in [practically] real time.

And bringing it up was to address your claim that it was computationally infeasible to retransmit video with some of the content changed.

-----


> Word Lens does update in [practically] real time.

And "practically real time" isn't going to cut it for something which occludes your view, that was why I brought that, and my other points up. I didn't claim it was infeasible; I claimed it was currently infeasible to do it for the entire FOV of a human, and with that being true, it was impractical to assume that VR and AR can currently be applied to the same problems. In the future, the distinction between AR and VR may not need to be made, but before that can happen, VR (as it currently is) has a long way to go.

-----


> infinitely large workspace.

I've never considered this. As many terminals as the eye can fit, with a living rainforest as a backdrop. If it doesn't seriously hurt the eyes to do it for programming-session durations, then I'd gladly pay good money for that.

And yes, the ARI comparisons are not lost on me.

-----


That was the first thing I considered. Another commenter here pointed out that it would not be very possible due to the relatively low resolution of the display on the Oculus. Maybe on the next model, though.

-----


My housemates and I have one coming in the mail.

My idea was to make a window compositor for X--the idea being that all open windows would be scattered around the inside of a sphere or hemisphere or perhaps hypercone with the user's head at the center.

You look at a window and slowly turn to face directly forward, and the window blows up and moves into focus. A shake of the head dismisses the window, and a sharp snap to either side cycles through open windows.
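
A minimal sketch of that selection logic (nothing X-specific; window names, directions and the angle threshold are all made up):

    import numpy as np

    # Windows live at fixed directions on a sphere around the user's head;
    # whichever one the forward gaze vector points at (within a tolerance)
    # gets focus.
    windows = {
        "editor":   np.array([ 0.0, 0.0, -1.0]),   # straight ahead
        "terminal": np.array([ 0.7, 0.0, -0.7]),   # off to the right
        "browser":  np.array([-0.7, 0.3, -0.6]),   # up and to the left
    }

    def focused_window(gaze, max_angle_deg=15.0):
        gaze = gaze / np.linalg.norm(gaze)
        best, best_angle = None, max_angle_deg
        for name, direction in windows.items():
            d = direction / np.linalg.norm(direction)
            angle = np.degrees(np.arccos(np.clip(np.dot(gaze, d), -1.0, 1.0)))
            if angle < best_angle:
                best, best_angle = name, angle
        return best

    print(focused_window(np.array([0.65, 0.05, -0.75])))   # -> "terminal"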

Shoot me an email if you'd like to talk more.

EDIT: X programming is fucking arcane. :(

-----


Skip X, jump to QML and QWayland.

http://www.youtube.com/watch?v=_FjuPn7MXMs

Take that and extrapolate. Doesn't seem that "crazy" really. Compiz and KWin both already have window animations you're describing. There was even a Wii-head-tracking plugin for compiz that would give your desktop a third perspective dimension. (Moving your head would reveal the edges/bits of the other windows underneath higher ones).

It'd be a neat demo, but it sounds like a nightmare for daily usability, etc.

-----


I'm quite happy doing it with X/GLX--I'm only interested in chasing one kind of shiny at the moment.

(I reserve the right to eat my words and recant this misguided decision later.)

-----


Is it just me, or is there very little "real software" being shown here? Everything in the video was just "artist's interpretations" of what it "could look like", no?

-----


Yeah, I'm seeing a lot of "what it could be like," in a video that probably cost thousands to produce. The big round $100K number is suspicious to me as well. They're making... a dev kit? For an environment, device, and service platform that doesn't exist (nor the technologies to enable it, just yet), and their budget to create all this is exactly $100K? If so, that's not gonna happen. If not, and this is just cherry on the cake funding, I don't think that's what Kickstarter is "for," as much as Kickstarter can be "for" one thing and not another.

-----


Hey, you're right: $100k is nowhere near enough to build all the supporting technology. Our software development is covered by investment, including YC. The Kickstarter is purely about covering hardware costs, and the $100k threshold is for minimum production runs.

Filming stereoscopic video through the glasses is very complicated, which is why we used concept visualisation to give an idea of how it will feel. We're working on a rig to film through the glasses, but right now there's no way we'll be able to convey how impressive it is in person through live video.

-----


You should really consider being more up-front about that in the video. A little disclaimer along the bottom of "Artist's rendering" or something would go a long way.

-----


Has Kickstarter backed off from their recent stance? As of last September, their position was that "Kickstarter campaigns will be unable to use simulations or design renders to illustrate what a completed product may look like or how it may function. Instead, creators must provide photos or video of prototypes as they exist at the time of posting." -- http://www.kickstarter.com/blog/kickstarter-is-not-a-store

-----


Another thing that bothers me is no mention of license for software. I'm not about to fund a kickstarter for closed source software. I'd login and post a question, but I'm kind of busy with code of my own right now.

-----


This is not their only source of funding. I think the round number is fine since it's basically "hey we could do even cooler stuff if people gave us an extra $100k."

-----


I'm a little bit skeptical of some of the artist renderings here. As a researcher in the computer vision field, I can say that rendering accurately onto arbitrary surfaces is simply nowhere near this precise. It requires an extraordinary amount of scene understanding. Factors like shape, surface normals, illumination, reflectance, etc. all need to be separated. These properties are extremely entangled, and state-of-the-art methods require a great deal of computational power to do significantly worse than what's being shown here.

-----


I own an Epson Moverio, which the Meta 1 glasses appear to be based on. I have a small bit of firsthand experience with hacking on it to do ARish things.

* The Moverio consists of two parts, the glasses and the control box. The two connect via a seemingly proprietary connector. The control box runs Android 2.2, archaic by today's standards. USB host mode was introduced in Android 3.1, so there would be no straightforward way to feed the depth camera's information into the control box.

* Unity3D, which Meta's software stack claims to be using, does run on the control box once you output your Unity project to an Android application. For the app to run, I had to tweak the build settings to support both ARMv6 and ARMv7 (The app failed to start when built for ARMv7 only). This was doable in Unity 3.5.x. However, Unity 4 removes support for ARMv6.

So I'm full of question marks:

* Did the Meta team somehow obtain/reverse-engineer the specifications for the Moverio glasses' connector, plug it into a more powerful device, and ditch the control box?

* Did the Meta team replace the Moverio control box's OS with a more modern version?

* Is Meta 1 stuck with the older Unity 3.5.x?

* Or am I doing it wrong, and is it indeed possible to run Unity 4-built apps on the Moverio control box?

Also, as others have mentioned, the field of view is disappointingly small with this device - just a small window in the middle of your view.

Overall, confused.

-----


Thanks, the dev unit allows software to run on Windows x86. We're not building for Android. Use the latest version of Unity that you wish.

-----


Ah, I see. If so, how did you guys manage to connect the Moverio's glasses to an x86 PC? IIRC the glasses use a proprietary connector, and no info on the specs/protocol of the connector (nor the Aux connector on the back of the control box) is publicly available.

Did you guys receive the specs from Epson directly, receive dev glasses which don't use that connector, or reverse-engineer it yourselves? Is that information confidential or under NDA?

I strongly feel that the ancient control box is holding back the full potential of the glasses and would love to connect different hardware to them. The connector spec/protocol is the only thing preventing that. I'm not very hopeful, but if there's any information on it I would love to know.

-----


To be fair, they do have a real video below the pitch video (http://www.youtube.com/watch?feature=player_embedded&v=o...) which gives a much more honest and realistic look at what's being developed.

This is actually pretty exciting tech, but it's going to be absolutely nothing like what they have to show in the pitch video.

-----


Wow, what completely uninspiring consumerist bullshit. "Hacker's, this one's for you". Right.

-----


It's an SDK, not for the general public, so what's wrong with saying that?

-----


Isn't the camera just Intel's 3D gesture cam? It also has an SDK that integrates very well with Unity. http://software.intel.com/en-us/vcsource/tools/perceptual-co...

The first screenshot from the second video shows exactly what their gesture tracking looks like. When doing the perceptual challenge, this was mainly the stuff we were thinking of as applications for the hardware; funny to see someone now taking it and simply mounting it on glasses.

-----


What I'm not seeing is anything about positioning. I don't see the usual white balls for the camera (or colored ones as used in Sony's Move). So is it relying entirely on the 3D camera and dead reckoning with accelerometers to figure out where the user is? Because that stuff inevitably fails the moment you start walking around the room.

I'm mostly thinking about ARQuake and the like, where the AR objects are walking around the room or hallways rather than being confined to a table in front of you.

-----


The white balls are for a color-separation process for location; the Kinect uses an IR emitter.

The Kinect is perfectly capable of creating sets of objects based on its depth map. Normally it's a "skeleton" in a whole body, but it can just as easily make a skeleton of your hand, with the fingertips automatically tracked and used as activation points.
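
As a rough illustration of the idea (my own simplified heuristic in Python/OpenCV, not Microsoft's or Meta's pipeline): segment whatever is nearest the depth camera as "the hand", then treat convex-hull points of its outline as fingertip candidates.

    import cv2
    import numpy as np

    # Stand-in depth frame in millimetres; a real one would come from the sensor.
    depth = np.random.randint(400, 2000, (240, 320)).astype(np.uint16)

    near = (depth < 600).astype(np.uint8) * 255   # keep pixels closer than 60 cm
    # [-2] picks the contour list under both the OpenCV 3 and 4 return styles.
    contours = cv2.findContours(near, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if contours:
        hand = max(contours, key=cv2.contourArea)   # assume the biggest blob is the hand
        hull = cv2.convexHull(hand)                 # hull vertices ~= fingertip candidates
        print(len(hull), "candidate fingertip points")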

-----


Yes, but the Kinect camera isn't moving around a space. That's what I'm talking about. Figuring out the camera's position in an unknown environment using a depth camera is much more difficult than figuring out the position of objects moving through a static space with a depth camera.

-----


Okay. Yet another hardware Kickstarter with egregious claims? No, thanks.

-----


Because there's so much technical information about this project on the campaign page... There's just no way you could be throwing your money into a hole by backing it.

-----


Interesting to note that Meta is in fact YC-funded: http://allthingsd.com/20130517/meta-wants-to-become-the-next...

-----


Meron (founder) came to my computer vision class the other day to talk to us about Meta. He took the same class at my uni and has since hired some professors. Seemed like a great guy and a great product. I hope this takes off and isn't eaten by Google.

-----


I like it, but they're shooting for the stars here. I'm not sure they'll be able to achieve the quality shown in the video (done with nice Hollywood VFX) in real life. Like others mentioned, latency might be an issue, but more generally the augmented 3D interaction (i.e. spreading a pool apart in an architectural setting) will be difficult to reproduce as shown. Misleading videos. Kickstarter is like the Wild West these days.

-----


Backed. If they can deliver on this, the application potential is completely worth the relatively low up-front risk.

-----


Backed as well. Even the prototype shown in the video below, rough as it may be, is good enough to try a lot of things.

Looks really promising and they are not asking for much.

First thing I've ever backed on Kickstarter, too.

-----


I'm rather skeptical. No real-life videos, only unrealistic concepts. The bold "there's nothing like this being built" statements, and then they use off-the-shelf components.

Sounds like a great recipe for disappointment. They should have at least posted a real-life video (or a realistic rendition).

-----


Does Google Glass have stereo cameras? The specs suggest no:

https://support.google.com/glass/answer/3064128?hl=en

I have to feel v2 will, given the possibilities of applications as demoed here.

-----


Google Glass only outputs to one eye, so 3D vision isn't possible.

3D input is still useful, but less fun.

-----


Furthermore, Glass doesn't put itself 'over' the world, but out of the way - it's not about augmented reality. That's part of why I think its approach is ultimately going to be abandoned.

-----


Depth cameras using infrared projection aren't going to work outside due to solar saturation, so I think Google would be foolish not to embed a second camera and enable stereo vision. It's the only lightweight and low-power way to get depth information in a glasses form factor.

In principle (assuming some off-device computation), you could run SLAM and get a decent estimate of head orientation to then do AR with Google's glasses. Unlikely in the near future, but inevitable at some point.
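
The textbook relation that makes passive stereo attractive here, with assumed numbers just to show the scale (not Glass's actual geometry):

    # depth = focal_length_px * baseline / disparity_px
    focal_px = 600      # focal length in pixels for a small phone-class sensor
    baseline = 0.10     # 10 cm between two cameras on a glasses frame
    for disparity_px in (60, 20, 6, 2):          # matched-feature offset in pixels
        print(disparity_px, "px ->", focal_px * baseline / disparity_px, "m")

    # 60 px -> 1 m, 6 px -> 10 m: precision falls off quickly with range, but at
    # arm's length it's plenty, and there's no IR projector for sunlight to wash out.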

-----


Totally just backed this. I haven't been so excited about a piece of tech in ages. This is truly a game-changer if it works anywhere near as well as they're showing.

-----


Sorry, but it's not going to work anywhere near as well as they showed.

Look at the quick "From the labs" video down the page. It's grainy, messy, and raw. There's simply too much noise for it to look good. Without a significant bump in sensor resolution (it's only 320x240! https://forum.libcinder.org/topic/future-is-here-time-of-fli...), it's not possible to have both smooth and performant results.

-----


I am more optimistic. While they could have done a better job disclaiming the early videos, I don't believe that they are dumb enough to make such exciting claims to an educated audience if they cannot deliver.

I have also seen the Intel depth camera in action first-hand and that alone is a very promising piece of hardware.

-----


Showing a pixel-art 3d bike is a good idea. It's going to appeal to the Minecraft crowd.

Give me something like this that I can run through my MC worlds on and I'm paying.

-----


This reminds me of Minecraft Reality: http://minecraftreality.com/.

-----


I would hate to fight the ender dragon like this.

-----


Consider me skeptical. I predict the camera will not have low enough latency to ever make this device usable.

-----


I'd love a 3D interface to HN.

Upvoted, busy, threads float nearer me.

Flame-fests shrink back from me.

-----


Seriously one of the coolest things I've ever seen. Can't wait until this vision is a reality.

-----


Meta II: a revolutionary meta circular compiler, circa 1967

-----



