Hacker News
John Carmack is making a virtual reality headset (pcgamer.com)
244 points by otibom on June 6, 2012 | 63 comments


There's something very pleasing in knowing that guys like Carmack aren't just punching out tomorrow's FPS engines, but doing out-of-the-box stuff like this. I wonder if we're entering some kind of tech golden age. The post-PC stuff, SpaceX/Tesla, Kindle/Nook, Win8, smartphones, etc.: things have gotten weird quickly.

I'm still waiting to wake up one day and see a $999 home robot that can do everything from clean the bathroom to walk the dog. If this happened tomorrow I wouldn't be that surprised.


Sounds like you may not be familiar with Armadillo Aerospace yet...


Yeah, Carmack has a rocket company: http://www.armadilloaerospace.com/


And it looks like he used some tech from Armadillo in this unit. The software for the gyro sensors comes straight out of the Armadillo rocket tech. Awesome. (See the YouTube video mentioned in another comment.)


Well, the software. IIRC, the Armadillo flight electronics use a Crossbow IMU (which must be very accurate, endure a whole lot of vibration, and drift very little), which costs $100k.


Or the fact that Carmack has driven a lot of innovation in 3D hardware, for that matter. (Tim Sweeney likewise.)


I had that weird/awesome realization moment yesterday after seeing all these x86 tablets and ARM laptops all running Win8 / Win8 RT. The lines are getting quite blurry; it's a great time to be a tech consumer.


Entering? This is it.


Ain't seen nothing yet.

Just wait until:

We have fully reusable rocket ships that let us put people in orbit at less than $200k per person (SpaceX reusable Falcon 9 / Dragon, likely to see fruition within the next 5 to 15 years or so).

We have handheld computers and smartphones with the CPU power of an entire server rack today, and with petabytes of non-volatile storage that's nearly as fast as RAM (memristors, possibly RSFQ logic, possibly on a circa-20-year timeline).

We have fully automated factories (the grandchildren of today's CNC machines and 3D printers): you upload some data, press a few buttons, and not too long after, a fully assembled complex product (a tablet computer, an automobile, an excavator, a spaceship) comes out the other end, as many as you want. And then you start self-replicating such factories this way. That's probably going to happen within the next 50 years, if not sooner.

And this is hardly everything. The future is a crazy place.


The present is a crazy place that we pretend is normal. The future will be another crazy place, and we'll still pretend it's normal.


No, this isn't it. We're entering an age of higher order technology, where each advancement leads to faster advancement.

In short, we are approaching what many have called the singularity.


The singularity specifically requires AIs that can build smarter AIs. I'm not sure any of our current progress is really leading to true AI.


> The singularity specifically requires AIs that can build smarter AIs.

Not really. The singularity requires an intelligence explosion. A perfectly acceptable alternative route is human-intelligence augmentation, via some or all of biological hacking (genetic or otherwise), chemical hacking, and tool use. The former two are plausible but not yet off the ground; regarding tool-based augmentation, I think we're seeing meaningful, if preliminary, progress. (Then again, I think literacy counts in this bin.)

I think the key question is whether you think the self-amplifying returns of technology that we're already seeing will extend sustainably to amplifying intelligence (either human or artificial).


"specifically requires AIs that can build smarter AIs"

The trendy "rapture of the nerds" version of the singularity does.

The original definition is technology advancing at a faster rate than humans are capable of integrating and managing. The concept came out of attempts to identify existential threats to humanity.

It's fascinating watching a warning about future dangers get repackaged by AI-focused futurists into a utopian neo-religion.


Well, specifically it refers to a period of technological advancement so rapid that we entirely lose the ability to make predictions about the other side. (This is by analogy with the mathematical definition of a singularity, a point at which your equations break down, by way of the physics application of the concept to black holes, out of which information cannot flow.)


Yes, that is stating it more clearly than I did.


No, it's always been about AI:

"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make." - I. J. Good, 1965


"One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."

- Stanisław Ulam, 1958 (referring to a conversation with John von Neumann)

Good's "intelligence explosion" concept is the basis of the modern singularity religion but not the origin of the term 'Singularity' used to mean a technological singularity. Good never uses the term.

So your quote makes my point, doesn't it? A utopian version of the intelligence explosion has replaced the original meaning of the technological singularity, one that favours the views of the AI researchers who went into futurism when the research money dried up.

Edit: I didn't know this but when looking up the date of that Ulam quote I learned that Vernor Vinge seems to be the first person to use the term singularity in the context of Good's intelligence explosion. In 1983. In the pages of that wonderful bastion of great scifi and pseudoscience that I loved as a kid: Omni Magazine.


The term "singularity" is very overloaded. The parent is using Kurzweil's definition. You're using something like Good/Yudkowsky's.

http://yudkowsky.net/singularity/schools


Useful link, thanks.

However, I am not convinced the schools are entirely contradictory, even with their strong claims.

For example,

- Accelerating Change school: this is mainly a (historical) trajectory claim based on the positive feedback loop of technology. It does not seem to make any strong (maximum) claims even about trajectories beyond the point of strong AI.

- Event Horizon school: this is a forecastability claim that the repercussions of AI or intelligence enhancement significantly beyond current human intelligence are unknowable until you get there.

- Intelligence Explosion school: this is a positive feedback claim that the most you can tell, once human intelligence is surpassed, is that further intelligence gains, whether by enhancement or AI, will feed back on themselves to create further and faster change.

These claims seem to agree with each other on fundamental aspects of technology and intelligence.

In addition, in my opinion, it would not take too much imagination to combine the separate claims into a unified Singularity model.

That said, the school delineation is itself a useful tool.


Agreed. However, I don't think we really want AI. We want IA (Intelligence Augmentation). AI leads to slavery or at best pet-ness. IA leads to godhood (maybe?)


The slavery-inducing AI is more like Pandora's box. Inevitably someone will try to create and harness such tech for a competitive advantage, but they will have created something they cannot control...


That's a "foom", or hard-takeoff singularity. A soft-takeoff singularity can be reached via emulated minds, superhuman but non-recursively-self-improving de novo AI, etc.


This is the forum thread where Carmack and others have been discussing the project for a couple of months. Interesting read:

http://www.mtbs3d.com/phpBB/viewtopic.php?f=120&t=14777&...


Wow, how did you find that?


Lots of procrastination :)


No, John Carmack is developing software for the Oculus RIFT, which is being developed by Palmer Luckey. http://oculusvr.com/?page_id=2

It is a pretty amazing piece of hardware, though. Over a 90-degree field of view for $500, rather than the $10,000+ it still costs from anywhere else.


It's been, what, 17+ years since I bought my Virtual I/O iGlasses? Amazing how long it takes some technologies to catch on, if ever. The RIFT is certainly an improvement (1280×800 vs 640×480); we'll see if the "visual acuity" twist and improved tracking give VR the push it needs. Hopefully Carmack (and if anyone can, it's him) can solve whatever the limiting nuance is.

Then again, the Newton came out around the same time, and took just as long to be reborn in a viable "killer app" form (iPad). Between Carmack, RIFT, and Google Glass - with "smartphone" power making the needed CPU cycles portable - maybe we'll finally see VR happen.
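(A rough back-of-envelope on that resolution point, using my own numbers rather than anything from the article: 1280×800 split across two eyes is roughly 640 horizontal pixels per eye over a roughly 90-degree field of view, or about 7 pixels per degree. 20/20 acuity is usually quoted at about one arcminute, i.e. around 60 pixels per degree, so even the improved panel is still close to an order of magnitude short of "retinal" sharpness.)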


ETA: "wireless" is very important. Even if the host CPU is just a few feet away, either it must be wireless or on the user. Getting tangled in wiring is really annoying. Trust me.


I get that augmented reality needs to be portable for most applications, but I'm not so sure VR needs portable CPU power.

Other than that I agree with you.


"It’s worth noting that the prototype Carmack is demoing wasn’t made by him, but by another Texan builder of VR headsets. It’s using the same tech and principles as Carmack’s own version, which was unfortunately unable to make the trip to E3."

It sounds like he is making his own headset as well. The demo was not on his, though.


That was my impression. It sounds like he is really just looking at all options to figure out what does and does not work and why. To that extent, he is building and modifying hardware to experiment while also working with manufacturers.


That's just one of the many HMDs that Carmack has played with and hacked on extensively. It's quite clear from the interview that Carmack has done quite a bit of hardware hacking as well as software development.


The Sensics piSight was the best available last time I cared to look, in the $100,000+ range =/

http://sensics.com/products/head-mounted-displays/pisight-ul...

The trouble with stereoscopic curved displays seems to be keeping them calibrated to your eyes properly; slight shifts of the head clamp mess things up pretty badly.


Nice interview with John demoing and discussing his work:

"So the way this has gone, is, I decided to treat myself after Rage on there, I bought a, you know, a head mount for $1,500 or so. It's a little cottage industry, there's a few places that do these things with integration, and... it sucked. It was really bad. It was everything that I expected it to be, that it, it didn't look like there had been any progress in 20 years, since... or 15 years since I had looked at these things last.

But, when I got that, then, I started taking it apart, both literally and figuratively, to go ahead and see what are all of the aspects on here, on the sensing side, where, when I wrote my own test software for this, using their library to go ahead and get the head-tracking, it had 100ms of latency, and this just didn't make any sense to me. Why is it so bad, did they need to have so much filtering?

And, I wound up, I took the software that I wrote for Armadillo Aerospace for our rocket control with fiber-optic gyros, I took that gyro integration software and took raw values from the micro-machine sensors on there, and all the sudden it got way better. You know I can only guess that they may have filtering from 10 years ago when they had really noisy sensors, and they're a lot better now."
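For readers curious what "took that gyro integration software and took raw values" amounts to in practice, here is a minimal sketch of integrating raw gyro rates into a head orientation. This is only an illustration of the general technique, not Carmack's Armadillo code; the names are invented, and real trackers add slow drift correction from the accelerometer/magnetometer on top of this.

    // Minimal sketch: integrate raw gyro angular rates (rad/s) into an
    // orientation quaternion each sample. Illustrative only -- not the
    // Armadillo Aerospace code; names and the update scheme are generic.
    #include <cmath>

    struct Quat { float w, x, y, z; };

    Quat integrateGyro(Quat q, float gx, float gy, float gz, float dt)
    {
        // First-order quaternion derivative: q_dot = 0.5 * q * (0, gx, gy, gz)
        Quat d = {
            0.5f * (-q.x * gx - q.y * gy - q.z * gz),
            0.5f * ( q.w * gx + q.y * gz - q.z * gy),
            0.5f * ( q.w * gy - q.x * gz + q.z * gx),
            0.5f * ( q.w * gz + q.x * gy - q.y * gx)
        };
        q.w += d.w * dt;  q.x += d.x * dt;
        q.y += d.y * dt;  q.z += d.z * dt;

        // Renormalize so the quaternion stays a valid rotation.
        float m = std::sqrt(q.w*q.w + q.x*q.x + q.y*q.y + q.z*q.z);
        q.w /= m;  q.x /= m;  q.y /= m;  q.z /= m;
        return q;
    }

The point is that each raw sample updates the orientation immediately, so added latency stays at roughly one sample period; drift gets corrected slowly out-of-band instead of by heavy low-pass filtering of the whole signal.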

[...]

"Resolution is gonna get better. We're gonna get to 120Hz displays, I'm haranguing all the display vendors about this. [...] Removing the latency, one of the cases that I've been making that shows the ridiculousness of it all, where, I can measure 50ms of delay on this, and I do that by, I have a program that switches colors when I hit a button, and you put a high-speed camera here, you mash it, and you wait, you count frames until it switches, and it's 50ms for that over a very fast display. That's more time than it takes to send a packet from America to England, you know. That's just ridiculous! But it's because router people and switch people care about latency, they know it's important so they don't pile it up. Display people don't know yet, but I'm trying to educate all of them about that."
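The button-to-photon test he describes is easy to reproduce. Here is a rough sketch of the idea, assuming SDL2 for the window and input; it is not Carmack's actual test program, just an illustration: flip the screen color the instant a key arrives, film the keyboard and the display together with a high-speed camera, and count frames between the two events.

    // Hedged sketch of a button-to-photon latency tester (SDL2 assumed).
    // Film the keyboard and this window with a high-speed camera, press
    // space, and count camera frames until the window flips color.
    #include <SDL.h>

    int main(int, char **)
    {
        SDL_Init(SDL_INIT_VIDEO);
        SDL_Window *win = SDL_CreateWindow("latency", SDL_WINDOWPOS_CENTERED,
                                           SDL_WINDOWPOS_CENTERED, 640, 480, 0);
        SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);

        bool white = false, running = true;
        while (running) {
            SDL_Event e;
            while (SDL_PollEvent(&e)) {
                if (e.type == SDL_QUIT) running = false;
                if (e.type == SDL_KEYDOWN && e.key.keysym.sym == SDLK_SPACE)
                    white = !white;                  // flip the moment the key arrives
            }
            Uint8 c = white ? 255 : 0;
            SDL_SetRenderDrawColor(ren, c, c, c, 255);
            SDL_RenderClear(ren);
            SDL_RenderPresent(ren);                  // photons show up some frames later
        }
        SDL_DestroyRenderer(ren);
        SDL_DestroyWindow(win);
        SDL_Quit();
        return 0;
    }

At 50 ms of measured delay, a 120 fps camera would show roughly six frames between the keypress and the color change.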

[...]

"And this field of view (90 degree horizontal field of view, 110 degree vertical field of view), you couldn't get in a $10,000 head-mount display; actually, you still can't today, it's that much higher."

[...]

"The head mount display stuff is, it makes even this 8 year old game a fundamentally different experience. It really is like nothing you've ever played liked that... 10 times more graphics power doesn't give you that level of intensity."

https://www.youtube.com/watch?v=NYa8kirsUfg


I've visited the Stanford Virtual Reality Lab and tried their $50k HMD and high-precision trackers. I agree with John that latency is currently the #1 problem for VR / AR.

Interestingly, the #2 problem is something you don't realize is a problem until you try: "vergence-accommodation conflicts", meaning that the display fails to render optical depth. The solution is "Fixed-Viewpoint Volumetric 3D": http://quora.com/Volumetric-3D , which is what I'm working on as my academic career and at my startup, Vergence Labs.


Thanks for the info. You will definitely be interested in this part of John's interview, where he discusses depth and how he approached the problem. It starts at around 9:20 and ends at around 11:25:

https://www.youtube.com/watch?v=NYa8kirsUfg#t=9m20s


Oh wait John does talk about focus! https://www.youtube.com/watch?v=NYa8kirsUfg#t=7m32s

He seems not to have heard of the latest volumetric 3D display technology, "time-multiplexing": http://bankslab.berkeley.edu/projects/projectlinks/fastswitc... (the bottommost diagram, labeled "Switchable lens volumetric display", made practical in 2008. The diagram labeled "Illustration of 3 mirrors display" is a bulkier 2004 technology.)


Oh! That explains why he lamented that a transatlantic ping is faster than pushing a pixel to screen: http://news.ycombinator.com/item?id=3914638


Here's another interview, by Kotaku, with more detail on the hacking process Carmack went through:

http://kotaku.com/5916210/carmack-being-carmack-a-dozen-minu...


Thanks, this actually shows the helmet in action. I think I'm gonna buy this kit.


Wow, Carmack looks like a passionate, geeky 16 year old. It's very endearing for such a legend.


I love how scientific he is when he talks and evidently when he does his work. He said something along the lines of "if you can't decide which way to do something, do it both ways and see which is better".

Refreshing to hear talk like that amongst discussions of languages and platform wars.


He always looks like that, it's great :) You should see him demoing the iPhone version of his engine: https://www.youtube.com/watch?v=keu4GiTGQ6M


I'll warrant his ability to tap into his inner passionate, geeky 16-year-old is what has made him a legend in the first place.


Yep, at a recent QuakeCon he stated that he's more passionate and having more fun now than ever. Great guy.


It's like he's just doing a science project. That's totally amazing.


I don't think it's the author's job to tell the reader he can't understand what Carmack's saying.


No kidding - this guy doesn't understand refresh rates and resolutions?


The hardware kit he's talking about is the Oculus RIFT. Kickstarter coming this month!

http://oculusvr.com/?page_id=2


Here's to hoping someone with a name in the industry can finally push through decent consumer VR, and that this will get some attention at E3...

I bought the Sony HMZ-T1 earlier this month hoping that it would be what I was always looking for in terms of home VR (despite the lack of headtracking, which I was going to add with trackIR). However, it was horribly uncomfortable to wear and just didn't give the immersive feel I was looking for.

The Oculus site mentions that there is a Kickstarter campaign, but when I click through to a forum post about it, the date he mentioned it was back in 2009. I'd be the first to contribute to a decent HMD Kickstarter campaign...


Carmack's been pushing companies like this for nearly two decades. In particular, he has been a significant force in the evolution of the video card industry. The hardware companies know and respect him. If anyone can do it, he can.


VALVe is also working on something similar: http://blogs.valvesoftware.com/abrash/valve-how-i-got-here-w...


Can someone filter out the low-frequency noise in these videos?


So, imagine you've got acceptable VR HMDs. What software would you write for them? Why not start today?

I want an IDE that's like the Bubbles from _Signal to Noise_

I want seamless telepresence for driving mechs

I'm learning faster now because I can have multiple physics and math books along with note-taking software on my iPad. What do good HMDs do for education?

...


Oh man, the memories. I remember playing the original Doom with a virtual reality helmet at a gaming convention back in 1994 or so, when I was a little kid. It was a tiny convention in a small Finnish town, but I remember seeing the same kind of helmets in photographs from CeBITs and E3s in computer magazines.

Does anyone remember a consumer VR helmet from circa 1993? It had displays of roughly "mode 13h" resolution and a handheld hockey puck with a few buttons. The helmet and the puck had accelerometers and/or inclinometers and/or gyroscopes or something to detect movement.

Head turning controlled the in-game camera turn, and tilting the puck was movement. Puck buttons were fire, change weapon, etc. Proper FPS aiming would have been difficult, but Doom's projectile collision detection is 2D anyway :)
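As a rough illustration of why that control scheme maps so cleanly onto Doom, here is a hypothetical sketch of the sensor-to-input mapping. None of this is the actual 1993 driver; the structs, field names, and scale factors are invented for the example.

    // Hypothetical mapping of helmet yaw and puck tilt onto Doom-style input.
    // Everything here (names, sensitivities) is invented for illustration.
    struct HeadSensor { float yawDegrees; };                // helmet orientation
    struct Puck       { float pitch, roll; int buttons; };  // hand-held puck

    struct PlayerInput {
        float turnDegrees;   // Doom aiming is horizontal-only, so yaw is enough
        float forwardMove;   // tilt puck forward/back to walk
        float strafeMove;    // tilt puck left/right to strafe
        bool  fire;
    };

    PlayerInput mapSensors(const HeadSensor &head, const Puck &puck,
                           float previousYaw)
    {
        PlayerInput in;
        in.turnDegrees = head.yawDegrees - previousYaw;  // turn by head-yaw delta
        in.forwardMove = puck.pitch * 0.05f;             // arbitrary sensitivity
        in.strafeMove  = puck.roll  * 0.05f;
        in.fire        = (puck.buttons & 0x1) != 0;
        return in;
    }

Because Doom's projectile collisions effectively ignore the vertical axis, head yaw alone covers aiming, which is presumably why such a simple rig was playable.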


I had one years ago that came from an IT auction for the princely sum of £40. That included basically everything but the PC, including the big nest thing you had to stand in. I set it up to work with AlphaWorld on a Windows 95 machine. It required a bit of work to get it to accept straight VGA signals, and it wasn't the stereoscopic variant. It sat in my living room for a year and mainly collected junk.

The experience both rocked and sucked. The setup was dialled into UUNet at 14.4k and you could count the pixels but it was still slightly cool. It made your eyes and neck hurt badly though.

Ended up on eBay in '00 and sold for £1 because "buyer collects".


So now Carmack is working on headsets, and Valve is working on wearable computing[1].

That's some pretty interesting support.

[1] http://blogs.valvesoftware.com/abrash/valve-how-i-got-here-w...


If you follow Carmack's Twitter feed, you'll have seen him mentioning his frustration over the last year with some of the display tech under the hood as he explored the space.

http://twitter.com/#!/ID_AA_Carmack


Virtual Reality is really the only thing I've been looking forward to in gaming. Once this becomes a reality, I will be genuinely excited about it again.


For those of you who follow his Twitter account, this is nothing new, but it's still cool that it's getting some official coverage. :)


Why does this Flash player peg my CPU at 100%?


Like the VFX1 I have?

I have been wondering for quite some time how much better the thing would be if I just swapped out the puny 640x-whatever LCD for something that's 1080p, or better yet had 1080p per eye. The focusing adjustments seem to be the coolest part of Carmack's stuff that older projects didn't have. I just hope someone quickly comes up with a gyro-based control scheme, and a new Descent.



