There's something very pleasing in knowing that guys like Carmack aren't just punching out tomorrow's FPS engines, but doing out-of-the-box stuff like this. I wonder if we're entering some kind of tech golden age. The post-PC stuff, SpaceX/Tesla, Kindle/Nook, Win8, smartphones, and so on: things have gotten weird quickly.
I'm still waiting to wake up one day and see a $999 home robot that can do everything from clean the bathroom to walk the dog. If this happened tomorrow I wouldn't be that surprised.
And it looks like he used some tech from Armadillo in this unit. The software for the gyro sensors comes straight out of the Armadillo rocket tech. Awesome.
(see the youtube video mentioned in another comment)
Well, the software, yes. IIRC, the Armadillo flight electronics use a Crossbow IMU (which must be very accurate, endure a whole lot of vibration, and drift very little), which costs $100k.
I had that weird/awesome realization moment yesterday after seeing all these x86 tablets and ARM laptops all running Win8 / Win8RT. Lines are getting quite blurry; it's a great time to be a tech consumer.
We have fully reusable rocket ships that let us put people in orbit at less than $200k per person (SpaceX reusable Falcon 9 / Dragon, likely to see fruition within the next 5 to 15 years or so).
We have handheld computers and smartphones with the CPU power of an entire server rack today, and with petabytes of non-volatile storage that's nearly as fast as RAM (memristors, possibly RSFQ logic, possibly on a circa-20-year timeline).
We have fully automated factories (the grandchildren of today's CNC machines and 3D printers): you upload some data, press a few buttons, and not too long after, a fully assembled complex product (a tablet computer, an automobile, an excavator, a spaceship) comes out the other end, as many as you want. And then you start self-replicating such factories this way. That's probably going to happen within the next 50 years, if not sooner.
And this is hardly everything. The future is a crazy place.
> The singularity specifically requires AIs that can build smarter AIs.
Not really. The singularity requires an intelligence explosion. A perfectly acceptable alternative route is human-intelligence augmentation, via some or all of biological hacking (genetic or otherwise), chemical hacking, and tool use. The former two are plausible but not yet off the ground, but regarding tool-based augmentation I think we're seeing meaningful, if preliminary, progress. (Then again, I think literacy counts in this bin.)
I think the key question is whether you think the self-amplifying returns of technology that we're already seeing will extend sustainably to amplifying intelligence (either human or artificial).
"specifically requires AIs that can build smarter AIs"
The trendy "rapture of the nerds" version of the singularity does.
The original definition is technology advancing at a faster rate than humans are capable of integrating and managing it. The idea emerged from efforts to identify existential threats to humanity.
It's fascinating watching a warning about future dangers get repackaged by AI focused futurists into a utopian neo-religion.
Well, specifically, it refers to a period of technological advancement so rapid that we entirely lose the ability to make predictions about the other side. (This is by analogy with the mathematical definition of a singularity, a point at which your equations break down, by way of the physics application of the concept to black holes, out of which information cannot flow.)
"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make."
- I. J. Good, 1965
"One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."
- Stanisław Ulam, 1958 (referring to a conversation with John von Neumann)
Good's "intelligence explosion" concept is the basis of the modern singularity religion but not the origin of the term 'Singularity' used to mean a technological singularity. Good never uses the term.
So your quote makes my point, doesn't it? That a utopian version of the intelligence explosion has replaced the original meaning of the technological singularity; one that favours the views of the AI researchers who went into futurism when the research money dried up.
Edit: I didn't know this but when looking up the date of that Ulam quote I learned that Vernor Vinge seems to be the first person to use the term singularity in the context of Good's intelligence explosion. In 1983. In the pages of that wonderful bastion of great scifi and pseudoscience that I loved as a kid: Omni Magazine.
However, I am not convinced the schools are entirely contradictory, even with their strong claims.
For example,
- Accelerating Change school: this is mainly a (historical) trajectory claim based on a positive feedback loop of technology. It does not seem to make any strong (maximum) claims about trajectories even beyond the point of strong AI.
- Event Horizon school: this is a forecastability claim that the repercussions of AI or intelligence enhancement significantly beyond current human intelligence are unknowable until you get there.
- Intelligence Explosion school: this is a positive feedback claim that the most you can tell when human intelligence is surpassed is that further intelligence gains, be it by enhancement or AI, will feedback on itself to create further and faster change.
These claims seem to agree with each other on fundamental aspects of technology and intelligence.
In addition, in my opinion, it would not take too much imagination to combine the separate claims into a unified Singularity model.
That said, the school delineation is itself a useful tool.
Agreed. However, I don't think we really want AI. We want IA (Intelligence Augmentation). AI leads to slavery or at best pet-ness. IA leads to godhood (maybe?)
The slavery-inducing AI is more like Pandora's box. Inevitably someone will try to create and harness such tech for a competitive advantage, but in doing so they will have created something they cannot control...
That's a Foom, or hard takeoff singularity. Soft takeoff singularity can be done via emulated minds, superhuman but non-recursively-self-improving de novo AI, etc.
It's been, what, 17+ years since I bought my Virtual I/O iGlasses? Amazing how long it takes some technologies to catch on, if ever. The RIFT is certainly an improvement (1280×800 vs 640×480); we'll see if the "visual acuity" twist and improved tracking gives VR the push it needs. Hopefully Carmack (and if anyone can, it's him) can solve whatever the limiting nuance is.
Then again, the Newton came out around the same time, and took just as long to be reborn in a viable "killer app" form (iPad). Between Carmack, RIFT, and Google Glass - with "smartphone" power making the needed CPU cycles portable - maybe we'll finally see VR happen.
ETA: "wireless" is very important. Even if the host CPU is just a few feet away, either it must be wireless or on the user. Getting tangled in wiring is really annoying. Trust me.
"It’s worth noting that the prototype Carmack is demoing wasn’t made by him, but by another Texan builder of VR headsets. It’s using the same tech and principles as Carmack’s own version, which was unfortunately unable to make the trip to E3."
It sounds like he is making his own headset as well. The demo was not on his, though.
That was my impression. It sounds like he is really just looking at all options to figure out what does and does not work and why. To that extent, he is building and modifying hardware to experiment while also working with manufacturers.
That's just one of the many HMD's that Carmack has played with and hacked on extensively. It's quite clear from the interview that Carmack has done quite a bit of hardware hacking as well as software development.
The trouble with stereoscopic curved displays seems to be keeping them calibrated to your eyes properly; slight shifts of the head clamp mess things up pretty badly.
Nice interview with John demoing and discussing his work:
"So the way this has gone, is, I decided to treat myself after Rage on there, I bought a, you know, a head mount for $1,500 or so. It's a little cottage industry, there's a few places that do these things with integration, and... it sucked. It was really bad. It was everything that I expected it to be, that it, it didn't look like there had been any progress in 20 years, since... or 15 years since I had looked at these things last.
But, when I got that, then, I started taking it apart, both literally and figuratively, to go ahead and see what are all of the aspects on here, on the sensing side, where, when I wrote my own test software for this, using their library to go ahead and get the head-tracking, it had 100ms of latency, and this just didn't make any sense to me. Why is it so bad, did they need to have so much filtering?
And, I wound up, I took the software that I wrote for Armadillo Aerospace for our rocket control with fiber-optic gyros, I took that gyro integration software and took raw values from the micro-machine sensors on there, and all the sudden it got way better. You know I can only guess that they may have filtering from 10 years ago when they had really noisy sensors, and they're a lot better now."
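A rough sketch of the difference he's describing, with made-up numbers (the 1 kHz sample rate, the 90 deg/s turn rate, and the filter constant are all assumptions for illustration, not Armadillo's actual values): integrating the raw rate samples directly tracks the motion, while heavy low-pass filtering of the kind an old tracker library might apply trails far behind.

```python
def integrate_gyro(angle, rate_dps, dt):
    """Dead-reckon orientation by integrating a raw angular-rate sample."""
    return angle + rate_dps * dt

def low_pass(prev, sample, alpha=0.001):
    """Heavy smoothing like a legacy tracker library might apply;
    a tiny alpha means the output trails the true rate by many samples."""
    return prev + alpha * (sample - prev)

# Simulate a constant 90 deg/s head turn sampled at 1 kHz for one second.
dt, rate = 0.001, 90.0
raw_angle, filtered_rate = 0.0, 0.0
for _ in range(1000):
    raw_angle = integrate_gyro(raw_angle, rate, dt)
    filtered_rate = low_pass(filtered_rate, rate)

print(abs(raw_angle - 90.0) < 1e-6)  # True: integration tracks the full turn
print(filtered_rate < rate)          # True: the smoothed rate is still lagging
```

With clean modern MEMS sensors the smoothing buys you little and costs you perceived latency, which matches his guess about filtering tuned for 10-year-old noisy hardware.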
[...]
"Resolution is gonna get better. We're gonna get to 120Hz displays, I'm haranguing all the display vendors about this. [...] Removing the latency, one of the cases that I've been making that shows the ridiculousness of it all, where, I can measure 50ms of delay on this, and I do that by, I have a program that switches colors when I hit a button, and you put a high-speed camera here, you mash it, and you wait, you count frames until it switches, and it's 50ms for that over a very fast display. That's more time than it takes to send a packet from America to England, you know. That's just ridiculous! But it's because router people and switch people care about latency, they know it's important so they don't pile it up. Display people don't know yet, but I'm trying to educate all of them about that."
[...]
"And this field of view (90 degree horizontal field of view, 110 degree vertical field of view), you couldn't get in a $10,000 head-mount display; actually, you still can't today, it's that much higher."
[...]
"The head mount display stuff is, it makes even this 8 year old game a fundamentally different experience. It really is like nothing you've ever played like that... 10 times more graphics power doesn't give you that level of intensity."
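The frame-counting latency measurement he describes a couple of quotes up reduces to simple arithmetic once you have the frame count; a tiny sketch (the 240 fps camera speed is an assumed example, not necessarily what he used):

```python
def latency_ms(frames_counted, camera_fps):
    """Frames counted on a high-speed camera between the button press
    and the on-screen color flip, converted to milliseconds of latency."""
    return frames_counted * 1000.0 / camera_fps

# A 240 fps camera counting 12 frames between press and flip
# gives the 50 ms figure he quotes for a "very fast display".
print(latency_ms(12, 240))  # 50.0
```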
I've visited the Stanford Virtual Reality Lab and tried their $50k HMD and high-precision trackers. I agree with John that latency is currently the #1 problem for VR / AR.
Interestingly, the #2 problem is something you don't realize is a problem until you try: "vergence-accommodation conflicts", meaning that the display fails to render optical depth. The solution is "Fixed-Viewpoint Volumetric 3D": http://quora.com/Volumetric-3D , which is what I'm creating as my academic career and my startup Vergence Labs.
Thanks for the info. You will definitely be interested in this part of John's interview, where he discusses depth and how he approached the problem. It starts at around 9:20 and ends at around 11:25:
He seems to have not heard of the latest Volumetric 3D display technology: "time-multiplexing": http://bankslab.berkeley.edu/projects/projectlinks/fastswitc... (the bottommost diagram, labeled "Switchable lens volumetric display.", made practical in 2008. The diagram labeled "Illustration of 3 mirrors display" is a bulkier 2004 technology.)
I love how scientific he is when he talks and evidently when he does his work. He said something along the lines of "if you can't decide which way to do something, do it both ways and see which is better".
Refreshing to hear a talk like that amid the usual discussions around languages and platform wars.
Here's to hoping someone with a name in the industry can finally push through decent consumer VR, and that this will get some attention at E3...
I bought the Sony HMZ-T1 earlier this month hoping that it would be what I was always looking for in terms of home VR (despite the lack of headtracking, which I was going to add with trackIR). However, it was horribly uncomfortable to wear and just didn't give the immersive feel I was looking for.
Looking at the Oculus site, it mentions that there is a Kickstarter campaign, but when I click through to the forum post about it, it's dated back in 2009. I'd be first to contribute to a decent HMD Kickstarter campaign...
Carmack's been pushing companies like this for nearly two decades. In particular, he has been a significant force in the evolution of the video card industry. The hardware companies know and respect him. If anyone can do it, he can.
So, imagine you've got acceptable VR HMDs. What software would you write for them? Why not start today?
I want an IDE that's like the Bubbles from _Signal to Noise_
I want seamless telepresence for driving mechs
I'm learning faster now because I can have multiple physics and math books along with note-taking software on my iPad. What do good HMDs do for education?
Oh man, the memories. I remember playing the original Doom with a virtual reality helmet at a gaming convention back in 1994 or so, when I was a little kid. It was a tiny convention in a small Finnish town, but I remember seeing the same kind of helmets in photographs from CeBITs and E3s in computer magazines.
Does anyone remember a consumer VR helmet from circa 1993? It came as a helmet with displays of roughly "mode 13h" resolution plus a hand-held hockey puck with a few buttons. The helmet and the puck had accelerometers and/or inclinometers and/or gyroscopes or something to detect movement.
Head turning affected in-game camera turn and tilting the puck was movement. Puck buttons were fire, change weapon, etc. Proper FPS aiming would have been difficult but Doom's projectile collision detection is 2d anyway :)
I had one years ago that came from an IT auction for the princely sum of £40. That included basically everything but the PC including the big nest thing you had to stand in. I set it up to work with AlphaWorld on a windows 95 machine. It required a bit of work to get it to accept straight VGA signals and it wasn't the stereoscopic variant. It sat in my living room for a year and mainly collected junk.
The experience both rocked and sucked. The setup was dialled into UUNet at 14.4k and you could count the pixels but it was still slightly cool. It made your eyes and neck hurt badly though.
Ended up on eBay in '00 and sold for £1 because "buyer collects".
If you follow Carmack's twitter feed, you'll see him mentioning his frustration over the last year at some of the display tech under the hood as he was exploring the space.
Virtual Reality is really the only thing I've been looking forward to in gaming. Once this becomes a reality, I will be genuinely excited about it again.
I have been wondering for quite some time how much better the thing would be if I just swapped out the puny 640×whatever LCD for something that's 1080p, or better yet had 1080p per eye. I think the focusing adjustments seem to be the coolest part of Carmack's stuff that older projects didn't have. I just hope someone quickly comes up with a gyro-based control scheme, and a new Descent.