
Apply HN: The Mirror AI - akosenkov
The idea is to build a neural interface for Artificial Intelligence in Virtual Reality.

Sounds crazy, but what's the point? Looking at the problem of General AI, recent and future advancements in the field are going to run up against a lack of training environments. I strongly believe that in the future one will need tools to train AI on how to interact with the "actual" 3D world. That includes awareness of space (3D sight and sound), time, and touch.

While essential for general AI development, this would also greatly improve the efficiency of weak AI. Given a learned notion of 3D abstractions and the corresponding real-world interactions, one would be able to extract much more meaning from 2D projections or textual information, not to mention human/AI interactions, which require a strong understanding of emotions and causal relationships.

My background is purely technical, with a focus on High Performance Computing and Big Data.
My most recent affiliations: Swiss Federal Institute of Technology Zurich, Intel Corporation, IARPA.
======
jorgemf
I think you have in mind a robot in the real world. I do believe that a real
AI would have to be embodied, i.e. have a body it can use to sense and
understand things by itself (example: you can teach an AI to recognize a
chair, but you cannot teach it that it can use a rock as a chair unless it
goes and sits on one).

Emotions are one of the things you might need to create something resembling
human intelligence, but there are more things you would need as well
(abstract representation, abstract reasoning, imagination,
self-representation, etc.). Take a look at this research on machine
consciousness:
[http://www.conscious-robots.com/consscale/](http://www.conscious-robots.com/consscale/)

Unfortunately, research is more focused on the kind of AI that leads to
better algorithms, such as better image recognition. The part about a real
robot intelligence which can feel is mostly missing, and few people work on
it.

~~~
akosenkov
Robots are extremely expensive to build, scale, and adopt just for the sake
of training. With VR it becomes much more feasible.

Plus, I wouldn't be the only one developing the "core". I want to provide
the groundwork for this direction.

And yeap, the actual "mission" behind this idea is closer to "a real robot
intelligence which can feel" and reason.

------
ryporter
Are you suggesting that we improve AI by directly tapping into the human
brain's 3D representations? (Maybe I'm misunderstanding what you mean by
"interface".)

We're still at the beginning of learning how the brain works. Can you lay
out a plausible roadmap for how your product will be realized within the
next 10 years? 20 years?

In short, yes, it does sound crazy.

~~~
akosenkov
In short, yes.

By interface I mean a "cage" of neurons connected to inputs and outputs
(similarly to humans) that can host different deep neural networks and be
used as a training environment.

The basic interface can be realised within weeks; it would include
stereo-imaging input and basic motor tasks. But the more complex it gets,
the more work is needed, of course (not to mention development of the
hosted AI models).
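
To make the "cage" idea concrete, here is a minimal sketch of what the
interface loop could look like: stereo frames from the VR scene go in, a
hosted network maps them to motor commands. This is my own illustration
under assumed names and shapes, not the actual implementation.

    import numpy as np

    class NeuralInterface:
        """Hypothetical wrapper: hosts a network and wires it to VR I/O."""

        def __init__(self, hosted_net):
            self.hosted_net = hosted_net  # any callable: observation -> action

        def step(self, left_eye, right_eye):
            # Stack the two camera frames into one observation as a stand-in
            # for "3D sight"; sound and touch channels would be concatenated
            # in the same way.
            observation = np.concatenate([left_eye.ravel(), right_eye.ravel()])
            return self.hosted_net(observation)

    # Example hosted "network": a random linear policy from pixels to 4 motors.
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(4, 2 * 64 * 64)) * 0.01
    interface = NeuralInterface(lambda obs: weights @ obs)

    left = rng.random((64, 64))   # placeholder left VR camera frame
    right = rng.random((64, 64))  # placeholder right VR camera frame
    print(interface.step(left, right))  # 4 motor activations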

------
nxzero
When would real be real enough? Didn't DARPA already do this for some of their
AI work?

~~~
akosenkov
Real is real when you make it real ;) I think that DARPA's research
(robotics) lacks the scale (which VR and mass adoption can bring).

~~~
nxzero
Anything like OpenAI's Gym? [https://gym.openai.com](https://gym.openai.com)

~~~
akosenkov
One might think of it as adding additional degrees of freedom.

The goal is similar, but unlike OpenAI's Gym or DeepMind I want to construct
a training ground in VR that is much closer to reality and also allows
interactions with humans (e.g. so the AI can be trained through them as
well); see the Gym-style sketch below.

It's a logical step after image recognition and playing simple games.
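
To show what "closer to reality" could mean in Gym terms, here is a rough
sketch of a hypothetical Gym-style VR environment: stereo vision as the
observation, continuous motor commands as the action. The class name, space
shapes, and reward hook are assumptions for illustration, not an existing
environment.

    import gym
    import numpy as np
    from gym import spaces

    class MirrorVREnv(gym.Env):
        """Hypothetical VR training ground: stereo frames in, motor commands out."""

        def __init__(self):
            # Observation: left/right 64x64 RGB camera frames (the "3D sight" part).
            self.observation_space = spaces.Box(
                low=0, high=255, shape=(2, 64, 64, 3), dtype=np.uint8)
            # Action: e.g. head rotation (3) + hand position (3) + grip (1).
            self.action_space = spaces.Box(
                low=-1.0, high=1.0, shape=(7,), dtype=np.float32)

        def reset(self):
            # Placeholder: a real implementation would return frames from the
            # VR renderer.
            return self.observation_space.sample()

        def step(self, action):
            obs = self.observation_space.sample()  # next frame from the VR scene
            reward = 0.0  # e.g. feedback from objects or a human spectator
            done = False
            return obs, reward, done, {}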

------
brudgers
What are the status and roadmap in regard to implementation?

~~~
akosenkov
Speaking honestly, the idea is at the level of early experiments. I can get
the basic stereo-imaging and motor inputs/outputs working within a month.
Next would be work on a feedback system from the environment and
surrounding objects, including spectator interactions; this would take
another month or a bit more. Finally, another month (and more) would
consist of experiments on embedding neural nets and scaling the training
process. Moving quickly, 3 months is enough to build a basis.

