
This Preschool is for Robots - jacobsimon
http://www.bloomberg.com/features/2015-preschool-for-robots/#hn
======
an4rchy
This is so cool! Although I'm sure it will take quite a bit of time before
these robots can actually be used in everyday settings.

I was wondering if they could create multiple copies of the robots, give them
all different tasks, and then combine the knowledge to speed up the process.
However, due to the randomness of learning, I would assume each task has to be
taught to one single robot or it might not be as efficient?

Also, I'm curious to know: would a robot need to relearn a skill if a part
like the arm changes shape, because the basic building block it relies on has
changed?

~~~
PeterisP
For your question, "would a robot need to relearn a skill if a part like the
arm changes shape": for the currently common style of machine learning, which
is also what the article implies - deep convolutional neural networks - the
situation is that it would need _some_ relearning/retraining, but much less
than starting from scratch.

When you need to train a system for problem X with limited data available, a
possible approach is to train a network on some related problem Y where data
is plentiful, and then use that network (with e.g. the final layer removed) as
input to a smallish network that solves the problem you actually need, but
piggybacks on all the patterns discovered by the original network.

In a similar manner, if you had a system that works with one style of arm, you
could train a (much smaller) neural network that "converts" that system's
outputs to the outputs needed for another style of arm.
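The "remove the final layer, train a small head on top" idea can be sketched in a few lines of numpy. Everything here is a made-up toy (a random projection standing in for the frozen pretrained layers, synthetic data for problem X), just to show the shape of the approach:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" network trained on a data-rich related problem Y;
# a fixed random projection stands in for its frozen layers here.
W_pretrained = rng.normal(size=(16, 32))

def features(x):
    # Frozen layers with the final layer removed: reuse the learned patterns.
    return np.tanh(x @ W_pretrained)

# Tiny dataset for the actual problem X (limited data).
X = rng.normal(size=(64, 16))
true_w = rng.normal(size=(32, 1))
y = features(X) @ true_w  # pretend the target depends on the shared features

# Train only a small "head" on top of the frozen features (least squares
# standing in for gradient descent on the smallish network).
F = features(X)
w_head, *_ = np.linalg.lstsq(F, y, rcond=None)

loss = float(np.mean((F @ w_head - y) ** 2))
print(f"head-only training error: {loss:.2e}")
```

The point is that only the small head is fit on the scarce data; all the heavy feature extraction is inherited from the network trained on problem Y.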

~~~
bargl
This may be a simple question. Can you take "snapshots" of the AI and then
load multiple robots with the same model learned in a classroom like this?

I.e., do the training with a prototype robot, then send the learned behavior
to a factory, which can ship out robots pre-taught with what the prototype
learned?

I can't see a reason this wouldn't work, but maybe I'm missing something.

~~~
robotresearcher
Yes, absolutely. With almost all implementations, any learned/trained
parameters are stored in a file or database and can be backed-up, restored, or
transferred to a duplicate.

Some systems are sensitive to precise calibrations of sensors and actuators,
which must be done per-device. But usually this stuff is low-level detail that
is decoupled from the AI stuff.
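The snapshot/restore workflow described above can be sketched with numpy's archive format; the parameter names and shapes are invented stand-ins for whatever the real trained model stores:

```python
import os
import tempfile

import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for the parameters learned on the prototype robot.
trained = {"layer1": rng.normal(size=(8, 4)), "layer2": rng.normal(size=(4, 2))}

# Snapshot the learned parameters to a file on the prototype...
path = os.path.join(tempfile.mkdtemp(), "snapshot.npz")
np.savez(path, **trained)

# ...then restore them on a duplicate robot at the factory.
archive = np.load(path)
restored = {name: archive[name] for name in archive.files}

assert all(np.array_equal(trained[k], restored[k]) for k in trained)
print("duplicate loaded identical parameters")
```

Per-device sensor/actuator calibration would live outside such a snapshot, which is exactly why it has to be redone on each physical unit.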

------
natosaichek
Instead of having only a single physical robot, what if you had a virtual
(unity3d?) robot? I imagine you could speed up the learning process immensely
if you didn't have to have a ~$200k robot manually performing the tasks. Just
do the physics simulation and fake the sensors and other inputs. I'm sure
that's a pain to set up, but probably way more effective and cheaper in the
long run than trying to train a single PR2.

This was done a long time ago on a smaller scale:

[http://www.demo.cs.brandeis.edu/pr/neural_controllers/evo_co...](http://www.demo.cs.brandeis.edu/pr/neural_controllers/evo_control.html)

[https://youtu.be/0fg1Jj8ehHs](https://youtu.be/0fg1Jj8ehHs)

[http://youtu.be/n6uMBveqbyE](http://youtu.be/n6uMBveqbyE)

I'd love to see something like that tried with more modern computers / neural
algorithms. Does anyone know places that are actually doing this?

~~~
PeterisP
I'm not sure we can do physics simulations in real time that are good enough
for the behavior of a manipulator's grip on e.g. a bottle cap, and the way the
cap catches against the bottle as you push it around, to match reality.
Perhaps we can, but not with the common physics engines used in unity3d games
and physics demos.

In any case, dealing with the actual physical actuators and sensors, their
limitations and feedback, is the hard part of the problem - if you have a
solution that works perfectly in a virtual environment, then IMHO you have
achieved something like 10% of the progress needed to do the same in real
life.

~~~
yongelee
[https://www.youtube.com/watch?v=x8Fo2slT2WA](https://www.youtube.com/watch?v=x8Fo2slT2WA)

Any 3d software is more than capable of creating perfect physics simulations
in real time. I'm not sure what you mean by matching reality, but in a physics
simulation the only reality that matters is the virtual simulation, which can
be done perfectly inside a program.

~~~
PeterisP
If the mind learns from the simulation that sending output X to the robotic
hand will successfully screw a bottle cap on the bottle, but in real life it
turns out that the same outputs result in the bottle cap slipping and falling
out of the hand, then the simulation was not appropriate.

You can do physics simulations that match some properties of the reality you
are modeling. However, for robotics, you would need very detailed simulations
of the exact hardware you'd be using, and it is very difficult to simulate the
relevant aspects of it - a CAD drawing of the hand isn't enough. If some joint
wiggles slightly, or the friction of a 'finger' surface differs from what
you'd expect, the behavior will differ and the simulation will not match
reality.

------
em3rgent0rdr
Reminds me of the TV series "Extant" where a human family adopts an android
child in order to give the android the real human experience.

