This Preschool is for Robots (bloomberg.com)
22 points by jacobsimon on Sept 8, 2015 | 11 comments



This is so cool! Although I'm sure it will take quite a bit of time before these robots can actually be used in everyday settings.

I was wondering if they could create multiple copies of the robots, give them all different tasks, and then combine the knowledge to speed up the process. However, due to the randomness of learning, I would assume each task has to be taught to a single robot or it might not be as efficient?

Also, I'm curious to know: would a robot need to relearn a skill if a part like the arm changes shape, because the basic building block on which it relies has changed?


For your question, "would a robot need to relearn a skill if a part like the arm changes shape": for the style of machine learning currently in common use, which the article also implies - deep convolutional neural networks - it would need some relearning/retraining, but much less than starting from scratch.

When you need to train a system for problem X that has limited data available, a possible approach is to train a network on some related problem Y where data is plentiful, and then use that network (with e.g. the final layer removed) as input to a smallish network that solves the problem you actually need, piggybacking on all the patterns discovered by the original network.
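
To make that concrete, here's a minimal transfer-learning sketch in PyTorch (my choice of framework - the thread doesn't name one, and all the sizes and data are made up); the "base" network stands in for the one trained on data-rich problem Y, and only the small head is trained for problem X:

    import torch
    import torch.nn as nn

    # Pretend this was already trained on data-rich problem Y.
    base = nn.Sequential(
        nn.Linear(64, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, 10),              # final layer, specific to Y
    )

    # Drop the final layer and freeze everything that remains.
    features = nn.Sequential(*list(base.children())[:-1])
    for p in features.parameters():
        p.requires_grad = False

    # Small head for problem X, piggybacking on Y's learned patterns.
    head = nn.Linear(128, 3)
    optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

    # One toy training step on made-up problem-X data.
    x, target = torch.randn(8, 64), torch.randn(8, 3)
    loss = nn.MSELoss()(head(features(x)), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()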

In a similar manner, if you had a system that works on one style of arm, you could use it when training a (much smaller) neural network that "converts" its outputs into the outputs needed for another style of arm.
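
A sketch of that converter idea, again in PyTorch and entirely hypothetical - a small network learns to map commands for arm style A into equivalent commands for arm style B from paired examples (7 outputs chosen arbitrarily, e.g. a 7-joint arm):

    import torch
    import torch.nn as nn

    # Maps commands for arm style A to equivalent commands for arm style B.
    adapter = nn.Sequential(nn.Linear(7, 32), nn.ReLU(), nn.Linear(32, 7))
    opt = torch.optim.Adam(adapter.parameters(), lr=1e-3)

    # Toy paired data: commands that reach the same pose on each arm.
    a_cmds, b_cmds = torch.randn(16, 7), torch.randn(16, 7)
    loss = nn.functional.mse_loss(adapter(a_cmds), b_cmds)
    opt.zero_grad()
    loss.backward()
    opt.step()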


This may be a simple question. Can you take "snapshots" of the AI and then load multiple robots with the same model learned in a classroom like this?

I.e., do the training with a prototype robot, then send the learned behavior to a factory, where they can ship pre-taught robots with what the prototype learned?

I can't see a reason this wouldn't work, but maybe I'm missing something.


Yes, absolutely. With almost all implementations, the learned/trained parameters are stored in a file or database and can be backed up, restored, or transferred to a duplicate.

Some systems are sensitive to the precise calibration of sensors and actuators, which must be done per device. But usually this is low-level detail that is decoupled from the AI stuff.
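
As a sketch of the snapshot idea (PyTorch chosen for illustration; the file name and architecture are invented): train once on the prototype, serialize the parameters, and load them into the same architecture on each factory robot:

    import torch
    import torch.nn as nn

    def make_policy():
        # The same architecture on the prototype and every factory robot.
        return nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

    prototype = make_policy()
    # ... training happens here, on the prototype ...
    torch.save(prototype.state_dict(), "prototype_policy.pt")

    # On each factory robot: same architecture, load the snapshot.
    factory_robot = make_policy()
    factory_robot.load_state_dict(torch.load("prototype_policy.pt"))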


Instead of having only a single physical robot, what if you had a virtual (unity3d?) robot? I imagine you could speed up the learning process immensely if you didn't need a ~$200k robot physically performing the tasks - just do the physics simulation and fake the sensors and other inputs. I'm sure that's a pain to set up, but probably far more effective and cheaper in the long run than trying to train a single PR2.
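
Something like this toy loop is the idea - everything below (the fake physics step, the fake sensor, the trivial controller) is a made-up stand-in, not anything from the article:

    import random

    def fake_physics_step(state, action):
        # Stand-in for a real engine: crude noisy integration.
        pos, vel = state
        vel += action * 0.1 + random.gauss(0, 0.01)   # fake actuator
        pos += vel * 0.1
        return (pos, vel)

    def fake_sensor(state):
        return state[0] + random.gauss(0, 0.005)      # fake noisy encoder

    def evaluate(controller, steps=200, target=1.0):
        state = (0.0, 0.0)
        for _ in range(steps):
            action = controller(fake_sensor(state), target)
            state = fake_physics_step(state, action)
        return abs(target - state[0])                 # final position error

    # A trivial proportional controller as the policy under test.
    print(evaluate(lambda reading, target: 0.5 * (target - reading)))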

This was done a long time ago on a smaller scale:

http://www.demo.cs.brandeis.edu/pr/neural_controllers/evo_co...

https://youtu.be/0fg1Jj8ehHs

http://youtu.be/n6uMBveqbyE

I'd love to see something like that tried with more modern computers / neural algorithms. Does anyone know places that are actually doing this?


I'm not sure we can do physics simulations in real time that are good enough for the behavior of a manipulator's grip on e.g. a bottle cap, and the way the cap catches against the bottle as you push it around, to match reality. Perhaps we can, but not with the common physics engines used in unity3d games and physics demos.

In any case, dealing with the actual physical actuators and sensors, their limitations, and their feedback is the hard part of the problem - if you have a solution that works perfectly in a virtual environment, then IMHO you have achieved something like 10% of the progress needed to do the same in real life.


My guess is that you don't need something perfect to achieve 90% success. Maybe you switch to real-world physical training for the last 10%, but the first 90% (or whatever) of the neural network training - learning which 'mental' control corresponds to which 'muscle', getting the grip close to the right place with close to the right force, getting an approximation of sensor feedback into the network (with some noise added, perhaps) - should get you a long way there. I mean, you can see the terrible simulation they used for that Asimo robot neural-net training - not anywhere close to a unity simulator. I did just happen across this: http://gazebosim.org/ - which seems like it's basically designed for these sorts of things.
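
A sketch of that staged approach, with all names and numbers invented (real_robot_step is a hypothetical hardware backend): run the same training loop against a cheap simulated backend for the bulk of the episodes, with sensor noise injected so the network doesn't overfit to the sim's too-clean physics, then fine-tune against the real hardware:

    import random

    def train(policy, env_step, episodes, sensor_noise=0.0):
        # Same loop for either backend; the only difference is which
        # env_step is plugged in and how much sensor noise is added.
        for _ in range(episodes):
            obs = 0.0
            for _ in range(50):
                noisy_obs = obs + random.gauss(0, sensor_noise)
                obs = env_step(policy(noisy_obs))

    simulated_step = lambda action: 0.9 * action + random.gauss(0, 0.001)
    policy = lambda obs: 1.0 - obs                    # toy stand-in policy

    train(policy, simulated_step, episodes=10000, sensor_noise=0.02)
    # train(policy, real_robot_step, episodes=100)    # final fine-tuning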


So what if you trained the robot to know when it needed real-life training? Give it the option to train in a simulated environment or the real environment, but grade it on performance in the real environment. I bet it would try to do most things in the virtual environment first and then in the physical one, since virtual training would (could.../should?) be faster. And it would be able to learn what sorts of things in the virtual environment don't map well to real life. Heck, it might even learn to generally compensate for flaws in the 3d simulation.


https://www.youtube.com/watch?v=x8Fo2slT2WA

Any 3d software is more than capable of creating perfect physics simulations in real time. I'm not sure what you mean by matching reality, but in a physics simulation the only reality that matters is the virtual one, which can be done perfectly inside a program.


If the mind learns from the simulation that sending output X to the robotic hand will successfully screw a bottle cap onto the bottle, but in real life the same outputs result in the bottle cap slipping and falling out of the hand, then the simulation was not appropriate.

You can do physics simulations that match some properties of the reality you are modeling. However, for robotics you would need very detailed simulations of the exact hardware you'd be using, and it is very difficult to simulate the relevant aspects of it; a CAD drawing of the hand isn't enough. When some joint wiggles slightly, or the friction of a 'finger' surface is different than you'd expect, it will cause different behavior and the simulation will not match reality.


Reminds me of the TV series "Extant", where a human family adopts an android child in order to give the android a real human experience.



