
MIT cheetah robot lands the running jump [video] - neverminder
https://www.youtube.com/watch?v=_luhn7TLfWU
======
Coincoin
Is it me or does it seem to have an easier time with the highest obstacle? I
thought it was a fluke, but then it had the same behaviour in free-range mode.

Is it a random result because of the phase of the obstacle in the run cycle?

------
DanBC
This is amazing.

I am so impatient though - I want to scatter bricks on the landing-side of the
jump to see how it copes.

~~~
enraged_camel
It would probably get annoyed and shoot you with its lasers.

------
wayanon
Interesting that the obstacles are always magenta.

~~~
Gravityloss
In a war room somewhere: "Have you seen this video? We have to repaint the
robocheetah obstacles as soon as possible!"

------
cdnsteve
Would be funny to see a group of these performing synchronously at half time.

------
gvozd
That's not a cheetah. That's an electric sheep.

~~~
creativityhurts
You'll scream this when that electric sheep comes after you.

------
Qantourisc
To be fair, this looks more like a mountain goat jump. Impressive nonetheless!

~~~
lucb1e
My first thought :)

------
nosuchthing
Love that ingenuity on their camera dolly

------
kirk21
Love it that a PhD student had to push the camera guy at the end.

------
deegles
Hopefully they'll put a saddle on it! What a way to commute...

------
return0
Is the goal here explicitly to mimic animal locomotion? Does this have unique
advantages over something that rotates/rolls, etc.?

~~~
bovermyer
It's not the animal locomotion per se that's exciting. It's the balance
control.

------
kelvin0
Wow, this is impressive! However I'm still eager to see the matchup against
the MIT Antelope.

------
tlo
I, for one, welcome our new cheetah robot overlords!

------
robmccoll
Amazing algorithms!

But seriously... impressive work :-)

------
vladtaltos
Could Skynet be too far away?

\- with deep neural nets making AI advances in strides, and

\- with robots/drones learning to autonomously navigate,

we should start including Skynet clauses in our source code immediately!

------
SimplyUseless
Robots are coming!

If Moore's law applies to the progress of robotics, we have very few years
left before we will all be slaves.

------
frevd
Terminator, the humble beginnings - looks and feels like rats!

------
jolt
This reminds me of when I was learning to drive. In the beginning I was only
aware of what was immediately in front of the car, and as such I drove very
slowly and had to pay extra attention, because a lot of important stuff was
coming into my field of view all the time.

Now, when I drive I look further down the road, I plan out how to follow the
road, I see the intersection coming up, and I try to anticipate the routes of
the other cars/bikes. That requires far less energy than only processing the
immediate surroundings.

~~~
lsaferite
Well, all the sensory overload you experienced as a novice driver has become
background noise that is handled subconsciously now. That lets you focus on
more strategic planning of your driving. That's one reason I like countries
that mandate a learner sign in cars for new drivers. It lets others around
them understand that they are still getting to grips with that sensory
overload and allows them to act accordingly.

~~~
jolt
I see. Is there an equivalent for robots? Like putting part of the algorithm
into hardware or something? Or is my "algorithm" still "there", and I'm just
not as aware that I'm following it?

~~~
lsaferite
I'd say it's still there, it's just running on a background thread. :)

------
wyclif
It's only a matter of time until one of these things kills a person.

------
rebootthesystem
All of this is neat stuff.

That said, for some reason I am still bothered by the use of laser range
finders. To me it makes it all feel like a parlor trick: the easiest way to
produce machines that can be used in "Oh, wow! Look at that!" videos that
continue to bring in grants.

These robots should use binocular vision and nothing else.

Oh, wait a minute, that's hard, isn't it? Yup.

~~~
sopooneo
Why? Why should they be limited to the same hardware configuration as their
biological analogues?

~~~
rebootthesystem
Because they have to interact with our world, not a lab or a factory.

If you are building a robot for a factory, put limit switches, magnetic
sensors or whatever you want on it.

If, on the other hand, you want to build robots to live, work and interact
with humans they need to be capable of understanding my world the way I do.
Think of a robot interacting with a toddler or a bunch of kids.

This is monumentally harder than scanning in front of the robot with a laser
to detect geometry, measure height and approach speed and then plan a jump.
Much harder.

Not to diminish their work but the math and physics seem almost trivial.
Figure out the x-intercept, width and height of a parabolic path that will
give you enough margin of error not to touch the obstacle. Then do the math on
the time delay between the front and rear legs based on approach speed. Then
plan the gait in order to be able to have the legs at the right point at the
right time. So long as you have a robot that can jump it's a done deal.
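The back-of-the-envelope planning described above (pick a parabola that clears the obstacle with some margin, then time the rear legs off the approach speed) can be sketched in a few lines of Python. The function, the numbers, and the "aim the apex over the obstacle's midpoint" heuristic are all my own illustrative assumptions, not anything from the MIT controller:

```python
import math

def plan_jump(v, obstacle_h, obstacle_w, leg_sep, margin=0.1, g=9.81):
    """Sketch of ballistic jump planning over a box-shaped obstacle.

    v           -- approach speed (m/s)
    obstacle_h  -- obstacle height (m)
    obstacle_w  -- obstacle width along the direction of travel (m)
    leg_sep     -- front-to-rear leg spacing (m)
    margin      -- vertical clearance margin (m)

    Returns (takeoff_dist, vz, rear_delay): how far before the obstacle's
    near edge the front legs should push off, the vertical takeoff speed,
    and how much later the rear legs reach that same takeoff point.
    """
    # Apex of the parabola must clear obstacle height plus margin:
    # vz^2 / (2g) >= h + margin
    vz = math.sqrt(2 * g * (obstacle_h + margin))
    t_up = vz / g                          # time from takeoff to apex
    # Place the apex roughly over the obstacle's midpoint.
    takeoff_dist = v * t_up - obstacle_w / 2
    # At constant approach speed, the rear legs trail by leg_sep / v.
    rear_delay = leg_sep / v
    return takeoff_dist, vz, rear_delay
```

As the parent says, this is first-year physics; the hard part the paragraph glosses over is doing it at speed with real actuators and real sensing noise.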

Again, I know it is more complicated than that, but it doesn't compare to the
degree of sophistication a robot would have to have to manage the real world
with binocular vision. My 9 year old kid can fly remote controlled model
airplanes tooling along at over 60 miles per hour just using his eyes. You
don't need millimeter accuracy laser measurement devices, you need to
understand the world around you in some context.

Context: I built walking robots (not toys, research grade) over 25 years ago.
Today there's virtually no difference in actuation mechanisms and sensors. A
lot of these programs are grant-sucking machines that are reinventing the
wheel rather than making true progress. Here are some of the things we need:

\- True binocular vision systems that can develop an understanding of the
environment to various degrees of sophistication (this is hard)

\- Better actuators. The artificial muscle has yet to be realized. Let your
arm go limp on the table. You can't do that with a robot. You can simulate it,
but it isn't the same thing: you'll actually consume power and spin
gears/pumps very fast to be in "limp and compliant" mode. Real flexibility and
real compliance are critical for robots that need to interact with people and
animals. Every animal on the planet relies on this to interact with the
environment.

\- Better programming paradigms. We are still typing "if" statements and "for"
loops to program intelligent robots. A far greater degree of abstraction is
required to truly advance the art. No, libraries are not a solution. We need
to be able to express concepts to a machine in far more efficient terms. How
do you teach a robot to tie a knot on a rope on a table and have that robot
think about using that same knot on a sailboat or to restrain a dog to a post?
Without telling it that these are options?

\- Better means of communication with machines. Buttons and knobs aren't how
you communicate with your taxi driver or housekeeper. A 5-year-old kid should
be able to command a machine without having to rock the Linux command line.

etc.

I guess my argument is that we already know how to build "parlor trick"
machines. We've known how to do this for quite some time. Any set of decent
mechanical engineers can build a decent walking machine given a reasonable
amount of time. Making it walk and even jump is almost as trivial.

Because of the way these departments are funded the truly hard and interesting
work might not be done or might not see the same degree of funding. Some of
the items I listed above could require 10 to 20 years of solid dedication
before the "Oh Wow! Did you see that!" moment is reached. Most of the funding
out there isn't smart enough to support these kinds of projects. And so the
money goes to the guy who can put a spring-loaded plunger on a little robot
with wheels and show it can jump to the top of a building. You know, first
year college physics, if not high school physics. This does virtually zero to
advance robotics but it sure makes politicians write checks!

------
joeblau
edit: Spoiler Alert!

I just watched Ex-Machina and for some reason, I just pictured the robot
running out of the building after its last test without the safety harness and
setting itself free.

~~~
ionwake
I waited months to watch this and read your spoiler. I don't think your
comment has added any benefit to mankind.

------
dang
Url changed from [http://www.engadget.com/2015/05/29/oh-no-mits-cheetah-
robot-...](http://www.engadget.com/2015/05/29/oh-no-mits-cheetah-robot-can-
jump-over-hurdles-while-running), which points to this.

~~~
5h
I really like that this happens on HN.

A crap gif and half a paragraph add nothing to the video imho.

------
jader201
The left and right front/hind legs moving in sync looks unnatural. I wonder if
this is ultimately the best way to distribute the weight and balance of a
four-legged object, and why living four-legged creatures don't run like this.

Alternatively, would it be better if the robots also ran more life-like, or is
there a benefit (besides the ease and simplicity of engineering the physics)
to the robots running like this? I.e. will they ultimately have the robots
running more life-like?

~~~
Sputum
this is how cheetahs run [http://giphy.com/gifs/animal-
running-7lz6nPd56aHh6](http://giphy.com/gifs/animal-running-7lz6nPd56aHh6)

~~~
jader201
Actually, if you slow it down, the left/right legs are still not quite
perfectly in sync (at least not in this case):

[https://www.youtube.com/watch?v=NuyeVN7PuTM](https://www.youtube.com/watch?v=NuyeVN7PuTM)

~~~
oldpond
Correct, and it's clear the machine does a little correction cycle after it
lands. Very cool though. Now they need to work this into the pathfinding
algorithms.

------
benliong78
I for one welcome our robotic overlord

~~~
spiek
This is such an annoying and useless comment. This same comment is parroted
EVERY time there's an article about robots anywhere on the Internet. How are
you adding anything to the discussion at all?

~~~
rokhayakebe
S/he seems excited and wanted to reinforce that with something more than an
upvote. Frankly, if his/her comment added nothing, yours took away from the
thread because it has a negative and discouraging tone to it.

Be nicer my friend. We are all friends here.

~~~
spiek
That's fine, then reinforce it with a comment that is interesting, and not a
useless cliche.

It's everybody's responsibility to ensure the quality of the discussion
remains high.

------
XCSme
Well, it is very impressive, but still a very long way from what a real animal
can do. What if the obstacle is at the edge of a canyon? It will suicide :D .
Those tests are very restricted, and even once they think the robot is ready,
there will be cases they have missed in which it will fail.

~~~
icebraining
_What if the obstacle is at the edge of a canyon? It will suicide :D_

Not to worry, the drone flying ahead will already have mapped the terrain :)

~~~
Qantourisc
And tagged you as its next kill :p

------
ajbetteridge
So they can spend millions developing the robot, but the best they can do for
tracking while they film from the side is a guy pushing another guy in a big
plastic box on wheels. Way to go geeks!

~~~
tagawa
I don't see the problem. It was quick and easy and got the job done - a nice
avoidance of over-engineering.

~~~
kolbe
It's not a problem. It's amusing. Juxtaposition is a common form of humor, and
whether or not MIT intended the humor, it is funny to see a multimillion-dollar
autonomous robot get filmed with such a primitive technique.

~~~
bduerst
Next up: Throwing wrenches at the robot to see if it can play dodge ball.

------
chinathrow
DARPA again. Great to see more money flowing into military research.

Irony off.

~~~
Cyph0n
Military research is extremely important. The use of military tech in civilian
applications is the reason for a lot of stuff we use daily, as I'm sure you
know.

~~~
ZenoArrow
There's no reason that technological progress from the US government couldn't
be driven by NASA rather than DARPA. I suspect many peace-loving people would
prefer to see that.

~~~
JesseObrien
"No reason", other than the massive budget constraints NASA faces, and has
been facing yearly since the end of the Apollo program.

[https://static.nationalpriorities.org/images/charts/2015/tot...](https://static.nationalpriorities.org/images/charts/2015/total-
desk.png)

See that Science heading there on the chart? It's hard making the world a
better place on a 0.7% budget. DARPA, however, has access to that Military
heading: 16.3%.

~~~
ZenoArrow
Perhaps you're missing my point. I'm suggesting that money from the military
spending budget could be moved over to NASA without losing any ability to
drive forward new technology.

------
vlasev
No one seems to be discussing how this can be improved! Here are a few points:

1\. Real cheetahs don't run like that. The action of both pairs of legs is
staggered for smoother motion and control. Check this out -
[https://www.youtube.com/watch?v=131wvVGjZUc](https://www.youtube.com/watch?v=131wvVGjZUc)

2\. Looking at this jumping-over-obstacles video, it seems like the robot is
lacking flexibility. The real-life cheetah's hind legs go quite a bit
underneath in preparation for push-off. Contrast this with how the robot's
hind legs are a little too far behind, and the robot stutters a bit after
every jump. If the robot could put its hind legs further forward, it would
make for a smoother landing.

~~~
bliti
1\. The goal is not to build a cheetah, but to learn from them. One thing to
consider is that this team is constrained by materials. What works for an
animal will likely not transfer directly due to the difference in materials.
Harmonics are well absorbed by tissue, but the robot may have issues with
them.

2\. How did you come to this conclusion? Have you built a robot that mimics an
animal? If so, could you provide some insight into your discoveries?

Robotics programmer/builder here. Always interested in learning about the work
of others.

~~~
vlasev
To your first point, wouldn't a robot like that benefit from staggering the
action of the legs? Instead of X impact at one point in time, have X/2 impact
at two points in time? Would that create too much strain?

For my conclusion in point two, my insight comes from experience of copying
animal movement. When I trained parkour my friends and I would copy animal
movement from across the whole animal kingdom[1]. Here's an example of
quadrupedal movement[2]. A big problem for us bipeds in trying to run like
quadrupeds is that our legs tend to be big, long and inflexible, while our
arms tend to be much weaker than our legs. The effect of this combination is
to make running properly like an animal which evolved to run on all fours
quite difficult. From my own experience, putting my legs more and more forward
past my arms (i.e. copying the cheetah) made for a much easier time.

If you were to try to run like the robot, with symmetrical action, it would be
extremely uncomfortable, because the impact is large at every step. Staggering
the action allows for a much smoother run. Make any human run like that and
they'll eventually automatically switch to the less impactful staggered run.

[1]:
[https://www.youtube.com/watch?v=Ymg1-Fhl69w](https://www.youtube.com/watch?v=Ymg1-Fhl69w)

[2]:
[https://www.youtube.com/watch?v=W8oh5Xuy7NA](https://www.youtube.com/watch?v=W8oh5Xuy7NA)

~~~
rtkwe
Having the back and front legs work as a pair makes the side-to-side balancing
easier. Having them hit separately would require more control and degrees of
freedom in the legs to keep the robot from oscillating side to side while
running.

------
tugberkk
Aside from being a great development, I am really scared that robotics is
improving this much. It feels like we are creating our own potential enemy.

~~~
methodover
Ugh, I really dislike this sentiment. Even assuming the most extreme
possibility -- that we create a powerful, sentient machine -- this is no
different than what every species on this planet has been doing for a few
billion years now.

Creating life is what we earthlings do. And yeah, maybe that life form you
create goes on to do horrible things. But maybe it goes on to do really good
things. You don't know until you try.

~~~
ZenoArrow
Personally, I'm not concerned with that. I'd be more concerned about powerful
robots controlled by
governments/corporations/anyonewhohasaninterestincontrollingyourbehaviour.

Look at what's happening in Yemen. The population there lives under the fear
of drone strikes by the US (that includes the civilian population). If
governments are prepared to do that with drones, what reason is there to
suspect they'll turn down using something that's ground-based to impose
control?

~~~
xixixao
I wish there was no reason for stupid warfare, but:

Aren't they better off than the Vietnamese who lived under the fear of Napalm
bombardment? (or, for that matter, Japanese fearing the atomic bomb) I don't
think that robotics has significant negative impact on warfare (and I wonder
whether it has a positive impact).

~~~
DanBC
Yes, in one sense people are better off with drones than they would be with
land mines or cluster bombs or "shock and awe" style area-bombing.

Those are (pretty clearly, IMO) war crimes.

But still, it's useful to be worried about the potential "desensitising
effect" of remote warfare. Is someone operating a drone subject to the same
psychological pressure to avoid killing other humans as someone pulling a
trigger? (Turns out that yes, that person probably is, but that their
political superiors possibly aren't).

~~~
jmagoon
Drone operators have the same type of psychological pressure and results as
ground troops: [http://www.nytimes.com/2013/02/23/us/drone-pilots-found-
to-g...](http://www.nytimes.com/2013/02/23/us/drone-pilots-found-to-get-
stress-disorders-much-as-those-in-combat-do.html)

In terms of their superiors, isn't that how warfare has always been?

My guess would be that ground troops replaced by robots would likely still
have a pilot with a "finger on the trigger" for a long time. If the reaction
is this mixed on Hacker News, how do you think extremely conservative military
commanders would feel about "AI"-controlled soldiers?

~~~
omegaham
It doesn't help that drone pilots are treated absolutely miserably by the
command. I had a friend who transferred from our electronics tech job (which
was extremely laid-back and stress-free most of the time) to work in a UAV
command, where the command effectively viewed its pilots as machines to work
until failure. "Oh, another one attempted suicide because his wife left him?
That's cool, tell the monitor that we need an extra body and we'll replace him
when the next boot drop hits. In the meantime, just increase everyone else's
hours from 14 hours a day to 16 hours a day. We aren't the FAA, we don't have
rest requirements."

Incidentally, air traffic control had this same problem in the 90s until too
many people started taking the quick way off the tower.

------
deepnet
Stuart Russell, of AI: A Modern Approach fame (with Norvig), asks us to
consider the ethics of AI research: killer robotics & hunter drones.

[http://www.nature.com/news/robotics-ethics-of-artificial-
int...](http://www.nature.com/news/robotics-ethics-of-artificial-
intelligence-1.17611#/russell)

He calls for a moratorium until we can get a neural net to learn Asimov's 1st
Law.

Should the self-driving car swerve into you to avoid greater moral hazard?

Humour aside, as researchers and roboticists do we have a moral obligation to
obey the 1st Law?

~~~
IgorPartola
> Should the self driving car swerve into you to avoid greater moral hazard?

The self-driving car's first priority should be to protect those inside it.
Anything else is more complicated. I read an article a little while ago about
this topic and it posed a question along the lines of "should a self-driving
car do the thing that'll kill its own passengers, yet save a larger number of
others around it?" Think about how difficult it would be for a self-driving
car to estimate this. Even more importantly, how would two self-driving cars
cooperate in the event of an inevitable accident? Would we allow them to
directly talk to each other, or are they only allowed to interpret each
others' trajectories?

I think the simplest directive is "save your own passengers". That is what
human drivers currently do, and I believe in the long run this would be
effective at saving the optimal number of lives. Besides, if choosing between
a car with this directive and a car that's programmed to sacrifice you to save
someone else, which would you buy?

~~~
anthony_d
I'm not sure about that. If your car had uncontrolled acceleration, for
instance, would you consider using a crowd of people to slow you down, or
would you put it in the ditch? For the sake of argument, that amounts to a
left or right turn.

~~~
IgorPartola
Say there are 10 people in this crowd. The optimal solution for the 11 of us
is obviously to have my car drive off a cliff to the right, not into the crowd
on the left. However, for me, the optimal solution is obviously to not drive
off a cliff.

However, your argument omits a couple of interesting details. First off, if a
car can have uncontrolled acceleration, it can likely have any number of
glitches. It may for example _think_ it's accelerating uncontrollably, and try
to throw me off a cliff. Or it may think that the cliff is a small ditch. Or
it may think that a bunch of balloons tied to a mail box is a crowd. If you
are a programmer, what would you rather debug: code that prevents uncontrolled
acceleration or code responsible for killing the driver by recognizing crowds
and cliffs?

The other detail is whether the metaphorical crowd should even allow me to
buy a car that has logic built into it to drive over them. What if instead of
a cliff and a crowd it's actually two different crowds of different sizes?
What would a human driver do in these cases?

~~~
anthony_d
As a person and a driver I'm willing to say the car should drive off the
cliff.

As a developer I wouldn't buy a car that would drive off the cliff.

It's a tough problem. Personally I would make a quickly/poorly thought out
analysis of how to do the least damage. Being human I'd probably get it wrong.

~~~
bladecatcher
>> As a developer I wouldn't buy a car that would drive off the cliff.

What does being a developer have anything to do with it?

~~~
grkvlt
A developer knows, through hard-won experience, that the algorithm to decide
whether driving over a cliff is better for everyone is _bound_ to have bugs,
and will therefore gladly throw you off a cliff when it is the wrong thing to
do ;(

Non-developers would assume the computer to be (somewhat) infallible.

