
Underactuated Robotics - jeffreyrogers
http://underactuated.csail.mit.edu/underactuated.html
======
Animats
Interesting. I used to work on legged locomotion in the 1990s.[1]

The notes point out that locomotion on the flat is a special case. Too much
work on legged locomotion assumes flat ground. On the flat, balance dominates
the problem. Once you get off a flat surface, slip/traction control dominates.

Slip control is like "ABS for feet". You have to keep the forces applied
parallel to the ground below the point where slip starts. That changes the
shape of the problem. Classically, most robot control is positional. Slip
control is in force space. Until you have slip control, hill climbing will not
work. So the first step is to constrain those forces to stay below the break-
loose point.
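The "ABS for feet" idea can be sketched as a friction-cone clamp on the commanded contact force. This is only an illustrative sketch; the function name, friction coefficient, and safety margin are assumptions, not values from any particular controller:

```python
import numpy as np

def clamp_to_friction_cone(f_tangential, f_normal, mu, margin=0.8):
    """Scale the ground-parallel force back so it stays below the slip
    ("break-loose") threshold mu * f_normal, with a safety margin."""
    limit = margin * mu * f_normal            # max tangential force before slip
    mag = np.linalg.norm(f_tangential)
    if mag > limit:
        f_tangential = f_tangential * (limit / mag)  # back off, ABS-style
    return f_tangential

# Example: mu = 0.6 on a 100 N normal load -> 48 N usable with an 80% margin
f = clamp_to_friction_cone(np.array([50.0, 30.0]), 100.0, 0.6)
```

A real controller would then map the clamped force through the leg Jacobian to joint torques, which is where the force-space (rather than position-space) framing comes in.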

I pointed out in the 1990s that legs with three joints allow manipulating the
contact force vector and the position independently.[2] This is visible once
you watch people climbing hills, and even clearer with horses, where the leg
bones are closer to being of equal length.

Most legged robots don't make fast starts and stops or fast turns. Even the
Boston Dynamics machines usually start by trotting in place and then shifting
to forward. Motion with high accelerations is traction-limited.

The first step is like ABS, constraining the forces below the break-loose
point. You need that because in the real world bad stuff is going to happen
and recovery involves backing off the forces until traction is regained. The
second step is considering forces when planning, so that movements get near
the limits of traction but don't exceed them. This is where you can begin to
do more aggressive movements.

The lesson notes are finally going in the right direction, looking at this as
a two-point boundary value problem. Most previous work has focused on finding
some expression that measures stability and maintaining that. If you want
agility, you have to give up stability maintenance throughout the gait cycle.
You need good landings. Everything else is secondary.

You have a set of constraints that apply at a landing, as a foot touches down
- be within slip tolerance on forces, joints not too close to limits, impact
not too high. And, importantly, the situation must be within the basket that
allows stability recovery during the ground contact phase. Land, stabilize,
launch, reposition for landing while in air, repeat.
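The touchdown constraints above can be written as a simple feasibility check. All thresholds here are illustrative guesses, not numbers from a real machine, and the "stability-recoverable basket" test is reduced to a stub comment because in practice it would be a capture-region test on the full body state:

```python
def landing_feasible(td, mu=0.6, margin=0.8, joint_frac=0.9, max_impact=500.0):
    """Check the touchdown constraints: slip tolerance on forces,
    joints not too close to limits, impact not too high.
    (A capture-region test for stability recovery is omitted.)"""
    within_slip = td["tangential_force"] <= margin * mu * td["normal_force"]
    within_joints = all(abs(q) <= joint_frac * q_max
                        for q, q_max in td["joints"])   # (angle, limit) pairs
    within_impact = td["impact_force"] <= max_impact
    return within_slip and within_joints and within_impact
```

A planner would evaluate this over candidate touchdown states while the foot is still in the air, and only commit to landings that pass.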

Most work focuses on stabilization. That's only part of the problem. What to
do in the air is classic rocket science trajectory planning. Launch control is
mostly force-limited, and that's when you get to apply big forces and get big
accelerations.

How far ahead do you plan? One landing ahead for basic locomotion, two
landings ahead for athletics. If you plan two landings ahead, the stability
criterion for the first landing can be relaxed somewhat. For example, you might
accept a residual yaw rotation, since it can be corrected at the next landing.

This stuff is cool, but there's no market. It was fun to work on, though. In
the end, I sold much of the technology to a game middleware company, so it
worked out OK.

[1] https://www.youtube.com/watch?v=kc5n0iTw-NU

[2] http://www.animats.com/papers/articulated/articulated.html

~~~
tomcam
I've been thinking about this stuff for decades but never saw anything like
this. I think you're the first person to nail the three-joint thing. Super
impressive, thanks.

------
theothermkn
Another version of this course (I _think_ it was an OCW version on YouTube)
had a video of a fish holding position in a stream. There was an obstacle in
the stream, and the fish swam up into its wake to save energy. Neat. Then they
showed a video of a dead fish being towed through a simulated stream by a
line. It exhibited "swimming" behavior that could be dismissed as just
flapping in the current right up until the _dead_ fish swam up into the lee of
the nearby obstacle for shelter.

It's amazing how much adaptation and complex behavior can be embedded into an
underactuated system. A fish's dead body "knows" how to save energy when
swimming against a current!

~~~
colanderman
I'm curious what the results of that experiment would be with a towed sphere.
My gut says it too would find the lee, simply because that's a locally stable,
energetically favorable configuration of the system.

------
ArtWomb
This looks very good. It's actually a stealth optimization textbook ;)

Relevant recent work from Google AI: building a dynamic model of the world
from pixel observations only

https://ai.googleblog.com/2019/02/introducing-planet-deep-planning.html

------
bitL
I took the course in 2015, when it was Matlab-only and Russ was working on a
C++ version; it was one of the best courses in robotics I've taken (together
with Udacity's Self-Driving Car ND). It also had a simple version of Boston
Dynamics' Atlas robot that was massively improved recently. Does anyone know
if the course contains new content related to the recent advances at BD? Is
the optimization code now compatible with ROS? Thanks!

~~~
sgillen
All the new optimization code has C++ and/or python (2.7) bindings.

I don't think TRI has made ROS wrappers, but it could conceivably be done
without much pain.

Not sure there is much in there about new stuff from BD. There is some deep RL
stuff now though, not really sure how much the course has changed since 2015.

------
Isamu
The course on edx is here:

https://www.edx.org/course/underactuated-robotics-mitx-6-832x-0

~~~
plumeria
Looks like it's archived. There are video lectures available at MIT's OCW:
https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-832-underactuated-robotics-spring-2009/video-lectures/

------
ssimoni
This is somewhat relevant if you are interested in underactuated robots in
production. One of our customers, Tokyo Kitty [1], a nightclub in Cincinnati,
uses two underactuated robots to deliver drinks to private rooms without
spilling. We use convex optimization (chapter 10) to avoid spills; the
bartenders and patrons love it [2].

[1] https://www.facebook.com/thattokyobar/

[2] https://www.youtube.com/watch?v=n_PeLAlVpzQ

------
etaioinshrdlu
This makes me wonder: imagine you have a quadcopter-type drone or something,
where some of the propellers are mounted on inverted pendulums, or double-
inverted pendulums, or something equally wildly unstable.

Could one then learn to control it with deep learning, and maybe gain some
benefits from doing so? Like gaining the rapid maneuverability that birds
have.

~~~
auxym
You don't need an inverted pendulum to make an aircraft unstable. See the
Grumman X-29: its forward-swept wing made it aerodynamically unstable, and
piloting it was only made possible by feedback control software.

https://en.wikipedia.org/wiki/Grumman_X-29

------
gler1024
Relating to the current deep RL hotness... If I'm training a network to
control a walker/swimmer, will simply adding a penalty on energy expenditure,
on top of whatever the goal may be (e.g. maximise forwards distance), lead to
underactuation?
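One common way to write that penalty is to subtract an effort term from the task reward. The function name and `energy_weight` value here are hypothetical tuning choices, not from any specific benchmark, though MuJoCo-style locomotion tasks use similar terms:

```python
import numpy as np

def step_reward(forward_delta, torques, joint_vels, energy_weight=0.005):
    """Forward-progress reward minus an energy-expenditure penalty.
    energy_weight is a tuning knob: too small and the penalty does
    nothing, too large and the walker learns to stand still."""
    power = np.abs(np.asarray(torques) * np.asarray(joint_vels))  # |tau * qdot|
    return forward_delta - energy_weight * float(np.sum(power))
```

Note this shapes the learned gait toward efficiency; whether that counts as "leading to underactuation" depends on whether the plant itself has fewer actuators than degrees of freedom.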

------
mario0b1
This looks really really interesting. Thank you for posting this!

Right now I am trying to build a quadruped robot and I'm still a bit scared of
the mathematics, but it's awesome and I'll fight my way through it.

------
amelius
I'm wondering if/when all the control theory will be replaced by some simple
generic black-box deep-learning scheme.

~~~
tnecniv
It really depends on your application. If your system is so complex that you
give up on understanding the dynamics, then a "generic" ML algorithm makes
sense. However, a big problem with applying ML to robotics is you need a ton
of data and producing a representative dataset for your system can be hard.
Traditional control methods don't need nearly as much data, even counting the
system identification process.

~~~
smattiso
I think that depends on how good our physical simulations become? If we can
model our robot to a sufficient degree of physical precision, and our world
simulation is good enough, with the right objective functions it seems like
one could probably build a robust control system without really understanding
any of the control theory at all? Granted, this might take the equivalent of
50 years' worth of learning, but it seems possible? This is how animals have
move over time right?

~~~
jbay808
Why is it a useful goal to not understand control theory?

Animals like us have plenty of control theory built into our firmware, even if
we aren't going through the calculations on the cognitive software level.

A control system, tuned and running, is a few lines of code in a fast loop, or
a feedback amplifier circuit. Just like a neural net, it takes a while to tune
the coefficients, but then it becomes muscle memory. And since much of control
theory is fully general (like Bode's sensitivity integral), those principles
must hold for biological systems as well.
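For concreteness, the "few lines of code in a fast loop" might look like a textbook PID update (the gains here are purely illustrative; finding them is the part that takes a while):

```python
def pid_step(error, state, kp=2.0, ki=0.5, kd=0.1, dt=0.01):
    """One pass through a fast control loop: a standard PID update.
    `state` carries the integral and previous error between calls."""
    state["integral"] += error * dt
    derivative = (error - state["prev_error"]) / dt
    state["prev_error"] = error
    return kp * error + ki * state["integral"] + kd * derivative

state = {"integral": 0.0, "prev_error": 0.0}
u = pid_step(1.0, state)   # control output for a unit error
```

Run at a few hundred hertz, that's the whole controller; everything difficult lives in choosing `kp`, `ki`, and `kd`.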

A baby doesn't think about Laplace transforms when learning to walk, but
neither do they think about backpropagation.

If you're keen to replace control theory with something else, I recommend
solidly understanding it first.

------
FrojoS
One of the few MIT classes I had the pleasure to attend in person. So good.
Russ Tedrake is amazing.

------
msadowski
Thanks for posting this! It looks very interesting. I wish I had the time to
go through all those books!

