
Robots Can’t Dance: Why the singularity is greatly exaggerated - joosters
http://nautil.us/issue/20/creativity/robots-cant-dance
======
arvinjoar
Misleading title: robots can actually be great dancers[1]. The problem with AI
is that it's a moving target; as soon as there's an advance, we tend to go
"well, that's still pretty mechanical", forgetting that we're also just
mechanical systems (probably). Granted, AI and AGI aren't really the same
thing, but it seems like we always assume that an AGI has to work like a human
internally. To me that seems like a good source of inspiration, but it's
probably not a requirement. In the end, I think we'll figure out that for all
our "creative genius", we're still just monkeys who see and do, and sometimes
we do something new.

[1] = [http://youtu.be/ww9ClmCWBr0](http://youtu.be/ww9ClmCWBr0)

~~~
stephanfroede
You touched the core of the discussion, imo, which is that there are two
schools of thought involved.

One sees the universe and everything else as a machine -> determinism

The other sees the universe as a discrete continuum -> indeterminism

That is also my fundamental criticism of the current approach: it doesn't even
consider the possibility that the whole approach could be wrong.

I think intelligent machines are possible, but not without much more
fundamental understanding. Hyping and cheering deep learning and all this
stuff is just irrational and illogical.

~~~
aninhumer
Surely the more relevant dichotomy here is monism (materialism) vs. dualism?

For those who find determinism unsatisfying, I doubt indeterminism feels like
much of an improvement.

~~~
stephanfroede
Dualism is a part of it, but dualism is more of an ethical question.

Indeterminism is more about emergence, quantum fields and such things: how the
universe works.

My impression is that a lot of the science applied to intelligent machines is
based on a deterministic physical model going back to Newton and Laplace.
Bayesian networks, for example, were pioneered 200 years ago by Laplace. But
where is Einstein's theory of relativity applied? Or quantum fields?

There are two different models of the universe involved: the "old"
deterministic model and the "new" indeterministic model (see Karl Popper ->
Open Universe ->
[http://www.goodreads.com/book/show/288137.The_Open_Universe](http://www.goodreads.com/book/show/288137.The_Open_Universe)).

Maybe it makes sense to bring some newer approaches into the game, instead of
reapplying the same approach again and again.

------
Beltiras
Why is it that we give humans a lot of time for trial and error and
acknowledge them as masters of some art, but when a machine process fails to
deliver _on its first baby steps_ we declare the endeavor impossible?

Creativity is experimentation coupled with insight into previous work.
Computers can do the experimentation; they just need longer horizons of
insight to be able to moderate it.

On the other hand, though: AGI is a long way off. I'm not expecting anything
HAL-like within my lifetime. Unless my lifespan is artificially prolonged.

~~~
ilitirit
There's another aspect to consider:

When people express themselves in a creative way, they do so so that other
people (and they themselves!) can appreciate it. We create art for other
people - we don't create it for animals or plants or robots. Other people can
appreciate our art because it may express an emotional and/or psychological
state that resonates with them.

Now, if we set aside the question of whether an AI can be considered creative
if it does not express itself voluntarily, how can we judge the "quality" of
its art? If it creates because doing so evokes a favourable internal state
whenever it observes its own work, could that not be considered art?

(When you listen to a piece of music and it makes you feel good, would you not
consider that a good piece of music?)

~~~
Beltiras
I consider music that makes me feel anguish to be a good piece of music too
(given the right circumstances).

------
benkant
I'm ignoring the singularity talk because I think the term has become
meaningless, but Jürgen Schmidhuber would disagree with the conclusion that
machines can't be creative, or at least that creativity is beyond algorithms.

He has a formal theory of creativity [0] which claims to explain, among other
things, music, humour, beauty [1] and fun. It centres around compression and
Kolmogorov complexity.

There's a great video in the first link.

These are hard problems, but it's shortsighted to consider it impossible for
us to build machines with approximate behaviour. Often with this class of
criticism you'll have arguments along the lines of "sure the submarine moves
through the water, but is it swimming?" Apologies to Dijkstra.

[0]
[http://people.idsia.ch/~juergen/creativity.html](http://people.idsia.ch/~juergen/creativity.html)
[1]
[http://people.idsia.ch/~juergen/beauty.html](http://people.idsia.ch/~juergen/beauty.html)
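
Roughly, the compression-progress idea can be caricatured in a few lines: an
observation is "interesting" to the extent that what you've already seen helps
you compress it. This is only a loose sketch using off-the-shelf zlib as a
stand-in compressor, not Schmidhuber's actual formalism:

```python
# Loose sketch of "interestingness as compressibility gain": how many
# bytes does already-seen context save when compressing a new
# observation? (zlib stands in for the learner's compressor here.)
import zlib

def csize(data: bytes) -> int:
    """Length of data after zlib compression."""
    return len(zlib.compress(data))

def context_savings(history: bytes, observation: bytes) -> int:
    """Bytes saved on the observation by compressing it together with
    the history instead of on its own."""
    joint_cost = csize(history + observation) - csize(history)
    return csize(observation) - joint_cost

pattern = b"do re mi fa sol " * 20
print(context_savings(pattern, pattern[:32]))      # regular data: big savings
print(context_savings(pattern, bytes(range(32))))  # unrelated noise: little
```

A patterned observation scores far higher than noise, which is the intuition
the theory builds on.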

------
minthd
"But generating is fairly easy and testing pretty hard."

This is the reason he thinks computers won't be creative. But testing theories
in science or math can be automated (in most cases). So technological
creativity is possible, and that's what the singularity talk is about
(although the testing can be quite lengthy and expensive, which might really
slow the singularity, maybe to the point of no singularity at all).
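
The generate/test asymmetry can be seen in a toy generate-and-test loop:
random generation is trivial, and the evaluator (here a throwaway scoring
function invented for illustration) carries the whole burden, which is exactly
the hard part:

```python
# Generate-and-test in miniature: random generation is cheap; all the
# intelligence lives in the evaluator. The scoring function below is a
# stand-in - writing a good one for art or science is the open problem.
import random

def generate(rng, length=8):
    """Produce a random candidate 'melody' over eight note names."""
    return "".join(rng.choice("abcdefgh") for _ in range(length))

def score(candidate):
    """Toy evaluator: reward alternation between neighbouring notes."""
    return sum(1 for x, y in zip(candidate, candidate[1:]) if x != y)

rng = random.Random(42)
best = max((generate(rng) for _ in range(1000)), key=score)
print(best, score(best))  # the best of 1000 random tries
```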

And as for artistic creativity - that depends on whether we can build some
model of how humans evaluate art in general. Who knows, maybe we can. We've
certainly moved on from, say, 100-200 years ago, when most artists were
considered geniuses, to today, when a large percentage of commercial art is
generated, or at least guided, by knowledge of how to create good stories,
etc.

~~~
thibauts
> _But testing theories in science or math can be automated (in most cases).
> [...] (although the testing can be quite lengthy and expensive, which might
> really slow the singularity, maybe to the point of no singularity at all)._

So we would be back to the problem being raw computing power.

> _And as for artistic creativity - that depends on if we can build some model
> on how humans evaluate art in general._

I'm pretty sure it's culture and social experience in general that allow us to
evaluate art. I'm not sure robots will be able to do that soon, as it seems to
be a product of our intricate biological structure.

> _[...] to today, where a large percent of commercial art is at least
> generated or guided by knowledge on how to create good stories, etc._

So maybe robots are not as much becoming humans as we're becoming robots.

~~~
haliax
"seems to be a product of our intricate biological structure"

How do you know?

~~~
thibauts
"Seems". I guess that's the most reasonable assumption. What else?

------
onion2k
Robots don't actually need to be creative; they only need to approximate
creativity well enough that humans can't tell. After that point, robots will
always look creative even if they're not. A little randomness and a lot of
brute force will do the rest.

~~~
pjc50
We're kind of on the edge of this with algorithmically-optimised clickbait
headlines. There's still a human in the loop at the moment, but the more
metrics-driven the process is the more likely parts of it are to be automated.

~~~
tempodox
That's actually a nice idea for satire (and what can be more satirical than
real life?). Do you know of any artificial headline generators? If not, it's
high time we built one. I bet we could make cartloads of money with it.
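
For what it's worth, a crude one takes minutes. The sketch below (the
templates and word lists are entirely made up) just fills slots in
clickbait-shaped templates; a real metrics-driven system would learn from
scraped headlines and click-through rates instead:

```python
# Toy clickbait-headline generator: Mad-Libs-style slot filling over
# hand-written templates. Purely illustrative.
import random

TEMPLATES = [
    "{number} {adjective} facts about {topic} that will {reaction} you",
    "You won't believe what this {topic} did next",
    "Why {topic} is {adjective} (and what it means for you)",
]
WORDS = {
    "number": ["7", "11", "23"],
    "adjective": ["shocking", "heartwarming", "bizarre"],
    "topic": ["AI", "a robot", "this startup"],
    "reaction": ["amaze", "horrify", "surprise"],
}

def headline(rng=random):
    """Pick a template and fill every slot it uses with a random word."""
    template = rng.choice(TEMPLATES)
    return template.format(**{k: rng.choice(v) for k, v in WORDS.items()})

print(headline())
```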

------
ilitirit
Consider the following definition for a Technological Singularity (TS):

_A TS is an event that occurs when AI advances to the point where humans can't
keep up with understanding and/or predicting its decision-making process
and/or the results thereof._

Using this definition, it would appear that, like physical singularities
(black holes), a TS can occur on a large or small scale (micro black holes can
pop in and out of existence with little to no effect on the macro world). So,
let's say we develop an AI that can teach itself to play Go. After a while,
not even the smartest humans can beat it. Indeed, the smartest humans can't
even understand* why it plays the way that it does. If this counts as a TS,
where does creativity come into play?

*(Something similar has happened before: it was discovered that a neural network was exploiting physical electrical effects that occurred in the actual hardware when certain pieces of code were run. When a human tried to analyse the actual code, it made no sense.)

~~~
slowmovintarget
You've selected the phrase "a Technological Singularity" and stretched the
metaphor to create your definition.

"The Singularity" is what most people are discussing when this phrase is used.
That requires something more encompassing than small occurrences, and would
have society-wide impact.

Personally, I don't think it will actually occur. By "it" I mean the point
where individual humans are eclipsed by a technological gestalt beyond
ordinary human comprehension. This is my opinion, but I believe economic
factors will retard technological progress enough that "The Singularity"
cannot occur. Our society will either tear itself apart, or the disparate
technologies will be so fragmented and incompatible as to not come together as
a whole.

For examples, I'd cite the space program and the current state of computer
operating systems.

~~~
ilitirit
> You've selected the phrase "a Technological Singularity" and stretched the
> metaphor to create your definition.

If you consider the example I gave and expand it to other fields, it's the
same thing. As I said, it's analogous to black holes.

My point is, how does creativity fit into all of this?

~~~
slowmovintarget
It isn't the same thing. But that's just argument.

Let's go with the new subject of how creativity fits in. Creativity allows the
expansion of a system of axioms through perceiving possibilities not permitted
within the system. This allows escape from "incompleteness" [1].

Of course that is not all creativity does, but it is a fairly big deal, as it
leads to what we tend to call "understanding" or "comprehension". Knowing how
to calculate the next number in a sequence, and comprehending that the next
number will always be the same as the previous one (divide 1 by 3 and express
it as a decimal), requires multiple levels of observation.

We've been able to "teach" machines discoveries of that nature that we've
already made, but we haven't really been able to generate the capability
itself. Take the case of those evolved neural-net solutions that took
advantage of the physical nature of the hardware to optimize a detection
circuit. The optimization could only occur because there was a suitable
system-external test to drive it. While it is feasible to let the combination
of the physical world and a definition of "survival" serve as a suitable test
for machines, the result would merely be machines that "survive". This would
not be "The Singularity" of machines that out-think us; this would be the
"gray goo" scenario of machines that devastate our civilization and merely
replace us at the top of the food chain.

My point is that there needs to be a way for the machine to generate its own
tests. This, in large part, is comprehension, driven by creativity. Granted, I
say all of this firmly embedded among the laity.

[1]
[http://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_t...](http://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems)

------
eli_gottlieb
Excuse me, this is headdesk-worthy clickbait material.

_wham_ Cognition! _wham_ Is! _wham_ Lawful!

If you can scientifically characterize human creativity, then you can program
an algorithm to behave creatively. If you believe you cannot program an
algorithm to behave creatively, then this is because you don't understand
creativity as a cognitive function.

Why is it that as soon as someone says "AI" everyone turns off their normal
scientific/naturalist worldview and starts going yippity-skip-de-doo in
fairyland!?

~~~
krapp
>Why is it that as soon as someone says "AI" everyone turns off their normal
scientific/naturalist worldview and starts going yippity-skip-de-doo in
fairyland!?

Despite claiming to be rational, people are still uncomfortable with the idea
that they're meat machines.

~~~
eli_gottlieb
Well _what else_ do they expect to be? If non-physical souls existed, they
would have to work on some principle _too_. Reality always bottoms out
somewhere.

------
VMG
> We can build a classifier that would look at lots of pairs of successful
> movies and do some kind of inference on it so that it could learn what would
> be successful again. But it would be looking for patterns that are already
> existent. It wouldn’t be able to find that new thing that was totally out of
> left field.

Just a baseless assertion without any evidence.

There is no reason to assume you couldn't build a system that emulates
whatever the human brain is doing there.

~~~
crististm
I can think of one reason: you can't simulate an atom.

If you read too much into supercomputing and think it can simulate atoms, then
go tell CERN to stop searching for subatomic particles and simulate them
instead.

Otherwise, I think you know what I mean. Besides, a real system depends on its
initial conditions. You can't simulate those.

~~~
zAy0LfpBZLC8mAC
You are confused about the order in which these things happen:

1. We form a hypothesis about how something works.

2. We do experiments to try to falsify said hypothesis.

3. If a certain amount of experimentation fails to falsify the hypothesis, we
conclude tentatively that the hypothesis is a correct model of reality;
namely, we promote it to a theory.

4. We use that theory to simulate the real thing computationally.

(4) is the whole point of doing steps (1) to (3) - and all VMG is saying is
that there is no reason to assume that (1) to (3) couldn't lead to (4) with
regard to the brain, just as there is no reason to assume so with regard to
atoms - which in turn is why we operate CERN instead of just asserting that
atoms cannot be understood.

~~~
crististm
Yes, you are correct on 1-4, but I think your hypothesis is wrong with respect
to simulation:

You don't know the initial conditions, which are a _huge_ part of determining
the outcome of the simulation. Maybe you will limit your precision to the
Planck scale. Can you measure with that precision?

~~~
zAy0LfpBZLC8mAC
I don't know the initial conditions of what?

~~~
crististm
This question clarifies why you read too much into simulating reality:

[http://en.wikipedia.org/wiki/Initial_condition](http://en.wikipedia.org/wiki/Initial_condition)

look for nonlinearity

EDIT: You can't simulate a brain without considering the quantum effects at
the atomic level. You can't simulate quantum effects. QED.

~~~
chriswarbo
> You can't simulate a brain without considering the quantum effects at atomic
> levels.

That's wild speculation. At the very finest level of detail, we can simulate a
brain in this way. At the very coarsest level of detail, we can simulate it as
a thermodynamic heat bath. The correct level of detail for the emergence of
intelligent behaviour is likely somewhere in-between.

> You can't simulate quantum effects.

What? Of course you can! Classical computers take an exponential amount of
time, but it's still a finite problem. The whole field of Computational
Chemistry is based on simulating quantum effects!
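
As a trivially small instance of the point - quantum effects are simulable,
just exponentially costly - here is a single-qubit state-vector simulation in
pure Python (no quantum libraries; for n qubits the state vector would need
2**n amplitudes, which is where the exponential cost comes from):

```python
# Simulate one quantum effect classically: a Hadamard gate puts a qubit
# into an equal superposition of |0> and |1>.
import math

def hadamard(state):
    """Apply the Hadamard gate to a single-qubit state [a, b]."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

state = hadamard([1.0, 0.0])               # start in |0>
probs = [abs(amp) ** 2 for amp in state]   # Born-rule probabilities
print(probs)                               # equal superposition: [0.5, 0.5]
```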

~~~
crististm
"The correct level of detail for the emergence of intelligent behaviour is
likely somewhere in-between."

I think this is your speculation. :)

"Classical computers take an exponential amount of time"

It is not a matter of the time it takes to simulate, but of the details. You
don't know which details matter and which don't. You don't know whether you
need infinite precision or can stop at the Planck scale. Thus, since you don't
know a lot of things, you can't be sure that what you simulated was indeed the
real thing, rather than something you imagined/theorized to be the real thing.

~~~
zAy0LfpBZLC8mAC
"Thus, since you don't know a lot of things, you can't be sure that what you
simulated was indeed the real thing or something that you imagined/theorized
to be the real thing."

That's a useless distinction, as there is nothing for which you could "know
the real thing"; you _always_ work with theories, without exception. When you
take your first step out of bed in the morning, you don't know the _real_
behaviour of the floor (it's made of atoms, after all); you "only" use a
relatively high-level theory of solid materials to predict that it will
support you before you step onto it. There is no guarantee that that will work
out, but there is fundamentally no way to do better; all of science works that
way. _Everything_ we know about the world is "a theory" - even atomic theory
is a theory, and quantum theory is a theory. It's all about modeling as best
we can. None of it is "proven to be the real thing", and no scientist ever
even tries to prove something to be "the ultimate reality". All that matters
to science is to make models more precise, to figure out what those models
seem to be sufficiently precise for - and then to use them, which we do very
successfully indeed.

------
reacweb
You humans can't appreciate the dance of robots.

------
thibauts
I think at least one side of creativity can be summed up as _"producing new
combinations of things we already know"_. In this context art would be more
than creativity: a means of suggesting new, unexpected combinations of ideas
in the minds of others. There is a social-sharing side to this equation. This,
in my opinion, is what AI won't get soon, as it requires embodiment, and more
specifically human embodiment. It is already difficult to communicate with
other animal species that share many biological structures with us, and thus
ways of experiencing the world. How could it be easy to make a machine that
produces _meaning_, as in _combinations of ideas that make sense in the
context of human experience_?

What will save us is building machines that collect, store, process and
repurpose meaning in a meaningful way (no pun intended). Like linking pieces
of data to emotional states. Yet they won't get it.

------
arethuza
On a related topic - I can recommend the new movie _Ex Machina_ :

[http://en.wikipedia.org/wiki/Ex_Machina_%28film%29](http://en.wikipedia.org/wiki/Ex_Machina_%28film%29)

About the only thing I feel safe commenting on without fear of spoilers is
where the outdoor scenes were filmed - Norway - which looked stunning.

------
sxp
Computers have already passed a domain-specific Turing test by composing music
that is as good as human-composed music [1]. The phrase "robots can't [do X]"
should always be suffixed with a "yet". There was a paper a few years back
demonstrating a system that could compose an image based on text by finding
images of the desired objects and compositing them together. It's just a step
away from a system that can generate paintings in the style of great artists
based purely on a text description.

[1] [http://www.psmag.com/books-and-culture/triumph-of-the-cyborg-composer-8507](http://www.psmag.com/books-and-culture/triumph-of-the-cyborg-composer-8507)

~~~
olavk
Have blind tests been performed, where musicians couldn't distinguish between
music composed by computers and music composed by humans?

~~~
tessierashpool
Yes, and the software did quite well, but see my post above for caveats.

------
mcguire
"_It would instantly generate all possible combinations of movies and there
will be some good ones. But recognizing them, that's the hard part._"

By his definition, the vast majority of people are not intelligent.

------
marktangotango
>> “No, you’re missing that a fundamental aspect of intelligence is experience
and that requires embodiment.” He knew that to understand the world you needed
to be inside the world, you needed to experience its behaviors and responses
to you. Well, he was right. We may be making progress in being able to do
things like recognize a cat in a photograph. But there’s a huge gulf between
that and doing something creative.

I'm not sure I agree with this, but it is a compelling argument. Could
experience be simulated the first time around, hence bootstrapping the AIs?

~~~
vidarh
There are tons of people working on giving robots the ability to collect
experience through embodiment.

It may be a compelling argument for why the various timelines thrown around
for the singularity will be off, but it's a speed bump, not a roadblock.

~~~
stephanfroede
So the argument has gone since 1960.

~~~
lmm
By the standards of 1960 we already have AI. After all, computers can beat the
best humans at chess.

~~~
stephanfroede
A chess board is a closed system with fixed rules; as far as I know, you only
need a lot of computing power to apply the minimax algorithm to any chess
game.

In a chess game there are no probabilities. AI is about independently
recognizing patterns in noise and developing assumptions out of them. The
trick human brains apply here is called intuition.
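
The "computing power plus minimax" claim can at least be made concrete. Below
is minimax over a tiny hand-built game tree - nothing like a real chess
engine, which adds alpha-beta pruning, evaluation heuristics, and depth
limits, but the same principle:

```python
# Minimal minimax over a hand-built game tree. A node is either a
# number (a leaf's score, from the maximizer's point of view) or a
# list of child nodes.

def minimax(node, maximizing):
    """Return the best score achievable from this node with optimal play."""
    if isinstance(node, (int, float)):   # leaf: return its score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two-ply tree: the maximizer picks a branch, then the minimizer
# picks a leaf inside it.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # the maximizer can guarantee a score of 3
```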

~~~
lmm
That wasn't what people said in 1960 - as soon as AIs solve a particular
problem, we redefine intelligence to mean something different. Modern AIs
(e.g. hypothesis-generation toolkits) can be better than humans at recognizing
patterns in noise. I bet in 20 years' time we'll see exactly that - and we'll
be having this same conversation about how pattern recognition isn't really
intelligence.

------
stephanfroede
The singularity is a linear projection of computing power, and in this
projection anything that questions the singularity is proactively ignored.
What puzzles me most is why it is ignored. For me it does not make sense to
hope that a sufficient amount of simulated neuronal complexity will be enough,
and that suddenly something intelligent will emerge out of that complexity.
The whole approach is flawed. Something very essential is missing: a proven
model of how brains work and why they work, down to the last quantum state.

~~~
icebraining
Is it really necessary? As far as I know, the theories about generating lift
were still being heavily debated long after the Wright brothers flew. People
often simply try things until they work, then go back and try to understand
why they work.

Having a decent theory helps, but I'm skeptical that a complete understanding
of the issue is required to make it work.

~~~
stephanfroede
The difference is that the plane flew, without the understanding, after little
research, whereas we have been trying to make machines intelligent for
decades.

Something is missing, imo.

~~~
icebraining
Little research? We had been trying to make heavier-than-air machines for
centuries, if not millennia. Even the specific concept of the modern airplane
as a fixed-wing flying machine with separate systems for lift, propulsion, and
control was put forward in 1799, more than a century before the Wright
brothers' flight. If we talk about flying machines in general, the
Bamboo-copter[1] is 2400 years old.

[1] [http://en.wikipedia.org/wiki/Bamboo-copter](http://en.wikipedia.org/wiki/Bamboo-copter)

~~~
stephanfroede
I deliberately ignored all attempts to fly before the Wright brothers.

It is the same here: we have the wish to make intelligent machines, but we may
lack an engine to do it. Besides the airplane design, it also took the
availability of powerful engines and other advances to take to the skies.

(I did read the Wikipedia article.) In that sense, I'm afraid we are more at
the stage of Da Vinci's concepts than of an aeroplane.

------
crusso
The whole point of the Singularity is that the technology will progress to a
point where AI techniques will advance to produce something that is creative
and intelligent enough to advance itself.

Obviously, we haven't hit that point yet - so we don't know what those
advances are.

Before penicillin was discovered, doctors couldn't conceive of curing many
serious and deadly infections.

~~~
simonh
Quite. I have no time for exaggerated singularitarian nonsense about strong AI
being inevitable in a few decades. We don't even have the fundamental concepts
required to begin to outline the actual design of such a thing. Therefore any
attempt to estimate a timeline for its development is a blind guess.

On the other hand, just because we can't design or build one now doesn't mean
we never will. Lord Kelvin was clearly wrong that heavier-than-air craft were
impossible, because birds. They are physical, mechanical systems that are
heavier than air and yet fly; therefore such systems are self-evidently
possible. So it is with strong AI. Physical systems that exhibit human
intelligence do exist - us. Therefore physical systems like us are possible.

Here again the example of birds is instructive. Birds fly, but constructing a
machine that flies in the same way as birds is incredibly hard - far, far
harder than building rockets, propeller-driven planes and even jets. There's
no law of the universe that says our first strong AI will be designed along
the same architectural lines as the human brain, or that its performance
envelope will be similar to ours. At this stage, as the OP says, we don't
know.

------
stephanfroede
Joke: maybe you need a quantum singularity to get the intelligent-machine
singularity...

No, I do not think that we have black holes in our brains, and no, we are not
connected by wormholes... but it is a nice idea. Like-minded people connected
by tiny wormholes.

------
blueskin_
Nothing about the singularity specifically requires creative AI; indeed, a
large part of it is _enhancing the existing potential of humans_; replacing
them outright is not necessarily the case. As it is, this article about AI and
creativity would be more accurate without the gratuitous "singularity won't
happen" framing, but then it would be so much less sensational and clickbait-y
that it might not even be able to justify its own publication, I guess.

Also, I had to laugh at the gratuitous misuse of "supercomputers".

Yet another layman grasping at concepts they don't understand... _yawn_. Such
an interesting coincidence that someone who says "AI can't be creative" also
happens to be an artist... It reminds me of Roger Ebert dismissing the
potential of games vs. movies because he felt threatened by them. As it is,
that is one of the most gross and ignorant misunderstandings of the
singularity: that people will somehow be marginalised or not valued, rather
than the main point being the next logical step of tools (post-)humans use for
their own benefit.

Edit: Oh look, that whole "...the greatest scientists are also artists" canard
again. Why am I not surprised? There have been maybe two or three people who
were exceptional, or even widely appreciated, in both - which is indicative of
a "renaissance man" proficient in many fields, not of an overall tendency or
requirement. While many scientists _are_ skilled in nonscientific fields too,
I would hardly call Hawking, Feynman or Dawkins an artist just because they
were good at speeches, lectures or books, for example.

I would also remind everyone that predictions of the future are so often
pessimistic...

"This 'telephone' has too many shortcomings to be seriously considered as a
means of communication. The device is inherently of no value to us." --
Western Union internal memo, 1876.

"Heavier-than-air flying machines are impossible." -- Lord Kelvin, president,
Royal Society, 1895.

"Airplanes are interesting toys but of no military value." -- Marechal
Ferdinand Foch, Professor of Strategy, Ecole Superieure de Guerre.

(Apocryphal; thanks for pointing this out to me, as I was not aware; left in
for completeness' sake) "Everything that can be invented has been invented."
-- Charles H. Duell, Commissioner, U.S. Office of Patents, 1899.

"No flying machine will ever fly from New York to Paris." -- Orville Wright.

"Professor Goddard does not know the relation between action and reaction and
the need to have something better than a vacuum against which to react. He
seems to lack the basic knowledge ladled out daily in high schools." -- 1921
New York Times editorial about Robert Goddard's revolutionary rocket work.

~~~
icebraining
_"Everything that can be invented has been invented." -- Charles H. Duell,
Commissioner, U.S. Office of Patents, 1899._

Poor man, constantly libeled due to lazy book writers:
[http://en.wikipedia.org/wiki/Charles_Holland_Duell](http://en.wikipedia.org/wiki/Charles_Holland_Duell)

~~~
blueskin_
Thanks; have added a disclaimer on that quote.

------
tempodox
My singularity is certainly not exaggerated, but I, being a 100% non-robotic
entity, can assure all of you that I can't dance at all.

------
Animats
Well, to address the title of the article: robot motion usually doesn't look
very good, for a known reason. The motion control systems used are usually
positional. Most robotic control systems have a processor or PLC for each
joint, and that processor usually accepts position goals, not force goals.
Then there's a central coordinator issuing positional commands. This is simple
to code, and many robotics frameworks have that hierarchical approach more or
less nailed into them.
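
The per-joint positional scheme described above can be sketched as a toy
proportional-derivative servoloop (a deliberately simplified, made-up example:
unit mass, hand-picked gains, no integral term or feedforward - real joint
controllers run richer loops at kHz rates):

```python
# Toy 1-D position servoloop: each tick, command a force from the
# position error, damp it with velocity, then integrate the joint's
# motion forward one timestep.

def pd_step(pos, vel, target, kp=40.0, kd=8.0, dt=0.01):
    """One control tick for a unit-mass joint under PD position control."""
    force = kp * (target - pos) - kd * vel
    vel += force * dt            # integrate acceleration (mass = 1)
    pos += vel * dt              # integrate velocity
    return pos, vel

pos, vel = 0.0, 0.0
for _ in range(1000):            # 10 simulated seconds at a 100 Hz loop rate
    pos, vel = pd_step(pos, vel, target=1.0)
print(round(pos, 3))             # the joint settles at the commanded position
```

This is exactly the "position goals" style of control; dynamic motion, as the
rest of the comment explains, instead closes the loop on force.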

That hierarchical approach is not very good for dynamic motion. For that, you
need force control and coordination in force space. I used to work on this;
here's the first anti-slip control for legged robots, from 1995:
([https://www.youtube.com/watch?v=kc5n0iTw-NU](https://www.youtube.com/watch?v=kc5n0iTw-NU)).
That was picked up by a grad student at McGill, who put it into their running
quadruped Scout II. Then his professor, Martin Buehler, left McGill for Boston
Dynamics and became the head engineer on BigDog.

All the actuators on BigDog are run by one CPU, a Pentium 4-class machine
running QNX. The balance servoloop runs at 100 Hz, and the hydraulic valve
control loop runs at 1 kHz. This allows coordinated force control across all
actuators, which is why BigDog is so agile. The Atlas robot version 1 is
basically a modified BigDog, although version 2 seems to have been redesigned
above the hips, with onboard power.

The motion in the DARPA Humanoid Challenge looked so bad last time because
most of the participants using the Atlas robot were using a Windows DLL
provided by Boston Dynamics. That DLL was just intended to provide some basic
functionality to get participants started. Functions provided included "walk
slowly" and "stand stably while arms do something". They didn't have the
running, balance recovery, or slip control capabilities Boston Dynamics put
into Big Dog. Expect much better performance in round 2 next winter.

Until recently, most robotics simulators were hopeless about force accuracy or
friction. Most of them used physics engines borrowed from video game
technology, where nobody cares about force accuracy or friction as long as
things blow up prettily. This was recognized as a problem by DARPA, and they
funded Dr. Mike Sherman at Stanford to put a serious dynamics simulator into
Gazebo. Sherman previously had a commercial company, Symbolic Dynamics,
building dynamics simulators for industry, and did know how to get the
dynamics right. So now you can simulate force-controlled robots in Gazebo.

(Unfortunately, it took two decades to get this right, so I've moved on to
other things, after a detour through physics engines for animation.)

Anyway, that's why robots can't dance very well yet. That problem is being
fixed.

------
jkot
Would human creativity pass turing test?

------
dominotw
Forget dancing. Even simple human tasks like tying shoelaces are borderline
impossible for machines.

