
Humanoid robots can’t outsource their brains to the cloud due to network latency - nomoba
http://arstechnica.co.uk/gadgets/2016/03/network-delays-rule-out-the-cloud-as-an-outsourced-brain-for-humanoid-robots/
======
mturmon
This is an interesting conundrum.

I've seen it play out in space-based autonomous systems, where fundamental
light-time delays limit how much autonomy you can offload to Earth versus
using on-board computing on a rover
([http://www.jpl.nasa.gov/news/news.php?release=2010-094](http://www.jpl.nasa.gov/news/news.php?release=2010-094)).

You end up having to reason about splitting the computational burden between
the remote system (having limited resources) and the cloud. Sometimes you can
train in the cloud but run on the robot (e.g., upload large training sets to
the cloud, and download a trained classifier to a fast runtime on the robot).

Finding the right boundary for such a split system can create a hard
engineering/infrastructure problem, because simple changes in bandwidth can
have huge infrastructure implications.
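
The train-in-the-cloud, run-on-the-robot split mentioned above can be sketched in a few lines. This is a toy illustration, with a hand-rolled perceptron standing in for a real model and training framework:

```python
# Sketch: train a tiny classifier "in the cloud" on the full dataset,
# then ship only the learned weights to the robot for fast local inference.
# (Illustrative only; a real system would use a proper ML framework.)

def cloud_train(samples, labels, epochs=20, lr=0.1):
    """Perceptron training on the large dataset (cloud side)."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b  # this small artifact is all the robot downloads

def robot_classify(model, x):
    """Fast on-board inference (robot side) -- no network round trip."""
    w, b = model
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Cloud: learn an AND-like boundary from the big dataset.
model = cloud_train([[0, 0], [0, 1], [1, 0], [1, 1]], [0, 0, 0, 1])
# Robot: download (w, b) once, then classify locally with no latency.
print(robot_classify(model, [1, 1]))
```

The boundary in such a split is exactly the engineering problem described: the weights are tiny to transfer, but any change in what must cross the link (raw data up, models down) reshapes the infrastructure.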

~~~
beloch
The problems are so much more pronounced in space and extraplanetary
exploration that it really is a totally different thing. Bandwidth is lower,
latency is much higher, power is a huge issue, and the hardware itself is far
more primitive. The Curiosity rover runs on a radiation-hardened CPU capable
of about 400 MIPS, which would be considered utterly obsolete for a cell phone
these days. Imagine programming your phone to drive hundreds of millions of
dollars of payload around an alien environment where, if something goes wrong,
there will be no human intervention for anywhere from 3 to 22 minutes!
Everything on Earth _should_ be easier.

An android could rely on remote servers a hundred kilometers away and incur
less than a millisecond of additional delay. This is not a problem at all. The
problem is that cloud services take several orders of magnitude more time to
respond. An android doesn't need to carry all of its brains around with it the
way a rover or spacecraft does. However, it does need to have a box dedicated
to providing its intelligence within a reasonable distance. It can't rely on
google cloud services. It needs software running on one, specific box.

~~~
option_greek
I'm curious... How do they harden these things, and does the hardening impact
CPU speed (is that why it's so slow)?

~~~
officialchicken
I've read that the shielding is a small part of it (environmental temperature
fluctuation can be hundreds of degrees) but the biggest factor is that the
process is around 65nm or much, much larger, to help prevent radiation from
accidentally toggling gates.

They are very constrained in terms of available power - and under-clocked, if
you wish. The radios, I believe, consume most of the power. The multiple
redundancy / failover nature of every circuit including RAM, CPU stack/heap,
etc. also bites into the power budget which slows things down. On some space
and extraterrestrial vehicles just about everything is double-computed on
separate circuits and compared for exactness, and can be compared with results
on an earth-based system for accuracy.

I'm sure some readers here can explain some of these features in more depth.

------
jrbapna
I'm having trouble believing network latency would be the bottleneck here.
Just ping google.com and you'll see ~25ms latency, which is a lot less than
the half-second delay described in the article.

Now, having the server actually process the information and return a response
that must then be vocalized may take much longer, but that's a different
issue than "network latency".

Not to mention that when people interact, they often use filler words while
collecting their thoughts... "um...", "uh...", "hmmm...", "yeah..." There's
your half-second delay.
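
The point above is that the round trip is only one slice of the end-to-end budget. A rough breakdown, with illustrative (assumed, not measured) numbers:

```python
# Back-of-envelope budget for a cloud-backed spoken reply. Only the
# network_rtt figure comes from the comment above; the rest are assumptions.
budget_ms = {
    "network_rtt":     25,   # ~ping google.com on a decent connection
    "speech_to_text": 150,   # server-side recognition (assumed)
    "nlu_and_reply":  200,   # server-side understanding + response (assumed)
    "text_to_speech": 100,   # vocalization on the robot (assumed)
}
total = sum(budget_ms.values())
share = budget_ms["network_rtt"] / total * 100
print(f"{total} ms total; network is only {share:.0f}% of it")
```

Under these assumptions the network round trip is a small fraction of the total delay, which is the commenter's point.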

~~~
pinkunicorn
Pinging is a lot different from sending data. It's not just about words but
also about your actions. Try uploading an image to Imgur or a video to
YouTube! Humanoids will probably have to do both. So running entirely in the
cloud is certainly not possible. Maybe doing some precomputation on the
humanoid and sending the result to the cloud for comparison with a larger
dataset makes sense.
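
The precomputation split suggested here mostly comes down to payload size. A sketch, with hypothetical sizes (a real pipeline would use a learned embedding):

```python
# Sketch of the split: reduce a raw camera frame to a compact feature
# vector on the humanoid, and upload only that to the cloud.
# Sizes below are assumptions for illustration.

RAW_IMAGE_BYTES = 1920 * 1080 * 3   # one uncompressed camera frame
FEATURE_VECTOR_BYTES = 512 * 4      # e.g. a 512-float embedding

def on_robot_precompute(frame_bytes):
    """Runs locally on the humanoid: raw frame -> small descriptor."""
    return b"\x00" * FEATURE_VECTOR_BYTES  # stand-in for real features

payload = on_robot_precompute(b"\x00" * RAW_IMAGE_BYTES)
print(f"upload shrinks ~{RAW_IMAGE_BYTES / len(payload):.0f}x")
```

A three-orders-of-magnitude smaller upload is the difference between "try uploading a video to YouTube" and something a wireless link can sustain continuously.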

~~~
VikingCoder
What are you talking about? People live stream to YouTube all the time. And to
Twitch. And they used to on OnLive. And Nvidia has a thing. And Skype. And
Google Hangouts. And Chatroulette.

~~~
PhasmaFelis
Most of those services have a lot more latency than you may realize. I've been
looking for a way to stream real-time game video from a friend on the west
coast to me in Kentucky, so we can kibitz over voicechat; high-quality
streaming video has very noticeable delays at best.

(And it's _really_ bad if the service is optimized for streaming to many
viewers at once--Twitch enforces a minimum buffering delay of 10 seconds even
for a private, one-viewer stream, and can range as high as 60 seconds in heavy
conditions, which really annoys a lot of streamers. If anyone has a suggestion
for a good low-latency one-to-one video streaming tool, I'd love to hear
it...)

~~~
zardo
NVidia GRID is probably the best benchmark, especially since a latency
optimized data center full of GPUs sounds like a pretty good place to do some
matrix multiplication. If it's fast enough to play a game it should be usable
for everything but fast motor control.

------
Pfhreak
Humans have latency in their comms too; we just hide it effectively with
filler language. I'm surprised we couldn't mask a half second of latency with
a quick "Hmm.." or "Ah..." or even a bunch of canned responses.

I know I've bought myself additional time with exactly those -- "That's a
great question." "Interesting..."
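
The masking trick described above is simple to sketch. This is a hypothetical dispatcher, not any real robot's API; a real system would overlap the filler audio with the in-flight cloud query rather than decide after the fact:

```python
# Sketch: if the cloud answer will take longer than a threshold, speak a
# canned filler first to hide the gap. (Illustrative logic only.)

FILLERS = ["Hmm...", "That's a great question.", "Interesting..."]

def respond(answer, cloud_latency_ms, mask_threshold_ms=300):
    """Return the utterances the robot would speak, in order."""
    utterances = []
    if cloud_latency_ms > mask_threshold_ms:
        utterances.append(FILLERS[0])  # buy time while the cloud thinks
    utterances.append(answer)
    return utterances

print(respond("It's 3 o'clock.", cloud_latency_ms=500))  # filler, then answer
print(respond("It's 3 o'clock.", cloud_latency_ms=100))  # fast path: no filler
```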

~~~
jimktrains2
"You know" ... frantically opens up firefox and goes to google ... "I was
thinking about this the other night" ... skims the result fragments ... "it's
interesting that there are a couple viable approaches to this problem" ...
opens up multiple tabs and then wikipedia ... "We need to set some parameters
on how we expect our solution to behave, i.e. really accurate, but slow to
get, or slightly less accurate and really fast, or what point in between" ...
begins skimming wikipedia ... "were you able to grant me access to the data,
as specified in my original email 2 months ago?" ... frantically reads research
abstracts ... "Oh? Still waiting on your data team." ... closes all the tabs I
just opened ... "OK, well, let's think about how accurate/fast/resource-
intensive we can afford to be and see if you can get me access. We'll talk next
week." ... hangs up.

------
taneq
Not to mention it's a terrible idea from a privacy and security perspective.
It doesn't matter how good your encryption is or how reliable and low latency
your network connection is if the service provider has shoddy VTech style
security. The only way to keep your data safe is to not give copies of it to
third parties. ("Two can keep a secret if one of them is dead" and all that.)

~~~
eru
> The only way to keep your data safe is to not give copies of it to third
> parties.

What makes you say so? Fully homomorphic encryption (which we don't have yet)
would allow an untrusted party to do trusted computations for you.

~~~
taneq
That's kind of like saying that public key encryption is pointless because
quantum computers (which we don't have, yet) would allow you to break it. As
of the current state of the art, letting a cloud service operate on your data
requires you to give them a readable copy of that data, and (this was my main
point) SaaS companies have an incentive to make their terms and conditions as
intrusive as possible.

~~~
eru
We are talking about humanoid robots with brains in the cloud here...

------
DanielBMarkham
Latency would be an issue when dealing with face-to-face communication because
of the uncanny valley. Perhaps.

Latency would not be an issue harnessing the cloud to drive robots to do
chores around the house -- serving drinks, cleaning up, feeding the pets, and
so on. All you'd really need is an intermediate language. You'd send commands
like "walk over there" or "Pick up that cup" So what if there was a 2-3 second
delay?
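
The intermediate-language idea can be sketched as a high-level command set with a local dispatcher. The command names and fields below are hypothetical; the point is that the cloud issues intent while the latency-sensitive control loops stay on the robot:

```python
# Sketch: the cloud sends coarse commands; the robot's local controller
# handles the real-time details (path planning, grip control, balance).
# Command vocabulary here is invented for illustration.

def local_controller(command):
    """Runs on the robot; each handler is a local, real-time behavior."""
    verb = command["verb"]
    if verb == "walk_to":
        return f"walking to {command['target']} (local path planning)"
    if verb == "pick_up":
        return f"grasping {command['object']} (local grip control)"
    return "unknown command"

# The cloud can afford a 2-3 second delay issuing these; execution is local.
print(local_controller({"verb": "walk_to", "target": "kitchen"}))
print(local_controller({"verb": "pick_up", "object": "cup"}))
```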

Also, you should be able to use real people over the cloud controlling bots
right away. Actually hooking AI into it and having the cloud control
everything is still a ways off. (And having robots in your house controlled
remotely by other computers is about as freaking crazy as I can imagine)

------
aaron695
Uncanny valley isn't real. It's just an artifact from the fact most people see
robots on video.

Any latency we will just adapt to quite naturally.

At worst it means Blade Runner style robots can't work 100% off the cloud.

Personally I think caching would deal with most issues. How often does anyone
ever surprise you with a sentence?

~~~
PhasmaFelis
> _Uncanny valley isn't real. It's just an artifact from the fact most people
> see robots on video._

I can't make that make sense. Could you elaborate?

~~~
aaron695
[http://boingboing.net/2013/09/03/the-uncanny-valley-might-not-a.html](http://boingboing.net/2013/09/03/the-uncanny-valley-might-not-a.html)

Think about it, have YOU ever seen it? We've all seen videos, but no real life
robots in the uncanny valley.

If it did exist it'd make an amazing art exhibit for starters.

If you've seen Ron Mueck's work in real life, you'll know it's cool, but far
different from the pictures.

2D has a way of evoking emotions that's not possible in 3D.

~~~
cskau
> Think about it, have YOU ever seen it? We've all seen videos, but no real
> life robots in the uncanny valley.

Isn't that kinda like saying:

Think about it, have YOU ever seen atomic bombs? We've all seen videos, but no
real life atomic bombs. They're not real!

~~~
zardo
Nuclear reactions aren't a theory about human psychology. It would be really
weird if nuclear reactions seemed to work when we filmed them but not in
reality. It doesn't seem _that_ strange that people might feel differently
about an object than a video of that object.

------
bitwize
In GitS, tachikomas have inboard brains, but each night they dump all their
memories and experiences into a central database. All tachikomas learn from
each single unit's experience, and each unit learns from the experience of all
the others.

------
dtornabene
It's interesting that no one seems to have commented on the analog/digital
distinction yet. I'm not a cog sci scholar, but the massively parallel nature
of the brain allows highly sophisticated computations to happen more or less
instantaneously, while even an android as powerful as this one is going to
have to "look something up", as it were. The Silliman (I think? drunk
commenting) lectures von Neumann did at the end of his life cover this.
Also, the "intention engine" from the article sounds interesting.

------
stephengillie
I'm not sure how far away their datacenter is that they have 500ms of latency.
I'm pretty sure packets can circle the globe in under 500ms these days.

The article mentions access points, maybe they're hampered by poor wifi?

------
p4wnc6
I wonder how the Borg managed to deal with latency issues.

~~~
krapp
Transwarp conduits. And being fictional.

~~~
p4wnc6
I really need to start being more fictional. Maybe I'll read _Broom of the
System_ again.

------
lololomg
They can't outsource ALL of their brain but the higher-level thinking is not
that sensitive to a bit of latency.

------
jonathankoren
Sensor latency is low because all your primary sensory organs are colocated
with your brain.

~~~
benjiweber
Even with the colocation, we have eye-to-action response times of around
400ms. It takes 100ms for information to even get from the retina to
processing in the brain:
[http://www.sciencedirect.com/science/article/pii/S0896627309...](http://www.sciencedirect.com/science/article/pii/S0896627309001718)

Given this, network latency doesn't seem like a big deal.
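
Putting the comparison in numbers (the human figures are from the comment above; the server ping is an assumed typical value for a nearby datacenter):

```python
# Back-of-envelope: a nearby-server round trip versus the human
# eye-to-action pipeline.
human_eye_to_action_ms = 400   # figure from the comment above
retina_to_brain_ms = 100       # figure from the cited paper
nearby_server_rtt_ms = 25      # assumed ping to a close datacenter

fraction = nearby_server_rtt_ms / human_eye_to_action_ms
print(f"network RTT is {fraction:.1%} of the human response budget")
```

A hop that costs a few percent of the budget humans already tolerate in each other is hard to call the bottleneck.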

