
ANA Avatar XPRIZE - T-A
https://avatar.xprize.org/
======
dljsjr
The last few XPRIZE announcements have been really confusing to me.

I don't take any issue with the sentiment; they want to advance specifically
commercial domains by incentivizing them to accomplish moonshots.

The problem I have is that the moonshots (quite literally in the case of the
Google Lunar XPRIZE) are so ambitious that the only people who have the
resources to even come close to accomplishing these goals would never be
interested in prize amounts this small. They would almost certainly end up
losing money with purses this size, and quite honestly, winning an XPRIZE just
doesn't carry enough prestige to offset the capital losses.

The amount of R&D needed to get a robotic avatar to the level proposed here is
staggeringly larger than $10mil.

I just don't see the point.

~~~
melling
X-Prizes can spur innovation even if they aren’t completely successful. A few
of the Moon X-Prize contestants are still planning to reach the goal:

[https://www.bloomberg.com/news/articles/2018-03-11/space-
exp...](https://www.bloomberg.com/news/articles/2018-03-11/space-explorers-
now-shoot-for-the-moon-without-google-s-prize)

------
spacestuff387
This won't be easy, but cheaper entrants are possible: take off-the-shelf
components and put them together, then rely on a human to do the more complex
bits.

Take pg 7 of the draft guidelines:
[https://avatar.xprize.org/sites/default/files/ana_avatar_dra...](https://avatar.xprize.org/sites/default/files/ana_avatar_draft_competition_guidelines_2018_03_12.pdf)

In the 'Vision' section it requires that the Avatar be able to track motion:
"Can you correctly follow an object moving on the floor in front of you?"

The hard way to do this is to develop a vision system with neural nets to
track the relative motion of any object within the field of view. Objects need
to be classified, identified, and tracked. Further, the purpose of the object
must be divined. That is extremely difficult.

But if you allow for a wireless connection with a human in a suit, and the
human operator's brain is doing all of that work (the work a child does when
catching a ball), then the challenge gets much, much easier. Pass the visual
data to the human in close enough to real time that the human can react while
wearing a control suit. Then pass the control suit reactions back to the
robot. The problem becomes one of data compression, latency in the system,
UI/UX design on the human side of things, etc.
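The loop described above can be sketched in a few lines. This is a toy
illustration of the architecture, not anything from the guidelines: the
function names, message formats, and the fake "operator" are all assumptions,
and the human's perception is stubbed out with a trivial function.

```python
import time
import zlib

def compress_frame(frame: bytes) -> bytes:
    """Robot side: compress raw camera data before transmission."""
    return zlib.compress(frame)

def operator_react(compressed: bytes) -> list:
    """Operator side: decompress the frame, 'perceive' it, and return
    suit commands. In reality the human brain does this step; here it
    is faked as a trivial function returning six joint targets."""
    raw = zlib.decompress(compressed)
    return [len(raw) % 7 / 10.0] * 6  # placeholder joint commands

def teleop_step(raw_frame: bytes):
    """One round trip: robot -> operator -> robot, with latency measured.
    The engineering problem is keeping this latency small enough for
    the human to react naturally."""
    t0 = time.perf_counter()
    commands = operator_react(compress_frame(raw_frame))
    latency = time.perf_counter() - t0
    return commands, latency

commands, latency = teleop_step(b"\x00" * 640 * 480)  # fake VGA frame
```

The point of the sketch is where the difficulty lives: not in the robot's
"thinking," but in compression ratio, round-trip latency, and the interface on
the human side.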

The point is, cheaper, less comprehensive systems could be built that could
functionally complete the challenge and win the prize. And those systems could
have immediate uses.

For example, what if I have elderly parents who have a robotic assistant at
their home full time? Some tasks are easy to program (e.g., changing a
bedpan), but others are much more difficult, like giving a sponge bath to
prevent bed sores. The robot could change the bedpan automatically whenever
needed, like a Roomba doing the vacuuming. But when it needed to do the bath,
it could call me up at home; I could put on my control suit, connect over the
internet, and bathe my parent from thousands of miles away. The complex
'thinking' and task management of the robot is handled in my brain.

But the basic business case is met: work can be done by a robot with a human
'guide' or 'pilot' at a distance.

------
sushisource
Anyone else freaked out by the fact that "ANA" is the same name used in the
[https://en.wikipedia.org/wiki/Commonwealth_Saga](https://en.wikipedia.org/wiki/Commonwealth_Saga)
for a distant future governing cyberintelligence?

Great coincidence.

~~~
hujun
ANA also stands for All Nippon Airways, a Japanese airline; that's the first
thing I thought of when I read the title.

~~~
quelltext
Well, that's because All Nippon Airways is behind this.

