I've worked in robotics for over 10 years, at state-of-the-art labs and high-quality startups.

There are really only two hard problems in robotics: Perception and Funding.

Perception, especially around crowds of people (depth, mapping, understanding traffic and gestures, all in real time), will be a huge problem for these machines for a while.

Funding though? I doubt that's an issue right now.






I'm also a roboticist. Perception and funding are hard. But don't forget battery energy density, and the power-to-weight ratio and energy efficiency of actuators. Also very very hard, and Moore's law helps not at all.

Autonomous cars are in a nice niche since they store vast energy for actuation anyway, it's OK to be heavy, and the controls are relatively simple. They are limited by perception and decision making.

Humanoids are way more limited by energy storage and actuation. Animals are absurdly efficient.
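
To make that concrete, here's a back-of-envelope sketch. Every number below is an assumption I'm making for illustration, not a spec of any real humanoid:

    # All figures assumed for illustration, not measurements of a real robot.
    battery_mass_kg = 5.0             # pack a humanoid could plausibly carry
    specific_energy_wh_per_kg = 250   # roughly current lithium-ion cells
    avg_power_draw_w = 500.0          # assumed locomotion + compute average

    pack_energy_wh = battery_mass_kg * specific_energy_wh_per_kg  # 1250 Wh
    runtime_h = pack_energy_wh / avg_power_draw_w                 # 2.5 h
    print(f"{pack_energy_wh:.0f} Wh pack -> roughly {runtime_h:.1f} h of runtime")

A couple of hours between charges is the flavor of constraint you end up with, and Moore's law does nothing to move it.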


Battery density is only an issue if these things are spending most of their time moving long distances. If you are targeting a drop-in replacement for a human worker who spends most of their time at a workstation, it can be plugged in while working. Even in a scenario where the robot cannot be connected to power while working, that's easily solved with redundancy - get two robots, one works while the other charges. Obviously better battery life is a nice-to-have, but it's not an impediment to large-scale adoption the way other big robotics problems are.
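
If you want to put numbers on the redundancy idea, the arithmetic is simple. A hedged sketch in Python (the work and charge times are hypothetical, not taken from any real robot):

    import math

    def fleet_size_for_continuous_coverage(work_h_per_charge, charge_h):
        """Robots needed so one is always at the workstation while the rest charge."""
        cycle_h = work_h_per_charge + charge_h
        return math.ceil(cycle_h / work_h_per_charge)

    print(fleet_size_for_continuous_coverage(2.0, 1.5))  # -> 2
    print(fleet_size_for_continuous_coverage(1.0, 3.0))  # -> 4 if charging is slow

With the assumed numbers, two robots keep one workstation staffed around the clock; the multiplier only grows if charging is much slower than discharging.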

> easily solved with redundancy - get two robots.

Yay, twice as expensive.

And power tethers on robots suck so hard. Try it sometime, you’ll hate it.


A problem that can be solved by spending 2x is not the type of fundamental problem I'm referring to. That one is easily solved by "Funding", otherwise known as a system design constraint and part of everyday engineering (albeit very difficult and skillful engineering).

The leaps that would be required to make a mannequin with motors intelligently interact with crowds (in groups, no less) at a publicity event cannot be solved with 2x funding jumps, and I'm arguing they are largely perception-, sensing-, mapping- and self-modelling-based.


And how much more expensive is the twice as efficient power system that hasn't been developed yet?

Nearly all robots in actual use have tethers; it's really not a big concern. Further, there are other methods of providing power, such as induction. For any situation where long-range mobility is really a concern, you probably don't want a humanoid robot to begin with.


Tangential question: are there any actuators out there that mimic the animal muscle tissue, i.e. swelling laterally in order to shorten a tendon and pull a joint? This seems like a very elegant method compared to servos with all sorts of slack and rigid positioning. I'm not a roboticist so I'm not familiar with state of the art in actuators.

They exist, but they're inefficient compared to ordinary motors.

My point is that if you want to make a fully functional android last longer, have it take bathroom breaks and change out its lithium backpack.

If you want to make an energy-unconstrained robot into a fully functional android, you have much bigger, fundamental problems.


That surprises me. I thought motion planning and motor control would be harder - old memories of Asimo falling helplessly trying to climb stairs, the clunkiness of a robot aligning itself perfectly with a drawer before executing a scripted-looking action to pull the handle, the obvious recorded sequence Atlas uses to get up from a fall. I know Boston Dynamics does impressive acrobatics, but it's all legs and no arms.

Are kinematics and planning solved now? I want to move into the field so I'm trying to learn.


How much of that is actually perception though?

"Where can I put my feet safely?"

"What is the orientation and 6 higher order velocities of my body?"

etc.

I've been told that a perfectly observable / estimable system is trivially controllable. It's one of the reasons I believe perception is upstream of everything - interaction dynamics alone are nearly impossible to just wave away with models.
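
As a toy illustration of that claim (my own sketch, not anyone's production stack): a discrete-time double integrator with a hand-tuned state-feedback gain. Given the true state it regulates to zero without drama; feed the same controller a noisy state estimate and the residual error is set entirely by the quality of the "perception".

    import numpy as np

    dt = 0.01
    A = np.array([[1.0, dt], [0.0, 1.0]])   # double integrator: position, velocity
    B = np.array([[0.5 * dt**2], [dt]])
    K = np.array([[20.0, 8.0]])             # PD-style state feedback, u = -K x

    def simulate(obs_noise_std, steps=2000, seed=0):
        rng = np.random.default_rng(seed)
        x = np.array([[1.0], [0.0]])        # start 1 m from the target, at rest
        for _ in range(steps):
            x_hat = x + rng.normal(0.0, obs_noise_std, size=x.shape)  # "perception"
            x = A @ x + B @ (-K @ x_hat)
        return float(np.linalg.norm(x))

    print("final error with perfect state:", simulate(0.0))
    print("final error with noisy state:  ", simulate(0.3))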

I don't even work in perception. But I know that everything is fine until you try to go online with perception in the loop. Then you are stuck behind the perception team's debugging nearly all the time.


Here's some closed-course manipulation with arms: https://www.youtube.com/watch?v=vuG-qNgLHws

There are others. Controls is hard! You need investment and to solve difficult engineering problems. But we have a pretty good idea that those things are solvable and can demonstrate success, because they are engineering challenges, not things we fundamentally don't have an approach for yet.


> Asimo falling helplessly trying to climb stairs

IIRC, that wasn't a control problem but a mechanical failure of a gearmotor shaft.



