I think most people believe that the problem with robots is that we don't have the right software, and if we just knew how to program them then today's robots could be incredibly useful in everyday life. From that perspective, this move from OpenAI seems dumb.
That belief is wrong. Today's robots can't be made useful in everyday life no matter how advanced the software. The hardware is too inflexible, too unreliable, too fragile, too rigid, too heavy, too dangerous, too expensive, too slow.
In the past the software and hardware were equally bad, but today machine learning is advancing like crazy, while the hardware is improving at a snail's pace in comparison. Solving robotics is now a hardware problem, not a software problem. When the hardware is ready, the software will be comparatively easy to develop. Without the right hardware, you can't develop the appropriate software.
OpenAI is right to ignore robotics for now. It's a job for companies with a hardware focus, for at least the next decade.
You're demonstrably wrong. I can waldo any number of commercially available arms to do work humans do today. Surgeons waldo precise robots to conduct surgeries as a matter of course. Every piece of construction machinery operated by a human today is an incredibly useful robot lacking sufficiently capable software.
"When the hardware is ready, the software will be comparatively easy to develop." I take it you've never written any software for a robot? The long tail of the real world takes years and years to handle. Probably the most advanced robotics company, at the cutting edge of the ML+Robotics, is Covariant and their entire business model rests on an understanding that the long tail can and should be handled by humans.
I agree that OpenAI is right to cut out the hardware, but all your reasoning about why is wrong.
The reason, which they state, is that data collection on physical devices is slow, modification of those devices is slow, and maintenance of those devices is expensive. You want to simulate everything, not because simulation reproduces the real world in high fidelity (that doesn't matter), but because it gives you approximations with sufficient variety and complexity that you can continually challenge your AI, and you can do all of that at 1M fps.
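As a rough illustration of that argument, here's a toy sketch (pure Python, every name and number invented for illustration, not OpenAI's actual setup) of why simulation buys you variety and throughput: every episode gets freshly randomized physics, and since nothing touches real hardware you can run as many episodes as you like, as fast as the machine allows.

    import random

    def make_randomized_sim():
        # Toy stand-in for a simulator with randomized parameters. Each call
        # produces a slightly different "world" (friction, mass, sensor noise):
        # the variety and complexity that's cheap in sim and slow on hardware.
        return {
            "friction": random.uniform(0.2, 1.0),
            "object_mass": random.uniform(0.1, 2.0),
            "sensor_noise": random.uniform(0.0, 0.05),
        }

    def run_episode(sim_params, policy, steps=200):
        # Run one simulated episode and return a crude progress measure.
        state = 0.0
        for _ in range(steps):
            observation = state + random.gauss(0.0, sim_params["sensor_noise"])
            action = policy(observation)
            # Trivial fake dynamics: heavier objects and lower friction make
            # progress harder, standing in for task difficulty.
            state += action * sim_params["friction"] / sim_params["object_mass"]
        return state

    def naive_policy(observation):
        # Placeholder for whatever learned controller is being trained.
        return 0.1 if observation < 10.0 else 0.0

    # Thousands of cheap, varied episodes; on a real robot each one would
    # mean wear, resets, and human supervision time.
    results = [run_episode(make_randomized_sim(), naive_policy) for _ in range(10_000)]
    print(sum(r >= 10.0 for r in results) / len(results))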
> I can waldo any number of commercially available arms to do work humans do today
Not for everyday tasks with anywhere near the efficiency, reliability, speed, and cost that humans have without robots. You can't waldo any robot to do the laundry or the cooking in a normal home anywhere near as well as a human can do it. (I'd love to see you try!)
Sure, you can make a robot that can do work humans can't. You can make a robot stronger, or more precise, or better suited to repetitive motion than a human. Those attributes are useful in specialized tasks. But generally not for the everyday tasks humans do today that we want robots to help us with. For everyday tasks you need a robot that is comparable in speed, efficiency, weight, reliability, durability, flexibility, sensor capability, and cost to a human. Not one of those areas, but all of them simultaneously. That's the hard part.
This is just moving the goalposts of your argument.
But the "Hardware lags behind" only makes sense to Sci-fi like expectations of robot agility but the software isn't even remotely close to embody that hardware. Even In the real world robotics applications TODAY this statement falls flat by one simple demonstration:
Use existing arm + teleoperation and conduct X amount of tasks (could be a mobile robot too, or a car for that matter). Now find a software that have same versatility in task execution as the human.
Most softwares for simple robotics manipulation tasks lose out to human operating it directly, bar efficiency maybe, in an static controlled environment even using the same control and perception system. Yet human controlling these arms directly show that the hardware is capable enough to conduct those tasks.
The "hardware lags behind" statement is if anything just a convenient excuse from the software / automation developers in Robotics, (also being one of them myself) shifting the blame to others, or have a sense of false highground.
The need of Lidar on early self driving cars was the same motivation; somehow softwares couldn't just use camera but needed an additional 6th sense, that humans don't even need, and still performed quite bad.
Even if this is true, it weakens your point that hardware is the fundamental limit for robots. If there are situations where software giving human-like behavior to a robot could be extremely valuable, then there's certainly a motivation for generic AI companies to be in that area.
That doesn't mean OpenAI leaving robotics wasn't a good idea. It seems like it was, but for other reasons.
Giving robots human-like behavior is mostly useful for general-purpose robots. Specialized robots don't need general AI to do their jobs. OpenAI is trying to develop general human-like AI. There's no general-purpose robot for them to put it in, and developing one is a hardware problem, not a software problem.
> Specialized robots don't need general AI to do their jobs.
Self-driving cars have been unable to succeed because they lack a broad understanding of what's happening on the road (i.e., "too many corner cases"). Self-driving cars would be a huge change, and their failure is very significant.
We can build perfectly good robot arms for a huge swath of assembly/warehouse/retail tasks, but there's no AI that can aim them well enough and carefully enough. An overqualified AI would still be a valid solution and extremely valuable.
Off-topic but this is the first time I've seen waldo used as a verb. It's not in the Urban Dictionary, but from the way you use it I take it that it means "to find the correct thing out of the sea of so many similar things", right?
> Waldo Farthingwaite-Jones was born a weakling, unable even to lift his head up to drink or to hold a spoon. Far from destroying him, this channeled his intellect, and his family's money, into the development of the device patented as "Waldo F. Jones' Synchronous Reduplicating Pantograph". Wearing a glove and harness, Waldo could control a much more powerful mechanical hand simply by moving his hand and fingers.
The way I see it, the problem is that the "any number of commercially available arms" - the models that are beyond absolute toys and might be waldoed to do work humans do today - are staggeringly expensive compared to the humans they might replace. The cheap arms are ... toys: they can do a nifty demo, but they are really shit for doing any actual work. And while there are also good arms available, they're so expensive that they're tricky to afford even for experimental use; in practical applications, for the same price, you can just get many outsourced workers to do that with actual hands instead.
The world will be ready for robotics revolution - creating immense demand for robotics software - when you'll be able to get a decent arm for the price of a fridge, not the price of a fancy car; just as we got the computer revolution not when we developed capable computers, but only after we developed affordable capable computers.
> The way I see it, the problem is that the "any number of commercially available arms" - the models that are beyond absolute toys and might be waldoed to do work humans do today - are staggeringly expensive compared to the humans they might replace. The cheap arms are ... toys: they can do a nifty demo, but they are really shit for doing any actual work. And while there are also good arms available, they're so expensive that they're tricky to afford even for experimental use
But that expense is a product of the lack of demand for them. Cars (again) are now produced with engines built to astoundingly good accuracy.
Of course, an accurate arm is always going to be more expensive than a simple arm, but cars, chips, phones and whatnot show that with large-scale processes and heavy capital investment, accuracy and cheapness are compatible. Today, though, with software incapable of doing useful things with those arms, the people and institutions with the capital to make accurate robot arms cheap through economies of scale are not going to mobilize that capital.
> just as we got the computer revolution not when we developed capable computers, but only after we developed affordable capable computers
Some technological advances happen through a feedback loop of commodities getting sold and producers improving those commodities (cars are an example here). Other technologies require a leap, where a significant clump of capital has to be devoted to creating an advanced device for which there's no sellable (and sometimes no operable) predecessor (the biggest example of such a leap was the Manhattan Project). It might be the case that things will happen that way with robots. But I'd also say it's an open question.
Cars will be the first commercially available robots with widespread use. They will then be mutated into lawnmowers, farm equipment, commercial transport, etc. They will then get "add-ons": arms, legs, power tools. Because humans are mobile, widespread use of any robot will first require portability or mobility. Actually, the smartphone was our first robot, and it is portable.
I want to react to "the long tail can and should be handled by humans", because I find this thinking counter-productive and dangerous.
Humans are excellent at handling the long tail when they're already handling the rest. Take driving. We're already seeing cars with substantial cognitive assistance, taking a more and more active role in "easy" tasks. Think Tesla's Autopilot. You're supposed to be there and "take over" when the "machine" fails to handle the "long tail", or when it decides to hand you responsibility for whatever happens next (because you trained it to do so).
Driving is a very complex task; you need training, experience, anticipation and (very important) context. There's no easy way to assemble all the details necessary for a decision in a human brain within the time needed to take that decision "correctly". It's a similar problem in industrial automation, where you call in the "long tail" person once in a while and that person probably doesn't have the expertise to reconstruct the context, after three rounds of turnover at your provider.
I think we're taking this problem the wrong way: aiming for the low-hanging fruit, then higher and higher, while handwaving the long tail and sending it over the fence to the human. We should be putting the human at the center of this, extending their capabilities, reducing the repetitiveness, helping, not taking over.
The paper I like a lot on this is 'automation should be like Iron Man, not like Ultron'.
This is a very important point about how the calculus of automating human labor shifts when you do it incompletely. There are still big debates in the self-driving car world between people who want to get to level 2-3 (partial/conditional automation) vs. level 5 (full automation).
The former group often says that since the edge cases they can't automate now constitute only <5% of the scenarios they encounter, they've automated 95% of the job. But with what you're saying, we can't really expect the calculus to work that way.
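To make that concrete, here's a back-of-the-envelope sketch; the numbers are invented purely for illustration. Counting scenarios says "95% automated", but if a human still has to supervise the whole shift to catch the unpredictable 5%, the human hours saved are roughly zero.

    # Invented numbers, just to illustrate the two ways of counting.
    shift_hours = 8.0
    edge_case_fraction = 0.05          # share of scenarios the system can't handle

    # "Scenarios automated" framing.
    scenarios_automated = 1.0 - edge_case_fraction
    print(f"scenarios automated: {scenarios_automated:.0%}")   # 95%

    # Labor framing: edge cases arrive unpredictably, so a supervisor must stay
    # attentive for the entire shift to catch them when they occur.
    supervision_hours_needed = shift_hours
    hours_saved = 1.0 - supervision_hours_needed / shift_hours
    print(f"human hours saved: {hours_saved:.0%}")             # 0%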
I don't think it's really true to say that the issue is hardware or software. There is lots of robotics in everyday life, from autonomous vacuums in our homes to autonomous factories producing the goods we consume. The reason we don't have millions of little robots buzzing around us is... there's very little need for it.
The average human spends most of their time barely engaged; our brains and bodies operate far below what we're capable of. The romanticised sci-fi vision of a world filled with intelligent robots performing every menial task for humans builds on the idea that humans have better things to do, but do we? We already have enough knowledge and resources to end world hunger, to bring a high standard of living to every human, but we choose not to: our problem is social, not software or hardware.
As an aside, I'd dispute the claim that hardware is lagging behind software: Tesla has lots of money and lots of smart people and they haven't been able to deliver self driving cars after more than a decade of promises (because of software).
> Today's robots can't be made useful in everyday life no matter how advanced the software. The hardware is too inflexible, too unreliable, too fragile, too rigid, too heavy, too dangerous, too expensive, too slow.
You're absolutely wrong. Anyone with basic electronics knowledge and a few hundred bucks can build a passable robot body out of hobby grade servos and 3D printed parts. If you're willing to spend $10k+ you can make something quite capable.
Programming it to then actually do anything, let alone anything useful in the real world, is still out of reach for all but a tiny fraction of companies.
Hardware still has a long way to go before it's as capable as biological systems but it's usable. Real world AI is far from that in most areas.
No, and yes. You can build something. But even if you imagine tele-operating it 24/7, there is nothing relevant you can do with it around the house. It's not about the programming.
And that's not even taking into account the time-to-maintenance of such a system.
On the other hand, Boston Dynamics' manifold, where they control the dozens of hydraulic parts, is an absolute marvel of technology that shows what you can achieve with 45 (?) years of dedicated focus.
You might be able to teleoperate their robot for something useful in a human environment, and I guess that would be a gamechanger. But even there I want to wait-and-see if they can escape the fate of many that came before them.
Think of literally any piece of equipment that needs a human operator to operate it. If we had the software side figured out, none of these machines would have human operators.
Robotics software is incredibly complex. Even with machinery that was a perfect replica of a human body to the most minute details, throwing some ML algorithms at it wouldn’t get us anywhere.
If it worked that way, my job would be much easier.
I'm not an expert, but I'd suggest hardware is trailing software, though there is still great progress happening in materials design, soft robots, miniaturization, etc. Just as in the computer era, we go through phases where the software is ahead of the hardware, then the hardware gets ahead of the software. It seems to be a pendulum that swings, which is the argument many people make for the integrated pipeline Apple operates.
I think the self-driving car is an incredibly useful robot whose massive adoption is under way (still at a very early stage); and in many less challenging areas, self-driving-capable vehicles have been taking over.
One day a person will see a Spot robot and smile at its cuteness, then notice that there's no way to get past it, no way to keep it from deciding where you can and can't step. It won't have a gun, but that person will no longer have a reason to smile at these robots in the future.
Later you will have a Spot robot chasing a person and getting them to stop, surrendering to the machine without being threatened by it, just by recognizing that there's no longer any point in running away.
Robots do a lot today; it's just not AI, and most of it isn't ML. I think the ML folks find what robots do today "tiresome and manually trained", but that doesn't stop robots from producing billions of dollars a year in goods.