"We observed that for precision grasps, such as the Tip Pinch grasp, Dactyl uses the thumb and little finger. Humans tend to use the thumb and either the index or middle finger instead. However, the robot hand’s little finger is more flexible due to an extra degree of freedom, which may explain why Dactyl prefers it. This means that Dactyl can rediscover grasps found in humans, but adapt them to better fit the limitations and abilities of its own body."
The learning of "emergent" behavior, specifically when it improves on natural human motion, is one of the main reasons this type of work is so important. Similar to the way we imitate designs from nature (e.g. wings, suction cups), we can now accelerate development by observing how the bots perform the task in a variety of environments.
Then why not evolve commensurate dexterity with mechanically simpler manipulators than a human hand? I'd bet that a robot with 4 arms with 4 different specialist manipulators and a few specialist tools (like a generalized screw-threading tool) could eventually be more efficient than a human with 2 human arms. You can even see this happening in real life, powered by human brains. The kind of dexterity people can get out of a crude instrument like a backhoe is very impressive.
The simpler device will always have an economic advantage.
A backhoe is actually very complex - the simple device is a stick. A thousand humans with a stick can do as much work in a week as a single human with a backhoe can do in a day. It only takes a few weeks to pay off the more complex backhoe. Of course your humans can use their bare hands for a further reduction in productivity.
A backhoe is less complex than a horizontal drill, and the drill is less versatile overall. However the drill can go under a structure without harming it which can be a big economic win. If you are just doing a shallow pipe through an open field they are fairly competitive in price though.
The simple device has an economic advantage when the task is not repeated often enough to pay off the costs of the more complex machine (this includes maintenance costs which can be more than the initial investment)
But it's quite a bit less complex than a scaled up anatomic analogue human arm. It's just complex enough to be a general-purpose excavation and construction tool. My point is that the evolved complexity of a human hand is too complex for the general purpose manipulation task. Evolution isn't perfect. It just did a very good job given it had a bunch of fish bones to work with.
Please note I said "simpler" device, not "simplest possible" device. US artillery in WWII had commensurate functionality to German artillery, but did it with nearly half of the number of moving parts. US tanks were individually less capable than their German counterparts, but were more efficient to manufacture and easier to repair. That is what is meant by the economic advantage of the "simpler." Not using a trebuchet instead of a cannon.
Maybe if you're constraining 'general purpose manipulation' to mean 'rearranging blocks'. That complexity (and especially the extra brain power to run it) costs energy. If it wasn't useful, we wouldn't have it.
I'm thinking more along the lines of "can build a PC" or "repair a machine." I don't think we'll need to lap flint spearheads in 2018 and beyond. I don't think we need to have repair bots play the fiddle so much on the surface of Mars. I do think that the manipulation tasks useful to a 21st century civilization are "within the grasp" of simpler manipulators than the human hand. The human hand is freakishly complex. If we can get 80% of the functionality for 25% the cost, that will be a win. (For the most difficult 20%, there are still humans.)
That complexity (and especially the extra brain power to run it) costs energy. If it wasn't useful, we wouldn't have it.
Of course it's useful! No one disputes that! It doesn't matter. You're making a fallacious assumption in supposing that 1) all of that complexity and capability is essential in current and future contexts, and 2) that evolution bought it with maximal possible efficiency. What we understand about evolution tells us that the first isn't necessarily true, and that the second probably isn't!
Yes, there will be a place for human hands. But some huge fraction of the current uses of human hands will probably be replaced by very capable but much simpler manipulators -- simply due to economics!
This is why slavery didn’t just die upon invention of the engine.
Qatar 2022 https://www.amnesty.org/en/latest/campaigns/2016/03/qatar-wo...
Russia 2018 https://www.hrw.org/news/2018/06/06/russia-world-cup-labor-a...
Brazil 2014 https://www.aljazeera.com/humanrights/2014/07/world-cup-work...
(a bit of a threadjack so I'll stop here)
The recent history of technology suggests otherwise. General-purpose devices displace simpler single-purpose devices. Consider all the specialized, but relatively simple, circuits displaced by the desktop computer.
Yes, but simpler general-purpose devices displace more complex general-purpose devices, so long as they're also suitable to the tasks. A robot with a couple of graspers, a picker, a suction manipulator, plus some graspable special-purpose tools could be as versatile as a humanoid robot with anatomically inspired human-like hands, but be several times simpler and several times more reliable.
Worth noting: it's a well-supported route to join OpenAI without any special graduate training. Many of our teams (including our robotics team!) hire experienced software engineers, teaching them whatever ML they need to know, or our Fellows program lets people do a more formal curriculum (https://blog.openai.com/openai-fellows/). We also have a number of software engineers who focus on what looks like traditional software engineering: see for example https://www.youtube.com/watch?v=UdIPveR__jw.
See our open positions here: http://openai.com/jobs!
On the other hand, AlphaGo or even a rudimentary chess program does better than 99.99% of all humans.
So is it fair to say that deep learning is fundamentally missing something that humans do? Or that chess and Go are "easy" problems in some sense?
(It seems like with "unlimited" training hours it could eventually be better than a human? Or is that a hardware issue?)
The first chess program was written by Alan Turing on paper between 1948 and 1950. He didn’t have a computer to run it, but he could still play a game with it by stepping through the algorithm by hand. In 1997, Deep Blue beat Kasparov, using traditional algorithms and not deep learning.
Clearly there are differences between these problems and dexterity. Chess, for example, can be described relatively simply using logic, and there is no dynamic or physical element; a rudimentary player can be written using pencil and paper; a winning player just needs enough compute power, apparently.
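To make the "pencil and paper" point concrete, the classic game-tree search behind early chess programs is exhaustive minimax. A minimal sketch follows, on a toy game (one-pile Nim: take 1 or 2 stones, last stone wins) rather than chess, since chess just swaps in a bigger move generator and a heuristic evaluation at a depth cutoff:

```python
def minimax(stones, maximizing):
    """Return +1 if the maximizing player wins with perfect play, else -1."""
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    outcomes = [minimax(stones - take, not maximizing)
                for take in (1, 2) if take <= stones]
    return max(outcomes) if maximizing else min(outcomes)

# With 3 stones the player to move loses under perfect play
# (any move leaves 1 or 2, and the opponent takes the rest).
print(minimax(3, True))   # -1
print(minimax(4, True))   # 1
```

This is exactly the kind of procedure Turing could step through by hand; making it a *strong* chess player is then largely a matter of compute and evaluation tricks.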
More importantly, there is a technology curve. You are asking about the ultimate limits of a technique moments after its first success puts it at the low end of the spectrum of human ability. Give it a decade or two.
I am just shocked the video was real-time and not sped up like so many of these videos are (eg watch a robot arm fold a shirt in thirty seconds when you play it at 5x speed).
This needs a citation and it needs it badly.
It was widely reported in the popular press, to the dismay of many scientists working in game-playing AI, who had very different opinions about how close or far beating a professional human at Go was at the time of AlphaGo. The majority of them in fact did not make predictions; they just pointed out that Go was the last of the traditional board games to remain unconquered by AI. Not that it would take X years to get there. Most AI researchers are loath to make such predictions, knowing well that they tend to be very inaccurate (in either direction).
> Just a couple of years ago, in fact, most Go players and game programmers believed the game was so complex that it would take several decades before computers might reach the standard of a human expert player.
I understand, but in such cases (when an opinion of experts is summarised in the popular press, rather than by experts themselves) it may be a good idea to dig a bit further before repeating what may be a misunderstanding on the part of reporters.
For example, my experience is very different from what you report. In an AI course during my data science Master's, in the context of a discussion on game-playing AI, the tutor pointed to Go as the only traditional board game not yet conquered by adversarial AI, without offering any predictions or comments about its hardness, other than to say that the difficulty of AI systems with Go is sometimes explained by saying that "intuition" is needed to play well. And I generally don't remember being surprised when I first heard of the AlphaGo result (I have some background in adversarial AI, though I'm not an expert); in fact I remember thinking that it was bound to happen eventually, one way or another.
A similar discussion can be found in AI: A Modern Approach (3rd ed.) in the "Bibliographical and Historical Notes" section of chapter 5, Adversarial AI, where recent (at the time) successes are noted, but again no prediction about the timeframe of beating a human master is attempted and no explanation of the hardness of the game is given, other than its great branching factor. In fact, the relevant paragraph notes that "Up to 1997 there were no competent Go programs. Now the best programs play most [sic] of their moves at the master level; the only problem is that over the course of a game they usually make at least one serious blunder that allows a strong opponent to win" - a summary that, given the year is 2010, in my opinion strongly contradicts the assumption that most experts considered Go to be out of reach of an AI player. It looks like in 2010 experts understood then-current programs to be quite strong players already.
In general, I would be very surprised to find many actual experts (e.g. authors of Go playing systems) predicting that beating Go would take "at least 10 years", let alone "several decades" (!). Like I say, most AI researchers these days are very conservative with their predictions, precisely because they (and others) have been burned in the past. Stressing "most".
Yes, it's missing the ability to generalise from its training examples to unseen data and to transfer acquired knowledge between tasks.
Like you say, the article describes an experiment where a robot hand learned to manipulate a cube. A human child that had learned to manipulate a cube that well would also be able to manipulate a ball, a pyramid, a disk and, really, any other physical object of any shape or dimensions (respecting the limits of its own size).
By contrast, a robot that has learned to manipulate cubes via deep learning, can only manipulate cubes and will never be able to manipulate anything but cubes, unless it's trained to manipulate something else, at which point it will forget how to manipulate cubes.
That's the fundamental ability that deep learning is missing, that humans have.
In the space of possible problems solvable by computers, there are those that are "easy" and those that are "hard".
Arbitrarily defined, an "easy" problem is any problem that can be solved by throwing more resources at it, whether that be more data or more compute. A "hard" problem, on the other hand, is the opposite: solvable only by a major intellectual breakthrough. The benefit of solving a hard problem is that it allows us to do "more" with "less".
Now, the question is: which type of problems are being looked at by today's AI practitioners. I'd argue it is the former. Chess, Go, Dota 2 -- these are all "easy" problems. Why? Because it is easy to find or generate more data, to use more CPUs and GPUs, and to get better results.
Hell, I might even add self-driving cars to that list, since they, along with neural networks, have existed since the 1980s. The only difference, it seems, is more compute.
All in all, I think these recent achievements only qualify as engineering achievements, not as theoretical or scientific breakthroughs. One way to put it: have we, not the computers and machines, learned something fundamentally different?
Maybe another approach to current ML/AI is needed? I remember a couple weeks ago there was a post on HN about Judea Pearl advocating causality as an alternative. Intuitively it makes sense: human babies don't just perform glorified pattern matching; they are able to discern cause and effect. Perhaps that is what today's AI practitioners are missing.
addendum: there are most definitely a greater number of environmental factors (x, y, z axes) involved in solving such problems.
It's interesting to me that this is about the same amount of time it takes humans to develop similar levels of motor control. I don't know enough about AI or neuroscience to say whether it's likely to be a coincidence or not, though.
Humans learn with entirely different stimuli and experience (can you imagine subjecting a person to learning this object manipulation task in the same way as this robot?).
In addition, AlphaGo Zero demonstrated an order of magnitude better training efficiency than AlphaGo due solely to algorithmic differences. Humans have more or less converged on a roughly three-year training time for babies learning these skills, but I doubt that these learning algorithms are as efficient as they will ever be.
Though I suppose you could argue that Dota benefits more from high-level reasoning, whereas basic motor control is a more intuitive skill. (And therefore better suited for this type of AI.)
That seemed an order of magnitude higher than I expected. Is training usually this computationally expensive?
Is it the cores that throw you off? (Those are used for the simulation, not the training.) Second, I believe those were preemptible cores, so that's $60/hr for the cores and then $20/hr for the V100s (which is what I think they used). $80/hr isn't bad considering how much a (small!) team of researchers costs.
First, it seems like a lot and looks really expensive, but think of it in man-hours. The cost quickly diminishes in comparison.
Take a look at position 44, where it seems to get stuck, with no move to make forward progress, and two fingers straight out. Did it lack image recognition to tell it what block rotation was needed?
It doesn't seem to work by discovering strategies for rotating the block one face at a time, then combining those. It's solving the problem as a whole. That has both good and bad implications.
To be precise, the "physical objects" appear to invariably be cubes of the same dimensions. Not arbitrary "physical objects". Which is probably the best that can be done by training only in a simulated environment.
Of course these works are cited in the related-work sections of papers, as they should be; perhaps the OpenAI blog should also provide more context on where this stands with respect to prior work, as many non-researchers may read it and it may be quite misleading...
Let’s say we’re close to self-driving cars, i.e. it’ll happen in 10 years or so. How much will it cost? How much will the maintenance cost? How many years will be needed until everybody owns a self-driving car? Unless more than a handful of people have that kind of car, you won’t kill a lot of jobs.
David Copperfield should be OK a while longer though
If you think of it in terms of "200 years ago" versus "200 years from now" you might think there would be equal magnitudes of progress. But now is 200 years since 1818, and 2218 is 400 years from 1818 AND 200 years from now. The future has more to build on, and taller shoulders to stand on.
The center of history, the moment when as much had changed from the idea "maybe we could just plant some seeds between the Tigris and Euphrates, and maybe not wander around so much" up to that point as has changed from then until the present, may be less than 50 years ago, and it is getting closer every year. Thousands of years on one side of the balance, and tens of years on the other. With that in mind, today may be as different from 1818 as 2038 will be from today. 2218 would be just inconceivable.
TLDR (quick-ish skim, feel free to correct): they train a deep neural network to control a robot hand, choosing desired joint state changes (binned into 11 discrete values; e.g. rotate this joint by 10 degrees) for a 20-joint hand. The input is low-level and non-visual: the current and desired 3D orientation of the object plus the exact numeric state of the joints. They also train a network to extract the 3D pose of a given object from RGB input. All this training is done in simulation with a ton of computation, and they use a technique called domain randomization (changing colors, textures, friction coefficients, and so on) to make these learned models pretty much work in the real world despite being trained only in simulation.
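The domain randomization idea can be sketched roughly like this; note the parameter names and ranges below are illustrative guesses, not OpenAI's actual configuration:

```python
import random

def sample_randomized_env():
    """Sample one set of simulator parameters for a single training episode."""
    return {
        "friction":     random.uniform(0.5, 1.5),   # fingertip/cube friction scale
        "cube_mass":    random.uniform(0.05, 0.2),  # kilograms
        "motor_gain":   random.uniform(0.8, 1.2),   # actuator strength multiplier
        "obs_noise":    random.uniform(0.0, 0.02),  # sensor noise std-dev
        "cube_texture": random.choice(["wood", "plastic", "checker"]),
    }

# Training-loop outline: a fresh randomized "world" per episode, so the
# policy never gets to overfit to any single simulated environment.
for episode in range(3):
    env_params = sample_randomized_env()
    # rollout = simulate_episode(policy, env_params)  # collect data, update policy
```

A policy that succeeds across all these sampled worlds has a decent chance of treating the real world as just one more variation.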
It's pretty cool work, but if I may put my reviewer hat on, not that interesting in terms of new ideas. Still, it's cool that OpenAI is continuing to demonstrate what can be achieved today with established RL techniques and nice distributed compute.
"To transfer to the real world, we predict the object pose from 3 real camera feeds with the CNN, measure the robot fingertip locations using a 3D motion capture system, and give both of these to the control policy to produce an action for the robot."
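That quoted pipeline could be sketched, very loosely, like this; all function names and shapes here are placeholders, not OpenAI's code:

```python
import numpy as np

def pose_cnn(images):
    """Placeholder for the trained CNN: 3 RGB feeds -> object pose."""
    return np.zeros(7)  # xyz position + quaternion orientation

def control_step(camera_images, fingertip_positions, policy):
    """One control tick: fuse vision and motion capture, then ask the policy."""
    object_pose = pose_cnn(camera_images)
    observation = np.concatenate([object_pose, fingertip_positions.ravel()])
    return policy(observation)  # desired joint changes for the hand

# Stub policy for a 20-joint hand; 5 fingertips tracked in 3D by motion capture.
policy = lambda obs: np.zeros(20)
action = control_step([None] * 3, np.zeros((5, 3)), policy)
print(action.shape)  # (20,)
```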
But why?? Why should robots' hands resemble human hands? They could have any number of fingers, or tentacles, or magnets, why should they be like human hands??
It seems "AI" really means "as close as possible to human behavior", even if we're not really that clever in said behavior.
Also, human intelligence being at least debatable, it's not obvious that the obsessive imitation of humans is the best way to attain "AI".
It’s really good practice for your AI system to try it on a human hand. If you can make it work with a human hand you can probably make it work for other manipulators, but it’s a great place to start!
 He said, tooting his employer's horn.
I don't quite get the "New Jobs will be created" fallacy.
Let me explain:
What is job? An Abstract way of looking at it: A job is something that requires a set skills to accomplish a task.
What most politicians don't get: Researchers like OpenAi teach machines SKILLS not jobs.
A little thought experiment:
Let's say humans are capable of 100 skills.
Skills can be anything from: driving, seeing, hearing, reading, walking, carrying, drawing etc.
Usually, a low-paying job requires little to no training.
For example: someone in a warehouse who picks the stuff you have ordered. The skills required are: walking, picking, and using a device.
A high-paying job usually requires more skills and/or experience.
We train machines to see better, hear better, sort faster etc.
Any new job will require some sort of skills out of the set of skills that can be trained.
But the moment you create this job, it will be automated, because a machine can do it better and faster.
We need to address this now, otherwise I don't see a bright future for the generations to come.
Because they taught a robot hand to rotate a cube well... ?
Take it from an AI researcher, it's one thing to make a demo of a technique solving a narrow very simple problem and quite another to use that technique to solve a real world need.
"Any new job will require some sort of skills out the set of skills that can be trained" - even if we do reach the level of AI there this would actually hold, we are NOWHERE near that today, and this result certainly does not demonstrate we are.
AI does not remove scarcity.
Comparative advantage is a real thing. It is well studied and well understood.
The common argument seems to conflate AI, automation, robotics, and similar things with the removal of scarcity.
In the presence of scarcity, comparative advantage tells us that we're unlikely to see the vast majority of the world's population with nothing to do.
If a small number of people, with the aid of machines, can do the equivalent work of thousands of workers, how will the free market support the higher cost of human labour in comparison to the robots?
It's not that people will have nothing to do. But for a huge number of people, there will be no way to get paid as much as it costs them to live.
It is possible to have an absolute advantage in production of all goods, but still gain from trading with your inferior partner.
This is based on another of the foundational concepts in economics, that of opportunity cost.
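A toy numeric example of the point above (the numbers are invented for illustration): even when the robot is absolutely better at both tasks, the opportunity costs differ, so trade still pays:

```python
# Output per hour for two tasks. The robot is absolutely better at both.
robot = {"widgets": 10, "reports": 4}
human = {"widgets": 2,  "reports": 2}

# Opportunity cost of one report, measured in widgets forgone:
robot_cost = robot["widgets"] / robot["reports"]  # 2.5 widgets per report
human_cost = human["widgets"] / human["reports"]  # 1.0 widget per report

# The human is the *relatively* cheaper report-writer, so total output rises
# if the robot specializes in widgets and trades for the human's reports.
print(robot_cost, human_cost)  # 2.5 1.0
```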
Economics is debate-ably a science, but it is certainly a mature field of study. Much like other mature fields of study and technical fields, it contains jargon. Economics suffers exceptionally from the challenge of its jargon sounding like vernacular language.
You are not using comparative advantage correctly. The wikipedia article is relatively short and clear.
That being said, your argument seems to be "efficiencies in production will drive the prices of goods down" and "humans will earn less money." There is a pretty big jump to "humans will not earn enough to pay the prices of the (now cheaper) goods they require to live."
Let's engage on that argument. Is it a fair (though obviously simplified) statement of your position? What leads you to believe that wages will fall to a greater extent than prices?
To respond on the main debate, my argument is not that "efficiencies in production will drive the prices of goods down" but instead that "efficiencies in production will drive the production costs of goods down".
Just because it becomes cheaper to make a product, doesn't mean that the consumer price will drop the same amount. Especially if that cheaper production can only be utilized by a few companies with automation skills.
The economic benefits of automation may trickle down to consumers, but will largely be taken by the few large companies capable of such automation.
Nobody 20 years ago would have imagined the job of mobile app developer. The mobile phone replaced many devices and probably many jobs in manufacturing, but also created new domains that we couldn't have imagined and empowered people in the developing world (and elsewhere).
> We need to address this now, otherwise i don't see a bright future for the generations to come.
You look at things the wrong way. Humans have always had a job which can't be taken away by corporations - the job of caring for oneself and one's needs. If we don't have corporate jobs, then we can become self reliant at individual, community and country level and find ways to support ourselves. We can build houses, teach children, provide medical care and many other things with jobless people for jobless people. We can even use automation for our own benefit, like we do with open source software.
My job as a developer is, in a real sense, to eliminate as much work from people as possible. If I wasn't doing that, there'd be no point to my job. We don't just make software for the hell of it.
Most people would be surprised to learn that US manufacturing is at the highest level in history. And they are surprised because manufacturing employment is at its lowest levels.
Is it good that Americans are not doing highly physically demanding manufacturing jobs? Sure. But what are the long term consequences of productivity without people? Every industry is more productive with drastically fewer people and they're improving on that equation every day. Not just factories but also white collar office work too. When self-driving vehicles become the norm, a massive amount of people will no longer have jobs.
Think of it this way: what would people who have lots of free time (no job) and lots of unfulfilled needs do? They would work to be self reliant if that would mean having a better life. There would be plenty of manpower. All we need is land and raw materials.
But we also have an extreme paucity of people coming up with those ideas for what to do with our staggering collective potential; the skill to produce creative solutions to problems which may not have been identified yet is one that I believe we will have some difficulty teaching to machines in the short or medium term.
What you see as an intractable problem, I see as a deficit in training, education, and investment in workers. These days, people are fully capable of learning about any topic which sparks their interests; there are tutorials and how-to's from basic to advanced levels in both text and multimedia formats on everything from genetic engineering to programming to circuit design to working with materials like wood/leather/metal/plastic...the list goes on.
People either don't seem to realize that they can retrain themselves, or they don't have the resources and especially time to do so. We can help with both, but there doesn't seem to be much appetite to cough up any money for educating adults.
"Get everybody rich" is a solution to a lot of problems, but it’s not easy.
> Making sure there are good open source versions of the robots should allow everyone to benefit from robot labor.
Open-sourcing is a great thing, but does it really reduce the cost of building a robot?
You still have to pay the people who write those open-source plans and code, plus the ones who build these robots (or the ones who build the robots that build the robots), and the ones who sell them. Also, maintenance and update costs.
Given that definition AI is no threat. Of course your post implies the financial impact of AI upon people, in this context I think it's important to correctly frame a Job as "A way for individuals to use their skills to extract worth from an economy, for the purpose of trading that worth for other goods in the _same_ economy".
The emphasis is key: jobs are not independent of the value of products sold. E.g., taking the "AI will destroy all jobs" narrative to be true and then following it to its extreme: a world where no one (or very few) has a job. In this world no one has any worth to trade for the value being produced by AI; it's a complete catch-22. The only realistic way this could work is a Star Trek universe devoid of "money". But this is not binary; there is no post-apocalyptic middle ground where humans go live in the desert while AI gradually destroys everyone's source of income. There is an in-between:
AI is not as special as everyone thinks. The way it's being used is very applied; it's just an extension of automation. Just like all automation before it, the worth being created by AI is cheaper to create, and therefore _can_ be cheaper to buy. Given enough gradual economic impact it also _must_ be cheaper to buy, as people's buying power decreases and the product has no value if it cannot be sold. The net result over time is a reduced cost of living.
Note that I've refrained from attacking the easiest thing here, which is that AI is not as clever as the media hype train likes to make everyone think. The chasm between the conscious human mind and current engineered NNs that are about as functionally intelligent as 0.1 fruit flies is astronomically large and n-dimensional; we are not talking about something as simplistic as Koomey's law here. But the media likes to apply "infinite exponential growth" to every aspect of technology; reality is far more subtle.
If the AI becomes superior to the human mind, then you’ll have much more to worry about than jobs.
Plus, at the base level there will always be the job of taking care of yourself and keeping yourself alive. If a robot can take that job, then great: that means you don’t have to work anymore.
Where does this experience show the most? Art, and how it communicates and resonates with you, because you can connect and relate. Even if robots learned to play jazz like Davis, Coltrane, and Parker combined, I wonder if there would ever be that human feeling and reflection of the player's experience and life.
Empathy may be another field. Taking care of elders or other people in need. Connecting to the experience of other, relating to it. Knowing, that your life is finite.
Maybe a way to solve this problem is for us to become more human. Finally, one may want to add.
Are AI and robots really cheaper than manual labor for a lot of things?
How close are we, really, to having AI replace even simple tasks like flipping burgers? What about mopping the floor? Taking out the trash?
Why do we work? What do we work for?
Except that you haven’t cited any skill that is currently (or in the near future) better performed by a machine than by a human (maybe sorting, but that’s a rather limited skill in itself).
I've made attempts to automate this, but I constantly find that the rework needed is too extreme and incomplete.
These things are the future of employment. This problem is non-trivial, as my hires don't understand it and struggle for a long time. It takes teaching them for them to 'get it'. Still, we have to make decisions.
Labor will be automated, this is good.
The future will be using minds, even uneducated minds will be put to use.
Would you let robots run a country? Or a state? Or a town?
> A little thought experiment: Let's say humans are capable of 100 skills. Skills can be anything from: driving, seeing, hearing, reading, walking, carrying, drawing etc.
Can a robot be more human than human?
Am I going to fall in love with a robot? Will the robot show perfect empathy? Will they become enlightened?
 Captured! By Robots! Doesn't count. https://www.youtube.com/watch?v=_zvU165DEYc
Well, if they even agreed on millions, that would still be an improvement over the current thinking, i.e., that if one type of job is eliminated, a new type of job will magically appear and we'll keep living in the same rainbows-and-unicorns economy we are in now.
We arguably even had the knowledge to build the neural nets to run them. Had more people been exposed to functional, declarative and data-driven programming back then, I think it would have seemed straightforward to wire up large networks as spreadsheets. Sadly, most of those older approaches have been replaced with opaque and buzzwordy approaches that bury concepts in terminology.
Arguably there was no market back then for AI. But I think what really happened is that AI arrived simultaneously with neoliberalism and supply-side economics which treat workers as commodities. Rather than letting everyone replace themselves with robots and keep their paychecks, they were instead forced to work longer and longer hours for less pay and compete with companies overseas that have few labor or environmental protections.
The problem isn't the technology, it's the political climate that can't see beyond jobs and so-called handouts. Which means that alternative societies are going to have to tackle problems all at once rather than piecemeal. There's going to be tremendous pressure to undercut human-oriented economies that focus on self-actualization over the mundane tasks of running the daily rat race (just like we've seen with organic, high efficiency, solar, wind and recycling being disincentivized).
The problem back then was computing power, not people not being exposed to these paradigms.
P.S. I am trying to help a newer dev atm, and I realize I always have only one question for them while basically doing their work: "What. the. hell. are. the. business. requirements?"
Suddenly, this makes me feel much more like a business analyst than a code monkey, though being a decent code monkey is definitely a pre-req.
As a software developer myself, I’d love to see a world where my work is no longer needed. Wouldn’t it be awesome to have worked so hard toward automation that you automate your own job?