Learning Dexterity (openai.com)
470 points by gdb 6 months ago | 135 comments


"We observed that for precision grasps, such as the Tip Pinch grasp, Dactyl uses the thumb and little finger. Humans tend to use the thumb and either the index or middle finger instead. However, the robot hand’s little finger is more flexible due to an extra degree of freedom, which may explain why Dactyl prefers it. This means that Dactyl can rediscover grasps found in humans, but adapt them to better fit the limitations and abilities of its own body."

The learning of "emergent" behavior, specifically when it improves on natural human motion, is one of the main reasons this type of work is so important. Similar to the way we imitate designs from nature (e.g. wings, suction cups), we can now accelerate development by observing how the bots perform the task in a variety of environments.

It's a really cool phenomenon, but it also means we have to make our simulations better. These sorts of RL algorithms are so good at finding "exploits" in the physics engine they run in that they sometimes "cheat" relative to what the researcher wanted.

I agree, but FWIW in the linked work they did randomize some of the engine parameters during training to avoid fitting too much to a specific set of assumptions. Certainly though more accurate simulations, as long as they were still fast to compute, would be very useful!

yeah, I think a big reason why they needed to do randomization in the first place is because of inaccuracies in the simulation compared to the real world. And the extra randomization required almost two orders of magnitude more training time!
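For anyone curious what that randomization looks like in practice, here's a minimal sketch; the parameter names and ranges below are made up for illustration, not the ones actually used in the paper:

```python
import random

# Hypothetical sketch of domain randomization: each simulated training
# episode samples its physics parameters around nominal values, so the
# learned policy can't overfit to one exact set of simulator assumptions.
# (Parameter names and ranges are invented for illustration.)
NOMINAL = {"friction": 1.0, "cube_mass": 0.078, "actuator_gain": 1.0}

def randomized_params(scale=0.3, rng=random):
    """Sample each parameter uniformly within +/- scale of its nominal value."""
    return {k: v * rng.uniform(1.0 - scale, 1.0 + scale)
            for k, v in NOMINAL.items()}

params = randomized_params()  # one fresh draw per training episode
```

The policy then has to work across every draw, which is exactly why the randomized training takes so much longer to converge.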

This is always true: by using the real universe as the simulation environment, any exploits found are applicable.

otoh, maybe finding "exploits" is all we ever do. I don't grip with my thumb and little finger because it hurts the back of my hand, but that isn't a problem for Dactyl.

Sure but we want to find exploits in the physical universe not in the approximation that we call a physics engine.

This means that Dactyl can rediscover grasps found in humans, but adapt them to better fit the limitations and abilities of its own body.

Then why not evolve commensurate dexterity with mechanically simpler manipulators than a human hand? I'd bet that a robot with 4 arms with 4 different specialist manipulators and a few specialist tools (like a generalized screw-threading tool) could eventually be more efficient than a human with 2 human arms. You can even see this happening in real life, powered by human brains. The kind of dexterity people can get out of a crude instrument like a backhoe is very impressive.

The simpler device will always have an economic advantage.

The simpler device might or might not have an economic advantage. There are a lot of variables.

A backhoe is actually very complex - the simple device is a stick. A thousand humans with a stick can do as much work in a week as a single human with a backhoe can do in a day. It only takes a few weeks to pay off the more complex backhoe. Of course your humans can use their bare hands for a further reduction in productivity.

A backhoe is less complex than a horizontal drill, and the drill is less versatile overall. However the drill can go under a structure without harming it which can be a big economic win. If you are just doing a shallow pipe through an open field they are fairly competitive in price though.

The simple device has an economic advantage when the task is not repeated often enough to pay off the costs of the more complex machine (this includes maintenance costs, which can exceed the initial investment).

A backhoe is actually very complex

But it's quite a bit less complex than a scaled up anatomic analogue human arm. It's just complex enough to be a general-purpose excavation and construction tool. My point is that the evolved complexity of a human hand is too complex for the general purpose manipulation task. Evolution isn't perfect. It just did a very good job given it had a bunch of fish bones to work with.

Please note I said "simpler" device, not "simplest possible" device. US artillery in WWII had commensurate functionality to German artillery, but did it with nearly half of the number of moving parts. US tanks were individually less capable than their German counterparts, but were more efficient to manufacture and easier to repair. That is what is meant by the economic advantage of the "simpler." Not using a trebuchet instead of a cannon.

> My point is that the evolved complexity of a human hand is too complex for the general purpose manipulation task.

Maybe if you're constraining 'general purpose manipulation' to mean 'rearranging blocks'. That complexity (and especially the extra brain power to run it) costs energy. If it wasn't useful, we wouldn't have it.

Maybe if you're constraining 'general purpose manipulation' to mean 'rearranging blocks'.

I'm thinking more along the lines of "can build a PC" or "repair a machine." I don't think we'll need to lap flint spearheads in 2018 and beyond. I don't think we need to have repair bots play the fiddle so much on the surface of Mars. I do think that the manipulation tasks useful to a 21st century civilization are "within the grasp" of simpler manipulators than the human hand. The human hand is freakishly complex. If we can get 80% of the functionality for 25% the cost, that will be a win. (For the most difficult 20%, there are still humans.)

That complexity (and especially the extra brain power to run it) costs energy. If it wasn't useful, we wouldn't have it.

Of course it's useful! No one disputes that! It doesn't matter. You're assuming that 1) all of that complexity and capability is essential in current and future contexts, and 2) that evolution achieved it with maximal possible efficiency. What we understand about evolution tells us the first isn't necessarily true, and the second probably isn't!

Yes, there will be a place for human hands. But some huge fraction of the current uses of human hands will probably be replaced by very capable but much simpler manipulators -- simply due to economics!

Interestingly on a related note I just watched a documentary called "The Workers Cup" about laborers building the venues for the Qatar world cup and there were many shots of the laborers manually shoveling rocks and gravel in places where you would typically see a machine doing it here in North America. Apparently, for them, it's cheaper to use cheap labor to hand dig rocks and dirt than to use a machine to do it.

Yes, if you can treat workers like machines (i.e. no “human rights”) they are actually more efficient per calorie/dollar than a rotary engine for many tasks.

This is why slavery didn’t just die upon invention of the engine.

> The simpler device will always have an economic advantage.

The recent history of technology suggests otherwise. General-purpose devices displace simpler single-purpose devices. Consider all the specialized, but relatively simple, circuits displaced by the desktop computer.

General-purpose devices displace simpler single-purpose devices.

Yes, but simpler general-purpose devices displace more complex general-purpose devices, so long as they're also suitable to the tasks. A robot with a couple of graspers, a picker, a suction manipulator, plus some graspable special-purpose tools could be as versatile as a humanoid robot with anatomically inspired human-like hands, but be several times simpler and several times more reliable.

Relevant 1 minute video: https://vimeo.com/107569286

Does anyone know a good graduate program/route for this kind of work? My undergrad was CS with some experience in (dumb) robotics and mechanical design but no ML. I am interested in applying ML/CV to physical systems like this, however I am a bit wary of going back to a CS program. I have seen some Mechanical programs with an emphasis on control that let you 'build your own degree'. If I could take a mix of ML/CV, control systems, and kinematics I would be happy. Just looking for some input from people in this field.

(I work at OpenAI.)

Worth noting: it's a well-supported route to join OpenAI without any special graduate training. Many of our teams (including our robotics team!) hire experienced software engineers, teaching them whatever ML they need to know, or our Fellows program lets people do a more formal curriculum (https://blog.openai.com/openai-fellows/). We also have a number of software engineers who focus on what looks like traditional software engineering: see for example https://www.youtube.com/watch?v=UdIPveR__jw.

See our open positions here: http://openai.com/jobs!

I am a 2nd year Phd at Stanford and you could definitely do such work here! Also at CMU, Georgia Tech, Berkeley, U Washington, and others. You can enter PhD via EE/MechE or CS - once you are focused on research it does not matter much. The FAIR/Google Residency programs may also be of interest.

Honest question: In the video, it looks like it works, but performs worse than about 90% of humans at the task of rotating a cube.

On the other hand, AlphaGo or even a rudimentary chess program does better than 99.99% of all humans.

So is it fair to say that deep learning is fundamentally missing something that humans do? Or that chess and Go are "easy" problems in some sense?

(It seems like with "unlimited" training hours it could eventually be better than a human? Or is that a hardware issue?)

In 2015, it was commonly thought that it would still be decades before a computer could beat a top human player at Go. Now, you are calling it “easy,” because it’s been done.

The first chess program was written by Alan Turing on paper between 1948 and 1950. He didn’t have a computer to run it, but he could still play a game with it by stepping through the algorithm by hand. In 1997, Deep Blue beat Kasparov, using traditional algorithms and not deep learning.

Clearly there are differences between these problems and dexterity. Chess, for example, can be described relatively simply using logic, and there is no dynamic or physical element; a rudimentary player can be written using pencil and paper; a winning player just needs enough compute power, apparently.

More importantly, there is a technology curve. You are asking about the ultimate limits of a technique moments after its first success puts it at the low end of the spectrum of human ability. Give it a decade or two.

I am just shocked the video was real-time and not sped up like so many of these videos are (eg watch a robot arm fold a shirt in thirty seconds when you play it at 5x speed).

>> In 2015, it was commonly thought that it would still be decades before a computer could beat a top human player at Go

This needs a citation and it needs it badly.

It was widely reported in the popular press, to the dismay of many scientists working in game-playing AI, who had very different opinions about how close or far beating a professional human at Go was at the time of AlphaGo. The majority of them in fact did not make predictions; they just pointed out that Go was the last of the traditional board games to remain unconquered by AI. Not that it would take X years to get there. Most AI researchers are loath to make such predictions, knowing well that they tend to be very inaccurate (in either direction).

All I know is what the articles and commenters were saying then, as an interesting contrast to this comment now. Every article on AlphaGo described a general state of shock at achieving something that (even if at a purely psychological level) seemed at least 10 years away.


> Just a couple of years ago, in fact, most Go players and game programmers believed the game was so complex that it would take several decades before computers might reach the standard of a human expert player.

>> All I know is what the articles and commenters were saying then, as an interesting contrast to this comment now.

I understand, but in such cases (when an opinion of experts is summarised in the popular press, rather than by experts themselves) it may be a good idea to dig a bit further before repeating what may be a misunderstanding on the part of reporters.

For example, my experience is very different from what you report. In an AI course during my data science Master's, in the context of a discussion on game-playing AI, the tutor pointed to Go as the only traditional board game not yet conquered by adversarial AI, without offering any predictions or comments about its hardness, other than to say that the difficulty of AI systems with Go is sometimes explained by saying that "intuition" is needed to play well. And I generally don't remember being surprised when I first heard of the AlphaGo result (I have some background in adversarial AI, though I'm not an expert); in fact I remember thinking that it was bound to happen eventually, one way or another.

A similar discussion can be found in AI: A Modern Approach (3rd ed.) in the "Bibliographical and Historical Notes" section of chapter 5 (Adversarial Search), where recent (at the time) successes are noted, but again no prediction about the timeframe of beating a human master is attempted, and no explanation of the hardness of the game is given other than its great branching factor. In fact, the relevant paragraph notes that "Up to 1997 there were no competent Go programs. Now the best programs play most [sic] of their moves at the master level; the only problem is that over the course of a game they usually make at least one serious blunder that allows a strong opponent to win" - a summary that, given the year is 2010, in my opinion strongly contradicts the assumption that most experts considered Go to be out of reach of an AI player. It looks like in 2010 experts understood then-current programs to be quite strong players already.

In general, I would be very surprised to find many actual experts (e.g. authors of Go playing systems) predicting that beating Go would take "at least 10 years", let alone "several decades" (!). Like I say, most AI researchers these days are very conservative with their predictions, precisely because they (and others) have been burned in the past. Stressing "most".

>> So is it fair to say that deep learning is fundamentally missing something that humans do?

Yes, it's missing the ability to generalise from its training examples to unseen data and to transfer acquired knowledge between tasks.

Like you say, the article describes an experiment where a robot hand learned to manipulate a cube. A human child that had learned to manipulate a cube that well would also be able to manipulate a ball, a pyramid, a disk and, really, any other physical object of any shape or dimensions (respecting the limits of its own size).

By contrast, a robot that has learned to manipulate cubes via deep learning, can only manipulate cubes and will never be able to manipulate anything but cubes, unless it's trained to manipulate something else, at which point it will forget how to manipulate cubes.

That's the fundamental ability that deep learning is missing, that humans have.

(Before beginning, I want to note that these are solely my opinions, and therefore are probably wrong.)

In the space of possible problems solvable by computers, there are those which are "easy" and those which are "hard".

Arbitrarily defined, an "easy" problem is any problem that can be solved by throwing more resources at it, whether it be more data or more compute. A "hard" problem, on the other hand, is the opposite: solvable only by a major intellectual breakthrough; the benefit of solving a hard problem is that it allows us to do "more" with "less".

Now, the question is: which type of problem is being looked at by today's AI practitioners? I'd argue it is the former. Chess, Go, Dota 2 -- these are all "easy" problems. Why? Because it is easy to find or generate more data, to use more CPUs and GPUs, and to get better results.

Hell, I might even add self-driving cars to that list since they, along with neural networks, existed since the 1980s [1]. The only difference, it seems, is more compute.

All in all, I think these recent achievements qualify only as engineering achievements -- not as theoretical or scientific breakthroughs. One way to put it: have we, not the computers and machines, learned something fundamentally different?

Maybe another approach to current ML / AI is needed? I remember a couple weeks ago there was a post on HN about Judea Pearl advocating causality as an alternative [2]. Intuitively it makes sense: babies don't just perform glorified pattern matching; they are able to discern cause and effect. Perhaps that is what today's AI practitioners are missing.

[1] https://en.wikipedia.org/wiki/History_of_autonomous_cars#198...

[2] https://news.ycombinator.com/item?id=17108179

Simulating the manipulation of a cube, and physics in general, is more complex than simulating a board game.

I can't find it right now, but there was a nice quote on Wikipedia about AI where they were saying that AI optimism stemmed from underestimating ordinary tasks. The AI researchers, being all from a STEM background, assumed that the hard problems were solving chess, go or math theorems, when in reality threading a needle or brushing your teeth requires a much, much more complicated model.

Moravec's paradox.

I would say that the number of permutations (while many) in chess and go are finite, while this has to adapt to anything "at hand."

addendum: there are most definitely a greater number of environmental factors (x, y, z axes) involved in solving such problems.

> Learning to rotate an object in simulation without randomizations requires about 3 years of simulated experience

It's interesting to me that this is about the same amount of time it takes humans to develop similar levels of motor control. I don't know enough about AI or neuroscience to say whether it's likely to be a coincidence or not, though.

It's not really meaningful.

Humans learn with entirely different stimuli and experience (can you imagine subjecting a person to learning this object manipulation task in the same way as this robot?).

In addition, AlphaGo Zero demonstrated an order of magnitude better training efficiency than AlphaGo due to algorithmic differences alone. Humans have converged to roughly a 3-year training time for babies learning these skills, but I doubt that these learning algorithms are as efficient as they will ever be.

Interesting observation. I suspect it's probably coincidence though. Other tasks which humans are able to learn (such as [playing Dota][1]) have taken OpenAI much longer to master. OpenAI Five spends 180 years of training per day, per hero in order to learn Dota, and it still isn't at the level of professional players (though that may change soon).

Though I suppose you could argue that Dota benefits more from high-level reasoning, whereas basic motor control is a more intuitive skill. (And therefore better suited for this type of AI.)

[1]: https://blog.openai.com/openai-five/

Also, the time referenced in the article is presumably three years of non-stop training – given that an infant has a calendar packed with other things like sleeping, crying, and other non-motor activities, total human time logged on learning fine motor control is probably half that, if not less.

Basically coincidence; the models here are far simpler and do not at all mimic the human nervous system.

I don't doubt that it is coincidence. But it's interesting that vertebrate brains seem to use a specialized structure (the cerebellum) for motor coordination, and here we have a far simpler artificial system without any evolutionary priors.

Very cool. There's also a Times article about Dactyl: https://www.nytimes.com/interactive/2018/07/30/technology/ro...

> Rapid used 6144 CPU cores and 8 GPUs to train our policy, collecting about one hundred years of experience in 50 hours.

That seemed an order of magnitude higher than I expected. Is training usually this computationally expensive?

Disclosure: I work on Google Cloud.

Is it the cores that throw you off? (Those are used for the simulation, not the training.) Second, I believe those were preemptible cores, so that's $60/hr for the cores and then $20/hr for V100s (which is what I think they used). $80/hr isn't bad considering how much a (small!) team of researchers costs.
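As a back-of-the-envelope check, the parent's figures (which are the commenter's assumed prices, not official rates) work out like this:

```python
# Rough cost check of the parent's numbers (assumed prices, not
# official Google Cloud rates): 6144 preemptible vCPUs for the
# simulation plus 8 V100 GPUs for training, run for 50 hours.
cpu_cost_per_hr = 60   # $/hr for 6144 preemptible cores (assumed)
gpu_cost_per_hr = 20   # $/hr for 8 V100s (assumed)
total_per_hr = cpu_cost_per_hr + gpu_cost_per_hr
run_cost = total_per_hr * 50  # ~100 years of experience in 50 hours
print(total_per_hr, run_cost)  # 80 4000
```

So the whole training run would land around $4,000 at those rates, which is indeed tiny next to researcher salaries.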

The notable thing about this is how little computation it needed. We have a long way to go, but this is a big step in the right direction.

haha, I just read it and thought that's an order of magnitude less than I expected. Pretty often it is. A lot of papers from high-profile institutions have a lot of computing power available.

At first it seems like a lot and really expensive, but think of it in man-hours and it quickly diminishes.

Not to mention that evolution has had millions of years of optimizing this stuff.

Not always, but a lot of machine learning techniques now are effectively just brute forcing, and highly expensive and wasteful to compute. Machine learning is getting close, but I don't believe we will get there with our current methods.

Well if you look at the plot in the "Learning progress" section, you'll see that they did require almost two orders of magnitude more training time due to the randomizations they were adding to the simulation. Without these the policy isn't very robust but also takes a lot less time to train.

Yeah, but it doesn't matter; one model to rule them all.


Take a look at position 44, where it seems to get stuck, with no move to make forward progress, and two fingers straight out. Did it lack image recognition to tell it what block rotation was needed?

It doesn't seem to work by discovering strategies for rotating the block one face at a time, then combining those. It's solving the problem as a whole. That has both good and bad implications.

>> We’ve trained a human-like robot hand to manipulate physical objects with unprecedented dexterity.

To be precise, the "physical objects" appear to invariably be cubes of the same dimensions. Not arbitrary "physical objects". Which is probably the best that can be done by training only in a simulated environment.

I am continually impressed by OpenAI whenever we think that something is too difficult for our current understanding of AI. With their Dota AI and this, they have shown that more can be done with a lot less than previously thought.

Not to be too negative, it's cool work, but I'd argue that unlike the OpenAI Dota result it is not so surprising this was doable with the techniques they used; see e.g. this paper from Google http://www.roboticsproceedings.org/rss14/p10.pdf and this one from Stanford/DeepMind http://www.roboticsproceedings.org/rss14/p09.pdf . Yes, there is the additional aspect of an object in hand, but fundamentally the techniques are the same.

Of course these works are cited in the related-work section of the paper, as they should be; perhaps the OpenAI blog should also provide more context on where this stands w.r.t. prior work, as many non-researchers may read this and it may be quite misleading...

OpenAI has not exactly had the best reputation with their press releases.

Holy cow, the robots are definitely coming. We really are at the ground floor of a technology that is going to change humanity, I am certain of that. Changes greater than any changes we've seen before.

Well, if you're thinking sentient, AI beings, then no-- they are still a long way off. Unless we can give any meaning behind why a robot should do something, for example, have and use this kind of dexterity, it's all mechanical tricks. Cool tricks, nonetheless.

I agree with the person you replied to but I'm not thinking about intelligent machines. I'm just thinking about the automation of everything. Look how close we are to self driving cars without needing a sentient robot behind the wheel. Mechanical tricks are going to eliminate the jobs of a lot of people.

> Look how close we are to self driving cars without needing a sentient robot behind the wheel. Mechanical tricks are going to eliminate the jobs of a lot of people.

Let’s say we’re close to self-driving cars, i.e. it’ll happen in 10 years or so. How much will it cost? How much will maintenance cost? How many years will be needed until everybody owns a self-driving car? Unless more than a handful of people have that kind of car, you won’t kill a lot of jobs.

With this kind of dexterity, I would say Las Vegas croupiers should be getting a little worried and thinking about re-skilling.

David Copperfield should be OK a while longer though

indeed. the world 200 years from now will be as unrecognizably different as today from 1818.

That seems unlikely. One way or another, I'd expect 2218 to be much, much more different from today than is 1818.

The history of innovation compounds. One cannot improve on what hasn't been invented yet, but once it exists, improvement can happen at any time, even if the unimproved original has since been rendered obsolete.

If you think of it in terms of "200 years ago" versus "200 years from now" you might think there would be equal magnitudes of progress. But now is 200 years since 1818, and 2218 is 400 years from 1818 AND 200 years from now. The future has more to build on, and taller shoulders to stand on.

Consider the midpoint of history: the moment such that as much changed from the idea "maybe we could just plant some seeds between the Tigris and Euphrates, and maybe not wander around so much" up to that moment as has changed from that moment until the present. That midpoint may be less than 50 years ago, and it gets closer every year. Thousands of years on one side of the balance, and tens of years on the other. With that in mind, today may be as different from 1818 as 2038 will be from today. 2218 would be just inconceivable.


I guess someone has to be the negative one: I can't help feeling its route to the correct face looks entirely accidental (and I don't mean that in a good way)... I'm sure it has "learned" some methods, but they don't look that efficient, reliable, purposeful, or controlled. In a more noisy and dynamic environment I'd expect them to fail. Granted, it's possible this is due more to the training conditions than to an inherent limitation of the underlying model.

It looks that way because it's moving rapidly from one face configuration to another. But there's no way that's happening at random. I would guess that even just holding the cube steady in a dynamic grip is quite difficult.

Agreed, it looks really uncoordinated. A lot of reinforcement learning algorithms have this problem, in my experience.

I agree it looks sloppy, but that doesn’t mean it isn’t reliable. All it has to do in any given moment is make progress towards the goal of having the cube in the proper orientation, on average. It may be that it can do that very reliably even with noisy inputs and outputs.

Maybe if they randomized to n<=20 n-gons. I'd love to see Dactyl tackle a dodecahedron.

Link to paper ( why no Arxiv :/ ): https://d4mucfpksywv.cloudfront.net/research-covers/learning...

TLDR (quick-ish skim, feel free to correct): they train a deep neural network to control a robot hand, choosing desired joint state changes (binned into 11 discrete values, e.g. rotate this joint by 10 degrees) for a 20-joint hand, given low-level (non-visual: the current and desired 3D orientation of the object plus the exact numeric state of the joints) input of the state of a particular object and the hand. They also train a network to extract the 3D pose of a given object from RGB input. All this training is done in simulation with a ton of computation, and they use a technique called domain randomization (changing colors, textures, friction coefficients, and so on) to make these learned models pretty much work in the real world despite being trained only in simulation.
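A rough sketch of the action parameterization described above; the shapes follow the paper's 20 joints x 11 bins, but the logits and bin values here are placeholders, not the trained policy:

```python
import numpy as np

# Sketch of the action space: 20 joints, each with 11 discrete
# relative-position bins. A trained policy would emit one categorical
# distribution per joint; random logits stand in for it here.
N_JOINTS, N_BINS = 20, 11
rng = np.random.default_rng(0)
logits = rng.normal(size=(N_JOINTS, N_BINS))  # placeholder policy output
bins = np.linspace(-1.0, 1.0, N_BINS)         # assumed relative joint targets
actions = bins[logits.argmax(axis=1)]         # greedy: one target per joint
assert actions.shape == (N_JOINTS,)
```

Discretizing each joint's action like this turns a continuous control problem into 20 small classification problems, which is one reason standard RL machinery applies so directly.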

It's pretty cool work, but if I may put my reviewer hat on, not that interesting in terms of new ideas. Still, it's cool that OpenAI is continuing to demonstrate what can be achieved today with established RL techniques and nice distributed compute.

It's pretty amusing/amazing how well domain randomization works, but it seems like they train the pose detector in the real world and not in simulation:

"To transfer to the real world, we predict the object pose from 3 real camera feeds with the CNN, measure the robot fingertip locations using a 3D motion capture system, and give both of these to the control policy to produce an action for the robot."

> a human-like robot hand

But why?? Why should robots' hands resemble human hands? They could have any number of fingers, or tentacles, or magnets, why should they be like human hands??

It seems "AI" really means "as close as possible to human behavior", even if we're not really that clever in said behavior.

Also, human intelligence being at least debatable, it's not obvious that the obsessive imitation of humans is the best way to attain "AI".

Because most objects are made for human hands. So it is better to have a robot with human-like hands (or body) than to change the shape of all the things we have already created. So future robots can use our human-made stuff.

We have built this world for human hands so that has become the best shape overall. It does depend on the situation but I'm guessing for many, an improved human shape is best.

Yep, backwards compatibility, not just hands but bodies overall, if we ever make a robot that can drive existing cars it will pretty much resemble a human body, and our cities were created for human bodies/interactions (walking up stairs, etc)

Human hands are incredible manipulators. Really, compared to what’s in robotics today they’re fabulous. There’s some other promising developments in non human hand style manipulators, but human hands are still really good.

It’s really good practice for your AI system to try it on a human hand. If you can make it work with a human hand you can probably make it work for other manipulators, but it’s a great place to start!

The hand itself is an incredible piece of machinery.

Has the pricing come down on these robotic hands? Anybody have ballpark cost for the Shadow Dexterous Hand - 100k, 300k?

Pricing on robotic hands: http://www.androidworld.com/prod76.htm

This is a great example of why AI innovation is not moving at the pace we are told to believe. This is using the same basic algorithms we've known about for decades, just more compute and differently formulated problems. We need a paradigm shift!

Any comments on why it seems to basically not use the middle finger at all?

I find it strangely correlated with the way the camera is set up. If it uses the middle finger, the camera might not see the cube's face correctly. You can see it using it as a last resort. But I don't see why this would matter in the simulation phase.

Wouldn't it just be a product of the model not finding that finger useful in the simulations?

It is polite. ;P

That's very impressive. Robotic grasping is getting pretty good[1] but in-hand manipulation is a whole 'nother kettle of fish and this is really exciting.

[1] He said, tooting his employer's horn.

Nice work. I can vividly see a future of robots ruling homo sapiens.

Dexterity is their secret weapon!

They should set up an accelerometer and gyroscope in each fingertip instead of pressure sensors. Could then maybe control without a camera.

They already have a good estimate of the finger tip pose from the angular measurements of all the internal degrees of freedom (presumably from angular encoders?). And I'm not sure any IMU small enough to fit into the fingertip will be accurate enough to provide really useful additional pose information.
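As a toy illustration of that point, fingertip position follows from the joint angles alone via forward kinematics; the 2-link planar finger and link lengths below are hypothetical, not the Shadow hand's actual geometry:

```python
import math

# Toy illustration: with angular encoders on every joint, the fingertip
# position follows from forward kinematics alone, no IMU required.
# Hypothetical 2-link planar finger; link lengths (metres) are made up.
def fingertip_xy(theta1, theta2, l1=0.045, l2=0.025):
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

x, y = fingertip_xy(0.0, 0.0)  # fully extended: x == l1 + l2, y == 0
```

The real hand chains more joints in 3D, but the principle is the same: encoder readings already pin down the fingertip pose.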

I'd like to see it roll a coin on its knuckles. Or maybe some card tricks.

I immediately thought of the robotic arm from Terminator 2. It's pretty cool.

Terrifyingly amazing.

This is a perfect example of how AI is taking over the world by storm. I don't know how people don't realize that there will be no jobs left for billions of people. Yes, billions. Not Millions.

I don't quite get the "New Jobs will be created" fallacy.

Let me explain: What is a job? An abstract way of looking at it: a job is something that requires a set of skills to accomplish a task. What most politicians don't get: researchers like OpenAI teach machines SKILLS, not jobs.

A little thought experiment: Let's say humans are capable of 100 skills. Skills can be anything from: driving, seeing, hearing, reading, walking, carrying, drawing etc.

Usually, a low-paying job requires little to no training. For example: someone in a warehouse who picks the stuff you have ordered. The skills required are walking, picking, and using a device. A high-paying job usually requires more skills and/or experience.

We train machines to see better, hear better, sort faster etc. Any new job will require some sort of skills out of the set of skills that can be trained. But the moment you create this job, it will be automated, because a machine can do it better and faster.

We need to address this now, otherwise I don't see a bright future for the generations to come.

"This is a perfect example of how AI is taking over the world by storm."

Because they taught a robot hand to rotate a cube well... ?

Take it from an AI researcher: it's one thing to make a demo of a technique solving a narrow, very simple problem and quite another to use that technique to solve a real-world need.

"Any new job will require some sort of skills out of the set of skills that can be trained" - even if we do reach the level of AI where this would actually hold, we are NOWHERE near that today, and this result certainly does not demonstrate that we are.

Dextrous manipulation is a big deal though. Typical stuff we've seen like generating synthetic faces using GANs is not commercially useful, but eye-hand coordination is 90% of commercial activity worldwide.

That is true (and I am well aware of this, as I do research on robotic grasping: https://sites.google.com/view/task-oriented-grasp ). But this would be very hard to generalize to more complex manipulation where the object is not already in-hand, etc. It's yet another piece of the puzzle, but by itself it does not get us that much closer to general-purpose training of robust, complex manipulation.

Well, I can see that you are an expert in this field, but I still have to say the method they demonstrate, domain randomization, is a big deal, because it can then be applied to anything else, like picking up stuff. Domain randomization is a year-old thing now, but it's a big deal - if you can build a "crappy" simulator for a task, you will be able to do anything. I feel like domain randomization isn't being promoted enough in the media.
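For readers unfamiliar with the term, the idea can be sketched in a few lines: every training episode samples fresh physics parameters from broad ranges, so the policy is forced to work across all of them rather than exploiting one simulator's quirks. This is a minimal illustration; the parameter names and ranges are made up, not OpenAI's.

```python
import random

# Hypothetical physics parameters and ranges; a real setup randomizes
# many more (masses, friction, sensor noise, latencies, visuals, ...).
PARAM_RANGES = {
    "object_mass": (0.05, 0.50),  # kg
    "friction":    (0.50, 1.50),  # scale on nominal friction
    "motor_gain":  (0.80, 1.20),  # scale on actuator strength
}

def sample_sim_params(rng=random):
    """Draw a fresh set of simulator parameters for one episode."""
    return {name: rng.uniform(lo, hi)
            for name, (lo, hi) in PARAM_RANGES.items()}

def train(num_episodes):
    """Each episode runs in a differently-randomized simulator, so the
    policy cannot overfit one (inaccurate) set of physics assumptions."""
    for _ in range(num_episodes):
        params = sample_sim_params()
        # env = make_simulator(**params)  # hypothetical sim constructor
        # rollout_and_update(policy, env) # policy never sees params
        _ = params  # placeholder for the commented-out training step
```

The policy never observes the sampled parameters directly; it has to infer (or be robust to) them from its sensory stream, which is exactly what transfers to the unknown parameters of the real world.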

Sure, but they demonstrate it for a very simple use case: a single object with no background, etc. Domain randomization has yet to be demonstrated to work on more complex environments/tasks.

One of the last things not automated at Amazon fulfillment centers is identifying and picking the ordered item off the shelf that the robots bring over.

Sure, and there has already been a ton of work on grasping (see e.g. Dex-Net 2.0 and "Closing the Loop for Robotic Grasping"). This stuff is already in the process of being commercialized, and this research does not advance it much.

This is a response to you and all the various siblings, nieces and nephews.

AI does not remove scarcity.

Comparative advantage is a real thing. It is well studied and well understood.

The common argument seems to conflate AI, automation, robotics, and similar things with the removal of scarcity.

In the presence of scarcity, comparative advantage tells us that we're unlikely to see the vast majority of the world's population with nothing to do.

Comparative advantage is exactly why so many people will lose their jobs.

If a small number of people, with the aid of machines, can do the equivalent work of thousands of workers, how will the free market support the higher cost of human labour in comparison to the robots?

It's not that people will have nothing to do. But for a huge number of people, there will be no way to get paid as much as it costs them to live.

Comparative advantage is an economic theory that says there is still gain from trade between asymmetric producers.

It is possible to have an absolute advantage in production of all goods, but still gain from trading with your inferior partner.

This is based on another of the foundational concepts in economics, that of opportunity cost.

Economics is debatably a science, but it is certainly a mature field of study. Much like other mature fields of study and technical fields, it contains jargon. Economics suffers exceptionally from the challenge of its jargon sounding like vernacular language.

You are not using comparative advantage correctly. The wikipedia article is relatively short and clear.[0]

That being said, your argument seems to be "efficiencies in production will drive the prices of goods down" and "humans will earn less money." There is a pretty big jump to "humans will not earn enough to pay the prices of the (now cheaper) goods they require to live."

Let's engage on that argument. Is it a fair (though obviously simplified) statement of your position? What leads you to believe that wages will fall to a greater extent than prices?

[0] https://en.wikipedia.org/wiki/Comparative_advantage
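To save a click, here is a tiny worked example (my numbers, not from the article) of the core claim: producer A is absolutely better at both goods, yet both sides still gain from trade because their opportunity costs differ.

```python
# Output per hour of labor for two producers and two goods.
output = {
    "A": {"wine": 6, "cloth": 4},  # A is better at both: absolute advantage
    "B": {"wine": 1, "cloth": 2},
}

def opportunity_cost(producer, good, other_good):
    """Units of other_good given up to make one unit of good."""
    return output[producer][other_good] / output[producer][good]

# A gives up 4/6 cloth per wine; B gives up 2 cloth per wine.
assert opportunity_cost("A", "wine", "cloth") < opportunity_cost("B", "wine", "cloth")
# B gives up 0.5 wine per cloth; A gives up 1.5 wine per cloth.
assert opportunity_cost("B", "cloth", "wine") < opportunity_cost("A", "cloth", "wine")
# So A should specialize in wine, B in cloth, and trade benefits both.
```

The point of the theory is exactly the counterintuitive bit: even a producer who is worse at everything still has the lowest opportunity cost at something, so there is still gainful work to trade.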

Late response here. Yeah I clearly missed the reference to the economics term, but that makes sense.

To respond on the main debate, my argument is not that "efficiencies in production will drive the prices of goods down" but instead that "efficiencies in production will drive the production costs of goods down".

Just because it becomes cheaper to make a product, doesn't mean that the consumer price will drop the same amount. Especially if that cheaper production can only be utilized by a few companies with automation skills.

The economic benefits of automation may trickle down to consumers, but will largely be taken by the few large companies capable of such automation.

> I don't quite get the "New Jobs will be created" fallacy.

Nobody 20 years ago would have imagined the job of mobile app developer. The mobile phone replaced many devices and probably many jobs in manufacturing, but also created new domains that we couldn't have imagined and empowered people in the developing world (and elsewhere).

> We need to address this now, otherwise i don't see a bright future for the generations to come.

You look at things the wrong way. Humans have always had a job which can't be taken away by corporations - the job of caring for oneself and one's needs. If we don't have corporate jobs, then we can become self reliant at individual, community and country level and find ways to support ourselves. We can build houses, teach children, provide medical care and many other things with jobless people for jobless people. We can even use automation for our own benefit, like we do with open source software.

What you fail to address is that the success of software is that it does so much more with so many fewer people. A mobile app developer might be a new job, but it actually replaces (indirectly) the jobs of dozens to hundreds of people. That's why it's a success in the first place.

My job as a developer is, in a real sense, to eliminate as much work from people as possible. If I wasn't doing that, there'd be no point to my job. We don't just make software for the hell of it.

No, what you're failing to understand is that the cell phone has created new opportunities even for the poorest people of the world. It opens up commerce, payments, short time borrowing, education, hiring, finding a spouse and many other things that lead to a successful life. On the whole it was a boon for humanity - in other words, it was worse for all of us, rich or poor, before it existed.

The world may very well be a better place, and frankly, saving people from work is actually a good thing. But I disagree with the idea that the computer/mobile/AI revolution is adding more jobs than it takes away. There are not more app developers than there were factory laborers.

Most people would be surprised that US manufacturing is at the highest level in history. And they are surprised because manufacturing employment is at the lowest levels.

Is it good that Americans are not doing highly physically demanding manufacturing jobs? Sure. But what are the long term consequences of productivity without people? Every industry is more productive with drastically fewer people and they're improving on that equation every day. Not just factories but also white collar office work too. When self-driving vehicles become the norm, a massive amount of people will no longer have jobs.

Jobless people can't build the factory to manufacture the components to build the homes, though. Building a factory requires capital which (currently) cannot be acquired in process you're outlining.

We can also build homes with simpler construction materials and lots of hard work. We can make our own bricks and panels of wood.

Think of it this way: what would people who have lots of free time (no job) and lots of unfulfilled needs do? They would work to be self reliant if that would mean having a better life. There would be plenty of manpower. All we need is land and raw materials.

This "new technology will create new jobs" is the biggest lie that economics ever told. The crisis in the Midwest in this country and the subsequent election of Donald Trump are a direct result of the displacement of American workers by technology and offshoring.

I am firmly in the camp that we have a glut of capability in terms of what an average individual can accomplish using the low cost, power efficiency, and enormous functionality of modern devices like microcontrollers, motors, and sensors.

But we also have an extreme paucity of people coming up with those ideas for what to do with our staggering collective potential; the skill to produce creative solutions to problems which may not have been identified yet is one that I believe we will have some difficulty teaching to machines in the short or medium term.

What you see as an intractable problem, I see as a deficit in training, education, and investment in workers. These days, people are fully capable of learning about any topic which sparks their interest; there are tutorials and how-to's from basic to advanced levels in both text and multimedia formats on everything from genetic engineering to programming to circuit design to working with materials like wood/leather/metal/plastic...the list goes on.

People either don't seem to realize that they can retrain themselves, or they don't have the resources and especially time to do so. We can help with both, but there doesn't seem to be much appetite to cough up any money for educating adults.

The solution is pretty easy though: Widespread enough ownership of robots. Making sure there are good open source versions of the robots should allow everyone to benefit from robot labor. There will still be scarcity when it comes to land, energy and raw materials. We will have to tackle inequality of ownership of these things, at least as long as we are earthbound.

> The solution is pretty easy though: Widespread enough ownership of robots.

"Get everybody rich" is a solution to a lot of problems, but it’s not easy.

> Making sure there are good open source versions of the robots should allow everyone to benefit from robot labor.

Open-sourcing is a great thing, but does it really reduce the cost of building a robot?

Once robots are doing all the labor, and you have open source plans and code, the cost is only that of raw materials and energy.

> Once robots are doing all the labor, and you have open source plans and code, the cost is only that of raw materials and energy.

You still have to pay the people who write those open-source plans and code, plus the ones who build these robots (or the ones who build the robots that build the robots), and the ones who sell them. Also, maintenance and update costs.

Not once robots are doing all the labor. You just ask your robot to do it, or maybe your friends' robot if you don't have your first robot yet.

With the current state of AI, the situation you’re describing won’t exist for at least a century, if it ever exists. You can’t call such a thing an "easy" solution.

Progress in robotics for the last several decades has been steady, but plodding and incremental. This is just one small step. I think it's amazing, don't get me wrong, but there is still a Mount Everest of progress to be made before we get to anything like a general purpose robot that can, say, do the housework.

> Let me explain: What is a job? An abstract way of looking at it: a job is something that requires a set of skills to accomplish a task.

Given that definition, AI is no threat. Of course your post implies the financial impact of AI upon people; in this context I think it's important to correctly frame a job as "a way for individuals to use their skills to extract worth from an economy, for the purpose of trading that worth for other goods in the _same_ economy".

The emphasis is key: jobs are not independent of the value of products sold. E.g., taking the "AI will destroy all jobs" narrative to be true and following it to its extreme: a world where no one (or very few) has a job. In this world no one has any worth to trade for the value being produced by AI; it's a complete catch-22. The only realistic way this could be possible is by living in a Star Trek universe devoid of "money". But this is not binary. In between there is not some post-apocalyptic world where humans go live in the desert while AI gradually destroys everyone's source of income; there is an in-between:

AI is not as special as everyone thinks. The way it's being used is very applied; it's just an extension of automation. Just like all automation before it, the worth being created by AI is cheaper to create, and therefore _can_ be cheaper to buy. Given enough gradual economic impact it also _must_ be cheaper to buy, as people's buying power decreases and the product has no value if it cannot be sold. The net result over time is a reduced cost of living.

Note that I've refrained from attacking the easiest thing here, which is that AI is not as clever as the media hype train likes to make everyone think. The chasm between the conscious human mind and current engineered NNs that are about as functionally intelligent as 0.1 fruit flies is astronomically large and n-dimensional; we are not talking about something as simplistic as Koomey's law here. But the media likes to apply "infinite exponential growth" to every aspect of technology; reality is far more subtle.

Either the AI will be more capable than the human mind or the human mind will be more capable. Jobs will always exist for the more capable mind... because it is more capable.

If the AI becomes superior to the human mind, then you’ll have much more to worry about than jobs.

Plus, at the base level there will always be the job of taking care of yourself and keeping yourself alive. If a robot can take that job then great, that means you don’t have to work anymore.

What AI cannot learn, and will never have, is the experience of being human. It never can, just as you can never have the experience of being a robot. Yes, perhaps one day you can teach it too, but it will be learned, not lived.

Where does this experience show the most? In art, and how it communicates and resonates with you, because you can connect and relate. Even if robots learned to play jazz like Davis, Coltrane and Parker combined, I wonder if there would ever be that human feeling and reflection of the player's experience and life.

Empathy may be another field. Taking care of elders or other people in need. Connecting to the experience of other, relating to it. Knowing, that your life is finite.

Maybe the way to solve this problem is for us to become more human. Finally, one may want to add.

A few things to consider:

Are AI and robots really cheaper than manual labor for a lot of things?

How close are we really to having AI replace even simple tasks like flipping burgers? What about mopping the floor? Taking out the trash?

Why do we work? What do we work for?

> We train machines to see better, hear better, sort faster etc. Any new job will require some sort of skills out of the set of skills that can be trained. But the moment you create this job, it will be automated, because a machine can do it better and faster.

Except that you haven’t cited any skill that is currently (or in the near future) better performed by a machine than by a human (maybe sorting, but that’s a rather limited skill in itself).

The big "Oh shit" moment will be when AI and robots can perform any task a human can perform better and/or cheaper than a human can, which would make it fiscally unwise to pay a human for any kind of labor. Even if humans could still be better at creative pursuits (such as writing, music, and art, until AI possibly betters us at that too), that is not feasible for 99% of people to survive on.

Eh, I have some pretty terrible data-collection and matching work that cannot be done by a computer, due to needing to understand similar words, what is in a combo, what is important in a combo, or the strange situation where things are free after you pay an upfront cost.

I've made attempts to automate this, but I constantly find that the rework needed is too extreme and the results incomplete.

These things are the future of employment. This problem is non-trivial, as my hires don't understand it and struggle for a long time. It takes teaching for them to 'get it'. Still, we have to make decisions.

Labor will be automated; this is good.

The future will be using minds, even uneducated minds will be put to use.

Would you ever go see a robot performer[0]? Like, a robotic comedian, or a robot ballet dancer, or a robotic football player? Would you ever go see a robotic therapist if you're suicidal? Would you let a robot feed, and raise your infant? Would you get guided through a third world country on a mountaineering expedition by a robot? Can a robot write/perform an entire feature film? Would you watch a late-night talk show of one robot chatting with another robot?

Would you let robots run a country? Or a state? Or a town?

> A little thought experiment: Let's say humans are capable of 100 skills. Skills can be anything from: driving, seeing, hearing, reading, walking, carrying, drawing etc.

Can a robot be more human than human?

Am I going to fall in love with a robot? Will the robot show perfect empathy? Will they become enlightened?

[0] Captured! By Robots! Doesn't count. https://www.youtube.com/watch?v=_zvU165DEYc

Not only that, but consider how many jobs we could create if only we banned the use of combine harvesters, tractors, various machines in factories all over the world etc.

Yep, every boss there is would replace their employees with a single computer if they could; it's basic economics, not a single doubt about it: 99.99% would do it in a heartbeat. The thing is, that wasn't a problem for hundreds of years, because computers/robots couldn't do most of what humans do. But as computers get closer every day to, and even better than, what humans can do, this artificial balance we call capitalism, between what humans can do and what other humans are willing to pay for it, starts to crash, as the former are no longer needed. And we realize that capitalism is an anarchy of sorts; it was mostly stable for a few centuries, so it kind of works, but soon it will not.

> Yes, billions. Not Millions.

Well, if they even agreed on millions, that would still be an improvement over the current thinking, i.e., that if one type of job is eliminated, a new type of job will magically appear and we'll keep living in the same rainbows-and-unicorns economy that we are in.

Socialism or barbarism, eh?

We had the technology to build human-level artificial hands in the 80s, I remember seeing it in movies and on PBS:


We arguably even had the knowledge to build the neural nets to run them. Had more people been exposed to functional, declarative and data-driven programming back then, I think it would have seemed straightforward to wire up large networks as spreadsheets. Sadly, most of those older approaches have been replaced with opaque and buzzwordy approaches that bury concepts in terminology.

Arguably there was no market back then for AI. But I think what really happened is that AI arrived simultaneously with neoliberalism and supply-side economics which treat workers as commodities. Rather than letting everyone replace themselves with robots and keep their paychecks, they were instead forced to work longer and longer hours for less pay and compete with companies overseas that have few labor or environmental protections.

The problem isn't the technology, it's the political climate that can't see beyond jobs and so-called handouts. Which means that alternative societies are going to have to tackle problems all at once rather than piecemeal. There's going to be tremendous pressure to undercut human-oriented economies that focus on self-actualization over the mundane tasks of running the daily rat race (just like we've seen with organic, high efficiency, solar, wind and recycling being disincentivized).

> We arguably even had the knowledge to build the neural nets to run them. Had more people been exposed to functional, declarative and data-driven programming back then, I think it would have seemed straightforward to wire up large networks as spreadsheets.

The problem back then was computing power, not people not being exposed to these paradigms.


Please stop posting low-effort one-liners here.


Let's be honest - the only thing we care about is "are the programming jobs safe?!" Well, are they?

P.S. I am trying to help a newer dev atm, and I realize I always have only one question for them while basically doing their work "What. the. hell. are. the. business. requirements?"

Suddenly, this makes me feel much more like a business analyst than a code monkey, though being a decent code monkey is definitely a pre-req.

> Let's be honest - the only thing we care about is "are the programming jobs safe?!" Well, are they?

As a software developer myself, I’d love to one day see a world where my work is no longer needed. Wouldn’t it be awesome to have worked so hard toward automation that you can automate your own job?

How apropos, if our own jobs are the first to go.
