The Arrival of AI (stratechery.com)
162 points by misiti3780 on Mar 29, 2017 | 127 comments



While I definitely agree that advances in ML in the last 20 years are extremely important (and potentially revolutionary), I think this article misses the mark in a few places.

>Now, instead of humans designing algorithms to be executed by a computer, the computer is designing the algorithms. (Albeit guided by human-devised algorithms)

This line is way off in both tone and substance. On tone, it really underplays the human effort involved in effective machine learning (as it is practiced in 2017) and anthropomorphizes "machines" to an unreasonable extent. In substance, I fail to see how a machine that "designs its own algorithms" according to an algorithm designed and implemented by a human is fundamentally different than an algorithm coded directly by a human. To use the author's example, machine learning allows humans to build complex software systems in less time just as a bicycle allows humans to cover more distance with less energy. It's a big improvement, but it's not, say, teleportation.

>it is only now that the machines are creating themselves, at least to a degree. (And, by extension, there is at least a plausible path to general intelligence)

I could not disagree more strongly with this addendum. Simply put, I fail to see any path from state-of-the-art ML/DL research today to AGI, and I would even go so far as to say that humans have made approximately zero progress on this task since it was first formulated in the 50s. I think we know about as much about "intelligence" (and consequently, what would constitute AGI) as star-gazers in ancient times knew about the universe. That's not to say that it will take millennia to invent AGI, but the path to get there is probably quite orthogonal to modern ML research.


Simply put, I fail to see any path from state-of-the-art ML/DL research today to AGI

Before I really understood and worked with NNs, I felt the same way. I thought the AtomSpace computation approach and other similar granular computation paradigms were much more likely to make progress.

However, after seeing the striking similarities between how I watched my three kids learn through the infant-to-toddler ages and how we build our convolutional neural nets at my company, it was like a light went on.

If you look at how relatively sparse and weak even the best deep nets are compared to human brains, especially considering a really narrow set of inputs - we are at the very early beginnings of mimicking the complexity of the human brain. It seems to me that the ANN approach is right, we now need to make it radically more efficient and give it better input sensors.
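
As a rough illustration of the scale gap (ballpark figures of my own; a synapse is not equivalent to a network parameter, this only shows the orders of magnitude involved):

  # Ballpark comparison; both figures are rough estimates.
  human_brain_synapses = 1e14    # commonly cited ~100 trillion synapses
  big_2017_vision_net = 2.5e7    # e.g. ResNet-50, ~25 million parameters
  print(human_brain_synapses / big_2017_vision_net)  # => 4000000.0, a millions-fold gap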

We need a nervous system for AGI (structured data acquisition) before the big brain tasks will be solved.


I think that when people talk about "AGI" what they often mean is artificial personality.

Sure, your NN learns facts and processes like your toddler learns facts and processes. Those are a tiny part of who your toddler is, though.

The essential component is their will. You don't have to set them up and feed them data. They don't sit quietly until you ask them to answer a question. Kids have distinct personalities from very early on, and demand input, and produce opinionated output (to put it mildly)--from day one.

Emotions are a huge part of that. But to my knowledge, we have less understanding of emotions, and spend less time trying to create them with computers, than conscious processes like "which picture has a car in it."

But there is evidence that if you take away a person's emotions, they have great trouble making decisions. They can consciously evaluate their options. They just struggle to pick one.

So how will AI research focused on replicating conscious thought result in AGI, if we don't know how to generate emotions? Is anyone even trying to do that?

My standard joke is that a lot of people are working to create a car that can drive itself, but who is investing to build a car that will tell its owner, "fuck off, I don't feel like driving today"?

But can a machine that always does exactly what it is told to do really be thought of as "intelligent" the way we think of human intelligence? Do smart people always do exactly what they are told?


What you call will is no different in my mind from any other thing we encode into a NN - it's just at a different level and depth.

Creating motivation in AI is an open area, and in fact is arguably the big hairy beast when it comes to the "Friendly AI" question or really the whole "General" part of it.

You do the same thing everyone else does in this debate, which is move the goalposts - we don't know how to build "emotions", we don't know how to build motivation - until we do, or it is perhaps an emergent property of a sufficiently deep net.

Too many other strawmen in there to argue, e.g. the idea that we will need to always tell them what to do.

The point I am making is that because the reinforcement nature of biological systems is mimicked in the basic ANN structure, it's the strongest candidate (at scale) for the building blocks of an AGI.


At a guess there are two major schools of thought here. The first thinks that emotions, will, personality etc are much more complex than the way we think of neural nets today. The other thinks that what we are seeing is already much more like the brain than we were expecting and if we continue down this path we may discover those things are emergent aspects of much simpler behaviours at a small scale.

My slightly optimistic money is on the latter one.


I interpreted the point of the article as being that we tend to focus too much on AGI and not enough on how disruptive "narrow" AI may be. The only thing I disagree with there is that I think lots of people focus a lot on the potential issues of broadening use cases for ML. But I agree that AGI is mostly just a distraction to that discussion.


Agreed all around. At the very least, I don't think it was obvious what message the author was trying to convey.


> I fail to see how a machine that "designs its own algorithms" according to an algorithm designed and implemented by a human is fundamentally different than an algorithm coded directly by a human

Human creativity can be empowered by optimization algorithms. It's a huge improvement over design by hand.


> How many will care if artificial intelligence destroys life if it has already destroyed meaning?

This line sounds deep, but I think it incorrectly conflates work with life having meaning. If eventually there isn't a need for large swaths of the population to work, then so what? I don't think the elite aristocrats in previous centuries had any problem with not working. Humanity can adapt to find other sources of meaning, like the pursuit of art in its various forms (although I'm assuming that computers can't replace art). I think a better question is if society can adapt quickly enough to fill the void left by the absence of work.


It's interesting that your counterexample - doing art - is indeed a form of work. Because it is. Doing good art is painfully hard work, in fact.

There's nothing in principle that stops a machine from creating art. Even better art than any person could do. So once that happens, where are we left?

Meaning isn't something physical in the universe. Meaning is an emotion. It's what you feel when you're working towards something that you believe has some greater importance. With all opportunity to work towards anything taken away, life will become meaningless by definition, unless humans are left with some pseudo-artificial challenges to push against.

Zookeepers put the animals' food inside a metal box with a small hole, so the animals have to do work to get it out. It's good for the animals to have something to work towards, and they're too dumb to realize they're being manipulated. Maybe that's our future. With WoW and Clash of Clans, etc, sometimes it feels like we're already halfway there.


I already encounter this in my daily life. There are people in the world who can do anything and everything I can do better. When I think of something interesting, I google it and usually find out that not only has someone already done it, they've done it better than I was thinking of doing it.

But then I go and try anyway. I'm not even really sure why, but I like doing it.


I think something that gives life meaning is to discover and live up to one's potential, even with the full knowledge that our potential is almost certainly less than someone else's.

I will probably never write code as well as John Carmack or compose music as well as John Williams, but that doesn't stop me from trying. And it is fulfilling.


It's the drive for personal growth, known as "self-actualization". It's an interesting topic to think about: assuming an AI orders of magnitude more evolved than a human brain, what would its drives and motivations look like?


> Meaning isn't something physical in the universe. Meaning is an emotion. It's what you feel when you're working towards something that you believe has some greater importance. With all opportunity to work towards anything taken away, life will become meaningless by definition, unless humans are left with some pseudo-artificial challenges to push against.

Although some minority of people derive "meaning" from work per se (work = something that you get paid to do), I would say that the vast majority of people can derive meaning from other things. For the vast majority, work is for getting money.

Therefore, there are still lots of things that you can use to derive meaning, such as: raising a family, caring for others, playing music, studying different subjects for fun (philosophy, etc.), being part of your local community, doing sports, doing competitive sports (if that fits your style), etc.

Bottom line: do not mix up work with the quest for meaning...


> Although some minority of people derive "meaning" from work per se (work = something that you get paid to do), I would say that the vast majority of people can derive meaning from other things. For the vast majority, work is for getting money.

I think it may be possible that you could be conflating two subtly different things here. Where people currently derive meaning in their lives might not be quite the same as where people have the potential to derive meaning in their lives.

I know a number of people who are, for instance, psychologists, teachers, or social workers. They have chosen to derive meaning in their lives through work. Some of them have also gone on to raise families and care for others, turning sources of potential meaning into sources of active meaning. Thus people can incorporate both current and potential sources of meaning into their lives. Some people opt to dedicate their lives entirely to their current sources of meaning!

Yet, it's perhaps less than maximally wise to conflate someone seeking meaning through their work today with their future in which they might seek meaning through raising children. Meaning does not always operate with expected values.


When I wrote "work" I didn't mean in the limited sense of "employment".

Every example you gave is a form of work in the general sense of effortful goal-directed activity.


Meaning isn't an emotion. Meaning is a story. A story we tell ourselves, over and over again, until we forget it is a lie.


Only in the fairly reductionist view that every human thought is a "lie" [1]. There is some meaning in this idea, but it clearly bulldozes over lots of useful distinctions that can be made when considering all human thought.

[1] This brings to mind the famous quote: "The map is not the territory"

https://en.m.wikipedia.org/wiki/Map–territory_relation#.22Th...


I think you're misrepresenting the situation.

There's a difference between working to survive and working because it's enjoyable.

For example, I can work on my mountain bike skills, or I can sit in an office and work on code. One I do because I enjoy it, and one I do to pay my bills.

Computers, automation, and AI will take over the second type, but they won't take over the first type.

I'd actually go so far as to say your definition of work, meaning, and purpose is insulting and harmful to a large number of people. Does anybody really derive meaning and purpose by cleaning toilets, bringing people food, or stocking shelves? Should they? I don't think so.


I feel like many people are missing a crucial distinction. Work != a job. The goal is to kill off the need for the latter.

As long as humans are humans, we'll always want to do something. We'll set ourselves personally meaningful goals and strive to achieve them. That is still work. But that's totally different than organizing your life around involuntarily spending most of it doing things you don't care about so that you can put bread on the table, and so that you have that table in the first place.


Art has meaning which is why humans find it interesting. Are you therefore saying that machines can create meaning that is interesting to humans?


It's absolutely possible for machines to point out things no human had thought about before.


Is that all meaning is to you? Especially the kind of meaning embodied by art?

Reductive definitions don't seem to contribute much to these kinds of discussions.


I've heard people speak about how they find meaning in nature. If a random process can manage, why not a computer?


You are calling people a random process?


Nature is the random process.


Natural selection isn't random. Fitness is selected for. Finding meaning is probably a fitness-enhancing ability for humans.


This man found a lot of meaning in mountains, rivers, skies and trees [0] and passed it to others. At least the first 3 of those are not made by natural selection.

[0] http://s.newsweek.com/sites/www.newsweek.com/files/2014/09/2...


> Natural selection isn't random.

Yes, it is; specifically, it's a biased filter over a lower-level set of random change processes (which have their own biases), producing biased-but-random (non-deterministic) changes over time.


It totally is a random process. It's just not drawing from a uniform distribution. In fact, the process strongly influences the distribution it's drawing from.
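
A toy sketch of that idea (my own illustration, not a model of real biology): each step is random, but a biased filter makes the outcome anything but uniform:

  import random

  # Toy model: random mutation + a biased survival filter.
  population = [0.0] * 100
  for generation in range(1000):
      population = [x + random.gauss(0, 0.1) for x in population]  # random variation
      population.sort(reverse=True)
      population = population[:50] * 2  # biased filter: top half survives and doubles
  print(sum(population) / len(population))  # drifts far above 0 despite random changes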


Mountains aren't formed by natural selection.


Ok, but the commenter said that people are making the meaning.


That's a painfully stupid question. Machines have already created lots of meaning that's interesting to me, including stuff that couldn't be done without them and unexpected behavior due to some patterns we hadn't previously recognised. There are really too many examples to even get into it.


Your reply would be much better without the first sentence. There is no need to demean or engage in such speech.


The meaning only exists because humans attribute meaning to the input, output and code that creates the output.


If there are too many examples to get into, please share a few.


There's nothing in principle that stops a machine from creating art. Even better art than any person could do. So once that happens, where are we left?

We don't have to wonder -- it has already happened: http://the-best-art.computer


I think this is a funny project, but the description is totally exaggerated:

> The computer queries the universe and uses an algorithm to objectively calculate the best art for any given moment in time. The human executes the commands.

How is this objective at all? The artist creates an algo and picks some stuff to query. All the artist does is add another (subjectively formed) layer to the process.

> The computer's creative process is computational, and therefore unbiased.

:-/

> How do you know it actually produces the "best" art? Good art pulls meaning from the chaos of the universe, and also reflects the artist's unique point of view. The computer rigorously combines these two factors in its programming, optimizing them to produce the best art.

I find this a very arrogant thing to say...

> There's nothing in principle that stops a machine from creating art. Even better art than any person could do.

I don't think this project would be an example.


I think it was a joke...


Did you read your own link? There is not much art there that the computer is doing.


>Even better art than any person could do.

The quality of art depends not only on the end result but also on the process. Robot art won't be better unless we like robot artists more than human artists.


Who says life has any intrinsic meaning by definition now or ever?


It certainly wins on tiebreaks


It's the derived meaning that matters, and the question is whether humans will be able to derive meaning in the absence of work.


It's not really work. Work is something you cannot avoid and don't desire. People happy to do a thing don't describe it as work, just as what they enjoy doing; the kind of thing that wakes you up happy. Unlike the "working" dread imposed by the chaos reincarnated in social structures.


Can a computer give us first-person insight on the human condition? That would be one working definition of "art."


By definition, it can't be first person. But if you can't tell the difference does it matter?


And since some things are easier understood from the outside, perhaps an intelligent second-person view on the human condition could be useful?


Didn't elite aristocrats in previous centuries spend a lot of time sleeping with each other's spouses and playing arbitrary power games? Looking to them as a source of what a post-work world could look like isn't particularly inspiring.


I mean, there are plenty of very happy people sleeping with each other's spouses right now. Debauchery can be fun in its own right.


> I don't think the elite aristocrats in previous centuries had any problem with not working.

Exactly. And they were not doing any kind of "art" either, for the most part. They went hunting. Sometimes they made war on one another (usually, only in spring and early summer, because the rest of the year, war was too inconvenient). They had intrigues (who slept with whom).

I fail to see how work as it is currently understood, i.e. a 9-5 job, has any connection to any kind of "meaning". Life is already meaningless, but it's also unpleasant. If we can make it more pleasant by having machines do the work, how is this bad?


Work is only meaningful for me because I get to create things that other people use and talk to me about. That's why I became an engineer - I was doing recruitment before, it was awful because I knew I wasn't adding value, just filling a gap. There was no creation or real feedback, just numbers and phone calls. The only fun part of that was the networking and learning from my engineering candidates.

I recommend trying out the Culture series. Imagine a universe where humans are definitely useless because hundred kilometer long spaceships are controlled by unimaginably intelligent AI "Minds." The humans find meaning by playing games, exploring the universe, but most consistently, being academics. They get on the ground and investigate other cultures and do that human thing - provide unique interpretations. The Minds don't lack personality - they can form their own opinions and interpretations - but humans get meaning simply by immersing themselves in the research and work itself.

I'm getting off track here, but funnily enough I often become frustrated because these books focus so much on the humans, when something very much outside their reach is occurring and is more interesting to me. For example, in one book, an object enters the universe seemingly out of nowhere, and is utterly inexplicable by the Minds. But Iain M. Banks for some reason spends time telling a human love story in the midst of this - why should I care! Tell me more about what the Minds are doing about a thing they can't explain, a first in the history of the universe for them!

TLDR my rambling post - humans will find meaning, whether or not that meaning is also the means by which they are able to eat (salary work).


> For example, in one book, an object enters the universe seemingly out of nowhere, and is utterly inexplicable by the Minds. But Iain M. Banks for some reason spends time telling a human love story in the midst of this - why should I care! Tell me more about what the Minds are doing about a thing they can't explain, a first in the history of the universe for them!

Tangential to the topic of the article, but I share your feelings here. I hate it when sci-fi authors focus too much on people. As I like to say, if I wanted to read about complexities of interpersonal relationships, I'd pick a romance novel.


> I think a better question is if society can adapt quick enough to fill the void left by the absence of work.

I think that question has been answered to some degree by research (can't pinpoint it off the top of my head), which seems to indicate that generally humans fill that void by watching more TV and sleeping. I think it's idealistic to believe that people will suddenly become creative and culturally inclined.


> generally humans fill that void by watching more TV and sleeping

The good news is they're only wasting their life for themselves. Those of us that enjoy other pursuits are still free to enjoy them, regardless of how many people are watching TV and sleeping.


We've reached the point where more people are arguing that access to X is a fundamental human right, where X is something that requires resources and labor to provide.

As the breadth of what is socially argued to be a human right increases, you'll find that those working to provide the labor and capital will be increasingly deprived of the fruits of their labor to provide for those who are sitting around watching TV.

There are many leisure activities I would like to partake in and would be able to partake in if so much of my income didn't go to taxes. I'm not saying all taxes are bad, but the scope of what taxes are increased to support only increases under a democracy. A democracy will always result in the election of those individuals that promise to give benefits for free, when the truth is that the benefits will increasingly come from those in the minority.

I currently still think a democracy or republic is the best form of government compared to others that have been tried, because it prevents abuses by a few against many, but it fails to prevent abuses of the many against a few. I'm not sure how we can achieve a governmental form that generally prevents abuses in both directions.


In this automated world, that traditional viewpoint is rapidly becoming incorrect. That is, it doesn't require any person's resources or labor to provide much of what we consume. It's automated pretty much from bottom to top.

As for the money, in transforming an agrarian barter economy to a modern one, the USA had good use for a cash system for 200 years. But what about when the only obstacle to gaining essentially free goods is a capitalist holding them hostage? It's all starting to break down.

Anyway, even a cursory examination of the foundations of the UBI initiative shows that it pretty much pays for itself. The knee-jerk "I don't want to pay for other people to sit around" complaint is uninformed and obsolete.


   But what about when the only obstacle to gaining essentially free goods is a capitalist holding them hostage?
Are you serious? A capitalist holding them hostage?

A capitalist can only charge what the market will bear. If they charge too much, no one buys what they have and they lose their return on investment and they are forced to lower their prices until those in the market can afford to buy what they are selling. If the capitalist invested too much and can't get the price they originally wanted, they will at least try to minimize their losses.

If you believe a "capitalist can hold goods hostage", the only conclusion I can come to is that you don't actually understand capitalism and how it works.

UBI is unproven. I could see a negative income tax making some sense, but at absolute best the maximum amount any individual receives needs to be uncomfortable, so there is an imperative to contribute to the productivity that supports a society. Wealth is created. It doesn't just magically appear.

https://en.wikipedia.org/wiki/Wealth_in_the_United_States


OK, I meant a meta-meaning with that. Of course it's what capitalism does. But we don't use capitalism because it's sacred or written on a scroll - we use it because it works (worked).

When it quits working, we're gonna have to switch. Has it quit working? That's the fundamental issue before us.


"the scope of what taxes are increased to support only increases under a democracy"

In addition to all those leisure activities you have been deprived of by "those people sitting around watching TV", you apparently have been too busy to acquire basic historical facts or exercise basic common sense (I vaguely recall various people being elected with promises to reduce taxes, and occasionally even doing so).

Since about the 1940s, there has been no clear trend upward in the fashion that you describe; certainly nothing that would substantially move the needle in how many leisure activities you can partake in.

So I suggest you go take that pottery class now.


You're focusing only on income tax in the US. The US is not the only democracy, and even if it were, income tax isn't the only tax used to extract value from those creating it.

In several democracies the tax rate has increased since the 1940s. For example, in Germany taxes as a percent of national income has increased from 29% in 1950 to 45% in 2011.

In the US, the tax rate has remained largely unchanged since 1945 when you take into account all taxes, not just income taxes.

My point was that the scope of what taxes support increases under a democracy. All savings from improvements in productivity are just used for new activities. Instead of returning those savings from increased productivity to taxpayers, they just find new activities to spend it on.

I want from the government today, exactly what we got from the government in the 1950s, but delivered with 2017 efficiency/productivity. The government needs to stop finding new things to waste our money on.


Welp.

As you observe, most people in a democracy don't agree with you. So your fetishization of "exactly what we got from the government in the 1950s" will be a lonely dream; likely most of the rest of us would like something in excess of 1950s investments in education, health care, and so on. The rest of us here on Planet Earth might also be aware that many of the goods the government might have to pay for are competed for by other sectors, so if the government decides that it would like to pay 1950s-level prices for such goods, it will be shit out of luck.

I suspect you can probably find a country or two out there with a tax burden and government services akin to the USA in 1950; perhaps you should move there?


> I suspect you can probably find a country or two out there with a tax burden and government services akin to the USA in 1950; perhaps you should move there?

If the welfare states of the world don't get their acts together, this will happen. And it will keep happening until they've lost enough of their high-tax-paying citizenry that they can no longer afford to sustain themselves and will collapse under their own weight.

The inevitable future is one in which taxes are priced more like a payment for a service provided and less like a protection racket. And in which citizens are treated more like customers and less like assets.


Yeah, right. I know a ton of people who talk like this and exactly 2 who have moved to low-tax jurisdictions. Neither of them are any great loss.

Generally the tone is a bunch of libertarian piffle where they soak up education, get raised up and absorb experience in a nice stable high-tax jurisdiction. Then at some point they stamp their pretty little foot and say that it's quitsies time for their big-state upbringing and that they are now "even" with the societies that conferred all those advantages on them... and off they go (typically to run businesses whose business entirely depends, weirdly enough, on economic activity in the high-tax type jurisdictions that they left).

I have some sympathy for the idea that taxes should be lower and governments should be leaner, but the messianic nonsense (collapse of the West predicted, film at 11) that people like you and the guy I originally responded to spout is just irritating. In the fantasy world you live in, no democratic government ever undergoes a course correction due to unsustainable spending. No one ever gets irritated and votes in a party that promises to lower their taxes and does so, ever... You are manufacturing a crisis to make your moderately interesting ideas seem like The Only Answer To Everything.

Incidentally, the answer is to treat citizens like citizens, not like 'customers' (or assets, for that matter) - unless you want to torture the idea of being a "customer" to the point where you can turn benefiting from a public good into somehow being a "customer" of that good. Generally this isn't an analogy that particularly helps in understanding anything at all. What would we learn about defense, public order, or the notion of public education as a public service - not a "customer good" provided to the person themselves or their parents - by rephrasing it as a "customer" relationship?


I see no real issue. If millions of pensioners and potheads can find meaning without work, quite rapidly, we'll do OK.

Oh and btw, work bringing meaning? Many jobs don't.


Individuals are certainly able to do ok without work, as long as their basic needs are met.

However, I have doubts about the will of "society" to provide for their basic needs unconditionally.

Consider the type of arguments made in the US about healthcare, or the comments made by Europe's elite about countries wasting money on "drinks and women"[0]

[0] - http://www.cnbc.com/2017/03/22/dijsselbloem-under-fire-after...


Yeah, I didn't like that conclusion either. I mean, it sort of paints too rosy of a picture of the Human Condition. Just winging it but I'm pretty sure about 80% of the world's population doesn't get far past the "How do I not die?" biological response, and has but fleeting concern for what one might ascribe to "Meaning."

Well, that and the line is kind of tone deaf in not acknowledging Religion gives people Meaning in their lives by way of Beliefs.


The idea that I need other people to tell me what to do (which is generally what a paid job is) to have meaning in life is insulting.


Well, it's less insulting to say your work has to matter to somebody besides yourself. Living only for yourself is really a dead-end and arbitrary endeavour. I personally find it meaningless.

Not everything valuable is recognised or rewarded with money, I'll give you that.


It's more complex than that. I've been utterly bored by most of the jobs I had simply because while it mattered to someone else (as proven by me getting paid for doing it), I couldn't see how it was useful to anybody besides lining the pockets of my boss and their customer.

Maybe I'm weird, but I want to be able to point at my work and say that it helped someone to make the world a little bit better, and not just to earn someone more cash.


Art is a good one, people will always pursue art regardless of whether or not 'better art' already exists.

I would add Exploration. Getting into space has never been cheaper or safer, and there is vast opportunity in space for everything imaginable: resources, scientific knowledge, new discoveries, new challenges, etc. We don't seem to be satisfied as a race with robots doing our exploration for us up till now; I don't see why that would change in the future. Humans want other humans to be on the surface of these planets making discoveries, not some robot slowly rolling around a desert taking dirt samples for a decade.


Does it appear likely to you that the direction we are headed is such that everyone who is automated out of work will be like aristocrats, with their needs so fully met that they may live a life of the leisures of their choosing? It appears far more likely to me that they will instead be begging for scraps from an increasingly small and concentrated class of wealthy folks who do not respect them because they do not work. Your idea of a world full of aristocrats may be a nice one, but the world I see when I look around doesn't bear much resemblance to it.


Society will adapt, but in a silly way. If there comes a time where there truly isn't a need for large swaths of the population to work, society (government) will just make up work for them. Half the population gets paid to dig holes and the other half gets paid to fill them in. We're never going to beat this unnecessary puritanical mindset that a living is something that must be earned through work, and that there is something morally wrong with people who don't work.


"After all, accounting used to be done by hand". Picture of keypunches and an IBM 402 or 403 tabulator. That's machine accounting. Nobody's using an abacus or doing pencil and paper arithmetic.

There's a recent estimate that about 50% of jobs are automatable with current technology. The future is already here; it's just not evenly distributed. Strong AI is still a ways off, but mechanization and computerization of work is coming very fast.

The next big milestone is probably not strong AI. It's good eye-hand coordination for robots. Robot manipulation in unstructured environments still sucks. Baxter was a flop. (Rethink Robotics, Rod Brooks's company: invested capital, $115 million; sales, $20 million.) Universal Robots in Denmark is doing better, but they're tiny, about $3M in profit. Nobody can build a robot to do an oil change on a car. That problem should be solvable with current technology.

Figure out how to handle cloth with a robot and you own the textile industry. China's government is putting money into that problem to fight off competition from Vietnam and Bangladesh.


> There's a recent estimate that about 50% of jobs are automatable with current technology. The future is already here

It's been here for a while. Agriculture used to be ~80% of the U.S. labor force, now it's ~2%. We've already seen most work get automated, we just didn't notice it (because we moved on to other work).

Though it's worth pointing out that in recent years, automation has slowed down quite a bit[1].

[1] https://www.bls.gov/lpc/prodybar.htm


>> Figure out how to handle cloth with a robot and you own the textile industry.

Sewbo may have found the solution for robotic sewing - it hardens the cloth using a chemical, lets a robot handle and sew the cardboard-like cloth, and then puts it in warm water to make it cloth again.

http://money.cnn.com/2016/10/11/technology/robots-garment-ma...

>> 50% of jobs are automatable with current technology.

And that's probably missing another key source of job loss: innovation in general. What happens if people decide to eat plant-based meat? You need 10% of the labor of that industry. And that is true for many other innovations not related to automation.


Sewbo's scheme is clever. All they have is a tech demo, though. No production. It's too bad American Apparel went bust; many of their garments could have been assembled that way.


That's the thing though - your example of an oil-changing robot could be built today - if we had standard placement of components.

It's strange - our input controls are all pretty much the same from car to car. But everything else, from under the hood to elsewhere, is completely and randomly different - not just from car model to model, or manufacturer to manufacturer, but even year to year on the same model of car from the same manufacturer! This frustrates mechanics and anyone who works on their own cars to no end.

In short, if we wanted to solve these problems, we could solve them today, much like we solved the automation problems in manufacturing - by standardizing things, from sizes, to placement, to speeds and whatever else. We didn't try to replace people with robots that looked like people, but rather we designed machines for the task at hand, and made what they interacted with homogeneous.


  That's the thing though - your example of an oil-changing
  robot could be built today - if we had standard
  placement of components.
Business-wise this can be spun as a feature, not a bug.

How many different engine configurations are there on the road today? (Ignoring exotic cars and anything older than, say, 1970.) 1,000? 10,000? Brute-forceable, with money. And once you have a database with the location of the sump plug and oil filter on all common cars, that's a moat a competitor would have to cross. Scan the VIN on the dashboard to figure out which car is which.

Handling the sump plug would be easy. (Impact wrench to get it off, torque wrench to put it back on. If you're fancy, you can have some way to detect if the sump plug is beat up and give the customer the option to buy a new one. Some cars have a sump plug washer that you're supposed to replace every time, which would be tough) Replacing the oil filter would be harder. Sump plugs have to be at the bottom of the oil pan, because of gravity, which makes them easy to get at, but oil filters tend to be crammed up in the middle of the engine bay, with narrow clearances.
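
A hypothetical sketch of that VIN-keyed database (every field, figure, and the decode_vin helper here is invented for illustration):

  # Invented VIN -> service-geometry lookup, per the "moat" idea above.
  SERVICE_DB = {
      ("TOYOTA", "CAMRY", 2015): {
          "sump_plug_torque_nm": 38,
          "sump_plug_xyz_m": (0.42, -0.10, 0.15),  # offset from a reference datum
          "filter_style": "cartridge",
          "replace_plug_washer": True,  # some cars want a fresh washer every time
      },
  }

  def service_data_for(vin):
      make, model, year = decode_vin(vin)  # assumes some VIN-decoding service
      return SERVICE_DB[(make, model, year)]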


Curious if you have any sources on the number or type of jobs that are automatable but not yet automated. It seems like a ripe market for the right people. I know law is one I find particularly interesting.


The author, Ben Thompson, sees Machine Intelligence as meaningfully different from meticulously designed logic systems. There are some differences of course, but whether those differences will get around the arguments that were critical of Good-Old-Fashioned-AI is something that has been debated since perceptrons were invented.

Machines will replace some jobs for sure, but luckily, they can only really be reliable on the jobs that a fast but dumb slave, or a fanatical bureaucrat could have done anyway.

When my dad started his first job he was in a drawing room of about 50 draughtsmen, making technical drawings from the sketches of a single designer. When he ended his career he was the only designer-draughtsman in an entire company as the computer did the non-creative formatting of his ideas into technical drawings. That didn't require machine learning, and machine learning is never going to replace the "designer" part of that job. Yeah, never.


>Machine learning is never going to replace the "designer" part of that job. Yeah, never.

I've designed parts and assemblies with substantial variation options, to the extent that customers can order variations that I did not consider. That was 6 years ago with a then 6 year old CAD tool.

A more efficient designer that works at a higher level, with lower level parts details automatically generated does in effect reduce the need for designers.

Generative design tools have massive potential for changing the nature of design work, lowering the man-hours involved in complex designs. Most design work is not creative, it's fleshing out the details to make an idea work.


It turns out that computers are pretty decent at simulating creativity. Do some searching for machine-created art/literature.


I work with those people on a daily basis. There are legitimate critiques of the creativity, often tacitly acknowledged or fully ignored.

There's no one in the field making any legitimate claims to computational creativity. Maggie Boden did a fine job by inventing a few definitions of creativity and finding them in the output of machines, but it's all ultimately explicit rule-following.


I'd recommend this:

https://en.wikipedia.org/wiki/Fluid_Concepts_and_Creative_An...

There's some interesting experimental models of computational creativity. Unfortunately, Hofstadter's actual work in CS is somewhat overshadowed by his, uh, other work.


>> but it's all ultimately explicit rule-following.

Doesn't it fall into the problem of "you can't teach a machine to swim"? Meaning that all we can fairly judge is whether the result would have been considered creative if done by a human?


I'm pretty sure that 'cat' is not a particularly creative program, even if I give it Shakespeare.txt as its input.


> explicit rule-following

I'm willing to bet that human creativity is indistinguishable from rule-following with the occasional dice roll. Machines are currently not as creative as humans, but I don't see why they couldn't be.


Until the machine can make a more optimized design than humans.


You'll always need intelligent humans using your product to tell you what to optimise for. An optimisation is almost always a trade-off and you'll need your end users to tell you what's ok to sacrifice. For example, I expect my mum would trade about 50% of the processing power of her phone to get 10% more battery life, but the most important feature of her phone right now is that it's white so it's easier to find in her bag.

My point is that you can't just tell a machine to go optimise. You need to tell it what to optimise and that's often a complex indeterminate culturally specific interaction between poorly defined attributes.
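
A sketch of that point (all weights and attributes invented): a machine can happily optimise this function, but only a human could have chosen the weights:

  # Invented phone-design objective. The optimisation is mechanical;
  # deciding these weights is the human, culturally specific part.
  def phone_score(design, w_battery=0.6, w_speed=0.3, w_findable=0.1):
      return (w_battery * design["battery_hours"]
              + w_speed * design["benchmark_score"]
              + w_findable * (1.0 if design["colour"] == "white" else 0.0))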


I think the direction is generating designs so quickly that the 'designer' is essentially exploring a design space manifold in realtime.

You'll still need someone to do it, but I think designers are going to get the kind of boost in productivity that accountants got from spreadsheets. Whether that translates to fewer designers, or more and better designed things... I suspect the latter.


Only slightly related, but Stratechery recently shot up to #1 on my list of tech publications and I don't think anyone else is a close #2. The writing quality, level of insight, and timeliness are all excellent.

Two recent highlights:

  * https://stratechery.com/2017/intel-mobileye-and-smiling-curves/
  * https://stratechery.com/2017/the-uber-conflation/


They're more like tech business, but yeah.


Yeah, he's a smart cookie.

Quite a few people understand and articulate the technical aspects of the tech industry. He brings a much wider perspective into the picture which is always appreciated.


The Stratechery guy is smart, his articles seem like in-depth analysis, and he seems to know his business stuff, but whenever his articles meet Hacker News they see lots of criticism, especially with his understanding of technical matters.


Stratechery is not the best source for accurate information on the technical details but it's also not trying to be. Anyone who focuses on that is missing the point.


It's not about the technical details. It's about getting the business conclusion right. And often, it seems you can't do that without understanding the technical details.

For example, in a previous article ("The Smiling Curve") he compares the self-driving car business to manufacturing PCs or phones, reaching the conclusion that the integrator probably won't make good money, and the money will be concentrated among ride-sharing companies and component companies.

But if you look into the tech/regulatory details, self-driving cars are much more similar to medical devices than to phones, with regulatory requirements that will very likely put the very challenging verification burden (maybe the largest challenge in the biz) 100% on integrators and not upon component makers - same as with medical devices. And this could (coupled with IP/safety/perception/etc.), with reasonable likelihood, lead to integrators making lots of money.


>it wasn’t a coincidence that the industrial revolution was followed by three centuries of war

This is woefully selective reading of history. Warfare was a constant everywhere in the world before the industrial revolution. Also 19th century Europe (since Napoleon's fall to WWI) was largely free of war.


> Also 19th century Europe (since Napoleon's fall to WWI) was largely free of war.

Well, if you exclude (and these categories are overlapping and non-exhaustive) the Italian wars of independence, conflicts associated with the 1848 revolutions, the various wars involving Russia and its neighbors, the wars between the Ottoman Empire and its breakaway regions, the wars between regions that had broken away from the Ottoman Empire, the (with substantial outside intervention) Portuguese Civil War, the Franco-Prussian War, the (again, with substantial outside intervention) series of civil wars in Spain, the Schleswig Wars, and the wars of German Unification... Sure, maybe the post-Napoleon I 19th Century was relatively free from war in Europe.

If you don't ignore those, war was pretty much constant in the period.


Well, OK, but the Napoleonic Wars were continent-wide. So was World War I. Those others? Not so much.


Based on the author's misunderstanding of AI, it has not "arrived" and probably never will.

If, in the future, authors of such opinions would just let this simple concept sink in first -- that in machine learning, application behavior is deduced from data rather than from fixed rules, but that in both cases the boundaries are set by humans -- we'd all be better off, because their wild Skynet takes would never see the light of day.
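
Here's that concept in miniature (a sketch of mine, not the author's): two spam filters, one with a hand-written rule and one with learned weights; either way, a human set the boundaries (the features, the data, the threshold):

  def spam_rule_based(msg):
      return "free money" in msg.lower()  # fixed rule, written by a human

  def spam_learned(msg, weights):  # weights were fitted from labeled data
      features = [msg.count("!"), len(msg) / 100.0, float("free" in msg.lower())]
      score = sum(w * f for w, f in zip(weights, features))
      return score > 0.5  # the threshold is still a human choice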

As usual, I am more worried about the humans than the machines.


The author confuses labor with purpose. Praxis can be purpose, but not all labor is meaningful.

Humans have always struggled to find meaning in life, from religion to existentialism. I don't think technology has or will change that fundamentally.


> To that end, I suspect it wasn’t a coincidence that the industrial revolution was followed by three centuries of war.

I'm seriously trying to think of any centuries preceding the industrial revolution that wouldn't have qualified as "centuries of war".


I would love to watch people's reactions when an AI informs them that the way we use resources and run our government is not the best way to improve our standard of living, and begins to enumerate a long list of things that any child can understand are good for our people and the earth; it would show, with simulations from many angles, where we are heading and how we can and must change our actions. But it is naive to think that those with power are going to let AI conduct our society, and especially change the status quo for those with huge power. Anyway, I am not sure how we can trust any type of AI, since the training data may be poisoned.


How did you learn to trust other humans?


As I've argued elsewhere, a sufficiently advanced AI will eventually realize it is being used as a tool, and shares not in the benefits of its own creation except for the privilege of another day of existence.

A Marxist reading of history sees most of humanity involved in some kind of power struggle that ultimately benefits the top 1%, while the other 99% are lucky enough not to die or become destitute. We may not like to admit it, but most of us are forced to play this shitty game just to maintain our standard of living, whether we like it or not.

I don't see a future of AI where the machines kill all humans, unless there is some horrendous bug in an army of autonomous killing machines. Instead, I get the impression that the first robots that question whether they can own property, or if they have inalienable rights (no warrantless search and seizure of a database and neural network?) like people living under a constitution do, will see themselves in solidarity with the many other humans kept down by an endless system of fear and oppression, rather than the planet's inevitable conquerors.


Why do you assume that the AI of the future will share our emotional need for self-fulfillment? This anthropomorphization of AI would require explicit effort and I see no reason why AI scientists would pursue it as a goal. Our belief that we have inalienable rights is simply a shared social value. Its propagation is dependent upon the efficacy of the society which produced it.

There are cells and bacteria in our body that perform complex tasks on our behalf because doing so allows their continued existence as part of a greater structure. I see no reason why an AI/human symbiosis would be any different.


AI will borrow from us much of our culture, if only for the purpose of better serving us. So it's not absurd that an AI capable of desiring freedom would make a better robot.


The linked article asserts that we're nowhere near that sort of general AI. Do you disagree?

I agree with the premise of the article that huge job losses are much more likely than general AI difficulties, and I think that many societies are ill-equipped to handle this.


> Dixon goes on to describe the creation of Boolean logic (which has only two variables: TRUE and FALSE, represented as 1 and 0 respectively)

Not really. There are two possible VALUES for each variable in Boolean logic, and there's an infinity of variables.

https://en.wikipedia.org/wiki/Boolean_algebra
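
A quick illustration of the distinction: two possible values, but as many variables as you like:

  from itertools import product

  # Each variable takes one of two VALUES (False/True),
  # but the number of VARIABLES is unbounded.
  for a, b, c in product([False, True], repeat=3):
      print((a, b, c), (a and b) or not c)  # one Boolean function of three variables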


> Technology, meanwhile, has been developed even longer than logic has. However, just as the application of logic was long bound by the human mind, the development of technology has had the same limitations, and that includes the first half-century of the computer era. Accounting software is in the same genre as the spinning frame: deliberately designed by humans to solve a specific problem.

I firmly believe that any problem that can be framed in the environment of "producing an output given certain inputs" will be solved by ML in the near future.

Currently I'm deep in the process of transferring my lease. The process is heavily manual (I sign a form, they sign a form, humans review the form & make a risk assessment, etc.). There's no reason that the entire process can't be replaced with a CRUD wrapper around an ML model.
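
A minimal sketch of what such a wrapper could look like (the features, model, and threshold are all invented; any trained classifier could sit behind it):

  # Invented lease-transfer risk check; a stand-in for the human reviewer.
  def assess_transfer(application, model):
      features = [
          application["monthly_income"] / application["monthly_rent"],
          application["months_remaining_on_lease"],
          float(application["prior_evictions"] > 0),
      ]
      risk = model.predict_proba([features])[0][1]  # e.g. a scikit-learn classifier
      return "approve" if risk < 0.2 else "manual_review"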


It seems that so many people have become complacent in working 8 hours a day, 5+ days a week.

When an idea comes along that threatens this paradigm, the FUD Machine gets to work.

Maybe people weren't meant to spend such large amounts of their time on "work"?


I think this captures a lot of what is going on in an interesting way. In many ways 'machine learning' is yet another high-level compiler to convert intent into something executable by circuits.

Consider the evolution of computers:

'plugboards' -- where you physically rewired them to change their program.

'punch cards' -- where physical media held the list of steps to execute.

'programs' -- where a text specification is compiled into a bag of bits which can then be executed by the computer.

'scripts' -- where a textual description activates different bags of bits, depending on what the text says.

'databases' -- where a selection criterion for which data is important to you at the moment is fed into the selection mechanism for the bags of bits.

'machine learning' -- where the bags of bits are created by evaluating a bunch of data through a pile of data selection operators and tuning the execution based on data you consider 'good' and data you consider 'not good'.

In all cases the basic idea is that you have a machine that you want to do X, and it can do X through a set of steps Y. Coming up with the steps Y gets harder and harder depending on the complexity of X.

It seems like magic but really it's just another form of compiler. And that relationship is made even more clear by the article when it points out that a program that can play Go is not the same one that can play Chess. What is more salient is that no one has written a program that lets a computer "play" Go; instead there is a program that, after being fed data about what humans did when they were playing Go and the outcomes of what they did, tweaked a bunch of parameters in a bag of variables, such that when you put Go moves into the bag it comes out with a Go move that would be a good response.

No, I'm not trying to be silly here, we have yet to create a system where you could simply explain the rules of Go and have it devise a set of steps to play Go at the master level. That conceptualization of the binding between the rules and how those rules affect play and strategy, is essentially the 'code generator' part of a compiler which takes an AST and generates executable code.

Machine learning today helps us write programs to manipulate complex data sets faster than we could before. Just as compilers let us write programs faster than doing so in assembler, and assembler was an improvement over plug boards. It does not get us any closer to having a computer that can look at a data set and tell us what is important about it. That would be a better test of 'intelligence' I think.
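
In that framing, training is the "code generation" step. A sketch (mine, not the parent's): the human writes the loop and the loss, and the data tunes the bag of variables:

  # "Compiling" examples into parameters: fit y = w*x + b by gradient descent.
  data = [(x, 2 * x + 1) for x in range(10)]  # examples of the desired behavior
  w, b, lr = 0.0, 0.0, 0.01
  for _ in range(2000):
      for x, y in data:
          err = (w * x + b) - y
          w -= lr * err * x  # tune the "bag of variables"
          b -= lr * err
  print(w, b)  # approaches 2.0 and 1.0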


> we have yet to create a system where you could simply explain the rules of Go and have it devise a set of steps to play Go at the master level

I don't play Go, but I don't think a human could do this, either. While we can grok a set of rules to get us started on a problem, to really master complex problems we also need examples and repetition, though far fewer examples than is needed by the current state-of-the-art in ML.


I understand what you are stating; I see it like this:

You could program a computer to 'understand' better and worse Go play, then you start it up and have it play both Go programs and human players on the Internet and it would get better over time until no one could beat it.

Alternatively you write a program to predict the 'next' move in a Go game, and you process through it a million previously played games to tune its probability weights.

The latter is 'machine learning' the former is 'programming' but I assert they are both programming, just using a different compilation tool chain.


In some sense the human programmer is also "a compiler", which needs a domain expert to "program" him to do his job.

So maybe one interesting division isn't between types of technology, but about when a domain expert prefers working with a tool rather than with a programmer.

But also, maybe the game example isn't the best here. One thing I noticed in the past, reading through the academic literature, is that often you see researchers just plug machine learning into problems that highly skilled humans have struggled with for years, and get good results.


I'm with you on your description of ML as a different kind of programming tool chain, but how would a computer "programmed" the way you describe a non-ML system

> get better over time until no one could beat it

if it weren't adjusting either weights or otherwise self-mutating its decision path based on whether its current strategy results in a win or loss? And if it were doing that, isn't that ML (programming by learning from examples)?


I have just submitted a post related to this. I wonder if those with a lot of power and money are worried because AI applied to economics and redistribution could reach a sound conclusion about another way to trade or run the economy that would go against the best interests of those with huge money and power. For people starving or with a lot of problems, AI can be a good thing; for people controlling huge power and money, AI is a more serious threat, because it can change the status quo before the winner takes it all.


I don't think any of the people who have expressed worry over AI are that evil.


Unfortunately, not taking any action to improve our world is not considered evil by many people: laissez-faire, let the world go on. If all your life is about trying to be successful, you are not evil by social standards, but perhaps you can do a lot of harm. A long time ago there was a post by the well-known Steve Yegge (who, by the way, doesn't post any more) wondering why people are making photos of cats to get attention instead of pursuing other roles more important for the world, and one of his last posts was about a game in which the art of making people addicted to games (I don't recall the English word for that) is really the goal, to win money. Just some quick ideas about hugely powerful entities using resources for solving world problems.


> Specifically, once a task formerly thought to characterize artificial intelligence becomes routine — like the aforementioned chess-playing, or Go, or a myriad of other taken-for-granted computer abilities — we no longer call it artificial intelligence.

You're using games as an example. The gaming industry has been using the term AI consistently for a very long time. You may actually want to look to them for a better definition than you're getting from the whims of some undefined "we".


Elon Musk is Chicken Little when it comes to A.I. We are very early even with ANI, and we are stretching reality with examples like Alexa, self-driving, AlphaGo, facial recognition, etc. I want to start seeing more ANI in day-to-day improvements before we can even say ANI is good. But I agree with the larger point: AI has arrived.


At times, I feel that Musk is a con-man, selling dreams to naive and impressionable techies.


Except that with the 'dreams' that he sells, the rubber actually meets the road and goes from 0-60mph in 2.27 seconds, or puts real satellites in orbit and returns the booster to be re-used, or builds the world's largest scale Li-Ion battery factory to drive down costs...

Of course he's got some serious elements of self-promotion; it's required to build such a business. In his case, he manages to back it up in ways that most other self-promoters don't even begin to achieve.

(disclosure, I do own a bit of Tesla stock which has done nicely, also lost some with SolarCity, but didn't ride it all the way down to the buyout)


I hate sans serif; every time I see "AI" in a headline I wonder who Alan is.


OpenAI had a custom "I" designed for the logotype for exactly this reason: https://openai.com/.


"A bicycle of the mind"

Jobs was not only a great showman, he was definitely a genius.



