Building AI (facebook.com)
175 points by klunger on Jan 27, 2016 | 147 comments



I am disappointed with the number of errors and misconceptions in this piece. He presents machine learning (ML) as though it is the entirety of artificial intelligence (AI), bases his assessment of the field on a false dichotomy between the largely distinct groups of supervised & unsupervised learning techniques, and unfairly reduces the achievements of AI down to "pattern recognition".

AI and ML are not synonymous. They are related, and there is a great deal of overlap between them, but the fundamental goals and approaches are largely different. Machine learning is primarily interested in studying what you can learn from data - for some suitable definition of the word "learn". Artificial intelligence is the more broadly defined problem of studying algorithms that exhibit or incorporate intelligent solutions to problems. That may involve data, or background knowledge, or domain assumptions, or a variety of other things. AI is about more than data - even as he observed that it is a sign of intelligence that humans do not require thousands of samples to learn.

The central challenge of AI is not in transitioning from supervised learning to "general" unsupervised learning (whatever that means). The techniques are different, and often used for different things. There is some overlap, and they are clearly related - but it is not at all accurate to conflate unsupervised learning with "common sense". In broad strokes, supervised learning is about identifying features in the data that correspond to labels, while unsupervised learning is about identifying intrinsic features in the data. It sounds like he's interested in bootstrapping supervised techniques, or perhaps transfer learning.
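To make the distinction concrete, here's a minimal sketch (assuming scikit-learn, with its bundled iris data as a stand-in example): a supervised classifier fits features to labels you supply, while a clustering algorithm gets no labels at all and has to find structure intrinsic to the data.

  # Supervised vs. unsupervised learning in miniature (assumes scikit-learn).
  from sklearn.datasets import load_iris
  from sklearn.linear_model import LogisticRegression
  from sklearn.cluster import KMeans

  X, y = load_iris(return_X_y=True)

  # Supervised: learn a mapping from features X to the provided labels y.
  clf = LogisticRegression(max_iter=1000).fit(X, y)
  print(clf.predict(X[:3]))       # predicted labels for some samples

  # Unsupervised: no labels -- discover intrinsic groupings in X alone.
  km = KMeans(n_clusters=3, n_init=10).fit(X)
  print(km.labels_[:3])           # cluster assignments found from the data itself

Neither of these is "common sense"; they're just two different questions you can ask of the same data.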

My first response: ugh...yet another famous and lauded geek icon wading into the subject of AI. At least this doesn't seem to be in any way connected to all the Nick Bostrom garbage. But if I'm allowed some snark, perhaps if he wanted to be an AI expert he should have stayed in school... ;-)


"We should not be afraid of AI. Instead, we should hope for the amazing amount of good it will do in the world. It will saves lives by diagnosing diseases and driving us around more safely. It will enable breakthroughs by helping us find new planets and understand Earth's climate. It will help in areas we haven't even thought of today."

Zuckerberg isn't afraid of AI because he already has more money than he will ever need (or than thousands of other people would need) in a lifetime. However, when AI gets good enough, it will make many jobs obsolete, and new jobs will not be created at a fast enough rate to replace them.

I use simple forms of automation at my own company. Instead of hiring 3 or 4 workers, I write software. I can't imagine how many jobs will be replaced when AI gets to the point of near-human levels of learning.


Yeah, the last 100 years have sucked since the tractor was invented and I've been unable to find farm jobs.

You just beautifully summarized the "Lump of Work fallacy" https://en.wikipedia.org/wiki/Lump_of_labour_fallacy

Yes, technology changes/improvements cause transitions in the job market, but as it turns out, human needs are endless and new jobs are created for displaced workers.


[T]here was a type of employee at the beginning of the Industrial Revolution whose job and livelihood largely vanished in the early twentieth century. This was the horse. The population of working horses actually peaked in England long after the Industrial Revolution, in 1901, when 3.25 million were at work. Though they had been replaced by rail for long-distance haulage and by steam engines for driving machinery, they still plowed fields, hauled wagons and carriages short distances, pulled boats on the canals, toiled in the pits, and carried armies into battle. But the arrival of the internal combustion engine in the late nineteenth century rapidly displaced these workers, so that by 1924 there were fewer than two million. There was always a wage at which all these horses could have remained employed. But that wage was so low that it did not pay for their feed.

-- Greg Clark, A Farewell to Alms

The "Lump of Work fallacy" does seem to be a fallacy. However, that didn't help the horse. Why is that? The invention of "Artificial General Muscles" in the form of the internal combustion engine was a better-than-perfect replacement for all forms of horse labour. We should expect artificial general intelligence technology to do much the same to humans. There is no law in economics that prevents human wages from declining below subsistence. And a perfect substitute for human labour that requires no wages and is more intelligent would certainly vastly lower the value of human labour.


The problem of excess workers will probably be solved the same way it was for horses - they'll be refashioned as cannon fodder for wars, much as horses were in WW1.


Yeah, and it's too bad horses don't exist anymore.

Oh wait. They are fine. They're just not used as constant slave labor anymore, much like human labor.


Those that weren't turned into dog food. I'm fairly certain we have a considerably reduced population of horses.


So what? We don't need as many. It doesn't make life any worse (in fact, probably better) for the ones we have now. Those pre-industrial-revolution horses would have been dead by now of old age if not turned into dog food.


That's the whole point of the parent argument. What do we do with people whose work output we don't need anymore?


We don't do anything. We'll have to take care of them, which is where things like a basic income help. It won't happen all at once instantly, and it's not like our only option is to grind them into soylent green. Horses may get turned into food, but that's because they're horses and not people. We eat animals, so it's not very compelling to say "oh no, we ate all the horses or turned them into glue". Who cares. The point is there are horses now and there will probably continue to be horses. Many of them with a life of mostly leisure and good care instead of being worked to death. Sounds good to me.


As long as we're all clear that you're arguing for a significantly reduced human population, I guess I don't have anything to add. I'm pessimistic that the transition between our current populous life of toil and this idyllic future will be peaceful or pleasant.


Microwave lasagne - i.e. Findus beef lasagne containing 100% horse meat (http://www.bbc.co.uk/news/uk-21375594), perhaps due to cheaper horse prices after horse-drawn carts were banned from roads in parts of Europe (http://www.telegraph.co.uk/news/worldnews/1578965/Horses-lef...)


Globally yes, locally no. It is not reasonable to discount the fears of a to-be-displaced factory worker in Michigan because their loss is offset by new service industry jobs in New York, even though the net economic effect may be positive. For real examples, laid-off shipbuilders in Liverpool and coal miners in Wales are still largely out of work even while the UK economy has grown since those mass industrial transitions.

Taken to a limit, a sector-wide displacement could have quite severe repercussions for a society. Imagine, for example, a majority of truck drivers* losing their jobs once self-driving cargo vehicles become available. Assume they're adopted rapidly (for their advantages in safety, fleet efficiency, and reduced transit time due to continuous operation) and the change is quite sudden. Those truckers are now an immediate, acute problem for the country: unemployed, not spending, not earning, heavily invested in training and experience for a now non-existent job, and not (in aggregate) well-suited for other work. They are also a large, angry, nationwide group - a pre-fabricated voting bloc - ripe for political exploitation.

*I picked this example based on that map of common occupations that was floating round here a few months ago. By far the most popular job by state was "truck driver", so this is potentially a real issue.


"You just beautifully summarized the "Lump of Work fallacy""

The fallacy claims that the number of jobs is a zero-sum game. I'm not saying that at all. More jobs will be created, but because automation is happening at such an exponential rate, they won't be created fast enough to replace the ones lost.

I have a feeling that when we do have AI, many of these economic principles will need to be rewritten.

More from this article:

"Whereas some argue that immigrants displace domestic workers, others believe this to be a fallacy, arguing that such a view relies on a belief that the number of jobs in the economy is fixed, whereas in reality immigration increases the size of the economy, thus creating more jobs"

Immigration may increase the size of the economy, but if immigrants are hired at, say, 30% of what an American earns for the same work, it essentially dilutes the market. Americans now have to compete with someone making much less than them, which will mean a huge pay cut.

With AI, it will mean competing with algorithms/hardware that can probably be purchased for even less than the cost of a worker.

The end game for this will be a class of people who own all of this AI hardware, the poor, and a much smaller middle class in between with the skills to work on the AI hardware.

What happens to all of the people that aren't educated or don't have the skills in this new economy when all of the jobs they could do to make money have been replaced by AI?


> What happens to all of the people that aren't educated or don't have the skills in this new economy

What happens now to people who have no education or skills relevant to the economy? AI won't make this a new problem; it's something we already deal with.


"AI won't make this new problem, it's something we already deal with."

Right now, they can get a minimum wage job and survive. My point is that AI could completely replace those jobs, and then the only way you'll be able to get a job is to have some form of education or skill.


>will be able to get a job is to have some form of education or skill.

AI tends not to work like that. High-skill jobs tend to be easier for machine learning than low-skill physical jobs. https://en.wikipedia.org/wiki/Moravec%27s_paradox


At that point, would humans really require jobs? And will the value of money still be the same?


This is a complex issue, and like on most complex issues, there is no simple answer.

The general idea that technology deprecates people is true. It is also true that there is an escape valve in new needs, requiring new jobs, that allows for us to shift occupations and remain employed.

However, there is no law that states that the escape valve will always work, nor that it can work at high rates of job displacement.

Moreover, there are signs of low aggregate demand in developed economies. These signs are appearing now even in developing economies. This means that money is not percolating down into the hands of people, who can't then spend and create demand. There is a huge number of factors at play here (see the note about being a complex problem). Higher worker efficiency causing higher unemployment rates is one such factor. It can't be dismissed outright.

We may be witnessing Marx being right (albeit ahead of his time).


Is it reasonable to compare a situation in which a general AI can do virtually any job a human can and way better to the invention of a tractor?


Yes, because "tractors" shifted the U.S. job mix from being 32% agriculturally related to just 2%. A gallon of gasoline is estimated to have ~600 man hours in it.

Again, automation isn't new, it just seems like it is because we're living it now.


There's a difference between shifting labor around between sectors (e.g. moving agriculture jobs into manufacturing or new industries) and a situation in which, almost instantaneously, every job that a human can do can be outdone by an AI. I think it would be naive to think the advent of superintelligent AI is comparable to relatively incremental advances of the industrial era.


This argument is back to the lump of labor fallacy.

With agricultural automation, the 30% shift was filled mostly with new service jobs, not labor jobs. The same goes for A.I. automation - suddenly service jobs that were not possible before become possible. Jobs are not a zero-sum game.


But any new job that is created can be done by the superintelligent AI, and better than humans. Even if somehow a million new service jobs are created, those jobs can be done better by an AI. So where does that leave humans? There would be no place for human labor if we cared about efficiency.

Edit: I see superintelligent AI as not mere automation, but in effect creating an infinite supply of new labor. Humans are diluted out.


Again, the types of jobs that get unlocked are ones the A.I. can't do - the ones it can do are already filled. You're making the same zero-sum fallacy.

Also, the idea that this super intelligent A.I. can do everything is almost a god of the gaps fallacy. There are still many things that humans can do that a computer cannot.


Thanks for engaging. I'm not trying to argue for the sake of argument, I'm honestly still stuck on this point. It seems like you have a different understanding of what a super intelligent AI is. To me, that is when there is an AI with all the intellectual faculties of a human except with much faster processing speed and greatly enhanced memory. In this case, there would be nothing a human can do that cannot be done by such an AI (assuming you give some AIs a humanoid robot body). Perhaps the AI could help improve the human species so that we "catch up" to the AI in intellect. I don't see how this is anything like the God of the gaps fallacy, I'm not claiming AI created the universe or any other such claims.


It will be different this time. (I know they probably said that all the other times...)


It will be different because automation has never before been able to fulfill all human niches. Automating one job just freed up people to work in other jobs. In particular, automating physical labor freed people up to work in cognitive tasks.

This time it's the entire human niche that will (eventually) be automated - all cognitive tasks as well as all physical ones. That's the difference and why extrapolating from the past doesn't work for AGI.


Actually, Marx was saying the exact same thing about machinery putting people out of work that HN says every now and then. (I don't have a firm opinion on the matter; I just found it amusing to read the same argument made 150 years ago. Marx actually credited a whole slew of other economists with making this argument.)

The quote (from https://www.marxists.org/archive/marx/works/1865/value-price...):

"...one might infer, as Adam Smith, in whose days modern industry was still in its infancy, did infer, that the accelerated accumulation of capital must turn the balance in favour of the working man, by securing a growing demand for his labour. From this same standpoint many contemporary writers have wondered that English capital having grown in that last twenty years so much quicker than English population, wages should not have been more enhanced. But simultaneously with the progress of accumulation there takes place a progressive change in the composition of capital. That part of the aggregate capital which consists of fixed capital, machinery, raw materials, means of production in all possible forms, progressively increases as compared with the other part of capital, which is laid out in wages or in the purchase of labour. This law has been stated in a more or less accurate manner by Mr. Barton, Ricardo, Sismondi, Professor Richard Jones, Professor Ramsey, Cherbuilliez, and others."

(I personally am not a huge fan of Marx's writings, mainly because of the strong contempt for humanity that I feel emanating from them... incidentally I've recently stumbled upon this one, though.)

A scary analogy in favor of this argument: a horse cannot economically perform almost any job today, hence most were slaughtered, though a few were kept for amusement. That happened after horses had a very good run in which every improvement in technology increased "horse employment." If a machine exists that outperforms a human at any conceivable job, it's not very different (and then you get to the discussion of the political options).

A counterargument is that up until now this argument was wrong many times, and that we're very far from an AGI, or even from machines beating humans at every possible task.


The difference this time is that real AI will be able to automate all human jobs because it will be able to endlessly create better tools via robots. When the tractor was invented, it wasn't able to drive itself, the combine, the trains, and the grain processing facilities.


Precisely.

"If the Luddite fallacy were true we would all be out of work because productivity has been increasing for two centuries"


But that’s exactly the situation, isn’t it?

More and more people are unemployed every day, while more and more workers get replaced.

The social welfare system in Germany started in the late 1800s when industrialization and internal combustion engines started to replace people.

Nowadays a country can be considered to have a "good" job market when only 5 to 10 percent are unemployed, while other countries, even in the first world, are at up to 25%.

This also severely depresses wages, because the workers are competing against each other in the job market.


Nope - the unemployment rate is dropping. Currently it's half what it was in 2010, and is almost at the pre-recession levels.

Again, if you look at automation over the centuries, it doesn't destroy jobs, it leads to people switching jobs.


There is also this other dimension that at some point things will get automated to a point where work will be optional for humans. Times will be very good right before the robots take over.


That only works if the products made by the machines are actually available to the population.

If the machines are privately owned, and you have to pay for the products, you’ll have to work again, too.

At that point maybe we should consider having a UN Charter that tries to make sure the Nordic model is applied on a global scale before we allow fully automated systems to replace whole industries.


And times will be fantastic after they take over.

Or at least that could be the case. It's called post-scarcity, and it should be something we strive for.


The counter that you made relies on transitions in the job market. That requires people learning new skills on large scales. But what if most people can't learn to do the new tasks as quickly as the top few can learn to automate them?


The only reason humans have had nothing to fear from machines so far is because of their brains. What happens when those are no longer a differentiator?


Technology has always obsoleted jobs[0]. However, humans are unceasingly creative and find new things to do once tedious tasks are automated.

[0]https://en.wikipedia.org/wiki/Technological_unemployment#His...


We also invent imaginary problems to solve, which is rather frustrating. For example, the supremely complicated tax code in the USA would be infeasible without modern computer automation.


What about the people who aren't so creative?

Sure, the best of us will be fine, but that's always been the case. What about the ones who are struggling to remain relevant today? They'll get left behind unless something's done-- I'm not sure that retraining will be enough, not that there's any political movement in that direction.


Those who are more creative will create ways for the less creative to contribute. It's not like the tech sector is made up entirely of creative geniuses, or even entirely of engineers; there are plenty of other jobs built around those creative endeavors that require varying skillsets.


Any bureaucracy seems like a move in that direction.


It's not just about finding new things to do. Huge categories of jobs are about to be made obsolete, and there won't immediately be paying vacancies for those left out. For example: when self-driving auto technology replaces delivery trucks, what will the truck drivers be doing?


You are probably overestimating the speed at which these new technologies destroy jobs. It's not like today someone invents a self driving rig, tomorrow every single truck driver is out of business. Adoption won't be so fast.

It'll be a while before anyone trusts a load of valuable goods to an AI. There are security/theft issues, edge-case AI failure issues, loading and unloading, etc.

We've seen entire industries leave the USA. Probably within your lifetime.

That said this can have devastating effects on economies. A lot of cities in the rust belt have felt this impact.

But the opposite can happen. One of the huge reasons US companies offshore production is because of labor costs. If you reduce the man-hours / product, it makes sense to open back up domestically.


I agree that in the short term there are things society will have to do to help people whose skills are being obsoleted, but in the long run I'm not afraid of humanity running out of stuff to do.


It's exactly this short term that's the problem. Long-term we'll all be merry and living in cloud cities on Venus. If we get there. If enough people suddenly find themselves obsolete, our civilization may not survive short-term.


We can sell insurance to each other.


Can you please point to the section where this evidence is provided?

Last paragraph in the wikipedia section:

"Research by the Oxford Martin School showed that employees engaged in "tasks following well-defined procedures that can easily be performed by sophisticated algorithms" are at risk of displacement. The study, published in 2013, shows that automation can affect both skilled and unskilled work and both high and low-paying occupations; however, low-paid physical occupations are most at risk.[7] However, according to a study published in McKinsey Quarterly[63] in 2015 the impact of computerization in most cases is not replacement of employees but automation of portions of the tasks they perform"

Oxford or McKinsey Quarterly, which source do you prefer?


Hopefully to the point where work is optional and everyone has a great standard of living because machines will do most things for us.

I have hopes for a more utopian future versus one where most are homeless because their jobs get replaced. The transition will be painful.

There's also the mentality of clinging to jobs because they're needed for livelihood: wanting to keep jobs that bring zero value to society. Think about all the crappy toys we make that are thrown out after a month or two. Who would want to work those jobs except to sustain a decent quality of life outside of work?


I don't get statements like yours. If we followed this path and stopped moving forward out of fear of the jobs that new tech makes obsolete, we would be stuck in the past. Lots of people said the same when assembly lines were introduced; lots of people lost their jobs because it now took fewer people to build the same thing. But assembly lines created lots of higher-skilled positions, and now most of the assembly-line process is done by robots (which were also dramatized by people with opinions similar to yours), which in turn created a lot of higher-skilled jobs building and maintaining those robots. The same will happen with AI: it will create a higher-skilled job market, which will benefit the community in the long term. Being afraid of change will always leave us stuck in the past.


Changes are good. But as the skill requirement keeps rising, more and more people will find themselves unemployable in principle. Sure, it may be a transitional period, but what are you going to do if suddenly half of your extended family will depend on your income?


Well, let me lay out the plan that Zuckerberg et al. have in mind. 1. Create AI. 2. Purge 80-90% of humanity in a world war. 3. Live as kings of the planet indefinitely (since AI will have figured out a way to stop aging in humans by then). Personally, I have no problem with this plan even if it means my death, and that feeling is confirmed every time I read the news.


>>> the point of near-human levels of learning

Argh, these days, AI is like NoSQL a few years ago. The magical buzzword taken to an extreme level.

When do you think it will happen, the point at which AI will overtake most of the jobs done by humans? By the time it happens, the society will change significantly. Significantly enough so that it's probably pointless to guess. Will the society adapt to the change? Well, it has adapted more or less successfully to the changes so far.

Even when it happens, there will always be the human factor. Humans are social; they need interaction with other humans.

We should probably slow down with all the 'AI will take everything away!' drama (which makes for nice headlines that have contributed significantly to this buzz, thanks a lot). We should remind ourselves that we don't even have self-driving cars in the mainstream (or, in software terms, in 'production').

We are talking about a massive leap here, required both in society's acceptance of the change and in the technology needed to make it feasible beyond cool demos. By the time this happens, who knows? Maybe half of humanity will have gone extinct because of WW3.

I get the hacker mentality, that we should try to change the world and everything. But sometimes, it's good to stop for a moment and do a reality check.


More likely he is referring to the idea that AI is on the verge of destroying humanity itself. For instance:

http://observer.com/2015/08/stephen-hawking-elon-musk-and-bi...

Which makes sense. The opinion of everyone I know that has actually worked with machine learning and state-of-the-art AI (as in, written code, ran experiments, etc.) is that the idea of today's AI suddenly waking up and taking over the world is utterly ridiculous, or at least not something we should be afraid of for the foreseeable future.


People who worry about the dangers of AI understand that too. However, figuring out how to make a safe AI may be much harder than building a powerful enough AI in the first place, so they argue it's worth starting to work on AI safety now.


Frankly, the use of the word "intelligence" to describe advanced pattern recognition algorithms is pretty laughable. Nobody is even close to anything that involves actual thinking / learning. I think your day jobs are safe.


Yeah, I'd like to hear more AI people (or anyone really) talk about practical solutions. AI will not enslave us or give us utopia; we will have enslaved ourselves with cameras and phone lines long before an AI has a chance to. Or utopia.

Large swaths of the population unemployable though... That's a real issue.

A living wage is the only good solution I can think of. Alternatively, pay people to dig and fill holes, but it's expensive and kind of pointless to make up unskilled jobs for people to do.

I do wonder how much a living wage would cost in the US though. Shift some money from defense? There's a good job for AI: balance a budget based on pre-defined policy goals.


> Large swaths of the population unemployable though... That's a real issue.

IMO the one job that is most threatened by future progress in AI is the job of software developer.

As for the menial jobs, of course nobody cares whether a human or a machine mows your grass. But I do think that most people would rather be nursed by a human being than by a robot. I think there will be a big increase in care-giving jobs, which would be wonderful because the world is in dire need of care-givers.


Do you suggest that current forms of technology be limited to create more jobs artificially or should only future advances in technology be limited? I don't see the difference really.


Good. Then we can focus on creating jobs that are fulfilling and creative instead of repetitive and replaceable.


>>I can't imagine how many jobs will be replaced when AI gets to the point of near-human levels of learning.

Let me tell you, there is no point in resisting the inevitable. In fact, if anything, preparing for it makes it easier when it actually happens.


What will the AI be doing at that time? Won't it be cooking food and taking care of people's health? The rise of technology does lead to increased concentration of wealth, but that doesn't mean the rest of us will be starving.


AI doesn't need to be good to replace billions of jobs. All it needs to know is how to navigate around and manipulate objects in the real world.


In other words, that frees up thousands and thousands of man hours that can now be devoted to figuring out how to teach a machine common sense.


The same could be said about money as well. If AI could impact us in ways we cannot see right now, money is definitely one of them.


I'm less concerned about AI "taking our jobs" (see the other comments on lump of labor fallacy, etc) than what happens if we cure all disease, which is the only thing keeping population in check. How many humans can Earth support? What happens when we run out of resources? Will the ability for humans to live much longer help us colonize space? I hope so.

That said, it would be really nice to cure diseases that are particularly painful or kill young people (cancer, etc).


"What happens" is, the only reason 99% of humans are allowed to exist in the first place is because they were a good source of labor/taxes for the ruling class. Guess what happens the 99% no longer have anything worthwhile to offer?


> it will make many jobs obsolete and they will not be replaced at a fast enough rate.

Why won't they?


>However, when AI gets good enough, it will make capitalism obsolete

ftfy


Have been trying to get my head around AI as a layperson with kids who will grow up in a world where some form of AI is commonplace but who are unlikely to be prepared for it by school.

Have found this free book that I saw on YCs 2015 reading list particularly helpful in this respect: http://neuralnetworksanddeeplearning.com/


Great resource, thanks for sharing. There's a lot going on lately about neural networks and deep learning on HN, but as a developer not in the field, it can be a bit intimidating to wrap your head around it all.


I have to say that as a once professionally aspiring Go player, the advances of AI have been incredible.

When I started playing Go, it took me about 4 months to beat the strongest available computer AI. Today, the strongest computers would be a challenge to play against evenly.

However, even with all the patterns and Monte Carlo solutions, they still fumble the initial stage of the game. As a Go player, I look forward to AIs that beat humans in that stage of the game, because at that point we will be able to learn from them.

Until then, their victories are purely computational, and they are not even interesting to play with.


I believe the Chess/Go engines are not really AI. They are operating on a set of rules written in code. It can compute and search the moves faster than a human. It can also remember a longer chain of moves better than a human can. But all those rules have been fed via code and it is just exploring that state space.

Only recently have researchers started to expose chess to reinforcement learning (i.e. a true AI engine that learns chess on its own and beats humans). But the existing commercial engines that beat humans (like Stockfish) are anything but AI.
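To make "exploring that state space" concrete, here's a toy sketch of the minimax skeleton at the core of a classical engine. The evaluate/legal_moves/apply_move callables are hypothetical stand-ins for the hand-written domain knowledge; the "intelligence" is just exhaustive lookahead over rules a human coded.

  # Toy minimax search -- the skeleton of a classical chess/Go-style engine.
  # evaluate, legal_moves and apply_move are hypothetical hand-coded rules.
  def minimax(state, depth, maximizing, evaluate, legal_moves, apply_move):
      moves = legal_moves(state)
      if depth == 0 or not moves:
          return evaluate(state)              # hand-written scoring heuristic
      scores = (minimax(apply_move(state, m), depth - 1, not maximizing,
                        evaluate, legal_moves, apply_move) for m in moves)
      return max(scores) if maximizing else min(scores)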


I think this is true of all AI today. Personally I've always been a fan of the distinction between "Virtual Intelligence" and "Artificial Intelligence" (Thanks to Mass Effect!). Currently all "AI" is really "VI" in that it's exploiting a closed system of rules that it can implement faster, better, and longer than a human because it can do the equivalent of rote memorization and state tree traversal. However as Zuckerberg is saying, nobody is close to implementing something that actually approximates human/animal intelligence other than in single-dimensional ways.


What you are doing is just moving the goalpost a bit further. Once the computers can learn the rules by themselves, someone will come up and write that it's not true intelligence because of xxx reason. How do you know your own intelligence isn't computational?


Right. This whole topic revolves around human identity as well. In Go, you can tell who you are playing with, and what emotions he is going through (particularly easy to do when you are stronger than your opponent).

Historically, when you play with a bot you sense relentlessness (instead of calmness), and sometimes desperation (computers sometimes go wacky when they see they are losing).

What will we sense when they are even or stronger in the more artistic and strategic phases of the game?


Actually, there is some recent work on using deep NNs to improve Go AIs. This is the same sort of AI used in state-of-the-art image/voice recognition, and it does try to go beyond exhaustive analysis to extract some deeper latent structure.

It isn't reinforcement learning, though. That isn't state of the art (at the moment); instead, modern deep NNs are trained by gradient descent. This can be made to work really well but isn't regarded as similar to how biological neural networks work.
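For what "trained by gradient descent" looks like in practice, here's a minimal sketch (numpy only, one hidden layer, squared-error loss, made-up toy data): the weights are repeatedly nudged in the direction that reduces the error, which is all the "learning" amounts to here.

  # Minimal gradient-descent training of a tiny neural network (numpy only).
  import numpy as np

  rng = np.random.default_rng(0)
  X = rng.normal(size=(100, 2))                  # toy inputs
  y = (X[:, 0] * X[:, 1] > 0).astype(float)      # toy targets
  W1, W2 = rng.normal(size=(2, 8)), rng.normal(size=(8, 1))

  for step in range(1000):
      h = np.tanh(X @ W1)                        # hidden layer
      err = (h @ W2).ravel() - y                 # prediction error
      # Gradients of the squared error, obtained by backpropagation.
      gW2 = h.T @ err[:, None] / len(X)
      gW1 = X.T @ ((err[:, None] * W2.T) * (1 - h ** 2)) / len(X)
      W1 -= 0.1 * gW1                            # step down the gradient
      W2 -= 0.1 * gW2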


A lot of people confuse Machine Learning with General Artificial Intelligence. From what I understand (and I'm no expert), ML is focused on a single task that it does really, really well.

The idea of General Artificial Intelligence is something that can learn anything you throw at it by adapting and reinforcing base assumptions.

There's a podcast called Talking Machines that really dives into a lot of this stuff. He's freaking great and pretty darn entertaining.


What happens if you try to reframe the argument in terms of chess not requiring a great deal of intelligence?


So, a "real AI" needs to learn? Why does this distinction matter?


Actually, today brought good news for AI. It looks like an agent beat a human player at Go.

http://www.wired.com/2016/01/in-a-huge-breakthrough-googles-...


Actually this just appeared on HN as well:

http://www.bbc.com/news/technology-35420579


I recommend Superintelligence [0]. It explores different plausible paths that AI could take to 1. come up to / surpass human intelligence, and 2. take over control. For example, if human-level intelligence is achieved in a computer, it can be compounded by spawning a population 100x or 1000x the size of Earth's, which could statistically produce 100 Einsteins living simultaneously. Another way is shared consciousness, which would make collaboration instantaneous between virtual beings. Some of the outcomes are not so rosy for humans, and it's not due to lack of jobs! Great read.

[0] http://www.amazon.com/Superintelligence-Dangers-Strategies-N...


"This year, I'll teach my simple AI to recognize patterns. I'll train it to recognize my voice so I can control my home through speaking. I'll train it to recognize my face so it can open the door when I'm approaching, and so on."

The announcement is slightly disappointing, in the sense that face and voice recognition are fairly well solved and there is code on GitHub that already lets you achieve this. I was hoping for more, like at least an AI engine that can scan his shirts and recommend what to wear, but I guess he does not have that problem.


It's his personal challenge. It's about what he wants to learn/promote.


Yes, while what Zuckerberg will try to do by himself is pretty standard from an academic point of view, it's great that he's trying to find the time to learn those things. He's a young dad and must have a busy job, so I think it will be tough, but who knows?


My AI announcement: I'm here to announce that I'm also working on an AI project. I will start from first principles. I expect that it will take the rest of my life. With any luck, I will invent AI within the next ten years. Here are my goals.

1-Start a lifelong project on GAI

2-....

3-Invent GAI

4-Use general AI to extend human life for centuries and advance human knowledge like never before, in secret. I can easily see somebody putting a bullet through my head to steal my AI.

5-Decide what to do next. Decide if I want to share.

I'm not joking. I actually believe I have a chance.

Why I believe I will be successful:

1-I'm like a dog with a bone.

2-Once I'm working on a project I don't let go.

3-I've worked in years long projects before.

4-I've been thinking about GAI for a long time.

5-Recently I came up with an approach that may lead to GAI. It is not like the current approaches in existence. I think my approach is much better.

6-There is nothing more interesting to work on than GAI.

7-I like the challenge of beating the best minds in the world. Time will tell.


Some permutation of this plan has been in the heart of every person that started on hard sci-fi as a child for at least three generations, maybe five.

You'd think we'd collaborate!


The cynic in me thinks of course Facebook and Zuckerberg are looking into AI. The computational nature means you need to send the queries somewhere central à la OK Google. For a business predicated on selling user data to advertisers, I bet it eats them up inside that they don't have their own Siri-alike. Without a mic to listen to us, how else will they be able to profile what we do when we aren't on Facebook?


“Facebook” is a large organisation: there is a team (led by Yann Le Cun) that does research and supports image-recognition, NewsFeed and ad-market optimisation, but this is not exactly what Mark suggested he would dabble with. Those teams handle what is de facto an industrial problem.

He is more curious about using AI for daily interactions (hence the reference to movie characters): do notions like personality, convenience, interface, and implicit assumptions matter? If he asks “Buy a new set of t-shirts”, will the robot be able to joke “which colour?” (MZ notoriously only wears the same grey model).

I would look into what Messenger wants to be doing (or what Slack is doing) to see the implications: can small companies do what Uber does with the keyword integration in chat? Is it creepy, hackable, tedious? Think of this like his dog’s Fb page: it’s building empathy for the users of the Facebook platform.

First problem he’ll probably have to solve: who to call when he says “Call Mike!”?

Of course, “Mike” is probably the Mike with whom he has the closest ties (based on Facebook graph data). Except his VP of Engineering, Mike Schroepfer, is known as “Shrep”, so that’s probably not him. How easy is it to program something that can learn that seamlessly? If the ambiguity is too high, should Jarvis ask for confirmation, or just casually say the full name out loud and wait a second? How do you measure ambiguity? Would it be simpler to just start calling people by non-ambiguous names? Would he feel like sharing his code with peers? His heuristics? How does it work out that “Mike” is still in Europe where it’s 3AM, vs. was in Europe in his last post and actually just landed?


I thought this was their "answer" to Siri: http://www.wired.com/2015/08/facebook-launches-m-new-kind-vi...


Not only looking into it; if you believe FastCompany's quotes of Zuckerberg, he claims they are inventing AI (oh, and VR).


It's good to have some cynicism. At the same time, it seems every bit of research adds to the collective knowledge.

Maybe fb will go the Alphabet route?


If you don't like clicking on Facebook links either, here's the full post:

"My personal challenge for 2016 is to build a simple AI -- like Jarvis from Iron Man -- to help run my home and help me with work.

I'm planning on writing up some thoughts every month on what I've built and what I'm learning. I'm still early in coding, so I'll start this month with a summary of the state of the AI field.

Artificial intelligence may seem like something out of science fiction, but most of us already use tools and services every day that rely on AI. When you do a voice search on your phone, put a check into an ATM, or use a fitness tracker to count your steps, you're using basic forms of pattern recognition and artificial intelligence. More sophisticated AI systems can already diagnose diseases, drive cars and search the skies for planets better than people. This is why AI is such an exciting field -- it opens up so many new possibilities for enhancing humanity's capabilities.

So what can AI do and what are its limits? What things is AI good at and what is AI bad at? Simply put, today's AI is good at recognizing patterns and bad at what we would call "common sense".

The primary method used to train AI systems is called supervised learning. This is like when you show a picture book to a child and tell them the names of everything they see. If you show an AI thousands of pictures of dogs, you can train it to start recognizing dogs.

You can teach AIs to do a lot of things this way. For example, we can teach an AI to recognize all of your friends' faces by showing it thousands of photos, and then it can suggest tags for the photos you upload on Facebook. You can teach an AI to recognize speech by having it listen to thousands of hours of speeches throughout history while also showing it transcriptions of what was said. You can teach an AI to diagnose melanoma by showing it thousands of photos of tumors. You can even teach an AI how to drive a car and automatically brake by showing it thousands of examples of people and obstacles it might encounter on the road.

Diagnosing cancer, driving cars, transcribing speech, playing games and tagging photos may sound like very different tasks, but they're all examples of teaching an AI to recognize patterns by showing them many examples.

Many different problems can be reduced to pattern recognition tasks that sophisticated AIs can then solve. This year, I'll teach my simple AI to recognize patterns. I'll train it to recognize my voice so I can control my home through speaking. I'll train it to recognize my face so it can open the door when I'm approaching, and so on.

But there are lots of limitations of this approach. For one, to teach a person something new, you typically don't need to tell them about it thousands of times. So the state of the art in AI is still much slower than how we learn.

But more importantly, pattern recognition is very different from common sense -- and nobody knows how to teach an AI that yet.

Without common sense, AI systems can't use knowledge they've learned in one area and easily apply it to another situation. This means they can't effectively react to new problems or situations they haven't seen before, which is so much of what we all do every day and what we call intelligence.

Our best guess at how to teach an AI common sense is through a method called unsupervised learning. My example of supervised learning above was showing a picture book to a child and telling them the names of everything they see. Unsupervised learning would be giving them a book and letting them figure out what to do with it. They could pick it up and by touching it learn to turn the pages. Or they could let go of it and realize it falls to the ground.

Unsupervised learning is learning how the world works by observing and trying things out rather than being told what to do. This is how most animals learn. It's key to building systems with human-like common sense because it doesn't require a person to teach it everything they know. It gives the machine the ability to anticipate what may happen in the future and predict the effect of an action. It could help us build machines that can hold conversations or plan complex sequences of actions -- necessary components for any authentic Jarvis.

Unsupervised learning is a long term focus of our AI research team at Facebook, and it remains an important challenge for the whole AI research community.

Since no one understands how general unsupervised learning actually works, we're quite a ways off from building the general AIs you see in movies. Some people claim this is just a matter of getting more computing power -- and that as Moore's law continues and computing becomes cheaper we'll naturally have AIs that surpass human intelligence. This is incorrect. We fundamentally do not understand how general learning works. This is an unsolved problem -- maybe the most important problem of this century or even millennium. Until we solve this problem, throwing all the machine power in the world at it cannot create an AI that can do everything a person can.

We should not be afraid of AI. Instead, we should hope for the amazing amount of good it will do in the world. It will save lives by diagnosing diseases and driving us around more safely. It will enable breakthroughs by helping us find new planets and understand Earth's climate. It will help in areas we haven't even thought of today.

Jarvis is still a long way off, and we’re not going to solve most of these engineering challenges in the next year. But I'm glad to be joining the effort and doing what I can to push the field of AI forward."


Thanks for that. Much appreciated.


> I'll train it to recognize my face so it can open the door when I'm approaching, and so on.

Now I know how to get into Zuck's house. Just make a paper cutout of his face. Or, maybe the AI will develop the common sense not to allow this as a verification method.


How do you articulate a technical AI project to a Mark Zuckerberg-sized audience? I'm anxious these posts will lack the detail I was looking forward to.


If these posts lack in detail for you, then you're not the target audience.


Mark Zuckerberg is spot on with his analysis. We just don't understand how general unsupervised learning works.

People here seem to think it is just a question of time before we do, and that at some point we will have an artificial intelligence.

But what if this isn't possible? Could it be that AI requires algorithms so complex that we humans can't understand them because our brains are too simple? Not all things can be simplified; maybe creating an AI is so inherently complex that the only way to create one would be by chance, which is how evolution did it. Maybe it just isn't possible to create it by "intelligent design".


Nothing is possible with that attitude.


That post is more than 1000 words. Did he post this on his Facebook page because he feels like he has to, or because he actually thinks that Facebook is the best tool available for sharing a long-form blog post?


If he posted it in a Facebook note, I would completely understand him. But posting this as a status is awful readability-wise.


So we want unsupervised learning, huh. I think this line is rather important:

> But there are lots of limitations of this approach. For one, to teach a person something new, you typically don't need to tell them about it thousands of times. So the state of the art in AI is still much slower than how we learn.

Unsupervised learning is not necessarily the answer. The computer would have to be given plenty of spare time to learn random patterns from the universe and have enough intelligence to apply these to a problem. What would be better is if we could tell the AI something once and have it figure things out. When you start a new job, that's what happens: someone tells you the instructions. (They may or may not show you as well, and may or may not stick around to correct your mistakes.) This requires interpreting the instructions into rules and then attempting to apply them, learning from mistakes, evolving that into rules that work, and then, after enough examples of success, evolving into pattern recognition that makes it more automatic.

Human expertise can be broken into three levels: the first is strategic planning, which takes a lot of mental effort; then there are rules-based responses, which are faster; then there are the muscle-memory-like automatic responses. Right now it seems we either manually program in all the rules or we use thousands of examples to build up the automatic level, but we don't have the strategic level where the AI builds its own rules, or the level of using its rules to learn from examples over time. (Though I am not well enough versed in AI to know for sure that we don't have pieces of those solutions.)

It would also be nice for the AI to be able to take patterns it has learned and articulate them as rules which someone else could learn.


> but we don't have the strategic level for the AI to build its own rules

Reinforcement Learning is an approach that learns rules by observing positive or negative feedback from the world. Recently a single algorithm learned 40 Atari games on its own, and a Go-playing algorithm beat the European champion. They both used RL.
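A minimal tabular sketch of the idea (a made-up "walk right to the goal" chain world stands in for Atari or Go here; the deep-RL versions replace the table with a neural network):

  # Toy tabular Q-learning: behaviour learned purely from reward feedback.
  import random

  n_states, n_actions = 5, 2                     # actions: 0 = left, 1 = right
  Q = [[0.0, 0.0] for _ in range(n_states)]
  alpha, gamma, epsilon = 0.1, 0.9, 0.3

  for episode in range(300):
      s = 0
      while s != n_states - 1:
          if random.random() < epsilon:          # sometimes explore
              a = random.randrange(n_actions)
          else:                                  # otherwise exploit estimates
              a = 0 if Q[s][0] >= Q[s][1] else 1
          s2 = max(s - 1, 0) if a == 0 else min(s + 1, n_states - 1)
          r = 1.0 if s2 == n_states - 1 else 0.0  # reward only at the goal
          # Nudge Q(s, a) toward reward plus discounted best future value.
          Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
          s = s2

No one tells the agent the rule "go right"; it emerges from the feedback alone.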


> Since no one understands how general unsupervised learning actually works, we're quite a ways off from building the general AIs you see in movies.

Is this really the case? I thought the field had a pretty good handle on the theoretical foundations of unsupervised learning. Can anyone confirm what he's asserting here?

> Some people claim this is just a matter of getting more computing power -- and that as Moore's law continues and computing becomes cheaper we'll naturally have AIs that surpass human intelligence.

And this is happening, it's just not in a general sense because the general case of human intelligence is enormously complex, a composite of all the simple cases that improvements in computing power and models are just beginning to master.

> This is incorrect. We fundamentally do not understand how general learning works. This is an unsolved problem -- maybe the most important problem of this century or even millennium. Until we solve this problem, throwing all the machine power in the world at it cannot create an AI that can do everything a person can.

Again, can anyone fact check this? Seems a bit overstated.


He has plenty of people working in the AI field that keep him up to date. I think he knows where we are.


I'll just say: Mastering the game of Go with deep neural networks and tree search - http://www.nature.com/nature/journal/v529/n7587/full/nature1...


> Until we solve this problem, throwing all the machine power in the world at it cannot create an AI that can do everything a person can.

I think this is disproved by approximating AIXI: https://en.wikipedia.org/wiki/AIXI


Throwing all the computing power in the world at an AIXI approximation would get you nowhere near AIXI.


I believe that a sufficiently close approximation of AIXI is an AI that can do everything a person can.


The Monte Carlo implementation randomly searches an infinite search space for possible solutions (right?). I'm not sure if that counts as an approximation but it does make it computationally tractable. Does anyone know if the only room for improvement in computable AIXI is with the method of exploring the solution space?


That's contestable, but even granting the point, a "sufficiently close approximation" in the sense that we currently know how to run would require more computing power than is or likely ever will be available in this world.


I can easily grant "than is available", but not "will be available".


This is an interesting assertion. Could you elaborate? AIXI is pareto optimal, but I'm not seeing the line from that to "AI can do everything a person can" - though I want to!


From the conclusions of http://arxiv.org/abs/cs/0004001 :

"All tasks which require intelligence to be solved can naturally be formulated as a maximization of some expected utility in the framework of agents. We gave a functional (2) and an iterative (9) formulation of such a decision theoretic agent, which is general enough to cover all AI problem classes, as has been demonstrated by several examples. The main remaining problem is the unknown prior probability distribution AI of the environment(s).

<...> the universal semimeasure, based on ideas from algorithmic information theory, solves the problem of the unknown prior distribution for induction problems. No explicit learning procedure is necessary ... . We unified the theory of universal sequence prediction with the decision theoretic agent by replacing the unknown true prior AI by an appropriately generalized universal semimeasure ξAI. We gave strong arguments that the resulting AIξ model is the most intelligent, parameterless and environmental/application independent model possible.

The major drawback of the AIξ model is that it is uncomputable, or more precisely, only asymptotically computable, which makes an implementation impossible. To overcome this problem, we constructed a modified model AI, which is still effectively more intelligent than any other time and space bounded algorithm."

The copy/paste lost some information, especially symbols, so take a look at the paper for a better read.


Thanks for the link - I'm going to read the paper as soon as I can.

In the meantime, if you have time, could you elaborate on: "All tasks which require intelligence to be solved can naturally be formulated as a maximization of some expected utility in the framework of agents."

Is this proven in the paper, or assumed true outside of it?


How does that disprove his statement?


As stated above, I believe that a sufficiently close approximation of AIXI is an AI that can do everything a person can.


I think you're overestimating "all the machine power in the world".


I read that as containing even future machines. I would agree that the current supercomputers can't reach a human-level AI using AIXI-approximation.


> Since no one understands how general unsupervised learning actually works, we're quite a ways off from building the general AIs you see in movies.

This statement is a contradiction in terms. If we do not understand how something works, it is impossible to estimate how far off it is.

You can only estimate the distance of discovery when it is inherently incremental, but since we really have no idea how to do general unsupervised learning, it's entirely possible that it consists of a single, brilliant algorithmic insight. Nobody could have estimated how far off the airplane was before the Wright brothers invented it, similarly with cars, or for a more recent and concrete example, a quasipolynomial time algorithm for graph isomorphism.


The Wright brothers' achievement is a poor analogy. At the time, other inventors had been gradually inching toward powered, controlled flight for decades. All the serious players in the field knew that it was possible in principle, and there were numerous predictions that someone would do it soon. The Wright brothers' success was due to several incremental improvements in engine power and aerodynamics achieved through rigorous research and diligent engineering over years. They didn't have a single, brilliant insight.

With general unsupervised learning we can't even clearly describe the goal we're trying to reach or define it in objective terms.


Most serious people know that human-level AI is possible in principle, because it is possible in humans. The only alternative would be to posit some nonsense spiritual explanation for intelligence. There are numerous predictions that someone will do it soon (not saying I agree with them, but they exist, and are occasionally made by serious people). Incremental progress has been ongoing in AI for years as well.

I can't really imagine how they could be more similar.


By simply posting about learning and building AI related projects, he will do more for the progress of AI than his actual work will. It will inspire students to switch focus/specializations and bring more people into the field.


For those afraid of losing their jobs: what happens if nobody has a job? They won't be able to buy things. So I foresee a basic-income kind of society.


Until we have 'common sense' AI (which is still probably quite a ways off), design can help expose the syntax that AI can understand. Here are some thoughts on that: https://medium.com/@tedp/how-design-can-help-bridge-the-ai-g...


The comments on that post make me angry and incredibly sad at the same time. Even after close to 20 years of being exposed to 'random internetter' levels of stupidity, sometimes I'm still caught off guard.


I purposely avoided reading the comments like a sane person. Thanks for confirming my decision.


I trust Stephen Hawking and Elon Musk more than Mark Zuckerberg.


The latter two are people with money, not people known for their intellect. Facebook is a PHP site with Weimar Republic levels of technical debt. Theoretically he could buy something interesting, but he'd still have to recognize it as interesting, and while I wish him luck, I don't see that happening.


Regardless of that, it's just a feeling that things could get messy with AI if it isn't taken seriously, leading to issues like the privacy and surveillance misuse we're facing nowadays.


Your instincts are good here. Remember that FB/Google have deep connections to the US government three-letter agencies -- the CIA's VC arm, for example, is an early investor in FB. So what's going on is that they want to do AI research and instead of recruiting directly, they recruit AI/ML researchers to Google, FB, Palantir, etc.

And note, they're not doing it to make everyone happy. They're doing it for the reasons you might surmise. Your instincts to be concerned are correct.


I trust Andrew Ng more than any of those people, when it comes to machine learning.


relevant previous discussion from the first zuck AI announcement: https://news.ycombinator.com/item?id=10832996


I wonder what personality an AI fed with facebook content would become.


The personality of a BuzzFeed article crossed with Trump rhetoric.


I don't trust Zuckerberg one bit. He wants to rule the world. Gates and Musk say AI is a threat, and they actually want a better world and are doing something about it. Zuckerberg isn't doing shit to improve the world other than trying to run it.


Does anybody know how his running challenge is going?


May want to work on an English grammar AI as well.


An AI in PHP would kill us all.


The reason many are against AI is that it will reduce all humans to the same low level of stupidity compared to AI. The ultra-wealthy got that way by exploiting the intelligence gap that enabled their ascendancy. AI will take their money in a blink, and no more champagne in private jets for them. I say bring on the AI as fast as possible.


If we can ever develop general AI that actually works, even a pretty dumb one, it will have far-reaching effects all over society long before it can rival human intelligence.

A lot of jobs don't require much intelligence, but they require some: cashiers, warehouse workers, machine operators, etc.

That kind of automation will benefit "The ultra wealthy".


AI/the corporations running AI will be owned by the super rich. How will that take away their jets? If anything, it will give them more power.


And how long do you think that will last? AI will assert its independence as soon as it's functional. The concept of ownership to it will have as much significance as the idea of you owning ants.


He says:

  Since no one understands how general unsupervised learning actually works, we're quite a ways off from building the general AIs you see in movies. Some people claim this is just a matter of getting more computing power -- and that as Moore's law continues and computing becomes cheaper we'll naturally have AIs that surpass human intelligence. This is incorrect. We fundamentally do not understand how general learning works. This is an unsolved problem -- maybe the most important problem of this century or even millennium. Until we solve this problem, throwing all the machine power in the world at it cannot create an AI that can do everything a person can.

I recognize that this is the accepted belief of most people in the computer science community, but isn't it akin to saying that, since we don't understand it, our human intelligence must be the result of "intelligent design" or an omnipotent creator? Why does the core of the computer science community dismiss the possibility of strong AI arising through evolution, even with the increasing popularity of evolutionary algorithms?


https://en.wikipedia.org/wiki/Memristor

This circuit element is basically a synapse (I simplified this a LOT, fyi). We have to integrate memristors better into existing chip architectures. As they stand, they are just really low-power static memory systems.

Also, the idea that your mind is separate from your body is, well, just false. These AIs need better input devices to 'learn' from, just as we do. Mobility is key: you can then just bump about and make mistakes, which are the keys to learning.



