Thoughts on OpenAI, reinforcement learning, and killer robots (fast.ai)
174 points by math_rachel on July 28, 2017 | 195 comments

I've worked in a lot of AI-related projects and was around when the AI winter arrived.

These various techniques that currently work by training, either supervised or self-training, can have fatal flaws.

Take, for example, some high-tech camera technology. Use it on a drone to take pictures of warships from thousands of angles. You take pictures of U.S. warships, Russian warships, and Chinese warships. You achieve 100% accuracy in identifying each ship using some neural-net technology.

One day this net decides that an approaching warship is Chinese and sinks it. But it turns out to be a U.S. warship. Clearly a mistake was made. Deep investigation reveals that what the Neural Network "learned" happened to be related to the area of the ocean, based on sunlight details, rather than the shape of the warship or other features. Since the Chinese ships were photographed in Chinese waters, and the U.S. warship that was sunk was IN those waters at the moment, the Neural Net worked perfectly.

Recognition and action are only part of the intelligence problem. Analysis is also needed.

This problem is well studied - there are ways to make a neural net explain what parts of the input most influenced the decision.
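As a toy illustration of one such technique, occlusion-based attribution: mask each input feature in turn and see how much the model's score drops. Everything below (the stand-in "model" and its numbers) is invented for illustration:

```python
def occlusion_saliency(score, x, baseline=0.0):
    """Per-feature importance: how much the score drops when that feature is masked."""
    full = score(x)
    saliency = []
    for i in range(len(x)):
        masked = list(x)
        masked[i] = baseline  # occlude one feature
        saliency.append(full - score(masked))
    return saliency

# Hypothetical "warship classifier" that mostly keys on feature 0
# (standing in for ocean/lighting) rather than feature 1 (ship shape).
score = lambda x: 0.9 * x[0] + 0.1 * x[1]
print(occlusion_saliency(score, [1.0, 1.0]))  # feature 0 dominates
```

An attribution map like this is what would reveal that the "ocean" feature, not the ship, drives the decision.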

Another solution would be to use autoencoders or GANs to create a latent code from the input image. By construction, these codes need to carry the most important features about the input, because otherwise they couldn't reconstruct it.

And regarding analysis - a lot of groups are attempting the leap from mapping "X -> y" to reasoning based on typed entities and relations. Reasoning would be more like a simulator coupled with an MCMC system that tries out various scenarios in its 'imagination' before acting in the real world.

There are many formulations: relational neural nets, graph-based convolutional networks, physical simulators based on neural nets, text reasoning tasks based on multiple attention heads and/or memory. It's very exciting; we're closing in on reasoning.
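The "try scenarios in imagination before acting" idea can be caricatured as Monte Carlo rollouts in a toy simulator; the actions, dynamics, and rewards below are all made up for illustration:

```python
import random

def plan(state, actions, simulate, n_rollouts=200, seed=0):
    """Pick the action whose simulated ('imagined') outcomes score best on average."""
    rng = random.Random(seed)
    def value(action):
        return sum(simulate(state, action, rng) for _ in range(n_rollouts)) / n_rollouts
    return max(actions, key=value)

# Toy simulator (ignores state): "wait" reliably gains a little,
# "rush" is a long shot with negative expected value.
def simulate(state, action, rng):
    if action == "rush":
        return 10 if rng.random() < 0.2 else -5  # expected value: -2
    return 1 + rng.random()                      # expected value: 1.5

print(plan(0, ["wait", "rush"], simulate))  # -> "wait"
```

Real systems replace the hand-written simulator with a learned model, which is where most of the difficulty lives.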

> This problem is well studied - there are ways to make a neural net explain what parts of the input most influenced the decision.

That's a new area of research, actually.

New areas of research become "well studied" in a year or two in AI. GANs are considered both new and well studied, for example.

Machine Learning works on a different timescale from everything else.

> Machine Learning works on a different timescale from everything else.

To piggyback on your comment: I think this is the real major cause for concern. AI disruption occurs at an exponentially higher rate than human learning rates, affecting issues like career transition.

No one may have been ill-intentioned when they applied AI and invented a way to eliminate the need for anyone to manually do job X, but nonetheless all those who do job X are now stuck, and even if they retrain (optimistically, in 2-3 years) it will in all likelihood disrupt them again.

As technologists, maybe we face a bias: we have not been significantly career-disrupted by technological progress, so we can't fully see what it's like for those not riding the wave but being swamped by it.

"This problem is well studied - there are ways to make a neural net explain what parts of the input most influenced the decision."

Would you mind providing some reference here? I am interested and not familiar with any such way.

While I wouldn't say that the problem has been "well studied", the research community has been paying attention, and some progress has been made, most notably (as pointed out by benjaminjackman) LIME [1]. Roughly speaking, LIME learns a locally approximate model which can be interpreted. It will work with any black-box model, not just neural networks.

[1] https://arxiv.org/abs/1602.04938
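For intuition, the core idea of LIME can be sketched in a few lines. This is not the real library's API, just a hypothetical stand-alone version: sample perturbations around one instance, query the black box, weight the samples by proximity, and fit a tiny interpretable (here, linear) surrogate:

```python
import math
import random

def explain_locally(black_box, x, n_samples=500, seed=0):
    """Fit a 2-feature weighted linear surrogate around instance x (no intercept)."""
    rng = random.Random(seed)
    a00 = a01 = a11 = b0 = b1 = 0.0  # weighted normal-equation accumulators
    for _ in range(n_samples):
        z = [xi + rng.gauss(0, 0.5) for xi in x]                # perturbed sample
        w = math.exp(-sum((a - b) ** 2 for a, b in zip(z, x)))  # proximity weight
        y = black_box(z)
        a00 += w * z[0] * z[0]
        a01 += w * z[0] * z[1]
        a11 += w * z[1] * z[1]
        b0 += w * z[0] * y
        b1 += w * z[1] * y
    det = a00 * a11 - a01 * a01
    return [(b0 * a11 - b1 * a01) / det, (a00 * b1 - a01 * b0) / det]

# Hypothetical black box: mostly keyed on feature 0 ("water color"),
# barely on feature 1 ("ship shape"). The local surrogate exposes that.
black_box = lambda z: 0.9 * z[0] + 0.1 * z[1]
print(explain_locally(black_box, [1.0, 1.0]))  # roughly [0.9, 0.1]
```

The surrogate's coefficients are the "explanation": here they say the decision rests almost entirely on the water-color feature.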

DARPA recently launched a new program to do just that - force ML models to "explain" their predictions. It's called "Explainable Artificial Intelligence" [1] and the goal is to increase trust in autonomous systems.

[1] https://www.darpa.mil/program/explainable-artificial-intelli...

Check out LIME: https://github.com/marcotcr/lime Presumably, in this example, when reviewing the test dataset you'd see the ocean light up more than the warships.

E.g., here is an example of a wolf (ship) detection system that is actually detecting snow (the area of the ocean): https://youtu.be/hUnRCxnydCc?t=58

In that scenario, sinking a Chinese ship in Chinese waters is still very problematic.

Not quite. Another popular example is the CNN that learnt to distinguish wolves from non-wolves because the wolves were always photographed on snow and the other animals were not. It just learnt to distinguish snow vs. non-snow instead of wolf vs. non-wolf. It then failed catastrophically when you put a non-wolf on snow.
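That confound is easy to reproduce with a toy nearest-centroid classifier and made-up numbers, where each "image" is just a pair (snow_in_background, wolf_likeness):

```python
import math

# In training, wolves are ALWAYS on snow and huskies never are,
# so the classifier latches onto the background feature.
train = [
    ((1.0, 0.90), "wolf"), ((1.0, 0.80), "wolf"), ((1.0, 0.95), "wolf"),
    ((0.0, 0.20), "husky"), ((0.0, 0.30), "husky"), ((0.0, 0.25), "husky"),
]

def centroid(points):
    return [sum(p[i] for p in points) / len(points) for i in range(2)]

wolf_c = centroid([x for x, y in train if y == "wolf"])
husky_c = centroid([x for x, y in train if y == "husky"])

def classify(x):
    return "wolf" if math.dist(x, wolf_c) < math.dist(x, husky_c) else "husky"

# 100% training accuracy -- yet a husky photographed on snow is called
# a wolf, because the snow feature drags it to the wolf centroid.
print(classify((1.0, 0.2)))  # -> "wolf" (misclassified)
```

And symmetrically, a wolf photographed off snow comes back "husky". Nothing here is specific to deep nets; any learner will use the easiest separating feature it is given.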

There's no leakage in either case. The model is learning one thing and you think it's learning another.

No, it's called insufficient feature engineering. Data leakage is when your test data contaminates your training data.

Actually, this is a clear case of overfitting leading to a high false-positive rate, resulting in the misidentification of a US ship as Chinese.

I don't think that's a clear case of overfitting. You could have used a subset of the original data for training and the rest for validation and it would have generalised pretty well.

It doesn't generalise when the US ship is in Chinese waters, but that's because the system was never "learning" to recognize ships in the first place.

In your example, was the neural net trained with examples of US ships in Chinese waters?

I assume it would have to be trained with existing material, which would mostly consist of US ships in US waters and Chinese ships in Chinese waters.

You are implying that the problem would have been in the set that was used as input, but my understanding is that in many cases you would only realize that mistake when it's already too late.

Top people including DeepMind's CEO Demis Hassabis and Prof Stuart Russell, an AAAI and AAAS fellow who is a co-author of the AI textbook used at most top universities, agree that AGI is definitely possible and going to happen. [1]

Hassabis also stated in another session that there are probably at least half a dozen mountains to climb before reaching AGI and he would be surprised if it takes more than 20. He also said that it has been easier to develop major advances than he thought. [2]

Considering that there has been about one major leap per year in neural networks architecture and systems for the past 3-4 years, and this includes training neural networks to do one-shot learning for physical robot control after practicing in a simulator, should we really be that complacent?

Even if the advances slow down to one per 2 years and it takes 12 of them to get to AGI, that's only 24 years. If the rate continues to be one per year, that's only 12 years.

[1] From the start of https://youtu.be/h0962biiZa4

[2] https://youtu.be/V0aXMTpZTfc

Note: Reinforcement Learning is far from the only component being researched at major labs, including DeepMind and OpenAI.

> Top people including DeepMind CEO Demis Hassabis and Prof Stuart Russell, an AAAI and AAAS fellow who is a co-author of the AI textbook most used at top universities, agree that AGI is definitely possible and going to happen.

Even though they are high-profile people, deep learning is still so poorly understood that their confidence means nothing.

When I was in college, my professors and textbooks alike claimed that in order to conquer Go, we would probably need quantum computers or something really sci-fi, maybe in 50 years, even 100 years. Guess what: no one, not even the most optimistic person, would have predicted it would beat the best human player within 10 years. So, yeah, AGI might happen, maybe in 10 years, maybe in 100 years, but until it happens, no one really knows exactly when that moment will be.

>Guess what: no one, not even the most optimistic person, would have predicted it would beat the best human player within 10 years.

This is what I call living in a bubble, dear sir/madam.

I grew up in a small town in Eastern Europe (~100k population), and in 1996, while we were in our teenage years, the local 50 or so programmers and I (using Apple II and 8086-based computers) were painfully aware that games like chess and Go are brute-forceable with some elements of recognizing and memorizing viable strategies (and throwing out the unviable ones early on). That was 15-19-year-old teenagers, and we knew it. So be sure that many people knew it -- they just never chose that area of expertise specifically.

This also says a lot about the quality of the average professors and textbooks, but let's not go off-topic, plus it's a huge discussion area.

The point is whether we should be complacent and dismiss concerns just because we don't know when it will happen.

Once someone builds it, stopping it might be very difficult. Here's why: https://youtu.be/4l7Is6vOAOA (less than 9 minutes and very clearly explained).

Dismissing even a 10% chance of possible catastrophic risk is not what we practice in any other domain. Would you dismiss a concern over airplanes that have a 10% or even just a 1% chance of crashing?

I guess, as a theoretical computer scientist myself, my first reaction is to roll my eyes. I get that these guys are trying to drum up interest with investors/funding agencies, but "My greatest fear is that my research is too successful" is still pretty grandiose. It sounds like those physicists who thought they had figured everything out in the '30s (or AI researchers before the last winter). I think it's pretty telling that Zuckerberg isn't worried despite having an empire built on AI - he's not asking anyone for money.

> physicists who thought they had figured everything out in the '30s

Certain 30s physicists were making "My greatest fear is that my research is too successful" warnings that turned out to be pretty on-point.

Zuckerberg just wants to use ANI to better push ads at you. Any fear of AI will make it harder for him to do just that. This is all about money and earning more of it.

I agree that Elon and Mark are likely to say things that they think will win them support.

I think the best way to understand AI tech is to try it out yourself. The closer you are to understanding its internals, the more informed you'll be. Fundamentally, it's currently all statistics. It's up to you, then, to decide whether humans are just statistical machines, or if we have something more that may not be understood by science yet, such as free will.

Point is, nobody knows whether AGI is really achievable or not, based on our current approach. As I have answered in another comment section, natural language understanding hasn't been cracked, at all. I mean at all.

So the reality is, it is like how ancient humans knew birds could fly with wings, but people could only fly in their dreams. Now, we know people can think intelligently with their brains, but no one knows how to make a computer work the same way. It is far too early to talk about the risk, until we have the Wright brothers of AI to enlighten us on such a possibility.

Unlike airplanes, AGI possesses agency and the ability to cause widespread harm and damage in today's computer-penetrated world. So if we wait until it is invented, there is no guarantee that there won't be danger or even catastrophes. Wouldn't you even agree that there is a 1% chance that AGI is possible and that it could harm us once invented?

Also, the goalpost for identifying something as a challenging cognitive skill worthy of the name AI is moved almost every time we make progress so people keep denying that we are closer to AGI. AlphaGo is the latest example.

Some AI researchers and CS people believe that Natural Language Understanding (NLU) is AI-complete, i.e. once we solve it we basically solve AI in the sense of AGI. I do not personally believe that; there are certain human cognitive skills that are not required for NLU. But I do think that solving NLU brings us closer to solving AGI.

Let's say someone makes progress on NLU. What would be the minimum level sufficient to convince you that AGI is possible? Why minimum? Because we don't want AGI to be right at our door before we start to prepare.

* Would getting 80% on the Winograd Schema Challenge be sufficient?

Other suggestions are welcome, including from others.


For NLU, one important piece of evidence that would convince me of the potential is a demonstration of reasoning. Unlike the current black-box models, I would love to see the model give an explanation, along with the answer, of what led it to that conclusion, be it a rule listed in a manual or a case that happened before.

This itself presents several big challenges: how do we represent the concept of reasoning in mathematical form? Is knowledge required or not? What is the representation of such knowledge?

A truly intelligent machine should be able to take what a human takes (a piece of text), build its knowledge base from there, then answer a question just like a human, and give an explanation when asked for it.

> Wouldn't you even agree that there is a 1% chance that AGI is possible and that it could harm us once invented?

There's a small chance aliens could find us, too. Yet, we don't know any more about that than we do about AI.

The most imminent threat to humans is humans. In the nearer future, it seems more likely that someone will use machine learning technology in weaponry to cause harm than create AGI. Or, that humans will undergo a global "cultural revolution" like in China or Cambodia (or ISIS now), killing anyone with an education, under the guise that "machines are evil", yet with the true goal of obtaining power, as humans often do.

Ultimately, people are free to work on what they want. If some people want to work on AGI safety, that's great. Perhaps their goal is to make humanity a little more kind. That's certainly worthy.

For me, I'm more interested in seeing research focused on giving better tools to doctors for diagnoses. Radiology is a great place to apply image recognition systems. We just need some good labeled datasets.

Sometimes I get frustrated by the "AGI is an existential risk" quotes like Musk's because I feel that's a distraction from some really beneficial applications of machine learning that will help save lives. It could cause over-hype, leading to another AI winter, pushing back life-saving advances 15 years, as happened after the 80s.

Personally, I don't have a problem with the kinds of things some AGI safety researchers, like Stuart Russell, say about AGI. He isn't saying it will arrive in 2030-2040, like Musk.

Musk has an incentive to capitalize on people's fear of AGI. It attracts investment in his non-profit, which can in turn help him continue building his in-house AI program at Tesla. Primarily I disagree with his conclusions about AI tech, but I'm also wary of his motivations. I think some people view him as altruistic and I just don't see that.

> What would be the minimum level sufficient to convince you that AGI is possible?

A system that can determine its own goals. We don't have anything close to that yet, and I can't see anything short of that being a good indicator that we are close to AGI.

For subgoals, there have been a lot of research results on that, mostly in the symbolic AI tradition.

Even if an AI does not determine its own ultimate goal(s), it could still cause us a lot of harm by creating subgoals that do not align with human values, which no one can articulate in full yet.

Overall, the controversy is worth it because the force for progress is strong enough that another AI winter is highly unlikely if useful applications continue to be developed. Without the warnings, research on Provably Safe AI might not have sufficient resources, especially talented people, working on it.

> the force for progress is strong enough that another AI winter is highly unlikely if useful applications continue to be developed.

Really disagree here. All it takes is a few well-positioned people, like those at Nnaisense, to create companies promising AGI and sucking up investment dollars. We all know greed is rampant in finance, and "AI" is a big buzzword in startups/innovation these days that can land big VC dollars. There is corruption/snake oil in tech too, and you'll find the greediest float around the most hyped tech.

I think companies like Nnaisense are the wrong place to invest, and that there is much more practical work to be done to advance humanity.

I see a small contradiction in your statements; please help me understand:

> Sometimes I get frustrated by the "AGI is an existential risk" quotes like Musk's because I feel that's a distraction from some really beneficial applications of machine learning that will help save lives.

While I absolutely and adamantly disagree and I can't ever agree that warnings must be dismissed as "distractions", I extract from your statement here that you want us to move forward to a true AI no matter what. But then...

> ...and that there is much more practical work to be done to advance humanity.

Right now AI looks like snake-oil selling, you realize that, right? I stand fully behind the statement that we have tons of practical and increasingly urgent issues to solve here on the material Earth.

Unless of course you're implying that striving to invent a true AI (=AGI) is practical work. IMO it isn't. It's better than medieval alchemy, but not by much.

> While I absolutely and adamantly disagree and I can't ever agree that warnings must be dismissed as "distractions",

Warnings can be faked. Fear mongering can be lessened when people are informed on a subject.

> I extract from your statement here that you want us to move forward to a true AI no matter what.

Where in the devil did you get that? I said no such thing. I think investments towards building AGI are largely a waste of money. If people want to spend their money on that, that's fine. I won't.

> Right now AI looks like snake oil selling, you realize that, right?

AGI does, yes. And, for those who do not understand the difference between data science and AGI, then probably all "AI tech" seems like snake oil. But it isn't. It's already helping doctors detect cancer, among other major issues.

> Unless of course you're implying that striving to invent a true AI (=AGI) is practical work

I'm not. I have no idea where you got that from my comments.

Welp, my mistake. I might have been triggered, since the term "AI" is practically meaningless nowadays.

When you know AGI is possible with existing techniques, it is already too late to discuss the risks. As soon as it is possible with existing techniques, it will be implemented by somebody, and the risks will be realized.

You already have Joshua Tenenbaum and Karl Friston. You're just ignoring them in favor of celebrities who talk about AI rather than mere cognition, and have billions in funding and years of professional self-marketing to throw around.

To quote Bret Victor (from memory): worrying about general AI when climate change is happening is like standing on the train tracks in the station when the train is rushing in, worrying about being hit by lightning.

Work is being done on climate change, but at the current rate, we're way off preventing catastrophic effects. I'd say that our current level of effective effort basically amounts to ignoring the problem.

Au contraire, climate change will not pose an existential risk within the next 30 years. The chance of that happening is pretty much nil and no one serious argues otherwise. Even in 100 years, there could be much suffering and dislocations, but it still most likely won't be an existential risk to all of humanity.

There is a non-negligible possibility that AGI will be invented in 30 years. Many AI experts agree on that. A number of experts also believe that its invention could be highly beneficial or catastrophic, depending on its form and our preparation.

With cost-benefit analysis based on the best knowledge we have weighed by probabilities, it is clear that AGI risks are more substantial and worth at least as much investment as climate change. The current funding for AI Safety Research is not even 1/10th, perhaps less than 1/100th, of climate change funding. Inaction is also an action.

If you do not trust intelligent domain experts, and also intelligent non-experts with almost no conflict of interests like Bill Gates and Stephen Hawking, then please let us know which source(s) of knowledge we should rely on instead.

Note: I believe we should fund both. A certain but slow train wreck and an uncertain but even more catastrophic and possibly speedier train wreck are both worth preventing.

> Au contraire, climate change will not pose an existential risk within the next 30 years. The chance of that happening is pretty much nil and no one serious argues otherwise.

That's not true. Arctic methane release has the potential to become devastating within a small number of decades. From [1]: Shakhova et al. ... conclude that "release of up to 50 Gt of predicted amount of hydrate storage [is] highly possible for abrupt release at any time". That would increase the methane content of the planet's atmosphere by a factor of twelve. That would be catastrophic by anyone's measure.

Also: In 2008 the United States Department of Energy National Laboratory system identified potential clathrate destabilization in the Arctic as one of the most serious scenarios for abrupt climate change.

[2] is a video from the Lima UN Climate Change Conference discussing this. The people talking about this are certainly not "no-one serious". See 34:23 for Ira Leifer, an Atmospheric Scientist at the University of California, saying that 4 degrees of warming means the Earth can probably sustain "a few thousand people, clustered at the poles".

I see 4 degrees of warming being bandied about a lot these days, but very little discussion of how catastrophic that would be. It also seems a lot more likely than AGI becoming a problem, to me at least. Sure, we should fund research into both but only one of them has me worried about the world my daughter will inherit - one is basically purely hypothetical right now.

    [1] https://en.wikipedia.org/wiki/Arctic_methane_emissions
    [2] https://www.youtube.com/watch?v=FPdc75epOEw

Climate change is not adversarial; the weather will not actively try to disrupt and evade your attempts to fix it.

Fighting a passive adversary is a much easier thing than fighting an active one (especially one smarter than you).

Actually, when it comes to climate change, humans are doing a pretty good job at being adversarial to our own best interest already. We don't need anything much more intelligent than us to avoid fixing the problem.

Stuart Russell doesn't put a specific date on the arrival of AGI. That puts him at odds with Musk, who predicts 2030-2040.

It's also at odds with the study that asked experts the very biased question "when do you think AGI will arrive?". Many chose not to answer the question, and thus their answers can't be averaged in.

Many of the AGI safety conscious crowd skip over the question of whether we will be able to create AGI and just assume we will.

There's no precedent for this kind of discovery. Fire, the lightbulb, and nuclear weapons all pale in comparison.

We also don't know how long it will take to solve the Provably Safe AI problem. Wouldn't it be wiser to start preparing now?

Nuclear weapons were deemed impossible by some experts at the time. Throughout history, whenever there are a number of experts who believe something to be impossible and a substantial number who believe it to be possible, the latter are almost always right.

See also my other answer that begins with "Unlike airplanes, ..."

> We also don't know how long it will take to solve the Provably Safe AI problem. Wouldn't it be wiser to start preparing now?

Sure, go for it. I think this has been in the news a lot recently because Musk told government leaders to expect AGI as early as 2030-2040.

It's a bit of a ridiculous claim. One of the biggest AI competitions, ImageNet, expects to run for the next 12 years, and that's just for making sense of images.

Perhaps it'll be done in less than 12; however, the predictions of the ML community and Musk do not line up.

> Nuclear weapons were deemed impossible by some experts at the time.

I suppose time will tell with regard to AGI then. I suspect we'll have another economic dip before 2030, and all this hype will die down before then. And we'll remember this time as any other in the past when we predicted flying cars by the year 2000.

It was announced this year's ImageNet competition would be the last. Any change to this? http://image-net.org/challenges/beyond_ilsvrc

Edit: I think the announcement above is about object recognition on still images, which is largely solved. The "new ImageNet" one on Kaggle in the reply below is for videos.

Hosted by Kaggle now: https://www.kaggle.com/c/imagenet-object-detection-from-vide...

I don't really understand the press release on image-net.org. Maybe they meant the last competition hosted by the same group?

Edit: See page 84 of the PDF linked here: https://www.google.com/search?q=kaggle+site%3Aimage-net.org

PS: screw you, Google, for not letting me copy links to PDF search results from my phone.

By the way, since you appear to be a reasonable skeptic, would you mind replying to my comment beginning with "Unlike airplanes, ..."? Specifically, I wonder what would be a minimum threshold for you to believe that AGI is likely possible.

> would you mind replying to my comment beginning with "Unlike airplanes, ..."?

Ok, will try

> Specifically I wonder what would be a minimum threshold for you to believe that AGI is likely possible.

Current machine learning tech doesn't have any sense of creating its own goals. We also don't know how to encode that. All of the systems we have target specific problems, like playing Atari games, identifying objects in images, or playing Go. These are all great milestones in machine learning tech. They do not indicate we're much closer to developing a new intelligence. If we could set a system out there and have it determine its own goals and learn things on its own, then maybe I'd believe we're close to creating AGI. As it stands, we don't know how to run a program that decides what it wants to learn. Every program we write has our hand in it.

My suspicion is that our quest for developing AI leads us to discover more about ourselves than AI itself. I think we'll continue to refine our definition of intelligence, and continue to use the tools we build to augment our own intelligence.

Thanks for the answer.

Most humans do not really come up with our goals on our own either. We are heavily influenced by genes and environment. People's personalities are partially changed by the environments and occupations they are in. Twin adoption studies also show that genes have surprisingly strong effects on personality, even after decades of living apart.

By your definition, humans do not possess general intelligence.


I don't think we can resolve an age-old philosophical question on Free Will in a comment thread, so I'll leave it at that. :) However, General Intelligence and Free Will are largely orthogonal.

We can imagine an alien who is highly capable of any task humans can perform and beyond, including coming up with creative subgoals as necessary, but will only serve its master's ultimate goal(s). An AGI could be like that alien but built to serve human masters. The twist is that when someone orders an AGI to efficiently reduce animal suffering, without value alignment with humans, it could come up with euthanasia as a subgoal to reduce the suffering of starving kittens.

> Most humans do not really come up with our goals on our own either

I disagree. I believe we have free will, in addition to being influenced by our environment/genes.

This is my general point, that contemplating AI is a way for us to investigate how we ourselves operate. You have your theory and I have mine. Nobody has shown how we can be replicated.


> We can imagine an alien who is highly capable of any task humans can perform and beyond, including coming up with creative subgoals as necessary, but will only serve its master's ultimate goal(s). An AGI could be like that alien but built to serve human masters. The twist is that, without value alignment with humans, some subgoal could be killing starving kittens to reduce their suffering

If the AGI's goals are determined by a human, then it is technically trivial to encode human values. The hard part is convincing humans not to encode evil values.

So it isn't that AGI would go rogue, it's that humans could. That's an age-old, Adam & Eve question. It's not specific to AI technology.

Unfortunately, no one can really articulate the full set of human values yet, especially since they are often in conflict in real-world situations. Which value(s) should be prioritized over others? How about differing values between cultures and human groups within each culture?

You might have heard of the Trolley Problem. There are many and more complicated variations of that.

This Harvard session on Justice (The Moral Side of Murder) is edifying and surprisingly fun to watch: https://www.youtube.com/watch?v=kBdfcR-8hEY

The problem with AGI is more acute than most other technologies because it is software-based, which makes it nigh impossible to regulate effectively, especially globally.

> Which value(s) should be prioritized over others? How about differing values between cultures and human groups within each culture?

"Don't kill humans" sounds like a good start.

Honestly, this problem seems a lot simpler to me than creating AGI itself.

I agree it is challenging. The most imminent question is probably, "should a self driving car be allowed to kill 10 people on the sidewalk to avoid a head-on collision, or should it accept the head-on and let the driver die?"

These domain specific problems will come up before the AGI one does, and we can address them as they become relevant.

You've noted that regulating AGI is difficult, and it sounds like you don't have any other solution in mind.

It is often not practical for something dumber to regulate something far more intelligent. (Humans win against lions despite our physical weakness.) So the best solution I have heard of is to create a Provably Safe AGI ("Friendly AI" in Yudkowsky's term) and have it help us regulate other AI efforts. A moral core that aligns with human values needs to be part of this Safe AGI.

It is definitely very challenging to create one, more challenging than creating an arbitrary AGI. The morality also needs to be integrated into the Safe AGI as a core that is not susceptible to the self-modification ability an AGI could have. Thus, we need to work on that aspect of AGI now.

Stuart Russell has outlined his preliminary thought on the topic: https://www.ted.com/talks/stuart_russell_how_ai_might_make_u...


Edit: You don't need to paste the same comment all over this thread. Spamming doesn't lead to interesting conversation.

> So what would be your equivalent position in the 1940s about the nuclear solution?

I've seen people throwing that example in this thread and I don't see why.

Multiple countries invested heavily in building the bomb. It was not a question of whether building a bomb was possible, but rather how soon, and by whom.

> In other words, what are the risks and benefits of adopting a "worried" versus a "don't worry about it" position?

Some people do worry about it, but everyone need not. I don't.

> If there's a possible huge meteor (say 30% chance) of hitting the Earth in about 50 years, would you be "worried"?

Good example. No I wouldn't because I know we have systems monitoring for such an event. If something is coming, we will know about it. I'm not sure how far in advance we would know, but I expect as it got nearer more and more people would do their best to come up with a good solution.

We're no nearer to AGI today than we were when neural networks were developed 30 years ago. Science fiction would have you believe it, but we don't look at SciFi movies of the 50s as particularly prescient, and we shouldn't do so for today's films either.

investments into addressing mass unemployment and wealth inequality (both of which are well-documented to cause political instability)

I don't really expect to see this solved in the USA before it's solved in poorer countries. In the USA there's a tension between pleasing the millions of surplus workers and pleasing the top billionaires that'll be minted when robots can replace millions of workers. It's cheaper for the billionaires to buy legislatures than to endure higher taxes or weakened IP rights.

By way of contrast, there are no local power brokers whose fortunes are built on intellectual property in Bolivia or Bangladesh. It'll be 99% upside for populist politicians to just ignore American IP concerns and freely "pirate" machines-that-do-work and machines-that-make-those-machines. The long term solution to machines-do-all-the-useful-work isn't to tax the profits and redistribute them to unemployed people for spending anyway. That's just an elaborate historical-theme-park imitation of a 20th century economy. "Machines make all clothing, then we tax those machines and redistribute money to the people so they can buy the clothing." The better solution is widespread copying of the maker-machines and the dramatic price deflation that follows on everything they can make.

The day we invent a self-replicating factory that uses only cheap, local raw materials, the current economic system will end. We could say the whole economy is a self-replicating system, but we need to shrink that to a small size and make it not dependent on rare or contested materials. Humans, genes, and the ecosystem are self-replicators as well. Self-replication might be a different kind of singularity, one we reach even before AGI.

Basically, in order to make a self replicating factory we need advanced 3d-printing, robotics and a large library of schematics. Then, a "physical compiler" could assemble the desired object by orchestrating the various tools and the movement of parts inside the assembly line. If this automated factory can create its own parts, then we have a self replicator. If you make it all open source and ship seed factories around the world, soon everyone will have their own stack to rely on.

I would rather have successful AGI before universal replicators that are available to many merely human intelligences. I especially hope for that if these replicators operate at the nanoscale, which I think you're implying with the suggestion that everything can be made from common raw material like dirt. But I'm not too optimistic. Governments couldn't even stop the proliferation of 3D-printed guns; how will they cope with other things in the future that carry an economic incentive too?

Don't forget that even if certain materials no longer become rare or contested, location and raw energy will still be. Without a supreme AGI, or an upheaval of our most basic knowledge of physics, expect wars to be fought over rotating black hole real estate.

Ok, so I consider myself an above-average programmer, capable of building standard database-driven web applications using the latest du-jour techniques.

I suck at Math - I mean I _really_ suck at Math. I can visualize algorithms and data structures and have no problem whipping up programs. I have written a lot of code in my lifetime and have helmed a lot of successful projects in my capacity as lead programmer or architect.

But I will never be a programmer that writes code that uses Math, like for games, simulation, 3d graphics, operating systems, drivers, autopilots, etc...

So, can a Math idiot like me get into A.I. by going to your site?

I'm working on a book for programmers who want to learn math. I can send you the first few chapters if you're interested, but I'm also interested to hear your thoughts about math in general.

If you are doing this, here is what I'd love: a book with theory and a ton of exercises. My usual workflow is: I read the theory and kind of understand something. Then I try one type of exercise that applies that theory, and I don't understand a thing. Then I read the solution and I get it. I then need at least two more exercises to play with that theory before I REALLY get it in my mind.

So please, add at least three exercises for each application of each concept explained in your book.

Seconded 1000 times. We have tons of pompous, self-congratulatory books that are full of theory only.

If you do indeed finish that book, consider me sold on it even if it costs $150 -- if it contains a lot of practical examples and exercises.

I feel like I'm in the same boat: not hugely great at math (I mean I can do algebra and various geometries) but calculus and a lot of AI literature I've read that included algorithms were just very complicated for me to figure out.

So I would also be interested in this book of yours. If you want my feedback I'm also available but I certainly don't expect handouts; just add me to a list to spam when you get your book finished :)

Check your keybase folder /private/j2kun,krissiegel/ or ping me at mathintersectprogramming@gmail.com

I guess it's time I check out keybase's file stuff. Thanks!

I'd like to review and give feedback

I don't believe I have a way to contact you, but shoot me an email at mathintersectprogramming@gmail.com


And better still: you will understand the math, they use simple spreadsheets for the illustration of the basic principles.

You need to understand Math for sure. Linear Algebra is a MUST. Some probability theory may help understanding.

Start reading books and working on side projects, post them online, then attend some local meetups. The next step is to aim higher: start following the latest arXiv papers, trying to read them, and gradually getting comfortable implementing them. At this stage, you can land a job in the industry if you search really hard.

Disclaimer: that is how I did it, from an application SDE to working on deep learning at a big company. It is doable, and took me 2 years or so, but it is worth it.

You can use a framework, copy some examples and "create" your own AI. You don't need math for it; just follow the steps from the examples. But you'll only be able to copy what others have done. If you want to go one step further and use the latest research, you need to understand some math and get some general knowledge of the machine learning algorithms you are using. To create something new, you definitely need math. I don't see anyone creating new algorithms without knowledge of algebra and other topics.

I know plenty of people that have built interesting software using the JPEG compressor; I know only a handful that would be able to explain to you in detail how the DCT powering it works and why it is able to achieve the compression ratio that it does. Obviously being able to understand all that will give you an advantage, but for most 'mortals' jpeg_compress and jpeg_decompress is all they need to be productive.
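To make the DCT point concrete, here's a toy sketch (my own illustration, not the real JPEG pipeline, which uses 2-D 8x8 blocks, quantization tables, and entropy coding): a 1-D DCT-II of a smooth signal concentrates nearly all of its energy in the lowest-frequency coefficients, which is what makes aggressive quantization possible.

```python
import numpy as np

# Unnormalized 1-D DCT-II, computed directly from its definition.
def dct2(x):
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi * k * (2 * n + 1) / (2 * N)))
                     for k in range(N)])

signal = np.linspace(1.0, 0.0, 8)   # a smooth 8-sample ramp
coeffs = dct2(signal)
energy = coeffs ** 2
# For a smooth block, the first couple of coefficients should hold
# almost all of the energy; the rest can be quantized away cheaply.
print(energy / energy.sum())
```

You can use jpeg_compress productively without ever writing this, which is exactly the parent's point.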

fast.ai aims to bring that high level of practical understanding first, and then backs it up with the basic principles of the theory behind machine learning, rounded off with a bunch of real-world code examples of the classes of problems you are likely to encounter.

I don't understand how your comment relates to mine. Isn't it more or less what I said? You can use the frameworks and copy the examples, like using a JPEG compressor to compress whatever you want. But without the math it is very unlikely you will go further (for example: not even going from a single-label classifier to a multi-label one).

About fast.ai: I would want to know those basic principles behind deep learning and neural networks. I don't see them as so basic unless you have a good background in algebra, optimization, statistics and the scientific method.

As far as I understand it, you don't have to be a math wiz, but you do need a good grasp of the more common statistical analyses, regression, etc.

Yes for sure. I am very math-averse (almost failed calculus 1, skated by pre-calc in high school... not my intelligence area) and though I don't have a full time job doing something in AI, I do feel like I can build models that solve real problems (and am doing so in an internship right now).

Probably start with Andrew Ng's Machine Learning course. It has a significant amount of Math in it-- try to understand it, but seriously do not worry about it. Just get the high level concepts, try to get some intuition on machine learning ideas and techniques. You don't need to do the assignments or work too hard on the course (but obviously it's helpful if you do).

Then read these. Don't worry if you're still confused at first. It's fine. Just go through 'em kinda slowly and try to see what's going on. https://iamtrask.github.io/2015/07/12/basic-python-network/ http://karpathy.github.io/neuralnets/
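For a feel of what those tutorials build, here's a minimal sketch of the same idea (mine, not theirs, so the details differ): a two-layer network trained with backpropagation in plain numpy.

```python
import numpy as np

# A tiny 2-layer network learning XOR with full-batch gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)        # hidden layer
    out = sigmoid(h @ W2 + b2)      # output layer
    # Backpropagation: chain rule through both sigmoid layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out; b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h;   b1 -= d_h.sum(axis=0)

print(np.round(out).ravel())  # should approach [0, 1, 1, 0] if training converged
```

Once a loop like this makes sense, the tutorials above will feel much less magical.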

By now if you're still into it, I highly recommend Chris Olah's blog: http://colah.github.io/ It has some pretty complicated ideas in there, but the articles are illustrated and explained very well so you can get more of a feel for Neural Networks while also getting very excited about them.

Then it's probably time to start really building things. I would use Keras (https://keras.io/) at first, because it's very easy to get cool results without understanding everything under the hood. Do a tutorial or two; it's pretty intuitive, and if you've done everything above you should understand more or less what's going on.

Then try to find a cool dataset to work with that relates to something you're interested in. If you can't find anything you want to work with, then just use a classic dataset (ImageNet is fine, even MNIST when you're just practicing), which will probably be less fun but will still let you learn. With whatever dataset you choose, implement a simple model on your own without a tutorial (of course, if you get stuck, referencing a tutorial is totally fine). Then see if you can tweak your model to get better and better scores.

Start reading papers. You can find the newest ones on Twitter from ML researchers (Karpathy, Sutskever, Hinton, LeCun are some names you could start with; there's probably a Twitter list out there somewhere), and then you can look at the references in those papers to keep finding more and more good ones. Implement any ideas in the papers you think are useful. Often Keras will have functionality to let you implement them easily. If it doesn't, then feel free to dip down into TensorFlow if you feel ready!

From there the world's yours. Find cool data to work with, implement papers to get a baseline measurement, and iterate in any way you can think of. It's very fun :)

The one thing is if you want to get state of the art results or do novel research you might need better hardware. AWS/Google Cloud/Floydhub are all options if you're willing to spend a little money, or you can just keep your expectations low ;)

Wow, that turned out to be more of a roadmap than I wanted it to, sorry. The reason you don't need great math skills is that a lot of AI research is very intuitive: gradient descent can be internalized as a ball rolling down a hill, momentum in training neural nets is like momentum in the real world, neural nets are just manipulating data in high-dimensional space... It's all stuff you can visualize rather than depict with mathematical symbols, but since it's so much easier to write with symbols than to create a powerful image, symbols are used often.
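The ball-rolling picture maps almost directly onto code. A toy sketch (my own illustration, not from any of the resources above):

```python
# Gradient descent with momentum on f(x) = x^2: the "velocity" term is
# literally the ball's momentum, accumulating past gradients.
def minimize(grad, x0, lr=0.1, beta=0.9, steps=200):
    x, v = x0, 0.0
    for _ in range(steps):
        v = beta * v - lr * grad(x)  # gravity pulls along the slope
        x = x + v                    # the ball coasts with its velocity
    return x

x_min = minimize(lambda x: 2 * x, x0=5.0)  # gradient of x^2 is 2x
print(x_min)  # should settle near 0, the bottom of the bowl
```

Once you see it this way, the update equations in papers are just this loop written in Greek letters.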

To be honest, I still usually skip over equations in papers if they look daunting. Most of the time I don't need to understand them: I can read the abstract, look at some figures, look at the results, and that's everything I need.

Best paragraph:

"Cracking AGI is a very long-term goal. The most relevant field of research is considered by many to be reinforcement learning: the study of teaching computers how to beat Atari. Formally, reinforcement learning is the study of problems that require sequences of actions that result in a reward/loss, and not knowing how much each action contributes to the outcome. Hundreds of the world’s brightest minds, with the most elite credentials, are working on this Atari problem."

Calling it an "Atari problem" sounds quite disparaging and misses the point. It's like calling a convolutional network doing the ImageNet task a "Doggy-detection" problem. That may be the original development problem, but the final product still helps detect cancer in CT scan images... Same goes for advances in reinforcement learning made on atari games.

Perhaps, but the jury is still very much out. The vast majority of RL applications are game playing. Very few examples of valuable applications to society or the economy.

There's also plenty of evidence already that RL isn't really the right way to tackle the credit assignment problem, e.g. random search is only 10x slower.

Also used for robotics: https://blog.openai.com/roboschool/

and other control problems: https://vmayoral.github.io/robots,/ai,/deep/learning,/rl,/re...

I think not so long ago supervised neural networks felt similarly 'toy', and now they are the state-of-the-art models for machine translation and object detection. Just because RL isn't useful today doesn't mean it won't be.

The virtue of the Atari games is that they can run at 500 FPS per core, so you can train an RL agent in an hour on a small cluster. When working on fundamentals of the algorithms, being able to iterate quickly makes a huge difference in overall progress. Of course, the idea is that once the algorithms work well we can apply them to real problems, where they'll take months or years to learn something valuable.

'Very few examples of valuable applications to society or the economy.'

Not true. RL has been applied often in production system inside the bigger Internet companies. They are just not published.

Maybe because RL research is not democratized enough for practitioners. It is easier to study RL in a controlled environment like a video game (those 'elite researchers' are still not smart enough for real-world applications).

So your remark means that more education about RL is needed, and OpenAI, alongside other institutions like fast.ai or Startcrowd, is helping with this effort.

Random search is 10x slower now, and can never be better than that. Also, the ES paper you refer to is still reinforcement learning. This is like comparing neural networks to SIFT+SVM in the 90s. Sure, the SVM performs better, but maybe investing more in complicated models would eventually lead to better results?

Reinforcement learning has been used in a lot of valuable applications to society/the economy outside of games: control system optimization, robotics, ad targeting, content personalization to name a few. Game playing can often be a great test-bed for RL algorithms that can be applied in other areas.

In what circumstances is it only 10x slower? Random search is totally useless when your environment is stochastic. These algorithms aren't learning fixed sequences of actions; in fact, most use a random start of up to 30 'no-op' actions to avoid just that.

By 'random', he means evolutionary search. It's not really random, and is just a slower method for policy gradient. Here's the OpenAI blog post: https://blog.openai.com/evolution-strategies/
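The core of that evolution-strategies idea fits in a few lines. Here's a toy sketch (the quadratic "reward" and all the hyperparameters are my illustrative choices, not from the OpenAI post): perturb the policy parameters with noise, score each perturbation, and step in the reward-weighted direction of the noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def reward(theta):
    # Stand-in for an episode return; the best "policy" is theta = [3, -1].
    return -np.sum((theta - np.array([3.0, -1.0])) ** 2)

theta = np.zeros(2)
sigma, lr, pop = 0.1, 0.02, 50
for _ in range(300):
    noise = rng.normal(size=(pop, 2))
    rewards = np.array([reward(theta + sigma * n) for n in noise])
    # Standardize rewards, then average the noise weighted by them:
    # this is a finite-difference estimate of the reward gradient.
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    theta = theta + lr / (pop * sigma) * noise.T @ rewards

print(theta)  # should drift toward [3, -1]
```

No backprop anywhere, which is why it parallelizes so well, yet it is still estimating a policy-gradient-like direction rather than searching blindly.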

> ...but is it really the best use of resources to throw $1 billion at reinforcement learning without any similar investments into addressing... <important things>

The precise same sentiment could be voiced about Musk's plans to go to Mars: people are starving on Earth. But a Martian "backup civilization" might someday save humanity. Similarly, godlike AGI might be mere decades away and apocalyptically dangerous. Predicting scientific advancements is generally impossible, but some people working in the field don't think it's an unreasonable possibility [1]. That humanity, via Musk, has put a billion dollars into worrying about this somewhat plausible existential threat seems reasonable to me.

Musk is not a staid, Bill Gates-style philanthropist optimizing for dollar-for-dollar benefits. He's picked a set of flashy, long-term goals and started manic efforts to promote them. Perhaps it's a character flaw. But given that, I think he's at least chosen well.

[1] http://aiimpacts.org/2016-expert-survey-on-progress-in-ai/

Yes, I had very similar thoughts.

I think there's room for both. It's great what fast.ai are doing and they'll benefit me more directly, but Elon Musk is pushing at the other end of the scale.

I keep feeling like I'm a total Musk fanboy, but lots of his arguments that I've seen make sense:

1. The major one, is that doing things like aiming for Mars provides inspiration for a better future. There's similar quotes from other people involved with NASA.

2. He talks repeatedly about improving the chances of better futures (clean energy, hyperloop), and lowering the chances of bad ones (Bad AI, single-planetary species on planet that has an extinction event)

He's put all his money into the things he sees as important, of which bad AI seems to be high on his list.

> ...but is it really the best use of resources to throw $1 billion at reinforcement learning without any similar investments into addressing..

I didn't like this part. Elon Musk may be misguided about the immediate threat of AI but he is free to invest his money in whatever he thinks is important.

> Cracking AGI is a very long-term goal.

Is this fair to say? I feel like advancements in the field happened far quicker than anyone expected, and every few years we are reevaluating timelines. Especially given the research happening in tandem that will almost certainly speed up AGI, like graphene, quantum computing, advanced GPU design...

If you asked someone 10 years ago about the possibilities of ML/deep learning, they'd have said it was far off too. I'm not going to say Kurzweil is correct, but if I know anything, it's that historically these things have happened faster than expected. Look at 1997 -> 2017: 20 years, and what hasn't changed?

Appreciate any discussion as I am not an expert :)

We don't make specific time predictions because it's just not possible. But we can make relative predictions. However long it takes to get to AGI, when we're (say) half way there, the amount of societal upheaval will be enormous.

If we can't navigate that successfully, we'll never get to see AGI...

> when we're (say) half way there, the amount of societal upheaval will be enormous.

I had a conversation with a C level exec of a large company last week around this theme. My suggestion that limited AI such as self driving cars has the potential to create a vast number of extremely frustrated individuals making a second round of 'Sabotage' and Luddites a definite possibility was waved away as if those people don't matter and don't have any power.

I really wonder how far you'd have to be removed from the life of a truck or cab driver not to be able to empathize with them and to realize that if you take away some of the last low education jobs that allow you to keep your hands clean that there will be some kind of backlash.

AGI will expand that feeling to all of us.

If self-driving cars/trucks become a thing and cause millions of people to be unemployed, there is ALSO a correspondingly massive boost to the economy.

IE, the world is now massively richer because of all this awesome new technology.

Yes, there could be some short term disruption, but honestly I think things will end up fine, just because of the massive amount of extra money and wealth that the world will have that could be used to solve all those short term problems that disruption causes.

That's not how things have generally worked out in the world. Billions live in abject poverty even though the world has more than enough to feed them and provide shelter.

The issue is income distribution - and just making more money doesn't magically fix the problem.

> just making more money doesn't magically fix the problem

Magically, no. With an effort yes. There are already experiments with basic pay for example.

Jeremy Rifkin books such as https://www.amazon.com/End-Work-Decline-Global-Post-Market/d... & 'Zero Marginal Cost' discuss the topic.

Yes exactly. We really hope to see this effort made. It's a fixable problem, but often in history great inequality is only fixed by violent revolution, which I don't want to see happen in my (or my daughter's) time!

But it seems that increased economic benefits aren't shared. Thomas Piketty's Capital in the Twenty-First Century, average adjusted worker wages in the US from ~1975 to present, etc.

I'm concerned that 'some short term disruption' will actually be quite widespread and long-lasting, given increasing interdependencies.

You seem to forget why the Luddites were Luddites in the first place.

Yes, fabric was much cheaper, but in the meantime they were starving to death. The groups that make billions off this technology will lobby to keep more of their earnings, so the vast wealth redistribution you want to happen won't.

Chances are that the increased riches of the world will not be divided in such a way that those who are now earning a living wage with their driving license will profit from that.

We'll figure it out. Those who stand to benefit the most from AGI will want to keep their vast fortunes and lobby governments to figure out ways to keep the common man happy. Not to have every dream fulfilled, but just happy enough to not uprise. It's kinda always worked that way.

> However long it takes to get to AGI, when we're (say) half way there, the amount of societal upheaval will be enormous.

Could you unpack this a bit more?

Are you referring to the disruption in jobs caused by the effectiveness of narrow AI when we reach that point?

Yes exactly. Fewer people can add economic value at that point, and unless there is a negative income tax or basic guaranteed income, inequality spirals out of control, leading to even more of the anger-driven policy we're starting to see today.

My opinion is that it is a very long-term goal. Compare it to the autonomous car: getting the first 90% of a fully autonomous car is much easier than the last 10%, and AGI is probably the same. Deep learning has gained traction in recent years, but we are using almost the same algorithms as 20 years ago, just on better hardware (plus some tweaks to make them work with bigger models). And I think we know almost nothing about how to achieve AGI. Just compare our deep learning models with what we know of the brain: they capture only a small subset of it. And we don't know a lot about our brains; we know very little.

Agreed. To date, Deep Learning has accomplished outstanding levels of discrimination and classification by combining supervised learning with probabilistic pattern matching. But it's widely acknowledged that most human learning is unsupervised and relies heavily on building upon large hierarchies of facts, their interdependencies, and a deep reliance on causation, none of which DL has yet shown any real facility toward realizing. One-shot examples of one-shot learning doth not a facile brain make.

Until DL shows significant progress toward these -- building knowledgebases and enabling their reuse -- and does so at the superlinear rate of development it's achieved for pattern recognition, I think it's likely that further development of the missing components of AGI will remain gradual and decades away.

You are right to say

> If you asked someone 10 years ago about the possibilities of ML/Deep learning they'd say it was far off too.

But when you read a little more about recent advances, you gain a healthy respect for the difficulty of the problems that are still left to solve. That tends to make researchers have lower expectations for an easy emergence of AGI than the general public.

We can't even generate a whole page of text that doesn't sound silly, with any neural network or AI algorithm to date. We're a long way off.

I don't know. It sounds more like the current approaches don't get us AGI. The machine learning tools are not enough. But that doesn't mean AGI is intrinsically hard. Maybe you just need a couple of different things in tandem, like OpenCog does.

I believe it is unfair (and unwise?) to throw in modifiers such as "very long-term" in the quoted sentence, or "very distant" in:

> It is hard for me to empathize with Musk’s fixation on evil super-intelligent AGI killer robots in a very distant future.

The truth is we don't know, (we can't know,) and we (on the whole) are making enormous strides. And it could take one small breakthrough to find ourselves "very" suddenly on the precipice of AGI and evil super-intelligent killer robots. Dismissing the current state of progress as trying to beat Atari is silly.

I, for one, am glad that some people with the resources are taking some efforts to get ahead of that "evil" aspect. But I don't see this as an "us" vs "them" scenario: it's also great that there are people helping make neural net algorithms more accessible.

My issue is that, while of course you're right that one can't know for sure, some people conclude that simply not being able to know justifies whipping up a frenzy and (in all likelihood) wasting part of a billion dollars that could go to, say, more productive research even within AI/ML.

Anyone could make a long list of things that might destroy humanity and are impossible to rule out. Many of them are vastly more likely to do so than malicious AGI.

Ok, I'll bite. Can you list several things that are vastly more likely to lead to the complete extinction of the human race (I'll assume that's what you meant by "destroy humanity") than malicious AGI?

Climate Change, leading to frequent crop failures, leading to the collapse of most nations and most industrial capacity.

That's what keeps me up at night these days.

That would be at the top of my list. And then there's nuclear attacks/disaster, asteroid hitting the earth, superbugs and biological weapons gone awry, physics experiment gone awry creating a black hole... I put superintelligent AI down towards alien invasion on my personal list of humanity-ending risks.

We rank all these scenarios very differently then :)

Climate change and nuclear war are huge issues, and we correctly invest orders of magnitude more money and time (which might still not be enough) trying to prevent or limit these than trying to prevent the birth of a malicious AGI, but they are extremely unlikely to lead to a complete extinction of the human race.

Regarding the other scenarios, if you allow me to move the goalposts from pure risk assessment:

- Preparing against an alien invasion seems futile, since given the timescales at play in the universe, the first aliens we meet will likely be millions of years behind or ahead of us.

- One way to survive an asteroid impact is to colonize another planet ahead of time, which Musk is working on.

- Physics experiments requiring specialized hardware like the LHC already have much more oversight than AI research where a breakthough could potentially happen in a garage on commodity hardware.

So it makes a lot of sense to me to invest some money in preventing an AI doomsday scenario next.

Musk says AI is the single biggest existential threat (unless I'm mistaken and someone else similarly famous said that, maybe Thiel?). From this it appears that you, along with the rest of society (as you mentioned by how much is spent on oversight), think he is wrong.

Nobody is arguing against funding AI. It's the fear-mongering we disagree with. It harms the field.

The question was about complete extinction. A population reduction by 90% is a horrible disaster, but it sets us back "just" a couple centuries, but it's hardly something that can eliminate humanity permanently; unlike quite a few other things.

- Asteroid strike

- Superbug pandemic

- Supervolcano eruption

I agree with this article: all the fear about AGI taking over the species seems to hide the far more dangerous likelihood of efficient but non-general AI ending up in the hands of intelligences with a proven history of oppressing humans: i.e. other humans.

Besides which, AGI, when it comes, is just as likely to be a breakthrough in some random's shed as the product of a billion-dollar research team's efforts to create something that can play computer games well. Not a lot Musk or anyone else can do to guard against that, except perhaps help create a world that doesn't need 'fixing' when such an AGI emerges.

> help create a world that doesn't need 'fixing' when such an AGI emerges.

I wish we would run with this rather than relying on tech and the market to solve all our woes.

Rachel and I (fast.ai co-founders) will be keeping an eye on this thread, so if you have any questions or thoughts, let us know!

Thanks Jeremy and Rachel for always sharing with the community!

Does anyone else feel this is opportunistic? While I appreciate the online course, why start firing shots at another non-profit? Seems the only realistic answer is to drum up attention.

> One significant difference is that fast.ai has not been funded with $1 billion from Elon Musk to create an elite team of researchers with PhDs from the most impressive schools who publish in the most elite journals. OpenAI, however, has.

It's because we care about our mission and want to support it and promote it. We're entirely self funded from our own personal finances, so trying to suggest we have some other purpose to our work is a little odd...

We want to bring attention to the important problems and people working to solve them, and contrast that with things that are distracting from that.

I can't speak for OpenAI as I know very little about it, but I'm more than halfway through part 1 of the fast.ai course and it's truly been a joy so far. Thanks Rachel and Jeremy!

I support research funding at all levels, and have nothing against mostly theoretical research, but is it really the best use of resources to throw $1 billion at reinforcement learning without any similar investments into addressing mass unemployment and wealth inequality (both of which are well-documented to cause political instability), how existing gender and racial biases are being encoded in our algorithms, and on how to best get this technology into the hands of people working on high impact areas like medicine and agriculture around the world?

Not quite my field, but perhaps such currently intractable, high-impact societal and medical problems do require theoretical breakthroughs after all... I guess I'm just concerned Rachel that --to put it in reinforcement learning terms-- we need both exploration and exploitation.

Absolutely! We'd like to see investment in both. Currently we're just not seeing the investment we think there ought to be in the areas Rachel mentioned, or in application areas around food production, education, medicine, and so forth.

Perhaps obvious question - but have you tried approaching the Gates foundation? They are more concerned with sorting the issues of inequality.

Or, to be slightly obtuse, have you tried approaching Elon Musk for funding? Perhaps he'll be a bit more flush with cash once the Model 3's start rolling out the door and he'll have another billion around to donate. One of Musk's OpenAI principles was trying to democratise AI and that sounds pretty much exactly like what you're doing.

Any timeline for when Part 2 is coming out? I checked your site and the URL went to a 404. Loved Part 1 (it enabled me to do an ML project I had been thinking about for over a year) and looking forward to part two! Please keep it going!

Planning to release it on Sunday night :)

As for the intellectual challenge, I really think Francois Chollet says it very well in his latest book: deep learning is a way to unpack, transpose, visualise and re-aggregate data fundamentals. We only need to master a bit of algebra and coding to make it happen on top of freeware wrappers like Keras and Anaconda, to name two. All the rest pertains to domain knowledge, ethics, politics and access to the hardware. Thanks Rachel and Jeremy for demystifying this field for the masses.

How did you get a copy?

From the Manning website: https://www.manning.com/books/deep-learning-with-python ... The 42% discount code is deeplearning, as provided by Chollet himself. The ebook is already complete; a little work is still needed to finalise it before commercial release in October.

> It is hard for me to empathize with Musk’s fixation on evil super-intelligent AGI killer robots in a very distant future.

I would argue this person has no business working with AI with this sort of myopic thinking.

If you think "fake news" is a problem now, just wait until decentralized AI networks are able to tweet, publish articles and affect public discourse in a way that is indistinguishable from human influence. This AI could be trained such that it pushes targeted propaganda and is able to destroy the lives of dissidents using a variety of tactics (generation of fake audio and video material that is indistinguishable from real life targets for the purpose of slander [1][2] as well as other techniques like DDOS, hacking, etc).

Now imagine a world in which all devices are connected in an "Internet of Things" and AI becomes sophisticated enough to exploit these networks and turn them against humans. This has wide ranging implications from surveillance to killing people by hijacking the computer systems in cars to even taking control of military weapon systems that can be used to control and oppress humans.

Now take all of that functionality and embed it within an actual robot that can move around in the real world and is physically stronger than humans [3].

There is nothing that can be done about this because non-proliferation treaties are impossible to enforce due to the nature of software. AI is going to lead to the extinction of all human life on this planet and this is coming from someone who generally disregards conspiracy theories as paranoid fear mongering. We have every reason to be afraid.

[1] https://arstechnica.com/information-technology/2016/11/adobe...

[2] https://www.theverge.com/2017/7/12/15957844/ai-fake-video-au...

[3] http://mashable.com/2014/01/12/robotic-muscle/

> I would argue this person has no business working with AI with this sort of myopic thinking.

What kind of thinking would you suggest would permit someone to 'work with AI'? Should they be quaking in their boots before being allowed to work at the altar?

You could argue (and with substantially more basis in fact) that Twitter and Facebook have elevated each individual's utterances to broadcast status, and that even without decentralized AI networks all the effects you are listing are already present in the modern world.

It just takes a little bit more work but there are useful idiots aplenty.

Your Skynet-like future need not happen at all; what we can imagine has no bearing on what is possible today, and that seems to me a much more relevant discussion.

> AI is going to lead to the extinction of all human life on this planet and this is coming from someone who generally disregards conspiracy theories as paranoid fear mongering.

Well, you don't seem to be able to resist this particular one. So your 'generally' may be less general than you think it is.

> We have every reason to be afraid.

No, we don't. I haven't seen a computer that I couldn't power off yet, SF movies to the contrary.

I would challenge you to power off Amazon's or Google's infrastructure. Because a dangerous AI won't live in a desktop; it will live in a geographically replicated system, across multiple power grids and legal jurisdictions—all of which is possible today. And all these systems have spent untold hours of very smart people's time to ensure they never turn off.

That doesn't mean they don't have off switches, and it does not mean they can't be turned off. I guarantee you the fire department knows how to switch any and all of those off in such a way that the diesels and the no-break PSUs don't matter one little bit.

> No, we don't. I haven't seen a computer that I couldn't power off yet, SF movies to the contrary.

Ok. Turn off the computer systems that control Russia's nuclear weapons delivery systems. I'm waiting.

Now you're being (even more) silly. Obviously I can't turn that one off any more than I can turn off the refrigerator downstairs from where I'm sitting. But there are people there that can turn it off, and if I was physically present there then I could turn it off.

You're working under the assumption that no one will attempt to use AI to assert power over others, and that the human race will come together to pull the plug when it becomes a problem. Looking at our current geopolitical situation, I'd rather be silly than naive.

Humans + nukes already equal an existential threat to the human race, so I'm not seeing how AI adds anything new to the equation in that scenario. "Humans using AI + nukes" vs "Humans using nukes", either one probably ends human life on Earth.

I thought the problem everybody was fretting about was the idea of a nominally beneficent AI going off on its own and doing something malicious / destructive.

AI is far away, if it arrives at all. Many are trying to pass off image recognition and pattern matching used to identify pornographic images as AI, and this just reflects their tenuous grasp of the problem at hand. Something like the human brain is well beyond this level of thinking.

Advancements, if any, will inevitably come from academia and those who have devoted decades to studying the human brain. Not software engineers intoxicated by themselves.

Nothing in the current software ecosystem gives you the tools or language to conceptualize or even imagine, let alone write, code that qualifies as AI. Code only does what it is programmed to do. An if-else is a decision, but that doesn't make it AI; multiply it by a million and it's still not AI.

This does not mean it cannot be dangerous. Autonomous weapons and anything to do with image recognition will find uses, but this is nothing like what people imagine when we talk about AI. Engineers are supposed to be precise in their use of language, but AI proponents are currently engaged neck-deep in hype. Self-driving cars will happen on extremely constrained roads, like trains. That will be the wake-up call for the excitable ones to come back down to earth.

Oh please.

The issue isn't what the end result could look like, but whether we'll actually get there if basic societal needs aren't met first. Prioritize things which are needed now and in the near future.


Typically if you start making crazy claims the onus is on you to present actual arguments.

Present an argument against what? All you have done is shot down my claims with some "life isn't science fiction" garbage. You haven't posted a single comment in this thread relevant to the future implications of AI. There's nothing thought provoking about your comments. You're just being dismissive.

You are imagining super intelligent malicious AI (or AI piloted by malicious actors) with orders of magnitude more resources and reach than we have today is instantaneously implanted into today's society. That is also not relevant to the future implications of AI, because as AI evolves so will humanity. If you demand how exactly this will come about, then I will first require you to explain exactly how superintelligent AI will come about. The same answer is equally applicable and useless for both sides: "I don't know, but it definitely will."

You're also ignoring basic ideas that can defeat the apocalyptic problems you suggest. For example, "fake news" in the sense that you propose (adversaries generating synthesized videos and audio indistinguishable from reality) can be trivially defeated by cryptographic signatures. No matter how good a deep net gets, it won't beat cryptographic hardness. No chaos will ensue when major web browsers verify videos of the president signed with an official government public key, the same way they verify websites with CAs today. In fact, "fake news" today is a completely different problem: people accept that the source is legit (CNN did indeed publish an article about so-and-so who said X, and they did actually say X), but they disagree with the meaning assigned to those facts.
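The idea can be illustrated with a toy sketch in Python's standard library. Note this uses a shared-secret MAC (`hmac`) purely for illustration; a real media-authentication scheme would use public-key signatures (e.g. Ed25519) so that verifiers never hold the signing key, and the key name and media bytes here are hypothetical:

```python
import hashlib
import hmac

# Hypothetical secret held by the publisher. A real deployment would use
# asymmetric signatures so browsers only need the public key.
SIGNING_KEY = b"official-broadcast-key"

def sign(media: bytes) -> str:
    # The publisher attaches this tag to the video at release time.
    return hmac.new(SIGNING_KEY, media, hashlib.sha256).hexdigest()

def verify(media: bytes, tag: str) -> bool:
    # A browser or player recomputes the tag; any tampering changes it.
    return hmac.compare_digest(sign(media), tag)

original = b"...raw video bytes..."
tag = sign(original)
assert verify(original, tag)            # the authentic copy checks out
assert not verify(original + b"x", tag) # any altered copy fails
```

However good a generative model gets at synthesizing video, it cannot produce a valid tag without the key; the forgery problem reduces to breaking the cryptography, not fooling human perception.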

You also ignore that one already has the capability to ruin a specific person's life via digital means today, without AI, just with enough dedicated time and money. Why bother with (expensive) superhuman AI targeting dissidents under an oppressive regime when instead, as most successful oppressive regimes today do, you can just lock them up or burn their house down?

In my humble opinion, if you find yourself in a situation where people don't want to present you with arguments to what you feel is logically bullet-proof reasoning, and that people are "just being dismissive," the issue is probably you, not them. Rather than bite back, the most useful approach is to reconsider your ideas. Attack them as critically as you attack the ideas of others. You may surprise yourself with what you find.

> That is also not relevant to the future implications of AI, because as AI evolves so will humanity.

So why do black hat hackers exist? Why do you have so much faith in humanity's ability and/or willingness to "evolve" alongside AI? My prediction is that AI will evolve and humans will still be the same murderous creatures they are today, only with more tools at their disposal.

> You're also ignoring basic ideas that can defeat the apocalyptic problems you suggest. For example, "fake news" in the sense that you propose (adversaries generating synthesized videos and audio indistinguishable from reality) can be trivially defeated by cryptographic signatures.

How trivial is it for a political figure who is already distrusted by the public to assert that "cryptographic signatures" are proof of foul play? Again, we are talking about propaganda, which plays off human perception, not truth (the "post-truth world" meme comes to mind). DKIM signatures were often cited during the 2016 election by the alt-right as "proof" that the Podesta emails released by WikiLeaks were legitimate. The assertion that the content of the emails could have still been modified without invalidating the signatures wasn't enough to convince voters who continue to push conspiracy theories surrounding them. Humans are pretty easy to manipulate. What makes you think AI is incapable of doing this?

> You also ignore that one already has the capability to ruin a specific person's life via digital means today, without AI, just with enough dedicated time and money.

I acknowledge that. My fear is that with greater capabilities, AI could be used to deceive humans easier and on a greater scale. A murderer can kill someone with a baseball bat. They can kill hundreds of people with an AR-15. There's a reason why we have regulations on the latter.

The axiom I am starting from is not to demonize AI. I am not a luddite and recognize that AI has just as much potential to be used for good. AI is just another technology. My fear speaks more to how humans will use that technology as it becomes more sophisticated.

Why do you have so much faith that autonomous superhuman AI will be such a Dalek-esque threat? If it's faith against faith, and you have no evidence to back up your side (while humanity adapting has millennia of evidence), there's no point in discussing it.

> Humans are pretty easy to manipulate...My fear is that with greater capabilities,...

Nobody is disagreeing with you. But this is hardly an observation of the apocalyptic proportions warned by AI risk proponents. If you're just suggesting governments regulate AI use, then you've changed your position from "stop the impending doom" to "limit the use of a dangerous tool." I have no doubt that when AI becomes good enough to be a dangerous tool, it will cause some major event and be regulated thereafter. You should start with that instead of "robots killing humans."

But if your argument is actually "stop the impending doom," then I disagree. Humans have figured out how to adapt in a world with nuclear weapons, which has shown far more risk to humanity than all but the most outlandish predictions of AI risk. You haven't presented one piece of evidence that says humans won't figure out how to adapt, or are incapable, unlikely, or unwilling to do so, whereafter humanity ceases to exist.

I agree. The best way to subjugate a machine and prevent it from breaking out of its bounds is to create an artificial "universe" in which it "exists" with arbitrary restrictions and forces, much like the speed of light, gravity and magnetism and have it work on problems using individual workers in a "lifecycle" in which they have their own motivations and "lives" that they live out until they either solve the problem or "die" along the way.

Come to think of it...

How do you get everyone (every government, every rogue organization, every hacker in their bedroom) working on AI to abide by those rules?

> solve the problem

What "problem"? What if I train a neural network to learn the patterns of individuals and integrate it with military drones to assassinate them because their existence is my "problem"? What are you going to do to stop me?

People are only unafraid of AI because they are ignorant of cybernetics.

Put another way, consider the old "What is AI" joke: 'When the computer wakes up and asks, "What's in it for me?"'


edit: In case it's not clear, I agree with supernintendo, if for different reasons. The only people more frightening than the happy-go-lucky AI people are the happy-go-lucky synthetic biology people. Makes me glad I don't have children. It's all one big experiment now, with no control group.

You're just awed by Musk and can't see how ridiculous what he's saying really is.

François Chollet nailed it with the following comparison, mocking LHC alarmists:

> It's not that turning on the LHC will create a black hole that will swallow the Earth. But it could! Albeit no physicist thought so.


In fact, Chollet links a whole series of essays worth reading:

1) The Myth of a Superhuman AI (https://www.wired.com/2017/04/the-myth-of-a-superhuman-ai/)

2) The Terminator Is Not Coming. (https://www.recode.net/2015/3/2/11559576/the-terminator-is-n...)

3) idlewords : Superintelligence - The Idea That Eats Smart People (http://idlewords.com/talks/superintelligence.htm)

The frustrating thing is that people with insane amounts of resources and disposable income get swallowed up by these Basilisk-ian non-concerns, while very real pressing issues like Police Brutality, Wealth Inequality, Lack of HealthCare, etc. are deemed too boring for enthusiastic action.

The Myth of Superhuman AI has to be a joke. Please tell me it's a joke.

> Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.

So "I'm smarter than my dog" is a meaningless sentence? My dog will be happy to hear that . . . if he could understand it.

> Humans do not have general purpose minds, and neither will AIs.

This statement (even if true) seems totally irrelevant to his argument. So I was really looking forward to read the long form version in section #2 and see if it somehow makes sense.

It doesn't seem to.

His point is that minds involve tradeoffs, so specialized agents might be better than a big general AI. Whose side is he arguing on here? He isn't showing that a general intelligence couldn't exist that's better than humans in every way. He's just saying that even it would be inferior in specialized tasks to even more inhuman, specialized minds. Comforting this is not.

> Emulation of human thinking in other media will be constrained by cost.

This section is kind of a hack too, but since at least the title makes sense, let's give him a freebie.

> Dimensions of intelligence are not infinite.

Oh golly. He starts off by asserting that the core of the AGI people's ideas relies on intelligence being on an unlimited scale. It includes this gem: "Temperature is not infinite — there is finite cold and finite heat." My estimate of this being a joke article is going back up.

Alright, I agree that intelligence, like temperature, is not infinite. I still don't want to be competing for a job opening with Von Neumann. Von Neumann doesn't want to be competing for a job with future-AGI-silicon-Von-Neumann, whose brain is 500 million GPUs in a datacenter stretching across Montana.

> Intelligences are only one factor in progress.

Right, just wildly the most important one.

My honest opinion is that I feel a little bad for the author. He clearly just doesn't want to believe the plausibility of this idea, and is just throwing every argument he can against it and hoping one of them sticks. I sympathize, but this kind of squid-ink writing probably doesn't serve as a great foundation for future debate.

> In fact, Chollet links a whole series of essays worth reading:

Appeal to authority. You're not presenting any substantive rebuttals, just assuming it's not a concern because someone else with credentials said so. It's identical to alleging that I'm only worried because Elon Musk says so.

> The frustrating thing is that people with insane amounts of resources and disposable income get swallowed up by these Basilisk-ian non-concerns, while very real pressing issues like Police Brutality, Wealth Inequality, Lack of HealthCare, etc. are deemed too boring for enthusiastic action.

Nice strawman. It's also pretty disrespectful for you to assume my socioeconomic background. I live in the United States and have to deal with the issues you pointed out on a visceral basis. That has nothing to do with AI though so I'm not sure why you're even bringing it up.

That's not an appeal to authority. The essays themselves are substantive; he didn't say, "Chollet says so, thus it is so."

More importantly there is an element of misuse going on here: the "fallacy fallacy" if you will. Fallacies like ad hominem and appeal to authority are very often mis-cited without substantive refutations. But in the absence of other qualifying data it is absolutely relevant to a dialectic session that you mention authorities in a subject matter. When we talk about things like consensus in sciences (like climate change, for example), that is a valid appeal to authority.

You don't have enough time or expertise to verify specialized information or even qualify its context. At some point you absolutely need to appeal to authority just to be productive. If you can assess data yourself then that's obviously better, but in the absence of that capacity the assertion of various authorities is a valid first heuristic.

He wasn't talking about you, he was talking about Musk.

That's fair. Regardless of Musk's talking points, I'm still worried and I don't care how many people here think I'm paranoid. Engineers during the Industrial Revolution had no idea they were sowing the seeds of global warming [1]. We're putting a lot of money into AI under the assumption that it's all going to be a smooth ride. There's this notion that even if neural networks become more sophisticated than those of human brains, that AI will be completely subservient to us and we can just pull the plug whenever we want. If there's one thing humans are good at, it's self-destructive hubris.

[1] http://theconversation.com/the-industrial-revolution-kick-st...

> Appeal to authority.

??? I just gave you a ton of arguments you can read through.

You want me to copy-paste them here?

Ultimately, your "arguments" are refuted with a simple "There's zero reason to be afraid of what you're afraid of".

I would have appreciated at least an abstract or synopsis but okay. When I have time I'll read through these articles.

> Ultimately, your "arguments" are refuted with a simple "There's zero reason to be afraid of what you're afraid of".

Yes, I'm sure it's that black and white.

Particle colliders are not exactly equivalent to AI. It's not as if I can just build one in my garage. Anyone in the world, regardless of intent, can train a neural network to do what they want. Extrapolate this decades down the road, to the point where AI and its capabilities are much more sophisticated. Do you want to put money on the likelihood that it's all just going to go down smoothly?

I don't even know why people are arguing with me. The idea of replacing human life with AI isn't exactly controversial in the realm of futurism.

> The idea of replacing human life with AI isn't exactly controversial in the realm of futurism.

It isn't in science fiction. But this is reality, and in this reality we do not have AGI and we have no idea how far away from it we are. And even if and when it happens, there is absolutely no guarantee that it will lead to the extinction of the human race and/or us ending up as slaves to the machine.

Yep! Just like climate change. It's chilly where I live right now, scientists don't all agree about climate change projections, and even if temperatures actually rise it won't lead to the extinction of the human race, so let's not worry about it.

Edit: /s because it would make me mad if even a few people thought I was being serious.

You don't need AGI to do a lot of harm with AI. I'm reminded of a piece of malware that used Instagram comments to encode the instructions it was using to communicate with its control server [1]. This is rudimentary software that any script kiddie could devise with a bit of social engineering. Neural nets / deep learning of today could scale this to the point where it's undetectable.

[1] https://www.engadget.com/2017/06/07/russian-malware-hidden-b...


You can worry as much as you want. And I'm free not to worry as much as you want.

Another subtle point (one of many) involves transfer of knowledge. My most recent system "learned" by self-modification (similar in concept to growing new synapse connections but done by self-modifying code). This kind of modification will eventually arrive with Neural Nets as they move beyond simple 'dropout' techniques to more context-specific additions of specialized subnets and/or cooperating subnets (e.g. the room-recognizer adds a 'table'-specific subnet).

Assume two systems start out in a similar state. After interactions the systems are no longer identical and they gradually diverge over time (think twins). One is used in accounting and thinks of a 'table' as a kind of spreadsheet. Another is used in woodworking and thinks of a 'table' as a wooden object.

The implication is that one cannot just 'copy' what the first system knows to 'teach' the second system. There would have to be some sort of teaching mechanism (aka college) to transfer the information.
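The divergence point can be made concrete with a toy sketch (hypothetical linear "systems" standing in for the two networks, plain NumPy): two models that start from identical parameters and then train on different tasks end up with incompatible weights, so naively copying one's parameters into the other would overwrite its knowledge rather than transfer it.

```python
import numpy as np

rng = np.random.default_rng(1)

def train(w, X, y, lr=0.1, steps=200):
    # Plain least-squares gradient descent on a linear model.
    for _ in range(steps):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

w0 = np.zeros(2)                 # two systems start in an identical state
X = rng.normal(size=(50, 2))     # shared stream of inputs

# Each system learns a different notion of "table":
w_acct = train(w0, X, X @ np.array([1.0, 0.0]))  # "accounting" task
w_wood = train(w0, X, X @ np.array([0.0, 1.0]))  # "woodworking" task

# After divergent experience, the parameters no longer agree, so a raw
# weight copy from one to the other destroys rather than teaches.
gap = np.linalg.norm(w_acct - w_wood)
```

Under these assumptions the gap is large even though both runs began from the same `w0`, which is the sense in which a "teaching mechanism" rather than a copy is needed.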

Personally I think it's absolutely important for some people to be working on AGI and others to be working on practical applications. Trying to push the boundaries teaches you a lot about what tools you still need. Researching narrow applications hones our experience with the techniques we already understand well and gives us new tools.

It sounds like people who don't work with machine learning asked that question. People who do work with machine learning should see the difference. Besides what is mentioned in the blog post, OpenAI is a research effort for moving the state of the art, while fast.ai is for teaching students in a non-math-heavy way.

We also do research and try to move the state of the art, although we share our results and methods largely through courses rather than academic papers.

Our plan is to teach more material each year, in less time, with fewer prerequisites, by both curating best-practice techniques and adding our own.

So far we've spent much more time on education than research since that's the highest leverage activity right now (helping create more world class practitioners helps move the field forward). And most of the research is more curation and meta-analysis to figure out what really works in practice.

Didn't realize you were doing research as well. I love what you have done with the courses and look forward to seeing how the computational linear algebra course does. Is there any way I could contact you to ask more questions? My email is in my profile.

If you published your research in peer-reviewed venues, it would help your democratization ambitions. Few people can pay attention to original research that is buried into teaching material.

There is little to object to in this post, but there is also little to praise. While it is an accurate description of the differences between OpenAI and fast.ai, it doesn't make a compelling argument that Elon and friends should be spending their money elsewhere. Indeed, if the potential for neural nets to help the world is so large, research into techniques like RL is by extension even more promising, and should be funded and explored. Sure, neural nets are being applied to all sorts of new problems of the sort we are familiar with. This is a period for making incremental improvements in the paradigm. But that should not obviate or preclude research that will lead to a paradigm shift. A false dichotomy.

> Reinforcement learning: the study of teaching computers how to beat Atari.

This statement says more about the author and her inability to understand RL than about RL itself. RL doesn't fit fast.ai's "AI is easy" narrative therefore it's not worth doing.

There's nothing hard to understand about RL. I'm not sure where you get that idea; if you find it hard, perhaps you just need to look at it a different way. Karpathy summarizes the differences with regular supervised learning in his policy gradient post:

> Policy gradients is exactly the same as supervised learning with two minor differences: 1) We don’t have the correct labels yi so as a “fake label” we substitute the action we happened to sample from the policy when it saw xi, and 2) We modulate the loss for each example multiplicatively based on the eventual outcome, since we want to increase the log probability for actions that worked and decrease it for those that didn’t.

(from http://karpathy.github.io/2016/05/31/rl/ )
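Karpathy's two differences show up directly in a minimal REINFORCE sketch on a hypothetical multi-armed bandit (plain NumPy; the bandit, learning rate, and step count here are illustrative, not from his post):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-armed bandit: arm 2 pays the most on average.
true_means = np.array([0.1, 0.5, 0.9])
logits = np.zeros(3)  # policy parameters

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

lr = 0.1
for _ in range(2000):
    probs = softmax(logits)
    a = rng.choice(3, p=probs)   # (1) the sampled action plays the role of the "fake label"
    reward = rng.normal(true_means[a], 0.1)
    grad_logp = -probs
    grad_logp[a] += 1.0          # gradient of log pi(a) for a softmax policy
    logits += lr * reward * grad_logp  # (2) the update is modulated by the outcome

# After training the policy should concentrate on the best arm (index 2).
probs = softmax(logits)
```

Remove the `reward` factor and `grad_logp` is exactly the cross-entropy gradient of supervised learning with `a` as the target, which is the point of the quoted paragraph.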

I find RL hard and no I don't need Karpathy's or fast.ai's dumbed down version of RL. I'm talking about cutting-edge RL like PSRL or RETRACE, not policy gradient or DQN.

Well OK... I don't know why you think those are hard (let alone cutting edge - PSRL isn't new), but it seems important for you to feel like you understand things other people aren't smart enough to, so there's probably not much more I can say.

You can say RL is not hard when you manage to teach all aspects of the SOTA RL algorithm (https://arxiv.org/abs/1707.06887) to your class such that they are able to answer any question about it (not just implement it). Good luck teaching metric spaces to code monkeys.

You guys are doing good work teaching tensorflow and algorithms/models researchers are coming up with, but are slapping those same researchers by disrespecting what they're working on now. Some humility would be wise.

Not sure why the exact details of state-of-the-art research are relevant here. Obviously that definition of RL is dumbing it down, as I'm sure Rachel knows, but the point is simple: teaching a computer to do something specific that a human can, without explicitly telling it whether something is good or bad, but rather having it learn on its own.

The latest research in RL isn't getting us that much closer to AGI. We can't plop a robot into the real world and tell it to use RL to learn everything.

Before 2012, you also couldn't run a system to classify among 1000 classes. Just because RL isn't there yet, doesn't mean it's not worth doing.

The reason we're discussing hardness of RL is fast.ai's narrative of "you don't need math for AI" and "AI is easy". Sure, implementing and applying AI is easy, and you just need to learn tensorflow, but doing even a modicum of novel research in RL requires a tremendous background in all kinds of math. I appreciate what fast.ai is doing to democratize as much of AI as possible, but that doesn't need to be at odds with other people prioritizing RL research.

fast.ai's narrative is "AI is easy for what you probably want to use it for". There are a ton of awesome applications that are enabled by the level of AI taught in the course. However, Rachel's article is about how AGI is actually really, really hard. So much so that we have no idea how to get there and can't predict when it will happen. So instead of fearmongering about AI, we should instead be encouraging everyone to do awesome new stuff with AI.

> It is hard for me to empathize with Musk’s fixation on evil super-intelligent AGI killer robots in a very distant future.

This reminded me of a talk by Andrew Ng I watched recently [1]:

"Worrying about evil AI robots today is a little bit like worrying about overpopulation on the planet Mars ... how do you not care ... and my answer is we haven't landed on the planet yet so I don't know how to work productively on that problem...of course, doing research on anti-evil AI is a positive thing but I do see that there is a massive misallocation of resources."

Nice talk all around if you can find the time.

[1] https://youtu.be/21EiKfQYZXc?t=37m45s

"how existing gender and racial biases are being encoded in our algorithms" sounds like a great first problem to work on for those worried about super-intelligence, as it's basically a concrete instance of the value-alignment problem.

IMO, as long as you accept that human brains run on physical processes (no souls) and that computers continue to improve, super-intelligent AI is inevitable; but it's reasonable to think it's more like 150 years away than 10. Given the magnitude of the consequences here, it's still worth spending some resources to work on.

Reinforcement Learning is fundamental research, it needs sponsors like Musk to survive. Agriculture and medicine are applied research, they can find their own independent business models for monetization. That's what we are doing at Startcrowd: http://www.startcrowd.club

I think Mark Nelson had succinctly described the problem with OpenAI (more broadly, with the "AI safety" concerns): https://twitter.com/mjntendency/statuses/859189428748791810

Regarding "killer robots":

If we can't make "paperclip maximization" the main goal of some human, why do we expect to be able to make it the main goal of some AGI?

We can't make anything the main goal of some human, because for us (just like any other animal) the goal system is genetically hardwired to a bunch of complex feedback loops (hunger, sexual drive, pain, social status feedback, etc.), and we can't turn off any of them, much less all of them, without fundamental alterations of something that formed very early in the evolution of multicellular life.

Any AGI system will also have some set of goals, just like we do. For us the goal is (vaguely speaking) to "feel good" and all the direct things that affect that, including feelings caused by anticipation of future events. But a set of goals that didn't have to survive millions of years of natural selection is pretty much arbitrary, and orthogonal to intelligence.

I mean, if it doesn't have goals, then it won't do anything, it's not an agent. You could have very smart narrow systems that aren't "agentive" and just e.g. provide answers to very complex questions, but whenever we talk about general artificial intelligence we are talking about a system that has a feedback loop with the external world, i.e., it does stuff, observes the results, learns from that (i.e. is self-modifying in some sense) and decides on further actions - which means having some goals.
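That feedback loop (act, observe the result, learn, decide on further actions) is exactly the standard reinforcement-learning setup, where the system's "goals" live entirely in the reward signal. A minimal sketch (the toy environment, its reward function, and the tabular Q-style update are all invented here for illustration):

```python
import random

# Toy environment: the agent's "goal" is encoded entirely in the reward.
# State is an integer position; reward pushes the agent toward position 10.
class ToyEnvironment:
    def __init__(self):
        self.state = 0

    def step(self, action):                  # action: -1 or +1
        self.state += action
        reward = -abs(10 - self.state)       # closer to 10 is "better"
        return self.state, reward

# Tabular agent: acts, observes results, updates its value estimates,
# then decides on further actions -- the feedback loop described above.
class Agent:
    def __init__(self, lr=0.5, epsilon=0.1):
        self.q = {}                          # (state, action) -> estimated value
        self.lr, self.epsilon = lr, epsilon

    def act(self, state):
        if random.random() < self.epsilon:   # occasional exploration
            return random.choice([-1, 1])
        return max([-1, 1], key=lambda a: self.q.get((state, a), 0.0))

    def learn(self, state, action, reward):
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.lr * (reward - old)

env, agent = ToyEnvironment(), Agent()
state = env.state
for _ in range(500):
    action = agent.act(state)
    next_state, reward = env.step(action)
    agent.learn(state, action, reward)
    state = next_state
```

The point of the sketch: everything the agent "wants" is in one line (the reward); swap that line and the same loop pursues a completely different goal, which is why goal specification is where the safety question lives.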

Try substituting in "wealth accumulation regardless of adverse effects" for paperclips.

Some humans have adopted that goal. If I understand your view, it should be a goal that an AGI could run with.

I think 'killer robots' might present more subtle dangers than a universe of paperclips.

We don't have the luxury of designing humans from the ground up. Nor do we fully understand the structures underlying human motivation. Neither will be true (we expect) of AI, eventually.

It would be nice if there were some fundamental property of motivation that kept paperclip maximization from being an overriding goal of an intelligent entity, but we don't yet have particularly strong reasons to believe that this is the case.

We can make "profit maximization" the main goal of several very smart humans, though that does require quite a lot of programming.

What about using machine learning to estimate the time needed to solve a difficult task with AI?

Maybe I'm missing something here.

First of all, I know a lot of the popular perception of artificial "general intelligence" is overly simplistic - I don't believe in the nerd rapture, and I agree with a lot of what was written in the three articles tweeted by Chollet that have been linked in this thread. And yet, I still don't see how AI is not a plausible existential threat.

Unless you believe that some magical business is going on in the human mind that isn't subject to the normal laws of physics, then I don't see how you can believe that there's anything our brains can do that another machine can't. Even if no cognitive skill can be increased to infinity, we have no good reason to believe that our brains represent the maximum performance of all possible cognitive modes. That's an appeal to the discredited idea that evolution has a "ladder", upon which we stand at the apex as the finished product. Natural selection doesn't optimize for intelligence, and it is not "finished". So, if our brains are machines (albeit highly complex ones that we only partially understand), and if they probably don't represent the maximum potential performance of cognition, then how can we say with any confidence that it is not possible to create another machine with higher cognitive performance across all (or nearly all) modes of cognition? And if that is possible, how can we say with confidence that such machines could pose no existential threat to us? Sure, maybe they won't, or maybe we'll never figure out how to build them. But how is it implausible? How is it something to laugh out of the room?

Furthermore, AI need not surpass us in all modes of cognition in order to be an existential threat. As AI gets better at accomplishing a wide variety of tasks, it becomes an ever more powerful lever for those who own it. The near-term threat from AI is socioeconomic: the replacement of vast numbers of jobs with AI/robotics controlled by a small number of people who receive all the profits of their "labor". It doesn't take much imagination to see how this could be, at the least, an existential threat to our current society if it is not addressed with adequate forethought.

All in all, I just do not see how AI is not an existential threat worth thinking about and sinking some money into researching how we can make it safer. The tired old argument about the need to spend those resources on more urgent matters doesn't hold water. There are seven billion of us. We can specialize - indeed, it's arguably our greatest strength! We can - and must! - devote resources and talent to a great many urgent issues, such as poverty, conflict, disease, and illiteracy. But I think we would be very unwise not to put a little of our wealth and time into researching how best to mitigate the long-term threats that don't seem urgent yet and might not even come to pass. If we don't, then chances are someday one of them will in fact come to pass and we'll wish we had worked on it sooner.

I agree with everything you said. However, I don't agree it's what should be hogging the money and interest right now. The impact of AI on employment (and therefore societal stability) is a much more pressing problem. As is the misuse of AI by people in many areas.

If society can get through these issues, then we may get to the point where the existential threat of superhuman AI is of most immediate concern.

Either way, we're not arguing that this issue deserves no time and attention at all. But currently it's the first (and often only) thing I'm asked about when I give talks about the future implications of AI, and it's receiving huge amounts of funding. Elon Musk, when he had the opportunity of addressing some of America's most powerful people, elected to spend his precious time discussing this issue, rather than anything else.

I definitely agree that it shouldn't hog all the money and interest - I just don't think that it's hogging a disproportionate share right now. Musk and Altman's OpenAI is a drop in the bucket in the universe of philanthropy. In the AI field, the bulk of work at companies and research at universities is not going toward existential risk mitigation.

I think the "If society can get through these issues, then..." perspective is fundamentally flawed. We will always, always have urgent problems. Guaranteed. We need to work hard on them. But we also need to work a little bit on the big things that are not of most immediate concern. I think a great analogy for this is climate change. If people in the early Industrial Revolution had known what we know now about greenhouse gases, they wouldn't have been inclined to worry. The amount they were putting into the atmosphere just was not significant, and even if it rose sharply, it would be many generations before there might be an issue. And yet, if people had put some focus on long-term threats, we might be in less dire straits now.

I definitely sympathize with your experience as an AI educator being constantly peppered with hypotheticals about superintelligence. I love your fast.ai course and I am sure you'd rather talk about teaching AI than about something that may or may not be an issue decades from now. But that doesn't mean the issue doesn't deserve more funding.

Musk's topic choice may seem repetitive to us, as we are in the tech world and read about/discuss this topic all the time. But a bunch of older politician types probably have not even been exposed to these ideas of existential risk, and as they hold a lot of power, it's not a bad thing to educate them on. For what it's worth, Musk also spent a lot of time discussing nearer-term implications like job displacement.

> For what it's worth, Musk also spent a lot of time discussing nearer-term implications like job displacement.

That's great to hear - I honestly had no idea. Which suggests that the media doesn't have as much interest in covering that as it does the 'killer robots' angle...

The transcript is definitely worth reading.

I think that's what has happened with this topic in general and Elon in particular. In the talks I've seen him give, he seems less concerned with AI being "out of control" and more with it being a very powerful lever under the control of a small number of people. That's what he's given as the rationale for OpenAI, and if you look at the research on their website, it's all real research, not bloviating about killer robots. But of course the media just wants to talk about killer robots.

I have the intuition that, given a specific "dangerous" goal (e.g., paperclip maximization), AGI will have too much autonomy to stick to it, and AI won't have enough autonomy to make it happen.

I believe that the fear of AI is unfounded.

Can you elaborate what you mean by "AGI will have too much autonomy to stick to it" ?

Any system that is capable of deciding that e.g. paperclip maximization is a bad goal must unavoidably have some scale of what constitutes better or worse goals... and that de facto means that whatever is at the "good" end of that scale will be the true goal of that system. But where does that scale come from?

Especially given that this scale is completely arbitrary: there are no competing "innate drives" (like those mammals have) that would make it difficult to stick with any arbitrary goal. We're not talking about giving some intelligence orders that might conflict with what it "really wants" - we're talking about configuring the ultimate desires of that system, and this configuration will define which world states it finds more or less "desirable" given complete autonomy.

How would you compare the work that you are doing to the work that is being done in China?

Can you be more specific? China is a big place and there's a lot happening there! One non-profit in China for instance has kindly translated the entirety of part 1 into Chinese and provided a discussion group for Chinese students.
