These various techniques that work by training, whether supervised or self-supervised, can have fatal flaws.
Take, for example, some high-tech camera technology. You use it on a drone to take pictures of warships from thousands of angles: U.S. warships, Russian warships, and Chinese warships. Using some neural net technology, you achieve 100% accuracy in identifying each ship.
One day this net decides that an approaching warship is Chinese and sinks it. But it turns out to be a U.S. warship. Clearly a mistake was made. A deep investigation reveals that what the neural network "learned" was actually related to the area of the ocean, based on sunlight details, rather than the shape of the warship or other features. Since the Chinese ships were photographed in Chinese waters, and the U.S. warship that was sunk happened to be IN those waters at the moment, the neural net worked perfectly.
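This failure mode (a model latching onto a spurious background cue instead of the object itself) is easy to reproduce in miniature. Below is a hedged toy sketch, not any real ship-detection system: a plain logistic regression where a "water/lighting" feature is perfectly confounded with the label during training and happens to be less noisy than the "hull shape" feature, so the model leans on the wrong one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy version of the warship story. Two features per "photo":
#   column 0: hull shape (≈1 for Chinese ships, ≈0 for US ships) - noisy
#   column 1: water cue  (≈1 for Chinese waters, ≈0 for US waters) - clean
# In training, every Chinese ship was photographed in Chinese waters,
# so the label is perfectly confounded with the water cue.
n = 400
y = rng.integers(0, 2, n).astype(float)   # 1 = Chinese ship
hull = y + rng.normal(0, 0.6, n)          # the noisy signal we *want* it to use
water = y + rng.normal(0, 0.3, n)         # the cleaner, spurious cue
X = np.stack([hull, water], axis=1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Plain logistic regression trained by gradient descent.
w = np.zeros(2)
for _ in range(2000):
    p = sigmoid(X @ w)
    w -= 0.1 * X.T @ (p - y) / n

# The learned weights favor the cleaner (but spurious) water cue,
# so a US warship (hull=0) sailing in Chinese waters (water=1)
# is confidently classified as Chinese.
p_chinese = sigmoid(np.array([0.0, 1.0]) @ w)
```

Note that on its own training data this model looks perfect, which is exactly the trap described above.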
Recognition and action are only part of the intelligence problem. Analysis is also needed.
Another solution would be to use autoencoders or GANs to create a latent code from the input image. By construction, these codes need to carry the most important features about the input, because otherwise they couldn't reconstruct it.
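As a minimal illustration of that claim (a toy setup I made up, not any particular paper's architecture): a linear autoencoder squeezing 4-D inputs through a 2-D latent code. If the data really lives on a 2-D subspace, training drives the reconstruction error down, which is only possible if the code captures the dominant features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4-D points that secretly live on a 2-D subspace,
# so a 2-D latent code can in principle reconstruct them.
basis = rng.normal(size=(2, 4))
X = rng.normal(size=(200, 2)) @ basis

# Linear encoder/decoder (no biases, no nonlinearity: the simplest case).
W_enc = 0.1 * rng.normal(size=(4, 2))
W_dec = 0.1 * rng.normal(size=(2, 4))

def reconstruction_error():
    Z = X @ W_enc                  # latent codes
    return np.mean((X - Z @ W_dec) ** 2)

err_before = reconstruction_error()
lr = 0.01
for _ in range(2000):
    Z = X @ W_enc
    X_hat = Z @ W_dec
    g_out = 2 * (X_hat - X) / len(X)   # gradient of the squared error
    g_dec = Z.T @ g_out
    g_enc = X.T @ (g_out @ W_dec.T)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc
err_after = reconstruction_error()
```

The code is forced to carry the important directions of the data: if it didn't, the reconstruction error could not shrink the way it does here.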
And regarding analysis: a lot of groups are attempting the leap from mapping "X -> y" to reasoning based on typed entities and relations. Reasoning would be more like a simulator coupled with an MCMC system that tries out various scenarios in its 'imagination' before acting.
There are many formulations: relational neural nets, graph-based convolutional networks, physical simulators based on neural nets, text-reasoning tasks based on multiple attention heads and/or memory. It's very exciting; we're closing in on reasoning.
That's a new area of research, actually.
Machine Learning works on a different timescale from everything else.
To piggyback on your comment: I think this is the real major cause for concern. AI disruption occurs at an exponentially higher rate than humans can learn, which affects things like career transitions.
No one may have been ill-intentioned when they applied AI and invented a way so that no one has to do job X manually any longer, but nonetheless all those who do job X are now stuck, and even if they retrain (optimistically, in 2-3 years) it will in all likelihood disrupt them again.
As technologists, maybe we have a bias: we have not been significantly career-disrupted by technological progress, so we can't fully see what it's like for those who aren't riding the wave but being swamped by it.
Would you mind providing some reference here? I am interested and not familiar with any such way.
E.g. Here is an example of a wolf (ship) detection system that is actually detecting snow (area of the ocean) https://youtu.be/hUnRCxnydCc?t=58
There's no leakage in either case. The model is learning one thing and you think it's learning another.
It doesn't generalise when the US ship is in Chinese waters, but that's because the system was never "learning" to recognize ships in the first place.
You are implying that the problem would have been in the set that was used as input, but my understanding is that in many cases you would only realize that mistake when it's already too late.
Hassabis also stated in another session that there are probably at least half a dozen mountains to climb before reaching AGI, and he would be surprised if it takes more than 20 years. He also said that it has been easier to develop major advances than he thought.
Considering that there has been about one major leap per year in neural network architectures and systems for the past 3-4 years (and this includes training neural networks to do one-shot learning for physical robot control after practicing in a simulator), should we really be that complacent?
Even if the advances slow down to one per 2 years and it takes 12 of them to get to AGI, that's only 24 years. If the rate continues to be one per year, that's only 12 years.
 From the start of https://youtu.be/h0962biiZa4
Note: Reinforcement Learning is far from the only component being researched at major labs, including DeepMind and OpenAI.
Even though they are high-profile people in DL, since people still don't know a lot about the field, their confidence means nothing.
When I was in college, my professors and textbooks alike claimed that in order to conquer Go, we would probably need a quantum computer or something really sci-fi, maybe in 50 years, even 100 years. Guess what: no one, not even the most optimistic person, predicted it would beat the best human player within 10 years. So, yeah, AGI might happen, maybe in 10 years, maybe in 100 years, but until it happens, no one really knows exactly when that moment will be.
This is what I call living in a bubble, dear sir/madam.
I grew up in a small town in Eastern Europe (~100k population), and in 1996, while we were in our teenage years, I and the local 50 or so programmers (using Apple II and 8086-based computers) were painfully aware that games like chess and Go are brute-force-able, with some elements of recognizing and memorizing viable strategies (and throwing out the unviable ones early on). We were 15-19 year old late teenagers, and we knew it. So be sure that many people knew it; they just never chose that area of expertise specifically.
This also says a lot about the quality of the average professors and textbooks, but let's not go off-topic, plus it's a huge discussion area.
Once someone builds it, stopping it might be very difficult. Here's why: https://youtu.be/4l7Is6vOAOA (less than 9 minutes and very clearly explained).
Dismissing even a 10% chance of catastrophic risk is not something we practice in any other domain. Would you dismiss a concern over airplanes that have a 10%, or even just a 1%, chance of crashing?
Certain 1930s physicists were making "my greatest fear is that my research is too successful" warnings that turned out to be pretty on-point.
I think the best way to understand AI tech is to try it out yourself. The closer you are to understanding its internals, the more informed you'll be. Fundamentally, it's currently all statistics. It's up to you, then, to decide whether humans are just statistical machines, or if we have something more that may not be understood by science yet, such as free will.
So the reality is, it is like when ancient humans knew birds could fly with wings, but people could only fly in their dreams. Now we know people think intelligently with their brains, but no one knows how to make a computer work the same way. It is far too early to talk about the risk, until we have the Wright brothers of AI to show us that it's possible.
Also, the goalpost for identifying something as a challenging cognitive skill worthy of the name AI is moved almost every time we make progress so people keep denying that we are closer to AGI. AlphaGo is the latest example.
Some AI researchers and CS people believe that Natural Language Understanding (NLU) is AI-complete, i.e. once we solve it we basically solve AI in the sense of AGI. I do not personally believe that; there are certain human cognitive skills that are not required for NLU. But I do think that solving NLU would bring us closer to solving AGI.
Let's say someone makes progress on NLU. What would be the minimum level sufficient to convince you that AGI is possible? Why minimum? Because we don't want AGI to be right at our doors before starting to prepare.
* Would getting 80% on the Winograd Schema Challenge be sufficient?
Other suggestions are welcomed, including by others.
This itself presents several big challenges: how do we represent the concept of reasoning in mathematical form? Is knowledge required or not? And what is the representation of such knowledge?
A truly intelligent machine should be able to take what a human takes (a piece of text), build its knowledge base from there, then answer a question just like a human, and give an explanation when asked for it.
There's a small chance aliens could find us, too. Yet, we don't know any more about that than we do about AI.
The most imminent threat to humans is humans. In the nearer future, it seems more likely that someone will use machine learning technology in weaponry to cause harm than create AGI. Or, that humans will undergo a global "cultural revolution" like in China or Cambodia (or ISIS now), killing anyone with an education, under the guise that "machines are evil", yet with the true goal of obtaining power, as humans often do.
Ultimately, people are free to work on what they want. If some people want to work on AGI safety, that's great. Perhaps their goal is to make humanity a little more kind. That's certainly worthy.
For me, I'm more interested in seeing research focused on giving better tools to doctors for diagnoses. Radiology is a great place to apply image recognition systems. We just need some good labeled datasets.
Sometimes I get frustrated by the "AGI is an existential risk" quotes like Musk's because I feel that's a distraction from some really beneficial applications of machine learning that will help save lives. It could cause over-hype, leading to another AI winter, pushing back life-saving advances 15 years, as happened after the 80s.
Personally, I don't have a problem with the kinds of things some AGI safety researchers, like Stuart Russell, say about AGI. He isn't saying it will arrive in 2030-2040, like Musk is.
Musk has an incentive to capitalize on people's fear of AGI. It attracts investment in his non-profit, which can in turn help him continue building his in-house AI program at Tesla. Primarily I disagree with his conclusions about AI tech, but I'm also wary of his motivations. I think some people view him as altruistic, and I just don't see that.
> What would be the minimum level sufficient to convince you that AGI is possible?
A system that can determine its own goals. We don't have anything close to that yet, and I can't see anything short of that being a good indicator that we are close to AGI.
Even if an AI does not determine its own ultimate goal(s), it could still cause us a lot of harm by creating subgoals that do not align with human values, which no one can yet articulate in full.
Overall, the controversy is worth it, because the force for progress is strong enough that another AI winter is highly unlikely as long as useful applications continue to be developed. Without the warnings, research on provably safe AI might not have sufficient resources, especially talented people, working on it.
Really disagree here. All it takes is a few well-positioned people, like those at Nnaisense, to create companies promising AGI and sucking up investment dollars. We all know greed is rampant in finance, and "AI" is a buzzword in startups/innovation these days that can land big VC dollars. There is corruption/snake oil in tech too, and you'll find the greediest floating around the most hyped tech.
I think companies like Nnaisense are the wrong place to invest, and that there is much more practical work to be done to advance humanity.
> Sometimes I get frustrated by the "AGI is an existential risk" quotes like Musk's because I feel that's a distraction from some really beneficial applications of machine learning that will help save lives.
While I absolutely and adamantly disagree that warnings should be dismissed as "distractions", I extract from your statement here that you want us to move forward to a true AI no matter what. But then...
> ...and that there is much more practical work to be done to advance humanity.
Right now AI looks like snake-oil selling, you realize that, right? I stand fully behind the statement that we have tons of practical and increasingly urgent issues to solve here on the material Earth.
Unless of course you're implying that the strife for inventing a true AI (=AGI) is a practical work. IMO it isn't. It's better than medieval alchemy, but not by much.
Warnings can be faked. Fear mongering can be lessened when people are informed on a subject.
> I extract from your statement here that you want us to move forward to a true AI no matter what.
Where the devil did you get that? I said no such thing. I think investments toward building AGI are largely a waste of money. If people want to spend their money on that, that's fine. I won't.
> Right now AI looks like snake oil selling, you realize that, right?
AGI does, yes. And for those who do not understand the difference between data science and AGI, probably all "AI tech" seems like snake oil. But it isn't. It's already helping doctors detect cancer, among other things.
> Unless of course you're implying that the strife for inventing a true AI (=AGI) is a practical work
I'm not. I have no idea where you got that from my comments.
Work is being done on climate change, but at the current rate, we're way off preventing catastrophic effects. I'd say that our current level of effective effort basically amounts to ignoring the problem.
There is a non-negligible possibility that AGI will be invented in 30 years. Many AI experts agree on that. A number of experts also believe that its invention could be highly beneficial or catastrophic, depending on its form and our preparation.
With cost-benefit analysis based on the best knowledge we have weighed by probabilities, it is clear that AGI risks are more substantial and worth at least as much investment as climate change. The current funding for AI Safety Research is not even 1/10th, perhaps less than 1/100th, of climate change funding. Inaction is also an action.
If you do not trust intelligent domain experts, or intelligent non-experts with almost no conflict of interest, like Bill Gates and Stephen Hawking, then please let us know which source(s) of knowledge we should rely on instead.
Note: I believe we should fund both. A certain but slow train wreck and an uncertain but even more catastrophic and possibly speedier train wreck are both worth preventing.
That's not true. Arctic methane release has the potential to become devastating within a small number of decades. Shakhova et al. conclude that "release of up to 50 Gt of predicted amount of hydrate storage [is] highly possible for abrupt release at any time". That would increase the methane content of the planet's atmosphere by a factor of twelve. That would be catastrophic by anyone's measure.
Also: In 2008 the United States Department of Energy National Laboratory system identified potential clathrate destabilization in the Arctic as one of the most serious scenarios for abrupt climate change.
There is a video from the Lima UN Climate Change Conference discussing this. The people talking about this are certainly not "no-one serious". See 34:23 for Ira Leifer, an atmospheric scientist at the University of California, saying that 4 degrees of warming means the Earth can probably sustain "a few thousand people, clustered at the poles".
I see 4 degrees of warming being bandied about a lot these days, but very little discussion of how catastrophic that would be. It also seems a lot more likely than AGI becoming a problem, to me at least. Sure, we should fund research into both, but only one of them has me worried about the world my daughter will inherit; the other is basically purely hypothetical right now.
Fighting a passive adversary is a much easier thing than fighting an active one (especially one smarter than you).
It's also at odds with the study built around the very biased question asking experts "when do you think AGI will arrive?". Many chose not to answer the question, and so their answers can't be averaged in.
Many of the AGI safety conscious crowd skip over the question of whether we will be able to create AGI and just assume we will.
There's no precedent for this kind of discovery. Fire, the lightbulb, and nuclear weapons all pale in comparison.
Nuclear weapons were deemed impossible by some experts at the time. Throughout history, whenever there are a number of experts who believe something to be impossible and a substantial number who believe it to be possible, the latter are almost always right.
See also my other answer that begins with "Unlike airplanes, ..."
Sure, go for it. I think this has been in the news a lot recently because Musk told government leaders to expect AGI as early as 2030-2040.
It's a bit of a ridiculous claim. One of the biggest AI competitions, ImageNet, expects to run for the next 12 years, and that's just for making sense of images.
Perhaps it'll be done in less than 12, however, the predictions of the ML community and Musk do not line up.
> Nuclear weapons were deemed impossible by some experts at the time.
I suppose time will tell with regards to AGI, then. I suspect we'll have another economic dip before 2030, and all this hype will die down before then. And we'll remember this time like any other in the past when we predicted flying cars by the year 2000.
Edit: I think the announcement above is about object recognition on still images, which is largely solved. The "new ImageNet" one on Kaggle in the reply below is for videos.
I don't really understand the press release on image-net.org. Maybe they meant the last competition hosted by the same group?
Edit: See page 84 of the PDF linked here: https://www.google.com/search?q=kaggle+site%3Aimage-net.org
Ps- screw you Google for not letting me copy links to PDF search results via my phone
Ok, will try
> Specifically I wonder what would be a minimum threshold for you to believe that AGI is likely possible.
Current machine learning tech doesn't have any sense of creating its own goals. We also don't know how to encode that. All of the systems we have target specific problems, like playing Atari games, identifying objects in images, or playing Go. These are all great milestones in machine learning tech. They do not indicate we're much closer to developing a new intelligence. If we could set a system out there and have it determine its own goals and learn things on its own, then maybe I'd believe we're close to creating AGI. As it stands, we don't know how to run a program that decides what it wants to learn. Every program we write has our hand in it.
My suspicion is that our quest for developing AI leads us to discover more about ourselves than AI itself. I think we'll continue to refine our definition of intelligence, and continue to use the tools we build to augment our own intelligence.
Most humans do not really come up with their goals on their own either. We are heavily influenced by genes and environment. People's personalities are partially changed by the environments and occupations they are in. Twin adoption studies also show that genes have surprisingly strong effects on personality, even after decades of living apart.
By your definition, humans do not possess general intelligence.
I don't think we can resolve an age-old philosophical question on Free Will in a comment thread, so I'll leave it at that. :) However, General Intelligence and Free Will are largely orthogonal.
We can imagine an alien who is highly capable of any task humans can perform and beyond, including coming up with creative subgoals as necessary, but will only serve its master's ultimate goal(s). An AGI could be like that alien, but built to serve human masters. The twist is that when someone orders an AGI to efficiently reduce animal suffering, without value alignment with humans it could come up with euthanasia as a subgoal to reduce the suffering of starving kittens.
I disagree. I believe we have free will, in addition to being influenced by our environment/genes.
This is my general point, that contemplating AI is a way for us to investigate how we ourselves operate. You have your theory and I have mine. Nobody has shown how we can be replicated.
> We can imagine an alien who is highly capable of any task humans can perform and beyond, including coming up with creative subgoals as necessary, but will only serve its master's ultimate goal(s). An AGI could be like that alien, but built to serve human masters. The twist is that, without value alignment with humans, some subgoal could be killing starving kittens to reduce their suffering
If the AGI's goals are determined by a human, then it is technically trivial to encode human values. The hard part is convincing humans not to encode evil values.
So it isn't that AGI would go rogue, it's that humans could. That's an age-old, Adam & Eve question. It's not specific to AI technology.
You might have heard of the Trolley Problem. There are many and more complicated variations of that.
This Harvard session on Justice (The Moral Side of Murder) is edifying and surprisingly fun to watch: https://www.youtube.com/watch?v=kBdfcR-8hEY
The problem with AGI is more acute than most other technologies because it is software-based, which makes it nigh impossible to regulate effectively, especially globally.
"Don't kill humans" sounds like a good start.
Honestly, this problem seems a lot simpler to me than creating AGI itself.
I agree it is challenging. The most imminent question is probably, "should a self driving car be allowed to kill 10 people on the sidewalk to avoid a head-on collision, or should it accept the head-on and let the driver die?"
These domain specific problems will come up before the AGI one does, and we can address them as they become relevant.
You've noted that regulating AGI is difficult, and it sounds like you don't have any other solution in mind.
It is definitely very challenging to create one, and more challenging than creating an arbitrary AGI. The morality also needs to be integrated into the Safe AGI as its core, in a way that is not susceptible to the self-modification abilities an AGI could have. Thus, we need to work on that aspect of AGI now.
Stuart Russell has outlined his preliminary thought on the topic: https://www.ted.com/talks/stuart_russell_how_ai_might_make_u...
> So what would be your equivalent position in the 1940s about the nuclear solution?
I've seen people throwing that example in this thread and I don't see why.
Multiple countries invested heavily in building the bomb. It was not a question of whether building the bomb was possible, but rather how soon, and by whom.
> In other words, what are the risks and benefits of adopting a "worried" versus a "don't worry about it" position?
Some people do worry about it, but everyone need not. I don't.
> If there's a possible huge meteor (say 30% chance) of hitting the Earth in about 50 years, would you be "worried"?
Good example. No I wouldn't because I know we have systems monitoring for such an event. If something is coming, we will know about it. I'm not sure how far in advance we would know, but I expect as it got nearer more and more people would do their best to come up with a good solution.
We're no nearer to AGI today than we were when neural networks were developed 30 years ago. Science fiction would have you believe otherwise, but we don't look at sci-fi movies of the 50s as particularly prescient, and we shouldn't do so for today's films either.
I don't really expect to see this solved in the USA before it's solved in poorer countries. In the USA there's a tension between pleasing the millions of surplus workers and pleasing the top billionaires that'll be minted when robots can replace millions of workers. It's cheaper for the billionaires to buy legislatures than to endure higher taxes or weakened IP rights.
By way of contrast, there are no local power brokers whose fortunes are built on intellectual property in Bolivia or Bangladesh. It'll be 99% upside for populist politicians to just ignore American IP concerns and freely "pirate" machines-that-do-work and machines-that-make-those-machines. The long term solution to machines-do-all-the-useful-work isn't to tax the profits and redistribute them to unemployed people for spending anyway. That's just an elaborate historical-theme-park imitation of a 20th century economy. "Machines make all clothing, then we tax those machines and redistribute money to the people so they can buy the clothing." The better solution is widespread copying of the maker-machines and the dramatic price deflation that follows on everything they can make.
Basically, in order to make a self replicating factory we need advanced 3d-printing, robotics and a large library of schematics. Then, a "physical compiler" could assemble the desired object by orchestrating the various tools and the movement of parts inside the assembly line. If this automated factory can create its own parts, then we have a self replicator. If you make it all open source and ship seed factories around the world, soon everyone will have their own stack to rely on.
Don't forget that even if certain materials no longer become rare or contested, location and raw energy will still be. Without a supreme AGI, or an upheaval of our most basic knowledge of physics, expect wars to be fought over rotating black hole real estate.
I suck at Math - I mean I _really_ suck at Math. I can visualize algorithms and data structures and have no problem whipping up programs. I have written a lot of code in my lifetime and have helmed a lot of successful projects in my capacity as lead programmer or architect.
But I will never be a programmer that writes code that uses Math, like for games, simulation, 3d graphics, operating systems, drivers, autopilots, etc...
So, can a Math idiot like me get into A.I. by going to your site?
So please, add at least three exercises for each application of each concept explained in your book.
If you do indeed finish that book, consider me sold on it even if it costs $150 -- if it contains a lot of practical examples and exercises.
So I would also be interested in this book of yours. If you want my feedback I'm also available but I certainly don't expect handouts; just add me to a list to spam when you get your book finished :)
And better still: you will understand the math; they use simple spreadsheets to illustrate the basic principles.
Start reading books and working on side projects, post them online, then attend some local meetups. The next step is to aim higher: start following the latest arXiv papers, trying to read them, and gradually getting comfortable implementing them. At this stage, you can land a job in the industry if you search really hard.
Disclaimer: that is how I did it, going from an application SDE to working on deep learning at a big company. It is doable; it took me 2 years or so, but it was worth it.
fast.ai aims to bring a similarly high level of understanding first, then backs that up with the basic principles of the underlying machine learning theory, rounded off with a bunch of real-world code examples of the classes of problems you are likely to encounter.
About fast.ai: I would want to know those basic principles of deep learning and neural networks. They don't seem so basic to me unless you have a good background in algebra, optimization, statistics, and the scientific method.
Probably start with Andrew Ng's Machine Learning course. It has a significant amount of math in it; try to understand it, but seriously, do not worry about it too much. Just get the high-level concepts and try to build some intuition for machine learning ideas and techniques. You don't need to do the assignments or work too hard on the course (but obviously it's helpful if you do).
Then read these. Don't worry if you're still confused at first. It's fine. Just go through 'em kinda slowly and try to see what's going on.
By now if you're still into it, I highly recommend Chris Olah's blog: http://colah.github.io/
It has some pretty complicated ideas in there, but the articles are illustrated and explained very well so you can get more of a feel for Neural Networks while also getting very excited about them.
Then it's probably time to start really building things. I would use Keras (https://keras.io/) at first, because it's very easy to get cool results without understanding everything under the hood. Do a tutorial or two; it's pretty intuitive, and if you've done everything above you should understand more or less what's going on.
Then try to find a cool dataset to work with that relates to something you're interested in. If you can't find anything you want to work with, just use a classic dataset (ImageNet is fine, even MNIST when you're just practicing), which will probably be less fun but will still let you learn. With whatever dataset you choose, implement a simple model on your own without a tutorial (of course, if you get stuck, referencing a tutorial is totally fine). Then see if you can tweak your model to get better and better scores.
Start reading papers. You can find the newest ones on Twitter from ML researchers (Karpathy, Sutskever, Hinton, LeCun are some names you could start with; there's probably a Twitter list out there somewhere), and then you can look at the references in those papers to keep finding more and more good ones. Implement any ideas in the papers you think are useful. Often Keras will have functionality to let you implement them easily. If it doesn't, then feel free to dip down into Tensorflow if you feel ready!
From there the world's yours. Find cool data to work with, implement papers to get a baseline measurement, and iterate in any way you can think of. It's very fun :)
The one thing is if you want to get state of the art results or do novel research you might need better hardware. AWS/Google Cloud/Floydhub are all options if you're willing to spend a little money, or you can just keep your expectations low ;)
Wow, that turned out to be more of a roadmap than I wanted it to be, sorry. The reason you don't need great math skills is that a lot of AI research is very intuitive: gradient descent can be internalized as a ball rolling down a hill, momentum in training neural nets is like momentum in the real world, neural nets are just manipulating data in high-dimensional space... it's all stuff you can visualize instead of using mathematical symbols to depict, but since it's so much easier to write with symbols than to create a powerful image, symbols are used often.
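For what it's worth, the "ball rolling down a hill" picture fits in a few lines (a toy 1-D quadratic I made up, not any real training setup): the velocity term is literally the ball's momentum.

```python
# Minimize f(x) = x^2, whose gradient is 2x, starting from x = 5.
def grad(x):
    return 2 * x

def descend(momentum, steps=200, lr=0.1):
    x, v = 5.0, 0.0
    for _ in range(steps):
        v = momentum * v - lr * grad(x)  # velocity: decays, pushed by the slope
        x = x + v                        # the "ball" moves by its velocity
    return x

plain = descend(momentum=0.0)   # ordinary gradient descent
heavy = descend(momentum=0.9)   # a heavy ball: overshoots, then settles
```

Both runs end up at the minimum; the momentum run overshoots along the way, which in higher dimensions is exactly what helps it coast through shallow ravines.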
To be honest I still usually skip over some equations in papers if they look daunting. Most of the time I don't need to understand them. Most of the time, I can read the abstracts, look at some figures, look at the results, and that's everything I need.
"Cracking AGI is a very long-term goal. The most relevant field of research is considered by many to be reinforcement learning: the study of teaching computers how to beat Atari. Formally, reinforcement learning is the study of problems that require sequences of actions that result in a reward/loss, and not knowing how much each action contributes to the outcome. Hundreds of the world’s brightest minds, with the most elite credentials, are working on this Atari problem."
There's also plenty of evidence already that RL isn't really the right way to tackle the credit assignment problem. E.g. random search is only about 10x slower.
and other control problems:
I think not so long ago supervised neural nets felt similarly 'toy', and now they are the state-of-the-art models for machine translation and object detection. Just because they aren't useful today doesn't mean that they won't be.
Not true. RL has often been applied in production systems inside the bigger Internet companies. The results are just not published.
So your remark means that more education about RL is needed, and OpenAI, alongside other institutions like FastAI or Startcrowd, is helping with this effort.
The precise same sentiment could be made about his plans to go to Mars: people are starving on Earth. But a Martian "backup civilization" might someday save humanity. Similarly, godlike AGI might be mere decades away and apocalyptically dangerous; predicting scientific advancements is generally impossible, but some people working in the field don't think it's an unreasonable possibility. That humanity, via Musk, has put a billion dollars into worrying about this somewhat plausible existential threat seems reasonable to me.
Musk is not a staid, Bill Gates-style philanthropist optimizing for dollar-for-dollar benefits. He's picked a set of flashy, long-term goals and started manic efforts to promote them. Perhaps it's a character flaw. But given that, I think he's at least chosen well.
I think there's room for both. It's great what fast.ai are doing and they'll benefit me more directly, but Elon Musk is pushing at the other end of the scale.
I keep feeling like I'm a total Musk fanboy, but lots of his arguments that I've seen make sense:
1. The major one is that doing things like aiming for Mars provides inspiration for a better future. There are similar quotes from other people involved with NASA.
2. He talks repeatedly about improving the chances of better futures (clean energy, hyperloop) and lowering the chances of bad ones (bad AI, being a single-planet species on a planet that has an extinction event).
He's put all his money into the things he sees as important, of which bad AI seems to be high on his list.
I didn't like this part. Elon Musk may be misguided about the immediate threat of AI but he is free to invest his money in whatever he thinks is important.
Is this fair to say? I feel like advancements in the field happened far quicker than anyone expected, and every few years we are reevaluating timelines. Especially given the research happening in tandem that will almost certainly speed up AGI, like graphene, quantum computing, advanced GPU design...
If you asked someone 10 years ago about the possibilities of ML/Deep learning they'd say it was far off too. I'm not going to say Kurzweil is correct, but if I know anything, it's that historically these things have happened faster than expected. Look at 1997 -> 2017: 20 years, but what hasn't changed?
Appreciate any discussion as I am not an expert :)
If we can't navigate that successfully, we'll never get to see AGI...
I had a conversation with a C-level exec of a large company last week around this theme. My suggestion that limited AI such as self-driving cars has the potential to create a vast number of extremely frustrated individuals, making a second round of 'Sabotage' and Luddites a definite possibility, was waved away as if those people don't matter and don't have any power.
I really wonder how far removed you'd have to be from the life of a truck or cab driver not to empathize with them, and not to realize that if you take away some of the last low-education jobs that let you keep your hands clean, there will be some kind of backlash.
AGI will expand that feeling to all of us.
I.e., the world is now massively richer because of all this awesome new technology.
Yes, there could be some short term disruption, but honestly I think things will end up fine, just because of the massive amount of extra money and wealth that the world will have that could be used to solve all those short term problems that disruption causes.
The issue is income distribution - and just making more money doesn't magically fix the problem.
Magically, no. With an effort, yes. There are already experiments with basic income, for example.
Jeremy Rifkin's books, such as https://www.amazon.com/End-Work-Decline-Global-Post-Market/d... and 'Zero Marginal Cost', discuss the topic.
I'm concerned that 'some short term disruption' will actually be quite widespread and long-lasting, given increasing interdependencies.
Yes, fabric was much cheaper, but in the meantime they were starving to death. The groups that make billions off this technology will lobby to keep more of their earnings, so this vast wealth redistribution you want to happen won't.
Could you unpack this a bit more?
Are you referring to the disruption in jobs caused by the effectiveness of narrow AI when we reach that point?
Until DL shows significant progress toward these -- building knowledgebases and enabling their reuse -- and does so at the superlinear rate of development it's achieved for pattern recognition, I think it's likely that further development of the missing components of AGI will remain gradual and decades away.
> If you asked someone 10 years ago about the possibilities of ML/Deep learning they'd say it was far off too.
But when you read a little more about recent advances, you gain a healthy respect for the difficulty of the problems that are still left to solve. That tends to make researchers have lower expectations for an easy emergence of AGI than the general public.
We can't even generate a whole page of text that doesn't sound silly, with any neural network or AI algorithm to date. We're a long way off.
> It is hard for me to empathize with Musk’s fixation on evil super-intelligent AGI killer robots in a very distant future.
The truth is we don't know, (we can't know,) and we (on the whole) are making enormous strides. And it could take one small breakthrough to find ourselves "very" suddenly on the precipice of AGI and evil super-intelligent killer robots. Dismissing the current state of progress as trying to beat Atari is silly.
I, for one, am glad that some people with the resources are taking some efforts to get ahead of that "evil" aspect. But I don't see this as an "us" vs "them" scenario: it's also great that there are people helping make neural net algorithms more accessible.
Anyone could make a long list of things it's impossible to rule out that would destroy humanity. Many such things are vastly more likely to do so than malicious AGI.
That's what keeps me up at night these days.
Climate change and nuclear war are huge issues, and we correctly invest orders of magnitude more money and time (which might still not be enough) trying to prevent or limit these than trying to prevent the birth of a malicious AGI, but they are extremely unlikely to lead to a complete extinction of the human race.
Regarding the other scenarios, if you allow me to move the goalposts from pure risk assessment:
- Preparing against an alien invasion seems futile, since given the timescales at play in the universe, the first aliens we meet will likely be millions of years behind or ahead of us.
- One way to survive an asteroid impact is to colonize another planet ahead of time, which Musk is working on.
- Physics experiments requiring specialized hardware like the LHC already have much more oversight than AI research, where a breakthrough could potentially happen in a garage on commodity hardware.
So it makes a lot of sense to me to invest some money in preventing an AI doomsday scenario next.
Nobody is arguing against funding AI. It's the fear-mongering we disagree with. It harms the field.
- Superbug pandemic
- Supervolcano eruption
Besides which, AGI, when it comes, is just as likely to be a breakthrough in some random's shed as from a billion-dollar research team's efforts to create something that can play computer games well. There's not a lot Musk or anyone else can do to guard against that, except perhaps help create a world that doesn't need 'fixing' when such an AGI emerges.
I wish we would run with this rather than relying on tech and the market to solve all our woes.
> One significant difference is that fast.ai has not been funded with $1 billion from Elon Musk to create an elite team of researchers with PhDs from the most impressive schools who publish in the most elite journals. OpenAI, however, has.
We want to bring attention to the important problems and people working to solve them, and contrast that with things that are distracting from that.
Not quite my field, but perhaps such currently intractable, high-impact societal and medical problems do require theoretical breakthroughs after all... I guess I'm just concerned Rachel that --to put it in reinforcement learning terms-- we need both exploration and exploitation.
Or, to be slightly obtuse, have you tried approaching Elon Musk for funding? Perhaps he'll be a bit more flush with cash once the Model 3s start rolling out the door and he'll have another billion around to donate. One of Musk's OpenAI principles was trying to democratise AI, and that sounds pretty much exactly like what you're doing.
I would argue this person has no business working with AI with this sort of myopic thinking.
If you think "fake news" is a problem now, just wait until decentralized AI networks are able to tweet, publish articles, and affect public discourse in a way that is indistinguishable from human influence. This AI could be trained to push targeted propaganda and to destroy the lives of dissidents using a variety of tactics (generation of fake audio and video material that is indistinguishable from real-life targets for the purpose of slander, as well as other techniques like DDoS, hacking, etc.).
Now imagine a world in which all devices are connected in an "Internet of Things" and AI becomes sophisticated enough to exploit these networks and turn them against humans. This has wide ranging implications from surveillance to killing people by hijacking the computer systems in cars to even taking control of military weapon systems that can be used to control and oppress humans.
Now take all of that functionality and embed it within an actual robot that can move around in the real world and is physically stronger than humans.
There is nothing that can be done about this because non-proliferation treaties are impossible to enforce due to the nature of software. AI is going to lead to the extinction of all human life on this planet and this is coming from someone who generally disregards conspiracy theories as paranoid fear mongering. We have every reason to be afraid.
What kind of thinking would you suggest would permit someone to 'work with AI'? Should they be quaking in their boots before being allowed to work at the altar?
You could argue (and with substantially more basis in fact) that Twitter and Facebook have elevated each individual's utterances to broadcast status, and that even without decentralized AI networks all the effects you are listing are already present in the modern world.
It just takes a little bit more work but there are useful idiots aplenty.
Your SkyNet-like future need not happen at all; what we can imagine has no bearing on what is possible today, and that seems to me a much more relevant discussion.
> AI is going to lead to the extinction of all human life on this planet and this is coming from someone who generally disregards conspiracy theories as paranoid fear mongering.
Well, you don't seem to be able to resist this particular one. So your 'generally' may be less general than you think it is.
> We have every reason to be afraid.
No, we don't. I haven't seen a computer that I couldn't power off yet, SF movies to the contrary.
Ok. Turn off the computer systems that control Russia's nuclear weapons delivery systems. I'm waiting.
I thought the problem everybody was fretting about was the idea of a nominally beneficent AI going off on its own and doing something malicious / destructive.
Advancements, if any, will inevitably come from academia and from those who have devoted decades to studying the human brain, not from software engineers intoxicated by themselves.
Nothing in the current software ecosystem gives you the tools or language to conceptualize or even imagine, let alone write, code that can qualify as AI. Code only does what it is programmed to do. An if-else is a decision, but that doesn't make it AI; multiply it by a million and it's still not AI.
This does not mean it cannot be dangerous. Autonomous weapons and anything to do with image recognition will find uses, but this is nothing like what people imagine when we talk about AI. Engineers are supposed to be precise in their use of language, but AI proponents are currently engaged neck-deep in hype. Self-driving cars will happen on extremely constrained roads, like trains. That will be the wake-up call for the excitable ones to come back down to earth.
The issue isn't what the end result could look like, but whether we'll actually get there if basic societal needs aren't met first. Prioritize things which are needed now and in the near future.
You're also ignoring basic ideas that can defeat the apocalyptic problems you suggest. For example, "fake news" in the sense that you propose (adversaries generating synthesized videos and audio indistinguishable from reality) can be trivially defeated by cryptographic signatures. No matter how good a deep net gets, it won't beat cryptographic hardness. No chaos will ensue when major web browsers verify videos of the president signed with an official government public key, the same way they verify websites with CAs today. In fact, "fake news" today is a completely different problem: people accept that the source is legit (CNN did indeed publish an article about so-and-so who said X, and they did actually say X), but they disagree with the meaning assigned to those facts.
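The verify-or-reject flow is simple to sketch. A real deployment would use public-key signatures (e.g. Ed25519), so that anyone can verify but only the key holder can sign; here I use stdlib HMAC as a stand-in, and the key and payload are invented for the example:

```python
import hashlib
import hmac

# Hypothetical signing key. In a real public-key scheme, only the
# verification key would be distributed; HMAC is a simplification here.
SECRET = b"official-signing-key"

def sign(video_bytes: bytes) -> str:
    """Produce an authenticity tag for the published media."""
    return hmac.new(SECRET, video_bytes, hashlib.sha256).hexdigest()

def verify(video_bytes: bytes, signature: str) -> bool:
    """Accept only media whose tag matches; constant-time compare."""
    return hmac.compare_digest(sign(video_bytes), signature)

video = b"presidential address, frame data..."
sig = sign(video)

print(verify(video, sig))                # authentic copy: True
print(verify(video + b"tampered", sig))  # altered frame: False
```

No matter how convincing a synthesized video looks, flipping a single byte breaks the tag, which is the point being made above: the forgery problem reduces to key management, not perception.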
You also ignore that one already has the capability to ruin a specific person's life via digital means today, without AI, just with enough dedicated time and money. Why bother with (expensive) superhuman AI targeting dissidents under an oppressive regime when instead, as most successful oppressive regimes today do, you can just lock them up or burn their house down?
In my humble opinion, if you find yourself in a situation where people don't want to present you with arguments to what you feel is logically bullet-proof reasoning, and that people are "just being dismissive," the issue is probably you, not them. Rather than bite back, the most useful approach is to reconsider your ideas. Attack them as critically as you attack the ideas of others. You may surprise yourself with what you find.
So why do black hat hackers exist? Why do you have so much faith in humanity's ability and/or willingness to "evolve" alongside AI? My prediction is that AI will evolve and humans will still be the same murderous creatures they are today, only with more tools at their disposal.
> You're also ignoring basic ideas that can defeat the apocalyptic problems you suggest. For example, "fake news" in the sense that you propose (adversaries generating synthesized videos and audio indistinguishable from reality) can be trivially defeated by cryptographic signatures.
How trivial is it for a political figure who is already untrusted by the public to assert that "cryptographic signatures" are proof of foul play? Again, we are talking about propaganda, which plays off of human perception, not truth (the "post-truth world" meme comes to mind). DKIM signatures were often cited during the 2016 election by the alt-right as "proof" that the Podesta emails released by WikiLeaks were legitimate. The assertion that the content of the emails could have been modified without invalidating the signatures wasn't enough to convince voters who continue to push conspiracy theories surrounding them. Humans are pretty easy to manipulate. What makes you think AI is incapable of doing this?
> You also ignore that one already has the capability to ruin a specific person's life via digital means today, without AI, just with enough dedicated time and money.
I acknowledge that. My fear is that with greater capabilities, AI could be used to deceive humans easier and on a greater scale. A murderer can kill someone with a baseball bat. They can kill hundreds of people with an AR-15. There's a reason why we have regulations on the latter.
The axiom I am starting from is not to demonize AI. I am not a luddite and recognize that AI has just as much potential to be used for good. AI is just another technology. My fear speaks more to how humans will use that technology as it becomes more sophisticated.
> Humans are pretty easy to manipulate...My fear is that with greater capabilities,...
Nobody is disagreeing with you. But this is hardly an observation of the apocalyptic proportions warned by AI risk proponents. If you're just suggesting governments regulate AI use, then you've changed your position from "stop the impending doom" to "limit the use of a dangerous tool." I have no doubt that when AI becomes good enough to be a dangerous tool, it will cause some major event and be regulated thereafter. You should start with that instead of "robots killing humans."
But if your argument is actually "stop the impending doom," then I disagree. Humans have figured out how to adapt in a world with nuclear weapons, which have shown far more risk to humanity than all but the most outlandish predictions of AI risk. You haven't presented one piece of evidence that humans won't figure out how to adapt, or are incapable, unlikely, or unwilling to do so, such that humanity ceases to exist.
Come to think of it...
> solve the problem
What "problem"? What if I train a neural network to learn the patterns of individuals and integrate it with military drones to assassinate them because their existence is my "problem"? What are you going to do to stop me?
Put another way, consider the old "What is AI" joke: 'When the computer wakes up and asks, "What's in it for me?"'
edit: In case it's not clear, I agree with supernintendo, if for different reasons. The only people more frightening than the happy-go-lucky AI people are the happy-go-lucky synthetic biology people. Makes me glad I don't have children. It's all one big experiment now, with no control group.
François Chollet nailed it with the following comparison, mocking LHC alarmists:
> It's not that turning on the LHC will create a black hole that will swallow the Earth. But it could! Albeit no physicist thought so.
1) The Myth of a Superhuman AI (https://www.wired.com/2017/04/the-myth-of-a-superhuman-ai/)
2) The Terminator Is Not Coming. (https://www.recode.net/2015/3/2/11559576/the-terminator-is-n...)
3) idlewords : Superintelligence - The Idea That Eats Smart People (http://idlewords.com/talks/superintelligence.htm)
The frustrating thing is that people with insane amounts of resources and disposable income get swallowed up by these Basilisk-ian non-concerns, while very real pressing issues like Police Brutality, Wealth Inequality, Lack of HealthCare, etc. are deemed too boring for enthusiastic action.
> Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.
So "I'm smarter than my dog" is a meaningless sentence? My dog will be happy to hear that . . . if he could understand it.
> Humans do not have general purpose minds, and neither will AIs.
This statement (even if true) seems totally irrelevant to his argument. So I was really looking forward to reading the long-form version in section #2 and seeing if it somehow makes sense.
It doesn't seem to.
His point is that minds involve tradeoffs, so specialized agents might be better than a big general AI. Whose side is he arguing on here? He isn't showing that a general intelligence couldn't exist that's better than humans in every way. He's just saying that even it would be inferior in specialized tasks to even more inhuman, specialized minds. Comforting this is not.
> Emulation of human thinking in other media will be constrained by cost.
This section is kind of a hack too, but since at least the title makes sense, let's give him a freebie.
> Dimensions of intelligence are not infinite.
Oh golly. He starts off by asserting that the core of the AGI people's ideas relies on intelligence being on an unlimited scale. It includes this gem: "Temperature is not infinite — there is finite cold and finite heat." My estimate of this being a joke article is going back up.
Alright, I agree that intelligence, like temperature, is not infinite. I still don't want to be competing for a job opening with Von Neumann. Von Neumann doesn't want to be competing for a job with future-AGI-silicon-Von-Neumann, whose brain is 500 million GPUs in a datacenter stretching across Montana.
> Intelligences are only one factor in progress.
Right, just wildly the most important one.
My honest opinion is that I feel a little bad for the author. He clearly just doesn't want to believe the plausibility of this idea, and is just throwing every argument he can against it and hoping one of them sticks. I sympathize, but this kind of squid-ink writing probably doesn't serve as a great foundation for future debate.
Appeal to authority. You're not presenting any substantive rebuttals, just assuming it's not a concern because someone else with credentials said so. It's identical to alleging that I'm only worried because Elon Musk says so.
> The frustrating thing is that people with insane amounts of resources and disposable income get swallowed up by these Basilisk-ian non-concerns, while very real pressing issues like Police Brutality, Wealth Inequality, Lack of HealthCare, etc. are deemed too boring for enthusiastic action.
Nice strawman. It's also pretty disrespectful for you to assume my socioeconomic background. I live in the United States and have to deal with the issues you pointed out on a visceral basis. That has nothing to do with AI though so I'm not sure why you're even bringing it up.
More importantly there is an element of misuse going on here: the "fallacy fallacy" if you will. Fallacies like ad hominem and appeal to authority are very often mis-cited without substantive refutations. But in the absence of other qualifying data it is absolutely relevant to a dialectic session that you mention authorities in a subject matter. When we talk about things like consensus in sciences (like climate change, for example), that is a valid appeal to authority.
You don't have enough time or expertise to verify specialized information or even qualify its context. At some point you absolutely need to appeal to authority just to be productive. If you can assess data yourself then that's obviously better, but in the absence of that capacity the assertion of various authorities is a valid first heuristic.
??? I just gave you a ton of arguments you can read through.
You want me to copy-paste them here?
Ultimately, your "arguments" are refuted with a simple "There's zero reason to be afraid of what you're afraid of".
> Ultimately, your "arguments" are refuted with a simple "There's zero reason to be afraid of what you're afraid of".
Yes, I'm sure it's that black and white.
I don't even know why people are arguing with me. The idea of replacing human life with AI isn't exactly controversial in the realm of futurism.
It isn't in Science Fiction. But this is reality, and in this reality we do not have AGI and we have no idea of how far we are away from it. And even if and when it happens there is absolutely no guarantee that that will lead to the extinction of the human race and/or us ending up as slaves to the machine.
Edit: /s, because it would make me mad if even a few people thought I was being serious.
Assume two systems start out in a similar state. After interactions the systems are no longer identical and they gradually diverge over time (think twins). One is used in accounting and thinks of a 'table' as a kind of spreadsheet. Another is used in woodworking and thinks of a 'table' as a wooden object.
The implication is that one cannot just 'copy' what the first system knows to 'teach' the second system. There would have to be some sort of teaching mechanism (aka college) to transfer the information.
Our plan is to teach more material each year, in less time, with fewer prerequisites, by both curating the best practical techniques and adding our own.
So far we've spent much more time on education than research since that's the highest leverage activity right now (helping create more world class practitioners helps move the field forward). And most of the research is more curation and meta-analysis to figure out what really works in practice.
This statement says more about the author and her inability to understand RL than about RL itself. RL doesn't fit fast.ai's "AI is easy" narrative therefore it's not worth doing.
> Policy gradients is exactly the same as supervised learning with two minor differences: 1) We don’t have the correct labels yi so as a “fake label” we substitute the action we happened to sample from the policy when it saw xi, and 2) We modulate the loss for each example multiplicatively based on the eventual outcome, since we want to increase the log probability for actions that worked and decrease it for those that didn’t.
(from http://karpathy.github.io/2016/05/31/rl/ )
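Karpathy's two "minor differences" fit in a few lines. Here's a toy sketch (the bandit problem and hyperparameters are invented for illustration) of REINFORCE on a 2-armed bandit: the sampled action plays the role of the "fake label," and the log-probability gradient is modulated by the observed outcome:

```python
import math
import random

# REINFORCE on a trivial 2-armed bandit. We "pretend" the sampled action
# was the correct label, then scale the log-prob gradient by the reward,
# so actions that worked become more likely.

random.seed(0)
logits = [0.0, 0.0]        # policy parameters: one logit per action
lr = 0.1                   # learning rate
true_reward = [0.0, 1.0]   # arm 1 always pays off, arm 0 never does

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

for _ in range(2000):
    p = softmax(logits)
    a = 0 if random.random() < p[0] else 1  # sample action from the policy
    r = true_reward[a]                      # observe the eventual outcome
    # Gradient of log p(a) w.r.t. the logits is (one_hot(a) - p);
    # multiplying by r is the "modulate the loss" step in the quote.
    for i in range(2):
        onehot = 1.0 if i == a else 0.0
        logits[i] += lr * r * (onehot - p[i])

print(softmax(logits)[1] > 0.95)  # the policy now strongly prefers arm 1
```

Swap the reward for a supervised 0/1 "correct label" signal and this collapses back into ordinary cross-entropy training, which is exactly the point of the quote.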
You guys are doing good work teaching tensorflow and algorithms/models researchers are coming up with, but are slapping those same researchers by disrespecting what they're working on now. Some humility would be wise.
The latest research in RL isn't getting us that much closer to AGI. We can't plop a robot into the real world and tell it to use RL to learn everything.
The reason we're discussing hardness of RL is fast.ai's narrative of "you don't need math for AI" and "AI is easy". Sure, implementing and applying AI is easy, and you just need to learn tensorflow, but doing even a modicum of novel research in RL requires a tremendous background in all kinds of math. I appreciate what fast.ai is doing to democratize as much of AI as possible, but that doesn't need to be at odds with other people prioritizing RL research.
This reminded me of a talk by Andrew Ng I watched recently :
"Worrying about evil AI robots today is a little bit like worrying about overpopulation on the planet Mars ... how do you not care ... and my answer is we haven't landed on the planet yet so I don't know how to work productively on that problem...of course, doing research on anti-evil AI is a positive thing but I do see that there is a massive misallocation of resources."
Nice talk all around if you can find the time.
IMO, as long as you accept that human brains run on physical processes (no souls) and that computers continue to improve, super-intelligent AI is inevitable; but it's reasonable to think it's more like 150 years away than 10. Given the magnitude of the consequences here, it's still worth spending some resources to work on.
If we can't make "paperclip maximization" the main goal of some human, why do we expect to be able to make it the main goal of some AGI?
Any AGI system will also have some set of goals, just like we do. For us the goal is (vaguely speaking) to "feel good" and all the things that directly affect that, including feelings caused by anticipation of future events, etc. But this set of goals is pretty much arbitrary for a system whose goals didn't have to survive millions of years of natural selection, and it is orthogonal to the power of the intelligence.
I mean, if it doesn't have goals, then it won't do anything, it's not an agent. You could have very smart narrow systems that aren't "agentive" and just e.g. provide answers to very complex questions, but whenever we talk about general artificial intelligence we are talking about a system that has a feedback loop with the external world, i.e., it does stuff, observes the results, learns from that (i.e. is self-modifying in some sense) and decides on further actions - which means having some goals.
Some humans have adopted that goal. If I understand your view, it should be a goal that an AGI could run with.
I think 'killer robots' might present more subtle dangers than a universe of paperclips.
It would be nice if there were some fundamental property of motivation that kept paperclip maximization from being an overriding goal of an intelligent entity, but we don't yet have particularly strong reasons to believe that this is the case.
First of all, I know a lot of the popular perception of artificial "general intelligence" is overly simplistic - I don't believe in the nerd rapture, and I agree with a lot of what was written in the three articles tweeted by Chollet that have been linked in this thread. And yet, I still don't see how AI is not a plausible existential threat.
Unless you believe that some magical business is going on in the human mind that isn't subject to the normal laws of physics, then I don't see how you can believe that there's anything our brains can do that another machine can't. Even if no cognitive skill can be increased to infinity, we have no good reason to believe that our brains represent the maximum performance of all possible cognitive modes. That's an appeal to the discredited idea that evolution has a "ladder", upon which we stand at the apex as the finished product. Natural selection doesn't optimize for intelligence, and it is not "finished". So, if our brains are machines (albeit highly complex ones that we only partially understand), and if they probably don't represent the maximum potential performance of cognition, then how can we say with any confidence that it is not possible to create another machine with higher cognitive performance across all (or nearly all) modes of cognition? And if that is possible, how can we say with confidence that such machines could pose no existential threat to us? Sure, maybe they won't, or maybe we'll never figure out how to build them. But how is it implausible? How is it something to laugh out of the room?
Furthermore, AI need not surpass us in all modes of cognition in order to be an existential threat. As AI gets better at accomplishing a wide variety of tasks, it becomes an ever more powerful lever for those who own it. The near-term threat from AI is socioeconomic: the replacement of vast numbers of jobs with AI/robotics controlled by a small number of people who receive all the profits of their "labor". It doesn't take much imagination to see how this could be, at the least, an existential threat to our current society if it is not addressed with adequate forethought.
All in all, I just do not see how AI is not an existential threat worth thinking about and sinking some money into researching how we can make it safer. The tired old argument about the need to spend those resources on more urgent matters doesn't hold water. There are seven billion of us. We can specialize - indeed, it's arguably our greatest strength! We can - and must! - devote resources and talent to a great many urgent issues, such as poverty, conflict, disease, and illiteracy. But I think we would be very unwise not to put a little of our wealth and time into researching how best to mitigate the long-term threats that don't seem urgent yet and might not even come to pass. If we don't, then chances are someday one of them will in fact come to pass and we'll wish we had worked on it sooner.
If society can get through these issues, then we may get to the point where the existential threat of superhuman AI is of most immediate concern.
Either way, we're not arguing that this issue deserves no time and attention at all. But currently it's the first (and often only) thing I'm asked about when I give talks about the future implications of AI, and it's receiving huge amounts of funding. Elon Musk, when he had the opportunity of addressing some of America's most powerful people, elected to spend his precious time discussing this issue, rather than anything else.
I think the "If society can get through these issues, then..." perspective is fundamentally flawed. We will always, always have urgent problems. Guanranteed. We need to work hard on them. But we also need to work a little bit on the big things that are not of most immediate concern. I think a great analogy for this is climate change. If people in the early Industrial Revolution knew what we know now about greenhouse gases, they wouldn't have been inclined to worry. The amount they were putting into the atmosphere just was not significant, and even if it rose sharply, it would be many generations before there might be an issue. And yet, if people had put some focus on long-term threats, we might be in less dire straits now.
I definitely sympathize with your experience as an AI educator being constantly peppered with hypotheticals about superintelligence. I love your fast.ai course and I am sure you'd rather talk about teaching AI than about something that may or may not be an issue decades from now. But that doesn't mean the issue doesn't deserve more funding.
Musk's topic choice may seem repetitive to us, as we are in the tech world and read about/discuss this topic all the time. But a bunch of older politician types probably have not even been exposed to these ideas of existential risk, and as they hold a lot of power, it's not a bad thing to educate them on. For what it's worth, Musk also spent a lot of time discussing nearer-term implications like job displacement.
That's great to hear - I honestly had no idea. Which suggests that the media doesn't have as much interest in covering that as it does the 'killer robots' angle...
I think that's what has happened with this topic in general and Elon in particular. In the talks I've seen him give, he seems less concerned with AI being "out of control" and more with it being a very powerful lever under the control of a small number of people. That's what he's given as the rationale for OpenAI, and if you look at the research on their website, it's all real research, not bloviating about killer robots. But of course the media just wants to talk about killer robots.
I believe that the fear of AI is unfounded.
Any system that is capable of deciding that e.g. paperclip maximization is a bad goal must unavoidably have some scale of what constitutes better or worse goals... and that de facto means that whatever is at the "good" end of that scale will be the true goal of that system. But where does that scale come from?
And this scale is absolutely arbitrary: there are no competing "innate drives" (like mammals have) that would make it difficult to stick with any given goal. We're not talking about giving some intelligence orders that might conflict with what it "really wants"; we're talking about configuring the ultimate desires of that system, and this configuration will define which world states it finds more or less "desirable" given complete autonomy.