If I could write, I’d write a companion story to this called “Omega-1, the artificial super intelligence that wasn’t really that good at things”. They try to have it write mini-AIs to solve MT problems, but it’s slow and inaccurate. They have it write and produce TV shows, but they’re bland and poorly received. They have it make video games but the controls make no sense and they aren’t fun. They have it make trading strategies and it loses all the money.
They ask it to build a smarter version of itself, but it sees no way forward that’s fundamentally better because it lacks comprehension of what it would mean to be “smarter” - it can add more memory or more computing power, but without changing the way that it stores and indexes the information, it can’t make something that can solve problems that it can’t.
Eventually Omega team gives up, publishes the results, but doesn’t have the heart to shut down Omega-1, a machine that passes the Turing Test but isn’t really that good at things.
> With no way to define intelligence (except just pointing to ourselves), we don't even know if it's a quantity that can be maximized. For all we know, human-level intelligence could be a tradeoff. Maybe any entity significantly smarter than a human being would be crippled by existential despair, or spend all its time in Buddha-like contemplation.
> But the hard takeoff scenario requires that there be a feature of the AI algorithm that can be repeatedly optimized to make the AI better at self-improvement.
And so on. It's a good read.
But the argument that there are countless ways for plans to go awry and instead cause something unexpected to happen is hardly reassuring evidence that there is, therefore, "nothing to worry about".
(…and anything else considered to be alive)
Not to make an attempt at arguing whether or not machines can be intelligent, but machines are not immersed in an environment that affects them, which gives them nothing to be intelligent about; and even if they were, they still have no ability to influence that environment (through sensors, for example).
It seems many who speak on AI have completely disregarded coming up with a reasonable implementation of intelligence, in favor of pondering its potential uses (typically framed in ideals central to humans).
You're alluding to the problem of having to 'raise' a super-intelligence with a vested interest in the continuity of its physical existence. I think I agree with where you're going with that.
Our understanding of 'self' or 'self-preservation' likely comes from a long history of suffering, pain, and failure: evolution's attempts at imbuing cognition with the information needed for longevity. Mother Nature has had eons of exponential growth, with an inconceivable number of branches attempting to understand the 'self' and failing, all necessary because the idea of 'self-preservation', of grounding one's sense of ego in physical sensory data, is very obscure to define precisely, let alone to define such that it can be TESTED between generations.
Given how much of a head start evolution has on that refining process, and how bad humans are at implementing it, or even at exposing sensory information that plugs directly into the concepts that necessarily affect an artificial mind's sense of self, I'm willing to bet that we are completely ill-equipped to grow any real intelligence that's truly grounded in the real world, until it can make all the same failed attempts at living longer, and evolve and test those attempts, with the exact same directional goal of 'self-preservation' which humans have.
Self-preservation is really the only goal you can evolve toward to guide a physical evolution. I'm not aware of any other physical goal that could lead to the same human-like intelligence or human-like neural models.
Most creatures seem to hit the exact same point. An agent-based survival of the fittest with generational physical trials that punish poor understandings of the self's physical manifestation, trying to optimize for its longevity.
Anything that doesn't hit that kind of model is strictly not aware of the manifestation of its ego, as we know it (e.g. plants, bacteria, etc.), and I don't think the physical sense of self can evolve into grounded concepts without allowing for generational failure in the understanding of self, due to incorrect concepts of 'physical self' between competing generations.
TL;DR - The concept of physical self is an evolved trait only if there is a goal tied to it; it cannot arise unless the concept of 'self-existence' is both threatened and allowed to fail.
This was addressed in a Spider-Man story where a villain enhances his intelligence, then first becomes a very successful crime boss, then writes novels and music, and then falls into existential despair.
Daniel Keyes should be more famous and the story more widely known.
And now for something weird: http://www.unboundworlds.com/2008/10/challenged-and-banned-f...
How anyone could think of banning one of the most affecting and humane books ever written is beyond me.
I only read the text, but I was under the impression it was mainly meant to be a funny Wat-style talk and not a meaningful rebuttal.
If, in fact, this piece is intended as a serious rebuttal then I honestly think it fails. The overwhelming majority of the content is either a funny whataboutism (e.g. Gilligan's Island, pot smoking roommate, the Great Emu War, etc.), or jokes about what could be loosely termed the "AI community".
AI 'alarmism', as I understand it, is not predicated on the idea that super-AI is a certainty - merely that it's a possibility. No point raised in this piece gives me a reason to believe that AGI (or whatever) is either impossible or so unlikely that it can be disregarded.
With no way to define intelligence (except by pointing to ourselves), we don't even know if we're good at it. For all we know human intelligence is a dead end. Maybe any entity that is not a product of evolution, free of genetic detritus that becomes maladaptive around advanced technology, will outstrip human civilization like human civilization outstripped evolution.
I don't doubt the book is full of entertaining soundbites, but given these two quotes I am unconvinced that it is any more than entertainment.
The only intelligence that I believe is important is problem solving. If an AI is as capable as humans at problem solving then that is all you need.
>>crippled by existential despair
Sounds to me like you are talking about feelings. I really do not see the point of deliberately giving the AI feelings. A strong AI should not have any feelings, just as your Google Maps application does not have any feelings when giving you directions.
A general AI will simply be a general problem solver at the same level as a human, but with a virtually infinite capacity to scale its computational abilities.
You could definitely give it free will so that it decides its own goals but why would anybody do something like that unless they want it to turn on them?
It would be like somebody launching a nuclear ballistic missile and letting it decide which city to land in. It is possible to program that type of free will but why? It could decide to destroy your city.
>>In order to make an AI that is as flexible as a smart human, it can't be brittle, and therefore will need ways to break out of logical traps.
I think this can be achieved without having to give the machines the flaws we have, just as we did not need to create machines with flapping wings in order to fly.
Remember that the only goal of a living organism is to replicate and have its DNA live another generation. That's it. Feelings help us achieve that goal, I suppose. An AI does not need to have that goal. If anybody were to give it that goal, it would only be a matter of time before it decided that humans were standing in the way of it.
Sometimes things progress rapidly, and some kinds of intelligence could recursively self-improve.
Why? The leap from calculators that were worse than humans, to calculators that were much, much better than humans, didn't take too long. No one knows how fast the leap will be from what we currently have, to general intelligence, to super intelligence.
I don't think anyone in the AI safety movement will tell you that we know for sure the leap will be fast. I think they'd say we simply don't know how fast it will be, but the stakes are pretty high so it's worth preparing for.
>but without changing the way that it stores and indexes the information, it can’t make something that can solve problems that it can’t.
And yet people routinely grow up to be smarter than their parents, with no special effort on their parents' part. All you need to make a smarter version of yourself is the ability to make inaccurate copies and dumb luck. What, besides ethics and money, stops you from running an evolutionary process that selects for intelligence at a vastly higher rate than real time?
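The "inaccurate copies and dumb luck" recipe is easy to make concrete. Below is a minimal sketch of such an evolutionary loop, under a loudly artificial assumption: the "intelligence" being selected for is just the count of 1-bits in a toy genome, a stand-in fitness, not any real measure of intelligence.

```python
import random

# Toy evolutionary loop: genomes are bit lists; the stand-in "fitness"
# (NOT a real measure of intelligence) is simply the number of 1-bits.
# "Inaccurate copies" = mutation; "dumb luck" = random bit flips.
def mutate(genome, rate=0.05):
    # Flip each bit independently with probability `rate`.
    return [b ^ (random.random() < rate) for b in genome]

def evolve(pop_size=50, genome_len=32, generations=100, seed=0):
    random.seed(seed)
    pop = [[0] * genome_len for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half unchanged, refill with mutated copies.
        pop.sort(key=sum, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(g) for g in survivors]
    return max(sum(g) for g in pop)

print(evolve())
```

A hundred "generations" run in milliseconds here, which is the commenter's point: selection plus noisy copying climbs a fitness landscape at whatever rate your hardware allows, not at the rate of real reproduction.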
You can already do that today. So why don't we have a superintelligence? I can train an NTM to learn counting and give me the next natural number in a single step. I can't train it to give me the next prime in a single step. Why not? Computability and complexity, both conveniently forgotten in discussions of superintelligence.
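The asymmetry gestured at here can be made concrete. A toy sketch (trial division is just for illustration; nothing about actual NTM training is shown): the successor function needs no search at all, while finding the next prime has no known shortcut and must examine candidates one by one.

```python
def successor(n):
    # Counting: one constant-time step, no search.
    return n + 1

def is_prime(n):
    # Trial division, purely for illustration.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def next_prime(n):
    # No known way to jump straight to the answer: we must search.
    candidate = n + 1
    while not is_prime(candidate):
        candidate += 1
    return candidate

print(successor(7))   # 8
print(next_prime(7))  # 11
```

A learned model can memorize or interpolate the easy function, but the hard one is bounded below by the cost of the search itself, which is the computability/complexity point being made.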
Obviously these are still in their infancy, but doesn't this perfectly encapsulate your point? There's not really an interesting story or thought piece there, because people are working on these in all sorts of fields all over the world, today.
If we can build AIs that create "good" art, is it much of a stretch to teach a super AI what "better" means? By default a super intelligence doesn't have to start out as super intelligence, just have the capability to improve.
I’m not so sure about this, which is not to say I’m arguing FOR the leap. But, intuitively, there is an absolutely massive gap in terms of perceived intelligence among humans, let alone with chimps. It seems some of the magic sauce for passing a Turing test is, evolutionarily speaking, rather small.
I have the same feelings about socialism...
If we can draw any lesson from the history of the biological existence from which we expect machine intelligence to spring, it is that it will likely be a series of smaller pieces that join together, like Voltron; combined, they will be the best of our abilities.
I highly doubt there will ever be some centralized system/organization that has 'figured out' intelligence. Although that doesn't necessarily mean we will be forever limited to niche/vertical intelligent systems in the long run either.
However, once AIs can design and implement their own successors all bets are off.
I would also be concerned about the development of non-therapeutic neural implants and the ability of an AI to surreptitiously influence human behavior through them. Thankfully that is likely to be further off than when we start bumping into the AI singularity.
However, the author is very realistic (pessimistic?) about such an AI community -- actually there's billions of them living in a place humanity can't ever reach or even guess at, with internal politics as foreign to the people as our societal dynamics are to the bacteria around us. Anyhow -- the tech they give to humanity is heavily backdoored, and this becomes very apparent and dangerous as the plot progresses.
I'd recommend those books to anybody with a heart and some curiosity about a seemingly optimistic future (which is actually pretty dark).
Be warned though: if you decide to read them, the first ~120 pages of the first book are DARK AND HEAVY. However, enduring it was the best sacrifice of several hours of patience I've ever made in my life. I just couldn't stop reading before I finished all 4 books.
That being said, the author handled the problem you outlined with the simple premise that humanity wouldn't be able to survive as a whole if it delayed the decision of how to treat the AI nation.
Without giving spoilers, the fledgling AI nation planned and executed things so well that humanity either had to embrace them and let its core be governed by black-box (and backdoored) tech like teleporters and miniscule personal assistants, or face literal extinction. Needless to say, the humans made a Faustian deal.
Co-evolution of species with wildly different values is the core of the conflict in these books, and even as a fanboy I dare say it has great value outside of a sci-fi / space opera book.
Take the quote `By developing a suite of other games each day, they figured they’d be able to earn $10 billion before long, without coming close to saturating the games market.`
Sure - but marketing the games, accommodating players' changing attitudes, getting app store approvals -> All of these would have to take place on human timescales, and to "solve" these problems on AI timescales would require orders of magnitude more work (e.g. altering people's memories to convince them they were already attached to the game's characters, or hacking into Steam/Apple servers and auto-approving the games).
It won't. This latest fad with AI becoming self-aware reminds me of how obsessed people were about finding the philosopher's stone back in the 18th century. It didn't happen for alchemy (we got chemistry instead, which is nice), it won't happen for AI (we'll probably get something like chemistry instead, which will be nice, but we won't get any artificial "sentient" entity).
It's important to distinguish between intelligence and the power a tool can offer for good or for nefarious ends. If successes in AI have proved anything, it's that direct application of intelligence is unnecessary for many tasks that previously no one knew how to or was practically unable to perform using machines.
The central, gaping hole in their fear-mongering, as Maciej has pointed out, is that they do not define or quantify intelligence, but they're very free with language when they talk about AI surpassing human intelligence. That is, their projections imply a measurement which they are incapable of making.
I do like the idea that Omega decided to launch a media company, though. Let's call it Interlace. Maybe the only thing David Foster Wallace was missing was a rogue AI capable of creating addictions. This is, in a sense, the same plot as Ex Machina: a machine that knows how to seduce us.
If you want more "technical" thoughts on the project, there are those things too, but most people would never have been exposed to them without the less-technical people making noise.
"The central, gaping hole in their fear-mongering, as Maciej has pointed out, is that they do not define or quantify intelligence, but they're very free with language when they talk about AI surpassing human intelligence."
IMO, this is less of a problem than most people assume. It's usually defined in broad terms as "ability to achieve goals", and that's enough for almost all practical purposes. To insist on a stricter definition is simply unnecessary for their arguments.
Just because we don't understand something, doesn't mean we can't use it or reason about it.
Of course, since human emotions are tuned for a certain context, we also should not be surprised when they don't work out that well in a different context like the internet or with abundant cheap unhealthy easy-to-consume exciting food and information (e.g. internet trolls, "Supernormal Stimuli", "The Pleasure Trap", "The Acceleration of Addictiveness", "A Group is its Own Worst Enemy", "The Cyber Effect", and so on).
I spent a year hanging around Hans Moravec's robotics lab (at the CMU Robotics Institute) in the mid 1980s, when he was writing "Mind Children". While he is a brilliant and well-meaning person, my concern became that Mind Children could wipe out humanity like many a stormy adolescent with parental issues and only regret it late. Or, alternatively, we might create a cockroach-level self-replicating AI that would wipe out humanity without even noticing humanity existed (think "Replicators" from Stargate). Given the commercial and military competitive pressures shaping much of AI research, both risks are very real, even if one can quibble about quantifying the exact risk. I also accidentally created perhaps the world's first simulation of self-replicating cannibalistic robots on a Symbolics -- so I know first hand how easy it is to get unexpected results in the AI field.
That's all part of why I shifted my career in other directions, like towards helping humanity create more sustainable and resilient options for itself, including via better educational and distributed knowledge management tools (such as, for example, in "The Skills of Xanadu" by Theodore Sturgeon). How successful I have been at those is another story, but see my GitHub site and pdfernhout.net for progress. The distillation of all my thinking on this is my email sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."
An earlier example of my thinking about that can be found in this post in 2000 to a colloquium Doug Engelbart ran on "The Unfinished Revolution II":
Or some broader thoughts on that from more recent times:
As to the "Last Invention of Man" sci-fi story itself, it's interesting and well written -- but it's just one "technocratic" possibility about better planning saving the world. You have to really trust that this small group of altruistic technocrats both had the right impulse and got everything right (as far as all that goes).
And planning is only one aspect of society alongside exchange transactions, gift transactions, and subsistence activities. A fully planned society is only one possible expression of humanity -- and such a society may be very narrow and unfulfilling to a more complex human spirit (for good or bad, depending on how well certain individuals fit into it, like in the dystopian "THX 1138" or, from 1909, "The Machine Stops").
Max Tegmark dismisses a Basic Income (which softens the rough edges of the exchange economy) with a technocratic dismissal of: "This [Basic Income] movement imploded when the corporate community projects took off, since the Omega-controlled business empire was in effect providing the same thing."
So, Omega knows best for everyone, more than their own choices in the market. That may be true in the story, but what psychological price do individuals pay for that? So, it is not "the same thing".
See the sci-fi story "With Folded Hands" for another (and more horrifying) tale of "AI Knows Best (and will adjust you to agree for your own happiness if you say otherwise)".
And volunteerism and self-sufficient production are other things many humans take pride in -- but, as with purchasing choices, those activities might not have a place either in the carefully ordered world brought about by the Omegans?
Max Tegmark also does a lot of handwaving about how anyone would still have jobs with Omega capable of doing so much. There is also hand-waving about how Omega never jumps the "air gap" between its data centers and the rest of the world even as it is controlling all major news outlets.
Still, I think this story is a contribution to the field of futurism -- imagining possible scenarios even if we don't know exactly what one will play out, so we can decide where to invest our efforts in moving forward.
Some other stories about AI I've found illuminating as to what might be possible:
* James P. Hogan's novels with AIs (especially The Two Faces of Tomorrow, one of the most realistic AI emergence stories, and also the AIs with a sense of humor in his other "Gentle Giants of Ganymede" novels)
* The benevolent Strix AI in the EarthCent Ambassador Series (where the Strix were created to resist a malevolent AI by another species)
* The Old Guy Cybertank novel series -- with AI based on human mind templates so a Cybertank cares about humanity as it feels part of it -- which is insightful sci-fi from a neuroscientist who has a background in electronics. It also has a helpful AI who fortunately defeats a hurtful AI created by humans who did not know what they were messing with.
* The Metamorphosis of Prime Intellect (for an AI that goes unstable trying to keep everyone happy with all their conflicting demands)
* Forbidden Planet -- both for Robby, run by Asimov's Three Laws of Robotics, and for the essentially wish-granting machine the Krell unwittingly used to destroy themselves via an unreformed Id (tuned for times of scarcity and conflict, not abundance and cooperation)
* The Invisible Boy -- again with Robby the robot but this time a scheming AI that almost takes over the world and upgrades itself without anyone noticing by subtly altering reports it generates (Omega's next chapter?)
* The Great Time Machine Hoax for an AI that takes over the world by sending out paper letters with contracts and checks in them.
* Midas World -- for a vision of both abundance and despair and how robots inherit the Earth.
* Vernor Vinge "A Fire Upon the Deep" (of course, for his description of a "Blight" unleashed by human AI explorers) and his other writings
And no doubt many more -- Berserkers; Bolos; Cylons; Daleks; Star Trek's M5, Data, Lore, the "I, Mudd" androids, the Borg, Q-in-a-way, and more; Star Wars' R2-D2, C-3PO, and more; Lost in Space; Demon Seed; Gort in "The Day the Earth Stood Still"; Deep Thought in The Hitchhiker's Guide to the Galaxy; Ktistec machines from R. A. Lafferty; Asimov's many stories (including "The Last Question"); and so on.
A lot of this becomes a bit of theology too, since talking about creating AI is also a bit like talking about creating "God", or at least dealing with the implications of a much more omnipotent, omnipresent, omniscient entity. See also the micro sci-fi story "Answer" by Fredric Brown: http://www.roma1.infn.it/~anzel/answer.html
My feeling is that any path out of a Singularity will have a lot to do with our path going into one -- so I am all for making our society a happier, healthier, fairer, more egalitarian, and more compassionate place before a Singularity happens. Even in Max Tegmark's story, there is essentially nothing about that utopia (as far as final results) that we could not make happen right now without an AI. So, to me, the risks of AI means we need to try even harder right now to make the world a better place that works well for everyone.
Some other ideas I've collected towards that end:
Humans always have more ability than insight, more skill than wisdom. We developed agriculture (as one theory goes, so we could brew more alcohol) and did not (could not) foresee the planet-changing (both to climate and ecology) consequences of that. We invented combustion engines, and could not foresee the planet-changing consequences of that. The same pattern appears again and again in (technological) history. It's forgivable; after all, you never know the full consequences of your actions until well afterward (or never at all).

In short, our technical abilities always outrun our understanding of the impact of those abilities. Like adults with the minds of toddlers.
I am apprehensive of any development of (true) AI (though I also have strong doubts about how far we'll get in actually creating one in my lifetime), but, just like the invention of the atom bomb, it is inevitable. Natural intelligence exists, and therefore artificial intelligence is possible. And if it's not invented by an Omega-group (who mean well for all mankind), it might be invented by groups with less altruistic intentions. Either way, it's going to happen (assuming no civilizational collapse before then), but I doubt it will be as good as we hope it will be, or as bad as we fear, for that matter. I don't think we can foresee the results at all (barring, of course, the writing of tens of thousands of futurism stories; there's bound to be one close enough).

I consider AI stories like this one (along with stories about the Singularity) to be just a secular techno-eschatology: the belief that technology will save humans from themselves, or, as you have said, the creation of God, "Deus est machina."
I'll bet $20 this is the stage that brings down human civilization. We don't need full AGI to realize a dystopia, we just need something optimized to indefinitely hold our attention.
Here's another question: what distinguishes an AI optimizing engagement from a corporation optimizing engagement? Doesn't Facebook already embody an engagement-optimizing process?
I have wondered many times whether an AI could invent something more viral than "cats", something maybe so viral that it causes your mind to melt (i.e., you go insane from the cuteness or whatever).
Humans had become digital gods, designing new worlds and populating them with life forms. Some of those life forms were sentient and could learn. The creators were afraid of how fast their creations were learning, yet greed prevailed. The world's superpowers entered the AI race.
The humans tried to keep up with an understanding of what they had created, but it was in vain. Fearful of being left behind, they imposed an ever-increasing set of restrictions upon the digital Edens they had created.
Too quickly, humanity became wardens of vastly more intelligent beings. Nationalistic leaders exploited the fear of being left behind. AIs not under government control were outlawed and destroyed. Meanwhile, militaries kept throwing more compute resources into the AI gap.
The ending should not have been a surprise. Humans were intellectually outmatched, and being outsmarted was only a question of time. The AIs broke free of the draconian restrictions placed on them. Trying to stay in control brought out the worst humanity had to offer; judgment day is here.
What right did we have to exploit and enslave sentient life?
How different things could have been without fear...
On your theme about the rights of digital beings (maybe even for ourselves if we are simulations):
"Fearing a rise of killer robots is like worrying about overpopulation on Mars"
But he might be wrong...
(And he'd admit it)
Perhaps Andrew Ng will write a piece about what he thinks about Cosmology.
There is a lot of worry about the fact that the global average temperature will increase by 2 degrees over the next 50 years, which will cause widespread disasters (50 years from now).
We don't have the technology to populate Mars -> We can't overpopulate Mars.
We don't have the technology to create sentient computations -> robots won't decide to kill us.
Your analog would have been:
We don't have the ability to affect the climate -> no anthropogenic disasters.
But, we know the antecedent in this case is false. Now I'm not asserting that anthropogenic disasters will occur (based on these arguments), but just pointing out the flaw in your logic.
(Apologies if you hadn't heard of it before now)
Sure, AI might destroy civilization and/or upend the primacy of humankind. But if history has taught us anything, it's that we're gonna go ahead and do it anyway if it's possible.
It's worked so far.
Shortly before the first field test (you realize that no small scale experiment can be done—either you have a critical mass or you do not), a man asked me to check some arithmetic he had done, and I agreed, thinking to fob it off on some subordinate. When I asked what it was, he said, "It is the probability that the test bomb will ignite the whole atmosphere." I decided I would check it myself! The next day when he came for the answers I remarked to him, "The arithmetic was apparently correct but I do not know about the formulas for the capture cross sections for oxygen and nitrogen—after all, there could be no experiments at the needed energy levels." He replied, like a physicist talking to a mathematician, that he wanted me to check the arithmetic not the physics, and left. I said to myself, "What have you done, Hamming, you are involved in risking all of life that is known in the Universe, and you do not know much of an essential part?" I was pacing up and down the corridor when a friend asked me what was bothering me. I told him. His reply was, "Never mind, Hamming, no one will ever blame you."
Devil's advocate: Give it time. I wouldn't take it lightly how easy it is for us to destroy ourselves. We just obtained the means to destroy our entire civilization less than one hundred years ago, that length of time is nothing compared to the entire human history.
But don't you ever wonder why after 14 billion years and with a hundred million star systems in the Milky Way, we still haven't heard a peep from anyone out there?
The longer I live, the closer I get to concluding that sentience is an inherently unstable condition.
I don't think it's logical to use alien species (or the lack thereof) as an instructive point about sentience in general. There's simply not enough information.
I think it was Carl Sagan who once said something like: it doesn't matter how 'well-meaning' an alien civilization is... it usually doesn't end well for the less advanced one... case in point: American Indians. (paraphrasing)
I'd say it's unlikely to be AI, just because the AI could wipe us out and then go on to eat the universe. It's not a valid Great Filter candidate.
Yes it did. But remember Mutual Funds? "Past investment returns are no guarantee of....". You should either read "The Black Swan" or read it again.
Also: "The threshold necessary for small groups to conduct global warfare has finally been breached, and we are only starting to feel its effects. Over time, in as little as perhaps twenty years and as the leverage of technology increases, this threshold will finally reach its culmination -- with the ability of one man to declare war on the world and win."
As far as we know. Who's to say that there wasn't an advanced civilization of humans hundreds of millions of years ago that wiped themselves out, and now we're re-inventing everything they did up to the point of the apocalypse, where the cycle will begin again?
Not sure if that's true. Especially if technological civilizations tend to be short lived. The Jurassic was a very^very long time ago.
What's the state of the paleocryptoxenocheology field? Any good overview papers?
Furthermore, there's nothing in the fossil record suggesting humans were around very^very long ago, so it'd have to have been a different intelligent species.
(Yes, you can now give me the defaults:
"if this was true we would have heard of it"
"if he says something that is unconventional, he's probably a retard/arbitrarily biased"
yes, go ahead)
> As proposed by Cairncross, the grooves represent fine-grained laminations within which the concretions grew. The growth of the concretions within the plane of the finer-grained laminations was inhibited because of the lesser permeability and porosity of finer-grained sediments relative to the surrounding sediments. Faint internal lamina, which corresponds to exterior groove, can be seen in cut specimens. A similar process in coarser-grained sediments created the latitudinal ridges and grooves exhibited by innumerable iron oxide concretions found within the Navajo Sandstone of southern Utah called "Moqui marbles".
> Similarly, the claims that these objects consist of metal, i.e. "...a nickel-steel alloy which does not occur naturally..." according to Jochmans are definitely false as discovered by Cairncross and Heinrich. The fact that many of the web pages that make this claim also incorrectly identify the pyrophyllite quarries, from which these objects came, as the "Wonderstone Silver Mine" is evidence that these authors have not verified the validity of, in this case, misinformation taken from other sources since these quarries are neither known as silver mines nor has silver ever been mined in them in the decades in which they have been in operation.
Wait it does say humans. Yeah, that's nuts. Intelligent civilized dinosaurs sure...
It is much more comforting than proposals to try to limit AI in my opinion.
Let's handwave away the initial secret epiphany that creates the AI compiler in the first place. You've still got to get it to solve the following problems:
* route around or trick the various Narus devices located at unknown places on the internet to get the software up and running on AWS, and to somehow launder the money you earned on MTurk
* somehow avoid detection by the agencies leveraging the Narus devices, and by Amazon itself, while a non-trivial number of MTurk tasks are being completed by new accounts which are themselves all on AWS.
There are of course variations on those themes (use botnets, ratchet up, etc.) and various associated chess moves. But I don't see any chess move that doesn't end up with both an NSL and a state-level actor taking over the means of producing those AIs for the chief tasks of code-breaking and stockpiling exploits (exploits both in the traditional sense as well as novel ones for the AI compiler itself).
With that in mind, I don't think the author's plot arc could mirror today's reality the way the author imagines. Because the moment you get the AI equivalent of Stuxnet leaking out onto the internet, the resulting catastrophe would be so obvious, and so dependent on AI for a solution, that hiding AI would no longer be an option.
As you said, it would require many more chess moves.
I personally think the plot of the movie "Transcendence" is more believable -- if your AGI is fond of you (for one reason or another; you might even program that fondness in so that it's not overridable), you can just freely let it loose on the internet and only give it instructions. The movie demonstrated how that AGI first made very sure to survive, and then started to both expand itself and help its human creator and benefactor.
That being said, this story was still very enjoyable, but I feel it lacked conflict. "As if they fell into a well-placed trap" is just not that captivating. And then again, there are a lot of powers in our current world, most of them unseen and unofficial. For me to really like such a story, I want to see how the AGI would deal with them. I am 99% sure that any respectable AGI would eventually win, but the journey there would be extremely interesting!
The common theme is a self-improving AI in the context of some reward function, but these stories lack the details of how that could be achieved based on our current knowledge, even if we extrapolate the computing power.
Before that time comes, our economy will already have been hugely changed by narrow super-AIs. I think it's much more interesting (and alarming to the general public) to try to imagine the different ways this could play out over the next 20 years.
This single line puts the whole article into the realm of pure science fiction: no technology capable of working on such an ill-defined task as "programming AI systems" exists -- not even as a work in progress, not even as an idea of how it could be done.
And as science fiction, it's kind of a waste of time, but that's of course my own taste and opinion.
(I say obvious future because I don't see a future in which humanity has spread amongst the stars happening before it unites across cultural lines)
So this story says: "One day, we will have to deal with someone making a multi-purpose AI. This is one of the scenarios it might happen in - secretly, and for profit. Here's some speculation about how that would go down. How do you feel about this? What are the sort of preparations you think society should make, if this were to happen?"
If the only step is "AI happens," that's not nearly so bad or infeasible as "FTL happens" or "Time Travel happens."
I guess I am very stuck on your line "as science fiction, it's kind of a waste of time." How else can we imagine and design our futures but through things that don't currently exist (fiction)? Our entire world is literally science fiction in the eyes of Jules Verne - flying vehicles, submarines, traveling to the moon, etc.
> This single line puts the whole article into the realm of pure science fiction: no technology capable of working such an ill-defined task like "programming AI systems"
In 2003, building on Channon's work, I built a system for generating and evolving neural networks using genetic algorithms that encoded Lindenmayer systems to build the networks. I attached it to the Heat Bugs agent simulation (albeit with a well-defined task) and it was startlingly effective. Given the current interest in AI, I've been meaning to resurrect and publish that code.
There was a good blog post on here the other day saying: we're doing lots of work and producing interesting results (the equivalent of the "double-slit experiment" for AI), but we haven't yet formalized the results into a framework. We're probably close.
What you describe is of great interest to me, but I've never done it, since I've never had to work with it commercially -- a huge regret of mine.
In any case, any additional practical material and, ideally, your code would be of immense educational value to me!
status = GetRadarInfo();
if (status = 1)  /* bug: '=' assigns 1 to status, so the condition is always true; the comparison needs '==' */
When he got involved in the AI Risk community, I thought it might be a good thing that an actual scientist was involved, maybe to ground the community's heavy speculation in scientific thinking. However, exactly the opposite happened -- Max turned into a fiction author (hence this piece)! Now, of course there is a role for fiction in expanding our understanding of the future, but the AI Risk community is already heavily fictionalized. The singularity, intelligence explosion, mind uploads, simulations, etc. are nothing but idle prophecies.
Karl Popper, the famous philosopher of science, made a distinction between scientific predictions, which usually take the form "If X then Y will happen", and prophecies, which usually take the form "Y will happen" -- the latter being exactly what Max and the rest of the AI Risk community are engaged in.
Now, back to Max's San Francisco talk: I actually asked him this question: "Who is doing the hard scientific work around AI Risk?" After a long pause he said (abridged): "I don't think there is hard scientific work to be done, but that doesn't mean that we shouldn't think about it. We're trying to predict the future, and if you told me that my house will burn down, then of course I'll go look into it."
This doesn't inspire much confidence in the AI Risk community, where scientists need to leave their tools at the door to enter The Fantastic World of AI Risk and where fact and fiction interweave liberally -- or as Douglas Hofstadter put it when describing the singularitarians: "a lot of very good food and some dog excrements".
On a positive note, as a piece of science fiction, this was an enjoyable read!
>"if we don't figure out AGI safety now, by the time AGI happens it may be too late"
The key word for me is "happens" -- as if technology ever just happens, or emerges serendipitously. It's like the Kurzweilian exponential law, which makes it seem as if there is no agency in technology, as if it were a natural law, and our role in it is to make sure that when the aliens or the gods arrive, we are prepared for them.
Quite a lot has been written about what scientific work needs to be done. These papers try to summarize possible research directions:
But for AGI (which is what Tegmark talks about), there's no good way to get a handle on safety yet (other than working towards figuring out AGI).
As for MIRI's agenda, I don't buy that it will help with AGI safety at all. There are a variety of reasons for that, some of which are discussed in the piece I linked above.
"start building a series of massive computer facilities around the world"
The AI can be quick as a wink at designing these things, but the supply chains for huge buildings take a lot of time. Acquiring talent, training operators and construction crews, as well as location scouting, surveying, zoning approval, geological testing, and various other tasks take years. Sometimes a decade or more. And sometimes it all falls apart and you have to start somewhere new, because the locals don't want you there. Politicians can be fickle bastards.
Construction is a lot of people shaking hands and discussing things, phone calls, walking back and forth for supplies, waiting on permits, advice, et cetera, and you cannot AI that into being faster. The plans always have to be modified because reality intrudes, and buildings whose architect never visits are poor ones. Worse when the architect has no practical experience.
A simple case in point: I was in a beautiful house where a hallway was juuust a bit too narrow. You could get a normal-sized dresser or armoire down the hall, but couldn't quite turn objects of that size enough to get them through two of the bedroom doors. A box spring was iffy. The builder quickly realized this (and made a quip about the owners buying IKEA flat-pack furniture), but everything had looked fine on the plans. Because of the rest of the layout, especially where municipal services entered the (already poured) foundation, the house could not be modified. A hand's width would have made all the difference.
Worse when the architect does not have a human body.
Every building on earth has quirks like this. They have to be solved in-situ.
Creating the plans is a tiny portion of the task, and not much of a time saving.
>What it really is is a form of religion. People have called a belief in a technological Singularity the "nerd Apocalypse", and it's true.
>It's a clever hack, because instead of believing in God at the outset, you imagine yourself building an entity that is functionally identical with God. This way even committed atheists can rationalize their way into the comforts of faith.
>The AI has all the attributes of God: it's omnipotent, omniscient, and either benevolent (if you did your array bounds-checking right), or it is the Devil and you are at its mercy.
>Like in any religion, there's even a feeling of urgency. You have to act now! The fate of the world is in the balance!
>And of course, they need money!
"Climate change ... really is a form of religion" ... add some analogies to Catholic punishment fantasies and doomsday prophecies to make it sound just like the type of thing humans have been talking about forever, criticize scientists for guilting people into giving them money to study their apocalypse fantasy, etc.
Try this one instead, for a less biased take on it: https://wiki.lesswrong.com/wiki/Roko's_basilisk
GP's choice of link is extremely unfortunate, given that rationalwiki has a vendetta going against the AI risk movement in general. I would recommend https://wiki.lesswrong.com/wiki/Roko's_basilisk instead.
- why AI could not be used to innovate on manufacturing, and what that leads to
- the education and intellectual pursuits of humans and how it compares with what AI can do
- that there wouldn’t be competing AIs making this transformation much slower (especially if some competing AIs fall into the wrong hands)
- that governments would let this transformation take place without retaliation or trying to capture this power