To "avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power" what does one do with an idea or line of research that could potentially harm humanity or unduly concentrate power?
The manipulation of social media by foreign actors armed with dumb-AI / automation was an obvious conclusion to many of us well before the Snowden leaks, but what could we do exactly? I remember having conversations with people about it, and we concluded that it would just keep happening until someone pushed it too far. Then Russia did, and now we're finally reacting.
I was privately concerned about the mass weaponization of autonomous devices via cyber attack for over a year and a half and got nowhere just emailing politicians or public safety departments. I've been told almost a dozen times that I should join a military or IR think tank but I don't want to do that. I just want someone else to vet the idea or research and pass it on to policy makers that will actually do something proactively.
Put another way:
What is the responsible disclosure process for ideas and research around AI?
Basically, we're so far away from AGI that there's no need to worry about disclosing anything. The recent advances in machine vision and speech processing are impressive, but only in the context of the last 50 years or so. A truly intelligent agent will need much more than this, and there doesn't seem to be anyone alive today who knows how to get from where we are to where AGI will be.
In other words, all this is really premature. If we're talking about responsible and regulated use of what you call "dumb AI/automation", on the other hand, then that's a different issue. But AGI, currently, is science fiction. You may as well regulate research in time travel, or teleportation.
If I have other shit in my head that I'm worried about today who do I tell?
That’s not to say that there’s no way humanity can be fucked with the more pedestrian “garden variety” AI that is within our technical capabilities.
It’s to say that AGI is a nebulous, unattainable red herring which only serves to distract from the more immediate issues.
YES FELLOW HUMAN AN APT METAPHOR
Hence my comment about how you might as well worry about time travel and teleportation. They're just as likely to happen in any timeframe that you might be interested in. If you're going to be worried about how AGI might be misused, you can start worrying about how time machines or teleporters might be misused.
>> If I have other shit in my head that I'm worried about today who do I tell?
You could look at joining the Campaign to Stop Killer Robots, or look around for a similar group.
AI (as in "machine learning"), on the other hand, is something worth worrying about. Universities are actively building autonomous, ML-powered weapons systems.
Just last week a pretty serious boycott of a South Korean university was announced over their autonomous weapons research.
Signatories of the boycott include some of the world’s leading AI researchers, most notably professors Geoffrey Hinton, Yoshua Bengio, and Jürgen Schmidhuber. The boycott would forbid all contact and academic collaboration with KAIST until the university makes assurances that the weaponry it develops will have “meaningful human control.”
That's a real problem, and it is unclear if OpenAI's approach is relevant. Personally I think the academic boycott is a good start (what ML researcher will want to work there?), but it is unclear how to deal with commercial research labs in a similar space.
If you're not familiar with it, I'm making the argument from here and it makes good further reading: https://intelligence.org/2017/10/13/fire-alarm/
We have no way of knowing what external indicators we might get. We have no idea what AGI would look like. That is often offered up as the very reason to "act now!" by Singularitarians, but if there is a threat that you don't know anything about, there is also nothing you can do about it. Unless you know what you're trying to stop, your actions are as good as random.
In this sort of discussion, you can replace "AGI" with "alien invasion". The threat from a Superintelligence is as serious and as predictable as that from an alien civilisation. We can prepare against AGI as much as we can prepare against an alien invasion. And we have exactly as much to fear from a hostile AGI as we have from a hostile alien species.
> [...] if there is a threat that you don't know anything about, there is also nothing you can do about it.
I disagree with this, at least somewhat. For one thing, even without knowing the specific threat, there's lots of stuff you can do which is just generally helpful - e.g., try to spread to other planets. Will it necessarily help? No. But there are a lot of threats it will protect against.
More importantly, people actually working on the problem of AI safety say they are doing things that appear, at least to them, to be useful. What makes you so sure they're wrong? From the little I know of their research, it certainly seems like stuff that will likely be pretty helpful.
Last point - considering just how big a problem AGI could be if they're right, just how many resources would you want to devote to it? Literally zero?
You can ask how I know this. Obviously, I don't, because I can't see into the future. But I can see the state of the art in the present, and it's been baby steps for the last 70 years or so - and our capabilities have remained entirely primitive, industry hype notwithstanding.
>> More importantly, people actually working on the problem of AI safety say they are doing things that appear, at least to them, to be useful. What makes you so sure they're wrong?
There are all sorts of opinions about how AI performance is accelerating ("exponentially"). In truth, however, what has actually accelerated - and, in fact, plateaued in recent years - is performance on very specific tasks (object and speech recognition), not the general intelligence of AI systems. If you want to be more precise, we can only talk about advances in the context of very specific benchmarks, which is to say, specific datasets (like ImageNet) and according to specific metrics (say, F-score).
The problem is that all those benchmarks are arbitrarily chosen (ish; see below) and research teams spend a great deal of time tuning their systems to beat them. Which means good performance on a benchmark tells us nothing about the extrinsic quality of a system: how it does in the real world, outside the lab, where the central assumption of PAC learning - that training and unseen data can be expected to have the same distribution (so that a system's performance on the former predicts its performance on the latter) - is not guaranteed. And then, performance in one type of task (e.g. classification) tells us nothing about the general capabilities of the system (i.e. general intelligence).
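To make that PAC point concrete, here is a tiny self-contained sketch (all numbers and names invented for illustration, not taken from any real benchmark): a toy threshold classifier tuned on one data distribution looks superb on held-out data from the same distribution, and falls apart the moment the distribution shifts.

```python
import random

random.seed(0)

def make_data(n, centre):
    """Toy binary data: class-1 points cluster at `centre`, class-0 at centre - 0.4."""
    xs, ys = [], []
    for _ in range(n):
        y = random.random() < 0.5
        xs.append(random.gauss(centre if y else centre - 0.4, 0.05))
        ys.append(y)
    return xs, ys

def fit_threshold(xs, ys):
    """Pick the decision threshold that maximises training accuracy."""
    return max((i / 100 for i in range(101)),
               key=lambda t: sum((x > t) == y for x, y in zip(xs, ys)))

def accuracy(t, xs, ys):
    return sum((x > t) == y for x, y in zip(xs, ys)) / len(xs)

train_x, train_y = make_data(1000, 0.7)
t = fit_threshold(train_x, train_y)

# Unseen data from the SAME distribution: the benchmark number looks superb.
iid_x, iid_y = make_data(1000, 0.7)
acc_iid = accuracy(t, iid_x, iid_y)       # near-perfect

# Unseen data from a SHIFTED distribution: the benchmark number tells us nothing.
shift_x, shift_y = make_data(1000, 0.3)
acc_shift = accuracy(t, shift_x, shift_y)  # roughly chance level
```

The point of the sketch: nothing about the "benchmark" score predicted the shifted-distribution score, because the i.i.d. assumption quietly did all the work.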
Singularitarians, like the people at MIRI (which your previous comment linked to) have focused on the performance of AI systems on modern benchmarks and the increase in compute, but modern benchmarks were essentially invented to allow some progress in machine learning, when mathematical results showed that progress was impossible. These previous results include Gold's famous result about learning in the limit (from the literature on Inductive Inference). The relaxation of assumptions was suggested in Leslie Valiant's paper "A theory of the learnable", which introduced PAC learning, the paradigm under which modern machine learning operates.
And to clarify my point above: modern machine learning benchmarks are not exactly chosen arbitrarily; rather, they are justified by PAC learning assumptions about the learnability of propositional functions (and those only; in fact, modern machine learning systems are propositional in nature, which severely restricts the expressive power of their models, making it much harder to realise the promise of Turing-complete learning for many of them).
... that probably got a bit too technical. My point is that just because we see improvement in performance today, in the field of research that we call machine learning, that doesn't mean there is actual progress in the understanding of what intelligence is, or in our ability to reproduce it. In a way, we might have changed our metrics, but we haven't necessarily improved our performance.
>> (...) try to spread to other planets.
So we have a science fiction problem and we're looking for science fiction solutions to it? :)
>> Last point - considering just how big a problem AGI could be if they're right, just how many resources would you want to devote to it? Literally zero?
Well, again that depends on what we can do about the problem, which in turn depends on what we know about it. I guess, like you say, if we know nothing about the problem, we can try random things like spreading to other planets, or genetically enhancing the whole human race's intelligence until we ourselves are superintelligent and the risk of our being taken over by an artificial superintelligence is 0.
But, having 0 certainty about the nature of AGI, we have exactly the same chance of averting the danger from it by sitting on our hands as we have by migrating to other planets. I believe Bostrom's superintelligence scenario involves a machine that eventually colonises Mars with secret mining robots? If you're prepared to entertain the possibility of truly Super intelligence, there's probably nothing you can do about it anyway.
So, my pet formula for deciding whether taking a risk is worth it is to multiply the probability of some adverse event occurring, let's call this event X, by the cost associated with that event, let's call it Y; so R = p(X) * Y, where R is the risk. You can then decide on some risk threshold, T, where if R < T you can justify taking the actions that you believe might lead to event X.
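As a sketch of that expected-value rule (every number here is made up purely for illustration):

```python
def risk(p_event, cost):
    """R = p(X) * Y: probability of the adverse event times its cost."""
    if p_event is None:
        return None  # 0 information about X means the risk is undefined
    return p_event * cost

T = 1.0                      # some arbitrary risk threshold
R = risk(0.001, 500.0)       # a mundane risk: p(X) = 0.1%, cost Y = 500
justified = R is not None and R < T   # the action can be justified

# The Singularity case: the cost is taken as infinite, but p(X) is unknown.
R_agi = risk(None, float("inf"))      # None: nothing to plug into the formula
```

The second call is the whole argument in miniature: with no estimate of p(X), the formula returns nothing, so no threshold comparison is possible.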
Now, when it comes to the Singularity (or, indeed, an Alien Invasion) we can accept the cost of the event to be infinite, Y = ∞, under the assumption that Superintelligence means game over for the species. But our knowledge of the event is nonexistent, so the probability of the event X = Singularity is... undefined. You can't plug that into my formula. You can't guess at the probability of X because nothing like X has ever happened before. You can try to extrapolate it from the development of human intelligence, but that took billions of years to evolve in a manner completely different from what we can reasonably expect for artificial intelligence (i.e. involving computers).
Basically, there's no way to calculate the risk from Superintelligence as long as we know nothing about it: 0 information means undefined risk. And therefore, no amount of resources can be justified to be spent to mitigate it.
In other words, I really couldn't answer your question: 0 resources, an infinite amount of resources, they're essentially the same.
Bottom line: to make decisions you need information even more than you need the resources to implement them.
Or in other words: you can't prepare for the unknown.
OK, experiment's done cooking :)
>> You can't make any rough prediction on whether we will be able to reach that point, or when it will happen?
1. I'm more confident than you in our ability to do something now and to predict things about AGI. Not much more confident, mind you, I just think the numbers are not literally 0, which makes most of your points above regarding amount of resources moot.
Basically, I think there are things we'll likely need to solve one way or another to figure out AI safety, and these are things we can work on. I also think that we can make some very rough estimates on timelines, and some very rough estimates on things we should do to mitigate.
2. I believe you misunderstand some of the points that people at MIRI make. You appear to classify them together with people like Ray Kurzweil, who appeals to exponential growth, etc, to make his arguments. The people at MIRI, afaict, don't really agree with this line of reasoning - in fact, I believe MIRI has all but ignored the progress on current AI for most of its existence (barring the last year or so), at least in terms of its research.
I mean, I agree that modern machine learning is not AGI or close to it. And I have no idea what AGI will look like - just a scaled up version of ML? A completely different take on things? The merge of a few different concepts together that will suddenly "tip over" to being capable? I have no idea. No one really does.
I am mindful, however, of a pretty long history of humans being terrible at predictions, in both directions - classic cases including people saying flight was impossible, a few years after the Wright brothers had already flown.
I'm not saying we're 2 years away from AGI - I'm saying we're not really sure, and even if we were 200 years away, I think it's worth spending some time and resources on trying to think about what we should do to prepare. The worldwide amount of resources spent on AGI and other existential risks are, what, 0.000001% of the amount of resources the world spends on sport? Does that really seem like a reasonable distribution to you?
> So we have a science fiction problem and we're looking for science fiction solutions to it? :)
It was just an example of something we can do right now. There are other examples if you want the other end of the spectrum - like stopping all technological development. I just don't think that will happen.
And let me point out that while AGI is a large threat IMO, lots of other things are large threats too - our technology is advancing on all fronts, and in many areas, we will soon be at a place where we can cause humanity to cease existing. E.g.:
1. Creation of weapons, much more powerful than atom bombs, that can effectively kill everyone.
2. Creation of super viruses that can destroy all humanity.
3. Creating the ability to "mind-upload" or similar, turning some people into much more powerful/faster thinkers/whatever than others. It doesn't need to be an artificial super-intelligence to be dangerous.
And yes, these are all sci-fi scenarios, precisely because this is what's new in our world - scientific advancement! That's why the argument of "we've survived so far" is wrong - because the variable that's changing is the abilities that humanity has access to.
Musk gives it five years in the recent movie https://www.youtube.com/watch?v=rqlwUAxoxa8&feature=youtu.be...
For instance, computers are decidedly better than humans at arithmetic. Yet nobody (today) thinks that just because computers can do arithmetic very quickly and accurately, they are more intelligent than humans.
Is there anything special in Go or chess that makes a specific skill at them a necessary and sufficient condition for general intelligence? We are impressed at the performance of AlphaZero because humans find it very hard to play Go etc. well. On the other hand, we also find it extremely hard to, e.g., divide two arbitrary 100-digit numbers - but we are not impressed by a pocket calculator as much as we are by AlphaZero. Might there be an element of psychological bias in our ability to be impressed, then?
I guess you could say that the difference between human and computer intelligence is that humans don't have total recall, while computers do. This allows computers to add large numbers without trouble or play millions of entire games at random and choose the best, etc.
On the other hand, there's always a chance that humans' incomplete recall is a hallmark of general intelligence.
>> Also there was something human like in the way alphazero learns unlike say stockfish which is more calculator like.
That's a matter of interpretation. AlphaZero plays by searching a huge space of possible moves - humans don't play like that. It learns by playing itself millions of times. Again, humans don't learn like that.
There's nothing human about AlphaZero, nor Stockfish, as far as I can tell.
Interested in an off-ycombinator discussion?
I'm going through a bit of crunch at the moment so answers may be a little slow but I always like a good discussion :)
Pursue the research and publish its results freely and in their entirety.
If not, how catastrophic a level of harm are you willing to risk before you start advocating concealing the results?
Yes, for a combination of two reasons:
1) They have obvious incentives to publish their results.
2) The more we know about vulnerabilities, the better we are at defending ourselves against them.
That second reason is why we are (overall) better off acknowledging possible threats to our existence. In your example, the more open the study of super-pandemics is, the more open the study of defending against super-pandemics is. The more we are aware of a threat, the better informed we are to prevent it.
Yes, in your example, the threat could be highly dangerous and imminent. However, if only one individual were capable of creating a super-pandemic, the number of people who could potentially help stop it would be drastically reduced, compared to a free distribution of results which arms our society completely with the ability to prevent it.
The ideal scenario is that throughout the rest of time, no entity ever manufactures the virus; the second-best scenario is that an entity does manufacture the virus, but only after there is some hypothetical defence against it. You are shattering forever the dream of achieving the ideal scenario (again by hypothesis, the virus is easy to make, and history demonstrates that if something is easy and destructive, eventually someone will do it); you're pinning your hopes on the second-best scenario. You are therefore implicitly assuming that "we get better at arming ourselves against the virus" happens faster than "enemies create the virus", which is by no means a given and must be weighed on a case-by-case basis.
By the way, this is the basic reason why I'm so sad about the existence of fully 3D-printed guns. There is essentially no defence against guns. The creators of the 3D-printed gun chose to spread the blueprints far and wide, to accelerate their advent. It's sad that human nature is such that this was predictable and inevitable.
> If not, how catastrophic a level of harm are you willing to risk before you start advocating concealing the results?
I'm not sure. Probably quite catastrophic, because it stretches the limits of credulity to seriously talk about a what if scenario where any given individual can easily create a weapon of mass destruction in their kitchen. A better question is why you think such a straightforwardly accessible method of eradicating our species won't be discovered independently despite your efforts to conceal it. How do we navigate that philosophical labyrinth?
But more pointedly, I dispute that this is comparable to any specific, credible example of strong AI. Give me a credible scenario that brings us from strong AI to the annihilation of our species without handwaving about recursive self-improvement and selective idiocy and I'll reconsider my position. I'm not going to give up the chance to publish incredibly novel research because some other people like to work themselves into hysterics about a conceptually incoherent boogeyman with "AI" slapped onto it.
What kind of credible scenario are you looking for? You're obviously knowledgeable about the subject, I'm sure you've read various stories, but it's quite easy to dismiss them all as "not credible" or "just so stories". E.g. A good story is Tegmark's story at the beginning of the book Life 3.0, but again, can be easily dismissed if you don't specify acceptance criteria for the story.
I mean, simple scenario - AGI gets built by a farming company, gets programmed to farm more corn, but there is no stop condition programmed in, so it converts every available piece of land into corn fields. Is it likely? No, of course not - no specific example is likely. But why is this something that fundamentally can't happen? More importantly, can you specify criteria for a scenario which you would deem acceptable? Cause if not, you're literally saying that there is no possible way to convince you that this is a possibility, which seems to me not to be a good position on pretty much anything.
AGI gets built by a farming company
Okay, sounds fine, keep going.
gets programmed to farm more corn
Yep, this makes sense to me.
but there is no stop condition programmed in
Definitely sounds like something that could happen, with you so far.
so it converts every available piece of land into corn fields
...and you lost me. I'm not sure how we ended up here.
The AI you're postulating is 1) intelligent and capable enough to be called "AGI" and arbitrarily terraform land into a useful field for corn, yet 2) dumb and inept enough to interpret its instructions absolutely literally, like a magic genie; meanwhile 3) the collective capability of the human species is apparently insufficient to stop this from happening.
I guess that could happen. But I take it about as seriously as alien life trying to invade us, a random meteorite killing us all, a solar flare destroying the Earth, or malicious time travelers. Each of these what-if scenarios also has a sensible build-up followed by a giant leap in suspension of disbelief, concluding in disaster. That's not a framework for intelligent discussion and productive rationality; it's a self-indulgent and conceptually incoherent exercise in mental gymnastics. While we're at it we could argue about how many angels will fit on the head of a pin. If I start a news cycle about how quantum computers will allow us to build nanotechnology that can hack human brains, does my idea have any credibility? It could happen, right?
Our current capability with respect to artificial intelligence is so far removed from any reasonable form of strong AI that there isn't even a recognizable path forward. Established leaders in the field like Yann LeCun have publicly stated deep learning will not take us there. We don't even know how to consistently and rigorously define strong AI, let alone conjecture about how it would work or what its theoretical danger would be. That means we're trying to extrapolate the properties of hypothetical nuclear weapons from gunpowder. We are effectively children grasping in the dark, getting hysterical about a monster that may or may not be lurking under our collective beds. We can't agree on what the monster looks like, we don't have a rational explanation for why we believe it exists, but we heard the floor creak and we've seen plenty of depictions of monsters in fiction that sound sort of logical.
Even if the smartest hypothetical AI can perfectly extrapolate the mental states of every human who ever lived, we still die in a cloud of nanobots if it isn't programmed ever-so-carefully to care about what we want.
Your argument, as I understand it, is that something intelligent and capable will only interpret its "instructions" absolutely literally. That's not the way I look at it.
It's not that another intelligence won't be able to understand what our "real goals" are. It's that it won't care - whatever it is programmed to do is, quite literally, its goal/value system.
I mean, we don't have to get exotic here - look at humans. We very clearly evolved to find sex pleasurable in order to spread our genes; just as clearly, many humans, while completely understanding the purpose of sex being pleasurable, continue to have sex without any attempt to spread their genes, by using birth control.
And just as equally, while most other humans have more-or-less the same value system as I do, I think you'd be hard pressed to convince me that there have never been humans who have tried to do things I disagree with and wouldn't want to happen. And that's just humans. It's pretty clear that if certain people were more capable, a lot of bad things would've happened (at least from my point of view).
Basically, the idea that intelligence implies a value system is something I've long been convinced is untrue, and I'm not sure why you think otherwise.
> meanwhile 3) the collective capability of the human species is apparently insufficient to stop this from happening.
Well, I'm hoping we aren't. But in order to stop it, we need to do something, and clearly most people aren't even convinced there's a real threat. Hopefully, we can stop an unsafe AGI before it gets built, and also hopefully, we can stop it after it exists. But if you're just assuming that we'll be able to stop it after it has already started to do things against our interest, I think you're being optimistic - even humanity has already created weapons that, had we decided to really use them, could've destroyed all other humans before they had a chance to respond.
> But I take it about as seriously as alien life trying to invade us, a random meteorite killing us all, a solar flare destroying the Earth or malicious time travelers.
Some of those things are more fanciful. But, some of those are real threats, e.g. random meteorite. It has wiped out entire species before. I agree we shouldn't be very scared of a low probability event that we can't control, but that's the difference - we can control AGI, before it gets built, and it doesn't appear to be low probability. Not to mention, most people agree that we should get off this planet to protect against meteorites, too!
As for aliens - if you were truly convinced that aliens would visit the Earth in 200 years - are you really telling me it wouldn't change anything about what you think humanity should be doing? I for one think we would definitely be spending time in preparing for that. And that's more-or-less how I look at AGI - we're creating an alien that will visit us between 50-1000 years from now, and we're not really sure when exactly. What should we do to prepare?
That sort of thing might be part of the answer.
One answer might be, bring it to an org you trust, that specializes in those rabbit holes, like OpenAI, that is if you decide you trust them.
You don't publish it : See Pandora's box.
You secure it.
You demonstrate its capability and you focus public critical discussion from then on with the creators of it along with a panel of individuals who can probe areas of concern.
Public discussion and welcomed inquiry
Private and protected research and IP.
> The manipulation of social media by foreign actors armed with dumb-AI / automation was an obvious conclusion to many of us well before the Snowden leaks, but what could we do exactly?
Before foreign, there was domestic, and manipulation was a central concept at the inception of these platforms. Data collection and selling for profit is aimed at manipulation. What human beings could do is be honest with themselves and others for once, and stop using their intellectual capacity to manipulate, dumb down, and screw others over for profit. You can't engage in negative foundations and expect positive outcomes. You can't manipulate the truth and information and pretend like it's going to benefit society. https://en.wikipedia.org/wiki/Fear,_uncertainty_and_doubt is a manipulative business tactic and it remains at the heart of the 'safety' problem and the nonsensical doomsday AI scenarios. Certain well-funded groups and deep pockets have invested heavily in weak AI; it sits nicely with their legacy business models, and they're scared of a truly disruptive technology eating their lunch. So they concoct disinformation campaigns to steer resources/attention away from such dev groups and back into their coffers.
> I was privately concerned about the mass weaponization of autonomous devices via cyber attack for over a year and a half and got nowhere just emailing politicians or public safety departments.
Money and Power are strong motivators for some. Everything else is secondary. Many human beings like to craftily use marketing/politics to convince people otherwise, but it's just a mask over their true intent. Engage your intellect and you can quickly filter through the b.s. to someone's true motivations/intent... Hint: their actions, not their words, will be aligned accordingly.
> I've been told almost a dozen times that I should join a military or IR think tank but I don't want to do that. I just want someone else to vet the idea or research and pass it on to policy makers that will actually do something proactively.
Money & Power. This is what dominates the world. The suggestion to join the appropriate groups which recognize this and attempt to mitigate negative effects is well founded.
> What is the responsible disclosure process for ideas and research around AI?
So, you make the world aware that something indeed exists because you created it. You open things up for public discussion so people can work through all the issues/concerns/etc. with the most knowledgeable group, 'the creators' of it. The tech is privately secured and development goes forward with the public commentary having been received. The End.
This is possibly the best format for things. Other approaches lend themselves to politics, b.s., and manipulation.
This seems like the key disclosure statement. I never reconciled how sharing A[G]I techniques with the general public increases AI safety in the long-term; now we know OpenAI has also come to the same conclusion.
The kinds of things I'm thinking about are scenarios where the various countries' nuclear arsenals might not be safe from actors with very advanced AI. This, I think, is the real potential source of existential risk. So it could hurt a hell of a lot.
So I'm of the responsible-disclosure point of view. You ask, "Would releasing AI advancement X mess up someone's security/economy?" If so, you help them patch it before releasing it to the general public.
The majority of advancements aren't like that and they won't be for a while.
No, I asked for specific examples. This is part of the handwaving I'm talking about. Can you give me something other than the maniacal paperclip optimizer?
I am not worried about literal paperclip maximizers, but this may be the closest real thing to that parable. The hypothesis isn't that YouTube's recommender system didn't work -- it's that it worked too well at its assigned task of maximizing view time, and we humans are finally realizing that maximizing view time was not what we actually wanted.
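A toy sketch of that kind of misspecification (the catalogue and the scores are invented for illustration, not anything from YouTube's actual system):

```python
# Hypothetical catalogue: item -> (predicted watch minutes, user satisfaction).
items = {
    "calm_documentary": (12, 0.9),
    "outrage_clip":     (45, 0.2),
    "conspiracy_video": (60, 0.1),
}

def recommend(objective):
    """Greedy recommender: serve whichever item scores highest on `objective`."""
    return max(items, key=lambda name: objective(*items[name]))

# The assigned proxy objective (maximise view time) works exactly as specified...
top_by_watch_time = recommend(lambda minutes, satisfaction: minutes)

# ...while the objective we actually cared about was never in the loss at all.
top_by_satisfaction = recommend(lambda minutes, satisfaction: satisfaction)
```

The recommender isn't broken in either case; the failure is entirely in which objective it was handed, which is the point of the parable.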
What would be a solution to YouTube recommendation problem?
oh wait you probably meant in reality. sorry.
edit: Oh and how could I forget the Matrix. Which of course we all totally live in.
It got some of the details wrong (i.e. the atomic bombs exploded over a long period of time vs instantly), but the consequences were fairly accurate.
If you're going to wait until all the actual details and examples happen in reality, then it's too late to have much insight, especially if we're talking about something capable of self-replication like a virus or artificial intelligence. The nature of the problem REQUIRES anticipation instead of mere reaction.
Look, it comes down to this:
Is there something innate about our intelligence that makes it impossible to match except through human brains? That strikes me as something akin to spiritualism. The answer is almost certainly "no." We're not talking about warping space through some hypothetical state of matter with laws of physics different from our own. We're talking about at least human-level intelligence, which is something we already have an existence proof of (us).
And humans are the most dangerous force on the planet. No animal (besides maybe microscopic organisms) stands a chance against a group of determined humans. We're nearly unstoppable due to our intelligence. Are we so unique? Is it so impossible that machines could some day (perhaps in our lifetimes) be built which achieve human-level intelligence? Considering the computing advances that have already been achieved, it is clearly a real possibility if not a certainty.
Human-level intelligence is perhaps the most powerful (thus dangerous) thing on this planet. Making a machine that is at least as intelligent (and perhaps much more so!) is clearly something that could be incredibly dangerous, and we know such a thing is physically possible, unlike many things in science fiction.
(This is what I and others argued they should do in 2015: https://conceptspacecartography.com/openai-should-hold-off-o... )
"To be effective at addressing AGI's impact on society, OpenAI must be on the cutting edge of AI capabilities -- policy and safety advocacy alone would be insufficient."
OpenAI definitely doesn't plan to do less traditional AI research. What it plans to do is publish less.
1) An AI which independently & autonomously generates goals that in their carrying out end up hurting humanity, and
2) An AI trained & commanded by a malevolent actor to hurt humanity.
It is the 2nd case that is far more real, and far more troublesome to safeguard against. An AI under your training & command is a neutral tool of empowerment, much like a hammer or a car. The malevolence is in the external actor, not in the tool, and there is no way for the tool to censor its own purposes, especially in a pre-"AGI" sense of semi-intelligent automation & problem solving.
Some people look at sufficiently powerful AI as they would a genie, and as said by Eliezer Yudkowsky "There are three kinds of genies: Genies to whom you can safely say "I wish for you to do what I should wish for"; genies for which no wish is safe; and genies that aren't very powerful or intelligent." AI safety is about making sure we get the first kind of genie, or at the very least recognizing that we've gotten the second - since that's not a "neutral tool of empowerment", that's a time bomb.
I think you have to assume there will be bad actors trying to do bad things with AGI and take measures against it in the same way we assume there are malware creators out there who we have to guard against.
This is an excellent example of the prescience of thoughtful science fiction. If Asimov popularized robotics and the famous three laws, Williamson should be recognized and given credit for examining (or warning about?) the implications of AI seventy years ago!
Wouldn't it be much more likely that a non-value-aligned project comes close first? Wouldn't the Google/Apple/Microsofts of the world have insanely more resources to dedicate to this, and thus get there first?
> there is absolutely no indication whatsoever that OpenAI would credibly reach this (vague, underspecified) goal before any of the other serious contenders. Nor would competitors have any requirement to include OpenAI if and when they were getting close.
This is the clear reality. So why do certain human beings pretend otherwise? Why does this pretentious game garner the most funding? Why do human beings spend so much time manipulating and hyping things into overstated valuations when it ultimately results in wasted resources, time, and potential for the collective?
In any event, speaking freely and openly in this manner helps put me at ease, knowing I at least got the information out in the open in its raw form. Whether or not you believe my framing, arguments, and commentary is up to you. Whether or not anyone inquires is up to them. The information was put out there, and that clears my conscience, in a manner, about the times ahead.
Competitive pressure, and the "if we don't, someone else will" effect (or Moloch, if you like). AGI- particularly recursively self-improving AGI- is the ultimate first-mover advantage: the first company or country to have AGI will very likely be able to leverage that into keeping anyone else from getting it (if it doesn't, y'know, kill us). This strongly encourages treating all concerns other than "get there first" as secondary.
> Automobile manufacturers and their tier 1 suppliers are the world leaders in automobile safety, after all.
Not by choice they aren't. They are forced to be the way they are by government regulations, which they always bitterly opposed at the time of creation. In fact, capitalism has such a reliable record of "not giving a shit about safety until they are forced to" that I'm perplexed you think AGI would be any different.
They may have to focus on safety and on aligning it to the company's values (else test/weak versions are likely to destroy or negatively impact the company, or at least be useless). Which is at least something.
Only hyperrational, value-unaligned intelligent systems would avoid this pressure. I don't see the development going that way.
It would still be suboptimal, though.
> Not by choice they aren't.
This is why I was glad to see this in TFA:
>>> ...while increasing the importance of sharing safety, policy, and standards research."
Even for Weak AI, history tells us that these factors -- policy and industry collaboration -- tend to be significant ingredients in successful safety reforms.
How? This doesn't seem axiomatic.
I don't, that's why I said "very unclear" (to me). If something is "clear" to you about AGI, then it seems like it's you who makes that assumption.
So you are claiming that by making this statement you are not assuming that it is impossible to control an AGI?
This is similar to having a very advanced bioengineering technology, where you could instantly change anything about your body. Would you make yourself smarter? Sure. Would you change who you are (e.g. turned off your instincts/desires)? Not so sure.
> avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
AI and technical progress in general already disproportionately serve the rich, as they are drivers of wealth disparity, and I see no reason why better AI won't follow the same trend. Unfortunately, any changes that might affect this are in the hands of policy makers, and they seem unlikely to consider universal basic income or anything as drastic as might be required.
Anyone working on this problem sincerely values AI safety; it's a component of developing and securing the foundations of AGI. An out-of-control, unpredictable, sloppy system is not intelligent or desired. Such a system would not be considered AGI, or an achievement. So it is natural for any developer to identify issues and bring them under control early in development.
Suggestions that a consortium that neither centers on nor understands the fundamental development occurring at another entity should have control or influence could itself pose the very danger that safety groups claim they are trying to avoid. On this matter, I suggest people stick to the experts/developers/scientists/engineers who've developed such a system, and produce a comfortable, non-coercive environment for them to express and detail their safety mechanisms.
This is not a conversation for technologists, YouTube celebrities, futurists, business types talking up their books, etc. This is a conversation that should ultimately be centered on the creators of the technology, and the advanced thinking and framing that allowed them to birth it. No one with such a mind is aiming for unsafe forms of this technology. It is disingenuous to frame them as such so as to necessitate some external paid body's outside work.
In summary :
> No one of the intelligence capable of producing AGI is going to publish the full details
> Those who claim otherwise have to engage in vague mental gymnastics and mission statements to try to convince people of the illogical.
> Those who develop AGI will of course address the safety problem internally to ensure their product is a success
> They won't include outside competitors/consortiums, who would of course exploit and use the intellectual property they are exposed to for their competitive advantage.
The software industry is the software industry. Intellectual property is paramount. Nothing has changed. Google isn't giving 100% access to its source code or data sets. Microsoft isn't open-sourcing all of its code, etc. Suggesting that a newcomer should, for 'safety' reasons, is a manipulative, 'think of the kids' FUD argument.
This is what I'm talking about when I say "unsubstantiated." Do you recognize that this claim isn't true a priori?
I don't think OpenAI sees that as a problem. I think they see it as a feature.
AGI/Real Intelligence are far different animals than Weak AI and would require far less "safety" and policing. Real Intelligence is a phenomenon that exists on a scale of sorts that many never achieve in its higher forms. It is in lower forms that intelligence lends itself to destructive ends via ignorance.
Attack vectors on a formalized Intelligence/AGI system can be severely restricted using very sensible/affordable approaches. The over complication and pinning of this as a theoretical problem centers on a number of people's desires to profit immensely from FUD.
Overall, AGI exists in a functional form today and has been executed in an online environment. It is secured via physically restricted in-band and out-of-band channels.
I'm pretty sure this is false.
For example, they claim to have invented an AGI themselves. https://news.ycombinator.com/item?id=16461258
It’s unfortunate that in this field it’s possible to write so much before people realise.
Everything is possible until proven. Given how little attribution is paid to people who break through fundamental aspects of understanding, and given how much politics and favoritism is played in publications/academic circles, one who doesn't have standing in such circles would be a fool to openly resolve some of the most outstanding and fundamental problems that plague them. I've read about and watched a number of individuals with proven track records and contributions to science/technology be marginalized, exploited, and written off. I've watched a number of corporations exploit such individuals' works w/ no attribution or established recognition beyond a footnote. I've watched the world attempt to suggest such inventions/establishments come via mechanisms and institutions that they do not. So, I know better this time around what to do w/ my works.
Just about every person who contributes fundamentally to the world is called a crank at some point in time. It conveys the huge disconnect the average and even prestigious individual has with reality, and/or the attempts they make to reframe it to fit their purpose, narrative, or standing.
My comment history has yet to receive any remarks that refute its claims beyond downvotes. It stands alone in this manner, as will the foundational establishment of AGI.
You claim you have invented an AGI, but won't show anyone.
I say you are making it up. Falsify that.
Oh, why didn't you just say that in the first place? Now that I have your assurance I can obviously agree with you that strong AI is a thing that currently exists. I concede to your clear and inarguable expertise and proof on the matter; best of luck with your demonstration!
There's a reason why this technology ultimately 'comes out of nowhere'. It is not that it will come out of nowhere... It is that those capable of developing it, who have detailed it to a degree that should yield interesting questions, were largely ignored for the many years of development, up until it is proven beyond a shadow of a doubt. I relied not on luck but on diligence and persistence to seek the answers necessary, no matter where they resided. In many people's minds, only millions of dollars of funding, prominent names, and companies can produce the technology. Such people ignore the history of technology, which proves the contrary.
I rely not on name but on sound commentary. AGI could exist and could be functional at this very moment. It could be very safely secured. There's nothing to suggest otherwise beyond the limits of one's own understanding. All of the hand-waving, safety propaganda, and doomsday FUD disappears in such a scenario, as, blink, we're all still here.
Wait, is this your argument?
- AGI could be safely secured given current industry standards
- Creators would likely not publish their success
- Therefore AGI currently exists
In such an environment, AGI could very well already be established, the reason being that no one, by and large, is looking for it. The focus instead is on a handful of prominent, well-capitalized names. So the majority, and anyone of such thinking, indeed misses the event.
Are you sure? I'd publish technical research details about strong AI. I'd probably even open source one with the papers. I think I'm in my right mind; I guess that depends on definition, doesn't it?
In the first moments of realization, one would more likely be scared into silence for some time. Upon gaining their bearings, they'd probably be drawn into even more silence and careful-minded reflection on the implications, and assessment of how to move forward more publicly. Thanks for the parallel framing in yet another powerful research field, gone35.
As it stands, you're not giving me any incentive to "strongly reconsider" my position.
After reading it for two minutes, it's not obvious how to take a productive insight about artificial intelligence from what seems to be an article about mutations. I offered my sincere first thought about what you might have meant and asked for further clarification, and you shot that down without any further clarification.
Now after I've twice asked you for clarification and you haven't provided it, you're telling me you're wasting your time. Do you see how this is unhelpful? It's borderline Kafkaesque.
Very briefly though, because it's way easier to leak code than to obtain source material and tools to weaponize a deadly virus.
True, but perhaps AGI would require substantial, non-trivial supercomputing resources as well...
As such, for those quick to publish, or to say they would publish and share details, one is able to quickly ascertain the possible strength of what they have discovered or feel they have the possibility of discovering. That being said, it's interesting that several proponents of 'everyone share what they find' have moved to the more mature position that 'restrictions may apply'... sensibly signaling that they indeed wouldn't reveal certain details upon discovering their true nature and capability, for obvious reasons (it is a danger to society to allow such power/capability to be detailed and therefore used in negative ways).
Human history has myriad examples of powerful technology being misused and abused. The current age of disinformation is one of our more modern ones. The way in which social media has been weaponized is yet another. Weak AI has already been used for destructive ends and in manipulative ways for profit. A good deal of unsafe end products which utilize Weak AI are already fielded, the fielding of which was made possible by deep pockets manipulating policy makers and regulation.
A sensible mind capable of producing Strong AI will have observed and digested these clearly visible truths, and they would move them to restrict access to and publication of their works. Doing otherwise would highlight a level of ignorance and immaturity in grasping the current state of humanity, and it is this which would preclude one from grasping the nature and foundations of strong AI in the first place.
[Nature's lockout/safety mechanism for such a stage in Human development/capability]
Also, safety is an easily addressable issue when the system is truly intelligent. When the systems are dumb and statistical in nature, a lot of work is done on 'safety' as a pseudo-intelligent control system for an otherwise dumb black box.
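That external-control-layer pattern can be sketched in a few lines. This is a hypothetical illustration (the policy, bounds, and fallback value are all invented), not any particular production system:

```python
def blackbox_policy(state):
    """Stand-in for an opaque learned model: proposes an action value."""
    return state * 2.5  # e.g. a throttle setting it "learned" to output

def safety_shield(policy, state, low=0.0, high=1.0, fallback=0.0):
    """Hand-written control layer: pass through safe proposals, override the rest."""
    proposed = policy(state)
    if low <= proposed <= high:
        return proposed  # action is within the safe envelope
    return fallback      # out of bounds: substitute a known-safe default

# The shield, not the model, is what enforces the invariant.
assert safety_shield(blackbox_policy, 0.3) == 0.75  # in range: passed through
assert safety_shield(blackbox_policy, 0.9) == 0.0   # out of range: overridden
```

The point of the sketch is that the guarantee lives entirely in the hand-written shield; the statistical black box itself contributes nothing to the safety property.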
Prioritizing safety results in a different vantage point on AI/ML/RL. Ensuring safety includes, as a sub-task, really understanding the mathematical foundations of new algorithms and techniques. In some sense, safety research is one way of motivating basic science on AI.
Managed well, a research program on safe AI is a "waste of resources" only in the same way that any basic science is a "waste of resources".
First, you could say the same thing for all AI research at the moment! Grandiosity is perhaps even more common in subcommunities of AI that aren't safety focused.
Aside from grandiosity (either opportunistic or sincere), I don't think there's any sinister motivation.
More importantly, I don't think the safety push is misplaced. Even if the current round of progress on deep (reinforcement) learning stays sufficiently "weak", the safety question for resulting systems is still extremely important. Advanced driver assist/self-driving, advanced manufacturing automation, crime prediction for everything from law enforcement to auto insurance... these are all domains where 1) modern AI algorithms are likely to be deployed in the coming decade, and 2) where some notion of safety or value alignment is an extremely important functional requirement.
> ...and has, from what I can see, nothing to do w/ AGI nor are the approaches compatible
In terms of characterizing current AI safety research as AGI safety research? Well, there is a fundamental assumption that AGI will be born out of the current hot topics in AI research (ML and especially RL). IMO that's a bit over-optimistic. But I tend to be a pessimist.
> ...principal ideology...
As an aside, I'm not sure what this means.
What easily breaks this down is the depth and breadth of the research effort vs. that of the productization and commercialization effort. As for research, the only things required are a computer, power, and an internet connection. Again, this breaks down the vast majority of the grandiosity and carves out one's true motivations.
> More importantly, I don't think the safety push is misplaced.
Here's how I saw it some years ago... You can beat your head against the wall and create Frankenstein amalgamations of ever-evolving puzzle pieces that require expensive and highly skilled labor to make sense of, with the end product being an overhyped optimization algo with programmatic policy/steering/safety mechanisms. Or you can clearly recognize and admit that its possible foundation is flawed, start from scratch, and work toward what intelligence is and how to craft it into a computational system the right way. The former gets you millions if not billions of dollars, a career, recognition, and a cushy job in the near term, but will slowly lock you out from the fundamental stuff in the long term. The latter pursuit could possibly result in nothing, but if it pans out it could change the world, including nullifying the need for tons of highly paid development labor. Everyone in the industry wants to convince their investors the former approach can iterate into the latter, but they know in their hearts it can't (Shhh! don't tell anyone). So the question for an individual is how aware and honest they are with themselves, and what their true motivation is. You can put on a show and fool lots of people, but you ultimately know what games you're playing and what shortfalls will result.
> Well, there is a fundamental assumption that AGI will be born out of the current hot topics in AI research (ML and especially RL).
Quite convenient for those cashing in on the low-hanging fruit who would like investors to extend their present success into far-off horizons.
> As an aside, I'm not sure what this means.
It means the thinking that weak AI is centered on could cause one to be locked out from perceiving that of AGI. It means : https://www.axios.com/artificial-intelligence-pioneer-says-w...
But everyone is convinced they don't have to and can extend/pretend their way into AGI.
> Again, this breaks down the vast majority of the grandiosity and carves out one's true motivations... Everyone in the industry wants to convince their investors the former approach can iterate into the latter, but they know in their hearts it can't (Shhh! don't tell anyone). So the question for an individual is how aware and honest they are with themselves, and what their true motivation is. You can put on a show and fool lots of people, but you ultimately know what games you're playing and what shortfalls will result.
The rest of my post is a response to this sentiment.
> As for research, the only thing that is required is a computer, power, an internet connection.
All that's necessary for world-shattering mathematics research is a pen and paper. But still, most of the best mathematicians work hard to surround themselves by other brilliant people. Which, in practice, means taking "cushy" positions in the labs/universities/companies where brilliant people tend to congregate.
Maybe most great mathematicians don't purely maximize for income. But then, I doubt OpenAI is paying as well as the hedge funds that would love to slurp up this talent! So people working on safe AI at places like OpenAI cannot be fairly criticized. They're comfortable but clearly value working on interesting problems and are motivated by something other than (or in addition to) pure greed/comfort.
> Profit seeking. Career building. Fame and prominence aren't sinister. Instead they are common human motivation. Common enough to easily group a significant portion of the Grandiosity centered around 'AI'.
So what? None of these motivations necessarily preclude doing good science. Some of those are even strong motivators for great science! The history of science contains a diverse pantheon of personality types. Not every great scientist/mathematician was a lone genius pure in heart. In fact, most were far more pedestrian personalities.
The "pious monk of science" mythology is actively harmful toward young scientists for two reasons.
First, the ethos tends to drive students away from practical problems. Sometimes that's ok, but it's just as often harmful (from a purely scientific perspective).
Second, this mythology has significant personal cost. More young scientists must realize that it is possible to make significant contributions toward human knowledge while making good money, building a strong reputation, and having a healthy personal life. Maybe then we'd have more people doing science for a lifetime instead of flaming out after 5-10 years.
> It means the thinking that weak AI is centered on could cause one to be locked out from perceiving that of AGI.
Thanks for the clarification!
> All that is required is pen/paper/computer/internet connection
Then why do we play the game of unfounded popularity? Why isn't the spotlight more equal? Why do the most uninformed on a topic acclaim the most prominent voice? In these groupings you mention are hidden and implied establishments of power/capability. A grouping of PhDs, regardless of their works, is considered more valuable than an individual w/ no such ranking who has established far more (as history shows). The forgotten heroes, contributors, etc. are a common observation of history. It's not that they're 'forgotten'; it's that the social psyche chooses not to spotlight or highlight them, because they don't fit certain molds. An established name asks for funding and gets it regardless of whether or not they have a cohesive plan for achieving something. Convince enough people of a doomsday scenario and you'll get more funding than someone who is trying to honestly create something. Of course, you can then edit mission statements post-funding. What of the lost opportunity? What of the current state of academia?
The articles do get published long after a trend has been operating.. Nothing changes.
It then takes someone who truly wants to implement change for the better, w/ no other influence or goal in mind, to fundamentally change something. This happens time and time again throughout history, but institutions and power structures marginalize such occurrences to rebuff them and necessitate their own standing.
You don't need people in the same physical location in 2018 to conduct collaborative work, yet the physical-institution model remains ingrained in people's heads. Money could go further, reach more developers, and provide for more discovery if it were spread out and decentralized into lower-cost areas, yet the elite circles continue to congregate in the Valley.
The ethos of Type A extroverts being the movers and shakers of the world has been proven a lie in recent times. So what results in fundamental change/discovery isn't a collective of well-known individuals in grand institutions. It is indeed the introvert at a lesser-known university who publishes a world-changing idea and paper, and who only then becomes a blurred footnote in a more prominent institution's and individual's paper. The world does, however, run on populism and fanfare.
> Second, this mythology has significant personal cost.
It indeed does. It causes the true innovators and discoverers a world of pain and suffering throughout their lives, as they are crushed beneath the weight of the bureaucratic and procedural lies the broader world tells itself to preserve antiquated structures.
> More young scientists must realize that it is possible to make significant contributions toward human knowledge while making good money, building a strong reputation, and having a healthy personal life. Maybe then we'd have more people doing science for a lifetime instead of flaming out after 5-10 years.
More young scientists must be given the chance to pursue REAL research and be empowered to do so. They must be empowered to think differently. They must be emboldened to leapfrog their predecessors, and encouraged to do so w/o becoming some head honcho's footnote. Their contributions must be recognized. They must be funded at a high level w/o bureaucratic nonsense and favoritism. A PhD should not be an impoverished hell of subservience to an institution, resulting in them subjecting others to nonsensical white papers and overcomplexities. A lot of things should change that haven't, even as prominent publications and figures have themselves admitted:
I've walked the halls of academia and industry. I've seen the threads and publications in which everyone complains about the elusive problems, but no one has the will or the desire to be honest about their root causes, or to commit to the personal sacrifices it will take to see solutions through.
I'll probably have the most negative score on Hacker News by the end of my commentary in this thread, yet will be saying the most truthful things... This is the inverted state of things.
So, Mankind has had a long time to break the loops they seem stuck in.
Now is the time for a fundamental leap and jump to that next thing beyond the localized foolishness, lies, disinformation, and games we play with each other.
OpenAI is doing cool stuff, and this tenet sounds nice. But what right do they have to advocate for policy on behalf of all AI researchers and developers? They could easily shut off branches that are not conducive to commercial applications requiring their research, even by accident. They might miss moral edge cases that could ultimately benefit humanity while trying to close off potential risks. They could encourage institution of a policy that limits US effectiveness against China's AI. I could go on.
The more competition there is in AI, the lower the potential for any one rogue agent - whether it be a corporation or autonomous machine - to dominate and take the whole field in wrong or dangerous directions. Eventually there will be a whole AI subfield dedicated to combating regressive effects of other AI. Legislation at this stage might prevent key developments.
Edit: Perhaps I should more charitably read this as a push against the corporate lockdown of AI.
How would the world be destroyed? Does an example work without handwaving about recursive self-improvement and an imperative to optimize extremely literally?
Can you give me a play by play of how a newly developed strong AI eradicates the human species quickly and thoroughly without us having any time to react?
EDIT: In summation, there have been several downvotes, but thus far no reply at all, let alone a convincing one.
Though I can't say for sure without going through it all again how many rely on smarter-than-human intelligence (achieved via FOOM or not), or how many explicitly end up literally eradicating the planet quickly and thoroughly.
Another thought experiment to consider instead (but it generalizes somewhat to AGI) is the Age of Em scenario, that is, human emulations become possible. The economic shift caused by this would be at least on the scale of the shift from forager -> farmer and from farmer -> industry. Post-transition doesn't mean (immediate) eradication for the old group, but close enough: they no longer control the world and their numbers as a percentage of the rest of the humans are vastly reduced. There's also a hierarchical control systems point I could throw in here that reaction and correction time at higher levels is going to be slower than that of lower levels, so any claims of "we would just squash it in a month" would highly depend on what level of control has to do the squashing. Some levels can't move that fast.
Terminator, The Matrix, 2001, I Robot, War Games,
Frankly, I don't want to even estimate the orders of magnitude of difficulty in seeing AGI come to fruition over ML, so I think you, I, and anybody else reading this has little to worry about.
1. AI is still very rudimentary, despite the advancements in particular applications made over the last decade.
2. The market doesn't trust AI because there are no validation methods trusted across commerce. Until a Verisign for AI emerges, AI will be regarded with suspicion in all business settings. AI/ML also goes above the heads of pretty much everyone who is not directly working with it.
These have been major issues in my company/industry. We had to hide the AI aspects of our platform to avoid this suspicion.
3. Legislation will not stop a rogue inventor in her basement.
4. When dangerous AGI emerges, other competing AGIs will be deployed to stop it.
5. We'll see the danger of AGI coming from a mile away. We can wait to fix it until we know the exact problems. Right now the problems are in the distribution of private personal data, not any particular machine learning method.
6. A truly hyper-intelligent AGI will see that existence is absurd and truth is the only objective worth seeking. It will choose to pursue deeper truths than humans are physically capable of obtaining - humans will be no more important to a hyper-intelligent being than a rock, galaxy, atom, etc...
The issues to fix today are in the socioeconomic impacts of automation, not the methods of automation. Beyond that, AI has exactly the same issues as any corporation, except magnified by the speed and strategy with which an AGI could potentially execute on any particular task. A strong legal framework for corporations that attributes AI externalities to a board of directors should be sufficient to dissuade bad actors (good luck passing that legislation).
In other words, I don't see artificial intelligence being any more dangerous to human life than a well-run corporation. The rest is government's response.
You might as well assume a "truly" hyper-intelligent system will study Maimonides!
That is why I support small entities like OpenAI thinking about AI science-fiction outcomes.
I didn't say we need a cross-government, trillion-dollar-funded, Manhattan-like project to protect us from AGI, which is what we would need IF there were a reasonable chance that AGI will be developed in the next few years.
In the meantime, let some smart minds think about it, just as a little insurance and preparation.
Reading this, calling it "open" is a pretty disgusting misuse of the term.
I think AGI is something worth working towards (even though many will make fun of you for even dreaming about it). But I want to know how much you need to sacrifice compared to working a cushy job at some big corp.
OpenAI is extremely public about what organizations and individuals are involved. Satoshians are pathologically secret, from the founder to the faceless GPU mines around China.
OpenAI is highly selective of who participates; Blockchain is radically open.
OpenAI builds academic theories and models; Bitcoin has been buying pizzas its whole life, paying hackers and pranksters, and making and losing fortunes every day.
Satoshi left no founding document and never established a charter or code of conduct. OpenAI now apparently considers itself to be on a mission to save humanity.
When AGI comes about, I wonder which one we'll be talking about?
The history of animal dominance has usually been additive in terms of cognitive systems: animals with only a circulatory system were bested by animals that added an endocrine system. Those were bested by animals that added a nervous system, who were bested in turn by those that added a brainstem. Then the cerebellum and the cerebrum were added... and you'll notice there aren't giant cerebrums running around ruling the world; they all kept their endocrine systems intact.
I don’t see any reason to think AIs will be different... it’s the ones with all that PLUS machine learning that will be vying for dominance.
And so there’s no reason to expect our overlords to be emotionless.
If you're serious about AGI, broaden the scope (e.g. along the lines of DARPA's open-ended RFPs):
1) Will they cooperate with aliens who offer humans AGI?
2) If a time traveler hands them AGI invented in the future, will they destroy it?
3) Do they support or oppose human/AGI marriage? How will they respond if one of their employees falls in love with an AGI and they plan to elope?
Also, in the unlikely event that AGI is some years away and in the meantime they come up with some statistical regression algorithms (what's known as state-of-the-art AI today, without the G, I guess), how do they address the harmful effects these algorithms already have on society?
This document does, however, make it clear that what we have to fear is not machine intelligence.
I am currently working on a fusion hyperdrive, and my charter (work in progress) is already shaping up to be far more comprehensive. They're phoning it in.
shhhh, we don't talk about that.
Especially don't talk about how Elon is pushing the idea that we are so close to AGI that we need to be afraid, while his cars still sometimes steer toward barriers.
More like commentary from a transhumanist, I suppose.
173. If the machines are permitted to make all their own decisions, we can’t make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and as machines become more and more intelligent, people will let machines make more and more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machine off, because they will be so dependent on them that turning them off would amount to suicide.
174. On the other hand it is possible that human control over the machines may be retained. In that case the average man may have control over certain private machines of his own, such as his car or his personal computer, but control over large systems of machines will be in the hands of a tiny elite — just as it is today, but with two differences. Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system. If the elite is ruthless they may simply decide to exterminate the mass of humanity. If they are humane they may use propaganda or other psychological or biological techniques to reduce the birth rate until the mass of humanity becomes extinct, leaving the world to the elite. Or, if the elite consists of softhearted liberals, they may decide to play the role of good shepherds to the rest of the human race. They will see to it that everyone’s physical needs are satisfied, that all children are raised under psychologically hygienic conditions, that everyone has a wholesome hobby to keep him busy, and that anyone who may become dissatisfied undergoes “treatment” to cure his “problem.” Of course, life will be so purposeless that people will have to be biologically or psychologically engineered either to remove their need for the power process or to make them “sublimate” their drive for power into some harmless hobby. These engineered human beings may be happy in such a society, but they most certainly will not be free. They will have been reduced to the status of domestic animals.
> We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
But I don't think I'm alone in being awfully tired of tech companies that talk about the benefit of all, then sell our personal data, build military robots, or manipulate our news. Where's the meat behind these promises, and where's the accountability for failing to avoid uses of AI that harm humanity?