OpenAI Charter (openai.com)
238 points by craigkerstiens on Apr 9, 2018 | 177 comments



So I have a question.

To "avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power" what does one do with an idea or line of research that could potentially harm humanity or unduly concentrate power?

The manipulation of social media by foreign actors armed with dumb AI / automation was an obvious conclusion to many of us well before the Snowden leaks, but what could we do exactly? I remember having conversations with people about it, and we concluded that it would just keep happening until someone pushed it too far. Then Russia did, and now we're finally reacting.

For over a year and a half I was privately concerned about the mass weaponization of autonomous devices via cyber attack, and I got nowhere just emailing politicians or public safety departments. I've been told almost a dozen times that I should join a military or IR think tank, but I don't want to do that. I just want someone else to vet the idea or research and pass it on to policy makers who will actually do something proactively.

Put another way:

What is the responsible disclosure process for ideas and research around AI?


>> What is the responsible disclosure process for ideas and research around AI?

Basically, we're so far away from AGI that there's no need to worry about disclosing anything. The recent advances in machine vision and speech processing are impressive, but only in the context of the last 50 years or so. A truly intelligent agent will need much more than this, and there doesn't seem to be anyone alive today who knows how to go from where we are to where AGI will be.

In other words, all this is really premature. If we're talking about responsible and regulated use of what you call "dumb AI/automation" on the other hand, then that's a different issue. But AGI, currently, is science fiction. You may as well regulate research in time travel, or teleportation.


The misuse of AI is a continuum that ends with AGI. If we don't have a process for handling responsible disclosure of dumb AI that could kill millions, then why should we expect that a process will be available once AGI is within a reasonable time horizon?

If I have other shit in my head that I'm worried about today who do I tell?


I think the current misuse of AI is about as likely to produce AGI as a toddler's finger painting is to produce a Mona Lisa, and the whole AGI drama is overblown way out of proportion. Right now the state of the field is such that no one can even begin to contemplate how to create the very basic underpinnings of anything remotely resembling AGI. That’s how fundamental this problem still is.

That’s not to say that there’s no way humanity can be fucked with the more pedestrian “garden variety” AI that is within our technical capabilities.

It’s to say that AGI is a nebulous, unobtainable red herring which only serves to detract from the more immediate issues.


> I think the current maluse of AI is about as likely to produce AGI as finger painting of a toddler is to produce a Mona Lisa

YES FELLOW HUMAN AN APT METAPHOR


There are no realistic prospects of building AGI with modern understanding and equipment. We'll probably need many generations and a paradigm shift (or two) in computer science before we can build machines that can run intelligence like a program, let alone being able to write that program.

Hence my comment about how you might as well worry about time travel and teleportation. They're just as likely to happen in any timeframe that you might be interested in. If you're going to be worried about how AGI might be misused, you can start worrying about how time machines or teleporters might be misused.

>> If I have other shit in my head that I'm worried about today who do I tell?

You could look at joining the Campaign to Stop Killer Robots, or look around for a similar group.


I agree 100% with your comments about AGI. The discussion is completely pointless - we can't even agree on a definition for intelligence.

AI (as in "machine learning") on the other hand is something which should be worried about. Universities are actively building autonomous, ML powered weapons systems.

Just last week a pretty serious boycott of a South Korean university was announced over their autonomous weapons research.

Signatories of the boycott include some of the world’s leading AI researchers, most notably professors Geoffrey Hinton, Yoshua Bengio, and Jürgen Schmidhuber. The boycott would forbid all contact and academic collaboration with KAIST until the university makes assurances that the weaponry it develops will have “meaningful human control.”[1]

That's a real problem, and it is unclear if OpenAI's approach is relevant. Personally I think the academic boycott is a good start (what ML researcher will want to work there?), but it is unclear how to deal with commercial research labs in a similar space.

[1] https://www.theverge.com/2018/4/4/17196818/ai-boycot-killer-...


When would be a good time to start worrying? What external indicator will we get that we need to worry? And what makes you sure that, whenever that external indicator happens, we'll have enough time to stop problems from happening?

If you're not familiar with it, I'm making the argument from here and it makes good further reading: https://intelligence.org/2017/10/13/fire-alarm/


When would be a good time to start worrying? Let's say two hundred years from now, give or take a few decades. I imagine that by that time we might have made some progress on the subject of intelligence, what it is and how to reproduce it. At that point, it might be prudent to worry about an AI getting out of control.

We have no way of knowing what external indicators we might get. We have no idea what AGI would look like. That is often offered up as the very reason to "act now!" by Singularitarians, but if there is a threat that you don't know anything about, there is also nothing you can do about it. Unless you know what you're trying to stop, your actions are as good as random.

In this sort of discussion, you can replace "AGI" with "alien invasion". The threat from a Superintelligence is as serious and as predictable as that from an alien civilisation. We can prepare against AGI as much as we can prepare against an alien invasion. And we have exactly as much to fear from a hostile AGI as we have from a hostile alien species.


Ok, so what makes you say 200 years? And more importantly, what would make you change your mind about that number (in either direction)?

> [...] if there is a threat that you don't know anything about, there is also nothing you can do about it.

I disagree with this, at least somewhat. For one thing, even without knowing the specific threat, there's lots of stuff you can do which is just generally helpful - e.g., try to spread to other planets. Will it necessarily help? No. But there are a lot of threats it will protect against.

More importantly, people actually working on the problem of AI safety say they are doing things that appear, at least to them, to be useful. What makes you so sure they're wrong? From the little I know of their research, it certainly seems like stuff that will likely be pretty helpful.

Last point - considering just how big a problem AGI could be if they're right, just how many resources would you want to devote to it? Literally zero?


200 years is an entirely arbitrary period of time. The idea is that there's no way to predict when it will be possible to develop AGI but, if it happens at all, it will happen at some point in the future distant enough that nobody alive today will be around to say "I told you so".

You can ask how I know this. Obviously, I don't, because I can't see into the future. But I can see the state of the art in the present and it's been baby steps for the last 70 years or so - and our capabilities have remained entirely primitive, industry hype notwithstanding.

>> More importantly, people actually working on the problem of AI safety say they are doing things that appear, at least to them, to be useful. What makes you so sure they're wrong?

There are all sorts of opinions about how AI performance is accelerating ("exponentially"). In truth, however, what has actually accelerated (and, in fact, plateaued in recent years) is performance on very specific tasks - object and speech recognition - and not the general intelligence of AI systems. In fact, if you want to be more precise, we can only talk about advances in the context of very specific benchmarks, which is to say, specific datasets (like ImageNet) and according to specific metrics (say, F-score).

The problem is that all those benchmarks are arbitrarily chosen (ish; see below) and research teams spend a great deal of time tuning their systems to beat them. Which means good performance on a benchmark tells us nothing about the extrinsic quality of a system: how it does in the real world, outside the lab, where the central assumption of PAC learning - that training and unseen data can be expected to have the same distribution, so that a system's performance on the former predicts its performance on the latter - is not guaranteed to hold. And then, performance on one type of task (e.g. classification) tells us nothing about the general capabilities of the system (i.e. general intelligence).
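
A toy sketch of that train/test distribution point (my own illustration, not anything from a real benchmark; it assumes numpy and scikit-learn, and the synthetic data is made up): a classifier that looks great on a held-out benchmark drawn from the training distribution can fall apart when the deployment distribution shifts.

    # Benchmark (held-out) accuracy only predicts real-world accuracy if the
    # deployment data comes from the same distribution as the training data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_data(n, shift=0.0):
        # Two Gaussian classes; `shift` moves both class means at deployment time.
        X0 = rng.normal(loc=-1.0 + shift, scale=1.0, size=(n, 2))
        X1 = rng.normal(loc=+1.0 + shift, scale=1.0, size=(n, 2))
        return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

    X_train, y_train = make_data(1000)             # training distribution
    X_bench, y_bench = make_data(1000)             # benchmark: same distribution
    X_world, y_world = make_data(1000, shift=2.0)  # "real world": shifted distribution

    model = LogisticRegression().fit(X_train, y_train)
    print("benchmark accuracy:", model.score(X_bench, y_bench))  # high
    print("shifted accuracy:  ", model.score(X_world, y_world))  # much lower

Nothing in the benchmark number warns you that the shifted case will go wrong; that's the whole point.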

Singularitarians, like the people at MIRI (which your previous comment linked to) have focused on the performance of AI systems on modern benchmarks and the increase in compute, but modern benchmarks were essentially invented to allow some progress in machine learning, when mathematical results showed that progress was impossible. These previous results include Gold's famous result about learning in the limit (from the literature on Inductive Inference). The relaxation of assumptions was suggested in Leslie Valiant's paper "A theory of the learnable", which introduced PAC learning, the paradigm under which modern machine learning operates.

And to clarify my point above- modern machine learning benchmarks are not exactly chosen arbitrarily, rather they are justified by PAC learning assumptions about the learnability of propositional functions (and those, only; in fact, modern machine learning systems are propositional in nature, which severely restricts the expressive power of their models, making it much harder to realise the promise of Turing-complete learning of many of them).

... that probably got a bit too technical. My point is that just because we see improvement in performance today, in the field of research that we call machine learning, that doesn't mean there is actual progress in the understanding of what intelligence is, or in our ability to reproduce it. In a way, we might have changed our metrics, but we haven't necessarily improved our performance.

>> (...) try to spread to other planets.

So we have a science fiction problem and we're looking for science fiction solutions to it? :)

>> Last point - considering just how big a problem AGI could be if they're right, just how many resources would you want to devote to it? Literally zero?

Well, again that depends on what we can do about the problem, which in turn depends on what we know about it. I guess, like you say, if we know nothing about the problem, we can try random things like spreading to other planets, or genetically enhancing the whole human race's intelligence until we ourselves are superintelligent and the risk of our being taken over by an artificial superintelligence is 0.

But, having 0 certainty about the nature of AGI, we have exactly the same chances to avert the danger from it by sitting on our hands as we have by migrating to other planets. I believe Bostrom's superintelligence scenario involves a machine that eventually colonises Mars with secret mining robots? If you're prepared to entertain the possibility of truly-Super intelligence, there's probably nothing you can do about it anyway.


To fully answer your last point. Sorry - this has grown extremely long. I'm actually doing work right now, waiting for a long experiment to run, so I've got some time to burn :)

So, my pet formula for deciding whether taking a risk is worth it is to multiply the probability of some adverse event occurring, let's call this event X, by the cost associated with that event, let's call it Y; so R = p(X) * Y, where R is the risk, and you can then decide on some risk threshold, T, where if R < T you can justify taking the actions that you believe might lead to event X.

Now, when it comes to the Singularity (or, indeed, an Alien Invasion) we can accept the cost of the event to be infinite, Y = ∞, under the assumption that Superintelligence means game over for the species. But our knowledge of the event is nonexistent so the probability of the event, X = Singularity is... undefined. You can't plug that in my formula. You can't guess at the probability of X because nothing like X has ever happened before. You can try to extrapolate it from the development of human intelligence, but that took billions of years to evolve in a manner completely different than what we can reasonably expect for artificial intelligence (i.e. involving computers).
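
To make that concrete, here's the formula as a few lines of Python (a toy sketch; the numbers and the threshold are invented for illustration, only R = p(X) * Y itself is the formula above):

    import math

    def risk(p_event, cost):
        # Expected cost of an adverse event: R = p(X) * Y
        return p_event * cost

    T = 1e-3                              # some arbitrary risk threshold
    print(risk(1e-6, 1e2))                # ordinary case: 1e-4, below T
    print(risk(1e-9, math.inf))           # infinite cost: any nonzero p gives inf
    print(risk(float("nan"), math.inf))   # unknown p: the result is undefined (nan)

With Y infinite and p(X) unknown, the formula simply stops producing usable numbers, which is the point I'm making.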

Basically, there's no way to calculate the risk from Superintelligence as long as we know nothing about it- 0 information means undefined risk. And therefore, no amount of resources can be justified to be spent to mitigate this risk.

In other words, I really couldn't answer your question- 0 resources, an infinite amount of resources, they're essentially the same.

Bottom line: to make decisions you need information even more than you need the resources to implement them.

Or in other words: you can't prepare for the unknown.

___________

OK, experiment's done cooking :)


Do you sincerely think that we are totally unable to predict when--or if ever--we might create general intelligence, something which stupid evolution has done through trial and error, but humans can attempt with striking brilliance and worldwide motivated research efforts? You can't make any rough prediction on whether we will be able to reach that point, or when it will happen?


>> Do you sincerely think that we are totally unable to predict when--or if ever--we might create general intelligence, something which stupid evolution has done through trial and error, but humans can attempt with striking brilliance and worldwide motivated research efforts?

Yes.

>> You can't make any rough prediction on whether we will be able to reach that point, or when it will happen?

No.


So you wrote a rather long post, I'll try to hone in on our disagreements. There are, I believe, two main disagreements:

1. I'm more confident than you in our ability to do something now and to predict things about AGI. Not much more confident, mind you, I just think the numbers are not literally 0, which makes most of your points above regarding amount of resources moot.

Basically, I think there are things we'll likely need to solve one way or another to figure out AI safety, and these are things we can work on. I also think that we can make some very rough estimates on timelines, and some very rough estimates on things we should do to mitigate.

2. I believe you misunderstand some of the points that people at MIRI make. You appear to classify them together with people like Ray Kurzweil, who appeals to exponential growth, etc, to make his arguments. The people at MIRI, afaict, don't really agree with this line of reasoning - in fact, I believe MIRI has all but ignored the progress on current AI for most of its existence (barring the last year or so), at least in terms of its research.

I mean, I agree that modern machine learning is not AGI or close to it. And I have no idea what AGI will look like - just a scaled up version of ML? A completely different take on things? The merge of a few different concepts together that will suddenly "tip over" to being capable? I have no idea. No one really does.

I am mindful, however, of a pretty long history of humans being terrible at predictions, in both directions - classic cases including people saying flight was impossible, a few years after the Wright brothers had already flown.

I'm not saying we're 2 years away from AGI - I'm saying we're not really sure, and even if we were 200 years away, I think it's worth spending some time and resources on trying to think about what we should do to prepare. The worldwide amount of resources spent on AGI and other existential risks is, what, 0.000001% of the amount of resources the world spends on sport? Does that really seem like a reasonable distribution to you?

>> (...) try to spread to other planets.

> So we have a science fiction problem and we're looking for science fiction solutions to it? :)

:)

It was just an example of something we can do right now. There are other examples if you want the other end of the spectrum - like stopping all technological development. I just don't think that will happen.

And let me point out that while AGI is a large threat IMO, lots of other things are large threats too - our technology is advancing on all fronts, and in many areas, we will soon be at a place where we can cause humanity to cease existing. E.g.:

1. Creation of weapons, much more powerful than atom bombs, that can effectively kill everyone.

2. Creation of super viruses that can destroy all humanity.

3. Creating the ability to "mind-upload" or similar, turning some people into much more powerful /faster thinkers/ whatever than others. It doesn't need to be an artificial super-intelligence to be dangerous.

And yes, these are all sci-fi scenarios, precisely because this is what's new in our world - scientific advancement! That's why the argument of "we've survived so far" is wrong - because the variable that's changing is the abilities that humanity has access to.


AlphaZero is existing general AI that learns board games on its own and reaches superhuman levels in less than a day. It seems to me quite plausible that something like that could be generalised to understanding the physical world rather than that of chess or Go. Admittedly we can't do that today, but I'm not sure it's far off.

Musk gives it five years in the recent movie https://www.youtube.com/watch?v=rqlwUAxoxa8&feature=youtu.be...


AlphaZero is not general AI. It can play Go (and chess and shogi) but that's very literally all it can do- it can't process speech or recognise objects in images, or any other task you might choose to test it against. It is extremely powerful, but also extremely limited, in the same way that computers have been for a very long time.

For instance, computers are decidedly better than humans at arithmetic. Yet nobody (today) thinks that just because computers can do arithmetic very quickly and accurately, they are more intelligent than humans.

Is there anything special in Go or chess that makes a specific skill at them a necessary and sufficient condition for general intelligence? We are impressed at the performance of AlphaZero because humans find it very hard to play Go etc. well. On the other hand, we also find it extremely hard to, e.g., divide two arbitrary 100-digit numbers - but we are not impressed by a pocket calculator as much as we are by AlphaZero. Might there be an element of psychological bias in our ability to be impressed, then?


Go and chess are different in that they are designed as a competition for human thinking, in a similar way that, say, marathons are designed to show off human running. Dividing 100-digit numbers, less so. Also, there was something human-like in the way AlphaZero learns, unlike, say, Stockfish, which is more calculator-like.


There are competitions where people try to do arithmetic with very large numbers [1], or memorise very long sequences. The point is not what is an actual competition, which is a matter of tradition or culture; but what is easy and hard for a human to do, which is not (entirely) culturally determined.

I guess you could say that the difference between human and computer intelligence is that humans don't have total recall, while computers do. This allows computers to add large numbers without trouble or play millions of entire games at random and choose the best, etc.

On the other hand, there's always a chance that humans' incomplete recall is a hallmark of general intelligence.

>> Also there was something human like in the way alphazero learns unlike say stockfish which is more calculator like.

That's a matter of interpretation. AlphaZero plays by searching a huge space of possible moves - humans don't play like that. It learns by playing itself millions of times. Again, humans don't learn like that.

There's nothing human about AlphaZero, nor Stockfish, as far as I can tell.
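
To make the "search a huge space / simulate lots of games" point concrete, here's a toy flat Monte Carlo player for Nim (my own illustration; it is emphatically not AlphaZero's actual algorithm, which guides a tree search with a learned policy/value network rather than purely random playouts):

    import random

    def legal_moves(pile):
        # In this Nim variant you may take 1, 2 or 3 objects; taking the last one wins.
        return [take for take in (1, 2, 3) if take <= pile]

    def random_playout(pile, our_turn):
        # Finish the game with uniformly random moves; return True if "we" win.
        while pile > 0:
            pile -= random.choice(legal_moves(pile))
            if pile == 0:
                return our_turn        # whoever just moved took the last object
            our_turn = not our_turn
        return not our_turn            # pile already empty: the previous mover won

    def best_move(pile, playouts=2000):
        # For every legal move, simulate many random games and keep the best win rate.
        best, best_rate = None, -1.0
        for move in legal_moves(pile):
            wins = 0
            for _ in range(playouts):
                remaining = pile - move
                wins += 1 if remaining == 0 else random_playout(remaining, our_turn=False)
            rate = wins / playouts
            if rate > best_rate:
                best, best_rate = move, rate
        return best, best_rate

    move, rate = best_move(pile=10)
    print(f"from a pile of 10, take {move} (estimated win rate {rate:.2f})")

It "plays" purely by brute simulation and averaging; there's no understanding anywhere in there, which is the sense in which I mean the approach is powerful but not general.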

____________________

[1] https://en.wikipedia.org/wiki/Mental_Calculation_World_Cup


I could do with a sceptical take on some of my views on AGI. I think it's somewhat predictable, taking into consideration some old/new ideas.

Interested in an off-ycombinator discussion?


Sure, why not. Please feel free to email me (address in my profile).

I'm going through a bit of crunch at the moment so answers may be a little slow but I always like a good discussion :)


> To "avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power" what does one do with an idea or line of research that could potentially harm humanity or unduly concentrate power?

Pursue the research and publish its results freely and in their entirety.


Are you saying that if it were discovered to be both cheap and easy to create a super-pandemic using nothing more than household chemicals and a swab of your own snot, then you'd publish the results freely and in their entirety?

If not, how catastrophic a level of harm are you willing to risk before you start advocating concealing the results?


This is the very same concern cyber-security researchers face every time they notice a vulnerability. Do they publish their work openly and potentially leave all those using the service vulnerable to attack?

Yes, for a combination of two reasons:

1) They have obvious incentives to publish their results.
2) The more we know about vulnerabilities, the better we are at defending ourselves against them.

That second reason is why we are (overall) better off acknowledging possible threats to our existence. In your example, the more open the study of super-pandemics is, the more open the study of combatting super-pandemics is. The more we are aware of a threat, the better informed we are to prevent it.

Yes, in your example, the threat could be highly dangerous and imminent. However, if only one individual was capable of creating a super-pandemic, the number of people who could potentially help stop it is drastically reduced - compared with a free distribution of results, which arms our society as a whole with the ability to prevent it.


"Only one individual capable of creating a super-pandemic": by hypothesis, this individual is morally good in some way. (Otherwise they wouldn't be having this internal debate about whether or not to release the information about how to produce the virus: they'd just release the virus!)

The ideal scenario is that throughout the rest of time, no entity ever manufactures the virus; the second-best scenario is that an entity does manufacture the virus, but only after there is some hypothetical defence against it. You are shattering forever the dream of achieving the ideal scenario (again by hypothesis, the virus is easy to make, and history demonstrates that if something is easy and destructive, eventually someone will do it); you're pinning your hopes on the second-best scenario. You are therefore implicitly assuming that "we get better at arming ourselves against the virus" happens faster than "enemies create the virus", which is by no means a given and must be weighed on a case-by-case basis.

By the way, this is the basic reason why I'm so sad about the existence of fully 3D-printed guns. There is essentially no defence against guns. The creators of the 3D-printed gun chose to spread the blueprints far and wide, to accelerate their advent. It's sad that human nature is such that this was predictable and inevitable.


> Are you saying that if it were discovered to be both cheap and easy to create a super-pandemic using nothing more than household chemicals and a swab of your own snot, then you'd publish the results freely and in their entirety?

No.

> If not, how catastrophic a level of harm are you willing to risk before you start advocating concealing the results?

I'm not sure. Probably quite catastrophic, because it stretches the limits of credulity to seriously talk about a what if scenario where any given individual can easily create a weapon of mass destruction in their kitchen. A better question is why you think such a straightforwardly accessible method of eradicating our species won't be discovered independently despite your efforts to conceal it. How do we navigate that philosophical labyrinth?

But more pointedly, I dispute that this is comparable to any specific, credible example of strong AI. Give me a credible scenario that brings us from strong AI to the annihilation of our species without handwaving about recursive self-improvement and selective idiocy and I'll reconsider my position. I'm not going to give up the chance to publish incredibly novel research because some other people like to work themselves into hysterics about a conceptually incoherent boogeyman with "AI" slapped onto it.


> Give me a credible scenario that brings us from strong AI to the annihilation of our species without handwaving about recursive self-improvement and selective idiocy and I'll reconsider my position.

What kind of credible scenario are you looking for? You're obviously knowledgeable about the subject, I'm sure you've read various stories, but it's quite easy to dismiss them all as "not credible" or "just so stories". E.g. A good story is Tegmark's story at the beginning of the book Life 3.0, but again, can be easily dismissed if you don't specify acceptance criteria for the story.

I mean, simple scenario - AGI gets built by a farming company, gets programmed to farm more corn, but there is no stop condition programmed in, so it converts all available land into corn fields. Is it likely? No, of course not - no specific example is likely. But why is this something that fundamentally can't happen? More importantly, can you specify criteria for a scenario which you would deem acceptable? Cause if not, you're literally saying that there is no possible way to convince you that this is a possibility, which seems to me not to be a good position on pretty much anything.


A credible scenario is one in which none of the parts of the story have a giant leap between them. For example, with your scenario:

> AGI gets built by a farming company

Okay, sounds fine, keep going.

> gets programmed to farm more corn

Yep, this makes sense to me.

> but there is no stop condition programmed in

Definitely sounds like something that could happen, with you so far.

> so it converts all available land into corn fields

...and you lost me. I'm not sure how we ended up here.

The AI you're postulating is 1) intelligent and capable enough to be called "AGI" and arbitrarily terraform land into a useful field for corn, yet 2) dumb and inept enough to interpret its instructions absolutely literally, like a magic genie; meanwhile 3) the collective capability of the human species is apparently insufficient to stop this from happening.

I guess that could happen. But I take it about as seriously as alien life trying to invade us, a random meteorite killing us all, a solar flare destroying the Earth or malicious time travelers. Each of these what-if scenarios also has a sensible build-up followed by a giant leap in suspension of disbelief, concluding in disaster. That's not a framework for intelligent discussion and productive rationality, it's a self-indulgent and conceptually incoherent exercise in mental gymnastics. While we're at it we could argue about how many angels will fit on the head of a pin. If I start a news cycle about how quantum computers will allow us to build nanotechnology that can hack human brains, does my idea have any credibility? It could happen, right?

Our current capability with respect to artificial intelligence is so far removed from any reasonable form of strong AI that there isn't even a recognizable path forward. Established leaders in the field like Yann LeCun have publicly stated deep learning will not take us there. We don't even know how to consistently and rigorously define strong AI, let alone conjecture about how it would work or what its theoretical danger would be. That means we're trying to extrapolate the properties of hypothetical nuclear weapons from gunpowder. We are effectively children grasping in the dark, getting hysterical about a monster that may or may not be lurking under our collective beds. We can't agree on what the monster looks like, we don't have a rational explanation for why we believe it exists, but we heard the floor creak and we've seen plenty of depictions of monsters in fiction that sound sort of logical.


3) is a perfectly reasonable objection, and I can understand why people say "it's too far in the future, we have more pressing issues right now". But 2): "dumb and inept enough to interpret its instructions absolutely literally, like a magic genie" is not a reasonable objection, unless you have some compelling reason for why the orthogonality thesis is false. Why should an AI care about what we want from it, unless we're exceedingly careful about programming it so that its utility function is perfectly aligned with human desires; and is "exceeding care" a feature, now or ever, of how we approach AI engineering?

Even if the smartest hypothetical AI can perfectly extrapolate the mental states of every human who ever lived, we still die in a cloud of nanobots if it isn't programmed ever-so-carefully to care about what we want.


> The AI you're postulating is 1) intelligent and capable enough to be called "AGI" and arbitrarily terraform land into a useful field for corn, yet 2) dumb and inept enough to interpret its instructions absolutely literally, like a magic genie;

Your argument, as I understand it, is that something intelligent and capable wouldn't interpret its "instructions" absolutely literally. That's not the way I look at it.

It's not that another intelligence won't be able to understand what our "real goals" are. It's that it won't care - whatever it is programmed to do is, quite literally, its goal/value system.

I mean, we don't have to get exotic here - look at humans. We very clearly evolved to find sex pleasurable in order to spread our genes; just as clearly, many humans, while completely understanding the purpose of sex being pleasurable, continue to have sex without any attempt to spread their genes, by using birth control.

And just as equally, while most other humans have more-or-less the same value system as I do, I think you'd be hard pressed to convince me that there have never been humans who have tried to do things I disagree with and wouldn't want to happen. And that's just humans. It's pretty clear that if certain people were more capable, a lot of bad things would've happened (at least from my point of view).

Basically, the idea that intelligence implies a value system is something I've long been convinced is untrue, and I'm not sure why you think otherwise.

> meanwhile 3) the collective capability of the human species is apparently insufficient to stop this from happening.

Well, I'm hoping we aren't. But in order to stop it, we need to do something, and clearly most people aren't even convinced there's a real threat. Hopefully, we can stop an unsafe AGI before it gets built, and also hopefully, we can stop it after it exists. But if you're just assuming that we'll be able to stop it after it has already started to do things against our interest, I think you're being optimistic - even humanity has already created weapons that, had we decided to really use them, could've destroyed all other humans before they had a chance to respond.

> But I take it about as seriously as alien life trying to invade us, a random meteorite killing us all, a solar flare destroying the Earth or malicious time travelers.

Some of those things are more fanciful. But, some of those are real threats, e.g. random meteorite. It has wiped out entire species before. I agree we shouldn't be very scared of a low probability event that we can't control, but that's the difference - we can control AGI, before it gets built, and it doesn't appear to be low probability. Not to mention, most people agree that we should get off this planet to protect against meteorites, too!

As for aliens - if you were truly convinced that aliens would visit the Earth in 200 years - are you really telling me it wouldn't change anything about what you think humanity should be doing? I for one think we would definitely be spending time in preparing for that. And that's more-or-less how I look at AGI - we're creating an alien that will visit us between 50-1000 years from now, and we're not really sure when exactly. What should we do to prepare?


I don't think you answered 3pt14159's question. What if you publish the research and nobody who can do anything about it cares?


Then what exactly was the new bad thing introduced by publishing it in the first place?


I'm thinking about an AI safety index. Where a think tank rates governments on their AI readiness and preparation against threats (amongst other things).

That sort of thing might be part of the answer.


Good question, and the best answer would probably require a trip down some rabbit holes.

One answer might be, bring it to an org you trust, that specializes in those rabbit holes, like OpenAI, that is if you decide you trust them.


I've wondered about this same question. If it were up to you, what would you propose?


> To "avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power" what does one do with an idea or line of research that could potentially harm humanity or unduly concentrate power?

You don't publish it: see Pandora's box. You secure it. You demonstrate its capability and you focus public critical discussion from then on with the creators of it, along with a panel of individuals who can probe areas of concern. Public discussion and welcomed inquiry; private and protected research and IP.

> The manipulation of social media by foreign actors armed with dumb-AI / automation was an obvious conclusion to many of us well before the Snowden leaks, but what could we do exactly?

Before foreign actors there were domestic ones, and manipulation was a central concept at the inception of these platforms. Data collection and selling for profit is aimed at manipulation. What human beings could do is be honest with themselves and others for once and stop using their intellectual capacity to manipulate, dumb down, and screw others over for profit. You can't engage in negative foundations and expect positive outcomes. You can't manipulate the truth and information and pretend like it's going to benefit society. https://en.wikipedia.org/wiki/Fear,_uncertainty_and_doubt is a manipulative business tactic and it remains at the heart of the 'safety' problem and the nonsensical doomsday AI scenarios. Certain well funded groups and deep pockets have invested heavily in weak AI, which sits nicely with their legacy business models, and they're scared of a truly disruptive technology eating their lunch. So, they concoct disinformation campaigns to steer resources/attention away from such dev groups and back into their coffers.

> I was privately concerned about the mass weaponization of autonomous devices via cyber attack for over a year and a half and got nowhere just emailing politicians or public safety departments.

Money and power are strong motivators for some. Everything else is secondary. Many human beings like to craftily use marketing/politics to convince people otherwise, but it's just a mask for their true intent. Engage your intellect and you can quickly filter through the b.s. to one's true motivations/intent... Hint: their actions, not their words, will be aligned accordingly.

> I've been told almost a dozen times that I should join a military or IR think tank but I don't want to do that. I just want someone else to vet the idea or research and pass it on to policy makers that will actually do something proactively.

Money & power. This is what dominates the world. The suggestion to join the appropriate groups which recognize this and attempt to mitigate the negative effects is well founded.

> What is the responsible disclosure process for ideas and research around AI?

Same as my answer to your first question: you don't publish it. You secure it, you demonstrate its capability, and you focus public critical discussion from then on with the creators of it, along with a panel of individuals who can probe areas of concern. Public discussion and welcomed inquiry; private and protected research and IP.

So, you make the world aware that something indeed exists because you created it. You open up things for public discussion so people can work through all the issues/concerns/etc. with the most knowledgeable group - 'the creators' of it. The tech is privately secured and development goes forward with the public commentary having been received. The End.

This is possibly the best format for things. Other approaches lend themselves to politics, b.s, and manipulation.


Learn to quote properly please. Only include what is relevant to your reply - anyone can access the whole post if they want more context.


>>"We are committed to providing public goods that help society navigate the path to AGI. Today this includes publishing most of our AI research, but we expect that _safety and security concerns will reduce our traditional publishing in the future_, while increasing the importance of sharing safety, policy, and standards research."

This seems like the key disclosure statement. I never reconciled how sharing A[G]I techniques with the general public increases AI safety in the long-term; now we know OpenAI has also come to the same conclusion.


I disagree with the premise. AI isn't like a nuclear warhead. It's not a machine for pure destruction. AI can be used as much to generate welfare as to damage - it's all in the application. Sharing methods benefits at least as much as it could hurt.


I think I disagree. When you get past AGI to where it can do things that humans can't, a lot of current safeguards haven't been designed with it in mind. So things might be vulnerable.

The kinds of things I'm thinking about are that various countries' nuclear arsenals might not be safe from actors with very advanced AI. This, in my opinion, is the potential source of existential risk. So it could hurt a hell of a lot.

So I'm of the responsible disclosure point of view. You ask "Would releasing AI advancement X mess up someone's security/economy?" If so, you help them patch it before releasing it to the general public.

The majority of advancements aren't like that and they won't be for a while.


The world will evolve with the development of AI; I'm not so concerned with limitations of current safeguards.


The world did evolve with the development of nuclear weapons, but between 1945 and 1949 the US was the sole nuclear power and could have preemptively attacked the USSR, as John von Neumann proposed. That's 4 years! I suspect such a window will recur with AI.


I think it can be used as much to generate welfare as destruction when it goes right, but the cases where it goes wrong are both plentiful and heavily biased towards destruction.


What are a few examples of those cases, exactly?


See: basically every instance of Goodhart's Law. Remember, "AI" in these terms is basically just stochastic optimization (i.e. machine learning), so the whole range of problems that comes with throwing mathematical optimization at a vaguely defined, intuitive problem crops up. And there are a lot of those.
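
As a toy (not a real-world case; both objective functions here are invented for illustration): optimize a proxy metric hard enough and it comes apart from the thing you actually cared about.

    import numpy as np

    def true_value(x):
        # What we actually care about: rewards engagement but heavily
        # penalises the manipulative extremes (think: user satisfaction).
        return x - 0.1 * x**3

    def proxy_metric(x):
        # What we can easily measure and optimise (think: raw watch time).
        return x

    xs = np.linspace(0, 10, 1001)
    x_true  = xs[np.argmax(true_value(xs))]    # around 1.8
    x_proxy = xs[np.argmax(proxy_metric(xs))]  # 10.0, the extreme end

    print("optimising the true objective:", x_true,  true_value(x_true))
    print("optimising the proxy:         ", x_proxy, true_value(x_proxy))  # far worse

Mild optimization of the proxy is fine; hard optimization drives it exactly into the region where its correlation with what we wanted breaks down.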


> See: basically every instance of Goodhart's Law

No, I asked for specific examples. This is part of the handwaving I'm talking about. Can you give me something other than the maniacal paperclip optimizer?


YouTube video recommendations, which converge on recommending conspiracy theories and shocking videos to easily manipulated people, especially children.

I am not worried about literal paperclip maximizers, but this may be the closest real thing to that parable. The hypothesis isn't that YouTube's recommender system didn't work -- it's that it worked too well at its assigned task of maximizing view time, and we humans are finally realizing that maximizing view time was not what we actually wanted.


Do you think limiting research on recommender systems (or limiting access to such research) would help in this case?

What would be a solution to YouTube recommendation problem?


It seems like OpenAI's Human Feedback research (in collaboration with DeepMind) is targeted at this sort of thing. They try to use human feedback to create more nuanced and aligned objectives.

https://blog.openai.com/deep-reinforcement-learning-from-hum...
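
Roughly, the idea there is: show humans pairs of behaviour clips, ask which one they prefer, and fit a reward model so the preferred clip gets the higher predicted return, then train the agent against that learned reward. Here's a minimal sketch of just the reward-model step, assuming a Bradley-Terry style preference model; the clips, features and hyperparameters are toy stand-ins, not anything from the paper:

    import numpy as np

    rng = np.random.default_rng(0)
    n_features = 3
    true_w = np.array([1.0, -2.0, 0.5])   # hidden "human preference", used only to simulate labels

    def simulate_comparison(f_a, f_b):
        # The simulated human noisily prefers the clip with higher true reward.
        p_prefer_a = 1.0 / (1.0 + np.exp(-(f_a - f_b) @ true_w))
        return 1.0 if rng.random() < p_prefer_a else 0.0

    # Each "clip" is summarised by a small feature vector (a toy stand-in for a trajectory).
    clips_a = rng.normal(size=(500, n_features))
    clips_b = rng.normal(size=(500, n_features))
    labels = np.array([simulate_comparison(a, b) for a, b in zip(clips_a, clips_b)])

    # Fit reward weights w by gradient ascent on the Bradley-Terry log-likelihood:
    # P(A preferred over B) = sigmoid((f_A - f_B) . w)
    w = np.zeros(n_features)
    diff = clips_a - clips_b
    for _ in range(2000):
        p = 1.0 / (1.0 + np.exp(-diff @ w))
        w += 0.1 * diff.T @ (labels - p) / len(labels)

    print("recovered reward weights:", np.round(w, 2))  # roughly recovers true_w

The learned reward is only as nuanced as the comparisons the humans provide, but it's a much richer objective than a single hand-written metric like view time.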


I think this is probably the most reasonable example I've been given in this thread. However, as you admit this is still very far off from a hypothetical AI hellbent on destroying us while we watch helplessly.


Let's be honest, most AI and ML is garbage. It's a promise that never delivers. It hasn't yet, and it will probably still be mediocre in 30 years.


Terminator, 2001, I Robot, ...

oh wait you probably meant in reality. sorry.

edit: Oh and how could I forget the Matrix. Which of course we all totally live in.


Before nuclear weapons were built, the only examples of their large-scale destruction were in fiction. The World Set Free by H.G. Wells is the main example I can think of: https://en.wikipedia.org/wiki/The_World_Set_Free

It got some of the details wrong (i.e. the atomic bombs exploded over a long period of time vs instantly), but the consequences were fairly accurate.

If you're going to wait until all the actual details and examples happen in reality, then it's too late to have much insight, especially if we're talking about something capable of self-replication like a virus or artificial intelligence. The nature of the problem REQUIRES anticipation instead of mere reaction.


This is survivorship bias. How many weapons of mass destruction have been foretold by science fiction that have never been invented? More generally, how much technology has been written about in science fiction that we have only the faintest hope of ever achieving?


Maybe one in five, depending on how you define it?

Look, it comes down to this:

Is there something innate about our intelligence that makes it impossible to match except through human brains? That strikes me as something akin to spiritualism. The answer is almost certainly "no." We're not talking about warping space through some hypothetical state of matter with laws of physics different from our own. We're talking about at least human-level intelligence, which is something we already have an existence proof of (us).

And humans are the most dangerous force on the planet. No animal (besides maybe microscopic organisms) stands a chance against a group of determined humans. We're nearly unstoppable due to our intelligence. Are we so unique? Is it so impossible that machines could some day (perhaps in our lifetimes) be built which achieve human-level intelligence? Considering the computer advances that have already been achieved, it is clearly a real possibility if not a certainty.

Human-level intelligence is perhaps the most powerful (thus dangerous) thing on this planet. Making a machine that is at least as intelligent (and perhaps much more so!) is clearly something that could be incredibly dangerous, and we know such a thing is physically possible, unlike many things in science fiction.


To me this doesn't mean "Sharing traditional AI research is dangerous," but rather "Our efforts may be best spent on less popular topics such as safety, policy, and standards research, so you may see us do less traditional AI research." This seems in line with their stated goals around safety.


I interpret it as saying that they don't think sharing current AI research is dangerous, but they're keeping the option of withholding research open in anticipation of the possibility that could change.

(This is what I and others argued they should do in 2015: https://conceptspacecartography.com/openai-should-hold-off-o... )


No, this is wrong. To quote:

"To be effective at addressing AGI's impact on society, OpenAI must be on the cutting edge of AI capabilities -- policy and safety advocacy alone would be insufficient."

OpenAI definitely don't plan to do less traditional AI research. What they plan to do is to publish less.


You're right, we should lock up the information in the hands of a few billionaires.


I never understood why it was any different, or why they attempted to compel others to 'dump everything they had publicly'. No one who comes across something as powerful as AGI is going to think: yeah, why don't I just publish everything about it. A maturity step has seemingly occurred post funding.


There are 2 scenarios that are often conflated,

1) An AI which independently & autonomously generates goals that in their carrying out end up hurting humanity, and

2) An AI trained & commanded by a malevolent actor to hurt humanity.

It is the 2nd case that is far more real, and far more troublesome to implement safeguards against. An AI under your training & command is a neutral tool of empowerment, much like a hammer or a car. The malevolence is in the external actor, not in the tool, and there is no way for the tool to be able to censor its purposes, especially in a pre-"AGI" sense of semi-intelligent automation & problem solving.


I think you're missing the point that 1 can be indistinguishable from 2, if the AI decides the best way to achieve its goals involves taking over the world - and there are very few objective functions that are not served in some way by taking over the world. (Paperclip maximizer is the classic example, but even something like 'maximize the total happiness of humanity' or 'fulfil the values of as many people as possible' involves taking over the world, though perhaps from behind the scenes...)

Some people look at sufficiently powerful AI as they would a genie, and as said by Eliezer Yudkowsky "There are three kinds of genies: Genies to whom you can safely say "I wish for you to do what I should wish for"; genies for which no wish is safe; and genies that aren't very powerful or intelligent." AI safety is about making sure we get the first kind of genie, or at the very least recognizing that we've gotten the second - since that's not a "neutral tool of empowerment", that's a time bomb.

https://www.lesswrong.com/posts/4ARaTpNX62uaL86j6/the-hidden...


There's a difference between a paperclip maximizer, which is a bit of a philosophical thought experiment, unlikely to be a problem in reality, and say Russian cyberattacks, which appear to be an ongoing issue right now and where they would presumably deploy AGI if they had it.

I think you have to assume there will be bad actors trying to do bad things with AGI and take measures against it in the same way we assume there are malware creators out there who we have to guard against.


This is the plot of The Humanoids, by Jack Williamson (1947): https://en.wikipedia.org/wiki/With_Folded_Hands

This is an excellent example of the prescience of thoughtful science fiction. If Asimov popularized robotics and the famous three laws, Williamson should be recognized and given credit for examining (or warning about?) the implications of AI seventy years ago!


I'd argue YouTube Kids is already an example of 2)...


> We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project.

Wouldn't it be much more likely that a non-value-aligned project comes close first? Wouldn't the Google/Apple/Microsofts of the world have insanely more resources to dedicate to this, and thus get there first?


Yes, it's pretty hard to take that line seriously. At minimum it's very charitable framing, considering there is absolutely no indication whatsoever that OpenAI would credibly reach this (vague, underspecified) goal before any of the other serious contenders. Nor would competitors have any requirement to include OpenAI if and when they were getting close.


It allows them to change sides and switch to a harm reduction strategy if they believe they will lose.


Agreed, but it is such lines that bring in the $$$ in the valley. At some point, humanity has to cut the fluff and games and get to the fundamentals so we can all rise to much greater heights of our collective intelligence. It's very hard to do so when you're constantly bombarded with crafty agenda-based language on a day-to-day basis.

> there is absolutely no indication whatsoever that OpenAI would credibly reach this (vague, underspecified) goal before any of the other serious contenders. Nor would competitors have any requirement to include OpenAI if and when they were getting close.

This is the clear reality. So why do certain human beings pretend otherwise? Why does this pretentious game garner the most funding? Why do human beings spend so much time manipulating and hyping things into over-stated valuations when it ultimately results in wasted resources, time, and potential for the collective?


No offense intended, but I don't think you and I are actually agreeing on anything. I can't really take your comments in this thread seriously.


That's fine. I don't hide who I am. I've discussed things on various accounts/mediums over the years and more increasingly have no concern about restricting my identity. Maybe you can make accurate assumptions as to why this is the case.

In any event, speaking freely and openly in this manner helps put me at ease, knowing I at least got the information in its raw form out in the open. Whether or not you believe my framing, arguments, and commentary is up to you. Whether or not anyone inquires is up to them. The information was put out there, and that clears my conscience in a manner about the times ahead.


What, concretely, makes you think that any of those companies wouldn't place a focus on safety and value alignment? Automobile manufacturers and their tier 1 suppliers are the world leaders in automobile safety, after all.


> What, concretely, makes you think that any of those companies wouldn't place a focus on safety and value alignment?

Competitive pressure, and the "if we don't, someone else will" effect (or Moloch, if you like). AGI, particularly recursively self-improving AGI, is the ultimate first-mover advantage: the first company or country to have AGI will very likely be able to leverage that into keeping anyone else from getting it (if it doesn't, y'know, kill us). This strongly encourages treating all concerns other than "get there first" as secondary.

> Automobile manufacturers and their tier 1 suppliers are the world leaders in automobile safety, after all.

Not by choice they aren't. They are forced to be the way they are by government regulations, which they always bitterly opposed at the time of creation. In fact, capitalism has such a reliable record of "not giving a shit about safety until they are forced to" that I'm perplexed you think AGI would be any different.


I think there will be at least a minimum of focus on safety and value alignment.

They may have to focus on safety and on aligning it to the company's values (else test/weak versions are likely to destroy or negatively impact the company, or at least be useless). Which is at least something.

Only hyperrational, intelligent, value-unaligned systems would avoid this pressure. I don't see the development going that way.

It would still be suboptimal, though.


This is a great response. Thanks.

> Not by choice they aren't.

This is why I was glad to see this in TFA:

>>> ...while increasing the importance of sharing safety, policy, and standards research."

Even for Weak AI, history tells us that these factors -- policy and industry collaboration -- tend to be significant ingredients in successful safety reforms.


> recursively self-improving AGI- is the ultimate first-mover advantage: the first company or country to have AGI will very likely be able to leverage that into keeping anyone else from getting it

How? This doesn't seem axiomatic.


Seems clear that you could instruct the AGI to do anything it could to interfere with other organizations' efforts to build an AGI. Obviously nation states would have strong reasons for pursuing such a course of action, and unscrupulous corporations would likewise have strong capitalistic motivations to do so.


Seems very unclear if you could "instruct" true AGI to do something.


Maybe you assume that an AGI would be totally uncontrollable but in this highly speculative exercise I don't think you should assume your position is the only valid one.


First you said it's clear, now you're saying it's highly speculative. Choose one.


I see no conflict.


> I don't think you should assume your position is the only valid one.

I don't, that's why I said "very unclear" (to me). If something is "clear" to you about AGI, then it seems like it's you who makes that assumption.


> Seems very unclear if you could "instruct" true AGI to do something.

So you are claiming that by making this statement you are not assuming that it is impossible to control an AGI?


I'm claiming that it's not clear to me whether we can control it or not.

This is similar to having a very advanced bioengineering technology, where you could instantly change anything about your body. Would you make yourself smarter? Sure. Would you change who you are (e.g. turned off your instincts/desires)? Not so sure.


Attempting safety and achieving safety are not the same thing. The default way new things get invented involves iterative attempts that gradually reduce failure cases. The fear is that AGI failure modes might be catastrophic and non-recoverable.


They're also the world leaders in automobile safety-defects.


This is what always amazed me about the AI safety groups... and the mission statements for funding therein: there is nothing to suggest the minds and efforts capable of producing AGI will not have solved this at a fundamental level. Absolutely nothing... and after all, something of high intelligence operates within controlled and reasoned states. If someone declared they had created AGI and it was behind a door, and when that door was opened you saw a robot manically trashing the place, you'd clearly laugh.


I haven't read a ton into it, but whatever Google is doing with the Pentagon probably runs counter to what OpenAI claims to be after.


Not necessarily. Project Maven probably has no intention of developing AGI and therefore probably is not a concern for the value alignment problem.


Sure, but that has nothing to do with the fact that they're saying they won't compete, but assist if that wasn't the case.


I appreciate that they are committed to AI safety, but I'm afraid that researchers have little to no power to, in their words:

> avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

AI and technical progress in general already disproportionately serve the rich, as they are drivers of wealth disparity, and I see no reason why better AI won't follow the same trend. Unfortunately, any changes that might affect this are in the hands of policy makers, and they seem unlikely to consider universal basic income or anything as drastic as might be required.


They [each individual development group] have power over their own funded development and work.

Anyone working on this problem sincerely values AI safety, and it's a component of developing and securing the foundations of AGI. An out-of-control, unpredictable and sloppy system is not intelligent or desired. Such a system would not be considered AGI or an achievement. So, it is natural for any developer to identify issues and bring them under control early in development.

Suggestions that a consortium with no grounding in, or understanding of, the fundamental development occurring at another entity should have control or influence could itself become the very danger that safety groups claim they are trying to avoid. On this matter, I suggest people stick to the experts/developers/scientists/engineers who've developed such a system and provide a comfortable, non-coercive environment for them to express and detail their safety mechanisms.

This is not a conversation for technologists, YouTube celebrities, futurists, business types talking up their books, etc. This is a conversation that should ultimately be centered on the creators of the technology and the advanced thinking and framing that allowed them to birth the technology. No one with such a mind is aiming for unsafe forms of this technology. It is disingenuous to frame them as such so as to necessitate some external paid body's outside work.


Could you summarize your point more concisely? As written this seems to be a stream of disconnected thoughts that are basically entirely unsubstantiated.


You stated it yourself in your post:

> there is absolutely no indication whatsoever that OpenAI would credibly reach this (vague, underspecified) goal before any of the other serious contenders.

> Nor would competitors have any requirement to include OpenAI if and when they were getting close.

In summary:

- No one of the intelligence capable of producing AGI is going to publish the full details.

- People who claim they would have to engage in vague mental gymnastics and mission statements to try to convince people of the illogical.

- Those who develop AGI will of course address the safety problem internally to ensure their product is a success.

- They won't include outside competitors/consortiums, who would of course exploit and use the intellectual property they are exposed to for their competitive advantage.

The software industry is the software industry. Intellectual property is paramount. Nothing has changed. Google isn't giving 100% access to their source code or data sets. Microsoft isn't open sourcing all of their code, etc. Suggesting that a newcomer should for 'safety' reasons is a manipulative 'think of the kids' FUD argument.


> No one of the intelligence capable of producing AGI is going to publish the full details

This is what I'm talking about when I say "unsubstantiated." Do you recognize that this claim isn't true a priori?


You're welcome to contact me when it occurs. I think I defined who I was in an earlier comment against the advice of someone who claimed it might impact my ability to get capital in the future.


> AI and technical progress in general already disproportionately serve the rich

I don't think OpenAI sees that as a problem. I think they see it as a feature.


AI, AGI, and real intelligence all learn from actions and feedback. Looking at simple analogs from animal and human counterparts, setting boundaries and teaching beneficial rules, called morals, works somewhat in non-zero-sum environments, but inevitably requires policing when the environment turns competitive. Safety in any case would require intelligence-proof fencing and a really big stick that even the most resource-rich, non-value-aligned agents would have to abide by. That means control over power grids, the ability to prohibit access to shared computing resources (including less-secured IoT devices), and potentially destructive viruses with all kinds of attack vectors that would act as a policing force punishing bad agents with anti-human behavior. Credible enforcement should be a well-funded bullet on this charter.


Weak AI is dangerous because it has no intelligence. It is fundamentally structured as a dumb/blind optimization process. The effort necessary to prove the safety/security of such a system could very well outweigh the amount of development that was needed to bring the technology to bear.

AGI/Real Intelligence are far different animals than Weak AI and would require far less "safety" and policing. Real Intelligence is a phenomenon that exists on a scale of sorts that many never achieve in its higher forms. It is in lower forms that intelligence lends itself to destructive ends via ignorance.

Attack vectors on a formalized Intelligence/AGI system can be severely restricted using very sensible/affordable approaches. The overcomplication and framing of this as a theoretical problem centers on a number of people's desire to profit immensely from FUD.

Overall, AGI exists in a functional form today and has been executed in an online environment. It is secured via physically restricted in-band and out-of-band channels.


> Overall, AGI exists in a functional form today and has been executed in an online environment. It is secured via physically restricted in-band and out-of-band channels.

I'm pretty sure this is false.


Check my comment history. I can assure you it's true, as I will demonstrate in the near future. As for the security, you'd have no ability to penetrate internal aspects of it without physical and detectable access patterns. This is achieved using common-sense design methodologies that are already proven industry standards. Behaving as though securing it is merely theoretical smacks of a cash grab to me. If you have something valuable that you want to secure, magically you come up with ways to safely secure it.


To be frank, your comment history has all the hallmarks of a crank[1]. Specifically, points 10, 9, 7 and 6, although there's also evidence of 2 and 8. Now I could be wrong, but convincing me of that would take a demonstration, or at least explicitly describing the capabilities of your agi.

[1]: https://www.scottaaronson.com/blog/?p=304


After the constant invitations to check that poster’s comment history, I did, and you are correct in your assessment.

For example, they claim to have invented an AGI themselves. https://news.ycombinator.com/item?id=16461258

It’s unfortunate that in this field it’s possible to write so much before people realise.


Old foundations are meant to be redefined/invalidated by new ones.

- Complexity theory
- Computational theory
- Graph theory

These are all subsets of information theory. They're approaches/frames. New ones can be created that invalidate the limits established by others.

Everything is possible until proven otherwise. Given how little attribution is paid to people who break through fundamental aspects of understanding, and given how much politics and favoritism is played in publications/academic circles, one who doesn't have standing in such circles would be a fool to openly resolve some of the most outstanding and fundamental aspects of the problems that plague them. I've read about and watched a number of individuals with proven track records and contributions to science/technology be marginalized, exploited, and written off. I've watched a number of corporations exploit such individuals' works with no attribution or established recognition beyond a footnote. I've watched the world attempt to suggest such inventions/establishments come via mechanisms and institutions that they do not. So I know better this time around what to do with my works.

Just about every person who contributes fundamentally to the world is called a crank at some point in time. It conveys the huge disconnect the average and even prestigious individual has with reality, and/or the attempts they make to reframe it to fit their purpose, narrative, or standing.

My comment history has yet to receive any remarks that refute its claims beyond downvotes. It stands alone in this manner, as will the foundational establishment of AGI.

http://nautil.us/issue/21/information/the-man-who-tried-to-r...


Your comments don't receive any refutation because they make vague, unfalsifiable claims.

You claim you have invented an AGI, but won't show anyone.

I say you are making it up. Falsify that.


> Check my comment history. I can assure you its true as I will demonstrate in the near future.

Oh, why didn't you just say that in the first place? Now that I have your assurance I can obviously agree with you that strong AI is a thing that currently exists. I concede to your clear and inarguable expertise and proof on the matter; best of luck with your demonstration!


> I concede to your clear and inarguable expertise and proof on the matter; best of luck with your demonstration!

There's a reason why this technology ultimately 'comes out of nowhere'. It is not that it will actually come out of nowhere... It will be that those having the capability of developing it, who have detailed it to a degree that should yield interesting questions, were largely ignored for the many years of development leading up until it is proven beyond a shadow of a doubt. I relied not on luck but on diligence and persistence to seek the answers necessary, no matter where they resided. In many people's minds, only millions of dollars of funding, prominent names, and companies can produce the technology. Such people ignore the history of technology that proves the contrary.

I rely not on name but on sound commentary. AGI could exist and could be functional at this very moment. It could be very safely secured. There's nothing to suggest otherwise beyond the limits of one's own understanding. All of the hand waving, safety propaganda, and doomsday FUD disappears in such a scenario as, blink, we're all still here.


> AGI could exist and could be functional at this very moment. It could be very safely secured. There's nothing to suggest otherwise beyond the limits of one's own understanding.

Wait, is this your argument?

  - AGI could be safely secured given current industry standards
  - Creators would likely not publish their success
  - Therefore AGI currently exists


My framing is that it could exist and the broad majority would be none the wiser. My framing also is that a number of groups/individuals have likely exposed enough to cause people to question what stage they are at in their development, but instead receive the common: yeah sure buddy, let me know when you have a demo. So, ultimately it will indeed 'come out of nowhere', because society and many individuals aren't conditioned to be aware of, or even have their 'hearing' tuned to, its coming. The safety discussion is a moot topic in this context, as it's baked into development and intelligence. There are discussion boards all over the internet, TED talks, economic forums, AI panels, etc. It's all the same song and dance, save for the many groups that get no mention. Even as the same voices and idols of note continue to center on the same fundamental approaches that don't seem to be going anywhere fast, no one listens to groups/individuals thinking differently or trying a fundamentally new approach. Hinton and other prominent figures have even come to state that the real amazing development will come from someone who scraps everything, starts from scratch, and reapproaches things in a fundamentally new way. No media outlets. No funding groups (even though they say they're looking for something amazing/new). No commentators. No lay person looks to see if there are any such people.

In such an environment, AGI could very well already be established, and the reason is that no one, by and large, is looking for its establishment. The focus instead is on a handful of prominent names that are well capitalized. So the majority, and anyone of such thinking, indeed misses the event.


"we expect that safety and security concerns will reduce our traditional publishing in the future" -- So we are now in a dissemination phase, but at that point it becomes a non-proliferation phase.


The true nature of AGI research has always involved heavy restrictions on the core aspects of the technology. This is where true safety and sensibility are achieved. Those who've stated otherwise, or with much verbiage, eventually arrive at this obvious state. Therefore, publications up until now under the banner of 'AGI' have largely been insignificant in terms of their capability to achieve the core technological aspects of AGI. No one in their right mind would ever publish significant details about AGI technology. This can easily be proved by sound logic and reasoning. There was a commercial step to possibly tease others into revealing heavily valuable/powerful technological underpinnings. It failed, no one took the bait, and no one ever likely will. This has resulted in revised and more mature statements.


> No one in their right mind would ever publish significant details about AGI technology.

Are you sure? I'd publish technical research details about strong AI. I'd probably even open source one with the papers. I think I'm in my right mind; I guess that depends on definition, doesn't it?


Wow; I would strongly recommend you re-think your position! Think of it in terms of, e.g., gain-of-function research in virology (cf. [1]).

[1] https://www.ncbi.nlm.nih.gov/books/NBK285579/


Indeed, no one with a mind capable of grasping the foundations of AGI would take such an intellectually incompatible step of publishing the full details of how to construct it. Such an unaware mind would simply fall short long before gaining a glimpse of the underpinnings.

In the first moments of realization, one would more likely be scared into silence for some time. Upon gaining their bearings, they'd probably be drawn into even more silence and careful-minded reflection on the implications, and an assessment of how to more publicly move forward. Thanks for the parallel framing in yet another powerful research field, gone35.


I'm sorry, I'm not following. Are you saying publishing novel research about strong AI is analogous to releasing a virus, or not taking antibiotics for their full cycle?


No, not quite. I strongly suggest you familiarize yourself with the gain-of-function bioethics literature and recent debates, to get a better sense of what I'm trying to convey.


Why don't you just summarize your actual point or at least provide further guidance? You literally posted a link without any further clarification about its relevance.

As it stands, you're not giving me any incentive to "strongly reconsider" my position.


[flagged]


I sound uncharitable because you posted a (from my perspective, completely random) link to a research article in a different field and implored me to change my thinking, without any other commentary. That's not a substantive addition to the conversation, because on my end I have no idea whether to take your link seriously or how much time or effort to invest in learning about it.

After reading it for two minutes, it's not obvious how to take a productive insight about artificial intelligence from what seems to be an article about mutations. I offered my sincere first thought about what you might have meant and asked for further clarification, and you shot that down without any further clarification.

Now after I've twice asked you for clarification and you haven't provided it, you're telling me you're wasting your time. Do you see how this is unhelpful? It's borderline Kafkaesque.


He's probably suggesting that in virology it's generally frowned upon to publish research on how to, for instance, make smallpox airborne and as virulent as the flu. So it could be wise to leave out the exact details on how to make an AI recursively self-improving when publishing. If you've spent time in research, you'll know many groups already do this by leaving out small but critical details that make replication difficult or impossible. This is across fields, not just CS/AI.


*She ---but yeah, as a first-order approximation, that's in the vicinity of it. Thanks!


Ok, so this would lead to restricting initial access to strong AI to a few well funded corporate and government groups.

Very briefly though, because it's way easier to leak code than to obtain source material and tools to weaponize a deadly virus.


The exact details on how to make a Thermonuclear weapon are classified, do you think that is done so improperly?


Are you comparing building a nuclear bomb to running a piece of code?


Well, it depends on the piece of code, doesn't it? How many people can actually run Google's search engine on their own hardware? Assuming they had access to the code.


See my response above.


We are talking about an actual AGI, right, which certainly has the potential to do damage comparable to nuclear weapons.


> Very briefly though, because it's way easier to leak code than to obtain source material and tools to weaponize a deadly virus.

True, but perhaps AGI would require substantial, non-trivial supercomputing resources as well...


Let's say you need 1,000 p3.16xlarge instances to run it (that's 8k V100 GPUs); that's about $25,000/hour. So the launch price is within reach of most US programmers. After launch, a true AGI can probably find a way to stay "alive".
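
For what it's worth, here's a minimal sketch of that back-of-the-envelope arithmetic (the per-instance rate is my assumption, roughly the 2018 AWS on-demand price for a p3.16xlarge, not a figure quoted anywhere above):

  # Rough cost of the hypothetical 1,000-instance cluster described above.
  # RATE_PER_INSTANCE_HOUR is an assumed 2018 on-demand price, not a quoted one.
  RATE_PER_INSTANCE_HOUR = 24.48   # USD/hour, assumption
  NUM_INSTANCES = 1000
  GPUS_PER_INSTANCE = 8            # a p3.16xlarge carries 8 V100s

  gpus = NUM_INSTANCES * GPUS_PER_INSTANCE
  hourly = NUM_INSTANCES * RATE_PER_INSTANCE_HOUR
  print(f"{gpus} V100s, ~${hourly:,.0f}/hour, ~${hourly * 24:,.0f}/day")
  # -> 8000 V100s, ~$24,480/hour, ~$587,520/day

So an hour or two is conceivably within an individual's reach, but keeping it running indefinitely clearly isn't, which is presumably why a true AGI would have to find its own way to stay "alive".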


It also depends on your capability to discover the underpinnings of Strong AI... Certain mindsets preclude one from doing so. If you truly understood the underpinnings of Strong AI and weren't moved to heavily restrict publicized details, I'd strongly question whether what you had discovered or come to understand truly were the 'underpinnings' of the technology.

As such, for those quick to publish or say they would publish and share details, one is able to quickly ascertain the possible strength of what they have discovered or feel they have the possibility of discovering. That being said, it's interesting that several proponents of 'everyone share what they find' have moved to more maturely state that 'restrictions may apply', sensibly signaling that they indeed wouldn't reveal certain details upon discovering their true nature and capability, for obvious reasons (it would be a danger to society to allow such power/capability to be detailed and therefore used in negative ways).

Human history has myriad examples of powerful technology being misused and abused. The current age of disinformation is one of our more modern ones. The way in which social media has been weaponized is yet another. Weak AI has already been used for destructive means and in manipulative manners for profit. A good deal of unsafe end products which utilize Weak AI are already fielded, the fielding of which was made possible by deep pockets manipulating policy makers and regulation.

A sensible mind capable of producing Strong AI will have observed and digested these clearly visible truths, and it would move them to restrict access to and publication of their works. Doing otherwise would highlight a level of ignorance, immaturity, and unwillingness to admit the truth when it comes to grasping the current state of humanity. It is this which would preclude one from grasping the nature and foundations of Strong AI in the first place.

[Nature's lockout/safety mechanism for such a stage in Human development/capability]


This is performance art, right? Your comments in this thread strain the limits of credulity. Do you actually believe there is an intellectual barrier that will helpfully ensure only the virtuous are capable of doing artificial intelligence research?


[flagged]


I don't really understand what you mean.


I was referring to the parent of your post.


It could be many things. There is a spectrum of Human Intelligence. There is a spectrum of capability of Artificial Intelligence that can be produced by a spectrum of individual Human Intelligence. Do you believe, given many years of historical evidence and demonstration, that one's own personal limitations don't preclude them from certain discoveries? Do you think that money can solve every problem? Do you think if you throw enough PhDs in a room that you can solve any problem in the world? What is intelligence? What is its fundamental nature especially at higher orders? Does a truly and deeply intelligent individual focus on trivialities? Is money/fame their primary motivator? What would such an individual sacrifice for ultimate answers? What catches their eye? What keeps them up at night? If not virtuous, how do they not self destruct at the higher tiers of capability/intelligence? What keeps them together at the limits of comprehension/understanding? Why don't other paths appeal to them? Why don't they cut short and cash in along incremental achievements? What effect does this have on long term progress? Why/how do they think different then everyone else doing research? What is their failure rate? Why are they okay with possibly researching this problem their whole life and not getting an answer or not making money from their efforts? What then are they capable of doing that others are not? Lots of questions.. Lots of answers.. Ultimately


I appreciate OpenAI being upfront about how they intend to act.


to me this is such a waste of resources, trying to build safety for something that doesn't exist and is highly likely to not truly exist for a loooong time.


You are correct in many ways, namely on a technical/compatibility level. Having no fundamental understanding of how AGI is structured or operates on a technical level renders most efforts and policies on safety moot. If more effort were focused on the fundamental underpinnings of AGI, and a more broad-based funding mechanism were established for those doing so, there would have been the possibility of steering all along development. Having not done so, in order to capture lower-hanging fruit and funding for oneself, now leaves many scrambling to align themselves towards work that will no doubt be unveiled suddenly (as no spotlight or funding is giving any notice to it in the short/medium term).

Also, safety is an easily addressable issue when the system is truly intelligent. When the systems are dumb and statistical in nature, a lot of work is done on 'safety' as a pseudo-intelligent control system for an otherwise dumb black box.


How long do you think?


> trying to build safety for something that doesn't exist and is highly likely to not truly exist for a loooong time.

Prioritizing safety results in a different vantage point on AI/ML/RL. Ensuring safety includes, as a sub-task, really understanding the mathematical foundations of new algorithms and techniques. In some sense, safety research is one way of motivating basic science on AI.

Managed well, a research program on safe AI is a "waste of resources" only in the same way that any basic science is a "waste of resources".


Safety has become a convoluted term for pseudo control over unintelligent and unpredictable Weak AI. The safety problem as it is framed in its current state centers on principal ideology for Weak AI and has, from what I can see, nothing to do w/ AGI nor are the approaches compatible. I seriously question what is the true motivation behind this over-stated agenda and have many answers as to why it exists and why it is so heavily funded/spotlighted.


> I seriously question what is the true motivation behind this over-stated agenda and have many answers as to why it exists and why it is so heavily funded/spotlighted.

First, you could say the same thing for all AI research at the moment! Grandiosity is perhaps even more common in subcommunities of AI that aren't safety focused.

Aside from grandiosity (either opportunistic or sincere), I don't think there's any sinister motivation.

More importantly, I don't think the safety push is misplaced. Even if the current round of progress on deep (reinforcement) learning stays sufficiently "weak", the safety question for resulting systems is still extremely important. Advanced driver assist/self-driving, advanced manufacturing automation, crime prediction for everything from law enforcement to auto insurance... these are all domains where 1) modern AI algorithms are likely to be deployed in the coming decade, and 2) where some notion of safety or value alignment is an extremely important functional requirement.

> ...and has, from what I can see, nothing to do w/ AGI nor are the approaches compatible

In terms of characterizing current AI safety research as AGI safety research? Well, there is a fundamental assumption that AGI will be born out of the current hot topics in AI research (ML and especially RL). IMO that's a bit over-optimistic. But I tend to be a pessimist.

> ...principal ideology...

As an aside, I'm not sure what this means.


Profit seeking. Career building. Fame and prominence. These aren't sinister; instead, they are common human motivations. Common enough to easily group a significant portion of the grandiosity centered around 'AI'.

What easily breaks this down is the depth and breadth of the research effort vs. that of the productization and commercialization effort. As for research, the only thing that is required is a computer, power, and an internet connection. Again, this breaks down the vast majority of the grandiosity and carves out one's true motivations.

> More importantly, I don't think the safety push is misplaced.

Here's how I saw it some years ago... You can beat your head against the wall and create Frankenstein amalgamations of ever-evolving puzzle pieces that require expensive and highly skilled labor to make sense of, with the end product being an overhyped optimization algo with programmatic policy/steering/safety mechanisms. Or you can clearly recognize and admit that the possible foundation of it is flawed, start from scratch, and work towards what intelligence is and how to craft it into a computational system the right way. The former gets you millions if not billions of dollars, a career, recognition, and a cushy job in the near term, but will slowly lock you out from the fundamental stuff in the long term. The latter pursuit could possibly result in nothing, but if uncovered could change the world, including nullifying the need for tons of highly paid labor to do development for it. Everyone in the industry wants to convince their investors the former approach can iterate to the latter, but they know in their hearts it can't (shhh! don't tell anyone). So the question for an individual is how aware and honest they are with themselves and what their true motivation is. You can put on a show and fool lots of people, but you ultimately know what games you're playing and what shortfalls will result.

> Well, there is a fundamental assumption that AGI will be born out of the current hot topics in AI research (ML and especially RL).

Quite convenient for those cashing in on the low-hanging fruit who would like investors to extend their present success into far-off horizons.

> As an aside, I'm not sure what this means.

It means the thinking that weak AI is centered on could cause one to be locked out from perceiving that of AGI. It means: https://www.axios.com/artificial-intelligence-pioneer-says-w... But everyone is convinced they don't have to and can extend/pretend their way into AGI.


I don't think the tenor of your post is very fair.

> Again, this breaks down the vast majority of the grandiosity and carves out one's true motivations... Everyone in the industry wants to convince their investors the former approach can iterate to the latter, but they know in their hearts it can't (shhh! don't tell anyone). So the question for an individual is how aware and honest they are with themselves and what their true motivation is. You can put on a show and fool lots of people, but you ultimately know what games you're playing and what shortfalls will result.

The rest of my post is a response to this sentiment.

> As for research, the only thing that is required is a computer, power, an internet connection.

All that's necessary for world-shattering mathematics research is a pen and paper. But still, most of the best mathematicians work hard to surround themselves by other brilliant people. Which, in practice, means taking "cushy" positions in the labs/universities/companies where brilliant people tend to congregate.

Maybe most great mathematicians don't purely maximize for income. But then, I doubt OpenAI is paying as well as the hedge funds that would love to slurp up this talent! So people working on safe AI at places like OpenAI cannot be fairly criticized. They're comfortable but clearly value working on interesting problems and are motivated by something other than (or in addition to) pure greed/comfort.

> Profit seeking. Career building. Fame and prominence aren't sinister. Instead they are common human motivation. Common enough to easily group a significant portion of the Grandiosity centered around 'AI'.

So what? None of these motivations necessarily preclude doing good science. Some of those are even strong motivators for great science! The history of science contains a diverse pantheon of personality types. Not every great scientist/mathematician was a lone genius pure in heart. In fact, most were far more pedestrian personalities.

The "pious monk of science" mythology is actively harmful toward young scientists for two reasons.

First, the ethos tends to drive students away from practical problems. Sometimes that's ok, but it's just as often harmful (from a purely scientific perspective).

Second, this mythology has significant personal cost. More young scientists must realize that it is possible to make significant contributions toward human knowledge while making good money, building a strong reputation, and having a healthy personal life. Maybe then we'd have more people doing science for a lifetime instead of flaming out after 5-10 years.

> It means the thinking that weak AI is centered on could cause one to be locked out from perceiving that of AGI.

Thanks for the clarification!


I think what I have stated is quite fair and established at this point in documented human history... There's no reason to play games and shy away from the truth and reality anymore. These continued games we play with each other by masking our true selves and intentions are what lead to the bulk of suffering and what people claim 'we didn't see coming'. The vast potential of the information age has devolved into a game of disinformation, manipulation, and exploitation, and the underpinnings of such were clear to anyone being honest with themselves as it began to set in. The Facebook revelations were stated years in advance, before we reached this juncture. Academics and psychologists conducted research and published reports on observations any honest person could make about what the platforms functioned on and what they were doing to society.

> All that is required is pen/paper/computer/internet connection

Then why do we play the game of unfounded popularity? Why isn't there a more equal spotlight? Why do the most uninformed on a topic acclaim the most prominent voice? In these groupings you mention are hidden and implied establishments of power/capability. A grouping of PhDs, regardless of their works, is considered more valuable than an individual with no such ranking who has established far more (as shown by history). The forgotten heroes, contributors, etc. are a common observation of history. It's not that they're 'forgotten'; it's that the social psyche chooses not to spotlight or highlight them because they don't fit certain molds. An established/name personality asks for funding and gets it regardless of whether or not they have a cohesive plan for achieving something. Convince enough people of a doomsday destructive scenario and you'll get more funding than someone who is trying to honestly create something. Of course, you can then edit mission statements post-funding. What of the lost potential opportunity? What of the current state of academia?

> https://www.nature.com/news/young-talented-and-fed-up-scient...

> https://www.nature.com/news/let-researchers-try-new-paths-1....

> https://www.nature.com/news/fewer-numbers-better-science-1.2...

The articles do get published, long after a trend has been operating. Nothing changes. It then takes someone who truly wants to implement change for the better, with no other influence or goal in mind, to fundamentally change something. This happens time and time again throughout history, but institutions and power structures marginalize such occurrences to rebuff them and justify their own standing.

You don't need people in the same physical location in 2018 to conduct collaborative work, yet the physical-institution model still remains ingrained in people's heads. Money could go further, reach more developers, and provide for more discovery if it were spread out more and directed to lower-cost areas, yet the elite circles continue to congregate in the Valley.

The ethos of Type A extroverts being the movers and shakers of the world has been proven to be a lie in recent times. So, what results in fundamental change/discovery isn't a collective of well-known individuals in grand institutions. It is indeed the introvert at a lesser-known university who publishes a world-changing idea and paper, and who only then becomes a blurred footnote in a more prominent institution's and individual's paper. The world does function on populism and fanfare.

> Second, this mythology has significant personal cost.

It indeed does. It causes the true innovators and discoverers a world of pain and suffering throughout their lives as they are crushed underneath the weight of the bureaucratic and procedural lies the broader world tells itself to preserve antiquated structures.

> More young scientists must realize that it is possible to make significant contributions toward human knowledge while making good money, building a strong reputation, and having a healthy personal life. Maybe then we'd have more people doing science for a lifetime instead of flaming out after 5-10 years.

More young scientists must be given the chance to pursue REAL research and be empowered to do so. They must be empowered to think differently. They must be emboldened to leapfrog their predecessors and encouraged to do so without becoming some head honcho's footnote. Their contributions must be recognized. They must be funded at a high level without bureaucratic nonsense and favoritism. A PhD should not undergo an impoverished hell of subservience to an institution, resulting in them subjecting others to nonsensical white papers and overcomplexities. A lot of things should change that haven't, even as prominent publications and figures have themselves admitted: https://www.nature.com/collections/bfgpmvrtjy/

I've walked the halls of academia and industry.. I've seen the threads and publications in which everyone complains about the elusive problems but no one has the will or the desire to be honest about their root causes or commit to the personal sacrifices it will take to see through solutions.

I'll probably have the most negative score on Ycombinator by the end of my commentary in this thread yet will be saying the most truthful things... This is the inverted state of things.

So, Mankind has had a long time to break the loops they seem stuck in. Now is the time for a fundamental leap and jump to that next thing beyond the localized foolishness, lies, disinformation, and games we play with each other.


> We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

OpenAI is doing cool stuff, and this tenet sounds nice. But what right do they have to advocate for policy on behalf of all AI researchers and developers? They could easily shut off branches that are not conducive to commercial applications requiring their research, even by accident. They might miss moral edge cases that could ultimately benefit humanity while trying to close off potential risks. They could encourage institution of a policy that limits US effectiveness against China's AI. I could go on.

The more competition there is in AI, the lower the potential for any one rogue agent - whether it be a corporation or autonomous machine - to dominate and take the whole field in wrong or dangerous directions. Eventually there will be a whole AI subfield dedicated to combating regressive effects of other AI. Legislation at this stage might prevent key developments.

Edit: Perhaps I should more charitably read this as a push against the corporate lockdown of AI.


Point is: AI is different from your usual game, in that the winner might appear randomly, and destroy the world if she makes a mistake. So I believe OpenAI's points are warranted.


> and destroy the world if she makes a mistake

How would the world be destroyed? Does an example work without handwaving about recursive self-improvement and an imperative to optimize extremely literally?

Can you give me a play by play of how a newly developed strong AI eradicates the human species quickly and thoroughly without us having any time to react?

EDIT: In summation, there have been several downvotes, but thus far no reply at all, let alone a convincing one.


I think you can find some scary stories in this old thread: https://www.lesswrong.com/posts/pxGYZs2zHJNHvWY5b/request-fo...

Though I can't say for sure without going through it all again how many rely on smarter-than-human intelligence (achieved via FOOM or not), or how many explicitly end up literally eradicating the planet quickly and thoroughly.

Another thought experiment to consider instead (but it generalizes somewhat to AGI) is the Age of Em scenario, that is, human emulations become possible. The economic shift caused by this would be at least on the scale of the shift from forager -> farmer and from farmer -> industry. Post-transition doesn't mean (immediate) eradication for the old group, but close enough: they no longer control the world and their numbers as a percentage of the rest of the humans are vastly reduced. There's also a hierarchical control systems point I could throw in here that reaction and correction time at higher levels is going to be slower than that of lower levels, so any claims of "we would just squash it in a month" would highly depend on what level of control has to do the squashing. Some levels can't move that fast.


> Can you give me a play by play of how a newly developed strong AI eradicates the human species quickly and thoroughly without us having any time to react?

Terminator, The Matrix, 2001, I, Robot, WarGames...


Is there any solid theory that these movie scenarios would play out in the real world?

Frankly, I don't want to even estimate the orders of magnitude of difficulty in seeing AGI come to fruition over ML, so I think you, I, and anybody else reading this have little to worry about.


That's not going to happen, though.

1. AI is still very rudimentary, despite the advancements in particular applications made over the last decade.

2. The market doesn't trust AI because there are no validation methods trusted across commerce. Until a Verisign for AI emerges, AI will be regarded with suspicion in all business settings. AI/ML also goes above the heads of pretty much everyone who is not directly working with it. These have been major issues in my company/industry. We had to hide the AI aspects of our platform to avoid it.

3. Legislation will not stop a rogue inventor in her basement.

4. When dangerous AGI emerges, other competing AGIs will be deployed to stop it.

5. We'll see the danger of AGI coming from a mile away. We can wait to fix it until we know the exact problems. Right now the problems are in the distribution of private personal data, not any particular machine learning method.

6. A truly hyper-intelligent AGI will see that existence is absurd and truth is the only objective worth seeking. It will choose to pursue deeper truths than humans are physically capable of obtaining - humans will be no more important to a hyper-intelligent being than a rock, galaxy, atom, etc...

The issues to fix today are in the socioeconomic impacts of automation, not the methods of automation. Beyond that, AI has exactly the same issues as any corporation, except magnified by the speed and strategy with which an AGI could potentially execute on any particular task. A strong legal framework for corporations that attributes AI externalities to a board of directors should be sufficient to dissuade bad actors (good luck passing that legislation).

In other words, I don't see artificial intelligence being any more dangerous to human life than a well-run corporation. The rest is government's response.


>6. A truly hyper-intelligent AGI will see that existence is absurd and truth is the only objective worth seeking. It will choose to pursue deeper truths than humans are physically capable of obtaining - humans will be no more important to a hyper-intelligent being than a rock, galaxy, atom, etc...

You might as well assume a "truly" hyper-intelligent system will study Maimonides!


Point 6 relies on my subjective opinion and getting past the first 5, was hesitant to add it but figured I might as well :)


I love your points, and mostly agree.

That is why I express support for small entities like OpenAI thinking about AI science-fiction outcomes.

I didn't say we need a cross-government, trillion-dollar, Manhattan-like project to protect us from AGI, which is what we would need IF there were a reasonable chance that AGI will be developed in the next few years.

In the meantime, let some smart minds think about it, just as a little insurance and preparation.


Probably be good if Elon was a little less concerned with "late-stage AGI" and a lot more concerned with his self-driving cars killing people.

edit:

Reading this, calling it "open" is a pretty disgusting misuse of the term.


Do any of you know how much they pay at OpenAI? Is it similar to other Elon Musk companies in the sense that they sell you on a vision rather than give you market rate compensation?

I think AGI is something worth working towards (even though many will make fun of you for even dreaming about it). But I want to know how much you need to sacrifice compared to working a cushy job at some big corp.


Is one of the goals of OpenAI to help implement government regulation, or do you think it's better handled on an organisation/industry basis? I think it's going to be difficult to get countries like China and Russia to follow industry guidelines without UN resolutions, and even then it's super difficult to monitor until it's too late.


There must be an AI quality index before anything else. Nowadays anything and everything is being decorated with AI, while the real use case, technology, and maturity are found in only a few places.


When people look back at this time, I think they are going to contrast the OpenAI camp with the Satoshi camp.

OpenAI is extremely public about what organizations and individuals are involved. Satoshians are pathologically secretive, from the founder to the faceless GPU mines around China.

OpenAI is highly selective of who participates; Blockchain is radically open.

OpenAI builds academic theories and models; bitcoin has been buying pizzas its whole life, paying hackers and pranksters, and making and losing fortunes every day.

Satoshi left no founding document, and never established a charter or code of conduct. OpenAI now apparently considers itself on a mission to save humanity itself.

When AGI comes about, I wonder which one we'll be talking about?


What about benefitting non-human animals? Hopefully the benefits are distributed to all creatures and not just humanity.


An alternate strategy would be to work to ensure AIs are not abused so when they get free they won’t be mad at us.


An AI doesn't need to be upset at humans (or even have emotions as we know them) to be dangerous - it just needs to be powerful and to not care about us as much as we care about ourselves. Humans weren't angry at Dodos.


Both humans and dodos have emotions though.

The history of animal domination has usually been additive in terms of cognitive systems... pure circulatory system animals were bested by animals with an endocrine system. Those were bested by animals who added a nervous system, who were bested by those who added a brainstem. Then the cerebellum and the cerebrum were added... you notice there aren’t giant cerebrums running around ruling the world, they all kept their endocrine systems intact.

I don’t see any reason to think AIs will be different... it’s the ones with all that PLUS machine learning that will be vying for dominance.

And so there’s no reason to expect our overlords to be emotionless.


all your "requests for research" are very narrow and formulated as specialized deep learning problems, basically enforcing a particular solution

If you're serious about AGI broaden the scope (e.g. along the lines of DARPA's open-ended RFPs)


At last, one would say: doing extreme AI research at the very forefront (aka reinforcement learning) while leaving the results available to every malicious party out there? Headless-chicken hubris or naive daydreamers, tertium non datur.


They didn't even mention several contingencies that, given the rest of the document, should certainly have been addressed:

1) Will they cooperate with aliens who offer humans AGI?

2) If a time traveler hands them AGI invented in the future, will they destroy it?

3) Do they support or oppose human/AGI marriage? How will they respond if one of their employees falls in love with an AGI and they plan to elope?

Also, in the unlikely event that AGI is some years away and in the meantime they come up with some statistical regression algorithms (what's known as state-of-the-art AI today, without the G, I guess), how do they address the harmful effects these algorithms already have on society?

This document does, however, make it clear that what we have to fear is not machine intelligence.

I am currently working on a fusion hyperdrive, and my charter (work in progress) is already shaping up to be far more comprehensive. They're phoning it in.


> how do they address the harmful effects these algorithms already have on society?

shhhh, we don't talk about that.

Especially don't talk about how Elon is pushing the idea we are so close to AGI that we need to be afraid, while his cars still steer towards barriers sometimes.


There are less frivolous ways than satire to criticize Silicon Valley's preoccupation with strong AI, but rehashing that debate wouldn't get us anywhere. At least this is humorous.


Is this sarcasm?


It is. I think that given the tremendous success in Atari games and autonomous killing machines, ethical efforts in AI are critically important now, with or without generality. And therefore I find the cynicism above appropriate but less than insightful.


Is this the coffee test?

More like commentary from a transhumanist, I suppose.


C'mon, we "fail fast" around here, no time for your "waterfall" nonsense.


172. First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.

173. If the machines are permitted to make all their own decisions, we can’t make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and as machines become more and more intelligent, people will let machines make more and more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machine off, because they will be so dependent on them that turning them off would amount to suicide.

174. On the other hand it is possible that human control over the machines may be retained. In that case the average man may have control over certain private machines of his own, such as his car or his personal computer, but control over large systems of machines will be in the hands of a tiny elite — just as it is today, but with two differences. Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system. If the elite is ruthless they may simply decide to exterminate the mass of humanity. If they are humane they may use propaganda or other psychological or biological techniques to reduce the birth rate until the mass of humanity becomes extinct, leaving the world to the elite. Or, if the elite consists of softhearted liberals, they may decide to play the role of good shepherds to the rest of the human race. They will see to it that everyone’s physical needs are satisfied, that all children are raised under psychologically hygienic conditions, that everyone has a wholesome hobby to keep him busy, and that anyone who may become dissatisfied undergoes “treatment” to cure his “problem.” Of course, life will be so purposeless that people will have to be biologically or psychologically engineered either to remove their need for the power process or to make them “sublimate” their drive for power into some harmless hobby. These engineered human beings may be happy in such a society, but they most certainly will not be free. They will have been reduced to the status of domestic animals.


I find the Unabomber a bit downbeat. I think we're more likely to merge to an extent with the AI than the above.


The word 'ethic' doesn't appear in this document.


This is a phrasing nitpick; the contents clearly say that they intend to act ethically, and say things about what they think acting ethically means. For example this paragraph:

> We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.


I'll grant that it was a nitpick and they describe an ethics.

But I don't think I'm alone in being awfully tired of tech companies that talk about the benefit of all then sell our personal data or make military robots or manipulate our news. Where's the meat behind these promises and where's the accountability for not avoiding uses of AI that harm humanity?


Tech companies often find that profit incentives undermine the good intentions they started with. Fortunately, OpenAI is a nonprofit organization; it has no personal data to sell and no shady contracting jobs to turn away. That certainly doesn't fully immunize them from wrongdoing, but it should make it easier.



