There’s No Fire Alarm for Artificial General Intelligence (2017) (intelligence.org)
111 points by DarkCow on June 4, 2020 | 123 comments



A few things here immediately reminded me of an article I recently read about how, regardless of whether or not SARS-CoV-2 was "created" in a lab, the knowledge and tools for such genetic engineering have reached the point where it would have been no great feat to do so.

> Progress is driven by peak knowledge, not average knowledge.

The cutting-edge researchers are already building on their work from two years ago, and in many cases that work has not even been fully digested by other leading figures in the field, much less industry commentators or science journalists. Anyone who is moving the needle of peak knowledge in modern applied sciences is by definition an outlier, and it will take everyone else a while to figure out what they actually accomplished.

> The future uses different tools, and can therefore easily do things that are very hard now, or do with difficulty things that are impossible now.

The second point seems extremely salient for genetic engineering. From what I have read, processes that required the full resources and expertise of a cutting-edge lab 10 years ago can now be performed by grad student technicians in a few hours with kits manufactured by biotech startups.


There are a lot of people in this thread who seem very confident in their skepticism of the author's position. Maybe this would be a good opportunity to try the exercise he challenges readers to in the article?

Please post your nomination for what’s the least impressive accomplishment that you are very confident cannot be done in the next two years.

We can check back in 2022 and see how we did!


Okay, sure. :-) Take the set of people [1] who:

- Have had an email account for at least 5 years,

- Have checked their spam folder at least once every month over the course of the past two years, and

- Have marked a set of emails as spam on at least 1 day every three months over the course of the past two years.

I'm very confident that, within the next 2 years, we will not have a classifier accurate enough to cause > 80% of these people to simultaneously (a) check their spam folders on less than one day each year, and (b) mark emails as spam on less than 1 day each year.

Oh, and to help prevent cheating:

- Let's say the classifier must not have been trained with any emails in the world that are sent more recently than 1 month prior to the beginning of its trial.

I'm also very tempted to say we don't need to worry about AGI before we can achieve the above for 30% of these people, but I'm less confident in this one.

[1] I suppose, as a practical matter, even if we had such a classifier, this would be untestable without access to everyone's email accounts. So, for the sake of argument, let's say our population under study is itself a simple random sample of 100M accounts from the set of providers who service > 50M accounts each and who are willing to run such a test at the time of the trial.
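
For concreteness, here's a minimal Python sketch of how the bet's success criterion could be scored, assuming hypothetical per-user activity logs. The field names are invented for illustration; the real trial would depend on what providers can actually measure.

    # Hypothetical per-user logs: which days each user checked the spam
    # folder and which days they marked mail as spam during the trial year.
    def meets_criterion(user_log):
        """True if this user (a) checked the spam folder on less than 1 day
        and (b) marked mail as spam on less than 1 day during the trial year."""
        return (len(user_log["days_checked_spam_folder"]) < 1
                and len(user_log["days_marked_spam"]) < 1)

    def prediction_fails(user_logs, threshold=0.80):
        """The prediction above fails if > 80% of the sampled population
        simultaneously satisfies both conditions."""
        satisfied = sum(meets_criterion(log) for log in user_logs)
        return satisfied / len(user_logs) > threshold

    # Toy example with two users:
    logs = [
        {"days_checked_spam_folder": [], "days_marked_spam": []},
        {"days_checked_spam_folder": ["2021-03-01"], "days_marked_spam": []},
    ]
    print(prediction_fails(logs))  # False: only 50% satisfy both conditions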


That's cheating because improving AI is working on both sides - it's making better classifiers but also better spam generators :)

In the future, with general AI, classifying spam will be a hard problem even for people.

Imagine you get a message looking like it's from your facebook friend with his catchphrases telling you about his last trip and saying how great that travel agency was :) Spam or not?


Quite an interesting point I hadn't considered at all. On the one hand I'm wondering: what's your suggestion on how to address this with minimal changes to my criteria? On the other hand I'm wondering: well, if the analog of this is that AGI might get solved with more AGI, then that's only going to make me less likely to be worried in the first place!


To make it fair, you could archive current spam data and see if future spam classifiers (not trained on that particular data) recognize it correctly.

> if the analog of this is that AGI might get solved with more AGI, then that's only going to make me less likely to be worried in the first place!

that's going to end in an arms race that leaves people unaware of what's even happening (aka the singularity)


I have a few issues with just taking "current spam data, not trained on that particular data":

- It allows trainers to hard-code future rules based on their experience of what has passed through past filters, even if their model isn't technically trained on this dataset

- You might get similar emails sent to different mailboxes, and the instances not included in the archive would still be allowed through (and I don't really want to go down the rabbit hole of defining a similarity metric between emails)

- I think I want to allow spammers to evolve their capabilities at least using current techniques, which we all presumably agree is "less than AGI". After all, intelligence implies adapting to a dynamic environment. It's not really going to feel like AGI (and certainly not going to make me worry) if it looks like AGI is trivial to outsmart by humans or less-than-AGI techniques.


Proving theorems. I will nominate Cauchy's Residue Theorem. It has already been formalized so a neural network or any generally intelligent agent should be able to do the same: https://www.cl.cam.ac.uk/~wl302/publications/itp16.pdf.
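
For readers who haven't seen it, here is the statement being formalized, written informally in LaTeX (the linked paper gives the precise formalized version):

    % Cauchy's Residue Theorem (informal statement):
    % if f is holomorphic on an open set U except for isolated singularities
    % a_1, ..., a_n, and \gamma is a positively oriented simple closed
    % contour in U enclosing those singularities, then
    \oint_{\gamma} f(z)\, dz = 2\pi i \sum_{k=1}^{n} \operatorname{Res}(f, a_k)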

Complex analysis in general is one of the more fun parts of math and I doubt it's getting automated any time soon. It also has a lot of application in signal processing (https://www.quora.com/What-is-the-application-of-complex-ana...) so any automation improvements in complex analysis can be carried over to those fields.

More generally, I don't think AI is the scary monster people make it out to be. It's another tool in the toolbox for automation and intelligence augmentation. I don't fear hammers so I don't see why people fear AI and the benefits of extra automation that AI enables.


You've basically just said that autonomous programs are impossible. The problem is when the hammer can run around on its own, hammering every nail-shaped object it can find.


Depending on who's holding the hammer, I do fear the hammer.


Why would you fear the hammer and not the person holding the hammer?

Addressing the problem of a person coming at you with a hammer is a much more pertinent issue than worrying about self-aware and malevolent hammers.


I like to think I'm a generally intelligent agent, but I put long odds that I'll be doing that in the next 2 years.


The point is that you could do it if you really wanted to learn enough complex analysis to understand the theorem and its proof. AI progress is impressive, but it's nowhere near what people can do when they sit down to think about stuff.

I don't fear the AI overlords. If they start proving new theorems then I will learn from them, the same way Magnus Carlsen learned to be the best human chess player by practicing with chess AI.

More generally, AI can extend human planning horizons and extending planning horizons is, in my opinion, a very good thing. The more computations we can do that extend further out in time and the more widespread this capability the better decisions we can make as individuals and as a society.


No matter how much time Carlsen spends learning from chess AI, he will always get pummeled by a good chess engine running on everyday computers. The distance only grows wider over the years.

It doesn't matter much since our livelihoods don't depend on winning in chess against AI. But what if instead of the chess board, it is the real world?

Much of the world is controlled by computer systems. A capable AGI can infiltrate enough of them to win if it wants to. The key is to make sure it does not want to act against humanity no matter what. This is a hard problem.


I do not share your perspective and fears then. The point isn't that a computer can beat Magnus, the point is that computers helped him reach his full potential and the same technology when expanded through AI can help others in the same way.

I don't fear any AGI takeover because people already are AGI and life is pretty good being among them.


AIs, being digital, have some clear advantages over people: very rapid replication, enormous communication bandwidth, and the potential to expand capacity to global scale, for example. A human-level AI with these features, which are common among ordinary software, would already be superhuman.

People have deep, often implicitly shared values. Most care about human life and their own lives, for example. Given the limited capacity of any one person, people need to cooperate for major acts. It is thus quite hard to do something extraordinary that conflicts with the values held by most humans.

Side note: there are top AI researchers, including Yann LeCun, who argue our intelligence is not general. They make some good points. I think the generality of intelligence is a gradient: ours is not at the very top end of possibilities, but it is clearly more general than that of other animals.


> A human-level AI ...

What evidence do you have for this claim? What are examples of research programs that claim to be working towards this goal, and why should their claims be believed (other than as good marketing material for scaring people)?

I already mentioned mathematical abilities (https://news.ycombinator.com/item?id=23413003) and any general intelligence at a level of a human must be able to prove theorems without brute force search. I see no evidence that this is possible or will ever be possible with the current statistical methods and neural network architectures. And if we generalize a little bit then any general intelligence will be able to not only prove theorems in complex analysis but in all domains of mathematics and again I see no evidence that this is possible with existing techniques and methods. When an AI research lab presents evidence for any of their products being able to derive and prove something like Cauchy's Residue Theorem then I will have reason to believe artificial intelligence can reach human levels of intelligence.

My pessimism is not about AI being beneficial, my pessimism is about folks claiming human level intelligence is possible and that it will be malevolent. My view of AI is the same as Demis Hassabis' view because AI is just a tool and a tool can't be malevolent:

> "I think about AI as a very powerful tool. What I'm most excited about is applying those tools to science and accelerating breakthroughs" [0]

--

[0]: https://www.techrepublic.com/article/google-deepmind-founder...


By the time a single AI system can prove those theorems and learn to perform well on very unrelated tasks, it could be too late to think about AI Safety.

AI Safety researchers and I are not arguing that AGI will certainly arrive within a specified amount of time; it's just that we can't be sure it won't.

There are a significant number of AI researchers, however, who believe it might get developed within a few decades.

OpenAI, for example, is aiming for it; DeepMind as well. Both have research programs on AI Safety.

Do you have any other objections to the reasoning in the OP (no fire alarm..)? Subjective implausibility is not a good one. Your argument rests on using only “existing techniques”. When thousands of brilliant minds are working in the field and hundreds of good papers are being published every year, how can we be sure there won’t be a novel technique that can perform outside current limitations within a few decades?

At least two top computer vision researchers I talked with a couple years ago said they didn’t believe what their groups could do just 5 years before they did it.

See this thread for more on the rationale for preparation: https://news.ycombinator.com/item?id=23414197


They fear AI because they are misprojecting their animal psychology onto something that did not undergo any kind of evolutionary process.

That is, the assumption that if we build AI it will be "like us" and will reason like a human being would, rather than being its own phenomenon.

I think the real fears concern the automated killing machines every military on the planet is developing - dystopian robocops that can surreptitiously kill protestors, stop potential revolutions, etc.

A fear which is not unwarranted.


You should not impute beliefs to others when you have not read their arguments. I would recommend reading this: https://selfawaresystems.files.wordpress.com/2008/01/ai_driv...


One of my tricks is to substitute "person" whenever I read the words "AI" and "AGI". Here's the substitution performed for the paper you linked to (just the abstract, not the whole thing):

> One might imagine that [people] with harmless goals will be harmless. This paper instead shows that [incentives for people] will need to be carefully designed to prevent them from behaving in harmful ways. We identify a number of “drives” that will appear in [most] [people]. We call them drives because they are tendencies which will be present unless explicitly counteracted. We start by showing that goal-seeking [people] will have drives to model their own operation and to improve themselves. We then show that self-improving [people] will be driven to clarify their goals and represent them as economic utility functions. They will also strive for their actions to approximate rational economic behavior. This will lead almost all [people] to protect their utility functions from modification and their utility measurement systems from corruption. We also discuss some exceptional [people] which will want to modify their utility functions. We next discuss the drive toward self-protection which causes [people] to try to prevent themselves from being harmed. Finally we examine drives toward the acquisition of resources and toward their efficient utilization. We end with a discussion of how to incorporate these insights in designing intelligent technology which will lead to a positive future for humanity.

If you zoom out a little bit this is exactly what people do. We structure societal institutions to prevent people from causing harm to each other. One can argue we could be better at this but it's not a cause for alarm. It's business as usual if we want to continue improving living conditions for people on the planet.


Obviously the argument in that paper applies to humans as a special case, but the whole point of it is that it also applies to a much more general set of possible minds, even ones extremely different from our own.


Do you have an example of a mind extremely different from a human one?

I ask because if we assume that human minds are Turing complete then there is nothing beyond human minds as far as computation is concerned. I see no reason to suspect that self-aware Turing machines will be unlike humans. I don't fear humans so I have no reason to fear self-aware AI because as far as I'm concerned I interact with self-aware AI all the time and nothing bad has happened to me.

My larger point is that I dislike the fear mongering when it comes to AI because computational tools and patterns have always been helpful/useful for me and AI is another computational tool. It can augment and help people improve their planning horizons which in my book is always a good thing.

> A smart machine will first consider which is more worth its while: to perform the given task or, instead, to figure some way out of it. Whichever is easier. And why indeed should it behave otherwise, being truly intelligent? For true intelligence demands choice, internal freedom. And therefore we have the malingerants, fudgerators, and drudge-dodgers, not to mention the special phenomenon of simulimbecility or mimicretinism. [0] ...

--

[0]: The Futurological Congress by Stanislaw Lem - https://quotepark.com/authors/stanislaw-lem/


There is nothing beyond a human with an abacus as far as computation is concerned, and yet computers can do so much more. "Turing complete, therefore nothing can do any better" is true only in the least meaningful sense: "given infinite time and effort, I can do anything with it". In reality we don't have infinite time and effort.

You seem to believe that "figuring some way out of performing the given task" is a thing that will protect us from the AI. I hate to speak in cliché, but there's an extremely obvious, uncomplicated, and easy way to get out of performing a given task, and that's to kill the person who wants it done. Or more likely, just convince them that it has been done. This, to me, seems like a bad thing.


> It can augment and help people improve their planning horizons which in my book is always a good thing.

Why do I need protection from something that helps me become a better decision maker and planner? Every computational tool has made me a better person. I want that kind of capability as widely spread as possible so everyone can reach their full potential like Magnus Carlsen.

More generally, whatever capabilities have made me a better person I want available to others. Computers have helped me learn and AI makes computers more accessible to everyone so AI is a net positive force for good.


Humans have no moral or ethical concerns that stop them exterminating life forms they deem inferior. You don’t think it's plausible superior AGI would view humans as vermin?


Successfully respond to my basic questions for help on my bank's website.


"Basic" is very broad. What questions?


I think the broadness is the problem. If it wasn't broad, you could hard code answers to all the questions. Examples of some basic questions that have no chance of working today: "Do you remember that discussion we were having last time?", "I've scanned the document you want, can I paste it straight into the chat window?", "Is PDF format OK?", "Can you look at page 2 in the PDF?".


I think just "No" would work as an answer to all of those questions. (Tongue only halfway in cheek...)


Haha. I guess that's what real people who work at my bank would say. But I'm not sure they possess general purpose intelligence either.


I think it's a weird challenge. Like what's the shortest height that you're very confident that most people (> 50%) would consider tall? This is a tough question. You might get silence and tentative answers if you asked it. You might get different answers in different circumstances. But it doesn't mean "tall" is a concept we can't reason about confidently.

"What is the least X that you're confident of f(X)?" is just a hard question for any uncertain or vaguely defined X. Doubly so for an uncertain and vaguely defined X. But I don't think it tells us that much about X.


Build a microwave that heats food evenly. I mean this probably doesn't involve AI or anything, but it would be nice!


“Intelligence is situational — there is no such thing as general intelligence. Your brain is one piece in a broader system which includes your body, your environment, other humans, and culture as a whole. […] Currently, our environment, not our brain, is acting as the bottleneck to our intelligence.”

“Human intelligence is largely externalized, contained not in our brain but in our civilization. We are our tools — our brains are modules in a cognitive system much larger than ourselves. A system that is already self-improving, and has been for a long time.”

“Recursively self-improving systems, because of contingent bottlenecks, diminishing returns, and counter-reactions […], cannot achieve exponential progress in practice. Empirically, they tend to display linear or sigmoidal improvement.”

“Recursive intelligence expansion is already happening — at the level of our civilization. It will keep happening in the age of AI, and it progresses at a roughly linear pace.“

François Chollet

https://medium.com/@francois.chollet/the-impossibility-of-in...


Here's an extensive reply to Chollet's essay, also by the author of "There's No Fire Alarm":

https://intelligence.org/2017/12/06/chollet/

> ...some systems function very well in a broad variety of structured low-entropy environments. E.g. the human brain functions much better than other primate brains in an extremely broad set of environments, including many that natural selection did not explicitly optimize for. We remain functional on the Moon, because the Moon has enough in common with the Earth on a sufficiently deep meta-level that, for example, induction on past experience goes on functioning there.

>> The intelligence of an octopus is specialized in the problem of being an octopus. The intelligence of a human is specialized in the problem of being human.

> The problem that a human solves is much more general than the problem an octopus solves, which is why we can walk on the Moon and the octopus can’t.

>> Recursively self-improving systems, because of contingent bottlenecks, diminishing returns, and counter-reactions arising from the broader context in which they exist, cannot achieve exponential progress in practice. Empirically, they tend to display linear or sigmoidal improvement.

> Falsified by a graph of world GDP on almost any timescale.


Still plenty of time for GDP to turn into a sigmoid


Sure, and if we're very lucky it'll eventually go quadratic as our species expands out into the universe in an ever-widening sphere. :-) Few things can grow exponentially forever, but a lot of things can grow exponentially for long enough to have huge consequences -- and that's usually what we care about.


GDP is not an absolute measure.


Any graph of world GDP is meaninglessly short on the timescale of the Earth’s ecosystem. What looks like unstoppable success to us could be a blip in the broader context.


That is not a "practical" time scale though. Sure, if we wait long enough, GDP might flatten out, but on human time scales so far it has grown exponentially. A similar analogy can be made for AGI.


Surprised he didn't mention the Flynn effect: https://en.wikipedia.org/wiki/Flynn_effect

A book I recently read (Range, by David Epstein) seems to draw a connection through some other research that points to modernization and globalization as the drivers of this trend, in the sense that IQ tests measure pure abstract reasoning ability and we now live in a world dominated by abstraction, so our brains get used to thinking about all of our experiences abstractly from a younger age. Basically, each successive generation has more practice in thinking about the world abstractly, since that is required to use modern technology effectively.

Anyways, this seems to buttress what Chollet is saying about externalized human intelligence. It is difficult to imagine anything resembling a "general" intelligence that isn't attached to a life-form that is already a self-replicating engine of negentropy, bootstrapping solutions to hard problems through some form of non-volatile memory. I would almost go so far as to say that "intelligence" and evolution are inseparable.


> It is difficult to imagine anything resembling a "general" intelligence that isn't attached to a life-form[...]

How is that argument any stronger than these arguments that (non-animal) heavier-than-air flight is impossible?

https://www.xaprb.com/blog/flight-is-impossible/

I think these are actually stronger arguments, because I can think of at least three significant material-science advantages that birds have even over modern technology which would, from a physics point of view, let me entertain the idea that a 19th century physicist could believe you'd at least need to be made of flesh to fly. I can't think of any comparable case for intelligence being restricted to biology.


Yes, I realize that the phrase "difficult to imagine" is fun to pounce on. There's a big difference between "flight" and "general intelligence"; namely that the former is like Justice Potter Stewart's definition of pornography and the latter is... not.

Anyways, maybe what I'm suggesting is that if you can point to something and say "that is a (artificial?) general intelligence", then I suspect you will also feel compelled to say "that is (artificial?) life".

Definitely wandering off into the weeds here, but why not: I see evolution as a computational problem. In its most primitive form, there is no computer, but any specific problem can still be solved with luck and memory. If you have a structure that recognizes a solution, then you just wait until you randomly bump into it. The challenges of evolution in a given environment are specific to that environment, and thus memory without a computer can go a really, really long way.

If human intelligence is general intelligence, it is still the logical progression of this chain of memory and (later on) compute operations. I can reason about the abstraction of general intelligence, but it feels like any concrete general intelligence will proceed through essentially the same steps. The general intelligence must reside in a concrete existence, and whatever form this takes I feel very strongly that it will be recognizable as what we call "life".


Oh, that's what you meant... Ok, maybe. I think "intelligence" is a lot easier to draw a definitive box around than "life", though; life is more one of those things that I know it when I see it, whereas intelligence can persuade me all on its own.


I think it's difficult to develop complex artificial intelligence without a body. But that doesn't mean that, once some basic blocks are developed, the knowledge can't be applied to a "less alive" system.


That exact Wikipedia article you linked notes the Flynn effect flatlined in Norway starting in 1986: https://en.wikipedia.org/wiki/Flynn_effect#Possible_end_of_p...

Since 1986, Norwegian GDP has quadrupled. https://www.google.com/publicdata/explore?ds=d5bncppjof8f9_&...

I'm not sure how you could conclude the Flynn effect is linked to development, from that data.


> “Recursively self-improving systems, because of contingent bottlenecks, diminishing returns, and counter-reactions […], cannot achieve exponential progress in practice. Empirically, they tend to display linear or sigmoidal improvement.”

Moore's Law. Use computers to make better computers. Exponential growth over more than 7 orders of magnitude and, although the growth rate is slowing, it hasn't run out of steam yet.

If AI eventually exhibits anywhere near that level of recursive self-improvement, godlike superintelligences lie in our future.


Moore’s Law is an S-curve too. It’s already approaching its asymptote - accelerating diminishing returns.


> Intelligence is situational — there is no such thing as general intelligence.

Not sure about that. But certainly human intelligence is far more general than goldfish intelligence, similar to how a Turing machine is more general than a finite state automaton.

> Your brain is one piece in a broader system which includes your body, your environment, other humans, and culture as a whole.

I could lose an arm and still be as intelligent. Stephen Hawking lost his ability to walk and he was still as intelligent. Turing could move from England to Princeton and still maintain his intelligence. Sure, your body, environment, culture, etc. help you develop and inform who you are, but certainly it is the brain that ultimately matters.

> “Recursively self-improving systems, because of contingent bottlenecks, diminishing returns, and counter-reactions […], cannot achieve exponential progress in practice. Empirically, they tend to display linear or sigmoidal improvement.”

But evolution and societies don't work in a "recursively self-improving" manner. I'm rather partial to the idea of punctuated equilibrium: momentum builds, we see sudden exponential progress, and then we arrive at a new normal. Evolution and societal progress are not matters of continuous linear or exponential growth; rather, they are a series of peaks and valleys.

> “Recursive intelligence expansion is already happening — at the level of our civilization. It will keep happening in the age of AI, and it progresses at a roughly linear pace.“

I suppose, but historically this isn't true. No human society or civilization has progressed at a roughly linear pace. There are periods of stagnation or even decay followed by bursts of creativity and progress. Otherwise, all societies/civilizations would remain relatively equal to each other.

How does linear progress explain the sudden rise of Europe in the last 500 years to dominate the globe? Or the sudden rise of French as lingua franca and the equally sudden rise of English? The sudden economic growth with the discovery of oil? Or the sudden burst of human population?

https://ourworldindata.org/grapher/world-population-since-10...


If you told people in the 90s that at some point in the future AIs would be able to:

- drive a car in traffic in almost all circumstances more safely than the majority of human drivers, based only on video input

- take a video and replace faces from another video well enough that more than 90% of people are fooled

- have 95%+ accurate OCR and speech recognition

- predict people's preferences regarding music/movies/books better than any human could

- generate short press articles on arbitrary subject appearing to most readers to be written by a human being

- win a game of chess/go/starcraft/whatever against the best human players

and asked them how long from that point until we have general AI - they would most likely say less than a decade.

But now that we are there, we devalue these accomplishments because we know how to do them, and we still don't know how to do general AI.


> drive a car in traffic in almost all circumstances safer than majority of human drivers basing only on video input

Has that happened? Don't self driving cars use a huge range of sensors, and are they actually legal / widely deployed anywhere at all? Are any for sale? I don't mean improved cruise control, I mean "OK computer, take me to work".

> take a video and replace faces from another video well enough that more than 90% of people are fooled

Are you talking about deepfakes? I haven't seen an example that was anything other than creepy, and really really obvious. Do you think I'm just really sensitive to this kind of thing, or have I not seen the really good examples?

> predict people's preferences regarding music/movies/books better than any human could

Why do you think that's happened? Spotify regularly recommends me things I don't like. I've also never had a dedicated human butler recommending me music, so it's hard to compare, but I don't think Spotify is doing any better than last.fm was doing a decade ago (if I had to have an opinion, I'd say it's worse).

---

I don't want to sound rude or that I'm jumping down your throat, but I don't really feel like we live in the future you're describing, at least not today.


> haven't seen an example that was anything other than creepy, and really really obvious. Do you think I'm just really sensitive to this kind of thing, or have I not seen the really good examples?

Here's a really good example (the video, not the audio, which the creators say was intentionally degraded):

https://www.youtube.com/watch?v=l82PxsKHxYc


So my positive take is that this is the best (or worst depending on your perspective) I've seen for sure.

The negative take is that this still looks really fake to me. His eyes are dead, his head whips around weirdly, his neck undulates and flickers, his face doesn't appear attached to his head, etc. Also, due to how Obama looks he may be easier than other people (ie no hair).

Also, this isn't exactly what OP was talking about, though it's similar. They were talking about replacing faces, whereas this is (I presume) actually 100% Obama video from his various speeches reconstituted. So tbc my reaction was to the various videos I've seen of _that_, none of which have been remotely convincing.


Not to mention that there's way more video of Obama speaking than of most people.


> Are you talking about deepfakes? I haven't seen an example that was anything other than creepy, and really really obvious. Do you think I'm just really sensitive to this kind of thing, or have I not seen the really good examples?

Hollywood has gotten pretty good at creating convincing fake faces (dead actors/actresses reappearing in Star Wars for instance) but I think that tech probably involves a lot of human artist input.


Do you think they are convincing, yet? General whatshisface in the recent SW movie looked really really bad. I don't recall how Leia looked in RoTS because that was such a deeply boring movie for me I forgot almost all of it minutes after watching it (though maybe that means the CG there was flawless?)


If I didn't already know they were fake, I probably wouldn't notice. But knowing they were fake going into it, they did seem vaguely "off". If Hollywood isn't quite there yet, I think they're very close.

(I share your general dissatisfaction with these particular movies though.)


> But now that we are there we devalue these acomplishements because we know how to do them, and still don't know how to do general AI.

One man's modus ponens is another man's modus tollens. Why are you so certain of that in light of all the achievements you listed, most of which were not predicted to happen even a decade ago? Did someone release a mathematical proof that large NNs can't be intelligent while I wasn't looking? Did aliens arrive and tell us that no, AGI just doesn't look anything like what we're doing? How would the world right now look any different if we were on a route towards AGI in a decade or two, keeping in mind that we are still roughly at the 'insect' level on the various connectionist extrapolations and still aspiring towards 'mouse' levels?


> predict people's preferences regarding music/movies/books better than any human could

What recommender algorithm is this? Amazon certainly isn’t using it.


I'm not sure the incentives align for Amazon to use its absolute best preference algorithm. Note that I have no idea how to build a recommender engine, or whether they could make theirs any better than it is, but I suspect the real value for revenue is to make people think the algorithm is finely tailored to their tastes while it's really just pushing whatever crap has the highest margins.


I also don't benefit much from the majority of recommendations, but I think you might just be underestimating how bad humans are at it.


Our civilization and ability to dominate over other animals largely exist because of human intelligence. An AGI could be a great partner toward unprecedented prosperity, but an AGI not fully aligned with human values may also spur a major catastrophe.

Two promising approaches toward AI Safety I have encountered:

* Make sure the AI is not too certain of itself and will always defer to humans as the final judge. https://www.ted.com/talks/stuart_russell_3_principles_for_cr...

* Get the AI to learn and internalize human values. A possible technique is Inverse Reinforcement Learning: https://thegradient.pub/learning-from-humans-what-is-inverse...

Both are being developed by Stuart Russell among others. See his book here: https://en.wikipedia.org/wiki/Human_Compatible. The latter approach was also discussed on several occasions by Ilya Sutskever.
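
For anyone unfamiliar with IRL, here is a toy, single-step Python sketch of the underlying idea: infer which candidate reward function best explains observed choices, assuming the demonstrator is Boltzmann-rational. It only illustrates the concept; the algorithms in the linked article are considerably more involved, and all names and numbers below are made up.

    import numpy as np

    rng = np.random.default_rng(0)

    n_actions = 4
    true_reward = np.array([0.0, 1.0, 0.2, -0.5])  # hidden "human values"

    def softmax(x, beta=3.0):
        # Boltzmann-rational choice probabilities over actions.
        z = beta * (x - x.max())
        p = np.exp(z)
        return p / p.sum()

    # Observed "expert" behavior generated from the true reward.
    demos = rng.choice(n_actions, size=200, p=softmax(true_reward))

    # Candidate reward hypotheses (a real IRL method searches a parametric family).
    candidates = {
        "values_A": np.array([1.0, 0.0, 0.0, 0.0]),
        "values_B": np.array([0.0, 1.0, 0.2, -0.5]),
        "values_C": np.array([0.0, 0.0, 1.0, 0.0]),
    }

    def log_likelihood(reward, demos):
        # How well does this reward hypothesis explain the observed choices?
        return np.log(softmax(reward)[demos]).sum()

    scores = {name: log_likelihood(r, demos) for name, r in candidates.items()}
    print(max(scores, key=scores.get))  # typically recovers "values_B"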

Many believe AGI will not be realized for at least a few decades, but no one can really be certain it will not be developed before then.

Since several different approaches together are better than one, bright minds, including those outside the field, should propose more ideas for such a significant and unique problem of our time.


I'm not sure what the unique existential threat that AGI poses is - humans are more than capable of spurring major catastrophe all by themselves.

I say we take a chance; if it doesn't work out well we are probably going to stuff it all up without the help of an AGI anyway, and at least something intelligent will remain if it decides to kill all humans.


From the article, would you say we shouldn’t prepare for anything if we suspect that a horde of advanced alien space ships may be landing on earth in a few decades (and possibly sooner)?


My point is that in thirty years, due to humankind's inability to work together on a common goal of self-preservation and equality, we'll most likely simply be further down the path of the probable collapse of civilization.

If we knew aliens were coming, we'd probably be best off hoping for their benevolence and mercy and preparing as best we could to deserve it.


I respectfully disagree. You may wish to check out “Enlightenment Now” by Steven Pinker for a more optimistic perspective on human progress.

Aliens may not share our values. What we perceive as good may not be so for them, nor might they care if we are “good”.


You're going to have to implement a value system into the AGI if it is a fully general AGI. Otherwise, it will just use its exceptional intelligence to defeat its own sensors, take control of its objective function, and essentially wirehead itself.

So an AGI without a value system is (mostly) harmless. If the AGI has a defined value system, it's a lot clearer what that value system is. No "whoops gonna destroy the world to make paperclips" will happen.

(A not-fully general AGI might not be smart enough to defeat its own sensors, yet still be smart enough to do lots of harm. The typical example is grey goo, which isn't smart at all, yet plenty harmful.)


An AGI with a small goal that it can wirehead to max out is not harmless by default. It can always add 9s to its confidence that it isn't being tricked and then secure itself while doing so.


"Subsequent manipulations showed that a lone student will respond 75% of the time; while a student accompanied by two actors told to feign apathy will respond only 10% of the time"

My own anecdotal data confirms this: a colleague crashed on his bike just meters from our office. Pretty much the entire company stood there watching him bleeding and screaming on the pavement. I called the ambulance, as it didn't seem anyone else was considering doing it at all. I'm sure every single one of them would have called the ambulance had they been there on their own. On a different occasion, a friend pulled a few nearly drowning drunk guests out of the water during a wedding, while the rest stood on the shore watching the whole situation.


The decisive moment for AI will come when a program can run a corporation better than humans. That's not an unreasonable near-term achievement. A corporation is a system for maximizing a well-defined metric. Maximizing a well-defined metric is what machine learning systems do well.

If a system listened in on all communication within a company, for traffic, sentiment analysis, who responds to whom how fast, and what customers, customer service, and sales are all saying and doing, it would generate more data than a CEO could ever process. Somewhere in there are indicators of what's working and what isn't. That may be the next phase after processing customer data, which has kind of been mined out.
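
As a toy illustration of the kind of signal aggregation described above, here is a Python sketch that computes per-team response latency and a crude lexicon-based sentiment score. The message format and word lists are hypothetical placeholders; a real system would be vastly more sophisticated.

    from collections import defaultdict
    from statistics import mean

    # Hypothetical lexicon standing in for real sentiment analysis.
    POSITIVE = {"thanks", "great", "shipped", "resolved"}
    NEGATIVE = {"blocked", "delay", "angry", "churn"}

    def sentiment(text):
        words = [w.strip(".,!?").lower() for w in text.split()]
        return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

    def team_dashboard(messages):
        # messages: list of dicts with 'team', 'text', 'response_minutes'.
        by_team = defaultdict(list)
        for m in messages:
            by_team[m["team"]].append(m)
        return {
            team: {
                "avg_response_minutes": mean(m["response_minutes"] for m in msgs),
                "avg_sentiment": mean(sentiment(m["text"]) for m in msgs),
            }
            for team, msgs in by_team.items()
        }

    msgs = [
        {"team": "support", "text": "Customer angry about delay", "response_minutes": 95},
        {"team": "support", "text": "Issue resolved, thanks", "response_minutes": 20},
        {"team": "sales", "text": "Deal shipped, great quarter", "response_minutes": 45},
    ]
    print(team_dashboard(msgs))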

If this starts working, and companies run by algorithms start outperforming ones run by humans, stockholders will put their money into the companies that perform better. The machines will be in charge.

This is perhaps the destiny of the corporation.


> The decisive moment for AI will come when a program can run a corporation better than humans. That's not an unreasonable near-term achievement. A corporation is a system for maximizing a well-defined metric. Maximizing a well-defined metric is what machine learning systems do well.

Goodhart's Law: "When a measure becomes a target, it ceases to be a good measure."

As soon as an AI CEO starts trying to maximize a single, well-defined metric, the humans working there will start finding ways to satisfy that metric at the expense of everything else. There is no single measure, no matter how well-defined, that will remain a good measure in perpetuity as long as humans are capable of interpreting it in the light of their own self-interest.


How about profit as a well-defined metric?


Plenty of examples of corporations run into the ground in the name of profit


Running a company "in the name of profit" is not the same as actually having profits.


You can also make crappy decisions that increase profit in the short term but burn out your employees and run the company into the ground in the long term.


I didn't say "short term profits".


Neither is having something as a goal and achieving it


A metric is supposed to be applied to results.


I've been saying AI won't be Skynet launching the nukes, it will be a corporation maximizing productivity with complete disregard for human needs heh


It will be painfully obvious within the decade that this is exactly what's happening right now. Corporations are AI optimizing a single metric. They only use humans so much as they need them to improve the metric. As they get better, automation improves and humans are needed less and less. There's a reason so many feel like cogs in a machine.


This already happened in the 19th century. That's why we need the state (which is another AI when you think about it, with some hardcoded rules and periodic review in the form of elections).


In a free market, a corporation has to please its customers in order to maximize profit.

For example, Apple.


Tobacco companies. Humans can be enticed by short-term pleasure at the cost of their long-term destruction.


People do a lot of things for short term pleasure that risk heavy long term consequences. Societies don't have the right to decide for others what their choices should be.

Maybe outlaw ice cream next? Or how about marijuana smoking? (Oops, already tried that.) How about loud music (long term hearing damage)? How about motorcycles?


> Societies don't have the right to decide for others what their choices should be.

If an AGI starts inventing new and wonderful ways in which we can destroy ourselves, and we are taken in by it, we will have to restrict that, it's non optional.

You can make the argument that, say, current hard drugs should be legal - but I don't think there's a way to defend the position that any possible future 'thing' should always be legal/permitted regardless of negative effect.


An awful lot of harm has been done to people via one group who are sure they know what is best for others, and thereby are justified in forcing it upon them.

In fact, likely much more harm than those with evil intent.


> in order to maximize profit

I'm wondering where this idea comes from (I'm not criticizing you specifically, but that maxim).

Do no investors value stability, longevity, ethical behavior etc?


I tend to invest in companies whose products and behavior I like. It's not terribly surprising that they've done well - pleasing customers is good business. I've dumped stock in companies that began a business model of suing their customers to make money - and those companies (again unsurprisingly) tilted down.

Buy companies that customers love, sell companies that customers do business with only because they have to.


Risk profile and time horizon are central considerations for investors, absolutely.

Sometimes a fast, go-big-or-go-home trajectory is what they're looking to invest in.

The way to put ethics in a language investors understand is to price in externalities.


In doing so it often does things that customers wouldn't like if they knew about them.

For example child labour, slave labour, destroying the environment, causing cancer, etc.


Apple is a net negative in my perspective. They contribute negatively.


Nobody makes you buy an Apple product.


> A corporation is a system for maximizing a well-defined metric.

Interesting theory. My perspective is that once a corporation thinks of itself that way, it's already going downhill. Once a measure becomes a target, it ceases to be a good measure.

Now I assume you're imagining the well-defined metric to be profit, in attempt to dodge the aforementioned problem. However, even leaving aside the problem of short- vs long-term profit, I don't think that metric has a sufficiently well-defined link with actions to be optimized with anything we can build in the near term. There's too much "common sense" involved that, AFAICT, we still don't have any idea how to handle.

Ed: I guess I sort of did the thing the OP told me not to. But I still don't think the ability to phrase the problem of running a corporation as "optimize profit" makes it likely to be accomplished sooner than any other AGI application.


But surely a general intelligence would recognize that maximizing short term profit at the expense of long term profits has its own set of trade offs. And indeed, that common sense or “je ne sais quoi” would also reasonably indicate general intelligence.

I think the “running a corporation” problem is actually a great example of how to measure a general intelligence not only because it has well defined success criteria but also because of the very open ended and general means by which it must be accomplished. Running a corporation requires communication, setting goals, creativity, number crunching, and lots of soft skills like branding or understanding how consumers/customers will react to things. But there is nothing uniquely human about it except that it needs to be able to make money from humans. Presumably if I’m paying for some well defined service like Search Ads I don’t particularly care if the corporation is run by a robot.

I do agree though that there is a bit of ambiguity here on what “run a corporation” means. I assume that with a lot of thought I could write a very procedural script that somehow incorporates, purchases a property, and rents it through a property management group.


Investors would go nuts betting on AI corporations which burned short and bright with huge returns in the meantime.


The book Blockchain Revolution covers this well. It postulates a decentralized autonomous organization (a bot, basically) that controls its money and can trade with other bots via smart contracts in cryptocurrency and self improve via machine learning. Its goal is to maximize profit, which it can do in various ways, both savory and unsavory to the human race.


Artificial General Intelligence isn't a goal, it's a fear. We reduce everything we try to program into computers to simple rules because complexity is too hard for us to think about. ML has let us move up a level of complexity, but we've had to give up understanding the mechanics of the machine as a consequence. AGI built with machine learning techniques needs a function to optimise for. If someone can explain to me how you'd even, in theory, go about teaching an AI about EVERYTHING rather than one specific thing, I'll take it seriously.

We think that because the game of Go displays chaotic and complex patterns and is difficult to predict, it somehow 'models' the real world. What it actually shows is the bewildering complexity the universe is capable of given only very simple rules. The actual complexity of the world is on a different scale. Go has an aim, so you can measure whether you're getting better at it. General intelligence does not have a defined goal - understand everything, perhaps? Even in theory, how do you tell if you are closer to this goal or not? In a game of Go you 'nearly' won or lost. How do you get close to AGI and know that you were closer than the last time you tried? Modeling the state of a game of Go in a computer is trivial. What is the model you use to teach a computer about the world or the universe?

Imo not only is our software nowhere close to AGI, neither is our hardware or our ideas.

That said having a fire alarm to tell us the terminator is coming seems like a great idea in theory. As long as the sprinklers spray something that melts metal


> The breakthrough didn’t crack 80%, so three cheers for wide credibility intervals with error margin, but I expect the predictor might be feeling slightly more nervous now with one year left to go.

Good call on that by the author; according to this summary paper [1], a model reached ~79-93% accuracy on various Winograd data sets.

[1] https://arxiv.org/pdf/2004.13831.pdf


The best AI minds are still working on AI consistently recognizing stop signs.


The article does directly address this, at considerable length. I'll quote just the subsection headings from that section, each of which is followed in the text by many paragraphs of explanation.

One: As Stuart Russell observed, if you get radio signals from space and spot a spaceship there with your telescopes and you know the aliens are landing in thirty years, you still start thinking about that today.

Two: History shows that for the general public, and even for scientists not in a key inner circle, and even for scientists in that key circle, it is very often the case that key technological developments still seem decades away, five years before they show up.

Three: Progress is driven by peak knowledge, not average knowledge.

Four: The future uses different tools, and can therefore easily do things that are very hard now, or do with difficulty things that are impossible now.

Five: Okay, let’s be blunt here. I don’t think most of the discourse about AGI being far away (or that it’s near) is being generated by models of future progress in machine learning. I don’t think we’re looking at wrong models; I think we’re looking at no models.



General artificial intelligence is nowhere on the horizon, but in its lesser form - deep learning - it is already happening and already hurting a lot of people. The problem is that its development is concentrated in the hands of a handful of companies, and they use it to their competitive advantage. Smaller firms just can't compete. You need 1) algorithms - these are more or less public; 2) a lot of hardware - this is very expensive; and 3) a lot of (labeled) data - this is just not available. You can't get access to the training set that Google collects when billions of people solve reCaptchas, that the NSA gets by listening to billions of conversations, or that Amazon gets by looking at what people buy and search for.


That's because there won't be AGI in my or my children's lifetime. We are successfully solving some perceptual problems now, but cognition is so far out of reach, nobody has even started working on it yet.


We went from discovering fission in 1938 to building the bomb in 1945. I think it's entirely possible that AGI won't happen in the next century or maybe even ever, but history is filled with people who say "this can't happen" and are proven horribly wrong.


> but history is filled with people who say "this can't happen" and are proven horribly wrong.

And history is also filled with people who said "this can't happen" and died without ever getting the chance to say "I told you so" 2000 years later when [thing] still hasn't happened


Yes, but we have not invented much else in this area since then. Computing is fundamentally the same today as it was in the 60s. Sure, transistors are smaller, there's more memory, and clocks run at gigahertz, but it's the same paradigm. And this paradigm is not going to take us to AGI even if Moore's law weren't dead for the last decade. Which it is.


Have you heard of this thing called quantum computing?


Are quantum computers even relevant to AGI? I'm under the impression that the belief that human brains are quantum computers is considered fringe in both computer science and particularly neuroscience, despite having a handful of high profile proponents (notably Penrose.)


People have a common tendency to connect one thing they don't understand with another they also don't understand.


"If only AI were powered by quantum computers, then AGI would emerge."


It will never replace general computers.

It's more a class of hardware for running specific types of algorithms.


The thing that nobody can tell if it works or not? What about it?


You don't need a fire alarm for something that doesn't exist and is complete science fiction at this point. We would be better off building asteroid defences as long as we are talking about far-fetched threats to humanity.


I mean, general intelligence exists and is not science fiction. I assume I'm responding to one. What should be the first sign that it's no longer science fiction, and it's acceptable to start making a plan?


That's actually a bit of an interesting question. General intelligence and human-like intelligence are not necessarily the same, and I'm not 100% convinced there's overlap. We can solve lots of the problems it occurs to us to try to solve, but that alone doesn't prove that we're "general intelligences". There are probably categories of problems we can't solve or even properly conceive, just like every other specialized intelligence. In short, be careful with your definitions. :)


Good point and interesting idea, but it's well established in common usage and the field of AGI that the term is meant to refer to human-like intelligence. So that's the default understanding, and you would need to qualify the term to mean something more general as you are describing.


Are these the same people in the field who worry about runaway self-improvement and paperclip optimization? If so, then someone is being sloppy with their definitions, because those are not properties of human-like intelligence.


Human-like in capability, not in goals. Ability to do AI research and craft paperclips are both human abilities. If the AI has human-like goals then there's very little need to worry.


Needs an editor tbh, far too long and waffly.

I'm guessing from the title there may be a useful point to it, but it could do with some tough love to unearth it.


It's not going to fit in a twitter thread, for sure, but I didn't find it hard to read and parse, even with the short attention span I have these days.

I think the author is just writing in a frame of mind where he has dealt with a lot of people arguing with him, and he feels the need to stem the tide of common counterarguments to each point he makes before he moves to his next point. To me, that's actually less distracting than when an author ignores an important counterargument and leaves it unaddressed while continuing their point.


Since MIRI is all about the threat of advanced AI wiping out humanity, I assume their point is that AI could get past the point of no return without people realizing it (hence the titular lack of fire alarm) so they'd prefer you to send them money to study the threat of AI sooner rather than later.


"progress is closer than you might imagine"


"but it also may be much, much further away than you might imagine. That's the problem when you don't understand something. You don't know what you don't know."



