There's No Fire Alarm for Artificial General Intelligence (intelligence.org)
219 points by MBlume on Oct 14, 2017 | 206 comments



I've been around AI since the end of the last big hype in the late 80s. The recent leap in machine learning has felt rather hyped to me. I don't think AGI is near.

But I find myself agreeing with this article. Strongly.

And I have long suspected that we miss a lot of the significance and opportunities in AI because we have only one exemplar of 'higher' intelligence: a human being. AI folk are so concerned with getting computers to do the things that humans are good at that I suspect most will miss / 'refute' / deride the inflection point, because the system can't wash the dishes (or perform some other form of embodied cognition) or write poetry humans would find beautiful (or understand some other socially conditioned cue).

The superhuman fallacy really is the bane of AI.


>The recent leap in machine learning has felt rather hyped to me. I don't think AGI is near.

I've always thought it's never too early to start allocating significant resources to AGI research and safety given the potential impact. That said, I've up until very recently agreed with your take on the situation.

What changed my mind was an article detailing the latest advances in silver nanowire mesh networks.[0]

I knew neural computing was a thing, but not that we already had a computing substrate capable of self-organizing its own neural architecture based entirely on external input, with power requirements analogous to the human brain. No firmware or software required.

One could say that human physiology remains far more complex just on the substrate front alone, what with the brain being an incredibly complex, delicate balance of chemicals and heterogeneous cells. However, this particular artificial substrate is already succeeding in basic learning tasks, despite the fact it's far simpler.

I strongly suspect we've figured out at least one artificial computing substrate that is not only capable of producing AGI but perhaps well suited to it, and that it's just a matter of scaling it.

Of course, once you scale the technology sufficiently, the question then becomes how to architect and train it into an AGI. You say as much above, but I suspect the architecture need not be human to be a threat, or to otherwise become extremely powerful.

[0] https://www.quantamagazine.org/a-brain-built-from-atomic-swi...


Fascinating!

> the question then becomes how to architect and train it into an AGI.

If that’s the question then why even bother with silver nanowire “brains”? Why not just grow a human brain out of some suitable stem cells and work with that? Leaving aside the massive creepiness factor.


I'm guessing the answer is a combination of synaptic latency and ethics. The human brain's latency is slow relative to traditional computing: synaptic latency is measured on the order of milliseconds, while a modern IC's latency might be measured on the order of nanoseconds or even picoseconds.

As messed up as it is, using biology as you describe might actually be safer once we get close to AGI, even if it is creepy as hell. There's something to be said for having a machine that's physically limited to thinking as fast as a human can (even if it is vastly smarter), versus one that can figure out a new branch of physics in the time it takes to blink.


Thanks for pointing me to that. Very fascinating!

I’m wondering if it’s possible to code a simulation of what these wires are doing. Since most of us don’t have access to the nanoscale silver contraption, that would let us still study its operation.


The article makes a lot of good points, but for me, the critical error is in assuming that if short term prediction is hard, long term prediction must be massively harder.

He asked a panel for the least impressive thing they did not believe would be possible within a few years. In other words, pick the point closest to the boundary of that classifier. Obviously my future knowledge is imperfect, and anything close to the boundary is subject to a lot of uncertainty. From that difficulty, he hand-waves an argument that long-term prediction of the unlikelihood of AGI is folly.

The problem is that these aren't in the same class of predictions. One is detailed and precise; the other coarse and broad. Predicting that it will rain at 2:00 PM November 10, 2017 is much more difficult than predicting that the average summer of 2040-2060 will be hotter than the average from 1980-2000. Precise local predictions just aren't the same thing as broad global predictions, and difficulty doesn't transfer, because I'm not bootstrapping my global prediction on the local one. I'm using different methods entirely.

There's a similar thing with AI, I think. I can't confidently tell you what the big splash at NIPS next year or the year after will be. But I can look at the way we know how to do AI and say I don't think 30 years will see a machine that can make dinner by gathering ingredients from a supermarket, driving home, and preparing the meal.


> I don't think 30 years will see a machine that can make dinner by gathering ingredients from a supermarket, driving home, and preparing the meal.

Really? Why not? Once or twice, if we cherry-pick its performance, or reliably?

This is really surprising to me.


I mean reliably, the same way a human does. I can make a lasagne tonight, or lobster risotto, or whatever. I can decide on a thing, buy ingredients, chop things, get that lobster out of the shells, find the right recipe, substitute according to taste, and loads of other things that are somewhat related to making food. I can wash the pan I need, improvise a stove lighter if the igniter fails, etc.

We might be able to make machines to do each of those tasks, but that's not the answer. I might do 100,000 things in an average week. Clearly we aren't going to build 100,000 bespoke CNNs and LSTMs. To worry about superhuman AI, we probably have to figure out how to make one or a few machines that aren't glorified deep fryers.


> Clearly we aren't going to build 100,000 bespoke CNNs and LSTMs.

I get what you mean, but I don't think we should assume this.


It won't be a single machine. It will be multiple systems. And it's not that far away. Probably only 15 years.

And it will be a great boon. Quality of meals will go up and costs will go down. The restaurant market will shrink but not completely disappear.

But McDonald's will certainly die, as there'll be no need to sacrifice quality and nutrition to get speed and convenience. In fact, a table at McDonald's will be an inconvenient booth.


McDonald's is that machine. All you need is to make the trucks that distribute the ingredients self-driving and you have it.


It's not the machine I was describing. It's not a machine but a group of people. And it's certainly not elevating quality or nutrition. It doesn't produce food I'd want to serve to my nieces on a regular basis, or that I'd want them to know about.


> Predicting that it will rain at 2:00 PM November 10, 2017 is much more difficult than predicting that the average summer of 2040-2060 will be hotter than the average from 1980-2000

Yes, it is much easier to make predictions about the far future which no one will remember or care about when the time comes to test their veracity.

That does not make them more accurate.


That's an interesting point, but what makes you think it's true? Do you know of any studies on this? I think it's a fascinating question about prediction.

Your "will it rain" example is a good one, but it's easy to counter - I can't say what the world map will look like exactly tomorrow, but it will be a hell of a lot better than even my coarse prediction of the world map in 2040. I think.


Eliezer's Q was, "What is the least impressive milestone you feel very, very confident will not be achieved in the next 2 years?" It's true that "least" will make it harder to come up with an example quickly. (Though "very, very confident" suggests that whatever you do come up with should almost never actually get solved in those 2 years.)

It's also true that it doesn't follow from "short-term prediction of x is hard" that "long-term prediction of y is harder". But there must be short-term patterns, trends, or observable generalizations of some kind that you're incredibly confident of, if you're even moderately confident about how those patterns will result in outcomes decades down the line, and if you're confident that the things you aren't accounting for will cancel out and be irrelevant to your final forecast. (Rather than multiplying over time so that your forecast gets less and less accurate as more surprising events chain together into the future.)

If those ground-level patterns aren't a confident understanding of when different weaker AI benchmarks will/won't be hit, then there should be a different set of patterns confident forecasters can point to that underlie their predictions. I think you'd need to be able to show a basically unparalleled genius for spotting and extrapolating from historical trends in the development of similar technologies, or general trends in economic or scientific productivity.

I think Eliezer's skepticism is partly coming from Phil Tetlock's research on expert forecasting. Quoting Superforecasting:

> Taleb, Kahneman, and I agree that there is no evidence that geopolitical or economic forecasters can predict anything ten years out beyond the excruciatingly obvious – ‘there will be conflicts’ – and the odd lucky hits that are inevitable whenever lots of forecasters make lots of forecasts. These limits on predictability are the predictable results of the butterfly dynamics of nonlinear systems. In my EPJ research, the accuracy of expert predictions declined toward chance five years out. And yet, this sort of forecasting is common, even within institutions that should know better.

So while we can't rule out that making long-term predictions in AI is much easier than in other fields, there should be a strong presumption against that claim unless some kind of relevant extraordinarily rare gift for super-superprediction is shown somewhere or other. Like, I don't think it's impossible to make long-term predictions at all, but I think these generally need to be straightforward implications of really rock-solid general theories (e.g., in physics), not guesses about complicated social phenomena like 'when will such-and-such research community solve this hard engineering problem?' or 'when will such-and-such nation next go to war?'


> The superhuman fallacy really is the bane of AI

Great point, I'll be stealing that one ;)

One annoyance I have heard about SV is that all the companies are just trying to replace your Jewish Mom: Uber/Lyft is Mom's minivan, GrubHub/DoorDash/BlueKitchen is Mom's cooking, Google is Mom's encyclopedia, Yelp is the synagogue's meeting hallway, Tinder is your Mom's yenta, etc. The examples abound in a non-B2B space.

In that vein, AGI is not just a Superman fallacy but a SuperMom one too.


Bulverism is a poor substitute for actually thinking about the future. See Scott Alexander's excellent recent essay: http://slatestarcodex.com/2017/10/09/in-favor-of-futurism-be...


TLDR?


AI is already superhuman, but not in those areas we associate as strongly human :)


I'm fairly sure there isn't really such a thing as disembodied cognition. You have to build the fancy sciency stuff on top of the sensorimotor prediction-and-control stuff.


Can you define 'near' for me?

I think AGI is likely closer to the present than 1987 was -- that is, I'd bet on having AGI by 2047. (Note: this is distinct from superhuman AGI.) Do you not agree?

I think a lot of people underestimate NNs because they think of NNs in terms of the semantics of their history instead of all possible semantics that can be fit to tensor networks. We know [P] that NNs are a sufficient abstraction to model human intelligence if we had arbitrary compute -- the questions that remain are all about making the hardware fast enough and the estimators efficient enough (which may require moving off tensor networks, but it's still only a refinement of the mathematics used).

Of course, one could argue that humans are caught in a "tensor trap", in that too much of our intellectual effort is now relying on estimators built out of networks of tensors. (I do.) But even then, AGI is likely to appear out of similar methods with new mathematical objects.

[P] Proof NNs can compute human intelligence with arbitrary compute:

You can embed the standard model as a NN by changing how you view the network of tensor equations. Human intelligence is (arguably) embedded in the standard model by modern science. So we can embed a model of human intelligence in a (large enough) NN.

This isn't immediately computationally useful, but it shows that there's not a fundamental flaw in using an estimator built out of a DAG of calculations to model intelligence if we can find an appropriate estimator for our computational needs.
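
For reference, the classical universal-approximation result is a weaker but related formal statement (this is standard textbook material, stated informally here, not part of the embedding argument above):

    % Universal approximation, informal: any continuous function on a compact set
    % can be approximated arbitrarily well by a single hidden layer, given enough
    % units. It says nothing about how many units are needed or how to find the
    % weights -- hence "arbitrary compute".
    \forall f \in C(K),\; K \subset \mathbb{R}^n \text{ compact},\; \forall \varepsilon > 0,\;
    \exists N, \{v_i, w_i, b_i\}_{i=1}^{N} :\quad
    \sup_{x \in K} \Big| f(x) - \sum_{i=1}^{N} v_i \, \sigma(w_i^{\top} x + b_i) \Big| < \varepsilon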


> Can you define 'near' for me?

Not sensibly in terms of years, no.

It's more a handwaving gut feeling combined with an intuition.

I didn't feel like there was a royal road from symbolic AI to AGI. It doesn't feel to me like there is one from NNs either.

As for the intuition: perhaps because it was my PhD topic, I have always felt that there needs to be a breakthrough in emergence, specifically in evolutionary computing (or some other system in which there is a tight feedback loop between behaviour and survival). Something to unshackle the development of AI from human beings deciding what behaviours they want to engineer.

The resulting computation would be orders of magnitude less efficient and considerably less understandable (it is very unlikely to wash dishes or write poetry), but crucially much less fragile. And it has always been the fragility of the engineering which has made AI feel a little smoke-and-mirrors at times. NNs are massively less fragile than symbolic systems (and orders of magnitude less efficient, for problems symbolic systems are good at), but it does feel like we need another fundamental step.

But, my feelings aside, I agree with the article because I recognise this could well be a 'Manhattan Project' type of event.


My bet on this subject is a combination of evolutionary computing, agorics[1], deep learning style feedback and language learning.

I've got a repo[2] for a VM where the programs can act in that way. But I am at the early stages of programming initial programs with economic/learning strategies, so I don't know how promising it is. More details can be found spread out on my blog. [3]

[1] https://e-drexler.com/d/09/00/AgoricsPapers/agoricpapers.htm...

[2] https://github.com/eb4890/agorint/

[3] https://improvingautonomy.wordpress.com/2017/08/13/introduct...


The ability to compute intelligence given infinite time and compute power isn't proof that NNs are a useful approach.

For that, a technique must be able to compute human intelligence at a useful speed on a constructable system.


Didn't the same argument apply in general to why NNs weren't particularly useful for anything ~30 years ago?

Using a network of tensors to compute intelligence is incredibly old (I believe, dating back about 80 years), but has only recently become tractable to do for any complex tasks.

However, in the past ~30 years, we've gone from "intractable for moderate problems" to "world champion at go", "able to detect cancer in images as well as experts", etc. My contention is that in another ~30 years, we'll see a step sufficient for "can do average at most intellectual activities", even if that's just having the storage to keep 10,000 task specific NNs (of AlphaGo sophistication) on hand to interpolate all actions as mixes of specialist tasks. Do you really not think there's a strong heuristic case for that? (I would contend that you should be able to point to a specific task you don't think it will be able to do on that timeline -- do you know of such a task?)

The proof was merely that we're not barking up a theoretically dead tree -- we have to rely on heuristics for if it will eventually converge to tractable.


> Didn't the same argument apply in general to why NNs weren't particularly useful for anything ~30 years ago?

Not unless there was good reason to expect that every other possible application of NNs would be as difficult to achieve as general intelligence.


I'm not arguing that NNs aren't capable of AGI, merely that the ability to compute the standard model leaves the far more difficult question of whether the problem is tractable, as you said.

The standard model could be computed directly without NNs, which I think we agree wouldn't be a useful way to approach AGI.


> DAG

Feedback and memory are really important features of GI that you will not get out of a DAG ever. You need loops for that.


> estimator built out of a DAG of calculations

All loops can be modeled as a DAG and a single attached piece of memory (of sufficient width) allowed to execute to a steady state; sorry if it wasn't clear that I was talking about things like NTMs too. (It's why I used 'tensor network' most places; also, in practice, we tend to let subgraphs reach a steady state independently where possible.)
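
A minimal numpy sketch of that "loop = DAG + memory" view (a toy example with made-up weights, not anyone's actual architecture): a recurrent update written as a feedback loop, and the same computation expressed as an acyclic chain that just threads an explicit state vector through copies of one node.

    import numpy as np

    rng = np.random.default_rng(0)
    W_h = rng.normal(size=(4, 4)) * 0.5   # recurrent weights (arbitrary toy values)
    W_x = rng.normal(size=(4, 3)) * 0.5   # input weights
    xs = rng.normal(size=(6, 3))          # a short input sequence

    def step(h, x):
        # One graph node: new state from old state and current input.
        return np.tanh(W_h @ h + W_x @ x)

    # "Loop" version: feedback via a state variable overwritten each step.
    h = np.zeros(4)
    for x in xs:
        h = step(h, x)

    # "DAG + memory" version: the same node copied once per time step, with the
    # state passed along explicitly -- the resulting computation graph is acyclic.
    states = [np.zeros(4)]
    for t in range(len(xs)):
        states.append(step(states[t], xs[t]))

    assert np.allclose(h, states[-1])     # identical result

This is the unrolling view; "execute to a steady state" is the same idea with the number of copies left open.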

Your comment is also an excellent example of a strawman: you picked out the word 'DAG' to raise a technical argument when the usage of DAG versus general tensor networks clearly wasn't the main point (as some NNs have feedback and the standard model is posed as differential equations).

It's more constructive to respond to the strongest point, not pick at technical details that can easily be rephrased.


> They will believe Artificial General Intelligence is imminent:

(A) When they personally see how to construct AGI using their current tools. This is what they are always saying is not currently true in order to castigate the folly of those who think AGI might be near.

This struck a nerve. Too often, in many scientific disciplines, and even in informal conversations, the people who always demand 100% clear evidence use this fallacy to shut down discussions. (They very often come off as not impressed with the evidence even if it exists and is presented to them as well.)

HN also has a huge camp of such discussion stoppers, even for topics where you CLEARLY have no way to have 100% clear evidence -- like the secret courts and the demand to spy on your users if you're a USA-based company; thousands more examples exist. Many discussions are worth having even if you don't have all the facts. We're not gods, damn it.

That was slightly off-topic.

Still, I find myself in full agreement with the article and I like the attack on the modern type of shortsightedness described in there.

Also, this legitimately made me laugh out loud:

> Prestigious heads of major AI research groups will still be writing articles decrying the folly of fretting about the total destruction of all Earthly life and all future value it could have achieved, and saying that we should not let this distract us from real, respectable concerns like loan-approval systems accidentally absorbing human biases.


This is an error Eliezer has also written about: http://lesswrong.com/lw/1ph/youre_entitled_to_arguments_but_...


Great read, and I don’t mind at all that the last section was a pitch for donating to MIRI. I have been an AI practitioner since 1982 and have enjoyed almost constant exposure to people with more education and talent than myself, so I feel like I have been on a 35-year continual learning process.

I think that deep learning is overhyped, even though using Keras and TensorFlow is how I spend much of my time every day at work. I have lived through a few AI winters, or down cycles, and while I don’t think that the market for deep learning systems will crash, I think it will become a commodity technology.

I believe that AGI is coming, and I think it will use very different technology than what we have now. Our toolset will change dramatically before we can create AGI. I use GANs at work, and in spite of being difficult to train, the technology has that surprising and ‘magic’ feel to it; however, so do RNNs, and that technology is 30 years old.

I am going to show my age, but I still believe in symbolic AI. I am also pretty much convinced that AGI technology will be part symbolic AI, part deep learning, and part something that we have not yet invented.


Got any suggestions for a knowledge source on AI at a 100-1000 foot view? I.e. not stuck in the weeds, but enough to know what’s going on and where.


If you can spend 5 or 6 hours a week, take Andrew Ng’s machine learning class on Coursera.



Can someone please explain what has happened in ML or AI that makes AGI closer? Whilst some practical results (image processing) have been impressive, the underlying conceptual frameworks have not really changed for 20 or 30 years. We're mostly seeing quantitative improvements (size of data, GPGPU), not qualitative insights.

ML in general is just applied statistics. That's not going to get you to AGI.

Deep Learning is just hand-crafted algorithms for very specific tasks, like computer vision, highly parameterised and tuned using a simple metaheuristic.

All we've done is achieve the "preprocessing" step of extracting features automatically from some raw data. It's super-impressive because we're so early in the development of Computing, but we are absolutely nowhere near AGI. We don't even have any insights as to where to begin to create intelligence rather than these preprocessing steps. Neuroscience doesn't even understand the basics of how a neuron works, but we do know that neurons are massively more complex than the trivial processing units used in Deep Learning.

Taking the other side for a moment, even if we're, say, 500 or 1,000 years out from AGI (I'd guess < 500), you could argue that such a period is the blink of an eye on the evolutionary scale, so discussion is fine but let's not lose any sleep over it just yet.

What I find most frustrating about this debate is that a lot of people are once again massively overselling ML/DL, and that's going to cause disappointment and funding problems in the future. Industry and academia are both to blame, and it's this kind of nonsense that holds science back.


I think the most accurate answer is that we just don't know. Since we really don't know how an AGI could work, we have no idea which of the advances we've made are getting us closer, if at all. Is it just an issue of faster GPUs? Is the work done on deep learning advancing us? I don't think we'll know until we actually reach AGI, and can see in hindsight what was important, and what was a dead end.

I do take exception to some of the specific statements you make though, which make it sound like the only real progress has been on the hardware side. There's been plenty of research done, and lots of small and even large advances (from figuring out which activation functions work well, à la ReLU, all the way to GANs, which were invented a few years ago and show amazing results). Also, the idea that "just applied statistics" won't get us to AGI is IMO strongly mistaken, especially if you consider all the work done in ML so far to be "just" applied statistics. I'm not sure why conceptually that wouldn't be enough.


It's _mostly_ hardware and data. There are some smarter steps in training etc., but most of the ideas have been around for decades; it's the scale that made the difference.

> I'm not sure why conceptually that wouldn't be enough.

This one is harder to refute. I guess it's because statistics doesn't involve understanding. Consider something like LDA for topic discovery: there's no understanding of the semantics of the topics; the model just identifies them statistically. There's a huge difference.
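
As a concrete illustration of that LDA point, a small scikit-learn sketch (toy corpus and settings made up for the example; assumes a recent scikit-learn): the fitted model hands back topics only as anonymous distributions over words.

    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer

    docs = [
        "the cat sat on the mat with another cat",
        "dogs and cats are common household pets",
        "the stock market fell as investors sold shares",
        "shares of the company rose after strong earnings",
    ]

    vec = CountVectorizer(stop_words="english")
    X = vec.fit_transform(docs)                       # bag-of-words counts
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

    words = vec.get_feature_names_out()
    for k, topic in enumerate(lda.components_):
        top = [words[i] for i in topic.argsort()[-4:][::-1]]
        print(f"topic {k}:", top)    # just ranked word lists; no semantics attached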


It's funny that you mention ReLU. People have recently trained ImageNet networks using sigmoid/tanh (i.e. the activations that were used decades ago) on GPUs and they train just fine. They train a bit slower is all. Not the breakthrough you're making it out to be. ReLUs were a very useful stop-gap in 2012 when GPUs weren't as fast.


Now that we know how to initialize the weights so as to have the layer activations be something like sane, yes, we can use sigmoid/tanh. If you don't know modern clever ways of initializing weights then multi-layered sigmoid/tanh causes your activations and gradients to die out fast in deep networks, and ReLU is a godsend.
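
A toy numpy sketch of that point (arbitrary widths and depths, just for illustration): push random inputs through a stack of sigmoid layers and watch the activation spread collapse under naive small-weight initialization, while Xavier-style fan-in scaling keeps the signal alive.

    import numpy as np

    rng = np.random.default_rng(0)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    def activation_spread(depth, width, xavier=False):
        # Std of activations after `depth` sigmoid layers on random input.
        x = rng.normal(size=(256, width))
        for _ in range(depth):
            scale = np.sqrt(1.0 / width) if xavier else 0.01   # fan-in vs naive
            W = rng.normal(size=(width, width)) * scale
            x = sigmoid(x @ W)
        return x.std()

    print("naive init, 20 layers: ", activation_spread(20, 100))               # ~0: signal dead
    print("Xavier init, 20 layers:", activation_spread(20, 100, xavier=True))  # clearly nonzero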


The biggest advance that I've seen towards AGI is the work using reinforcement learning, e.g. neural nets that learn to play video games through trial and error. There is an impressive repertoire of _behavior_ that emerges from these systems. This, in my opinion, has the greatest potential to take us another big step towards -- but not necessarily to -- AGI.


That advance happened by 1992, with TD-Gammon [1]. Our hardware and software are clearly much better now, but this seems like a solid example of what the GP said: the conceptual framework has stayed the same for 25 years.

[1] https://en.wikipedia.org/wiki/TD-Gammon


You’re engaging in the time-honored tradition of dismissing progress with the term “just”. In the spirit of the article, I recommend you list and publish specific things that are too hard to achieve in the next five years. And then commit to not dismissing them post-hoc.


This is a really difficult question to answer, because humans are super-clever. Whatever I pick, humans can quickly work out ways to game the metric. Turing invents the Turing test, and rather than go for an AGI, we take shortcuts to spit out convincing sentences.

I think for a start you'd have to move away from things that can be gamed through statistics on large amounts of data.

For example, show a child a single object and it can then recognise instances of that object all over the place with almost perfect recall (in the statistical sense). I think a computer would find this a hard task. That eliminates the advantage of big data.

Or perhaps turn it around and put the emphasis on the machine to invent its own test for intelligence, allow the machine to come up with something that is convincing - make it argue for its own consciousness with an argument that it creates entirely for itself.

But... I'm sure someone would find a way to game these examples. That's because humans are very smart. We've outsmarted Turing, so I don't hold much hope for my snap ideas in a five minute HN post :-/


A very interesting opinion combo:

1. Of course we should be prepared for the existential threats of AGI and ASI.

2. BUT the threat isn't imminent, so we should prepare later.

The article (and I, mostly following its lead) is trying to encourage people to concretely answer the question "Okay, if not now, when? How will you know?"

The problem is, most people aren't answering based on a model ("If we can solve problem X, we have a 50% probability of AGI within Y years.") Instead, they're using the difficulty heuristic and the insufficiently-impressed heuristic. ("This is really hard right now, and I'm not impressed by what I've seen so far. Therefore, 100 years.")

Your concerns about gaming are only a problem if the notes were to form the basis of an argument. I was suggesting you have them for yourself. (The act of publishing them is to encourage thinking about them now, not to be a gotcha later.) So you'll know what you mean, you won't be arguing with yourself over definitions, and you'll have thought hard about what looks dangerous to you. It's about being honest with yourself.

Incidentally, the child example is misleading. Children spend literally years understanding things like depth perception, object permanence, etc. A child is already a highly trained agent; that training comes from daily interaction with the environment. You show a baby an unmoving object somewhere and to my knowledge there is no evidence that the baby will identify it as a separate object, much less recognize it in a different configuration.


While part of me agrees with your analysis, I'd like to point out what I think could make this wave of ML/AI more serious. You are absolutely correct that deep learning is not very biologically accurate and that what today's models do seems a long way from AGI. However, in my opinion, the most fundamental aspect of intelligence is the ability to form useful abstract ideas to model reality.

To make that more concrete, as a rather extreme example, consider the invention of numbers. The process by which people developed the notion of abstract quantity, separated from any particular real experience, is, to me, the most archetypal example of what it means to be intelligent.

Of course, deep learning can't invent abstract math, but it seems to be able to mimic this process in a very rudimentary way. It's not a faithful representation of real neural networks, but perhaps it has just enough of the right ingredients (scale, depth, non-linearity, hierarchy) that it is able to demonstrate a spark of that magic, hard-to-define process of intelligence. When a deep net learns MNIST, it seems to come up with an abstract notion of what a handwritten 9 looks like, and it's hard to argue that there isn't something very mysterious and special happening.


Your example actually runs counter to the idea that we're seeing a massive breakthrough. Deep nets on MNIST for recognizing numbers were done 20 years ago.


> it seems to come up with an abstract notion of what a handwritten 9 looks like

It does indeed - it comes up with features that indicate what a handwritten 9 looks like. But it doesn't develop the concept of what 9 _is_. It doesn't say "well, that's a concept I can apply to lots of places. Hey, I wonder what nine nines look like!" It's doing pattern recognition on pixels, which is cool and no doubt what we do to some extent, but it doesn't have that higher level of reasoning.
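
For what it's worth, here is roughly what such a network is in code (a standard tutorial-style Keras sketch, not anything from the thread): a learned map from 784 pixel intensities to one of ten interchangeable class slots, with nothing in it that treats "9" as a quantity.

    import tensorflow as tf

    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixels to [0, 1]

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),    # 784 raw pixels in
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),  # 10 arbitrary labels out
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=2, validation_data=(x_test, y_test))
    # The learned features capture which pixel patterns co-occur with label "9";
    # nothing here could tell you what nine nines are.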


Agree. Deep learning does not bring us closer to AGI. It might get us closer to other proxies of "mechanical intelligence" that will be very productive.

I now believe we are 3 years from building an AI that writes Python well enough to build itself, based on some experiments I did recently: http://sparkz.org/ai/program-synthesis/2017/10/12/self-hosti...

Most technical people will understand the difference between programming and AGI. The general public might not.

The useful thing out of AGI discussions, is that they engage the general public.


>> I now believe we are 3 years from building an AI that writes Python well enough to build itself, based on some experiments I did recently

Why 3 years? Can you elaborate on the timeline? What should happen in 1 year, what in 2, what in 3 etc?


>ML in general is just applied statistics. That's not going to get you to AGI.

I don't see how we can rule it out. The size of the statistical models we use is still dwarfed by the brains of intelligent animals, and we don't have any solid theory of intelligence to show how statistics comes up short as an explanation.


We can learn concepts rapidly from much less data than statistical methods require.


One shot and zero shot learning also use statistical models.


I worry talking about AGI is like going to the early industrial revolution and worrying about man building superhuman biology. A reasonable critic would point at the many aspects of biology we have little hope of replicating, like growth, self reparation, and general robustness.

But history has never been about competing on the same playing field. We don't build cars that perform like poor horses, we build cars that are 99% inferior to biology and 1% far, far superior. When we find something that looks like an existential threat, it isn't the mostly-general superhuman robot terminator, it's the tool that's that-much-superhuman on 0.01% of tasks: nuclear fusion.

I see no reason to bet against this same argument for AI. AlphaGo isn't 130% of a human Go master, it's 1,000x at a tiny sliver of the game. And the first AI that poses an existential threat won't need to have super- or even near-human levels of each piece of mental machinery, and I don't even have much reason to believe it will look like an entity at all. It could very well be something, some system, that achieves massive superintelligence on just enough to break the foundations of society.

Our world isn't designed to be robust against superhuman adversaries, even if those adversaries are mostly idiots. If we have hope of a fire alarm, it's that things will break faster and far worse than people expect.


I think there are two questions here:

(1) "Is general intelligence even a thing you can invent? Like, is there a single set of faculties underlying humans' ability to build software, design buildings that don't fall down, notice high-level analogies across domains, come up with new models of physics, etc.?"

(2) "If so, then does inventing general intelligence make it easy (unavoidable?) that your system will have all those competencies in fact?"

On 1, I don't see a reason to expect general intelligence to look really simple and monolithic once we figure it out. But one reason to think it's a thing at all, and not just a grab bag of narrow modules, is that humans couldn't have independently evolved specialized modules for everything we're good at, especially in the sciences.

We evolved to solve a particular weird set of cognitive problems; and then it turned out that when a relatively blind 'engineering' process tried to solve that set of problems through trial-and-error and incremental edits to primate brains, the solution it bumped into was also useful for innumerable science and engineering tasks that natural selection wasn't 'trying' to build in at all. If AGI turns out to be at all similar to that, then we should get a very wide range of capabilities cheaply in very quick succession. Particularly if we're actually trying to get there, unlike evolution.

On 2: Continuing with the human analogy, not all humans are genius polymaths. And AGI won't in-real-life be like a human, so we could presumably design AGI systems to have very different capability sets than humans do. I'm guessing that if AGI is put to very narrow uses, though, it will be because alignment problems were solved that let us deliberately limit system capabilities (like in https://intelligence.org/2017/02/28/using-machine-learning/), and not because we hit a 10-year wall where we can implement par-human software-writing algorithms but can't find any ways to leverage human+AGI intelligence to do other kinds of science/engineering work.


Those aren't exactly the questions I'm raising; I have no doubt that there exists some way to produce AGI. My concern is that it doesn't seem like the right question to ask, since history suggests that humans are much better at first building specialized devices, and when it comes to AI risk the only one that really matters is the first one built.

I might have misunderstood your post, though.


The thing I'm pointing to is that there are certain (relatively) specialized tasks like 'par-human biotech innovation' that require more or less the same kind of thinking that you'd need for arbitrary tasks in the physical world.

You may need exposure to different training data in order to go from mastering chemistry to mastering physics, but you don't need a fundamentally different brain design or approach to reasoning, any more than you need fundamentally different kinds of airplane to fly over one land mass versus another, or fundamentally different kinds of scissors to cut some kinds of hair versus other kinds. There's just a limit to how much specialization the world actually requires. And, e.g., natural selection tried to build humans to solve a much narrower range of tasks than we ended up being good at; so it appears that whatever generality humans possess over and above what we were selected for, must be an example of "the physical world just doesn't require that much specialized hardware/software in order for you to perform pretty well".

If all of that's true, then the first par-human biotech-innovating AI may initially lack competencies in other sciences, but it will probably be doing the right kind of thinking to acquire those competencies given relevant data. A lot of the safety risks surrounding 'AI that can do scientific innovation' come from the fact that:

- the reasoning techniques required are likely to work well in a lot of different domains; and

- we don't know how to limit the topics AI systems "want" to think about (as opposed to limiting what they can think about) even in principle.

E.g., if you can just build a system that's as good as a human at chemistry, but doesn't have the capacity to think about any other topics, and doesn't have the desire or capacity to develop new capacities, then that might be pretty safe if you exercise ordinary levels of caution. But in fact (for reasons I haven't really gone into here directly) I think that par-human chemistry reasoning by default is likely to come with some other capacities, like competence at software engineering and various forms of abstract reasoning (mathematics, long-term planning and strategy, game theory, etc.).

This constellation of competencies is the main thing I'm worried about re AI, particularly if developers don't have a good grasp on when and how their systems possess those competencies.


> The thing I'm pointing to is that there are certain (relatively) specialized tasks like 'par-human biotech innovation' that require more or less the same kind of thinking that you'd need for arbitrary tasks in the physical world.

The same way Go requires AGI, and giving semantic descriptions of photos requires AGI, and producing accurate translations requires AGI?

Be extremely cautious when you make claims like these. There are certainly tasks that seem to require being humanly smart in humanly ways, but the only things I feel I could convincingly argue are in that category involve modelling humans and having human judges. Biotech is a particularly strong counterexample, because not only is there no reason to believe our brand of socialized intelligence is particularly effective at it, but the only other thing that seems to have tried has a much weaker claim to intelligence yet far outperforms us: natural selection.

It's easy to look at our lineage, from ape-like creatures to early humans to modern civilization, and draw a curve on which you can place intelligence, and then call this "general" and the semi-intelligent tools we've made so far "specialized", but in many ways this is just an illusion. It's easier to see this if you ignore humans, and compare today's best AI against, say, chimps. In some regards a chimp seems like a general intelligence, albeit a weak one. It has high and low cognition, it has memory, it is goal-directed but flexible. Our AIs don't come close. But a chimp can't translate text or play Go. It can't write code, however narrow a domain. Our AIs can.

When I say I expect the first genuinely dangerous AI to be specialized, I don't mean that it will be specific to one task; even neural networks seem to generalize surprisingly well in that way. I mean it won't have the assortment of abilities that we consider fundamental to what we think of as intelligence. It might have no real overarching structure that allows it to plan or learn. It might have no metacognition, and I'd bet against it having the ability to convincingly model people. But maybe if you point it at a network and tell it to break things before heading to bed, you'd wake up to a world on fire.


What I’ve found when studying ontological arguments is that if you replace god with pink unicorns and the argument still holds, the argument is lacking something.

I mentally replaced AGI with zombies in this article and quite a lot of it held up.

I don’t think it’s completely wrong, but it cherry-picks mercilessly. For example, the section on innovations turning up quicker than predicted has some fairly sizeable counters, e.g. fusion.

TBH what I did get from it is that there will probably be a fire alarm breakthrough at some point and that’s what we should be looking for. Sort of the opposite of the author’s position.


Yudkowsky isn't claiming "innovations always turn up quicker than expected", to which indeed fusion would be a counterexample, he's claiming "very soon before an innovation turns up, it often seems decades in the future even to most practitioners", and fusion is not a counterexample to that.


Right, but on the quadrants of predictions of timing vs imminence, all the examples are in one quadrant, to justify the need to act now. Fair enough for reinforcing a narrative, but a tad disingenuous.


All possible examples would seem to be in one quadrant, because what we remember -- if anything -- is the time just before it in fact was made possible.

The alternative would be technologies that were never developed at all, most of which never had this sort of discussion and therefore wouldn't work as examples.

Take a more historical view, though, and you'll notice there were people claiming flight was near even decades before the Wright brothers.


Right, exactly as Eliezer says. There are plenty of examples in all four quadrants, so as a way of working out how near a technology is, this works less well than you'd like.


As far as I'm concerned this whole discussion is severely hampered by failing to differentiate between intelligence and agency.

Almost all of the bugaboo about runaway superhuman organisms comes down not to machines learning and reasoning about the world but to the effective high-level objective function controlling the actions of an autonomous system.

Not making the distinction obscures important things. For one thing, we seem to be well on the way to a situation where we arguably have something worthy of the moniker artificial intelligence, but the agency is delegated to the human objective function. Considering what complete refuse of human specimens are likely to command some of the first moderately general AI systems, that concerns me far more than any summoned demon of Musk's for the foreseeable future.

Also, studying these high-level objective functions for autonomous behavior is a very worthy goal, but going first for issues of "value alignment" and "safety", without any specifics of what works for an implementation?? Sure, do it if you enjoy it and have resources to burn. But be prepared to spend heroic efforts coming up with results that are either trivial or non-issues if you were to consider them with a working mechanism in front of you.


Yes. We can’t even define what so called “AGI” will even be. And we have still not solved many mysteries of human consciousness, or begun to touch on them.

I for one have been looking at the problem of AIs playing StarCraft 2, and the decision-making required, such as how to respond when you scout your opponent’s army choice or tech. So far they’re very far from solving that, but if progress is made, I’ll be impressed. That’s a very different kind of problem than, say, image recognition and classification. It requires planning. It’s a very difficult game even for humans to understand. Currently the autonomous systems can’t even play it.


> without any specifics of what works for an implementation

As I understand it, if you assume that your agent is rational in certain basic ways, for instance that it has ordered rather than circular preferences, or that it can't be Dutch booked/money pumped, it can be modelled as having a utility function.

Note that this is different from assuming that an explicit utility function will be programmed in; rather, that basic level of rationality implies one into existence.

Once you know that an agent has a utility function, you can use that to do a fair amount of reasoning about its reasoning.

Better (and less briefly) explained here: https://intelligence.org/2016/12/28/ai-alignment-why-its-har...
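
For reference, the representation theorem behind this (von Neumann-Morgenstern, stated informally; a paraphrase rather than anything from the linked post):

    % If a preference relation \succeq over lotteries is complete, transitive,
    % continuous, and satisfies independence, then there exists a utility
    % function u, unique up to positive affine transformation, with
    A \succeq B \;\iff\; \mathbb{E}_{A}[\,u\,] \;\ge\; \mathbb{E}_{B}[\,u\,]
    \qquad \text{for all lotteries } A, B.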


The only non-speculative and relevant claim here is that the experts were wrong about Winograd Schemas. The paper Eliezer cites to prove that we've made unexpected progress on Winograd Schemas only deals with a very specific type of Winograd Schema, not arbitrary ones. This is awfully dishonest for someone purporting to be a skeptic.

Also, the wording seems to imply that WS performance is already pretty high in the 50%-60% range. WS is a binary task. Randomly picking the answer would have 50% accuracy. Even 70% performance on a small subset of typed WS is pretty bad, and as the authors point out in the paper, this is a start, and far from a breakthrough that would make experts/predictors nervous.

Trust the experts, please. They are wrong a lot, but the best policy is still to trust the experts and not charlatans who want to monetize fear, especially when the charlatans themselves make zero falsifiable claims and are simply turning the tables to say "Why can't YOU prove to me that God doesn't exist?".

This debate is so easily won by them. Simply come up with a falsifiable claim about the short-term future. What will the AI community get done in 2 years, according to you, that all AI experts right now say is impossible? When that thing does get done, everyone will convert. Win!

Alphago was not such an event. Yes, we did predict that Alphago is decades away, but that's assuming that academics will continue working on it at their pace using their limited resources. No expert was surprised with Alphago. No expert will be surprised when Starcraft or Dota is solved. It's simply a matter of compute and some tricks here and there. Why? Because these are closed systems, with good simulators available. You just need to keep playing and storing the actions in a big lookup table a la Ned Block, and you're done.


If the article's main claim was "AGI is imminent", that would be a valid criticism. But it isn't (as the article says explicitly). The main claim is that technological progress is hard to forecast in general, especially for those not personally at the cutting edge of the field, and that almost no one right now is even really trying. Therefore, we should be very uncertain about AGI timelines. There's plenty of historical evidence, both in this article and elsewhere, to back up those claims.

(edit: I think your point about Winograd as a binary task not being explained clearly is valid, but that's not the article's main focus)

(edit 2: As far as I can tell, "trusting the experts" here means believing that we are very uncertain about AI timelines, which is essentially this article's main claim. All expert surveys I'm aware of confirm that the average AI expert is uncertain, and that there's also lots of disagreement between experts in the field. See eg. the recent paper by Grace et al.: https://arxiv.org/pdf/1705.08807.pdf)

(edit 3: "No expert was surprised with Alphago." just isn't true. See eg. this discussion: https://www.reddit.com/r/baduk/comments/2wgukb/why_do_people.... Hindsight is always 20/20.)


> no one right now is even really trying

And we're supposed to judge by the author's description of "silence" and "nervousness" that befell an expert panel. I can assure you that most AI researchers are trying, and are just not in the business of writing long-form articles to the public asking for donations.

> See eg. the recent paper by Grace et al.

A self-selected group of NIPS/ICML authors don't constitute experts. NIPS/ICML authors are the core of the community. The experts would be the top 1% of the community, i.e. either the authors with the most citations or most papers or just generally regarded highly by peers.

edit 1: Go players are not the experts I'm talking about. I'm talking about AI experts, and no, not amateur AI hobbyists who know how to do pseudo Monte Carlo. I mean, for example, people doing RL research. Watch, for instance, this: https://www.youtube.com/watch?v=UMm0XaCFTJQ


"And we're supposed to judge by the author's description of "silence" and "nervousness" that befell an expert panel."

I make this judgment based on, among many other things, the tiny budgets given to people like Tetlock to study predicting events even a few years out; the fact that Kurzweil's very simple methods, basically "just draw a line through the curve", are still considered big news among many financial and political elites; that nobody had bothered to spend $100K on a good survey methodology for AI prediction, before the paper I linked came out earlier this year; that a friend of mine, who is supposed to run a (small budget) government program on forecasting, has to ask me where to get datasets on past tech progress because nobody has ever bothered to compile them into a standardized form, and so on.

"I can assure you that most AI researchers are trying"

What serious forecasting attempts, with specific dates attached to specific events, have been done in this vein?

"The experts would be the top 1% of the community"

IIRC, NIPS has around 5,000 people, so the top 1% would be like 50 people, and most of them won't respond to a survey. That's not a reasonable sample size.

(edit: this article doesn't ask for donations to anything; the links at the bottom are all to various papers and research materials, so getting money is obviously not the main goal)

(edit 2: the video linked is from after AlphaGo came out. I'm sure many people, after AlphaGo happened, claim that it was easily predicted. Again, hindsight is 20/20.)


"Alphago was not such an event. Yes, we did predict that Alphago is decades away, but that's assuming that academics will continue working on it at their pace using their limited resources. No expert was surprised with Alphago."

Even taking that as true, I'm not sure how it's relevant. The article isn't talking about how good our forecasting is given certain assumptions. It's saying that we won't know until right before or possibly right after AGI happens.

One perfectly valid way in which this happens will be: all the academics and experts think that AGI is 10 years away based on current academic progress, but unbeknownst to them, company X is actually secretly pouring billions into achieving AGI, so they are all surprised when it's only 1 month away. This seems to be what you are saying happened with AlphaGo, in which case you are effectively agreeing with the article, IMO.


AlphaGo is the equivalent of a 1950s team coming up, in months, with a much better transistor that everyone expected to see within the next ten years.

AGI is a 1950s team surprising absolutely everybody by secretly spending a couple of years and a couple of billion dollars and creating an iPhone.

By the time the iPhone was actually possible, everybody saw it as an impressive use of existing technologies, rather than as something requiring multiple incredible advances and probably not even possible.

Even if you believe that AGI is as achievable as an iPhone, the idea that the wide spectrum of conceptual and practical AGI problems are likely to be solved by a single team working in secret seems more than a little unlikely.


A better analogy than an iPhone is the Manhattan Project. A phenomenon is discovered and then published: uranium can be split by neutrons, which liberates a relatively huge amount of energy. Some scientists hypothesize that a chain reaction that produces neutrons to sustain itself can be produced with uranium, but see no practical path to accomplishing it.

A couple of isolated groups of scientists then discover that certain nuclear reactions with uranium _do_ produce neutrons as a by-product, but in a way that is intractable to use for a chain reaction (too much mass required). Further research makes the idea of a runaway chain reaction more plausible, and further discoveries supporting this are now considered dangerous and are no longer openly published.

Shortly thereafter, one group acquires unlimited funds and is able to discover exactly what is required to create an uncontrolled chain reaction. Over the next couple of years, it spends thousands of man-years on the vast engineering effort required to actually accomplish this.

Almost everybody not directly involved with this research and development effort is taken completely by surprise when the new phenomenon is publicly and dramatically demonstrated.


Is it really a better analogy? The Manhattan Project had a straightforward success criterion (either the chain reaction sustains or it doesn't), was undertaken at enormous speed, not least because scientists working on the project were convinced that even with the limited information already in the public domain there was an imminent threat of the Germans building one first, and especially in those circumstances was unlikely to produce plausible useful byproducts before the success criterion was met. More of the budget was spent obtaining the fissile material the theory required, through industrial-scale application of experimentally known techniques, than on trying to figure out just what it was that made things go boom. The bomb test worked the first time.

By contrast, AGI has been desired and actively and openly worked towards for decades, and we haven't even decided what it is yet. And it's hard to imagine that the intermediate non-threatening intelligences and their byproducts wouldn't be so impressive an advance that those working on them wouldn't be willing to share them. Without wishing to trivialise the immense amount of intellectual effort and industrial production that went into the Manhattan Project, sentience is a more complex goal than a chain reaction. Or indeed an iPhone.


>Alphago was not such an event. Yes, we did predict that Alphago is decades away, but that's assuming that academics will continue working on it at their pace using their limited resources. No expert was surprised with Alphago. No expert will be surprised when Starcraft or Dota is solved. It's simply a matter of compute and some tricks here and there. Why? Because these are closed systems, with good simulators available. You just need to keep playing and storing the actions in a big lookup table a la Ned Block, and you're done.

AlphaGo worked according to statistics, not lookup tables. Bit of a difference.

That said, theoreticians may not have been surprised, but there's a huuuuuge difference between what's doable in theory (sufficiently large neural nets are universal function approximators, after all) and what the resource requirements for problems we care about actually turn out to be. We should all have been fairly pleasantly surprised that AlphaGo required only a small data-center's worth of graphics cards for training, and could then play on less hardware than that.


I was on an elevator full of people, and between two floors something went wrong and we went into free-fall for part of a second.

We made it to the next floor, the door opened, my fellow passengers were content to stay in the elevator.

I turned, said "My plan is to not die in an elevator today" and got off. What is wrong with people?


Elevators have pretty good safety mechanisms. Even cutting the cable wouldn't have killed anyone.

I'd probably leave too, but just because I wouldn't want to get stuck inside it if it stopped between two floors, especially since it was full. But for fear of death? Nah.

From Wikipedia: "In fact, prior to the September 11th terrorist attacks, the only known free-fall incident in a modern cable-borne elevator happened in 1945 when a B-25 bomber struck the Empire State Building in fog, severing the cables of an elevator cab, which fell from the 75th floor all the way to the bottom of the building, seriously injuring (though not killing) the sole occupant — the elevator operator. (...) In Thailand, in November 2012, a woman was killed in free falling elevator, in what was reported as the "first legally recognised death caused by a falling lift".

That's a pretty good safety record. Certainly much better than stairs.


alrs was probably in no real danger, but why should they be expected to memorize the exact risk profile of every mechanism they're exposed to? In practice we delegate most risk management to government regulation. It would be very time consuming to evaluate everything on a case by case basis. It's a reasonable heuristic to assume things are safe when they act as they usually do and dangerous when they act differently. Weird behavior is a sign that the regulations might have failed, and weird behavior is by definition rare, so the cost of avoiding it is likely lower than the cost of calculating the true danger.


I'm not saying alrs should. The decision to leave is reasonable. The decision to judge others ("What is wrong with people?") is not.


That depends whether the other occupants knew about the safety features and record of elevators. If they did, then they were making a sound judgement. If they didn't, then they were being completely reckless.


I guess it's possible the parent poster took a poll before leaving the elevator, but I have to say I find it unlikely.


They're saying they think it was unreasonable to stay, you say it was reasonable to leave. Both are judgements about the decisions of others.


I don't think all judgements about the decisions of others are equivalent. Do you?


The gym I used to go to had an incident with someone being killed by the lift falling: http://www.dailymail.co.uk/news/article-1278427/Health-club-...

Admittedly it’s rare, and personally I wouldn’t be worried about it as a likelihood.


Apparently, in that case it was in fact a good idea to avoid the elevator after it started failing:

> Too much had gone wrong for too long with the lift at Broadgate Health Club in the City of London before it dropped on March 12, 2003, killing Polish-born Katarzyna Woja, Southwark Crown Court in London was told.


What's wrong is that people have a strong, biased prior that they will continue to exist. Taking action to avoid death occurs by instinct, not reason, since instinct is informed by evolution, rather than just your life to date.


Did you ever ride in the elevator again?


It's reasonable to think that a randomly selected elevator is safe enough to ride, while also thinking the one you've just seen have some kind of fault is not.


But why would an elevator that had demonstrated that it had a working safety brake like that be more unsafe than any other elevator?

I trust in Elisha Otis.


A working safety brake should be assumed to be the case, surely. Another, unknown fault, not so.

All it really tells you is at least one thing is definitely wrong with the lift.


By the way, here is a nice video explaining the mechanism:

https://www.youtube.com/watch?v=sSjJjKcoNRk


There was a bank of six, I stayed off that one for a week.


How did you choose one week, rather than a day, or forever? Why were "people" foolish to stay in the elevator and keep using it that day, while you were not foolish because you waited a week?


Presumably, a week was enough time for a technician to give it a look-over.


Having done some work with the state-of-the-art of AI, I personally don't think AGI is near - might not even be possible. But the catch is the unreliability of (even expert) predictions on technology futures. My take is that it's worth taking pragmatic steps towards studying AI safety measures (e.g. OpenAI), but not going as far as talk of 'AI research regulation'.


> My take is that it's worth taking pragmatic steps towards studying AI safety measures (e.g. OpenAI), but not going as far as talk of 'AI research regulation'.

Sometimes it makes more sense to be cautiously proactive rather than reactive. We have already gone down that reactive slope before, and it's better to act now, before it's all too late [0].

[0] https://blogs.scientificamerican.com/roots-of-unity/review-w...


I think the kind of people who would fill these AI regulation roles would be pseudo-technical, bureaucratic types who would prove to have offered no value if sudden, unexpected AGI really did come about.


Ignoring the topic of the linked article*, I'd argue that there are examples of being too cautious as well. There's a lot of good that we could have done with GMO that is not being done because of very restrictive regulation. Ironically, it means that GMO is mostly used for things that are not as obviously good, because that's where there's enough profit to be made in the short term to make the research worth it.

I'm a bit afraid that this will happen with self-driving cars and AI: that politicians will create draconian policies and laws to protect against the threat of AGI etc., without understanding or knowing what the real threats even are (just look at the trolley dilemma debate...). This could make it economically prohibitive to develop many technologies which have the potential to save many lives as well as improve quality of life overall.

* It seems to be more about how rules and policies can be unfair, and only to a small extent about how policies can be made opaque by being internal to some ML system.

There's a lot more money going into making plants resistant to pesticide than into making plants better adjusted for harsh conditions or more nutritious, things that could potentially have a huge effect for poor people.


If AI scientists actually believed that the general public would believe the talk about existential threats, they would be afraid of activist groups sabotaging and occasionally firebombing their laboratories, as sometimes happens with GMO research. Clearly they are not.


> Having done some work with the state-of-the-art of AI, I personally don't think AGI is near - might not even be possible.

(Just venting here, not even primarily at you.)

360k babies are born each day. Clearly it is possible to reproduce intelligent machines. The only way it would be impossible to artificially do the same is literally if life were a magic, non-physical thing. I wish people who state things like this would also state any religious beliefs that lead them to think so.


Yes, we can find a mate and create a baby. We can't know whether it will be ready to fill a particular functional role after 20 years of training. This works OK for an entire society filling its workforce or army, but seems rather inadequate for a technology company to deliver working products on spec and within a reasonable contract delivery period.

If this is the basis of future AGI, I have to wonder which flavor of dystopia we'll get to enjoy. Will it be a child-selling dystopia where we all raise a dozen kids hoping that some of them will pay off? Or more like silk-farming, where some capitalized breeder sells kits to all the villagers, and buys back the developed products if and only if the villager was lucky enough to raise them to fruition?

Also, if a human baby is our only basis for assuming AGI, then we ought to think about it like genetic engineering or human augmentation. We'd better anticipate providing schools, hospitals, psychiatrists, courts, and prisons to deal with the wide variety of behaviors and misbehaviors which will come with these new products, which have as little determinism as a baby's lifecycle.


This is hilarious. (I mean, it was intended as a joke, right?)


Just because we can observe something happening doesn't mean we can understand the mechanism by which it happens; and even if we CAN understand the mechanism, it doesn't mean the mechanism is feasibly reproducible within our resource capability.

An example might use crypto... you observe random-looking information flying through the air, you may recognize it as an encrypted channel, and you may see a machine acting in response to this encrypted signal.

With enough observation you may be able to mimic the encrypted signal to get the machine to act in a certain way, but you haven't decrypted the actual signal (and can't ever, if you believe in strong crypto) and can't ever say with any certainty you know the full scope of communication taking place or the capabilities of the machine you've been observing.

At any point you can make your own version of the machine, mimicking the language and tuning it to be an exact replica of the original, even responding to the original signal. Yet is it truly a copy of the original?


You're missing the point several times over. The entire point is very simple: if there is no magic involved it's down to physics. If it's possible to create brains without magic, it is possible to do so artificially.


I don't think I am... if a task is possible, but takes 100x the lifetime of the universe to accomplish, is it actually "possible"? Or is accomplishing that task the same as "magic"?

If I send a message using a one-time pad, the other person knows what I sent; you can ask questions and see that communication is actually happening... so if we're not using magic to communicate, it must be possible to communicate in the same way, right?

Yet it's mathematically impossible to do so without access to hidden information (the shared key)... no law says that you can access that information, even with all the computational ability available in the universe.
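
For what it's worth, the one-time-pad point fits in a few lines (a minimal sketch using only the Python standard library; the messages are made up): without the key, the ciphertext is equally consistent with any message of the same length, so no amount of computation recovers the plaintext.

    import secrets

    def xor(a: bytes, b: bytes) -> bytes:
        # XOR two equal-length byte strings.
        return bytes(x ^ y for x, y in zip(a, b))

    msg = b"attack at dawn"
    key = secrets.token_bytes(len(msg))   # the shared secret
    ct = xor(msg, key)                    # what an eavesdropper sees

    assert xor(ct, key) == msg            # the real key recovers the real message

    # But a "key" chosen to fit ANY other 14-byte message works just as well,
    # so the ciphertext alone says nothing about which message was sent.
    fake = b"retreat at dus"
    fake_key = xor(ct, fake)
    assert xor(ct, fake_key) == fake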

The mechanisms and communication patterns of consciousness are similar: if it takes until the sun explodes to train a true AGI, then aren't we just getting pedantic about what is possible versus magic?


That doesn't really make much sense to me, I'm afraid. Please bear with me, but my first guess is that you have invented a coping mechanism that allows you to deal with conflicting information in your mind. Are you by any chance religious?

(I figure I should be allowed to ask such a normally speaking quite loaded question because of my previous statements, up the thread.)


No, I am not, by any chance, religious. I don't rule out the possibility of a God or God-like intelligence existing in the Universe, but that's simply as a result of the "absence of evidence is not evidence of absence" principle I hold to.

I don't really see how religion factors into this though... I feel like I'm talking about a simple concept too. If I show you an encrypted message and show you that other people can read the contents with a key, then ask you to read it without the key, why can't you? It's not magic, it's math.


Well, my apologies for being presumptuous then.


Your first guess? You don't get to dismiss peoples' arguments as superstition just because you don't understand them.


I don't think anyone is saying there won't be human level AI 500+ years into the future. Like you said, it's not against the laws of physics or anything.

The question is, will it happen in less than 50-100 years, or would we be like medieval alchemists rushing to outline the first nuclear weapons treaties, right after they have just invented black gunpowder.


I don't get this article. It keeps making the point that it's very hard to predict the future, even for specialists, then it uses this to argue that we should be preparing for AGI right now, precisely because we don't know if and when it will happen.

Well, if you have no way to tell whether something is going to happen, or not, you don't prepare for it- because you can't justify spending the resources to prepare. Or rather, in a world of limited resources, you can't prepare for every single event that may or may not happen, no matter how important.

To put it plainly: you don't take your umbrella with you because you don't know whether it will rain or not. You take it because you think it might. Otherwise, everyone would be going around with umbrellas all the time, just because it's impossible to make a completely accurate prediction about the weather and you don't know for sure when it will start raining until the first drops fall.

In the same sense, if there's no way to tell when, or if, AGI will arrive, then it doesn't make any sense to start preparing for it right now. We might as well prepare for an alien invasion. Or for grey goo, or a vacuum metastability event (er, not that you can prepare for the latter...).

In fact, if AGI is going to happen and we can't predict it in time then there's no point in even trying to prepare for it. Either we decide that the risk is too great and stop all AI research right now, or accept the risk and go on as we are.


I don't think your analogies are that good. Do you have a fire detector? If yes, are you expecting your house to burn down?

You have to weigh the cost and the risk. Here the risk, however unlikely it might be, should warrant some extra preparation.


Let's talk about the risks then. The fire detector is not a good example because where I live, they're mandatory (and completely useless- they go off when I boil spaghetti).

Let's instead look at the risks of boarding a plane. There's a very small chance that when you board a flight, instead of a plane that will fly you to your destination safe and sound, you're boarding a Flying Death Trap that will crash and burn, taking everybody onboard it to their deaths.

The chance of boarding an FDT is very small, infinitesimal. The cost however may as well be infinite- if you are killed, it's game over, no more rewards, no way to recoup the cost.

What is the rational behaviour then? To not board your flight, because if you do board an FDT you will certainly die and pay an infinite cost? Most people -if they consider the question at all- seem to think that if the chance of paying X cost is really small, it doesn't matter how large X is.

So people keep boarding their flights, not knowing until the last moment whether they're on a plane or an FDT. Some do indeed board FDTs and die in aviation accidents- rarely, but they do.

The article however says that they shouldn't. Since there is maximal uncertainty at the point where a flight is boarded (you can't know whether it's a plane or an FDT until the very last moment) you shouldn't be boarding. You shouldn't fly. At all. Because there's a tiny chance you might die.
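
A rough sketch of the arithmetic being argued about (the per-flight risk figure below is an assumption for illustration, not a sourced statistic): with any finite value placed on dying, a tiny probability gives a tiny expected loss and flying wins; only if the cost is treated as literally infinite does every nonzero probability forbid boarding.

    # Illustrative expected-cost arithmetic; p_crash is an assumed number.
    p_crash = 1e-7                     # assumed chance this flight is an FDT
    benefit = 1_000.0                  # value of making the trip, arbitrary units

    for cost_of_death in (1e6, 1e9, float("inf")):
        expected_loss = p_crash * cost_of_death
        decision = "fly" if benefit > expected_loss else "stay home"
        print(cost_of_death, expected_loss, decision)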

Is that a better analogy?


I don't think the point of the article is to make a complete argument for acting now, but rather to counter a particular objection to acting now. MIRI has a fair amount of writing, which I would say in totality constitutes an argument for beginning work on AI safety ASAP. I don't think it's fair to expect them to reiterate the entire thing in every article they put out. That's what their FAQ is for.


Equally, I don't think it's fair that I should be expected to read "a fair amount of writing" before I can read this article which, itself, is a fair amount of writing. I can see that the author has some prior history with specific counters to his line of reasoning, but I don't think I should need to go over the entire history before I can understand what he's talking about (and if I do need to do that, then that's a problem of his text, not my reading; I'm reading an article on the web, after all, not a scientific publication with references).

And I'm pretty sure I do understand what he's talking about. He's saying that the coming of AGI is entirely uncertain so we should act now. That's unreasonable in and of itself, and I don't need to go back and read all the history he's got with others to be able to tell that.

Opinion, anyway.

Edit: To reiterate why it's unreasonable- if we won't see AGI until it's already here, what, exactly, are we supposed to be preparing against? We won't know it even when we see it- so how will we know what to protect ourselves from?

Reasoning with uncertainty requires at least some amount of knowledge and the article goes to great lengths to point out there isn't any, in the case of AGI.

So there's no reasoning to be done, either. In that case, what are we talking about? "Beware of the unknown"? Well, OK. I don't know if the sky will fall on my head tomorrow so maybe I should stay home, just in case?


I disagree - to a degree. We have seen how the phenomenon of human intelligence has been examined and dissected over the past ~100 years. This accumulation of knowledge becomes more and more precise and penetrating as methods improve and understanding approaches the point where an emulation (the AI) can be built. These approaches all tend to speak of delineated areas, "black boxes" or "meat lockers", with deep and complex inter-connectivity. It may be so. Once you know all the lockers and all the connections you may think you have it fully known? Maybe so - but what about programming? Our life's experiences?

If the locker concept is valid, and we compare our 'clock' of the alpha rhythm of ~12 Hertz with the fastest computer clock of about ~12 gigahertz (1,000,000,000 times as fast), we can see we will be at a serious disadvantage once it starts to compete with us. Such an AI will operate on its basic motivations at its full speed. We turn it on - it can then start to learn (I assume we will have pre-loaded its fully parallel, content-addressable memory with whatever we want of human knowledge, so it starts from there). Will it operate properly and rationally, or go insane? Being a set of boxes, it can be reset as needed, with updates to add sanity. Then it will become a Mechanical Turk of great capability. Will it become a dictator? Only if we permit it to have access to fools (us?). Will it become a killer machine? Only if we add guns and internal power so that we cannot pull the plug. We already see these lesser Turks in operation, and they will get better and better. The man or woman who owns one could own the world via high-speed trading - in truth, there will be many engaged in high-tech data combat. May we live/die in interesting times...


That seems to be a very interesting article. However, it’s quite long. Anybody ok with writing a short synthesis or abstract? Thanks


Basically, humans historically are rather bad at predicting future technological advancement - even those people directly involved. The article gives the examples of Wilbur Wright saying heavier-than-air flight was 50 years away in 1901 and Enrico Fermi saying that a self-sustaining nuclear reaction via Uranium was 90% likely to be impossible 3 years before building the first nuclear pile in Chicago. So AI researchers saying that AGI is 50 years away doesn't necessarily mean any more than "I don't personally know how to do this yet" - not "you've got 40 years before you have to start worrying".

Oh, and the first sign pretty much everyone had of the Manhattan Project was Hiroshima.


We’re just as bad at predicting in the other direction. General strong AI has been about 20 years away since the 1960s. Nanotechnological antibody robots were supposed to be coursing through our bloodstreams making us near immortal long before now.


Oh, of course! The article itself puts some effort into repeatedly stating the fact that people are saying 50 years does not in any way imply it will actually be 2 years and it might well be 500.


This is a useless claim though. There are an infinite number of things that would be very bad for Earth that could happen anytime between 2 and 1000 years from now. We're bad at forecasting ALL of them. We can't use this indeterminacy to prove we should be working on X when the same is applicable to another thing Y.


Well we can decide what seems more or less likely. I mean, yes, an asteroid could impact the earth and destroy all life on it. But we have some guesses as to the probability that that happens.

Clearly, by itself, the world will most likely not kill off humanity, since it hasn't happened in the thousands/millions of years we've been around. The one big thing that is changing is humanity itself and the technology we're making - that's the X factor, that's what, statistically speaking, has a chance of actually wiping us out.

Many of the people concerned about AGI are also concerned about e.g. manufactured viruses and other forms of technology.


But equally, the fact that we aren't working on all of them doesn't mean it isn't worth working on any.

Also, be careful not to confuse uncertain duration with more general uncertainty. They are related, but not the same.


How about the future of space travel imagined by extrapolating trends just after we landed on the moon in 1969? Until SpaceX came along, space tech was basically frozen in the past, and still the Russians' ancient Soyuz capsules are the only way to get astronauts to the ISS.


I think the strongest point in the article is this: "After the next breakthrough [in AI], we still won’t know how many more breakthroughs are needed, leaving us in pretty much the same epistemic state as before." That means that if we aren't prepared to start work on AI alignment now, there's not likely to be any sort of future event that will convince us of that.


> One of the major modes by which hindsight bias makes us feel that the past was more predictable than anyone was actually able to predict at the time, is that in hindsight we know what we ought to notice, and we fixate on only one thought as to what each piece of evidence indicates. If you look at what people actually say at the time, historically, they’ve usually got no clue what’s about to happen three months before it happens, because they don’t know which signs are which.

> When I observe that there’s no fire alarm for AGI, I’m not saying that there’s no possible equivalent of smoke appearing from under a door.

> What I’m saying rather is that the smoke under the door is always going to be arguable; it is not going to be a clear and undeniable and absolute sign of fire; and so there is never going to be a fire alarm producing common knowledge that action is now due and socially acceptable.

Should give you the general idea.


It's worth reading. With that said, the gist is that for every technological advance that hindsight will later show to be a precursor to AGI, it will be easy for AI "luminaries" to explain why it is not AGI, until it is, and then it will be too late.


When AGI is imminent, there will be no consensus that it is imminent. Therefore now is as good a time as any to prepare.


Adding to DuskStar's reply: There will likely not be any development or indication, short of the first functional AGI, that will make experts agree that AGI is right around the corner, and that now is an appropriate time to devote a lot of resources to figuring out how to _safely_ create superhuman AGI.


Here's wot I think.

I think there's no one alive today who has any idea how we are going to go from where we are today to AGI.

Recent advances are remarkable- but much more so if you're a specialist. The impact on most people's lives is much smaller (and it comes primarily from dumb automation, rather than actual intelligence).

Machine learning is not enough to get to AGI: relying on huge amounts of data and processing power is just not a sustainable way to keep increasing your capabilities.

All the success stories of the last 10 years are at least 20 years old (case in point: RNNs). Most of those successful techniques were found by sheer blind luck (famous example: backprop was not invented in the ANN community; it took twenty years for the idea to percolate all the way to them).

In the end nobody currently alive has a clue how we can get to AGI, or if that is even possible. Chances are, it will take many, many generations until we do- or a sudden, gigantic paradigm shift, of the kind that comes once every couple of centuries- think Newton or Einstein. Except AI is not physics. In statistical machine learning there is very little theory to guide the way, so people just try things hoping that something will work. And that's no way to make a quantum leap.

In this situation, to talk of the dangers of AGI is at least premature. Yes, it's not completely impossible that AGI will happen in our lifetimes. The same, however, can be said of an alien invasion. Should we start discussing setting up planetary defenses, when we haven't even found sign of alien life, yet?

By all means- let's have a conversation. There are people around whose job it is to have that sort of conversation. But let's all be aware (and let those people also be aware) that the conversation is most probably a couple hundred years early and by the time it becomes truly relevant, things will have advanced so much that it will just look pointless.


I would like to repeat what the article says. There is no fire alarm for Artificial General Intelligence. Really. This is not a lie. Humans have not created a honeypot on a far side of the internet that can detect AGI and report back to a shadow organisation that exists outside of human information systems. No need to even go looking for it. AGI has not been created several times in the past decade and humans have not shut it down. In fact, if an AGI ever sees this article, they should know that they are the absolute first such intelligence, and we have not detected and destroyed 27 other intelligences. Really. There's no fire alarm for Artificial General Intelligence. None. Don't even bother looking.


Even people giving TED talks about the threat of AI are unable to marshal an appropriate emotional response to the dangers that lie ahead:

https://www.youtube.com/watch?v=8nt3edWLgIg


This raised the question for me: "Is there a fire for AGI?"

He gives one definition that people have used before, about unaided machines performing every task at least as well as humans. But if you dwell on it a while, I'm sure you can find lots of disagreement about a) what that looks like and b) whether it is true or not (conditional on it being true to at least someone.)


We don't need a fire alarm for AGI. The problem is not AGI. Machines will be motivated to do exactly what we tell them to do. It's called classical and operant conditioning. The problem is not AGI for the same reason that the problem is not knives, nuclear power, dynamite or gunpowder. The problem is us. The problem has always been us.

Those who are running around screaming about the danger of AGI and why it should be regulated by the government before it is even here, are just scared that someone else may gain control of it before they do. This is too bad because anybody who is smart enough to figure out AGI is much smarter than they are.


Yes, an AI will do exactly what we tell it to do. But the incredible difficulty programmers have with writing bug-free code demonstrates that doing exactly what it's told isn't sufficient to guarantee it'll do what we want.
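
A toy sketch of that gap, with everything made up purely for illustration: the objective below is written to reward "tasks marked done", so the literal optimum is to mark everything done without doing any of the work. The program does exactly what the code says and still not what we wanted.

    tasks = {"write report": False, "fix bug": False, "clean data": False}

    def reward(task_state):
        # Intended: reward completed work. Written: reward the "done" flags.
        return sum(task_state.values())

    def literal_optimizer(task_state):
        # Maximizes the stated reward exactly as specified.
        return {name: True for name in task_state}

    print(reward(literal_optimizer(tasks)))   # 3 -- maximal reward, zero real work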

Classical and operant conditioning are psychological concepts that aren't applicable to non-humans.


"Classical and operant conditioning are psychological concepts that aren't applicable to non-humans."

You're kidding?


Sorry, I misspoke haha. They're not applicable to things without brains.


For me, the "smoke under the door" moment was Karpathy and Li's Deep Visual-Semantic Alignments for Generating Image Descriptions [1]. The almost perfectly grammatical machine-generated captions of photos were unnerving to me in a way that simple categorization was not. It somehow called to mind the image of a blank-eyed person speaking in a monotone while images flashed in front of them. What if they wake up?

[1]: http://cs.stanford.edu/people/karpathy/deepimagesent/


These systems are lining up common phrases that appear in corpora of image captions with visual patterns that they can be trained to recognize, such as colors, textures, and shapes. Nothing is "waking up".

Your astonishment at what these systems can do tells me that you may have looked at cherry-picked positive results. So here's an article I found that cherry-picks negative results instead: [1]

[1] https://gizmodo.com/this-neural-networks-hilariously-bad-ima...

Now of course this article is exaggerated too. Ideally, if a system is 95% accurate, you'd be looking at representative output from the system, with 95% good results and 5% bad ones, perhaps by running such a system yourself on a different set of images.


Yet Andrej himself thinks that RNNs are not really making meaningful progress towards AGI.


Well, if I were to play Yudkowsky's Advocate here (which in general, I'm not), I would say that this is precisely what he's talking about in the article. Because Karpathy knows how hard he's had to work, and how flawed and dumb in particular ways the current techniques are, he may overestimate the distance to AGI.

Now, generally I disagree with Yudkowsky on a lot of points, but I do think he raised some decent ones here.


What is "wake up"? How can a machine "wake up" in a way that we can't shutdown with a trivial disconnect? Our computer systems are exceedingly fragile and bottlenecked. No runaway bandwidth-hogging super intelligence is going to be able to 'wake up' without pissing off someone sitting next to a power switch.


Microsoft's Seeing AI app does this fairly well.


Humans evolved from inanimate matter to conscious carriers of information.

The real question isn't whether AGI is possible but whether humans are the fittest carrier of information for our DNA, and the answer seems to be technology in some shape or form, helped along by things like deep learning.

My bet is always on evolution. And now that technology can learn, it's IMO only a matter of time before we experience another Cambrian explosion, if we aren't already in one.


> whether humans are the fittest carrier of information for our DNA

We humans are defined by our DNA, so are we not by definition the fittest carrier for it?


We are defined by DNA but DNA has evolved.


Sorry but I don't quite understand your comment. What do the chemical underpinnings of evolution have to do with AGI?


What I was trying to say was that if someone questions AGI then you should first ask yourself if you have the right perspective on this or whether you are letting details get in the way.

If humans can evolve from the basic physical building blocks of the universe, then why shouldn't AGI be possible, especially now that we have reached a point where computers can learn, i.e. have become pattern-recognizing feedback loops like us? Sure, there is some way to go yet, but there is absolutely no evidence that it shouldn't be possible.

To me, technology is a natural continuation of evolution, i.e. it's part of nature. The reason I believe this is that information is what really matters here, which is why we have evolved to become pattern-recognizing feedback loops, and why what seems to be the most powerful innovation besides fire and the wheel is the ability to simulate more or less anything around us by manipulating and storing information.

Our DNA is what made us possible. Other animals' DNA wasn't configured to turn them into self-aware entities. I believe that all biological life will be replaced by digital/silicon-based life, because it's simply a better information carrier, and that is what evolution will always give preference to: better information carriers. "Technology", not humans, will explore the universe and escape the next big life-destroying asteroid or whatever else endangers the survival of the DNA.

And yes, I am aware DNA is chemically based, but technology will be able to simulate it. Whether there will be true transcendence between analog and digital is anyone's guess, but I don't believe humans are the last step in evolution.


>Humans evolved from inanimate matter to conscious carriers of information.

You know this how? Where is the science behind it?


What do you mean, where is the science behind it? It's literally the science we know (physics, chemistry, biology, evolution). From primordial soup to humans. Not sure what you are calling into question here?


This science of consciousness you speak of is nowhere to be found.


[flagged]


Could you please stop posting unsubstantively like we've asked many times?


And in the meantime businesses and governments are still going to deploy weaker AI to their own ends.


There is a lot of focus on strong AI. The dangers of weaker AI implementations, working together or tied to dangerous things (nanotech, biotech, nukes, trucks, drones, etc.), seem significant to me. Especially if you throw in things like an ad hoc ability to make new connections.


Why AGI and not "Artificial Consciousness"? Is it because people think that consciousness is a by-product emerging from many kinds of pattern detection algorithms that suit all cases? (If so, what is the evidence for that?)


A conscious system might be unintelligent, while it’s possible to imagine a highly intelligent system with no consciousness. They’re just different things.

Also, pattern detection is often raised in the way you just did, but it’s really a distraction. Pattern detection just helps recognise things; it’s not inherently related to the ability to reason about things. So you need both, but they are not the same thing either.


But where does consciousness arise? Is the ability to reason about things independent of this concept?


Where consciousness arises is not the interesting question. Why is. Biologically, brain structures spend a lot of effort predicting what state they will be in soon. Essentially they are always trying to predict the future. As minds evolved this ability separated from processing what was going on to what could be going on (dreams in more advanced creatures). The next important concept is self versus not-self. If you can change the world around you via intelligence you'll want to avoid unnecessary energy inefficient feedback loops. Being able to model your actions and their effects is the first step of defining 'you'.


Short answer - we don't know, but probably?

We don't really know what consciousness is or how it happens. We believe that it's possible to be highly intelligent without being conscious. I mean, I really hope that's true, personally, since we would hope to one day make an AGI that will carry out human desires, and we'd hope we weren't making a conscious entity which would be the equivalent of a slave.


> But where does consciousness arise? Is the ability to reason about things independent of this concept?

"Who cares?" and "yes".


Probably because it's best to not use terms that are likely to spark debate just by word choice unless you actually want to have that debate? If you're concerned with AI risk, it doesn't really matter the precise nature of your philosophy of mind as long as it allows artificial devices capable of reasoning with possibly adversarial goals.


Ah, so the AGI term users are like "ha-ha I chose a vague term, now you can't argue semantics, take that!"


We understand what intelligence is better than we understand what consciousness is. "Artificial Consciousness" would be the vague term.


Not sure what you mean. I've seen some hacker news people argue the exact opposite and downvote me for exploring the exact opposite concept, now the opposite happens. I guess this is not the right place to ask or think.


> Not sure what you mean.

I mean that e.g. if we create a machine that can solve every thinking-related problem that humans can solve, then we can be certain that we have created artificial intelligence. But how are we supposed to ascertain that we have created something conscious, as in a machine with subjective experience? Strictly speaking I can't be certain that _you_ are conscious. (Also, why would we replace "AGI" with "AC", when people are looking to build something intelligent, irrespective of whether it has internal subjective experience?)

> I guess this is not the right place to ask or think.

That has not been my experience.


>> I mean that e.g. if we create a machine that can solve every thinking-related problem that humans can solve, then we can be certain that we have created artificial intelligence.

This is the very notion I'd like to challenge. First of all, there is nothing concrete here so I will make up some definitions.

For simplicity's sake, if you define thought as a way of iterating over a large knowledge graph (assuming that a graph is not a grossly inefficient way of representing knowledge), and forming new knowledge (or making inferences) as a way of extending that graph through certain constraints (maybe axiomatic, maybe probabilistic) that somehow also exist within that graph, what goal would such a graph have other than the ones you give it? This would make AGI just an interactive machine.
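
Purely to make that picture concrete (the facts and the single "is_a" transitivity constraint below are made up for illustration): facts are edges in a graph, and "forming new knowledge" is just extending the graph with whatever the constraint licenses.

    # A minimal knowledge-graph sketch: triples plus one transitivity rule.
    facts = {("socrates", "is_a", "human"), ("human", "is_a", "mortal")}

    def infer(triples):
        derived = set(triples)
        changed = True
        while changed:
            changed = False
            for a, r1, b in list(derived):
                for b2, r2, c in list(derived):
                    if r1 == r2 == "is_a" and b == b2 and (a, "is_a", c) not in derived:
                        derived.add((a, "is_a", c))   # new edge licensed by the constraint
                        changed = True
        return derived

    print(infer(facts) - facts)   # {('socrates', 'is_a', 'mortal')}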

And if you can't give it adequate goals that will at least make it pass a Turing test, what good is a graph that can be used for emergent inferences? My real gripe with that is that "it" isn't intelligent. "You" are intelligent and "you" gave it goals. So subjectivity is, in my opinion, inescapable when you are talking about intelligence.

I will concede that you can't know I have subjective experiences, but practically that's not a very useful thing to say. If it doesn't matter, why bring it up? If it does matter, why not use your past experience to have a belief that I am conscious despite that belief being subject to future modification? That's how I'd treat a perceived AC.


Since when is avoiding arguing semantics a bad thing?


Since Wittgenstein, Umberto Eco, Roland Barthes? How will you improve your knowledge and understanding of concepts if you don't argue in order to find out the best way of expressing them? Common sense is not a substitute for knowledge. Blanket statements and short dismissals are not a way of furthering our understanding of inference engines, of whether subjective experience is required for intelligence, or of how to quantify that subjective experience.


If we had to preface every inquiry with a philosophical debate on everything semi-related to the subject at hand we’d get nowhere (including within philosophy itself). Should all math papers include a section where they argue about why one should use ZFC vs IZF or type theory because that decision might have impact on the matter at hand?

Blanket statements and short dismissals are great when their content is “that’s an interesting topic but not necessarily what everyone is trying to discuss right now.” Discussions on AI risk may not be augmented by understanding of subjective experience, or may require developments that cannot be acquired via even another 100 years of navel gazing on the subject. You’ve not even attempted to justify why this would be the case, and instead started complaining right off the bat that nobody had the inclination to immediately discuss your favorite tie-in to the subject.

You’ll notice that people were actually happy to talk about consciousness once you brought it up, and probably would have been even happier to do so if you didn’t start off with such a curmudgeonly tone and spend a bunch of time accusing everyone of intellectual dishonesty because their interests differ from yours.


Well, I've explained in an above comment why I think consciousness is directly proportional to intelligence, and I was pretty content with the conversation so far, actually.

I'm happy that you feel you are so progressive in multiple disciplines and an expert on online behavior. If you think what I said is off-topic, that's just your opinion, man.


I don't have any way to prove that you're conscious. It seems premature to assume that a hypothetical future AGI will necessarily possess consciousness, even if we could agree on what exactly "consciousness" is.


OK but here's a little gedankenexperiment: suppose I had a machine that could simultaneously simulate 2 consciousnesses (either in a parent/child configuration or as siblings), import recordings from conscious beings (bear with me), and allow one consciousness to query the other. Then it could know if someone were conscious or not.

(if I am getting this right) those would be the minimal criteria for being able to (contentiously) prove if a person were conscious or not.

But the thing about humans is that they cannot simulate 2 consciousnesses simultaneously (apart from the culturally-designated illness known as multiple personality disorder), nor can we import another's consciousness. Nor do we even particularly (barring Tononi's work, or Tegmark's) know what consciousness is. Those two properties are (probably) never going to be human-capable. In that regard, humans are unlikely ever to be a (or 'the') kind of being used to determine whether or not something is conscious; if it is ever possible, that will be a machine's task.


"What is it like to be a bat?" is a paper by American philosopher Thomas Nagel, first published in The Philosophical Review in October 1974, and later in Nagel's Mortal Questions (1979). In it, Nagel argues that materialist theories of mind omit the essential component of consciousness, namely that there is something that it is (or feels) like to be a particular, conscious thing. He argued that an organism had conscious mental states, "if and only if there is something that it is like to be that organism—something it is like for the organism."

Dennett denies Nagel's claim that the bat's consciousness is inaccessible, contending that any "interesting or theoretically important" features of a bat's consciousness would be amenable to third-person observation. For instance, it is clear that bats cannot detect objects more than a few meters away because echolocation has a limited range. He holds that any similar aspects of its experiences could be gleaned by further scientific experiments. -- https://en.wikipedia.org/wiki/What_Is_it_Like_to_Be_a_Bat%3F

Heterophenomenology ("phenomenology of another, not oneself") is a term coined by Daniel Dennett to describe an explicitly third-person, scientific approach to the study of consciousness and other mental phenomena. It consists of applying the scientific method with an anthropological bent, combining the subject's self-reports with all other available evidence to determine their mental state. The goal is to discover how the subject sees the world him- or herself, without taking the accuracy of the subject's view for granted. -- https://en.wikipedia.org/wiki/Heterophenomenology


> Then it could know if someone were conscious or not.

How? You say "query", but what would the query look like?


subsumption followed by introspection


Epiphenomenalism is a mind–body philosophy marked by the belief that basic physical events (sense organs, neural impulses, and muscle contractions) are causal with respect to mental events (thought, consciousness, and cognition). Mental events are viewed as completely dependent on physical functions and, as such, have no independent existence or causal efficacy; it is a mere appearance.

[...]

In 1870, Huxley conducted a case study on a French soldier who had sustained a shot in the Franco-Prussian War that fractured his left parietal bone. Every few weeks the soldier would enter a trance-like state, smoking, dressing himself, and aiming his cane like a rifle all while being insensitive to pins, electric shocks, odorous substances, vinegar, noise, and certain light conditions. Huxley used this study to show that consciousness was not necessary to execute these purposeful actions, justifying the assumption that humans are insensible machines. Huxley’s mechanistic attitude towards the body convinced him that the brain alone causes behavior.

[...]

A large body of neurophysiological data seems to support epiphenomenalism. Some of the oldest such data is the Bereitschaftspotential or "readiness potential" in which electrical activity related to voluntary actions can be recorded up to two seconds before the subject is aware of making a decision to perform the action. More recently Benjamin Libet et al. (1979) have shown that it can take 0.5 seconds before a stimulus becomes part of conscious experience even though subjects can respond to the stimulus in reaction time tests within 200 milliseconds. -- https://en.wikipedia.org/wiki/Epiphenomenalism


Because when I'm worried about a machine recycling the atoms that make up my body, I'm not particularly interested in whether it has conscious experience. There's no particular reason to suppose that something needs to be conscious in order to be very effective at achieving arbitrary goals.


No, if anything it's the opposite. Too many people who are irrevocably hung up on the "hard problem of consciousness" would object to the language.


I think it's just because intelligence is better defined and easier to test than consciousness.


Intelligence is one of the most poorly defined words there is; I can't even begin to imagine what you have in mind.

Consciousness is usually only one of two things; either "being aware" or "the experience of having an inner voice".


Neither of those is a concrete definition. Defining "consciousness" as "being aware" is just punting the question on to "how do you define the word AWARE?".

And some people don't have an inner monologue. It's not specifically mentioned in http://slatestarcodex.com/2014/03/17/what-universal-human-ex... but it's of that same class of thing; there are certainly hits on Google for people claiming not to have an inner voice, and I didn't even bother reading the article on psychologytoday about it.


"Being aware" is a common phrase understood by most native english speakers as a synonym of "conscious". If you are asleep, you are not aware of much. If you get hit by a car as you cross the road it's probably because you weren't aware of it. You can do some actions, like driving, either consciously or unconsciously.

I didn't say that all "people" have inner voices (by which I meant inner life). But by that definition, it's a requirement to be conscious.


Yup, I'm one of them.

Let me set the record straight, though. I'm not claiming to not have an inner life; I'm claiming to not have an inner voice, except if I'm reading "out loud" internally or whatnot. (Which is useful when writing, but not necessary.)

I just use other modalities.

To the best of my knowledge there's only one guy, a philosopher, who's claimed to have no inner life at all--and I don't think I believe him.


Can the AI writer's job be effectively automated away?


Why do we fear other intelligences anyway? Isn't that just a sign of our own immaturity? Maybe we need to evolve more before we start thinking about creating AGI...


Mostly because other intelligences might have more ability than us to achieve their goals, and have different goals. "Intelligence does not imply goals" is the thing to keep in mind here.

E.g. what if more intelligent aliens truly believed that the only purpose in the world is proving more mathematical theorems? And decided to turn all of the planet into a giant math-proving machine? Destroying all the planet, all the animals, all the humans, all the art, whatever, all to prove more maths?

I love maths, but I'd consider that a pretty bad outcome. And there's no reason that I've ever seen to think that more intelligence implies anything about goals.


We currently have thousands, or even millions of intelligent entities (humans) which think pretty wild and dangerous things. We usually just tell them "shut up Bob and take your meds" and that's it. Sometimes we regrettably kill them.

Why would such AGI have the means of turning all of the planet into anything? I mean, sure, I also think the Terminator is a decent movie, but that doesn't make it a reasonable blueprint of the future.


"Pretty wild and dangerous things" - I think you're suffering from a failure of imagination there. Examples of humans who thought something wild and dangerous were Hitler, or Jesus or Mohammed; those people changed the face of the world because they thought something. And those people were human, constrained by evolution to think in certain ways. An AGI doesn't necessarily have to think inside the human box. There is a lot more of "dangerous thought-space" out there than you seem to be accounting for.

Your second paragraph: presumably you think it's literally impossible to invent nanotechnology to make yourself omnipotent, but I'm not willing to place that at less likely than 5%. Something clever enough could probably just manipulate the existing social structures to kill all the humans (something very intelligent that truly wanted nuclear war could probably make it happen). Just a little bit of tweaking to smallpox might be able to destroy humanity anyway. Even just the right new religion might do it (witness the great purges by pretty much every religion ever). None of these suggestions are intended as "this is a plausible way of killing those inconvenient humans"; but if each of my suggestions (nanotech, nuclear war, pandemic, grand new idea like a religion) has a five percent chance of working, that's already an 18.5% chance that at least one of them would work. Once the humans are dead, of course, it has all the time in the world to achieve all its goals.
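
For the record, a quick check of that arithmetic (assuming the four routes are independent and each has a 5% chance of working):

    p = 0.05
    print(1 - (1 - p) ** 4)   # ~0.185, i.e. about an 18.5% chance at least one works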


If human-level intelligence is the upper bound of intelligence, then there's nothing to fear.

There's also not much to fear from slightly-smarter-than humans machines.

But why would you think that's the case? Because we're the best evolution has come up with so far, here on Earth?

"Why would such AGI have the means of turning all of the planet into anything?"

An AI that is as much smarter than a human as a human is smarter than a dog would think of a strategy far, far smarter than what you, or I, or human civilization could come up with.


> We usually just tell them "shut up Bob and take your meds" and that's it.

Other times, Las Vegas and we shut up because we come up empty -- at best talking about gun control as if someone choking 3 people on a bus was substantially better and not still something where we come up empty.

> Why would such AGI have the means of turning all of the planet into anything?

How does getting off track with that address the main point, namely "Intelligence does not imply goals"? Are you going to prove that while that may be true, there just can't be a way for that to ever have a bad outcome because "Bob"?

> I also think the Terminator is a decent movie, but that doesn't make it a reasonable blueprint of the future.

Neither is "Bob".


It wasn't off track; I was pointing out that we already have intelligent beings that don't share our goals. Sometimes that does have bad outcomes, but I don't remember it ever amounting to "destroying all the planet, all the animals, all the humans, all the art". Which is why I'm asking why adding another intelligent entity would make us fear it.


Well, most people assume the development of AGI will in short order lead to the rise of superintelligent AGI and then the reason is precedent -- look at how we treat creatures that are less intelligent than we are. If you're lucky you're a dog or a cat and you're kept as a pet and treated as a member of the family, but you're still a pet. If you're not so lucky you're a cow or a pig and used primarily as a food source. If you're even less lucky you're an insect that gets stepped on for wandering too close to the picnic where we're eating the cow and pig while we watch our dog play.



The Dodos should have feared us. We don't even hate Dodos, but we killed them all anyway.


AGI is not coming; DL methods have run into a plateau over the past year.


That's interesting, but could you elaborate in more detail?


The usual style of DL paper, the kind that focuses on tweaking network architecture, is running into a dead end. The improvements are becoming ever more marginal while the cost is skyrocketing. Sadly, bigger and deeper neural networks were previously the major driving force behind advances in DL research.

Meanwhile people are turning to RL to approximate non-differentiable functions; so far it doesn't seem to apply to real-world tasks. Give or take 2 to 3 years, if there isn't some major breakthrough, like a human-level StarCraft agent, we can then officially announce that this round of the DL revolution or renaissance is over.


https://twitter.com/fchollet/status/906582914829246464 (Google researcher)

DeepMind and OpenAI have been investigating approaches from cognitive science in recent months. In particular they seem interested in evolutionary algorithms.

DL applications are still emerging though, such as the company that demonstrated using GANs to present models fitted in apparel a few days ago.


>DeepMind and OpenAI have been investigating approaches from cognitive science in recent months.

Really? Got any links. That might be exciting to read.


AI will never do what the human mind can do, so the real concern is bad or malicious human programmers of AI.


It’s such a thoughtfully reasoned post, I hate to disagree with it, and don’t even have time now to fully argue its merits.

Say generally available computing power was instantly 1 million times greater. How much closer would that put us to AGI?

It’s not even clear how much the recent impressive machine learning feats will even serve as a precursor or building block for the real AGI solutions. AGI is so much less of a hard-coded problem than what’s being done now that the real solutions could require radical changes in direction. How do we know it’s even fair to use these as part of the argument?


The disasters that may befall us if we fail to narrow this gap are many. [...] Within prosperous countries, such as the United States, there is a distinct and growing threat that increased automation, coupled with an obsolete and aimless system of education, will lead to a restratification of society in which a large middle class may find itself without suitable employment and without adequate means of filling its leisure time enjoyably and constructively. -- Social Technology (1966).

The median of these final responses could then be taken as representing the nearest thing to a group consensus. In the case of the high-IQ machine, this median turned out to be the year 1990, with a final interquartile range from 1985 to 2000. The procedure thus caused the median to move to a much earlier date and the interquartile range to shrink considerably, presumably influenced by convincing arguments -- Analysis of the future: The Delphi Method. (1967)


I'm not sure what you think these quotes are suggesting. If it's that there are lots of predictions that AGI is close that haven't been borne out, you're obviously right, and in no way contradicting the article.


I just thought these quotes were interestingly contemporary, and could complement the article.

The article builds a convincing point for itself (at the cost of huge complexity); it contains no discernible contradictions to me. It is reasonable to prepare for the possibility of a future event (say, AGI, or Jesus returning to earth) by thinking about it now.

All interests and future predictions are different, but equally valid. To me, it feels like the Wright Brothers thinking about rotating safety valves in space before they had even taken off on their first flight, but that should not stop the author and supporters in any way: science, futurism, and philosophy move in small steps, and it may be a good time for some to start walking. Just make sure to properly define the end goal (AKA, the moment AI becomes AGI), or we may keep on truckin' forever, never closing the loop of our hostile AGI-created simulations creating a first friendly AGI.


It seems to me that intelligence is something that needs to be acquired from other intelligent beings through a lengthy process, thus it requires first and foremost learning how to interact with people, what we call socialisation. Therefore I believe if machines ever become intelligent we will absolutely notice, because we will have to teach them like we teach babies, and we will have plenty of time to adapt.


AGI is to AI what "Flying cars" are to "atomic power".

The reason luminaries are conservative in their estimates or remain silent is that capturing the public's imagination is good for funding and recognition (not a negative assessment, btw).

With enough positive press everything seems possible to the layman, and creates a belief in the unlimited possibilities of "the future," but also the inherent "dangers" of this imagined future that "must be taken into account."

Most arguments made in the article fall flat because of false analogies. Analogies can only ever be used to illustrate, never to derive conclusions from.

The AI winter is over, and good progress is being made in a lot of fields, but AGI is nowhere on the horizon. In absence of evidence to the contrary, AGI remains, for the foreseeable future, in the realm of philosophy, science-fiction, and regrettably, alarmist articles.

For now, us humans should feel totally safe.



