But I find myself agreeing with this article. Strongly.
And I have long suspected that we miss a lot of the significance and opportunities in AI because we have only one exemplar of 'higher' intelligence: a human being. AI folk are so concerned with getting computers to do the things humans are good at that I suspect most will miss / 'refute' / deride the inflection point because the system can't wash the dishes (or perform some other form of embodied cognition), or write poetry humans would find beautiful (or understand some other socially conditioned cue).
The superhuman fallacy really is the bane of AI.
I've always thought it's never too early to start allocating significant resources to AGI research and safety, given the potential impact. That said, until very recently I agreed with your take on the situation.
What changed my mind was an article detailing the latest advances in silver nanowire mesh networks.
I knew neural computing was a thing, but not that we already had a computing substrate capable of self-organizing its own neural architecture based entirely on external input, with power requirements analogous to the human brain. No firmware or software required.
One could say that human physiology remains far more complex just on the substrate front alone, what with the brain being an incredibly complex, delicate balance of chemicals and heterogeneous cells. However, this particular artificial substrate is already succeeding in basic learning tasks, despite the fact it's far simpler.
I strongly suspect we've figured out at least one artificial computing substrate that is not only capable of, but perhaps well suited to, producing AGI, and that it's just a matter of scaling it.
Of course, once you scale the technology sufficiently, the question then becomes how to architect and train it into an AGI. You say as much above, but I suspect the architecture need not be human to be a threat, or to otherwise become extremely powerful.
> the question then becomes how to architect and train it into an AGI.
If that’s the question then why even bother with silver nanowire “brains”? Why not just grow a human brain out of some suitable stem cells and work with that? Leaving aside the massive creepiness factor.
As messed up as it is, using biology as you describe might actually be safer once we get close to AGI, even if it is creepy as hell. There's something to be said for having a machine that's physically limited to thinking as fast as a human can (even if it is vastly smarter), versus one that can figure out a new branch of physics in the time it takes to blink.
I’m wondering if it’s possible to code a simulation of what these wires are doing? Most of us don’t have access to the nanoscale silver contraption, but we could still study its operation in simulation.
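For anyone wanting to play with the idea, here's a minimal toy sketch in Python (entirely my own invention, not a faithful model of atomic-switch physics): a random mesh of junctions whose conductance strengthens when current flows through them and slowly decays otherwise, which is the gist of how these meshes are said to self-organize.

```python
import random

def simulate_mesh(n_nodes=20, n_edges=60, steps=200, seed=0):
    """Toy self-organizing wire mesh: random junctions whose conductance
    strengthens with use (memristive-style reinforcement) and slowly
    decays otherwise. Numbers are arbitrary, chosen for illustration."""
    rng = random.Random(seed)
    edges = [(rng.randrange(n_nodes), rng.randrange(n_nodes))
             for _ in range(n_edges)]
    conductance = [0.1] * n_edges
    src, dst = 0, n_nodes - 1  # stimulate between two terminal nodes
    for _ in range(steps):
        for i, (a, b) in enumerate(edges):
            # Crude proxy for current: junctions touching the driven
            # terminals carry current and get reinforced; others decay.
            if a in (src, dst) or b in (src, dst):
                conductance[i] = min(1.0, conductance[i] + 0.05)
            else:
                conductance[i] = max(0.01, conductance[i] * 0.99)
    return edges, conductance
```

After a few hundred steps the junctions near the stimulated terminals end up far more conductive than the rest, i.e. a "pathway" has self-organized out of uniform initial conditions. A real model would solve for currents across the whole network, but even this crude version shows the reinforcement dynamic.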
He asked a panel for the least impressive thing they did not believe would be possible within a few years. In other words: pick the point closest to the boundary of that classifier. Obviously my future knowledge is imperfect, and anything close to the boundary is subject to a lot of uncertainty. From that difficulty, he hand-waves an argument that long-term prediction of the unlikelihood of AGI is folly.
The problem is that these aren't in the same class of predictions. One is detailed and precise; the other coarse and broad. Predicting that it will rain at 2:00 PM November 10, 2017 is much more difficult than predicting that the average summer of 2040-2060 will be hotter than the average from 1980-2000. Precise local predictions just aren't the same thing as broad global predictions, and difficulty doesn't transfer, because I'm not bootstrapping my global prediction on the local one. I'm using different methods entirely.
There's a similar thing with AI, I think. I can't confidently tell you what the big splash at NIPS will be next year or the year after. But I can look at the way we know how to do AI and say I don't think 30 years will see a machine that can make dinner by gathering ingredients from a supermarket, driving home, and preparing the meal.
Really? Why not? Once or twice, if we cherry-pick its performance, or reliably?
This is really surprising to me.
We might be able to make machines to do each of those tasks, but that's not the answer. I might do 100,000 things in an average week. Clearly we aren't going to build 100,000 bespoke CNNs and LSTMs. To worry about superhuman AI, we probably have to figure out how to make one or a few machines that aren't glorified deep fryers.
I get what you mean, but I don't think we should assume this.
And it will be a great boon. Quality of meals will go up and costs will go down. The restaurant market will shrink but not completely disappear.
But McDonald's will certainly die, as there'll be no need to sacrifice quality and nutrition to get speed and convenience. In fact, a table at McDonald's will be an inconvenient booth.
Yes, it is much easier to make predictions about the far future which no one will remember or care about when the time comes to test their veracity.
That does not make them more accurate.
Your "will it rain" example is a good one, but it's easy to counter: I can't say exactly what the world map will look like tomorrow, but my prediction of it will be a hell of a lot better than even my coarsest prediction of the world map in 2040. I think.
It's also true that it doesn't follow from "short-term prediction of x is hard" that "long-term prediction of y is harder". But there must be short-term patterns, trends, or observable generalizations of some kind that you're incredibly confident of, if you're even moderately confident about how those patterns will result in outcomes decades down the line, and if you're confident that the things you aren't accounting for will cancel out and be irrelevant to your final forecast. (Rather than multiplying over time so that your forecast gets less and less accurate as more surprising events chain together into the future.)
If those ground-level patterns aren't a confident understanding of when different weaker AI benchmarks will/won't be hit, then there should be a different set of patterns confident forecasters can point to that underlie their predictions. I think you'd need to be able to show a basically unparalleled genius for spotting and extrapolating from historical trends in the development of similar technologies, or general trends in economic or scientific productivity.
I think Eliezer's skepticism is partly coming from Phil Tetlock's research on expert forecasting. Quoting Superforecasting:
> Taleb, Kahneman, and I agree that there is no evidence that geopolitical or economic forecasters can predict anything ten years out beyond the excruciatingly obvious – ‘there will be conflicts’ – and the odd lucky hits that are inevitable whenever lots of forecasters make lots of forecasts. These limits on predictability are the predictable results of the butterfly dynamics of nonlinear systems. In my EPJ research, the accuracy of expert predictions declined toward chance five years out. And yet, this sort of forecasting is common, even within institutions that should know better.
So while we can't rule out that making long-term predictions in AI is much easier than in other fields, there should be a strong presumption against that claim unless some kind of relevant extraordinarily rare gift for super-superprediction is shown somewhere or other. Like, I don't think it's impossible to make long-term predictions at all, but I think these generally need to be straightforward implications of really rock-solid general theories (e.g., in physics), not guesses about complicated social phenomena like 'when will such-and-such research community solve this hard engineering problem?' or 'when will such-and-such nation next go to war?'
Great point, I'll be stealing that one ;)
One annoyance I have heard about SV is that all the companies are just trying to replace your Jewish Mom: Uber/Lyft is Mom's minivan, GrubHub/DoorDash/BlueKitchen is Mom's cooking, Google is Mom's encyclopedia, Yelp is the synagogue's meeting hallway, Tinder is your Mom's yenta, etc. The examples abound in the non-B2B space.
In that vein, then AGI is not just a Superman fallacy, but SuperMom too.
I think AGI is likely closer to the present than 1987 was -- that is, I'd bet on having AGI by 2047. (Note: this is distinct from superhuman AGI.) Do you not agree?
I think a lot of people underestimate NNs because they think of NNs in terms of the semantics of their history instead of all possible semantics that can be fit to tensor networks. We know [P] that NNs are a sufficient abstraction to model human intelligence if we had arbitrary compute -- the questions that remain are all about making the hardware fast enough and the estimators efficient enough (which may require moving off tensor networks, but it's still only a refinement of the mathematics used).
Of course, one could argue that humans are caught in a "tensor trap", in that too much of our intellectual effort is now relying on estimators built out of networks of tensors. (I do.) But even then, AGI is likely to appear out of similar methods with new mathematical objects.
[P] Proof NNs can compute human intelligence with arbitrary compute:
You can embed the standard model as a NN by changing how you view the network of tensor equations. Human intelligence is (arguably) embedded in the standard model by modern science. So we can embed a model of human intelligence in a (large enough) NN.
This isn't immediately computationally useful, but it shows that there's not a fundamental flaw in using an estimator built out of a DAG of calculations to model intelligence if we can find an appropriate estimator for our computational needs.
Not sensibly in terms of years, no.
It's more a handwaving gut feeling combined with an intuition.
I didn't feel like there was a royal road from symbolic AI to AGI. It doesn't feel to me like there is one from NNs either.
As for the intuition: perhaps it's because it was my PhD topic, I have always felt that there needs to be a breakthrough in emergence, specifically in evolutionary computing (or some other system in which there is a tight feedback loop between behaviour and survival). Something to unshackle the development of AI from human beings deciding what behaviours they want to engineer.
The resulting computation would be orders of magnitude less efficient and considerably less understandable (it is very unlikely to wash dishes or write poetry), but crucially much less fragile. And it has always been the fragility of the engineering which has made AI feel a little smoke-and-mirrors at times. NNs are massively less fragile than symbolic systems (and orders of magnitude less efficient at problems symbolic systems are good at), but it does feel like we need another fundamental step.
But, my feelings aside, I agree with the article because I recognise this could well be a 'Manhattan Project' type of event.
I've got a repo for a VM where the programs can act in that way, but I am at the early stages of programming initial programs with economic/learning strategies, so I don't know how promising it is. More details can be found spread out on my blog.
For that, a technique must be able to compute human intelligence at a useful speed on a constructable system.
Using a network of tensors to compute intelligence is incredibly old (I believe, dating back about 80 years), but has only recently become tractable to do for any complex tasks.
However, in the past ~30 years, we've gone from "intractable for moderate problems" to "world champion at Go", "able to detect cancer in images as well as experts", etc. My contention is that in another ~30 years, we'll see a step sufficient for "can do average at most intellectual activities", even if that's just having the storage to keep 10,000 task-specific NNs (of AlphaGo sophistication) on hand to interpolate all actions as mixes of specialist tasks. Do you really not think there's a strong heuristic case for that? (I would contend that you should be able to point to a specific task you don't think it will be able to do on that timeline -- do you know of such a task?)
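As a toy illustration of that "interpolate mixes of specialist tasks" idea, here's a minimal mixture-of-experts-style sketch (the experts here are trivial stand-in functions, not real NNs, and all names are made up): score each specialist against the task, softmax the scores into weights, and blend their outputs.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def blended_action(task_embedding, experts):
    """Weight each specialist by how well its signature matches the
    task (dot product), then blend the specialists' outputs -- the
    'interpolate actions as mixes of specialist tasks' idea.
    `experts` is a list of (signature_vector, expert_fn) pairs."""
    scores = [sum(a * b for a, b in zip(task_embedding, sig))
              for sig, _ in experts]
    weights = softmax(scores)
    return sum(w * fn(task_embedding)
               for w, (_, fn) in zip(weights, experts))
```

With a task that matches one expert's signature strongly, the blend is dominated by that expert; ambiguous tasks get a genuine interpolation. Real mixture-of-experts systems learn the gating function rather than hand-coding a dot product, but the shape is the same.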
The proof was merely that we're not barking up a theoretically dead tree -- we have to rely on heuristics for if it will eventually converge to tractable.
Not unless there was good reason to expect that every other possible application of NNs would be as difficult to achieve as general intelligence.
The standard model could be computed directly without NNs, which I think we agree wouldn't be a useful way to approach AGI.
Feedback and memory are really important features of GI that you will not get out of a DAG ever. You need loops for that.
All loops can be modeled as a DAG and single attached piece of memory (of sufficient width) allowed to execute to a steady state; sorry if it wasn't clear that I was talking about things like NTMs too. (It's why I used 'tensor network' most places; also, in practice, we tend to let subgraphs reach a steady state independently where possible.)
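To make the "DAG plus attached memory run to a steady state" claim concrete, here's a minimal sketch (my own toy example, not NTM machinery): a single acyclic step function applied repeatedly to a memory vector until it reaches a fixed point.

```python
def step(memory, inp, w_rec=0.5, w_in=1.0):
    """One acyclic pass: new memory = f(old memory, input).
    Unrolling this step over time is exactly 'DAG + attached memory';
    w_rec < 1 guarantees the iteration contracts to a fixed point."""
    return [max(0.0, w_rec * m + w_in * x) for m, x in zip(memory, inp)]

def run_to_steady_state(inp, tol=1e-9, max_iters=1000):
    """Apply the acyclic step until the memory stops changing --
    the 'execute to a steady state' part of the argument."""
    memory = [0.0] * len(inp)
    for _ in range(max_iters):
        new = step(memory, inp)
        if max(abs(a - b) for a, b in zip(new, memory)) < tol:
            return new
        memory = new
    return memory
```

For a non-negative input x, the fixed point here is m = 2x (solve m = 0.5m + x), and the loop finds it without any cycle existing in the step function itself; the "loop" lives entirely in the repeated application plus the memory.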
Your comment is also an excellent example of a strawman: you picked out the word 'DAG' to raise a technical argument when the usage of DAG versus general tensor networks clearly wasn't the main point (as some NNs have feedback and the standard model is posed as differential equations).
It's more constructive to respond to the strongest point, not pick at technical details that can easily be rephrased.
(A) When they personally see how to construct AGI using their current tools. This is what they are always saying is not currently true in order to castigate the folly of those who think AGI might be near.
This struck a nerve. Too often, in many scientific disciplines, and even in informal conversations, the people who always demand 100% clear evidence use this fallacy to shut down discussions. (They very often come off as not impressed with the evidence even if it exists and is presented to them as well.)
HN also has a huge camp of such discussion stoppers, even for topics where you CLEARLY have no way to have 100% clear evidence -- like the secret courts and the demand to spy on your users if you're a US-based company; thousands more examples exist. Many discussions are worth having even if you don't have all the facts. We're not gods, damn it.
That was slightly off-topic.
Still, I find myself in full agreement with the article and I like the attack on the modern type of shortsightedness described in there.
Also, this legitimately made me laugh out loud:
> Prestigious heads of major AI research groups will still be writing articles decrying the folly of fretting about the total destruction of all Earthly life and all future value it could have achieved, and saying that we should not let this distract us from real, respectable concerns like loan-approval systems accidentally absorbing human biases.
I think that deep learning is overhyped, even though using Keras and TensorFlow is how I spend much of my time every day at work. I have lived through a few AI winters, or down cycles, and while I don’t think that the market for deep learning systems will crash, I think it will become a commodity technology.
I believe that AGI is coming, and I think it will use very different technology than what we have now. Our toolset will change dramatically before we can create AGI. I use GANs at work, and in spite of being difficult to train, the technology has that surprising and ‘magic’ feel to it, however, so do RNNs, and that technology is 30 years old.
I am going to show my age, but I still believe in symbolic AI. I am also pretty much convinced that AGI technology will be part symbolic AI, part deep learning, and part something that we have not yet invented.
ML in general is just applied statistics. That's not going to get you to AGI.
Deep Learning is just hand-crafted algorithms for very specific tasks, like computer vision, highly parameterised and tuned using a simple metaheuristic.
All we've done is achieve the "preprocessing" step of extracting features automatically from some raw data. It's super-impressive because we're so early in the development of Computing, but we are absolutely nowhere near AGI. We don't even have any insights as to where to begin to create intelligence rather than these preprocessing steps. Neuroscience doesn't even understand the basics of how a neuron works, but we do know that neurons are massively more complex than the trivial processing units used in Deep Learning.
Taking the other side for a moment, even if we're say 500 or 1000 years out (I'd guess < 500) to AGI, you could argue that such a period is the blink of an eye on the evolutionary scale, so discussion is fine but let's not lose any sleep over it just yet.
What I find most frustrating about this debate is that a lot of people are once again massively overselling ML/DL, and that's going to cause disappointment and funding problems in the future. Industry and academia are both to blame, and it's this kind of nonsense that holds science back.
I do take exception to some of the specific statements you make though, which make it sound like the only real progress has been on the hardware side. There's been plenty of research done, and lots of small and even large advances (from figuring out which error functions work well ala Relu, all the way to GANs which were invented a few years ago and show amazing results). Also, the idea that "just applied statistics" won't get us to AGI is IMO strongly mistaken, especially if you consider all the work done in ML so far to be "just" applied statistics. I'm not sure why conceptually that wouldn't be enough.
> I'm not sure why conceptually that wouldn't be enough.
This one is harder to refute. I guess it's because statistics doesn't involve understanding. Try considering something like LDA for topic discovery: there's no understanding of the semantics of the model, it just identifies them statistically. There's a huge difference.
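To make that concrete, here's a toy sketch of the same flavour of statistics-without-understanding (this is not LDA, just a crude co-occurrence grouping, and the corpus is invented): words get clustered into "topics" without the code having any notion of what any word means.

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_clusters(docs, threshold=2):
    """Group words that frequently appear in the same documents.
    The 'topics' that fall out are purely statistical artefacts:
    nothing here knows what any word means."""
    counts = defaultdict(int)
    for doc in docs:
        for a, b in combinations(sorted(set(doc.split())), 2):
            counts[(a, b)] += 1
    # Union-find style merge of strongly co-occurring words.
    parent = {}
    def find(w):
        parent.setdefault(w, w)
        while parent[w] != w:
            parent[w] = parent[parent[w]]
            w = parent[w]
        return w
    for (a, b), c in counts.items():
        if c >= threshold:
            parent[find(a)] = find(b)
    clusters = defaultdict(set)
    for w in parent:
        clusters[find(w)].add(w)
    return [sorted(c) for c in clusters.values()]
```

Feed it a few football sentences and a few neuroscience sentences and it will separate the vocabularies cleanly, which looks like "discovering topics" -- but swap every word for a random token and it works identically, which is exactly the point about understanding.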
I think for a start you'd have to move away from things that can be gamed through statistics on large amounts of data.
For example, show a child a single object, it can then recognise instances of that object all over the place with almost perfect recall (in the statistical sense). I think a computer would find this a hard task. Eliminate the advantage of big data.
Or perhaps turn it around and put the emphasis on the machine to invent its own test for intelligence, allow the machine to come up with something that is convincing - make it argue for its own consciousness with an argument that it creates entirely for itself.
But... I'm sure someone would find a way to game these examples. That's because humans are very smart. We've outsmarted Turing, so I don't hold much hope for my snap ideas in a five minute HN post :-/
1. Of course we should be prepared for the existential threats of AGI and ASI.
2. BUT the threat isn't imminent, so we should prepare later.
The article (and I, mostly following its lead) is trying to encourage people to concretely answer the question "Okay, if not now, when? How will you know?"
The problem is, most people aren't answering based on a model ("If we can solve problem X, we have a 50% probability of AGI within N years.") Instead, they're using the difficulty heuristic, and the insufficiently-impressed heuristic. ("This is really hard right now, and I'm not impressed by what I've seen so far. Therefore, 100 years.")
Your concerns about gaming are only a problem if the notes were to form the basis of an argument. I was suggesting you have them for yourself. (The act of publishing them is to encourage thinking about them now, not to be a gotcha later.) So you'll know what you mean, you won't be arguing with yourself over definitions, and you'll have thought hard about what looks dangerous to you. It's about being honest with yourself.
Incidentally, the child example is misleading. Children spend literally years understanding things like depth perception, object permanence, etc. A child is already a highly trained agent; that training comes from daily interaction with the environment. You show a baby an unmoving object somewhere and to my knowledge there is no evidence that the baby will identify it as a separate object, much less recognize it in a different configuration.
It does indeed - it comes up with features that indicate what a handwritten 9 looks like. But it doesn't develop the concept of what 9 _is_. It doesn't say "well, that's a concept I can apply to lots of places. Hey, I wonder what nine nines look like!" It's doing pattern recognition on pixels, which is cool and no doubt what we do to some extent, but it doesn't have that higher level of reasoning.
I now believe we are 3 years from building an AI that writes Python well enough to build itself, based on some experiments I did recently: http://sparkz.org/ai/program-synthesis/2017/10/12/self-hosti...
Most technical people will understand the difference between programming and AGI. The general public might not.
The useful thing out of AGI discussions, is that they engage the general public.
Why 3 years? Can you elaborate on the timeline? What should happen in 1 year, what in 2, what in 3 etc?
I don't see how we can rule it out. The size of the statistical models we use is still dwarfed by the brains of intelligent animals, and we don't have any solid theory of intelligence to show how statistics comes up short as an explanation.
But history has never been about competing on the same playing field. We don't build cars that perform like poor horses, we build cars that are 99% inferior to biology and 1% far, far superior. When we find something that looks like an existential threat, it isn't the mostly-general superhuman robot terminator, it's the tool that's that-much-superhuman on 0.01% of tasks: nuclear fusion.
I see no reason to bet against this same argument for AI. AlphaGo isn't 130% of a human Go master, it's 1,000x at a tiny sliver of the game. And the first AI that poses an existential threat won't need to have super- or even near-human levels of each piece of mental machinery, and I don't even have much reason to believe it will look like an entity at all. It could very well be something, some system, that achieves massive superintelligence on just enough to break the foundations of society.
Our world isn't designed to be robust against superhuman adversaries, even if those adversaries are mostly idiot. If we have hope of a fire alarm, it's that things will break faster and far worse than people expect.
(1) "Is general intelligence even a thing you can invent? Like, is there a single set of faculties underlying humans' ability to build software, design buildings that don't fall down, notice high-level analogies across domains, come up with new models of physics, etc.?"
(2) "If so, then does inventing general intelligence make it easy (unavoidable?) that your system will have all those competencies in fact?"
On 1, I don't see a reason to expect general intelligence to look really simple and monolithic once we figure it out. But one reason to think it's a thing at all, and not just a grab bag of narrow modules, is that humans couldn't have independently evolved specialized modules for everything we're good at, especially in the sciences.
We evolved to solve a particular weird set of cognitive problems; and then it turned out that when a relatively blind 'engineering' process tried to solve that set of problems through trial-and-error and incremental edits to primate brains, the solution it bumped into was also useful for innumerable science and engineering tasks that natural selection wasn't 'trying' to build in at all. If AGI turns out to be at all similar to that, then we should get a very wide range of capabilities cheaply in very quick succession. Particularly if we're actually trying to get there, unlike evolution.
On 2: Continuing with the human analogy, not all humans are genius polymaths. And AGI won't in-real-life be like a human, so we could presumably design AGI systems to have very different capability sets than humans do. I'm guessing that if AGI is put to very narrow uses, though, it will be because alignment problems were solved that let us deliberately limit system capabilities (like in https://intelligence.org/2017/02/28/using-machine-learning/), and not because we hit a 10-year wall where we can implement par-human software-writing algorithms but can't find any ways to leverage human+AGI intelligence to do other kinds of science/engineering work.
I might have misunderstood your post, though.
You may need exposure to different training data in order to go from mastering chemistry to mastering physics, but you don't need a fundamentally different brain design or approach to reasoning, any more than you need fundamentally different kinds of airplane to fly over one land mass versus another, or fundamentally different kinds of scissors to cut some kinds of hair versus other kinds. There's just a limit to how much specialization the world actually requires. And, e.g., natural selection tried to build humans to solve a much narrower range of tasks than we ended up being good at; so it appears that whatever generality humans possess over and above what we were selected for, must be an example of "the physical world just doesn't require that much specialized hardware/software in order for you to perform pretty well".
If all of that's true, then the first par-human biotech-innovating AI may initially lack competencies in other sciences, but it will probably be doing the right kind of thinking to acquire those competencies given relevant data. A lot of the safety risks surrounding 'AI that can do scientific innovation' come from the fact that:
- the reasoning techniques required are likely to work well in a lot of different domains; and
- we don't know how to limit the topics AI systems "want" to think about (as opposed to limiting what it can think about) even in principle.
E.g., if you can just build a system that's as good as a human at chemistry, but doesn't have the capacity to think about any other topics, and doesn't have the desire or capacity to develop new capacities, then that might be pretty safe if you exercise ordinary levels of caution. But in fact (for reasons I haven't really gone into here directly) I think that par-human chemistry reasoning by default is likely to come with some other capacities, like competence at software engineering and various forms of abstract reasoning (mathematics, long-term planning and strategy, game theory, etc.).
This constellation of competencies is the main thing I'm worried about re AI, particularly if developers don't have a good grasp on when and how their systems possess those competencies.
The same way Go requires AGI, and giving semantic descriptions of photos requires AGI, and producing accurate translations requires AGI?
Be extremely cautious when you make claims like these. There are certainly tasks that seem to require being humanly smart in humanly ways, but the only things I feel I could convincingly argue being in that category involve modelling humans and having human judges. Biotech is a particularly strong counterexample, because not only is there no reason to believe our brand of socialized intelligence is particularly effective at it, but the only other thing that seems to have tried has a much weaker claim to intelligence yet far outperforms us: natural selection.
It's easy to look at our lineage, from ape-like creatures to early humans to modern civilization, and draw a curve on which you can place intelligence, and then call this "general" and the semi-intelligent tools we've made so far "specialized", but in many ways this is just an illusion. It's easier to see this if you ignore humans, and compare today's best AI against, say, chimps. In some regards a chimp seems like a general intelligence, albeit a weak one. It has high and low cognition, it has memory, it is goal-directed but flexible. Our AIs don't come close. But a chimp can't translate text or play Go. It can't write code, however narrow a domain. Our AIs can.
When I say I expect the first genuinely dangerous AI to be specialized, I don't mean that it will be specific to one task; even neural networks seem to generalize surprisingly well in that way. I mean it won't have the assortment of abilities that we consider fundamental to what we think of as intelligence. It might have no real overarching structure that allows it to plan or learn. It might have no metacognition, and I'd bet against it having the ability to convincingly model people. But maybe if you point it at a network and tell it to break things before heading to bed, you'd wake up to a world on fire.
I mentally replaced AGI with zombies in this article and quite a lot of it held up.
I don’t think it’s completely wrong, but it cherry-picks mercilessly. For example, the section on innovations turning up quicker than predicted has some fairly sizeable counterexamples, e.g. fusion.
TBH what I did get from it is that there will probably be a fire-alarm breakthrough at some point, and that’s what we should be looking for. Sort of the opposite of the author’s position.
The alternative would be technologies that were never developed at all, most of which never had this sort of discussion and therefore wouldn't work as examples.
Take a more historical view, though, and you'll notice there were people claiming flight was near even decades before the Wright brothers.
Almost all of the bugaboo about runaway superhuman organisms comes down not to machines learning and reasoning about the world but to the effective high-level objective function controlling the actions of an autonomous system.
Not making the distinction obscures important things. For one thing we seem to be well on the way to a situation where we arguably have something worthy of the moniker artificial intelligence but the agency is delegated to the human objective function. Considering what complete refuse of human specimens are likely to command some of the first moderately general AI systems that concerns me far more than any summoned demon of Musk's for the foreseeable future.
Also, studying these high-level objective functions for autonomous behavior is a very worthy goal, but going first for issues of "value alignment" and "safety", without any specifics of what works for an implementation?? Sure, do it if you enjoy it and have resources to burn. But be prepared to spend heroic efforts coming up with results that are either trivial or non-issues if you were to consider them with a working mechanism in front of you.
I for one have been looking at the problem of AIs playing StarCraft 2 and the decision-making required, such as how to respond when you scout your opponent’s army composition or tech. So far they’re very far from solving that, but if progress is made, I’ll be impressed. That’s a very different kind of problem from, say, image recognition and classification. It requires planning. It’s a very difficult game even for humans to understand. Currently, autonomous systems can’t even play it.
As I understand it, if you assume that your agent is rational in certain basic ways, for instance that it has ordered, rather than circular preferences, or it can't be Dutch booked/money pumped, it can modelled as having a utility function.
Note that this is different from assuming that an explicit utility function will be programmed in, rather the basic level of rationality implies it into existence.
Once you know that an agent has a utility function, you can use that to do a fair amount of reasoning about its reasoning.
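A toy sketch of the money-pump argument (names and amounts invented): an agent with circular preferences A ≻ B ≻ C ≻ A will pay a small fee for every "upgrade" to something it prefers, and so can be cycled indefinitely while its cash drains away. This is the sense in which non-circular preferences are forced on any agent that can't be exploited.

```python
def money_pump(preferences, holdings, cash, fee=1, rounds=9):
    """Exploit circular preferences: `preferences` maps each item to
    the item the agent prefers over it, and the agent pays `fee` for
    every trade up. With a cycle in the preference map, the trades
    never terminate and cash strictly decreases each round."""
    for _ in range(rounds):
        holdings = preferences[holdings]  # trade up to the preferred item
        cash -= fee                       # ...paying a little each time
    return holdings, cash

# Circular preference map: prefers A over B, B over C, and C over A.
circular = {"B": "A", "C": "B", "A": "C"}
```

After nine trades the agent is right back where it started, holding the same item but nine units poorer; with a consistent (acyclic) preference map the trading would have stopped at the top item, which is the contrapositive the utility-function argument relies on.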
Better (and less briefly) explained here:
We made it to the next floor, the door opened, my fellow passengers were content to stay in the elevator.
I turned, said "My plan is to not die in an elevator today" and got off. What is wrong with people?
I'd probably leave too, but just because I wouldn't want to get stuck inside it if it stopped in the middle of two floors, especially since it was full. But for fear of death? Nah.
From Wikipedia: "In fact, prior to the September 11th terrorist attacks, the only known free-fall incident in a modern cable-borne elevator happened in 1945 when a B-25 bomber struck the Empire State Building in fog, severing the cables of an elevator cab, which fell from the 75th floor all the way to the bottom of the building, seriously injuring (though not killing) the sole occupant — the elevator operator. (...) In Thailand, in November 2012, a woman was killed in a free-falling elevator, in what was reported as the 'first legally recognised death caused by a falling lift'."
That's a pretty good safety record. Certainly much better than stairs.
Admittedly it’s rare, and personally I wouldn’t be worried about it as a likelihood.
> Too much had gone wrong for too long with the lift at Broadgate Health Club in the City of London before it dropped on March 12, 2003, killing Polish-born Katarzyna Woja, Southwark Crown Court in London was told.
I trust in Elisha Otis.
All it really tells you is at least one thing is definitely wrong with the lift.
Sometimes, it makes more sense to be cautiously optimistic (proactive) rather than reactive. We have already gone down that reactive slope, and it's better to act now before it's all too late.
I'm a bit afraid that this will happen with self-driving cars and AI. That politicians will create draconian policies and laws to protect against the threat of AGI etc., without understanding or knowing what the real threats even are (just look at the trolley dilemma debate...). This could make it economically prohibitive to develop many technologies that have the potential to save many lives as well as improve life quality overall.
* It seems to be more about how rules and policies can be unfair and just to a small extent about how policies can be made opaque by being internal to some ML system.
There's a lot more money going into making plants resistant to pesticide than into making plants better adjusted for harsh conditions or more nutritious, things that could potentially have a huge effect for poor people.
(Just venting here, not even primarily at you.)
360k babies are born each day. Clearly it is possible to reproduce intelligent machines. The only way it would be impossible to artificially do the same is literally if life were a magical, non-physical thing. I wish people who state things like this would also state any religious beliefs that lead them to think so.
If this is the basis of future AGI, I have to wonder which flavor of dystopia we'll get to enjoy. Will it be a child-selling dystopia where we all raise a dozen kids hoping that some of them will pay off? Or more like silk farming, where some capitalized breeder sells kits to all the villagers, and buys back the developed products if and only if the villager was lucky enough to raise them to fruition?
Also, if a human baby is our only basis for assuming AGI, then we ought to think about it like genetic engineering or human augmentation. We'd better anticipate providing schools, hospitals, psychiatrists, courts, and prisons to deal with the wide variety of behaviors and misbehaviors that will come with these new products, which have as little determinism as a baby's lifecycle.
An example might use crypto... you observe random information flying through the air, you may recognize it as an encrypted channel and you may see a machine acting in response to this encrypted signal.
With enough observation you may be able to mimic the encrypted signal to get the machine to act in a certain way, but you haven't decrypted the actual signal (and can't ever, if you believe in strong crypto) and can't ever say with any certainty you know the full scope of communication taking place or the capabilities of the machine you've been observing.
At any point you can make your own version of the machine, mimicking the language and tuning it to be an exact replica of the original, even responding to the original signal. Yet, is it truly a copy of the original?
If I send a message using a one-time pad, the other person knows what I sent; you can ask questions and see that communication is actually happening... so if we're not using magic to communicate, it must be possible to communicate in the same way, right?
Yet it's mathematically impossible to do so without access to hidden information (the shared key)... no law says that you can access that information, even with all the computational ability available in the universe.
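The one-time-pad point can be made concrete. With an XOR pad and a uniformly random key, any plaintext of the same length is consistent with a given ciphertext under some key, so the ciphertext alone reveals nothing. A minimal sketch (the messages are made up for illustration):

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    """XOR each byte of `data` with the corresponding byte of `key`."""
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"attack at dawn"
key = os.urandom(len(msg))       # one-time, uniformly random key
ct  = xor(msg, key)

# With the key, decryption is trivial:
assert xor(ct, key) == msg

# Without the key, every same-length plaintext is equally plausible:
# for any candidate message there exists a key that 'decrypts' the
# ciphertext to it, so the observer can never rule candidates out.
candidate = b"do not attack!"    # same length as msg
fake_key  = xor(ct, candidate)
assert xor(ct, fake_key) == candidate
```

This is the information-theoretic sense in which no amount of computation recovers the hidden content: the ciphertext simply does not contain it.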
The mechanisms and communication patterns of consciousness are similar; if it takes till the sun explodes to train a true AGI, then aren't we just getting pedantic about what is possible/magic?
(I figure I should be allowed to ask such a normally speaking quite loaded question because of my previous statements, up the thread.)
I don't really see how religion factors into this though... I feel like I'm talking about a simple concept too. If I show you an encrypted message and show you that other people can read the contents with a key, then ask you to read it without the key, why can't you? It's not magic, it's math.
The question is, will it happen in less than 50-100 years, or would we be like medieval alchemists rushing to outline the first nuclear weapons treaties, right after they have just invented black gunpowder.
Also, the wording seems to imply that WS performance is already pretty high in the 50%-60% range. WS is a binary task. Randomly picking the answer would have 50% accuracy. Even 70% performance on a small subset of typed WS is pretty bad, and as the authors point out in the paper, this is a start, and far from a breakthrough that would make experts/predictors nervous.
Trust the experts, please. They are wrong a lot, but the best policy is still to trust the experts and not charlatans who want to monetize fear, especially when the charlatans themselves make zero falsifiable claims, and are simply turning the table to say "Why can't YOU prove to me that God doesn't exist?".
This debate is so easily won by them. Simply come up with a falsifiable claim about the short-term future. What will the AI community get done in 2 years according to you, that all AI experts right now will say is impossible? When that thing does get done, everyone would convert. Win!
AlphaGo was not such an event. Yes, we did predict that AlphaGo was decades away, but that's assuming that academics would continue working on it at their pace using their limited resources. No expert was surprised with AlphaGo. No expert will be surprised when StarCraft or Dota is solved. It's simply a matter of compute and some tricks here and there. Why? Because these are closed systems, with good simulators available. You just need to keep playing and storing the actions in a big lookup table a la Ned Block, and you're done.
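The "closed system with a good simulator" idea can be illustrated on the smallest such game: exhaustively solving tic-tac-toe by self-play really does reduce to a lookup table. This is a toy sketch (the function names are invented), and the caveat is the whole point: the table for Go is astronomically larger, which is why this brute-force picture is contested elsewhere in the thread.

```python
from functools import lru_cache

# Exhaustively solve tic-tac-toe and cache every result: a literal
# lookup table from (position, player to move) -> game value.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Negamax: +1 if `player` (to move) wins with best play, 0 draw, -1 loss."""
    if winner(board):
        return -1                 # the previous player just completed a line
    if "." not in board:
        return 0                  # board full: draw
    other = "O" if player == "X" else "X"
    return max(-value(board[:i] + player + board[i+1:], other)
               for i, cell in enumerate(board) if cell == ".")

print(value("." * 9, "X"))        # 0: perfect play is a draw
print(value.cache_info().currsize)  # size of the accumulated 'lookup table'
```

Every position ever reached ends up memoized, so after one solve the "AI" is nothing but table lookups, which is the Ned Block point in miniature.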
(edit: I think your point about Winograd as a binary task not being explained clearly is valid, but that's not the article's main focus)
(edit 2: As far as I can tell, "trusting the experts" here means believing that we are very uncertain about AI timelines, which is essentially this article's main claim. All expert surveys I'm aware of confirm that the average AI expert is uncertain, and that there's also lots of disagreement between experts in the field. See eg. the recent paper by Grace et al.: https://arxiv.org/pdf/1705.08807.pdf)
(edit 3: "No expert was surprised with Alphago." just isn't true. See eg. this discussion: https://www.reddit.com/r/baduk/comments/2wgukb/why_do_people.... Hindsight is always 20/20.)
And we're supposed to judge by the author's description of "silence" and "nervousness" that befell an expert panel. I can assure you that most AI researchers are trying, and are just not in the business of writing long-form articles to the public asking for donation.
> See eg. the recent paper by Grace et al.
A self-selected group of NIPS/ICML authors don't constitute experts. NIPS/ICML authors are the core of the community. The experts would be the top 1% of the community, i.e. either the authors with the most citations or most papers or just generally regarded highly by peers.
edit 1: Go players are not the experts I'm talking about. I'm talking about AI experts, and no not amateur AI hobbyists who know how to do Pseudo Monte Carlo. I mean, such as, people doing RL research. Watch, for instance, this: https://www.youtube.com/watch?v=UMm0XaCFTJQ
I make this judgment based on, among many other things, the tiny budgets given to people like Tetlock to study predicting events even a few years out; the fact that Kurzweil's very simple methods, basically "just draw a line through the curve", are still considered big news among many financial and political elites; that nobody had bothered to spend $100K on a good survey methodology for AI prediction, before the paper I linked came out earlier this year; that a friend of mine, who is supposed to run a (small budget) government program on forecasting, has to ask me where to get datasets on past tech progress because nobody has ever bothered to compile them into a standardized form, and so on.
"I can assure you that most AI researchers are trying"
What serious forecasting attempts, with specific dates attached to specific events, have been done in this vein?
"The experts would be the top 1% of the community"
IIRC, NIPS has around 5,000 people, so the top 1% would be like 50 people, and most of them won't respond to a survey. That's not a reasonable sample size.
(edit: this article doesn't ask for donations to anything; the links at the bottom are all to various papers and research materials, so getting money is obviously not the main goal)
(edit 2: the video linked is from after AlphaGo came out. I'm sure many people, after AlphaGo happened, claim that it was easily predicted. Again, hindsight is 20/20.)
Even taking that as true, I'm not sure how it's relevant. The article isn't talking about how good our forecasting is given certain assumptions. It's saying that we won't know until right before or possibly right after AGI happens.
One perfectly valid way in which this happens will be: all the academics and experts think that AGI is 10 years away based on current academic progress, but unbeknownst to them, company X is actually secretly pouring billions into achieving AGI, so they are all surprised when it's only 1 month away. This seems to be what you are saying happened with AlphaGo, in which case you are effectively agreeing with the article, IMO.
AGI arriving that way would be like a 1950s team surprising absolutely everybody by secretly spending a couple of years and a couple of billion dollars and creating an iPhone.
By the time the iPhone was actually possible, everybody saw it as an impressive use of existing technologies; decades earlier it would have required multiple incredible advances and was probably not even possible.
Even if you believe that AGI is as achievable as an iPhone, the idea that the wide spectrum of conceptual and practical AGI problems are likely to be solved by a single team working in secret seems more than a little unlikely.
A couple of isolated groups of scientists then discover that certain nuclear reactions with uranium _do_ produce neutrons as a by-product, but in a way that is intractable to use for a chain reaction (too much mass required). Further research shows that the idea of a runaway chain-reaction becomes more plausible, and further discoveries supporting this are now considered dangerous and are no longer openly published.
Shortly thereafter, one group acquires unlimited funds and is able to discover exactly what is required to create an uncontrolled chain reaction. Over the next couple of years, it spends thousands of man-years to perform the vast engineering effort required to actually accomplish this.
Almost everybody not directly involved with this research and development effort was taken completely by surprise when the new phenomenon was publicly and dramatically demonstrated.
By contrast, AGI has been desired and actively and openly worked towards for decades and we haven't even decided what it is yet. And it's hard to imagine the intermediate non-threatening intelligences and their byproducts wouldn't be so impressive an advance that those working on them wouldn't be willing to share them. Without wishing to trivialise the immense amount of intellectual effort and industrial production that went into the Manhattan Project, sentience is a more complex goal than a chemical reaction. Or indeed an iPhone.
AlphaGo worked according to statistics, not lookup tables. Bit of a difference.
That said, theoreticians may not have been surprised, but there's a huuuuuge difference between what's doable in theory (sufficiently large neural nets are universal function approximators, after all), and what the resource requirements for problems we care about actually turn out to be. We should all have been fairly pleasantly surprised that AlphaGo required only a small data-center worth of graphics cards for training, and could then play on less hardware than that.
Well, if you have no way to tell whether something is going to happen or not, you don't prepare for it, because you can't justify spending the resources to prepare. Or rather, in a world of limited resources, you can't prepare for every single event that may or may not happen, no matter how important.
To put it plainly: you don't take your umbrella with you because you don't know whether it will rain or not. You take it because you think it might. Otherwise, everyone would be going around with umbrellas all the time, just because it's impossible to make a completely accurate prediction about the weather and you don't know for sure when it will start raining until the first drops fall.
In the same sense, if there's no way to tell when, or if, AGI will arrive, then it doesn't make any sense to start preparing for it right now. We might as well prepare for an alien invasion. Or for grey goo, or a vacuum metastability event (er, not that you can prepare for the latter...).
In fact, if AGI is going to happen and we can't predict it in time, then there's no point in even trying to prepare for it. Either we decide that the risk is too great and stop all AI research right now, or accept the risk and go on as we are.
You have to weigh the cost and the risk. Here the risk, however unlikely it might be, should warrant some extra preparation.
Let's instead look at the risks of boarding a plane. There's a very small chance that when you board a flight, instead of a plane that will fly you to your destination safe and sound, you're boarding a Flying Death Trap that will crash and burn, taking everybody onboard to their deaths.
The chances of boarding an FDT are very small, infinitesimal. The cost however may as well be infinite: if you are killed, it's game over, no more rewards, no way to recoup the cost.
What is the rational behaviour then? To not board your flight, because if you do board an FDT you will certainly die and pay an infinite cost? Most people, if they consider the question at all, seem to think that if the chance of paying cost X is really small, it doesn't matter how large X is.
So people keep boarding their flights, not knowing until the last moment whether they're on a plane or an FDT. Some do indeed board FDTs and die in aviation accidents; rarely, but they do.
The article however says that they shouldn't. Since there is maximal uncertainty at the point where a flight is boarded (you can't know whether it's a plane or an FDT until the very last moment), you shouldn't be boarding. You shouldn't fly. At all. Because there's a tiny chance you might die.
Is that a better analogy?
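The disagreement in this analogy is really about expected value, and the numbers do all the work. A sketch under loudly made-up assumptions (the per-flight risk and the finite cost figure below are illustrative placeholders, not real aviation statistics):

```python
# Expected loss = probability * cost. With a bounded cost, a tiny
# probability makes the expected loss negligible, so flying is rational.
# Treating the cost as unbounded breaks this arithmetic entirely,
# which is exactly where the analogy is contested.

p_crash   = 1e-7      # assumed per-flight risk (illustrative only)
cost_life = 1e7       # a finite valuation of the loss (illustrative only)

expected_loss = p_crash * cost_life
print(expected_loss)  # 1.0 -- comparable to a trivial fee, so people fly

# With an unbounded cost, no probability is small enough:
print(p_crash * float("inf"))  # inf, regardless of how tiny p_crash is
```

The FDT argument as stated quietly assumes the infinite-cost branch; with any finite valuation, boarding the flight is the rational choice.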
And I'm pretty sure I do understand what he's talking about. He's saying that the coming of AGI is entirely uncertain so we should act now. That's unreasonable in and of itself, and I don't need to go back and read all the history he's got with others to be able to tell that.
Edit: To reiterate why it's unreasonable- if we won't see AGI until it's already here, what, exactly, are we supposed to be preparing against? We won't know it even when we see it- so how will we know what to protect ourselves from?
Reasoning with uncertainty requires at least some amount of knowledge and the article goes to great lengths to point out there isn't any, in the case of AGI.
So there's no reasoning to be done, either. In that case, what are we talking about? "Beware of the unknown"? Well, OK. I don't know if the sky will fall on my head tomorrow so maybe I should stay home, just in case?
If the locker concept is valid, and we compare our 'clock' of the alpha rhythm of ~12 Hertz with the fastest computer clock of about ~12 gigahertz (1,000,000,000 times as fast), we can see we will be at a serious disadvantage once it starts to compete with us.
Such an AI will operate on its basic motivations at its full speed. We turn it on; it can then start to learn (I assume we will have pre-loaded its fully parallel, content-addressable memory with whatever we want of human knowledge, so it starts from there).
Will it operate properly or rationally? Or go insane? Being a set of boxes, it can be reset as needed, with updates to add sanity.
Then it will become a Mechanical Turk of great capability.
Will it become a dictator? Only if we permit it to have access to fools (us?). Will it become a killer machine? Only if we add guns and internal power so we cannot pull the plug.
We already see these lesser Turks in operation, they will get better and better. The man/woman who owns one could own the world via high speed trading - in truth, there will be many at high tech data combat.
May we live/die in interesting times...
Oh, and the first sign pretty much everyone had of the Manhattan Project was Hiroshima.
Clearly, by itself, the world will most likely not kill off humanity, since it hasn't happened in the thousands/millions of years we've been around. The one big thing that is changing is humanity itself and the technology we're making; that's the X factor, that's what statistically speaking has a chance of actually wiping us out.
Many of the people concerned about AGI are also concerned about e.g. manufactured viruses and other forms of technology.
Also, be careful not to confuse uncertain duration with more general uncertainty. They are related, but not the same.
> When I observe that there’s no fire alarm for AGI, I’m not saying that there’s no possible equivalent of smoke appearing from under a door.
> What I’m saying rather is that the smoke under the door is always going to be arguable; it is not going to be a clear and undeniable and absolute sign of fire; and so there is never going to be a fire alarm producing common knowledge that action is now due and socially acceptable.
Should give you the general idea.
I think there's no one alive today who has any idea how we are going to go from where we are today to AGI.
Recent advances are remarkable, but much more so if you're a specialist. The impact on most people's lives is much smaller (and it comes primarily from dumb automation, rather than actual intelligence).
Machine learning is not enough to get to AGI: relying on huge amounts of data and processing power is just not a sustainable way to keep increasing your capabilities.
All the success stories of the last 10 years are at least 20 years old (case in point: RNNs). Most of those successful techniques were found by sheer blind luck (famous example: backprop was not invented in the ANN community; it took twenty years for the idea to percolate all the way to them).
In the end nobody currently alive has a clue how we can get to AGI, or if that is even possible. Chances are, it will take many, many generations until we do, or a sudden, gigantic paradigm shift, of the kind that comes once every couple of centuries: think Newton or Einstein. Except AI is not physics. In statistical machine learning there is very little theory to guide the way, so people just try things hoping that something will work. And that's no way to make a quantum leap.
In this situation, to talk of the dangers of AGI is at least premature. Yes, it's not completely impossible that AGI will happen in our lifetimes. The same, however, can be said of an alien invasion. Should we start discussing setting up planetary defenses, when we haven't even found any sign of alien life yet?
By all means, let's have a conversation. There are people around whose job it is to have that sort of conversation. But let's all be aware (and let those people also be aware) that the conversation is most probably a couple hundred years early, and by the time it becomes truly relevant, things will have advanced so much that it will just look pointless.
He gives one definition that people have used before, about unaided machines performing every task at least as well as humans. But if you dwell on it a while, I'm sure you can find lots of disagreement about a) what that looks like and b) whether it is true or not (conditional on it being true to at least someone.)
Those who are running around screaming about the danger of AGI and why it should be regulated by the government before it is even here, are just scared that someone else may gain control of it before they do. This is too bad because anybody who is smart enough to figure out AGI is much smarter than they are.
Classical and operant conditioning are psychological concepts that aren't applicable to non-humans.
Your astonishment at what these systems can do tells me that you may have looked at cherry-picked positive results. So here's an article I found that cherry-picks negative results instead: 
Now of course this article is exaggerated too. Ideally, if a system is 95% accurate, you'd be looking at representative output from the system, with 95% good results and 5% bad ones, perhaps by running such a system yourself on a different set of images.
Now, generally I disagree with Yudkowsky on a lot of points, but I do think he raised some decent ones here.
The real question isn't whether AGI is possible, but whether humans are the fittest carrier of information for our DNA; and the fittest carrier seems to be technology in some shape or form, helped by things like deep learning.
My bet is always on evolution. And now that technology can learn it's IMO only a matter of time before we will experience another Cambrian explosion if we aren't already.
We humans are defined by our DNA, so are we not by definition the fittest carrier for it?
If humans can evolve from the basic physical building blocks of the universe, then why shouldn't AGI be possible, especially now that we have reached a point where computers can learn, i.e. have become pattern-recognizing feedback loops like us? Sure, there is some way to go yet, but there is absolutely no evidence that it shouldn't be possible.
To me, technology is a natural continuum of evolution, i.e. it's part of nature. The reason I believe this is that information is what really matters here, which is why we have evolved to become pattern-recognizing feedback loops, and why the most powerful innovation besides fire and the wheel seems to be the ability to simulate more or less anything around us by manipulating and storing information.
Our DNA is what made us possible. Other animals' DNA wasn't configured to turn them into self-aware entities. I believe that all biological life will be replaced by digital/silicon-based life because it's simply a better information carrier, and that is what evolution will always give preference to: better information carriers. "Technology", not humans, will explore the universe and escape the next big life-destroying asteroid or whatever else endangers the survival of the DNA.
And yes I am aware DNA is chemically based but technology will be able to simulate it. Whether there will be true transcendence between analog and digital is anyone's guess but I don't believe humans are the last step in evolution.
You know this how? Where is the science behind it?
Also, pattern detection is often raised in the way you just did, but it's really a distraction. Pattern detection just helps recognise things; it's not inherently related to the ability to reason about things. So you need both, but they are not the same thing either.
We don't really know what consciousness is or how it happens. We believe that it's possible to be highly intelligent without being conscious. I mean, I really hope that's true, personally, since we would hope to one day make an AGI that will carry out human desires, and we'd hope we weren't making a conscious entity which would be the equivalent of a slave.
"Who cares?" and "yes".
I mean that e.g. if we create a machine that can solve every thinking-related problem that humans can solve, then we can be certain that we have created artificial intelligence. But how are we supposed to ascertain that we have created something conscious, as in a machine with subjective experience? Strictly speaking I can't be certain that _you_ are conscious. (Also, why would we replace "AGI" with "AC", when people are looking to build something intelligent, irrespective of whether it has internal subjective experience?)
> I guess this is not the right place to ask or think.
That has not been my experience.
This is the very notion I'd like to challenge. First of all, there is nothing concrete here so I will make up some definitions.
For simplicity's sake, if you define thought as a way of iterating a large knowledge graph (assuming that a graph is not a grossly inefficient way of representing knowledge), and forming new knowledge (or making inferences) as a way of extending that graph through certain constraints (maybe axiomatic, maybe probabilistic) that somehow also exist within that graph, what goal would a graph have other than the ones you give to it? This would make AGI just an interactive machine.
And if you can't give it adequate goals that will at least make it pass a Turing test, what good is a graph that can be used for emergent inferences? My real gripe with that is that "it" isn't intelligent. "You" are intelligent, and "you" gave it goals. So subjectivity is, in my opinion, inescapable when you are talking about intelligence.
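The "thought as iterating a knowledge graph under constraints" picture can be made concrete with a toy triple store and a single axiomatic constraint. Everything here (the facts, the `infer` function, the transitivity rule) is invented for illustration:

```python
# A minimal knowledge graph as (subject, relation, object) triples,
# extended by one constraint: `is_a` is transitive. 'Inference' is then
# just iterating the graph to a fixed point. Note the machine pursues
# no goal here beyond the rule we handed it, which is the comment's point.

facts = {
    ("socrates", "is_a", "human"),
    ("human", "is_a", "mortal"),
    ("mortal", "is_a", "physical"),
}

def infer(triples):
    """Compute the transitive closure of `is_a` by fixed-point iteration."""
    triples = set(triples)
    while True:
        new = {(a, "is_a", c)
               for (a, r1, b) in triples if r1 == "is_a"
               for (b2, r2, c) in triples if r2 == "is_a" and b2 == b}
        new -= triples
        if not new:
            return triples
        triples |= new

closed = infer(facts)
print(("socrates", "is_a", "mortal") in closed)    # True: derived, not stated
print(("socrates", "is_a", "physical") in closed)  # True: two-step derivation
```

The graph "extends itself", but only along constraints an external intelligence supplied, which is one way to cash out the claim that subjectivity sneaks back in through the goals.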
I will concede that you can't know I have subjective experiences, but practically that's not a very useful thing to say. If it doesn't matter, why bring it up? If it does matter, why not use your past experience to have a belief that I am conscious despite that belief being subject to future modification? That's how I'd treat a perceived AC.
Blanket statements and short dismissals are great when their content is “that’s an interesting topic but not necessarily what everyone is trying to discuss right now.” Discussions on AI risk may not be augmented by understanding of subjective experience, or may require developments that cannot be acquired via even another 100 years of navel gazing on the subject. You’ve not even attempted to justify why this would be the case, and instead started complaining right off the bat that nobody had the inclination to immediately discuss your favorite tie-in to the subject immediately.
You’ll notice that people were actually happy to talk about consciousness once you brought it up, and probably would have been even happier to do so if you didn’t start off with such a curmudgeonly tone and spend a bunch of time accusing everyone of intellectual dishonesty because their interests differ from yours.
I'm happy that you feel you are so progressive in multiple disciplines and an expert on online behavior. If you think what I said is off-topic, that's just your opinion, man.
(if I am getting this right) those would be the minimal criteria for being able to (contentiously) prove if a person were conscious or not.
But the thing about humans is that they cannot simulate 2 consciousnesses simultaneously (apart from the culturally-designated illness known as multiple personality disorder), nor can we import another's consciousness. Nor do we even particularly (barring Tononi's work, or Tegmark's work) know what consciousness is. Those two properties are (probably) never going to be human-capable. In that regard, humans are unlikely to ever be a (or 'the') kind of being used to determine whether or not something is conscious; if ever it will be possible, that will be a machine's task.
Dennett denies Nagel's claim that the bat's consciousness is inaccessible, contending that any "interesting or theoretically important" features of a bat's consciousness would be amenable to third-person observation. For instance, it is clear that bats cannot detect objects more than a few meters away because echolocation has a limited range. He holds that any similar aspects of its experiences could be gleaned by further scientific experiments. --
Heterophenomenology ("phenomenology of another, not oneself") is a term coined by Daniel Dennett to describe an explicitly third-person, scientific approach to the study of consciousness and other mental phenomena. It consists of applying the scientific method with an anthropological bent, combining the subject's self-reports with all other available evidence to determine their mental state. The goal is to discover how the subject sees the world him- or herself, without taking the accuracy of the subject's view for granted. -- https://en.wikipedia.org/wiki/Heterophenomenology
How? You say "query", but what would the query look like?
In 1870, Huxley conducted a case study on a French soldier who had sustained a shot in the Franco-Prussian War that fractured his left parietal bone. Every few weeks the soldier would enter a trance-like state, smoking, dressing himself, and aiming his cane like a rifle all while being insensitive to pins, electric shocks, odorous substances, vinegar, noise, and certain light conditions. Huxley used this study to show that consciousness was not necessary to execute these purposeful actions, justifying the assumption that humans are insensible machines. Huxley’s mechanistic attitude towards the body convinced him that the brain alone causes behavior.
A large body of neurophysiological data seems to support epiphenomenalism. Some of the oldest such data is the Bereitschaftspotential or "readiness potential" in which electrical activity related to voluntary actions can be recorded up to two seconds before the subject is aware of making a decision to perform the action. More recently Benjamin Libet et al. (1979) have shown that it can take 0.5 seconds before a stimulus becomes part of conscious experience even though subjects can respond to the stimulus in reaction time tests within 200 milliseconds.
Consciousness usually means one of two things: either "being aware" or "the experience of having an inner voice".
And some people don't have an inner monologue. It's not specifically mentioned in http://slatestarcodex.com/2014/03/17/what-universal-human-ex... but it's of that same class of thing; there are certainly hits on Google for people claiming not to have an inner voice, and I didn't even bother reading the article on psychologytoday about it.
I didn't say that all "people" have inner voices (by which I meant inner life). But by that definition, it's a requirement to be conscious.
Let me set the record straight, though. I'm not claiming to not have an inner life; I'm claiming to not have an inner voice, except if I'm reading "out loud" internally or whatnot. (Which is useful when writing, but not necessary.)
I just use other modalities.
To the best of my knowledge there's only one guy, a philosopher, who's claimed to have no inner life at all--and I don't think I believe him.
E.g. what if more intelligent aliens truly believed that the only purpose in the world is proving more mathematical theorems? And decided to turn all of the planet into a giant math-proving machine? Destroying all the planet, all the animals, all the humans, all the art, whatever, all to prove more maths?
I love maths, but I'd consider that a pretty bad outcome. And there's no reason that I've ever seen to think that more intelligence implies anything about goals.
Why would such AGI have the means of turning all of the planet into anything? I mean, sure, I also think the Terminator is a decent movie, but that doesn't make it a reasonable blueprint of the future.
Your second paragraph: presumably you think it's literally impossible to invent nanotechnology that makes you omnipotent, but I'm not willing to rate that as less likely than 5%. Something clever enough could probably just manipulate existing social structures to kill all the humans (something very intelligent that truly wanted nuclear war could probably make it happen). Just a little bit of tweaking to smallpox might be able to destroy humanity anyway. Even just the right new religion might do it (witness the great purges by pretty much every religion ever). None of these suggestions is intended as "this is a plausible way of killing those inconvenient humans"; but if each of my suggestions (nanotech, nuclear war, pandemic, grand new idea like a religion) has a five percent chance of working, that's already an 18.5% chance that at least one of them would work. Once the humans are dead, of course, it has all the time in the world to achieve all its goals.
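As an aside on the arithmetic: four independent routes at 5% each gives roughly an 18.5% chance that at least one succeeds (and about 17.1% that exactly one does). A quick check, taking the 5% per-route odds as the commenter's assumption:

```python
# Four independent routes (nanotech, nuclear war, pandemic, new religion),
# each assumed to have a 5% chance of working.
p = 0.05
n = 4

# Probability that at least one of the four succeeds: 1 - 0.95^4
p_at_least_one = 1 - (1 - p) ** n

# For contrast, probability that exactly one succeeds: 4 * 0.05 * 0.95^3
p_exactly_one = n * p * (1 - p) ** (n - 1)

print(f"at least one: {p_at_least_one:.3f}")  # 0.185
print(f"exactly one:  {p_exactly_one:.3f}")   # 0.171
```

So the 18.5% figure corresponds to "at least one" of the routes working, not "exactly one".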
There's also not much to fear from slightly-smarter-than humans machines.
But why would you think that's the case? Because we're the best evolution has come up with so far, here on Earth?
"Why would such AGI have the means of turning all of the planet into anything?"
An AI as much smarter than a human as a human is smarter than a dog would think of a strategy far, far smarter than anything you, or I, or human civilization could come up with.
Other times it's Las Vegas, and we shut up because we come up empty -- at best talking about gun control, as if someone choking 3 people on a bus would be substantially better and not still something where we come up empty.
> Why would such AGI have the means of turning all of the planet into anything?
How does getting off track with that address the main point, namely "Intelligence does not imply goals"? Are you going to prove that while that may be true, there just can't be a way for that to ever have a bad outcome because "Bob"?
> I also think the Terminator is a decent movie, but that doesn't make it a reasonable blueprint of the future.
Neither is "Bob".
Meanwhile people turn to RL to approximate non-differentiable functions; so far it doesn't seem to apply to real-world tasks. Give or take 2 to 3 years, if there isn't some major breakthrough, like a human-level StarCraft agent, we can then officially announce that this round of the DL revolution or renaissance is over.
DeepMind and OpenAI have been investigating approaches from cognitive science in recent months. In particular they seem interested in evolutionary algorithms.
DL applications are still emerging though, such as the company that, a few days ago, demonstrated using GANs to generate images of models wearing apparel.
Really? Got any links? That might be exciting to read.
Say generally available computing power was instantly 1 million times greater. How much closer would that put us to AGI?
It's not even clear how much the recent impressive machine learning feats will even serve as precursors or building blocks for the real AGI solutions. AGI is so much less of a hard-coded problem than what's being done now that the real solutions could require radical changes in direction. How do we know it's even fair to use these feats as part of the argument?
The median of these final responses could then be taken as representing the nearest thing to a group consensus. In the case of the high-IQ machine, this median turned out to be the year 1990, with a final interquartile range from 1985 to 2000. The procedure thus caused the median to move to a much earlier date and the interquartile range to shrink considerably, presumably influenced by convincing arguments.
-- Analysis of the future: The Delphi Method. (1967)
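The aggregation step the excerpt describes is simple: summarize the panel's estimates with the median and use the interquartile range to measure residual disagreement. A minimal sketch with hypothetical year estimates (not the 1967 panel's actual data):

```python
from statistics import median, quantiles

# Hypothetical Delphi-round estimates for the year a "high-IQ machine"
# arrives; each entry is one panelist's final answer.
estimates = [1985, 1988, 1990, 1990, 1992, 1995, 2000]

# The median stands in for the group consensus.
consensus = median(estimates)

# quantiles(n=4) returns the three quartile cut points; the first and
# third bound the interquartile range (the spread of the middle half).
q1, _, q3 = quantiles(estimates, n=4)

print(consensus)   # 1990
print((q1, q3))    # IQR endpoints
```

Across Delphi rounds, panelists see the group's median and IQR (plus dissenting arguments) and revise, which is why the excerpt notes both the median shifting and the IQR shrinking.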
The article builds a convincing case for itself (at the cost of huge complexity); it contains no contradictions that I can discern. It is reasonable to prepare for the possibility of a future event (say, AGI, or Jesus returning to earth) by thinking about it now.
All interests and future predictions are different, but equally valid. To me, it feels like the Wright Brothers thinking about rotating safety valves in space before they had even taken off on their first flight, but that should not stop the author and supporters in any way: science, futurism, and philosophy move in small steps, and it may be a good time for some to start walking. Just make sure to properly define the end goal (AKA the moment AI becomes AGI), or we may keep on truckin' forever, never closing the loop of our hostile AGI-created simulations creating a first friendly AGI.
The reason luminaries are not conservative in their estimates, or else remain silent, is that capturing the public's imagination is good for funding and recognition (not a negative assessment, btw).
With enough positive press, everything seems possible to the layman; this creates a belief in the unlimited possibilities of "the future," but also in the inherent "dangers" of this imagined future that "must be taken into account."
Most arguments made in the article fall flat because of false analogies. Analogies can only ever be used to illustrate, never to derive conclusions from.
The AI winter is over, and good progress is being made in a lot of fields, but AGI is nowhere on the horizon. In the absence of evidence to the contrary, AGI remains, for the foreseeable future, in the realm of philosophy, science fiction, and regrettably, alarmist articles.
For now, we humans should feel totally safe.