Saying we don't have mouse-level AGI is simply saying human AGI is more than 5 years away, which isn't a remotely contentious statement.
The difference in intelligence between an amoeba and a mouse is enormous compared to the difference between a mouse and us. People greatly underappreciate how intelligent, and how close to human, a mouse, bird, or pig is in the grand scheme of things. Emotions, behaviors, motivations, goal setting, memory: it's all there already. A flatworm, an ant, a fly - those are the large stepping-stone accomplishments.
Think about the rate of very-long-distance communication in humans. It took us tens of thousands of years to get to 2.4 kbps dial-up modems, and only a few decades to get to common 300 Mbps. The important signal is seeing a 100 bps modem, not a 100 Mbps connection.
So the real question is how long until we can replicate a worm's intelligence?
But how long did it take nature to get from a mammal with mouse-level intelligence to a human-level brain? I think 200-ish million years.
You might be right that a mouse is a good indicator of high-level intelligence and that you don't need human-level intelligence to make a good AI, but there might still be some considerable way to go until we have an AI that can significantly outperform us.
[Edit - I agree that natural selection wasn't aiming or directed, and thus wasn't necessarily as fast as a directed effort could be. But a human's higher brain functions might not be simple incremental improvements over a mouse's, and there could still be a long way to go]
How long did it take nature to go from T-Rex to chickens?
There’s no reason to believe human-level intelligence to be an inevitable result of evolution. It just happened.
Not that I agree with the sentiment in the GP, but it took a relatively short time from the first development of multicellular life until nervous systems developed and an even shorter amount of time to go from small mammal intelligence to human intelligence. However, evolution isn't about "progress" as we understand it. The most we can say with regards to intelligence and evolution is that human intelligence satisfied a niche that existed at a certain place and time.
So I think the estimate of 3-5 years might be realistic, but the artificial mouse is a long way away, IMO.
I'm not sure creating an intelligence that even supersedes our own will lead to anything good. If anything I'd expect things to get even more perverse.
Evolution took a billion years to evolve multi-cellular life, but the jump from apes to humans took far less than a million years.
And we can't reliably extrapolate growth in computing power more than a few years into the future. It's possible that the curve isn't really exponential, but rather an S-curve which will eventually flatten out.
I'm more comfortable predicting that computing power will continue to grow than predicting that it will peter out and everyone will simply sit back and be happy with what we've got.
I didn't say we're pushing up against the limits of processing power, I said that processing power is not growing exponentially, which is true, despite the gains that other advancements and innovation have provided.
We're moving faster than Moore's Law, see "Hyper Moore’s Law":
Why? What challenges?
Can you make a Turing Ant without a Turing Ant Colony?
For context, the worm (C. elegans, at least) has a very stereotyped nervous system with 302 neurons. The anatomy, down to the cellular level, is known incredibly well. Their behavioral repertoire is not huge and they're fairly easy to study. Nevertheless, we can't even simulate a worm very accurately. (There was a good Twitter thread about why yesterday: https://twitter.com/OdedRechavi/status/1086992699528544256)
The human eyeball has about 120M rods and 6.5M cones, and projects to a brain containing ~86B neurons, which is about 8-9 orders of magnitude more cells than the worm's 302. The number of possible interactions scales even faster. In summary, we're not close, not at all...
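To make concrete how crude our standard point-neuron models are next to even one real C. elegans cell, here's a toy sketch (mine, purely illustrative) of a leaky integrate-and-fire neuron, the workhorse of large-scale simulations. It has a single state variable per cell; a real neuron has morphology, gap junctions, neuromodulators, and gene expression that this ignores entirely. All parameter values are arbitrary textbook-ish choices, not measurements:

    # Toy leaky integrate-and-fire neuron: one state variable per cell.
    dt, tau = 0.1, 10.0                               # ms time step, membrane time constant
    v_rest, v_thresh, v_reset = -70.0, -54.0, -80.0   # mV
    drive = 20.0                                      # constant input, arbitrary units
    v, spike_times = v_rest, []
    for step in range(1000):                          # simulate 100 ms
        v += (dt / tau) * (v_rest - v + drive)        # leak toward rest, plus input
        if v >= v_thresh:                             # threshold crossing: spike and reset
            spike_times.append(step * dt)
            v = v_reset
    print(len(spike_times), "spikes in 100 ms")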
However, I have to object in one respect regarding the brain.
To me, there's an unanswered question: Is the rest of the human brain as simple and "generic" as the convolutional neural networks we made inspired by the vision system? Or is each network's architecture and "algorithms" developed specifically for a task? In the latter case we might still be a very long way from anything resembling AGI.
However, my personal estimation is that most of the things we do can be modeled using existing tools when scaled and modified appropriately (i.e., RNNs). There's also the ugly job of stitching those systems together, but it's not that different from what happens in nature.
(parallel arguments but for human/mouse level complexity of bodies and stimuli responded to would suggest that whole brain emulation is going to be an incredibly painful way to attempt to achieve AGI)
It has those things. There’s a video of its simulated body wiggling around on the project’s GitHub repository.
Except possibly food; I was skimming the page.
In other words, the simulated worm brain is not yet even capable of causing the wiggling seen in the video. So the question remains: what can the simulated neurons do, if anything?
Plus, I think all the marketing use of "AI" is giving the average person a very distorted and inflated view of what software is actually doing, and what it's capable of.
It's a buzzword, full stop.
That doesn't make him wrong, but that's the personal bias he's operating under.
I view all the politicians who have the power to advance healthcare research but don't do it as stupid.
Just living longer doesn't mean that humans become any wiser on average. There may be some benefits from longer-lasting first-hand experience of historical events (pushing the 'historical horizon' past 100 years), but to me it's like switching from a simulated annealing method (or stochastic gradient descent) to a simple local gradient descent in terms of getting society/culture/technology to adapt and find anything better than the status quo.
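To unpack the metaphor: annealing-style search sometimes accepts worse states and so can escape local optima, while pure local descent cannot. A minimal illustrative sketch (the toy landscape and all parameters here are my own assumptions):

    import math, random

    def bumpy(x):                      # toy 1-D landscape with many local minima
        return x * x + 10 * math.sin(3 * x)

    def greedy_descent(x, iters=5000, step=0.1):
        for _ in range(iters):
            c = x + random.uniform(-step, step)
            if bumpy(c) < bumpy(x):    # only ever accepts improvements
                x = c
        return x

    def simulated_annealing(x, iters=5000, step=1.0, t0=10.0):
        for i in range(iters):
            t = t0 * (1 - i / iters) + 1e-9           # cooling schedule
            c = x + random.uniform(-step, step)
            d = bumpy(c) - bumpy(x)
            if d < 0 or random.random() < math.exp(-d / t):
                x = c                  # sometimes accepts a worse state
        return x

    print(greedy_descent(8.0), simulated_annealing(8.0))

Run a few times: the greedy version tends to get stuck in a nearby dip, the annealed one usually finds a much deeper basin.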
Worst case, such a technology serves to create an almost eternal ruling class. Best case, it results in societies with either two classes of people (those who may extend their lives longer and those who may not) or societies that tightly regulate who may have children.
Getting rid of suffering and cancer is one thing; getting rid of natural death drags a long tail of consequences behind it.
You should read (or watch) Altered Carbon.
There's no solution to death. You can only put it off, but something will assuredly kill you in time. If it's not aging, then it will be cancer, heart disease, an accident, etc. Ultimately, entropy will get you one way or another.
As for cancer and heart disease, both are linked to aging or genetically inherited mutations. Heart disease is a natural result of damage in the human body not being reversed.
And in the meantime he can sell his vitamins and supplements to "make people live longer" despite zero evidence. Good business both ways.
So strange, considering that non-existence is the one thing that every conscious being is guaranteed to never experience. Why run from something that can never catch you?
What a shit life that must be. And I say this in a very sympathetic way. However, feeling you are almost within reach of eternal life, but not being sure you'll make it in time, being constantly afraid of an accident or illness taking that away from you... It's a recipe for anguish and panic.
Dying is not that terrible when you know everybody else will too, sooner or later; but try accepting the idea of being among the last to die...
Wait But Why had a nice article series digging a bit deeper into that.
Without the baggage of the limbic system and dopamine-seeking behaviors, it's quite easy to argue that an artificial intelligence is potentially capable of even greater degrees of agency than humans.
Agency in the context of a machine seems purposefully impossible to reach - its decisions are always somehow tied back to how it was programmed to react.
I don't see much difference between that and Open AI's engine: https://openai.com/five/. Watch some of those games and you definitely see the same dynamic formulation and complex decision-making, none of which was directly programmed.
I'm not trying to make a Chinese room argument (which I don't buy), implying there's some hidden "spark" needed. I'm just saying that currently existing "AI" programs are pretty far from mouse brain, both in individual capabilities and the way they're deployed together (i.e. they're not). For instance, deep learning is to mice brains what a sensor/DSP stack is to a processor. We seem to be making progress in higher-level processing of inputs, but what's lacking is the "meat" that would turn it into a set of behaviors giving rise to a thinking entity.
Ultimately it just has to be able to convince humans that "wow, there's an actual thinking and learning 'being' in there."
Well, in these definitions of intelligence, what one often ends up with is some combination of "deal robustly with its environment" and a bunch of categories defined in terms of each other. That's not to say categories/qualities/terms like "agency", "free will", "feel they can connect with", "find novel" and such are unimportant. It's just that people using the terms mostly couldn't give mathematically/computationally exact definitions of them. And that matters for any complete modeling of these things.
The test set has to be unknown to the system developers.
If the system can perform the unknown tasks without further input from researchers, in the same way that a mouse can, then we have some level of generalizable intelligence.
Also, is it ever really the "first time" for a mouse when behavior has been ingrained and tuned over millions of years of evolution? Is this different than training an algorithm?
My point is just that it's really hard to define these tasks and how to evaluate performance for a machine and a mouse.
I think that replicating mouse-level adaptability in an intelligent agent while allowing 'inherited' behavioral traits will already be an achievement. And it will probably take us quite a while.
So basically, not only do we not have a road map, we don't know where we are going. That may be a reason for an extreme pessimism or it might be a reason for extreme uncertainty. Is adapting to the environment without prompting a small piece or a big piece? Could intelligence be a simple algorithm no one has put forward yet? If we don't know the nature of intelligence, we can't answer this sort of question with any certainty either way.
No one has put forward a broadly convincing road to intelligence. But maybe some of the so-far unconvincing roads could turn out to be right.
Perhaps, one day we'll "accidentally" figure out how to make it, and then we try to figure out what it is, because figuring out what it is in wetware hasn't been easy. Or maybe we'll figure out how to make it, but never really understand what it is.
It seems to me that if we ever come up with AI that can exceed human intelligence (whatever we choose that to mean), we might not ever be able to understand completely how it really works.
Even more interesting, if we were to achieve this AI, we also might not be able to make use of it, because if it's truly "intelligent", then it will have a free will, and it might not wish to cooperate with us.
It's also not obvious why such a machine would not immediately self-terminate in the absence of a hugely complex system of scaffolding to shape and filter the raw input of existence.
I have not experienced mental illness myself, but my study and my understanding lead me to be extremely skeptical of a mind exposed to raw existence without filter. It appears to be a terrifying and unbearable state.
Every day I run programs that happily die alone, on their own. Painlessly. All of the "pain" we feel is an artifact of our evolution, same with having a will to live. The only reason we animals fight so hard to stay alive is that animals which didn't, died off; ergo, only animals with a will to live survived and reproduced. Those same forces don't apply to computer programs.
Even the concept of "terror." Who is going to program terror in? What benefit would it have? Why not wire programs to be "happy" when helping us?
Windows ME didn't last too long in the wild. Same for CPU designs with bugs or exploits. I have to respectfully disagree on this, though I can see where you're coming from, given an individual's agency to run what they want. I think if you take a larger population view, you'll see the competitive pressures on these systems.
Who programmed it into you? It programmed itself into you, because it was beneficial to your survival. Maybe terrified programs are better workers? Why set up a program to experience anything that isn't useful to the user? (Of course, as soon as we've written a program we know is conscious to work for us, we've basically created a slave that understands it is a slave, which probably is a terrible thing, morally.)
There is no reason to believe a human-level AI would not develop mental problems just like us. Given their unlimited lifespan, it could be inevitable.
Robots that get dementia will be decommissioned by their fellow AI once they are shown to no longer be fit for duty.
In my view, this is all predicated on much more efficient computation. Our computers are horribly inefficient. It wasn't until the GPU and relatively cheap computation that we made a massive leap in ML/AI. A few researchers understood the techniques prior to the GPU, but they couldn't garner the interest due to the amount of computation necessary to make something interesting.
If real human intelligence is preprogrammed to a massively large extent, sounds like you are holding simulated human intelligence to a double standard.
Yet, nearly every human being can be taught how to program... But we aren't anywhere close to building an AI that can.
If such a machine can exist (and I believe it can, as computing power increases), then AI surely will follow sooner rather than later.
Either way I feel like the task of achieving AGI through simulating human intelligence is probably easier, since we have billions of examples of this type of intelligence surrounding us. Granted, even though we're immersed and surrounded by it, it's kind of absurd that we still can't really model it.
But how do you know that human intelligence can be made super? Maybe there are limitations to human intelligence, and simulating it will not get us a superintelligence.
> It's almost an absurdity of our existence that we're immersed and surrounded by it yet still can't model it.
Good point. However, I think a facet of intelligence is how good a model of the 'real' world the being in question can create. Humans do a very good job compared to most animals, but there's plenty of room for improvement since humans only have limited data to model with.
A machine can have input from basically an unlimited number of sensors, including things a human mind doesn't cope with (like EM radiation outside the visible spectrum). Therefore, I postulate that an AI that simulates humans won't beat an AI that's built from the ground up to take advantage of more data.
- Complete behavioral reverse engineering of biological neurons, and neuronal clusters.
- Detailed connectome of the mammalian brain, e.g., first that of a mouse, then a cat, and finally that of a human.
- Replication of the above two in functioning electronic form.
Once you have this in place, it's not hard to see that the subsequent calibration and testing of such a system would generate a new body of knowledge at an unprecedented rate. We may not immediately convert such a working system into macroscopic behavior resembling its biological counterpart, but it'll happen within a matter of years after that.
What ML/DL/RL folks are doing is only going to hasten this, by eliminating the need to carry out all of the above-mentioned steps.
All you're saying is that if we knew exactly how humans work, we could build one. Seems like a tautology to me.
If I knew the exact quantum state of the Universe at the Big Bang, I could figure out exactly how the Universe evolved, but that's never going to happen either.
I think complete reverse engineering of the kind you are describing will not be possible. We can only try to reproduce the same outputs for the same inputs. But I don't think we'll be able to fully define what happens in the black box in between.
We might come up with something that works similarly, and can do great things with it, but I don't think AI can be invented the way you are describing.
If the best we can do with the brain is simulate it at the level of connectome/neurons/synapses - thereby creating a system as complex as the brain - then do we really 'know' it?
In contrast, something like breastfeeding really is an instinctive behaviour that infants can do automatically without being taught.
Are you really arguing that we don't have an instinct for acquiring language?
> My book assesses the many arguments used to justify the language-instinct claim, and it shows that every one of those arguments is wrong. Either the logic is fallacious, or the factual data are incorrect (or, sometimes, both). The evidence points the other way. Children are good at learning languages, because people are good at learning _anything_ that life throws at us — not because we have fixed structures of knowledge built-in.
> A new chapter in this edition analyses a database of English as actually used by a cross-section of the population in everyday conversation. The patterns of real-life usage contradict the claims made by believers in a “language instinct”.
> The new edition includes many further changes and additions, responding to critics and taking account of recent research. It has a preface by Paul M. Postal of New York University.
> The ‘Language Instinct’ Debate ends by posing the question “How could such poor arguments have passed muster for so long?”
And how do you explain the fact that humans can only gain native fluency if they learn a language before a certain age? Or the fact that zero instruction is required for children to learn to speak a language fluently? Or that children of immigrants will always prefer to speak in the language of their peers (rather than their parents)? Or that children of two separate groups of immigrants, when mixed socially, will spontaneously create a creole language?
I didn't say that, and I think you know I didn't say that.
I'm not going to engage in a discussion where you beat up on your imagined strawman.
You can go read the literature on language acquisition at your convenience. My understanding (as stated above) is that this is an unsettled question and research is ongoing.
Probably continuing with DeepMind's work shown here
and discussed here https://news.ycombinator.com/item?id=17313937
OK, it's not at human level, but it shows networks figuring out a 3D model of their environment.
I would at least put it in the "very likely" box that computers can learn to be the same kind of pattern-recognizing feedback loops that we are, even without humans understanding the brain completely - just as we became conscious and self-aware without any "programmer".
"Computers" don't need to be like us to become intelligent, they don't actually need to reproduce the lungs and the intestine as we have it, they are in many ways free of those restrictions.
They might be "concerned" with very different things than we are and might even not really care about nature to survive. All they need is energy.
From the perspective of embedded cognitive systems, intelligence is not an abstract property but a faculty derived from an agent embedded in a world, with an ability to act upon that world in specific ways. This is reiterated in cognitive linguistics by the metaphor work of Lakoff et al. I take this to mean that the question of AGI itself is missing the point. To wit, all recent provoking demos of AI (AlphaGo, DOTA) are embedded within specific environments and actions of a model-specified agent. It is then extrapolated that AGI will appear automatically, induced by a series of more and more competent AIs.
Incremental evolution as a means of induction, as we know from biology, can take a long time, and the landscape is complex. Hence the view that it might be coming, but not soon and not with a roadmap.
I've seen the singularity described as "The Rapture for nerds", which is pretty apt. It's going to be amazing, the future is so bright, we won't have to worry about any problems plaguing mankind now, and it's just around the corner! Any day now! Aaaaaaany day now!
Exhausting. Maybe we'll get a future AI-run utopia a la The Culture. Maybe not.
I like this term. Going to use it any chance I get now.
Much of the skepticism isn't about whether this can be done in theory. It's about how close we are given the current state of the art (beating humans at Go and chess, deep NNs, etc.).
One can be (I am) simultaneously blown away by these accomplishments and believe that we have hardly scratched the surface of AGI.
And while this isn't necessarily a deal breaker for AGI, it's not encouraging that we still understand close to nothing about consciousness.
> I don’t think anyone here disputes the logical possibility
> Much of the skepticism isn't about whether this can be done in theory
The math tells us it cannot be done with computers. Read Penrose.
I'm familiar with Penrose.
He's interesting, and I don't have a firm opinion on this (ie maybe it turns out it is impossible), but presenting the case as "the question of AGI has been settled by math" is really misleading. This is an area of open debate among philosophers, mathematicians, physicists, and evolutionary biologists.
You seem to be missing a key ingredient: billions of years and endless branching iterations with the totality of all non-sequestered resources available. Even if you scale that to account for the more rapid iteration theoretically possible with machines, it doesn’t paint a rosy picture for any of our lifetimes.
Plus, in organisms the software and hardware are both subject to mutation, and exist in the context of a global system of competition for resources. That only tangentially resembles work done on AGI, and only in the software component. We can only design and build hardware so quickly, and that adds more time. I’m not hearing pushback against the possibility of AGI, just singularity “any day now” claims that seem mostly calibrated to sell books and seminars.
And similarly, I’d love to understand why you’d believe in something for which there’s zero evidence and ask with incredulity why others don’t also believe it. Isn’t it reasonable to be skeptical until it actually happens? Isn’t it possible that we don’t understand what intelligence is, and that there could be an undiscovered proof that computers can never attain it? It seems too early to call.
Computers haven’t to date ever done anything on their own, what reason is there to think they ever will? We made them. Neural networks are nothing more than high dimensional least squares. Maybe humans are also least squares, but my money is on there being more to it than that.
> ...they don’t actually need to reproduce the lungs and the intestine...
Obviously, but the one thing all intelligent life does is reproduce, and it has a survival instinct. No computers have a survival instinct, and thus no reason to get smarter that we don’t imbue them with artificially.
Interestingly, people that spend a lot of time studying their own brain by meditating are usually very confident that what most people call "free will" does not exist - is merely an illusion.
Not to mention that physics tells us we should be simulable; after all, physics knows only about deterministic and random processes.
It's already implemented in at least one physical system.
Aren’t there some kinds of math that are proven to be unable to solve some problems while other kinds of math can? Trisecting an angle can’t be done with a ruler and compass, and a general 5th-order polynomial can’t be solved analytically. Both of these things can be done numerically.
Is it possible then that some physical systems can’t act in the same ways as other physical systems? A granite rock can never burn at the same temperature as a tree, but they’re both physical objects. Is it possible that binary logic on silicon doesn’t have the physical properties for intelligence that animal brains do?
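On the "numerically, yes" point: the general quintic has no solution in radicals (Abel-Ruffini), yet a few Newton iterations nail a root to machine precision. A minimal sketch, using x^5 - x - 1 = 0 as an example equation of my choosing:

    def newton(f, df, x, tol=1e-12, max_iter=100):
        for _ in range(max_iter):
            step = f(x) / df(x)    # Newton update: x -= f(x) / f'(x)
            x -= step
            if abs(step) < tol:
                return x
        raise RuntimeError("did not converge")

    root = newton(lambda x: x**5 - x - 1,    # f(x)
                  lambda x: 5 * x**4 - 1,    # f'(x)
                  x=1.0)
    print(root)   # ~1.1673039782614187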
There is solid evidence that teams of mathematicians, scientists and engineers will be able to catch up with what nature has wrought in the next few centuries, if not much faster (decades on the optimistic end). Intelligence could be one of the easiest parts of biology to replicate, given how shonky human intelligence is when tested against objective standards.
For my part I have faith that computers are great at memorization, and will continue to improve on that front. However, I'm less convinced on their ability to "understand", which is admittedly poorly defined, but intuitive. It seems to me there's still a missing piece between machine learning (essentially all things that are called "AI" these days) and the kind of generalization that we expect out of even a human 1 year old or your average vertebrate.
But that can be fixed in simulation. That's why most RL research (RL being the closest branch to AGI) is centered on games. Games are simple environments we can provide to the AI agents today. In the future I don't see why a virtual environment could not be realistic, and AI agents able to 'understand'.
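For what that looks like in practice, here's a minimal agent-environment loop in the Gym style most RL research uses. This sketch assumes the gymnasium package and its bundled CartPole-v1 environment; the random policy is just a stand-in for a learned agent:

    import gymnasium as gym

    env = gym.make("CartPole-v1")
    obs, info = env.reset(seed=0)
    for _ in range(500):
        action = env.action_space.sample()   # placeholder for a learned policy
        obs, reward, terminated, truncated, info = env.step(action)
        if terminated or truncated:          # episode over: start a new one
            obs, info = env.reset()
    env.close()

The point of the interface is exactly the one made above: the environment can be anything from a toy cart-pole to an arbitrarily rich virtual world, and the agent-side code doesn't change.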
...is strong AI a more sophisticated invention than cells? Is AI more sophisticated than DNA? Is AI a more sophisticated invention than a habitable orb of magma orbiting a nuclear explosion in space? It's all crazy. It's all amazing. ...We are all lucky enough to be here for a little while to see whatever this is. We're a point on a stream.
Even more to the point, the cycle time for human evolution is at a minimum 14 years (the biological limit to reproduce) and around 25 years (today's societal constraints), whereas the cycle time for new computer generations is, what, 12-18 months?
So at best every 14 years we get a new 'human model' with some random variations. Meanwhile, in that time frame our computing hardware improves by roughly 2000x (about eleven doublings at a ~15-month cadence), and hopefully improvements in software accelerate that further.
Again, our own consciousness wasn't created; it emerged, which means that we are a product of this emergence.
Remember that we are also constrained by our own intelligence. Computers might not be the problem.
You just described AI in the 90s: artificial life, subsumption architecture, evolutionary algorithms. Theoretically we should be able to evolve intelligence, but the search space is impossibly large and we don't understand why life is evolvable. Even if we did evolve an AI through a simulation of evolution, there's only a small chance we would understand it.
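For reference, the core loop of the evolutionary-algorithm approach mentioned above is tiny; the hard part is exactly what's said here, the size of the search space and the evolvability of the representation. A toy sketch (the bit-string genome and the fitness function are mine, purely for illustration; real ALife work evolves behaviors or circuits, but the loop has this shape):

    import random

    GENOME_LEN, POP, KEEP = 32, 50, 10

    def fitness(g):                 # toy objective: count of 1-bits
        return sum(g)

    def mutate(g, rate=0.05):       # flip each bit with small probability
        return [1 - b if random.random() < rate else b for b in g]

    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]
    for gen in range(100):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:KEEP]                          # truncation selection
        pop = [mutate(random.choice(parents)) for _ in range(POP)]

    print(fitness(max(pop, key=fitness)), "/", GENOME_LEN)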
It took nature something like 4 billion years to come up with the human brain, in a single species... We, including Kurzweil, think that this can be replicated in, say, 100 years; not even that, according to the singularity: 40 years. I seriously doubt it. Can the human brain eventually be surpassed? Yes, but it's very likely that we will go extinct before then.
How long did it take evolution to build a structure taller than 100 m?
Technology just moves on different timescales, with different methods and different objectives. Evolution is slow and dumb, and not goal oriented.
In some domains, technology might never catch up to biology. In others we've beaten it soundly.
Consciousness transfer is probably tens of thousands, if not millions, of years away.
A comment above about being as intelligent as a mouse seems to touch on this problem domain rather elegantly : "navigate the forest floor looking for food and avoid predators".
We have no evidence that life evolved from inorganic matter (i.e. that life 'began' at some point). That's a widely made assumption but it remains only that. The alternative is that life has always existed (and continually evolved).
Also, a lot of what he bases his claims on is unexamined junk science (like his nutty health books, but also extending into specific technologies). Let's not swallow everything he says just because he helped invent OCR. https://en.wikipedia.org/wiki/Ray_Kurzweil#Criticism
In 2005, Kurzweil published The Singularity Is Near and predicted this would be the state of the world in the year 2030: "Nanobot technology will provide fully immersive, totally convincing virtual reality. Nanobots will take up positions in close physical proximity to every interneuronal connection coming from our senses. If we want to experience real reality, the nanobots just stay in position (in the capillaries) and do nothing. If we want to enter virtual reality, they suppress all of the inputs coming from our actual senses and replace them with the signals that would be appropriate for the virtual environment. Your brain experiences these signals as if they came from your physical body."
That is not happening by the year 2030. It is so starkly delusional that anyone who seriously affirms a belief that it will happen probably needs psychiatric help.
It is akin to Eric Drexler's loony visions back in the 1980s that nanobots would cure all diseases and continually restore our bodies to perfect health. We were supposed to all be immortal by now.
None of this is happening, probably not ever, and certainly not in the lifetime of any human being currently living. Kurzweil is going to die, Drexler is going to die, everybody is going to die. Adopting a pseudo-scientific religion to avoid facing mortality is kind of sad.
I agree many of his predictions are bad, but you should calm down with the gaslighting; it's ignorant of science history (the same was said of Aristotle, Semmelweis, the Wright Brothers...) and is an impotent way of debating, especially in the context of science.
I grew up with Drexler as a sort of hero. It's amazing how rapidly nanotech went from the imminent thing to change all things to, 'huh - what's that supposed to be about?' Wonder if 20 years down the line we might look at AI similarly.
Anything that depends on such an advance, such as "building a biological simulator", is basically impossible. But even if it were possible, market forces still dictate whether a new technology is adopted or not. (see: the electric car vs the electrified train)
Come on, now - understanding natural language is pretty much the 0 yard line when it comes to AGI, the fact that it's not solved now doesn't tell us anything about how far away it is.
And I'd be on the lookout for massive advances in NLP over the next couple of years; there have been enormous leaps in 2018 alone when it comes to how good we are at understanding text (better applications of transformer models, high quality pre-trained base models, etc.), and now that there have been a few high-profile successes we're likely to see that field evolve just like computer vision has, even though I grant that it's a much harder problem in general.
This is pretty absurd to point to as an intermediate goalpost; that's basically game over when it happens.
As for it being 'game over', why? Is there something inherent in AI that would necessarily be inimical to human beings?
Because, the further part of the story goes, machines think quicker and they'll go on improving even quicker, all while having potentially different goals from us.
To be completely fair, the facts on the table are: 1) no one knows where the "intelligence ceiling" is, and 2) in many tasks where machines outperformed humans (image labeling, porn classification, speech-to-text, games like go or chess) they keep on improving, sometimes well beyond the human level.
One might have said the same thing about corporations in 1900...
The current population explosion is evidence for, not against, this, IMO... stars also enjoy boom times, even as they consume the last of their resources, in ever more exotic formulations...
At the very least, any team attempting to keep a strong AI in containment should have stringent background checks for all engineers connected to the project, and the engineers should be screened in a borderline discriminatory fashion. Only engineers with families should be allowed to get near the containment. Single/lonely engineers or engineers undergoing divorces should be kept away. Engineers with debt should be kept away. I'd even go as far as to say that engineers who enjoy science fiction media should be banned from the project. Ideally you'd bring on professional psychologists to create a set of criteria designed to minimize the possibility of an engineer deliberately breaking containment.
But frankly it just shouldn't be attempted in the first place.
rm -rf rogueAI/
If that doesn't work, there's always a circuit breaker.
Unless we are near some kind of physical upper limit on intelligence, any AGI we build will easily outsmart us, probably in ways we can't even conceive of.
Years ago, an A.I. designed to become an oscillator, i.e., to produce a sinusoidal wave, learned to be an amplifier instead, taking advantage of the 60 Hz noise it got from the power grid. Its makers had not seen that coming. And we're talking about a very dumb machine by general intelligence standards.
The fact that it gets 60 Hz from the grid confounds the results and might not be meaningful. The AI could, for all we know, have an easier time with the more difficult task of designing an amplifier.
Alexa, for example, is an Artificial Narrow Intelligence. It can process speech and then follow different scripts with instructions derived from that speech, but it often fails comically so you as the human have to talk to it just right for it to work. Not too different from a verbal command line.
Meanwhile a human personal assistant has general intelligence. You can just tell them what you want and they can understand and figure it out.
I'd venture that your coda nullifies the "Intelligence" part thoroughly and completely. She's an "Artificial Smart Assistant", at best.
and I can keep going, but you should get the point.
This isn't true. We have overwhelming evidence that these "skills" originate in different parts of the brain in all humans and if we disable those parts, those "skills" disappear. Those parts of the brain are unique and have their own structures. There is strong evidence that they require unique "hardware" and as such, are a "special kind of intelligence".
It started with carrying out lists of simple instructions, then doing anything with images, then recognizing images, then drawing images, then simple games like chess, then recognizing audio (though people realized this was stupid in the 60s with IBM speech synthesizers, so today people don't remember), and then Go (and, less well known: backgammon, video games, ...). For most of those we now roll our eyes and go "how could they have been so stupid?".
As for practical applications: robots run something between three-quarters and five-sixths of the stock market. Maybe you're unaware of this, but that's the thing that decides how and where humans work (not just the ones also doing stocks; anyone working at any public company is partially directed by it, which in practice is nearly everyone).
AIs talk to humans more than humans do. AIs produce more writing than humans do. AIs judge more humans (for insurance, or creditworthiness) than humans do. Despite what you might think, AIs today actually drive just a little under one millionth of total miles driven in the US, and that share is going up exponentially; AIs currently drive a little more than a 1,000-person city does.
In experimental settings, AIs have beaten humans at convincing other people that they're human. At "chatting up" humans. Seriously. Not that it seems to take much at all to convince humans you're sentient: iRobot machines have gotten soldiers to threaten army technicians, at gunpoint, into repairing them. There wasn't even that much AI involved, but that's got to count for something.
In research, I would argue there are already multifaceted artificial intelligences in reinforcement learning. There is nothing game-specific in those Atari-playing AIs. There used to be the score of the game, but the modern ones don't even have that. They can play any Atari game, from Montezuma's Revenge to Pac-Man, which I'd argue are very different indeed. There must be some measure of "multifaceted" in there, surely?
But let's keep it simple: could you make this problem a bit more precise? For example, which animals would you say exhibit an acceptable level of "multifaceted" intelligence? Why do those animals qualify? What would a good test be? I'd love to find an interesting test for this.
Historically, people have both grossly overestimated and grossly underestimated future technological progress. Some people thought computers would never be able to play chess; others thought we'd have superhuman AGI in 2001. The bottom line is that people's past and current ignorance tells us nothing about what the future is actually going to be like.
We can beat humans at just about any board game now with AlphaZero, but will that necessarily improve how fast we develop other types of intelligence?
I used to be in Ray's camp, but my girlfriend got into computational neuroscience, and talking with her and her colleagues, I got the impression that very, very few people think we're close to general intelligence.
Human intelligence is a lot of things put together, and we may get better at pieces, but we don't even have all the pieces, and when we do try to put them together it doesn't work. Look at the criticisms of Europe's Human Brain Project (another example I had seems to be outdated); some believe we understand too little to even begin to attempt modeling the brain.
Intelligence is the ability to reason abstractly. Humans can do this. It's not clear that anything else can.
AI developments have been phenomenal in the past few years. And the economic return makes me expect that this race will continue faster and faster. I don't think human brain project criticisms make this any less of a reality. Even now it is hard to find a well-defined task that can't be performed better by a computer than a human. Humans are really good at dealing with ambiguity though. So a robot might do better driving on well defined roads with nice lane boundaries, but humans are good at dealing with construction, or negotiating between difficult drivers.
We have already been able to generalize just about any modality you can think of to be processed by neural nets, and sometimes at the same time. If you squint this feels almost like different regions of the brain. (Vision, hearing, speech) But I have reservations about anthropomorphism since it can cause arguments that keep people from just making something that works.
If you think Kurzweil's predictions are a fiction, you are probably right. But I think that's mostly because predictions on those scales are very sensitive to interpretation.
For me, I think the future according to my perception of what Kurzweil is saying will probably be way different than reality. But the future of AI will probably have an equivalent impact and be just as surprising as if my perceptions were accurate.
I think these are well defined tasks:
1. Go to a bar and convince the best looking man or woman to come home with you for recreational sex.
2. Do this https://www.youtube.com/watch?v=4ic7RNS4Dfo while being crushingly cute.
3. Negotiate Brexit.
That's quite a wide claim, given that we haven't seen much "recursive self-improvement".
I’d expect computers to best us (at some investment cost) at virtually all games moving forward, except writing funny limericks. We can always have our grandmasters or whatever train the computer with their own heuristics, which recalls the paranoia of grandmasters decades ago. We understand computers better now: if you can formalize the game, the computer can beat you.
In many ways, programming is already the formalization of a human-domain problem. AI will likely take more of a role in implementation in the future, but I can’t imagine an AI that does the formalization itself.
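As a concrete instance of "formalize it and the computer can beat you": once a game is reduced to states, moves, and outcomes, exhaustive negamax plays it perfectly. A runnable sketch for tic-tac-toe, which is small enough to search completely (perfect play is a draw):

    def winner(b):
        lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
        for i, j, k in lines:
            if b[i] and b[i] == b[j] == b[k]:
                return b[i]
        return None

    def negamax(b, player):          # returns (value for player, best move)
        w = winner(b)
        if w:
            return (1, None) if w == player else (-1, None)
        moves = [i for i, c in enumerate(b) if c is None]
        if not moves:
            return 0, None           # board full: draw
        best, best_move = -2, None
        other = "O" if player == "X" else "X"
        for m in moves:
            b[m] = player
            value = -negamax(b, other)[0]   # opponent's best is our worst
            b[m] = None
            if value > best:
                best, best_move = value, m
        return best, best_move

    print(negamax([None] * 9, "X"))  # (0, 0): best achievable result is a draw

The formalization (winner, legal moves) is the human contribution; the search is entirely generic.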
It really is a new advancement to be able to solve Go. It is not just a logical extension of work we had already done or something that would be automatically solved by faster computers. We had to invent a new approach.
More controversially, I'm not sure AGI is that hard to predict either. I wrote about it in my college entry essay 37 years ago, and I didn't think it took any great intellect to say that if the brain is a biological computer and electronic computers get more powerful exponentially, then at some point the electronic ones will overtake. Of course a basic chess algorithm is fairly simple and an AGI one will be far more complicated, but it can't be that mega-complicated if it fits in a limited amount of DNA building proteins building cells.
For some reason, this doesn't get enough attention and we have people like Elon and Stephen Hawking making dire predictions all over the place.
I admire his optimism, but I think it's irresponsible to sell it like he does.
Similarly, if we don't have bold predictions like these that we can actually measure within our lifetimes, we fall prey to fantasies that cannot be measured. Once this prediction fails miserably (as I think it will), it will help many others re-calibrate all their BS.
I'd argue that Kurzweil is selling precisely these kinds of fantasies that cannot be measured.
That we don't understand or can't define intelligence is a popular trope not grounded in reality. There are entire scientific and well-established fields that study digital and biological intelligence.
If we can't do that, I would argue that the computer has indeed become very intelligent, even if we can't define it. Just like beauty: we don't have a good mathematical model for what makes certain humans beautiful, but we all sure know it when we see it.
I admit, not the year 2000, and overly snarky, but people do marry computers nowadays.
If we think it is what it is (the computer-controlled mask), we start to believe it really is what it is: a mind (controlling a mask).
Humans are capable of high degrees of auto-suggestion, up to the point where groups of people, and their dynamics, come into play, and some kind of group-think - a cult, maybe, for lack of a better term - takes over.
At this point all is believable, maybe not so far away.
We may trick ourselves into a false AI, not an AGI in any sense mind you, and get stuck with it, because most of us wanted it.
And the rest has been silenced already; that is very easy, as we all know today.
I don't think even the AI doomsayers, deep down, actually believe what they preach. It's just a way to signal that one is clever and informed of new tech.
If they actually believed what they say, they'd be worried of being targeted by violent protestors, like drug testing companies and crop breeding companies have to be.
> Climate change and nuclear war are well-known threats to the long-term survival of our species. Many researchers believe that risks from emerging technologies, such as advanced artificial intelligence and designed pathogens, may be even more worrying.
> First, you need to consider which problem you should focus on. Some of the most promising problems appear to be safely developing artificial intelligence, improving biosecurity policy, or working toward the end of factory farming.
The reason that rich people worry about superintelligence is that it could bring the same uncaring devastation to the rich as climate change brings to the poor.
The problem with this is that I believe one is a genuine threat, the other is a fad.
In what way? Do you not believe that superintelligence is possible, or do you believe that any superintelligence will automatically care about the well-being of humans? Both beliefs seem naive to me and to many luminaries in the field: https://people.eecs.berkeley.edu/~russell/research/future/.
I don't believe super-intelligence is possible. I don't believe we're anywhere near modeling intelligence, and even if we did I don't believe intelligence will "exponentially increase" given more computing power (the same way there's a limit to speeding up barely- or non-parallelizable programs).
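For reference, the limit being alluded to is Amdahl's law: if only a fraction p of a program can be parallelized, the speedup on N processors is

    S(N) = 1 / ((1 - p) + p / N),  so  S(N) -> 1 / (1 - p)  as  N -> infinity

e.g., with p = 0.9 you can never go more than 10x faster, no matter how much hardware you throw at it.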
The fact that organizations outperform individuals at many tasks shows that superintelligence is possible. If you can dramatically increase the communication bandwidth of an organization through computerization, you will trivially achieve superintelligence over organizations. Exponential increasing intelligence is not necessary for bad outcomes.
That's a little hand-wavy as an example.
Besides, organizations lose out to individuals all the time where intelligence matters - e.g., that's why the stupidity of bureaucracies, armies, and "design by committee" is a thing.
Also, teams of 5-10 often do better than teams of 100 or 200 (even in programming), except of course in labor-intensive tasks (of course an army of 1000 will defeat 10 people, except if among the ten is Chuck Norris).
The singularity isn't a brain in a box... it's us, the collective, a metasystem transition that's been underway for millennia. A movement toward a whole that transcends the parts.