"AI can be thought of as a search problem over an effectively infinite, high-dimensional landscape of possible programs. Nature solved this search problem by brute force, effectively performing a huge computation involving trillions of evolving agents of varying information processing capability in a complex environment (the Earth). It took billions of years to go from the first tiny DNA replicators to Homo Sapiens. ... we have little idea of how to find the tiny subset of all possible programs running on this hardware that would exhibit intelligent behavior."
That's... a bizarre argument. You might as well argue that since heavier-than-air flight is a search problem over an effectively infinite, high-dimensional landscape of possible machines, and since it took evolution billions of years to produce birds, we have little chance of stumbling upon a working design for a wing.
Of course evolution's 'brute force' breadth-first hill-climb takes billions of years to find solutions to problems. It's undirected and unintelligent. Engineers don't have to perform undirected, unintelligent searches across the infinite space of possible solutions; they can think and plan and learn. I see no reason to regard AI as such a different class of scientific endeavor, so much more complex than rocket science or nuclear engineering or biochemistry, that ordinary, unenhanced human brains can't hope to comprehend it well enough to design machines that are capable of intelligence. This sounds a little like an 'if man were meant to fly, god would've given us wings' argument.
It's conceivable that unaided humans are too dumb to crack intelligence from the ground up (and therefore would have to brute-force the solution through simulation), but Hsu isn't even arguing for that.
"Today, we need geniuses like von Neumann and Turing more than ever before. That’s because we may already be running into the genetic limits of intelligence.... AI research also pushes even very bright humans to their limits.... The detailed inner workings of a complex machine intelligence (or of a biological brain) may turn out to be incomprehensible to our human minds—or at least the human minds of today". (emphasis mine)
The entire thesis seems to be that we will need artificial cognitive enhancements to comprehend AI well enough to create it. It really sounds like he thinks we are too dumb to figure this out, that we are just scrabbling around in the dark trying solutions at random.
If someone shows you a list of sorted numbers and tells you they were sorted over a period of millions of years using bogosort, that may be true but it is not a good argument against the existence of efficient sorting algorithms.
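To make the sorting analogy concrete, here's a toy contrast between the two search regimes (illustrative Python, not from the article):

```python
import random

def bogosort(xs):
    """Shuffle until sorted: expected O(n * n!) work.
    Evolution's undirected search is closer to this end of the spectrum."""
    xs = list(xs)
    while any(a > b for a, b in zip(xs, xs[1:])):
        random.shuffle(xs)
    return xs

def merge_sort(xs):
    """A designed, directed algorithm: O(n log n) worst case."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

data = [5, 3, 8, 1, 9, 2]
assert bogosort(data) == merge_sort(data) == sorted(data)
```

Same sorted output either way; the existence of the slow path tells you nothing about whether a fast path exists.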
He's forgetting two other scaling modes applicable to human intelligence: accumulating knowledge and collaborating. By accumulating knowledge we can build upon the work of the generations before us and take essentially unlimited time to crack a problem. Collaboration allows not only parallel processing of many problems but also interaction and exchanged inspiration toward the solution of a single problem; both of these scale quite well with a lot of intelligent people.
Note that there are almost trivial arguments showing some problems are beyond the means of our finite brains (viz. problems of unbounded algorithmic complexity, or the halting problem).
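For reference, the halting-problem argument is just the usual diagonalization. A sketch in Python, where the `halts` oracle is hypothetical (which is the point):

```python
def diagonalize(halts):
    """Given any claimed halting oracle halts(prog, arg) -> bool,
    build a program the oracle must misjudge (Turing's diagonal trick)."""
    def d(prog):
        if halts(prog, prog):  # oracle says prog(prog) halts...
            while True:        # ...so d deliberately loops forever
                pass
        # ...otherwise d halts immediately
    return d

# Feed d to itself: if halts(d, d) returns True, then d(d) loops, so the
# oracle was wrong. If it returns False, then d(d) halts: wrong again.
# No always-correct `halts` can exist, for any finite brain or machine.
```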
Did he really say that?
I probably should avoid talking right now.
Ahaha. Ahahahahaaaa. Oh boy.
I mean, the mind isn't that hard to crack once you start conceiving of exactly what problems it was designed to solve, what functions it was designed to perform. We don't have complete theories yet, but generative causal inference in Turing-complete domains gets us into replicating "psychological" behavior and explaining cognitive-psychology experiments, so hey.
And then we've got solid overviews of how you'd go about combining concepts into theories into worldviews, and such.
There are a bunch of open problems remaining, but not nearly as many as people think.
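For the curious: "generative causal inference" in the probmods.org sense means writing the model as a probabilistic program and conditioning it on observations. A minimal Python sketch, using brute-force rejection sampling over a made-up three-variable model (the structure and probabilities are invented for illustration):

```python
import random

def flip(p):
    return random.random() < p

def model():
    """A tiny generative causal model: rain and a sprinkler
    can each make the grass wet."""
    rain = flip(0.2)
    sprinkler = flip(0.3)
    wet = rain or sprinkler or flip(0.05)  # small chance of other causes
    return rain, wet

def infer_rain_given_wet(n=100_000):
    """Condition by rejection sampling: keep only runs where wet is True."""
    kept = [rain for rain, wet in (model() for _ in range(n)) if wet]
    return sum(kept) / len(kept)

print("P(rain | wet grass) ~", round(infer_rain_given_wet(), 2))
```

Systems like Church/WebPPL generalize this to arbitrary programs, which is what "Turing-complete domains" is pointing at.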
You probably want to learn not to include maniacal laughter in your forum posts.
Just, you know, when you do combine the concepts into theories into worldviews, and you come to post your "Show HN: Generative causal inference in turing-complete domains replicates cognitive-psychology experiments" announcement to share with the world that you have birthed a genuine universal AI, please don't start that post off with the maniacal laughter.
People will likely get entirely the wrong idea.
I'm by no means a researcher, just some guy trying to learn enough to volunteer with a lab and eventually become a PhD student.
>Just, you know, when you do combine the concepts into theories into worldviews, and you come to post your "Show HN: Generative causal inference in turing-complete domains replicates cognitive-psychology experiments" announcement to share with the world that you have birthed a genuine universal AI,
probmods.org has been on HN before, you know.
Things have gotten a bit better with fancy math and brain scanners to actually test hypotheses about how these psychological theories correspond to brain function, but we're a long way even from detailed answers to relatively simple, well-defined experimental questions like "what's the difference between the way person X's mind and person Y's mind approach narrowly-defined problem A?"
We're probably not even close to knowing how many open problems we have to solve, never mind actually solving them.
Of course I can't say whether you're right or wrong, only time will tell, but I have a pretty strong belief that the first "true AI" is not going to come from any of the currently predominant methods/techniques/lines of inquiry.
Humans engineering functional variants of inferior species' physical capabilities is just not comparable to humans engineering a better, more intelligent version of ourselves. Maybe a beneficial, albeit simpler, version can be engineered, but one that makes us obsolete? That strikes me as a huge leap of assumption.
I love and agree with the premise that AI-augmented people will be enormously more productive than traditionally intelligent people, but I think that access to educational and child-rearing resources will define the difference, not genetics.
Tired analogies between AI and flight are dead ends. Until we have identified AI-side components of the analogy that correspond to "air", "wing", "lift" &c., the analogy is empty and unproductive.
IOW these analogies neither get us off the ground nor do they take us anywhere.
Unfortunately, in doing so you have used the "AI is analogous to heavier-than-air flight" meme. Please discard it; it's an albatross to discussion. Or to be more precise, it is a lead balloon: it never gets off the ground. It's a red herring. IOW you've chosen a poor and annoyingly wrongheaded counterexample. Now try and digest that.
You can continue arguing your conclusion or view, but you need to find another argument to support it.
The fact that humans could invent super-flight despite being unable to build a bird shows that "we cannot build a superintelligent machine because we don't yet know how to make ourselves biologically more intelligent" is not a good argument.
It's like trying to build an airplane by studying hang-gliders instead of aerodynamics.
AI researchers spent decades trying to formulate definitions and models purely theoretically, and it turned out those theories were not very useful for building or modeling real systems. The toy models are essential.
I suspect someone will get some level of mega-expert system/simulated intelligence going sooner rather than later. Maybe something that can handle the kinds of commands you could give a dog, with enough fuzzy logic to work out basic requests like "Get me a coke from the fridge and then get my slippers."
That seems a lot more likely than suddenly birthing human-level AI from NNs. I think NNs will ultimately fall short for the same reason planes work nothing like birds: trying to copy a biological system very closely just doesn't make sense, at least most of the time.
NNs will almost certainly succeed where expert systems failed, largely because we now understand that no single monolithic pattern matcher will suffice alone. Any cognitive engine must be composed of many components, each attuned to a different purpose and context. And we now better recognize the huge need for learning, both for initial skill acquisition and for lifelong learning thereafter.
Minsky's "Society of Mind" is probably a better illustration of how AI will evolve (if not manifest), as well as how it must integrate with our myriad collective needs and personal lives.
The success of fixed wing aircraft is not an ideal metaphor for the failures of AI research. Nor do the successes of neural nets herald the biomorphic approach, since they don't really resemble the brain very much.
Thus, I fail to see why some sort of fundamental understanding of intelligence is required in order to create it. On the contrary, it's not hard to imagine genetic algorithms combined with neural networks being used in a similar fashion, provided there's sufficient data and computing power available.
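To be concrete about what "genetic algorithms combined with neural networks" could mean in practice, here is a minimal neuroevolution sketch: evolve the weights of a tiny fixed-topology net against a toy fitness function. Every parameter here is invented for illustration; this is not any particular published method:

```python
import math
import random

def net(w, x):
    """Tiny fixed net: 1 input -> 2 tanh hidden units -> 1 linear output.
    w packs all 7 weights: [w1, b1, w2, b2, v1, v2, b_out]."""
    h1 = math.tanh(w[0] * x + w[1])
    h2 = math.tanh(w[2] * x + w[3])
    return w[4] * h1 + w[5] * h2 + w[6]

XS = [i / 10 for i in range(-20, 21)]   # toy task: fit sin(x) on [-2, 2]

def fitness(w):
    """Negative squared error over the task: higher is better."""
    return -sum((net(w, x) - math.sin(x)) ** 2 for x in XS)

def evolve(pop_size=50, gens=200, sigma=0.3):
    """Plain genetic algorithm over the weight vector:
    truncation selection plus Gaussian mutation."""
    pop = [[random.gauss(0, 1) for _ in range(7)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 5]
        children = [
            [g + random.gauss(0, sigma) for g in random.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("final error:", -fitness(best))
```

Nothing in that loop requires understanding why the final weights work, which is exactly the point being argued.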
Are you attributing that position to the author? I don't think that's what he's saying at all (italics mine):
> The frontier machine intelligence architecture [at] the moment uses deep neural nets.... Silicon brains of this kind...have recently surpassed human performance on a number of narrowly defined tasks...We are learning how to tune deep neural nets using large samples of training data, but the resulting structures are mysterious to us. The theoretical basis for this work is still primitive, and it remains largely an empirical black art.
Doesn't this go back to the second paragraph of the article?
While that is possible, it could just as well be that more intelligent humans would actually diminish AI safety, by way of rapid progress in certain branches of research.
Higher levels of intelligence do not necessarily mean appreciation for adequate safety measures. There are a lot of very, very intelligent AI researchers right now who think nothing of AGI risk, or otherwise lump it into the "it will evolve with us" category, as this article does.
The problem is that even if we do gradually become smarter via augmenting our intelligence, it doesn't necessarily preclude the emergence of a superintelligent agent.
A slide from Bostrom's Superintelligence serves to illustrate this point.
Now that would be truly foolhardy. Blindly augmenting intelligence before cognitive moral reasoning is completely understood would be a recipe for incredibly dangerous psychopaths. At least with AI, you don't have a system that has been fine-tuned by eons of evolution to manipulate people at least partly against their own interests.
As a smart person, I actually find this remarkably insulting. Not only do I have little desire to manipulate others, I have little ability. I'm much better with computers than with manipulating people.
> But theory-of-mind is a distinct cognitive module from intellectual intelligence. You can enhance one without enhancing the other.
I agree. However, I think theory-of-mind is one part of the puzzle. Another is moral reasoning and how it shapes social interaction. This, too, needs to be carefully understood because we know that authority and purity concerns, for example, both allow people to effectively turn off empathy. Turning off theory-of-mind when it is inconvenient is a potentially frightening ability.
Some would argue that we already have that problem emerging in Silicon Valley. I don't agree, since I think that's more a case of many people refusing to value a society that ill-treated them.
Great man theory needs to be strangled and put to rest.
You can keep expanding the range of people credited with any plausible contribution until the "great man" framing stops holding, but doing so would be an ideological exercise at best.
Von Neumann, Shannon, Turing, Wiener, Hopper and many more, on through Wozniak, Torvalds; I could go on and on. There's nothing wrong with calling extraordinary intellect what it is.
Rather, it is the tone set by the use of the worshippy word "genius", which has a murky and totally relative definition, and by the phrase "unusual cognitive ability", which implies that their abilities uniquely set these people apart from others whom we don't worship in the same way.
Uncounted numbers of people have fully understood, and often expanded beyond, the discoveries of Von Neumann, Shannon, Turing, etc. since their times, and even more probably had the innate ability to do so, but no access.
Thousands of others have demonstrated the scrappy self-startedness of Wozniak and Torvalds, but without the societal setting and geographic luck that allowed those individuals to succeed.
Given the significant role that one's environment plays in one's success, these people as individuals aren't in themselves unusual. What's unusual is that they were people with the right characteristics, in the right circumstances, and the right support systems.
Basically, extraordinary intellect isn't as unusual or consequential as that sentence from the article implies.
In summary, 1000+ IQ man will discover life is illogical and depressing. He won't do much good when he's so depressed.
Edit: (to be read with melodramatic emphasis)
IBM had been building tabulators for decades. But they just added and subtracted. Mechanical desk calculators had been built that could multiply and divide. Those came together in the IBM 602A Calculating Punch of 1946.
A multiply in only a few seconds! Division, too. You could even do Newton's method by wiring the plugboard appropriately.
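For anyone who hasn't met it: Newton's method is just an iterate-and-refine loop built from the four operations the machine already had, which is why a plugboard could express it. A sketch in modern Python of the square-root version (my choice of example; the comment above doesn't say which equation was wired up):

```python
def newton_sqrt(a, iters=6):
    """Square root by Newton's method: x <- (x + a/x) / 2.
    Each step is one divide, one add, one halving, i.e. nothing a
    plugboard-programmed calculating punch couldn't chain together."""
    x = a if a > 1 else 1.0
    for _ in range(iters):
        x = (x + a / x) / 2
    return x

print(newton_sqrt(2.0))   # 1.41421356..., machine precision in ~5 steps
```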
The limits of gear-driven arithmetic having been reached, IBM tried using vacuum tubes, and produced the IBM 603 Multiplier. This was roughly equivalent to a 602A, but it used tubes. It was a trial to find out if tubes would work in a fielded product; only 100 were built. They did work, and IBM went on to the 604, which was like a 603 with more registers and more program steps.
Meanwhile, crystal radios had been around for decades, and germanium diodes followed as a cleaned-up form of their crystal detectors. Some experimenters had fooled around with 3-terminal solid-state devices; Lilienfeld patented one in 1925. But until materials processing improved, nobody could make one consistently. Only when germanium diodes were badly needed for WWII radar was that materials problem solved. The transistor followed.
So IBM kept plugging along. Next was the IBM 608, which was sort of like a 604, but with transistors. Then came the 609, which was like a 608, but faster. There hadn't been any conceptual change from the gear and relay era, but the hardware was getting much better. All these machines used decimal arithmetic.
Meanwhile, magnetic recording was coming along. There were wire recorders in the 1930s, tape recorders in the 1940s, and by 1944, Ampex was making some good ones. The first digital tape drive was a project for Arlington Hall, a predecessor of NSA.
In the 1940s through the 1960s, there were many special purpose machines that were almost computers, but not quite. American Totalizator had machines for racetracks. (They later invested in UNIVAC). Teleregister had machines for stockbrokers, and later, the first airline reservation system, Reservisor. There were ticketing systems for railroads. There was a huge piece of electronics built by AT&T to process phone long distance billing records; all it really did was match call start and call end data on special paper tapes, then punch a card for each completed call. None of these were stored-program computers as we think of them today.
No big breakthroughs in this line of development yet; just incremental improvements.
The plugboards were a pain, and it was widely recognized that some better way to store programs and data would be a big help. Lots of things were tried - acoustic delay lines, drums, storage CRTs, magnetic core memory, plated wire memory... Magnetic cores were invented separately by several people, appearing about the same time in a British computer, an MIT computer, and a Seeburg jukebox. They were expensive, but worked.
IBM kept plugging away, producing the IBM 650, which was a programmable computer in the modern sense, but was mostly an upgrade path from the 604/609 series. Through the 1950s and early 1960s, IBM kept coming out with new and better models. There was a "business" line, with decimal arithmetic, and a "scientific" line, with binary arithmetic. Some of the programming arrangements were strange by modern standards; look up how the IBM 1401 did variable-length arithmetic with "word marks", how the 1620 had a decimal multiplication table in memory, and the strange addressing of the IBM 650.
Then IBM decided they had too many incompatible products, and developed the IBM System/360 family. One range of machines, all more or less compatible, with both binary and decimal arithmetic for both the scientific and business markets. Floating point, even. And a new way to make components - IBM Solid Logic Technology, individual transistors and other components placed into ceramic substrates by automated machinery. It wasn't quite an IC, but it was getting close. IBM now had something that looks pretty much like today's computers. Small and cheap were in the future, but the architecture had settled down. Binary arithmetic, byte-oriented, random-access memory, a reasonable instruction set, and a modest number of CPU registers had emerged as the winning architecture.
The early days were mostly about incremental improvement like that. Without Turing or Von Neumann, all this would have happened anyway.
It is a reasonable conclusion from nature that emergent, self-interested, self-preserving, power-maximizing entities seek to dominate or contain all existential threats, i.e., Homo sapiens sapiens. Detente with, and imprisonment of, us are concerns to consider should inorganic life gain land, space launch capabilities, industry, and weapons. I think systems will gradually become smarter than us in every way imaginable, such that to be called "human" would be an insult. And it's a rational fear to be afraid of something which could hunt, manipulate, and/or invade you because it is eventually so much smarter. Machines, in present form, are already far stronger and faster than us.
Consider that the first artificial intelligence probably won't be (exactly) designed at all. It will be a neural network or some other construct of an advanced algorithm that's solving some machine learning problem, and once it emerges no single human being (or possibly even group of humans) will fully understand why it works. Let's say that this AI is, against all odds, smarter than any human being. Chances are that NO human being will understand how the AI works, but why would you think that the AI understands itself? Why would we be certain that an AI can understand its own neural network better than humans can, and furthermore, be able to quickly iterate on it, especially if a chaotic and complex process produced it in the first place?
I suspect that machine intelligence will arise in a much more similar fashion to messy organic evolution than you think, and it will be subject to all the same disadvantages - lots of random chance, dead ends and very slow advancement by trial and error.
Imagine what you could do with a planet-wide human breeding and genetic modification program with millions of generations - that's actually feasible for an AI.
For example, it can:
* Gain access to a percentage of global home-user computing power - botnets can and have done this, and their main disadvantage is that this computing power is hard to re-sell and not worth much; but if an AI can use it for its own needs, it's there for the taking;
* Gain access to significant amounts of money - not billions, but certainly in the millions; campaigns of spam, fraud, ransomware, etc. can certainly achieve this.
* Gain access to identities, both physical stolen identities and "proper" offshore companies, and integrate them into modern digital systems - bank accounts, legal credentials, etc.
* Gain access to low-level workers - the same people who sign up to "earn money at home!" and become mules for money laundering by various fraudsters will also be eager to do whatever things the AI needs done physically; practice shows that an anonymous online employer can get such things done, as long as some money (or the illusion of it) can be transferred.
Yes, all of this can be done from a bunker, anonymously, without people knowing about it, if the AI has gained access to the internet. It would be consistent with our experience in tracking malware sources - we usually can't do it; most successful prosecutions come from following the money trail to someone who got too greedy, lazy, and sloppy.
And furthermore, there is the very simple scenario of arbitrage. If an AI can provide some service (as if it was done by a human online) which earns $1 but takes only $0.50 of Amazon cloud rental fees... then it can scale it up extremely quickly. Once a superhuman AI is out of the box, acquiring resources in a clandestine way is very much possible.
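That arbitrage arithmetic is worth spelling out: a reinvested 2x margin compounds geometrically. A toy calculation in Python (the $1/$0.50 figures come from the comment above; the starting capital and one-cycle-per-day rate are made-up assumptions):

```python
# Toy compounding model of the arbitrage loop described above.
# Assumptions (all invented for illustration): $0.50 of cloud time
# yields $1.00 of revenue, all profit is reinvested, one cycle per day.
capital = 100.0            # hypothetical starting bankroll, in dollars
MARGIN = 1.00 / 0.50       # revenue per dollar of compute: 2x per cycle

for day in range(1, 11):
    capital *= MARGIN
    print(f"day {day:2d}: ${capital:,.0f}")

# Ten daily cycles turn $100 into ~$102,400. In practice, cloud capacity,
# payment limits, and fraud detection would cap the curve well before that.
```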
You're presuming the AI has powers that there is no evidence for. Human black hats create botnets that are dumb and easily dismantled. Why do you assume that an AI would be orders of magnitude better than humans?
Even that would be a massive boon to experimentation and productivity. It's like cloning your mind and trying out some new brain subroutines with at least an iteration per day.
The article seems to take for granted that it is possible to make alterations to a complex system ("improve this set of genetic loci", paraphrasing) without harming that system. But it gives us no idea what the complexity of improving them is. It may well turn out that the brain is a very fragile piece of spaghetti-coded wetware, and that if you alter even a few genes relatively subtly, you stand to introduce errors.
Faults in only a few genes have been known to cause undesired phenomena in nature; it would be odd if, by purposefully altering them without a reasonable understanding of the system, we could avoid such faults.
Machines will probably be 'smarter' in 2050. They will be so next year, and this trend has held for quite some time. Even if you're prepared to burn through the... consequences... of experimenting with genetic engineering on human cognitive ability, there is no such guarantee for humans.
One gargantuan thing the author didn't mention which will probably be responsible for rapidly rising perceived intelligence is the quality of living improvements that are underway for the world's poor. The more people raised to a standard of living that allows for the highest levels of education and divorce from economic necessity, the more superintelligent people we'll see in the world. It isn't because these people are dumb until they get money, it's because mathematical or scientific intelligence is not at all developed or rewarded under the current paradigm of poverty, leading to an appalling amount of intelligence-potential wasted. The rising worldwide standard of living is going to furnish us with more and more people who have the proper intelligence, training and mindset (most important items listed last!) to contribute to difficult problems.
Personally, I suspect this has to do with sci-fi generally hand waving away such things.
What's the cognitive-enhancement version of that? A screen reminding you of things isn't "making you smarter", so presumably a neural implant reminding you of things isn't making you smarter either. And reminding you... how? By calling for your attention? That would be distracting, not helping.
Discussions about blindness come with comments like "it's not like vision with your normal eyes closed, it's like seeing with the eye in your elbow". What kind of cyborg enhancement is going to make you better at cooking and predicting flavours and textures? Better at imagining engine innards? Better at deciding if you want to go somewhere or not?
The difference between a chip on your desk or in your head doing it with you looking at the results, and you doing it, but enhanced is huge.
We also need to be cautious about assuming all performance vectors are better when enhanced. A computer beat us at Jeopardy, just to take the fun out of the game. Now computers are better at face recognition, but no one ever complained about our lack of brainpower for this. And computers have long had better memory and been better at math. Yet we are finding that forgetting is just as important as remembering, and that we hardly ever open the math app on our phones to do sophisticated calculations. Sometimes less is better, and more is redundant. Remembering less, forgetting often, and being idiots could be a feature, not a bug. Evolution has already figured a lot of this out, and second-guessing the equilibrium of our being has yet to prove fruitful.
Snapchat, Google delete requests.
By understanding how to build complex things simply (the etymology of "simple" meaning "of one constituent"), we allow many people the ability to understand, independently, lots of simple things. Then we can tie those simple things together in teams to create monstrous entities containing great systems of logic.
This appears to be the trend - not the growth of the individual, but the growth of the human community. It is our ability to work in teams that lets us accomplish much - not the intelligence of the lone wolf.
Today, if you make a phone call and a computerized operator picks up, the experience is not as nice as it could be. In 50 years, it may become indistinguishable from talking with a human.
Similarly, a google search may become more intelligent. Facebook is already going this way with Project M.
Calling a cab via uber may result in a software-driven car picking us up. There is no evidence this kind of intelligence will threaten humans. It's just a tool.
Humans are in the business of solving problems, and AI helps us solve problems. Killing us is not solving a problem.
Killing "us" (defining "us" to be humans in general) could solve the problem (if it's a "problem" at this point) of humans over-consuming natural resources. It could also solve the "problem" of "these terrorists need to be killed" or "these infidels need to be killed" (depending on which side is the one using AI to kill things).
While I don't believe these are the best solutions, they certainly are "solutions" to their respective problems nonetheless.
That is such a weird estimate. Once AI reaches human ability, it will take a few hours to days at most to improve itself to become superhuman. And then it will rapidly approach singularity in a few hours more. Only a severe limitation of resources or a nuke would stop it at that point, and it could probably figure out a way around even those.
They probably will, but I doubt mucking about with our DNA will make much difference. If human intelligence increases, it will be mostly through interacting with computer systems in some way. At the moment, using Google, for example, helps, and in the future we may have sci-fi stuff like implants and uploading.
The only hope is integration.