Humans with Amplified Intelligence Could Be More Powerful Than AI (io9.com)
46 points by jphilip147 on Sept 20, 2015 | 32 comments

It's not that great intelligence (be it "humans with amplified intelligence" or AI) is that powerful.

It's only because we overestimate intelligence that we think so, probably as part of the "Great Man" fallacy.

Science is not advanced merely by people with "amplified intelligence" -- it's advanced through thousands of scientists working independently and in cooperation, and tons of hard "manual" work of testing, verification, experimentation, etc.

Not just some sage coming in with his insights, a la Newton and Einstein (though that did happen more frequently in previous centuries, when more "low hanging fruit" discoveries were available).

As for politics, it's usually "sociopaths" and/or good manipulators and liars that get far ahead, not people with high IQ specifically -- and those traits sometimes even conflict (e.g. people with high IQ but Asperger's).

As you yourself say, sages with insights did occur relatively frequently when there was still low-hanging fruit available in the different sciences. But there is no reason to believe that there exists just one level of this low-hanging fruit, one that just happens to be at around the level that exceptional human intelligence can grasp. Which means that this hypothetical "amplified intelligence" could very well have its own next level of "low hanging fruit" that is simply too complex for current human intelligence to grasp, but is well within the reach of the more powerful intelligence.

>Which means that this hypothetical "amplified intelligence" could very well have its own next level of "low hanging fruit" that is simply too complex for current human intelligence to grasp, but is well within the reach of the more powerful intelligence.

Perhaps, though I doubt it. We imagine intelligence as some kind of infinite scale that can extend forever.

I would place my bets on diminishing returns.

A "fallacy" would imply some hidden logical error or sloppy reasoning, but that isn't the case here: the idea that "great men are important" isn't by itself fallacious. The error lies in denying the contribution of "small men".

Furthermore, even if "small men" are important, why wouldn't they benefit from becoming greater? Instead of waiting on destiny to grant us another Leibniz or another von Neumann, maybe we could harness such powers directly from the source.

That is, of course, if it works. Tampering with the brain is certainly one of those things that might have very unintended consequences. What neuronal adaptations would arise from sticking a chip in someone's brain? For instance, the majority of savants have some sort of mental impairment. What if optimizing for specific skills makes it difficult to learn new ones? Or maybe the contrary: optimizing for learning and generalist thinking might not increase computational intelligence, or might even reduce it.

> Science is not advanced merely by people with "amplified intelligence" -- it's advanced through thousands of scientists working independently and in co-operation and tons of hard "manual" work of testing, verification, experimentation etc.

Correct. Hence I think one low-hanging fruit in science would be not only sharing information through peer-reviewed articles (or even arXiv), but actual fine-grained collaboration between different researchers.

Instead of waiting for a published paper (which might take months or years), there might be a way of saying "ok, this is promising" or "this is crap" or "do it this way, it works better".

Oh, and of course, dropping crap like pseudocode algorithms and Excel calculations in favour of actual source code and open data.

Not to be a dick, but "they also might not". The article doesn't present many good reasons for believing augmented biological intelligence will be superior. Or even what that means -- i.e., for how long? Obviously the human brain is an existing form of intelligence, so in that sense it may be easier to bootstrap from. But ultimately, augmentations to the brain would be limited by the requirement of being attached to a brain.

The space of augmented brains is a tiny subset of the space of possible intelligent systems. There's comparatively unlimited scope for designs superior to augmented brains within the realm of artificial intelligence, or "every other nonbiological intelligent system conceivable".

It's possible to believe that augmented brains might get a lead in intelligence for a short period, but they are limited by their format, and would ultimately be overtaken by superior designs.

With regard to augmentation prostheses, Kurzweil makes a strong case in his book "How to Create a Mind" that anything you plug into your brain is bandwidth-limited by how much information the brain can actually process at a time, in a similar way to how most things in our visual field are immediately discarded from working memory, and we can only really engage with the small set of visual information that we pay attention to.

The article (absurd as it is) not only overlooks all the engineering constraints in the integration of man and machine, but also focuses only on "bigger and better", completely missing the alternate worlds that may arise. Modifying the brain (and our sensors) will surely change the way we think and our emotions, and invite a totally different worldview. How might that restructure our social dynamics, especially after the need to compete for limited resources abates or ends entirely?

Also, the article completely misses the implications of the intermediate stages of man with machine, or Intelligence Augmentation (the real interpretation of IA). This is a much more immediate and plausible phase of the transition, and is already upon us in several ways. I recommend John Markoff's new book for more on the topic.

There are many demerits to the article, especially given the wealth of more interesting alternative paths ahead.

Sure, and the microsecond we achieve a high-bandwidth connection to a new substrate is the same microsecond we start migrating our process out of cranial us-east-1 and its barely 99.99% availability to the brand new shiny, and crack open a huge can of legal and philosophical worms, no?

How do we know it can be more powerful than AI if we don't even know how powerful AI can be yet?

Yep. That's why they still __could__ be. Of course, they also might not be. The beauty of unknowns. Opening a vast world of clickbait headlines...

The article could be re-titled "Maybe a slave AI for your brain will make you feel better about your brain being as insignificant as an ant's to AIs". Except for the fact that upbeat titles get more hits.

Don't you try thinking critically here ok?!

There are various forms of intelligence, most of which have a better chance of being excelled at by AI. If reading inputs and detecting patterns in them to make sense of the world is what the brain does, then AI can beat us hands down. Just from the fact that it can hold a huge amount of data in "working memory" and work as long as it takes to find patterns in it, it can find more patterns (read: scientific discoveries) than any human ever aspired to find. If amplified human intelligence is still a decentralized, portable, miniaturized device connected to our brain to enhance its abilities, there is no way it can compete with giant supercomputers running AI.

If it does occur, I hope there is an equal increase in social skills. It's terrifying to read reddit's /r/iamverysmart: some truly smart people, but with psychopath-level social skills.

Isn't the point of that subreddit to showcase people who are outwardly insecure about their intelligence, rather than people who are intelligent and awkward?

Yeah, I'm pretty sure it's meant as satire.

I'd bet good money that most people on that subreddit are just plain lying about being all that clever.

I would agree many posts are, but I've had a few encounters with some of these people, since some tend to comment on posts about them. If you interact with them, they will search for your Twitter and Facebook accounts or guess your e-mail address; it's like Hannibal Lecter is after you.

The few I have met are well-spoken and smart, atypical Internet users, not the usual riff-raff, but just so oblivious to basic social skills.

Well, everything still boils down to the ethics and morals of the intelligence.

In the end, it all depends on what actions are taken using this intelligence, be it by a super AI or a human with amplified intelligence. Does this intelligence create weapons to destroy the planet, or help find a cure for cancer?

Aren't humans already more powerful than AI?

Have you ever tried indexing the web?

I guess nowadays the answer is more like "no comparison makes sense, we are good at different tasks".

>Have you ever tried indexing the web?

There most likely is some system of visualization that would let a person get a very good intuition about the web.

Because by "AI" we usually mean superhuman-level AI, which by definition is more powerful than humans.

Well, a calculator is more powerful than a human at most types of mathematics. Machines are more powerful than humans in all kinds of tasks. It's probably better if we start breaking AI into the tasks that it can do better than a human can, that way we get to watch the milestones fall one by one.

Comparing to which type of AI? I think a well made AI will be infinitely smarter than any form of human intelligence.

My bet is on an uploaded mind as the singularity. A human intelligence boosted by computational resources.

What AI? My Neato has trouble getting back to its "base" without bumping into every wall like a drunk sailor.

The author is a neoreactionary. Just thought you might want to know.

You are confusing George with Michael Anissimov. Also, what you just did, invoking politics instead of addressing the point, should be taboo. I don't care about people's politics. I don't want to hear about them unless we are talking politics.

No, he's not, because George is interviewing Anissimov:

> Looking to learn more about this, I contacted futurist Michael Anissimov, a blogger at Accelerating Future and a co-organizer of the Singularity Summit. He’s given this subject considerable thought — and warns that we need to be just as wary of IA as we are AI.

Notice the bolding of questions and then responses throughout the bulk of the article.

(Incidentally, Anissimov was perma-banned from Twitter the other day, apparently. That takes some doing if you're not gushing about ISIS.)

Yeah. I noticed that afterwards. I had only skimmed the first paragraph of the article when I posted the comment. I'm not a fan of Anissimov; I haven't been since I saw him speak very naively about whole brain emulation in 2005, and later when he made some very technically incompetent arguments for some imagined form of nanotechnological DRM. Even in this case, brain augmentation strikes me as heroically unpromising. Once we know enough to augment a brain, I think we will know enough to create superior substitutes -- though an obvious exception to this would be genetic augmentations.

However, his bizarre politics are entirely beside the point. And talking about politics in technical conversations is a very bad cultural practice.

Nope. A human being with amplified intelligence is not going to be more powerful than artificial intelligence. Nor does the article do much to make a strong case. The author tries to talk about the psychological impacts of a radical increase in intelligence, but it's all armchair observations and guesstimates, with little substance to actually digest and ponder.

Interesting topic, but not an interesting article about it.

Could amplified intelligence happen? Sure. Would it allow us to be smarter? Probably. But we wouldn't be able to outpace AI, because we're still human beings with finite space and time.
