It's only because we overestimate intelligence that we think this, probably as part of the "Great Man" fallacy.
Science is not advanced merely by people with "amplified intelligence" -- it's advanced by thousands of scientists working independently and in co-operation, plus tons of hard "manual" work: testing, verification, experimentation, etc.
It's not just some sage coming in with his insights, a la Newton and Einstein (though that did happen with more frequency in previous centuries, when more "low hanging fruit" discoveries were available).
As for politics, it's usually "sociopaths" and/or good manipulators and liars who get far ahead, not people with high IQ specifically -- and those traits sometimes even conflict (e.g. people with high IQ but Asperger's).
Perhaps, though I doubt it. We imagine intelligence as some kind of infinite scale that can extend forever.
I would place my bets on diminishing returns.
Furthermore, even if "small men" are important, why wouldn't they benefit from becoming greater? Instead of waiting on destiny to grant us another Leibniz or another von Neumann, maybe we could harness such powers directly from the source.
That is, of course, if it works. Tampering with the brain is certainly one of those things that might have very unintended consequences. What neuronal adaptations would arise from sticking a chip in someone's brain? For instance, the majority of savants have some sort of mental impairment. What if, by optimizing for specific skills, one found it difficult to learn new skills? Or maybe the contrary: optimizing for learning and generalist thinking might not increase computational intelligence, or might even reduce it.
Correct. Hence I think one low-hanging fruit in science would be not only sharing information through peer-reviewed articles (or even ArXiv) but actual fine-grained collaboration between different researchers.
Instead of waiting for a published paper (which might take months or years), there might be a way of saying "ok, this is promising", or "this is crap", or "do it this way, it works better".
Oh, and of course, drop crap like pseudocode algorithms and Excel calculations in favour of actual source code and open data.
The space of augmented brains is a tiny subset of the space of possible intelligent systems. There's comparatively unlimited scope for designs superior to augmented brains within the realm of artificial intelligence, or "every other nonbiological intelligent system conceivable".
It's possible to believe that augmented brains might get a lead in intelligence for a short period, but they are limited by their format and would ultimately be overtaken by superior designs.
With regard to augmentation prostheses, Kurzweil makes a strong case in his book "How to Create a Mind" that anything you plug into your brain is bandwidth-limited by how much information the brain can actually process at a time -- similar to how most things in our visual field are immediately discarded from working memory, and we can only really deal with the small set of visual information that we pay attention to.
Also, the article completely misses the implications of the intermediate stages of man with machine, or Intelligence Augmentation (the real interpretation of IA). This is a much more immediate and plausible phase of the transition, and is already upon us in several ways. I recommend John Markoff's new book for more on the topic.
There are many demerits to the article, especially given the wealth of more interesting alternative paths ahead.
The few I have met are well-spoken, smart, atypical Internet users -- not the usual riff-raff -- but are just so oblivious to basic social skills.
In the end, it all depends on what actions are taken using this intelligence, whether by a super AI or by a human with amplified intelligence. Does this intelligence create weapons to destroy the planet, or does it help find a cure for cancer?
I guess nowadays the answer will be more like "no comparison makes sense; we are good at different tasks".
There most likely is some system of visualization that would let a person get a very good intuition about the web.
> Looking to learn more about this, I contacted futurist Michael Anissimov, a blogger at Accelerating Future and a co-organizer of the Singularity Summit. He’s given this subject considerable thought — and warns that we need to be just as wary of IA as we are AI.
Notice the bolding of questions and then responses throughout the bulk of the article.
(Incidentally, Anissimov was perma-banned from Twitter the other day, apparently. That takes some doing if you're not gushing about ISIS.)
However, his bizarre politics are entirely beside the point. And talking about politics in technical conversations is a very bad cultural practice.
Interesting topic, but not an interesting article about it.
Could Amplified Intelligence happen? Sure. Would it allow us to be smarter? Probably. But we wouldn't be able to outpace AI, because we're still human beings with finite space and time.