Computational Complexity versus the Singularity (gwern.net)
16 points by gwern on Aug 24, 2016 | 1 comment



A classic gwern post: exhaustively researched, with reams of context and examples for every point and sub-point. Unfortunately, it is in dire need of editing and essentialising.

Like other writers in the LessWrong-rationalist sphere (Scott Alexander is even worse in this regard), Gwern overwhelms his reader with so much extended discussion, so many parenthetical comments, and so many tangential points that the actual structure of his argument is hard to make out. Normally, when I encounter a writer making several points I disagree with, I can grasp the overall structure of their argument and see whether those points are essential to it; here, I can't.

His basic point seems sound: computational complexity theory probably doesn't establish a fundamental barrier to AI. However, I think this is simply because we lack anything approaching a mathematical definition of intelligence, which makes it meaningless to apply a mathematical tool like complexity theory. How does the complexity of, say, a conversation scale with its length? Linearly? Polynomially? Exponentially? Is the question even meaningful?
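To see why the answer would matter if the question were meaningful: a toy sketch (my own illustration, not from the essay) of hypothetical step counts under each growth assumption. The function `costs` and the idea that "understanding" a conversation of n utterances has any fixed cost curve are both assumptions made purely for illustration.

```python
def costs(n):
    """Hypothetical step counts if 'understanding' a conversation of
    n utterances were linear, quadratic, or exponential in n."""
    return n, n ** 2, 2 ** n

# The asymptotic class dominates long before n gets large:
for n in (10, 20, 40):
    lin, quad, expo = costs(n)
    print(f"n={n:>2}: linear={lin}, quadratic={quad}, exponential={expo}")
```

At n=40 the exponential column is already about a trillion steps while the linear one is 40, which is exactly why complexity theorists care which class a problem falls into; my complaint is that for "intelligence" we don't even know what the problem instance is.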

Then there are many specific remarks I disagree with, e.g.:

"Turning to human intelligence, the absolute range of human intelligence is very small: differences in reaction times are small, backwards digit spans range from 3-7, brain imaging studies have difficulty spotting neurological differences, the absolute genetic influence on intelligence is on net minimal, and this narrow range may be a general phenomenon about humans (Wechsler 1935); and yet, in human society, how critical are these tiny absolute differences in determining who will become rich or poor, who will become a criminal, who will do cutting-edge scientific research, who will get into the Ivy Leagues, who will be a successful politician, and this holds true as high as IQ can be measured reliably (see TIP/SMPY etc)."

Again, exhaustive evidence, but (to me) an obvious conceptual flaw: why do similarities in measurements of basic brain operations between humans imply that the range of intelligence between humans is small? Doesn't the range of human achievement instead imply very large differences in mental "software", which is what should be measured instead?

Think of the mental achievements of researchers pre- and post-Newton. There was a huge leap, not because of any increase in basic brain power, but because the later researchers could learn, apply, and extend the methods Newton discovered. I think it's precisely the accumulated set of such methods that is key to any definition of intelligence (remember that language and conceptual cognition were themselves methods discovered at some point in prehistory).





