
> We are quite slow at developing the first version (it's still a work in progress) but that v1 will likely be able to build a v2 much faster and so on, leading to an "intelligence explosion".

As with everything in nature, there are diminishing returns. It is not clear that any significant leaps in 'intelligence' are possible, at least to the extent proposed by 'singularity' advocates, or that such an intelligence would be able to completely outclass humans, either in our current form or augmented by tech.




As humans evolved larger brains, intelligence increased dramatically. Human brain size is limited by a lot of factors specific to human biology: it needs to be small enough to fit through the birth canal, it can't take up too much of the body's energy, and it can't make us too top-heavy. It seems unlikely that human evolution ran into these limits right at the same time it finished making a maximally-intelligent mind. None of these limiting factors apply to AI. If and when we figure out how to make intelligent software, build AIs, and improve them to human-level intelligence, it seems really unlikely to me that we'll hit a limiting factor right there. I think if we ever create near-human-level AIs, we'll blow right past human-level intelligence and hit whatever limit exists further on, one that's not coincidentally placed at human level.



