
Welp, this was a fun conversation, but it seems to me that at this point there's not much more to do than repeat ourselves. The final thing I'd emphasize is that it's important to make sure metrics measure what you want them to measure. To some degree we've already ruined the name of the Turing Test with excessive simplifications: 'Oh, that? Yeah, it was passed a decade ago, right?'

Of course, one practical issue that in some ways makes this all moot is that if we ever create genuine AI systems capable of actual thought, the entire idea of a "test" would be pointless. Rapid recursive self-improvement, perfect memory, perfect calculation, and the ability to think? We'd likely see exponential discovery and advancement in basically every field of human endeavor simultaneously. It'd be akin to carrying out a 'flying test' after we'd landed on the Moon.




I think we generally both agree that there are some poor implementations of the test, like the one you linked, where (according to their paper) the interrogator could answer "unsure" on a bot's response and count as having been "fooled" by that bot even if they then answered "human" for an actual human. That makes giving nonsense answers a legitimate strategy, unlike with Turing's specification, I'd claim.

Ultimately I do think Turing's experiment measures something interesting. There's a nice "minimal maximality" to it, in that it's a simple game, yet one set up so that solving it encompasses every facet of intelligence that current humans have. It's maybe coincidentally comparable to the test for Turing completeness, in that a Turing machine is conceptually simple, yet simulating one proves computational universality (see the sketch below). I feel there's a risk of missing that nuance and treating the experiment as just another benchmark to be made "easier" or "harder", akin to saying "simulating a Turing machine is too easy, how about simulating the Numerical Wind Tunnel?"
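
To make the analogy concrete, here's a minimal Turing machine simulator in Python (a sketch of my own; the transition-table encoding and the binary-increment example are illustrative choices, not any standard formulation). The control loop is a handful of lines, yet with the right table it can carry out any computation:

    # Minimal Turing machine simulator. The control loop is trivial,
    # yet with a suitable transition table it is computationally universal.
    def run_tm(transitions, tape, state="start", head=0, max_steps=10_000):
        cells = dict(enumerate(tape))  # sparse tape; blank symbol is "_"
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = cells.get(head, "_")
            write, move, state = transitions[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells)).strip("_")

    # Illustrative machine: increment a binary number. The head starts on
    # the most significant bit; the machine scans right, then carries left.
    inc = {
        ("start", "0"): ("0", "R", "start"),
        ("start", "1"): ("1", "R", "start"),
        ("start", "_"): ("_", "L", "carry"),
        ("carry", "1"): ("0", "L", "carry"),  # 1 + carry -> 0, keep carrying
        ("carry", "0"): ("1", "L", "halt"),   # 0 + carry -> 1, done
        ("carry", "_"): ("1", "L", "halt"),   # overflow: new leading 1
    }
    print(run_tm(inc, "1011"))  # 1011 + 1 -> 1100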

> Rapid recursive self-improvement

I'm a bit sceptical of a hard take-off scenario.

Say on the first pass it cleans up a lot of obvious inefficiencies and improves itself by 50%. On the next pass it has more capacity to work with, but the low-hanging fruit has already been picked, so it probably only squeezes out an extra 10%. To avoid diminishing returns, it'd need to automatically build better chip-fabrication plants, improve mining equipment, etc., so that many steps in the pipeline are improving at once. That will all happen eventually, and contribute to humanity's continuing exponential progress, but IMO it will be a relatively gradual changeover (as is happening now) rather than an overnight explosion from some researcher making a bot that can rewrite itself as soon as it can "actually think", whatever that entails.
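
As a toy illustration of why software-only gains flatten out (the 50%-then-halving schedule below is purely an assumption for the sketch, not a claim about real systems):

    # Toy model: each self-improvement pass yields half the fractional
    # gain of the previous one (assumed schedule: 50%, 25%, 12.5%, ...).
    capability = 1.0
    gain = 0.50
    for step in range(1, 21):
        capability *= 1 + gain
        gain /= 2  # low-hanging fruit already picked
    print(f"after 20 passes: {capability:.2f}x")  # ~2.38x, not an explosion

Real improvement schedules won't be this tidy, of course, but any sequence of gains that shrinks fast enough gives the same qualitative picture: capability converges to a finite multiple unless the wider pipeline (fabs, mining, energy) improves too.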



