
> If the failures of previous predictions - those that considered Chess or Go as unmistakable signals of true general reasoning - have taught us anything, it's that general reasoning simply does not exist.

I don't think Deep Blue or AlphaGo etc. were meant to teach us anything - they were just showcases of technological prowess by the companies involved, demonstrations of (narrow) machine intelligence.

But...

Reasoning (differentiated from simpler, shallow "reactive" intelligence) is basically multi-step chained what-if prediction, and may involve a branching exploration of alternatives ("ok, so that wouldn't work, so what if I did this instead ..."), so it could be framed as a tree search of sorts - not entirely dissimilar to the alpha-beta search used by Deep Blue or the MCTS used by AlphaGo.

Of course general reasoning is a lot more general than playing a game like Chess or Go: the type of moves/choices available at each step will vary (these aren't all "game move" steps), as will the "evaluation function" that predicts what would happen if we took a given step. But "tree search" isn't a bad way to conceptualize the process, and this is true regardless of the domain(s) of knowledge over which the reasoning is operating.
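To make the framing concrete, here's a minimal sketch of that idea as a generic best-first tree search with a pluggable move generator and evaluation function (the function names `moves`, `evaluate` and `is_goal` are just illustrative placeholders, not from any particular system):

```python
import heapq

def reason(start_state, moves, evaluate, is_goal, max_expansions=10_000):
    """Generic best-first search over a space of 'what-if' steps.

    moves(state) yields the choices applicable in that state (these need
    not be game moves), and evaluate(state) scores how promising a state
    looks -- the domain-specific "evaluation function".
    """
    counter = 0  # tie-breaker so heapq never has to compare states
    frontier = [(-evaluate(start_state), counter, start_state, [])]
    seen = {start_state}
    while frontier and max_expansions > 0:
        max_expansions -= 1
        _, _, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path + [state]
        for nxt in moves(state):  # branch: "what if I did this instead?"
            if nxt not in seen:
                seen.add(nxt)
                counter += 1
                heapq.heappush(
                    frontier, (-evaluate(nxt), counter, nxt, path + [state])
                )
    return None  # exhausted the search without reaching the goal

# Toy domain: reach 10 starting from 1, where each step is "+1" or "*2".
path = reason(
    1,
    moves=lambda n: (n + 1, n * 2),
    evaluate=lambda n: -abs(10 - n),  # closer to the target scores higher
    is_goal=lambda n: n == 10,
)
```

The "keeping track of state" requirement mentioned below shows up directly here as the `frontier` and `seen` structures - abandoned branches have to be remembered so the search can back up and try something else.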

Which is to say that reasoning is in fact a generalized process, and one whose nature imposes some corresponding requirements (e.g. keeping track of state) on any machine that is to be capable of performing it ...


