
From the linked paper in the article:

>If it relies instead on some sort of deep learning, then the answer is equivocal—at least until another algorithm is able to explain how the program reasons. If its principles are quite different from human ones, it has failed the test.

Why would we expect our reasoning methods to be the only ones, or even the most efficient? If it can reach the correct conclusions, shouldn't that be what matters?




> Why would we expect our reasoning methods to be the only ones, or even the most efficient? If it can reach the correct conclusions, shouldn't that be what matters?

This has epistemological roots.

Knowledge is commonly defined as justified true belief. If I correctly predict that exactly ten years from now it will be a rainy day, that is not knowledge: my belief turned out to be true, but it was not justified.

Ultimately it is very easy to make correct predictions: you just need to make a lot of them. That is why, whenever someone makes successful predictions, we scrutinize their justifications.
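
A quick back-of-the-envelope sketch of that point (the numbers are chosen purely for illustration, not taken from the comment): if 1024 forecasters each make ten coin-flip predictions, on average one of them goes ten-for-ten by luck alone, so a perfect track record without justification tells you very little.

    import random

    # Illustration: with enough random predictors, some compile a
    # "perfect" record purely by chance.
    random.seed(0)

    N_PREDICTORS = 1024   # hypothetical population of forecasters
    N_CALLS = 10          # each makes 10 independent yes/no predictions

    perfect = 0
    for _ in range(N_PREDICTORS):
        # Each call is a 50/50 guess; a predictor is "perfect" if all ten land.
        if all(random.random() < 0.5 for _ in range(N_CALLS)):
            perfect += 1

    # Expected count: 1024 * (1/2)**10 = 1 perfect predictor by chance alone.
    print(f"{perfect} of {N_PREDICTORS} random predictors got all {N_CALLS} calls right")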




