Hacker News

LLMs are so far removed mechanically from brains the idea they reason is not even remotely worth considering.

Jet planes are so far removed mechanically from a bird that the idea they fly is not even remotely worth considering.






You’re right that my argument depends on there being a great physical distinction between brains and H100s (or enough water flowing through troughs).

But since we have known that the properties of wings were major components of flight since at least the myths of Pegasus and Icarus, we rightly connected the similarities in the flight case.

Yet while we have studied neurons and know the brain is a part of consciousness, we don’t understand their role in consciousness the way we understand the wing’s role in flight.

If you got a bunch of daisy-chained brains and they started doing what LLMs do, I’d change my tune, because the physical substrates would then be similar enough. Focusing on neurons, and their facsimile-like abstractions, may be like thinking flight depends upon the local cellular structure of a wing, rather than the overall capability to generate lift, or some other false correlation.

Just because an LLM and a brain get to the same answer doesn’t mean they got there the same way.


Motte? Consciousness.

Bailey? Reason.

How reasonable are the outputs of ANNs, given their inputs? This is a valid question, and it has a useful answer.

From ImageNet to LLMs, we are finding that these tools give responses that are reasonable to some degree.

Recommended reading: Philosophical Investigations by Wittgenstein.



