Unless you can show an example of human reasoning solving a problem outside the Turing computable set, there is no rational basis for assuming the brain is anything but a computer. The very notion that we exceed Turing computability would be revolutionary, with utterly mind-bending consequences across a number of fields.
There is no rational basis for assuming the brain is a "computer" in the same way an Intel x86 chip is a "computer", or that the universe is a "computer". Using language this way without defining terms, starting with what a computer even is, is folly.
There is no rational basis for assuming it is not, as we have not a single example of any system computing a function outside the Turing computable set.
The term "computer" has it's original outside of "electronic computer". It used to be a role, a job function. There has been no time in human history where the only computers have been electronic computers.
But, sure, let's be more precise: any Turing complete system is equivalent to any other Turing complete system and can reasonably be called a computer. Let's also limit it to systems that cannot compute functions outside the Turing computable set; no system, brains included, has ever been shown to compute such a function.
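To make "Turing computable" concrete, here's a minimal sketch of a Turing machine in Python. The machine and its rule table are illustrative (binary increment); the point is that everything we know how to compute at all can be expressed as some table of rules like this.

```python
def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
    """rules maps (state, symbol) -> (new_state, new_symbol, move)."""
    tape = dict(enumerate(tape))  # sparse tape, blank everywhere else
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, tape[head], move = rules[(state, symbol)]
        head += 1 if move == "R" else -1
    cells = sorted(tape)
    return "".join(tape[i] for i in range(cells[0], cells[-1] + 1)).strip(blank)

# Illustrative machine: increment a binary number.
# Move right to the end of the input, then carry back leftward.
rules = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt", "1", "R"),
    ("carry", "_"): ("halt", "1", "R"),
}

print(run_tm(rules, "1011"))  # 1011 (11) -> 1100 (12)
```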
The rational basis for assuming the brain is a computer is that we have not a single shred of evidence that exceeding Turing computability is possible, nor any theory for how to even express a function that is computable by humans but not Turing computable.
If you can find one single such example, there'd be a rational basis for saying the brain isn't a computer. As it stands now, assuming it isn't is nothing more than blind faith.
The reason a lot of people are unhappy about this notion is that it renders the debate moot: any Turing complete system can emulate any other Turing complete system, and an LLM can trivially be made to execute a Turing machine if you put a loop around it (see the sketch below). That means that unless you can find evidence humans exceed Turing computability, AGI is "just" a question of scaling and training.
It could still turn out to be intractable without a better architecture, but the notion that it might not be impossible makes a lot of people very upset. The only way it can be impossible, even for just an LLM with a loop bolted on, is if human brains can compute functions outside the Turing computable set.
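To make concrete what "a loop bolted on" means here, a minimal sketch, assuming a hypothetical `llm_transition` that stands in for an actual model call (the lookup table is only a placeholder so the sketch runs):

```python
# Sketch of "an LLM with a loop bolted on": the model plays the transition
# function of a Turing machine; the loop around it supplies the unbounded
# tape and iteration that a single forward pass lacks.

RULES = {  # same binary-increment machine as in the earlier sketch
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt", "1", "R"),
    ("carry", "_"): ("halt", "1", "R"),
}

def llm_transition(state, symbol):
    # Hypothetical stand-in for a model call: in the real setup this would
    # prompt the LLM with the rule table plus (state, symbol) and parse its
    # reply; here we just look the answer up so the sketch executes.
    return RULES[(state, symbol)]

def run(tape, state="start", blank="_"):
    tape, head = list(tape), 0
    while state != "halt":
        if head == len(tape):
            tape.append(blank)         # grow the tape to the right on demand
        state, tape[head], move = llm_transition(state, tape[head])
        head += 1 if move == "R" else -1
        if head < 0:
            tape.insert(0, blank)      # ...and to the left
            head = 0
    return "".join(tape).strip(blank)

print(run("1011"))  # -> 1100
```

Nothing here claims a given model will play the transition function reliably; the point is only that the loop, not the model, supplies the unbounded memory and repetition, so in-principle capability reduces to Turing completeness.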
"Llm thinks" is false advertising. (Maybe useful jargon, but still)
> Any Turing complete system can emulate any other Turing complete system, and an LLM can trivially be made to execute a Turing machine if you put a loop around it
Wouldn't it be more efficient to drop the LLM and use the underlying hardware as the Turing complete system?
BTW, the Turing test is just an admission that we have no way of defining human-level intelligence apart from "you'll know it when you see it".
I agree with you. "Chain of thought" is not reasoning, just like an LSD trip isn't.
I think we lack a good formal definition of what (fuzzy) reasoning is. Without it, we will always have some kind of unexplained hallucinations.
I also believe AGI could be implemented as a model that can train models for specific tasks completely autonomously. But that would kill the cash cow, so OpenAI etc. are not interested in developing it.
I’ve held the staff engineer title a few times, and where the company used it, my role was to be a sort of backup when some team could not solve something. When I was not allocated to a specific team, I collaborated with other engineers on specs and higher-level planning. We also worked on internal libraries and tools.