Well first of all I never claimed that I was capable of thinking (smirk).
We also haven't agreed on a definition of "thinking" yet, so as you can read in my previous comment, there is no meaningful conversation to be had.
I also don't understand how your oddly aggressive phrasing adds to the conversation,
but if it helps you: my rights and protections do not depend on whether I'm able to prove to you that I am thinking.
(It also derails the conversation, for what it's worth - that's a good strategy in a debating club, but those are about winning or losing, not about fostering and obtaining knowledge.)
Whatever you meant to say with "Sometimes, because of the consequences of otherwise, the order gets reversed" eludes me as well.
If I say I'm innocent, you don't say I have to prove it. Some facts are presumed to be true without burden of evidence because otherwise it could cause great harm.
So we don't require, say, minorities or animals to prove they have souls, we just inherently assume they do and make laws around protecting them.
Thank you for the clarification.
If you expect me to justify an action that depends on your being innocent, then I actually do need you to prove it.
I wouldn't let you sleep in my room assuming you're innocent - or in your words: because of the consequences of otherwise.
It feels like you're moving the goalposts here: I don't want to justify an action based on something, I just want to know whether something has a specific property.
With regards to the topic: Does AI think?
I don't know, but I also don't want to act upon knowing if it does (or doesn't for that matter).
In other words, I don't care.
The answer could go either way, but I'd rather say that I don't know (especially since "thinking" is not defined).
That means I can assume either one and weigh the consequences, using some heuristic to decide which assumption is better given the action I want to justify doing or not doing.
If you want me to believe an AI thinks, you have to prove it; if you want to justify an action, you may assume whatever you deem most likely.
And if you want to know if an AI thinks, then you literally can't assume it does; simple as that.
Sometimes, because of the consequences of otherwise, the order gets reversed