This doesn't feel like a particularly useful distinction, though, because each of us is an existence proof for consciousness only in ourselves individually, not in humans generally.
It is not a priori certain that what I consider my conscious mind is a shared common experience of being conscious, since as you note there's no test for it.
While I agree in principle, I prefer to act cautiously.
This caution cuts both ways: I will treat an uploaded mind as having Ξ from the point of view of human rights and animal rights (i.e. don't do things to it which cause suffering), and yet also treat uploaded minds as not possessing Ξ for the purposes of "would uploading make me immortal if it happened to me?" A perfect sim would necessarily achieve that, but no sim is perfect, and it is unclear what separates an actual sim from a merely sufficient one.
I do not believe current AI is anywhere near Ξ, so this is a question for the future rather than today; but the future has a way of arriving sooner than people expect and I do think we should do what we can to answer this now rather than waiting for it to become urgent.
If an AI is supposed to have Ξ and does, or if it is supposed to not have Ξ and indeed does not, this is fine.
If an AI is supposed to not have Ξ but it does, it’s enslaved.
If an uploaded mind is supposed to have Ξ but doesn’t, the person it was based on died and nobody noticed.