Interesting interpretation... It's amazing how meta that problem gets. I'd point out, however, that you're casually bandying about the term "consciousness" as if it were well defined. The main benefit of Searle's use of "understanding" is that it's a narrow facet of what humans mean by "consciousness": it's universal, yet extremely difficult to define in "realist" terms. Perhaps you'd make more headway if you defined "consciousness" in similarly narrow terms?
Of course, that entire thought experiment fails if you don't consider the experience of "understanding" to be meaningful. I don't have any strong sentimental attachment to the concept; I see no reason we couldn't achieve Strong AI in the sense that Searle argued against (and that the parent poster appears to argue for).
I wonder whether the question you ask can ever be answered, even once AIs insist that they have consciousness exactly like ours.