Another way to look at this is that the AI has no "ethics" or "morals," and therefore wouldn't care that it's betraying its buddy AI/human. In other words, the AI would "sell its own mother" if it came out ahead.
I think this is an important perspective to keep in mind as we integrate AI more and more into our societies "because it's a more efficient/smarter way" of doing things. Perhaps higher efficiency won't always be a desired outcome.
Leaving an AI in charge of a nation's healthcare funds comes to mind here. Would an "advanced AI" start denying people healthcare because they have a "low enough" chance of survival, or because they've reached an age where continuing to pay for their treatment is no longer "cost-effective" for society? Is that a desired outcome?