Anecdotally, having done multi-year deep-dive security reviews of both Asian and Western carrier equipment (and compared notes with many colleagues working on similar efforts), there is a stark difference. It's not even close. I've focused on firmware security analysis of RAN/eNodeB/gNodeB equipment, but have also done many pentests targeting core infrastructure. Western nations have actually done the baseline assessment over years and years of deployment and defence - that's why the contrast shows up so clearly in the comparison.
The main purpose of this system was not to judge code quality (although that's a very useful side effect!). The goal was to convince politicians that they could allow the installation of cheaper telecom hardware made by a geopolitical rival while also protecting themselves from espionage and deliberate sabotage.
Now personally I would say that this is a crazy idea from the jump, given the usual asymmetry between attackers and defenders. But even if you grant that it's possible, it requires that you begin with extremely high standards of code quality and verifiability. Those were apparently not present.
You're not considering the entire scope of the issue. For example, the UK cannot pass legislation that binds Huawei or any other Chinese company. You might say the same is true of the US, and to some degree you'd be correct, but that ignores the fact that the US is (was?) a strong ally, which provides much more leverage over the situation. It ALSO means that IF these networks are being used to spy on citizens, there's less to worry about (still a worry, but less). And if that data were not being shared with the UK, it would be a violation of the Five Eyes agreement, which gives the UK yet more leverage over the situation.
So yeah, even if they are equal, there are A LOT of reasons to spend the extra money.
As the other respondents said, it’s an issue of threat modeling. If you essentially model the origin country as your ally, you still need to worry about rogue developers and bad code quality enabling outside exploits. If you model the origin country as a potential enemy, then you need a vastly higher level of assurance.
Also, even if all vendors ship equally crappy versions, it's still slightly more secure to prefer a vendor in your own or an allied nation. At least your interests are mildly aligned.
That control already exists because similar levels of audits have already happened on the competition. I'm not saying the competition is a shining example of quality, it definitely isn't, but it meets a bar of some set of basic security compliance standards.
That's a different kind of experiment, and I've just got to say that there is no "one size fits all" method of experimentation. The reason there doesn't need to be a control here is that comparators have ZERO effect on the questions being asked.
The question being tested is:
- Do Huawei devices provide adequate security?
Not
- Are Huawei devices better than, or on par with, other vendors in terms of security?
These are completely different questions with completely different methods of evaluation. And honestly, there is no control in the latter. To have a control you'd have to compare against normal operating conditions, and at that point you really should just do a holistic analysis and provide a ranking - which still means evaluating each vendor independently. _You don't want to compare_ until the very end. Any earlier comparison is only going to bias your experiments.
tldr: most experiments don't actually need comparisons to adequately answer their hypotheses.