If fairness is defined in this approach as a process that adds no information to the system, and in this case it actually removes information (signal) and noise (bias) alike, all it would serve to do is further obscure the cause of being admitted.
For an admissions lottery to be considered "fair," you have to assume the participant selection is fair, and that the functioning of the university itself is indifferent to whom it admits. Maybe they should A/B test it, admitting some students at random and comparing their success against those admitted through the traditional process. Arguably, that's even what "legacy" students provide: a sample independent of the admissions process.
That we're having this discussion at all is a stronger indicator of the waning of the university system as a meaningful process, and of how undergraduate education is subject to Goodhart's Law: it has ceased to be a useful measure of aptitude, competence, or much of anything, really.
In a threshold-based admissions lottery, everyone more than k sigma from the mean (for example) is collapsed into the same category, so the information distinguishing them is lost. But the implicit premise of this system is that you can't accurately measure those distinctions anyway.
Given that premise, you're not adding noise to the system, though you are removing information. I think the claim in question is that trying to precisely measure people more than k sigma from the mean is intrinsically noisy and prone to spurious correlation with academic success. So under this system you'd also be removing noise.
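The collapse-then-draw mechanism can be made concrete with a small sketch. This is a hypothetical illustration, not any university's actual process: the `threshold` and `seats` parameters are made up, and scores are simulated as standard normals.

```python
import random

def threshold_lottery(applicants, threshold, seats, rng):
    """Collapse everyone at or above the threshold into one pool,
    then draw seats uniformly at random from that pool.

    applicants: dict mapping applicant id -> score (in sigma units).
    threshold and seats are hypothetical parameters for illustration.
    """
    eligible = [name for name, score in applicants.items() if score >= threshold]
    # Within the eligible pool, score differences carry no weight:
    # a 2.1-sigma applicant and a 3.5-sigma applicant are drawn alike,
    # so information distinguishing them is discarded by construction.
    return rng.sample(eligible, min(seats, len(eligible)))

rng = random.Random(0)
# Simulated applicant scores, standardized to mean 0, sigma 1.
applicants = {f"a{i}": rng.gauss(0, 1) for i in range(1000)}
admitted = threshold_lottery(applicants, threshold=2.0, seats=10, rng=rng)
```

Note that whether discarding the within-pool ranking loses signal or only noise is exactly the premise under dispute: the code throws the ranking away either way.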
So if you disagree, I think your point of contention should be with the premise, because we can't really argue about the statistical properties of the lottery until we first settle on the underlying axioms.
In a fair lottery, the proportion of students of a given type in the output will match their proportion in the input pool. Appeals to randomness just forfeit responsibility for, and ownership of, the decision.
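The proportion-preserving property of a uniform draw can be checked directly. This is a sketch with a made-up pool (30% "type A") and a fixed seed; the admitted share matches the input share in expectation, up to binomial sampling noise.

```python
import random

rng = random.Random(1)

# Hypothetical eligible pool: 30% of applicants are of "type A".
pool = ["A"] * 300 + ["B"] * 700

# A fair (uniform) lottery over the pool.
admitted = rng.sample(pool, 100)

# The admitted share of type A is an unbiased estimate of the
# input share (0.30); no type is favored by the draw itself.
share_a = admitted.count("A") / len(admitted)
```

The point the comment makes still holds: the lottery reproduces whatever composition the input pool already has, so any unfairness upstream of the draw passes straight through it.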