I don't think they could be legally forced to use specific algorithms - most laws state the ends ("privacy"), not the means.
In the old world of analog marketing, you had market research companies (Nielsen, Kantar, GfK) to measure the audience and provide benchmarks.
One way to help curb the power of adtech companies would be to force them to let go of measurement. That would require adjustments to privacy laws, creating a specific "audience data processor" role.
The way it usually works, the legislature writes a law that states the end and establishes (or repurposes) an executive agency to implement the means, vesting it with the powers necessary to do so. The agency then comes up with specific procedures etc. - and it can enforce those.
For example, federal law in the US does not define the procedure to properly destroy a firearm (such that it ceases to be regulated by the relevant laws) - but the ATF does, and it's fairly specific: https://www.atf.gov/firearms/how-properly-destroy-firearms
I don't see why the same approach couldn't work here.
Couldn't you write legislation requiring that, to claim anonymity, companies have to provide a guarantee that is mathematically bounded? And/or, as the OP suggested, add transparency? I'd think the combination is better, since transparency alone won't make sense to most people or allow them to make informed decisions.
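For what it's worth, there already is a mathematically bounded notion of this: differential privacy. A minimal sketch of the Laplace mechanism in Python (illustrative only - the function name and the epsilon threshold are my own assumptions, not anything from an actual statute):

    import numpy as np

    def laplace_count(true_count: int, epsilon: float) -> float:
        # Release a count with epsilon-differential privacy.
        # Adding or removing one person changes a count by at most 1
        # (sensitivity = 1), so Laplace noise with scale 1/epsilon
        # mathematically bounds what anyone can learn about any individual.
        noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
        return true_count + noise

    # A regulator could, say, mandate epsilon <= 1.0 for published audience counts.
    audience_size = 12_345  # hypothetical true audience count
    print(laplace_count(audience_size, epsilon=1.0))

A law could then say "anonymous means epsilon at most X", and an agency could audit against that, much like the ATF example above.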
In some ways it's preferable to leave the legislation a little open-ended and put the details into a more flexible rule-making process. Then the rules can be updated by knowledgeable people as circumstances change, either to adopt new standards, relax them, or address unforeseen gaps.
Can we perhaps have a trusted third party which anonymizes data for these companies?