I can see it being useful to have a feature that could verify whether another user on a website is a human. E.g. on Reddit or Twitter, the user you're talking to gets a little checkmark (not the blue checkmark) next to their name if they've been WEI-validated. Rather than refusing to let a user use the platform, it would just let other users know that the person they're talking to isn't a bot.
WEI doesn't check whether they're a bot, though: a bot operator can trivially run a "trusted" browser setup and just automate it with Selenium or whatever. Or, in the worst case, a $5 robot arm driving a perfectly attested browser.
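Roughly the kind of thing I mean (just a sketch; the port and URL are illustrative): a human launches a completely stock Chrome with --remote-debugging-port=9222, it passes whatever attestation it's asked for, and then you attach to it from outside:

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# Attach to an already-running, normally launched Chrome instance
# (started by hand with --remote-debugging-port=9222), rather than
# letting Selenium spawn its own instrumented browser.
opts = Options()
opts.add_experimental_option("debuggerAddress", "127.0.0.1:9222")
driver = webdriver.Chrome(options=opts)

driver.get("https://example.com")  # drive the "trusted", attested browser
print(driver.title)
```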
The whole "this will block bots" part of the spec is complete bollocks and a red herring to distract from the real purpose: blocking adblockers and competition from new browsers. And DRM, of course.
I guess it depends on how far the certification goes.
If even extensions can be detected, why wouldn't Selenium be detected?
Granted, I don't know how it works exactly.
On top of the $5 robot arm, you need to add $200 for the device it's operating.
Drastically raising the cost to run a bot farm is key. You can't fully eliminate inauthentic behavior, but you can make a lot of it unprofitable.
You don't have to use Selenium. You can use any software that can read video memory and act as a mouse and keyboard; it doesn't have to be an extension at all. The browser isn't directly involved in the automation, so vetting the browser or the hardware does no good.
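To make that concrete, here's a rough sketch (pyautogui is just one example of such software, and the screenshot filename is a made-up placeholder): the bot reads the screen and injects OS-level input, so the attested browser only ever sees what looks like a human at the keyboard.

```python
import pyautogui  # screen capture plus synthetic mouse/keyboard, entirely outside the browser

# Find the reply button in a screenshot of the (fully attested) browser window,
# then click it and type. To the browser these are ordinary human input events.
button = pyautogui.locateCenterOnScreen("reply_button.png")  # placeholder image
if button is not None:  # newer pyautogui versions raise instead of returning None
    pyautogui.click(button)
    pyautogui.write("looks perfectly human to the page", interval=0.05)
```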
If McDonald's required 12-year-olds to use an ordering app because their banknotes might be stolen, would that be a reasonable compromise? Foreclosing the possibility of children not being tracked (tracking them is illegal, btw) in exchange for some marginal benefit for big tech?