All the big service providers have a large problem with botnets logging in to accounts with stolen credentials, or with fake spam accounts the attackers created themselves. The attackers do this with automation, so detecting automation is a good way to detect and stop them without major inconvenience to users (who generally don't automate their own Google accounts).
The nature of this technique means their servers can't tell the difference between a niche browser and a dedicated abuse tool. If a tool claims to be Chrome and gets spotted because it's not, the obvious fallback is to make up new user agents that are rare or unusual. Sure, the traffic is very visible to humans who may be watching post-hoc, but a UA is just a string, so it can be changed constantly. Blocking bad traffic means adapting to changes in it automatically and quickly. So eventually this pushes companies towards locking out browsers nobody uses, because it's too hard to tell them apart from malicious automation software.
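The kind of consistency check involved can be sketched roughly like this. This is an illustrative simplification, not how any real provider's detection works: production systems combine many more signals (TLS fingerprints, JS challenges, behavioral data), and the specific header check here is just one well-known example of a signal a naive spoofing tool misses.

```python
# Toy heuristic: flag requests whose claimed User-Agent doesn't match
# other observable signals. Real detection uses far more signals than this.

def claims_chrome(user_agent: str) -> bool:
    # Chrome UAs contain "Chrome/", but so do Edge and Opera; exclude those.
    return ("Chrome/" in user_agent
            and "Edg/" not in user_agent
            and "OPR/" not in user_agent)

def looks_like_real_chrome(headers: dict) -> bool:
    # Modern Chrome sends client-hint headers like sec-ch-ua; a tool that
    # only spoofs the User-Agent string often forgets to send these.
    return "sec-ch-ua" in {k.lower() for k in headers}

def is_suspicious(headers: dict) -> bool:
    ua = headers.get("User-Agent", "")
    return claims_chrome(ua) and not looks_like_real_chrome(headers)

# A UA spoofed as Chrome, but with no client hints, gets flagged:
spoofed = {"User-Agent": "Mozilla/5.0 ... Chrome/120.0.0.0 Safari/537.36"}
genuine = {**spoofed, "sec-ch-ua": '"Chromium";v="120"'}
assert is_suspicious(spoofed)
assert not is_suspicious(genuine)
```

The cat-and-mouse point in the paragraph above is that every check like this is individually cheap for an attacker to learn and defeat, which is why the checks multiply until an obscure-but-legitimate browser fails enough of them to look like a bot.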
That's unfortunate: it makes it harder for someone to get traction with a genuinely new rendering engine. But rendering engines cost billions of dollars to develop these days, and even Microsoft doesn't want to play that game anymore, so it's a rather theoretical loss compared to the gains, which are large and real.
Source: I used to work on this stuff.