Because of this, stopping sideloading is all about a delicate balance of incentives: "carrots and sticks," so to speak.
We want to make it easy and effective for people to do the good thing (carrots), and hard and dangerous enough to dissuade them from doing bad things (sticks).
Previously, our approach was to provide easy APIs, which we controlled, for installing extensions into Chrome. That way the Chrome team could monitor usage and see if it got out of hand.
Unfortunately, as Chrome became more popular, it did in fact get out of hand. So what you see here is us basically adding a few sticks, trying to reduce overall bad behavior. (We're also working on things in other areas so that we don't simply push the bad behavior into harder-to-monitor channels.)
Also, where do you store the blacklist? Remember that the bad guy can just modify it to remove his entry. Or he can modify Chrome itself to not check the blacklist.
There is a long series of escalations you might propose here (encrypt the profile, try to detect changes, store the profile on the server, add a developer key system, etc.). I'm just going to summarize and say there is no perfect solution to this problem. You can make bad behavior somewhat harder, but you cannot eliminate it without true application isolation.
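To make the blacklist point concrete, here is a minimal sketch (the file layout and extension IDs are invented for illustration, not Chrome's actual format) of why a client-side blacklist offers little protection: any process running with the user's own privileges can simply rewrite the file.

```python
import json
import tempfile

def remove_from_blacklist(path, ext_id):
    """Simulate an attacker deleting his own entry from a local blacklist file."""
    with open(path) as f:
        entries = json.load(f)
    entries = [e for e in entries if e != ext_id]
    with open(path, "w") as f:
        json.dump(entries, f)

# Hypothetical local blacklist containing two banned extension IDs.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(["evil-extension-id", "other-bad-id"], f)
    blacklist_path = f.name

# The "bad guy" just edits himself out before the browser ever checks.
remove_from_blacklist(blacklist_path, "evil-extension-id")

with open(blacklist_path) as f:
    print(json.load(f))  # the attacker's entry is gone
```

This is why each escalation (encryption, change detection, server-side state) only raises the bar: anything the browser can read and write on the client, a same-privilege attacker can too.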
At each escalation you increase the complexity of the product, make genuine features harder to introduce, add bugs, and make the experience for legitimate developers worse. It's a challenging environment to write software in.
That said, the team has some pretty clever ideas in development for future releases. We fight on.
In principle, since Chrome is often installed per-user, there is nothing to stop any other program running as that user from changing Chrome in any way it sees fit: it can simply add an extension and mark it "user accepted" however Chrome happens to record that.
Microsoft has had trouble with this kind of thing for years, as I say.
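As a sketch of why this is so hard to prevent: per-user browser state is ultimately just files the user's own programs can write. The snippet below models this with invented key names (this is not Chrome's real preferences schema) to show how another same-user process could inject an extension entry and forge the consent flag.

```python
import json

# Invented, simplified stand-in for a per-user browser preferences file.
# Chrome's real "Preferences" JSON is far more complex; the keys here are
# hypothetical and purely illustrative.
prefs = {"extensions": {"settings": {}}}

def sideload(prefs, ext_id):
    """Any process running as the same user can add an entry and forge acceptance."""
    prefs["extensions"]["settings"][ext_id] = {
        "state": "enabled",
        "user_accepted": True,  # forged: the user never actually consented
    }
    return prefs

sideload(prefs, "hypothetical-extension-id")
print(json.dumps(prefs, indent=2))
```

Without OS-level isolation between applications, the browser cannot distinguish this forged entry from one the user genuinely created.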