> Tsakalidis said that in order to make modifications to Electron apps, local access is needed, so remote attacks to modify Electron apps aren't (currently) a threat. But attackers could backdoor applications and then redistribute them, and the modified applications would be unlikely to trigger warnings—since their digital signature is not modified.
So the issue is that Electron app distributions don't include a signed integrity check, so there's no way for end users to detect whether they got a modified version. I thought the macOS builds did do this, but maybe the ASAR bundles aren't included in the hash, or maybe I'm wrong entirely.
I assume a solution would store the signing pubkey on initial install and then check updates against it. The only way the signing key could be checked, other than trust-on-first-install, would be through some kind of registry, which is what I assume the Windows and Mac stores are geared toward. Am I correct on all this?
EDIT: Either way, it seems like the solution is to only use the projects' official distribution channels. Signed integrity checks would be useful but probably wouldn't change the situation that dramatically. Is that accurate?
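The trust-on-first-install idea above can be sketched roughly: pin a fingerprint of the publisher's signing key when the app is first installed, then compare the key presented by each update against that pin. This is a toy illustration, not any real updater's API; the file name and the `pin_key`/`check_update_key` helpers are made up for the example.

```python
import hashlib
import json
from pathlib import Path

PIN_FILE = Path("trusted_key.json")  # hypothetical pin store, written on first install

def fingerprint(pubkey_bytes: bytes) -> str:
    """SHA-256 fingerprint of the raw public-key bytes."""
    return hashlib.sha256(pubkey_bytes).hexdigest()

def pin_key(pubkey_bytes: bytes) -> None:
    """First install: trust whatever key we saw, and remember its fingerprint."""
    PIN_FILE.write_text(json.dumps({"fingerprint": fingerprint(pubkey_bytes)}))

def check_update_key(pubkey_bytes: bytes) -> bool:
    """Update: the presented key must match the pinned fingerprint."""
    pinned = json.loads(PIN_FILE.read_text())["fingerprint"]
    return fingerprint(pubkey_bytes) == pinned
```

A real updater would additionally verify the update payload's signature with that key; the pin only answers "is this the same publisher I originally installed from?", which is exactly the trust-on-first-install limitation noted above.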
> I thought the macOS builds did do this, but maybe the ASAR bundles aren't included in the hash?
Yeah, I think that's the problem they're describing. It sounds like the Mac setup requires binaries -- like the Electron runtime itself -- to be codesigned, but if the first thing your codesigned binary does is read an unprotected JS file off disk and execute it, there's no codesigning benefit.
> Either way, I assume a solution would store the signing pubkey on initial install and then check updates against it
Not just updates that you initiate yourself, though -- I think the idea is that any other app on the system could backdoor the JS in the ASAR at any time. That's pretty hard to defend against.
> but if the first thing your codesigned binary does is to read an unprotected JS file off disk and execute it, there's no codesigning benefit.
The ASAR files described in this post are part of the application's signature, though. You can't modify that file and then redistribute the app to another machine without Gatekeeper getting incredibly angry at you.
E.g. try modifying the ASAR file, zip the app up, upload it to Google Drive, download it again, and try to run the app. Gatekeeper will boot it into the shadow realm :)
Good point, but if the attacker has filesystem access, you're already hosed. I suppose there could be some other risk where the ASAR could be modified without full FS access? But I'd want to know what that attack is, if that's the case.
I think that's not supposed to be true in modern (e.g. latest macOS) threat models. App Y isn't permitted to just replace App X unannounced, and on both Mac and Win there's a large codesigning infrastructure in place to provide that protection.
I just tried this out with Slack on macOS, and it did work... almost as advertised. I had to use sudo to change the application files, which means this isn't really much of a novel attack surface, but it did bypass the code signing checks quite handily.
So, is this a "vulnerability"? That may be a stretch, as far as I can see, but putting application code in Resources/ definitely counts as a "smell" in my book.
> I just tried this out with Slack on macOS, and it did work
Here's the thing with how Gatekeeper works: that application had already passed Gatekeeper and will never be _fully_ validated ever again.
If you zipped your modified Slack.app up, uploaded it to Google Drive, and downloaded it again, Gatekeeper would 100% reject that application; the ASAR file is included as part of the application signature. You can prove this by checking the "CodeResources" file in the app's signature.
You can't redistribute the app without Gatekeeper completely shutting you down.
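CodeResources is essentially a signed manifest of per-file hashes covering the bundle's resources. The mechanism can be sketched in miniature; this is a toy stand-in to show the idea, not Apple's actual format (which is a signed plist with its own hash and omission rules):

```python
import hashlib
from pathlib import Path

def build_manifest(bundle: Path) -> dict:
    """Record a SHA-256 digest for every file in the bundle (toy CodeResources)."""
    return {
        str(p.relative_to(bundle)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(bundle.rglob("*"))
        if p.is_file()
    }

def verify(bundle: Path, manifest: dict) -> list:
    """Return the relative paths whose contents no longer match the manifest."""
    return [
        rel for rel, digest in manifest.items()
        if hashlib.sha256((bundle / rel).read_bytes()).hexdigest() != digest
    ]
```

Against a manifest like this, a modified `app.asar` shows up immediately on re-verification. The catch the thread is circling is *when* that check runs: macOS does the full validation at quarantine time (the downloaded, redistributed copy), not on every launch of an already-approved app.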
I was coming back to follow up and say that that's exactly what I found -- running `codesign --verify` does show the modification. It makes sense that Gatekeeper wouldn't re-verify a 185 MB bundle on every launch, which makes me wonder if there's something else macOS could be doing at the FS level to see if any files have been modified and trigger a new check.
At any rate, while I don't quite take back what I said about application code in Resources/, I do take back the implication that it had anything to do with this; there doesn't seem to be anything Electron-specific about TFA, other than that exposing raw JS lowers the bar for who can write the code to inject. (Assuming you can get FS access to inject code in the first place, of course.)
But this will all be in vain if the attacker scenario includes unfettered file-system access. (They can modify the app to not perform these checks, for example.)