
Author here, no clue about homomorphic (or whatever it's called) encryption, but what could certainly be done is some sort of encryption of the model tied to the inference engine.

So e.g.: Apple CoreML issues a public key, the model is encrypted with that public key, and somewhere in a trusted computing environment the model is decrypted using the private key and then run for inference.
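Roughly, in code — a toy sketch using Python's `cryptography` package, where all names and the payload are placeholders, and in reality the private-key half would run inside the trusted environment, not in the same process:

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

# Vendor side: the inference engine publishes a public key.
device_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
public_key = device_key.public_key()

# Model owner side: encrypt the weights with a fresh AES key,
# then wrap that AES key with the device's public key.
model_bytes = b"...model weights..."          # placeholder payload
content_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
encrypted_model = AESGCM(content_key).encrypt(nonce, model_bytes, None)
wrapped_key = public_key.encrypt(
    content_key,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# Device side (ideally inside the trusted environment): unwrap the
# AES key with the private key, then decrypt the model for inference.
unwrapped = device_key.decrypt(
    wrapped_key,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
assert AESGCM(unwrapped).decrypt(nonce, encrypted_model, None) == model_bytes
```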

They should of course use multiple keypairs etc., but in the end this is just another obstacle in your way. Once you own the device, root it, or even gain JTAG access to it, you can access and control everything.

And matrix multiplication is computationally expensive; I doubt they would add some sort of encryption step to each and every cycle.




In principle, device manufacturers could make hardware DRM work for ML models.

You usually run inference on GPUs anyway, and those usually have some kind of hardware DRM support for video already.

The way hardware DRM works is that you pass some encrypted content to the GPU and get a blob containing the content key from somewhere, encrypted in a way that only this GPU can decrypt. This way, even if the OS is fully compromised, it never sees the decrypted content.
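As a toy sketch, the flow looks something like this — class names are invented, and real schemes like Widevine use asymmetric provisioning rather than a shared fused key, but the shape is the same:

```python
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

# Fused into the GPU at manufacture; the license server knows it too
# (in reality via a per-device provisioning database, not a global).
DEVICE_KEY = AESGCM.generate_key(bit_length=256)

class LicenseServer:
    """Knows the content key; wraps it so only this GPU can unwrap it."""
    def __init__(self, content_key):
        self.content_key = content_key

    def key_blob_for_device(self):
        nonce = os.urandom(12)
        return nonce, AESGCM(DEVICE_KEY).encrypt(nonce, self.content_key, None)

class GPU:
    """The only place DEVICE_KEY exists; decryption happens on-die."""
    def play(self, key_blob, content_nonce, encrypted_content):
        nonce, wrapped = key_blob
        content_key = AESGCM(DEVICE_KEY).decrypt(nonce, wrapped, None)
        return AESGCM(content_key).decrypt(content_nonce, encrypted_content, None)

# The OS only ever shuttles ciphertext between server and GPU:
content_key = AESGCM.generate_key(bit_length=256)
content_nonce = os.urandom(12)
encrypted = AESGCM(content_key).encrypt(content_nonce, b"frame data", None)
blob = LicenseServer(content_key).key_blob_for_device()
assert GPU().play(blob, content_nonce, encrypted) == b"frame data"
```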


But then you could compromise the GPU, probably :)

Look at the bootloader: can you open a console?

If not, can you desolder the flash and read the key?

If not, can you access the bootloader when the flash is not detected anymore?

...

Can you solder off the capacitors and glitch the power line to do a [voltage fault injection](https://www.synacktiv.com/en/publications/how-to-voltage-fau...)?

Can you solder a shunt resistor into the power line, observe the fluctuations, and do [power analysis](https://en.wikipedia.org/wiki/Power_analysis)? (There's a toy sketch of that after this list.)

There are a lot of doors, and every time someone closes one, a window remains tilted open.
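To make that last one concrete, here's a toy correlation power analysis against simulated traces — the leakage model and all numbers are invented for illustration:

```python
# Toy correlation power analysis: recover one key byte from simulated
# power traces. Real attacks target a non-linear step (e.g. the AES
# S-box output), which separates wrong guesses much more sharply; this
# linear toy just keeps the sketch short.
import numpy as np

rng = np.random.default_rng(0)
SECRET_KEY_BYTE = 0x5A

def hamming_weight(x):
    b = np.atleast_1d(x).astype(np.uint8)[..., None]
    return np.unpackbits(b, axis=-1).sum(axis=-1)

# "Measure" 2000 traces: leakage is the Hamming weight of
# (plaintext XOR key) plus Gaussian noise from the shunt resistor.
plaintexts = rng.integers(0, 256, size=2000, dtype=np.uint8)
traces = hamming_weight(plaintexts ^ SECRET_KEY_BYTE) + rng.normal(0, 1.0, 2000)

# For every key guess, predict the leakage and correlate it with
# what we observed; the right guess correlates best.
guesses = np.arange(256, dtype=np.uint8)
predictions = hamming_weight(plaintexts[None, :] ^ guesses[:, None])
correlations = [np.corrcoef(p, traces)[0, 1] for p in predictions]

print(hex(int(np.argmax(correlations))))  # -> 0x5a
```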


Any company serious about building silicon that holds keys wouldn't just store them in flash.

Try getting a private key off a TPM. There have been novel attacks, but they are few and far between.

Try getting a key out of Apple's Secure Enclave (or whatever buzzword they call it).


You're right about the TPM; I won't get the key out of it. It's a special ASIC which doesn't even have the silicon gates to give me the key.

But is the TPM doing matrix multiplication at 1.3 petaflops?

Or are you just sending the encrypted file to the TPM and getting the unencrypted file back from it, which I can intercept, be it on the SPI bus or by gaining higher privileges on the core itself? Just like with this app, only lower down?
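That round trip is easy to picture — a sketch assuming tpm2-tools and an already-sealed object, with the context and output paths invented:

```python
# The TPM's internal key never leaves the chip -- but the unsealed
# plaintext does.
import subprocess

subprocess.run(
    ["tpm2_unseal", "-c", "model.ctx", "-o", "model_plaintext.bin"],
    check=True,
)

# model_plaintext.bin now sits in the host filesystem, and the bytes
# crossed the SPI/LPC bus to get there: readable by root, a debugger,
# or a logic analyzer clipped onto the bus during the transfer.
```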

Whatever core executes the multiplications will be vulnerable one way or another to a motivated attacker with the proper resources. This is true for every hardware device, but the attack vector of someone jailbreaking a Nintendo Switch using an electron microscope and an ion-beam mill is negligible.

If you are that paranoid about AI models being stolen, they are apparently worth stealing, so some attacker will have enough motivation to power through.

Stealing the private key out of a GPU, which would let you steal a lot of valuable AI models, is break-once-break-everywhere.

Apple's Secure Enclave is also just a TPM under different branding, or maybe an HSM, dunno.


I'll concede you are correct that whether the key is extractable or not doesn't really matter if the GPU will eventually need to store the decrypted model in memory.

However, if Nvidia or a similar vendor were serious about securing these models, I'm pretty sure they could integrate the crypto into the hardware multipliers etc. such that the model never needs to sit decrypted in memory.
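Conceptually something like this toy NumPy sketch — the tiling, the "fused" key, and AES standing in for whatever a vendor would actually bake into the datapath are all invented for illustration:

```python
# Weights live in memory only as per-tile ciphertexts; each tile is
# decrypted into a small scratch buffer just before its partial
# product, then dropped. On real silicon the decrypt unit and
# scratchpad would sit on-die next to the MACs.
import os
import numpy as np
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

TILE = 64
key = AESGCM.generate_key(bit_length=256)  # fused into the GPU, never in DRAM

def encrypt_tiles(weights):
    """Ship the model as independent (nonce, ciphertext) row tiles."""
    tiles = {}
    for i in range(0, weights.shape[0], TILE):
        nonce = os.urandom(12)
        block = weights[i:i + TILE].tobytes()
        tiles[i] = (nonce, AESGCM(key).encrypt(nonce, block, None))
    return tiles

def encrypted_matvec(tiles, rows, cols, x):
    """y = W @ x without W ever being whole in plaintext memory."""
    y = np.empty(rows, dtype=np.float32)
    for i, (nonce, ct) in tiles.items():
        block = np.frombuffer(AESGCM(key).decrypt(nonce, ct, None),
                              dtype=np.float32).reshape(-1, cols)
        y[i:i + block.shape[0]] = block @ x  # plaintext exists per-tile only
    return y

W = np.random.rand(256, 128).astype(np.float32)
x = np.random.rand(128).astype(np.float32)
np.testing.assert_allclose(encrypted_matvec(encrypt_tiles(W), 256, 128, x),
                           W @ x, rtol=1e-5)
```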

But at this point there isn't much value in deploying models to the edge, particularly the type of models they would really want to protect, as those are too large.

The types of models deployed to edge devices (like the Apple ones) are generally quite small and frankly not too difficult (computationally) to reimplement.





