
IMHO, it's not significantly harder, at least in principle, than on other platforms - in fact, if only intuitively, I suspect it's easier.

Most high-end FPGAs aren't "burned" the way a logic array is, by blowing fuses. The configuration data (i.e. the connections between the FPGA's tiny components) is stored in non-volatile memory and loaded when the device starts. Altering the design is simply a matter of altering this configuration data, which is easy to do dynamically, seeing how there's a closed piece of code running on a closed module that has access to every bit of it.
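To make the load-at-startup point concrete, here's a minimal sketch of that boot flow: a small loader copies the bitstream out of external flash into the device's volatile configuration memory. The file format here is invented for illustration (the magic value happens to be the Xilinx sync word, but nothing else about real bitstream framing is modeled):

```python
def load_configuration(flash: bytes) -> bytes:
    """Copy a bitstream from non-volatile storage into (volatile) config memory."""
    # A real bitstream starts with a sync word plus framing; here we assume
    # just a 4-byte magic followed directly by raw configuration frames.
    MAGIC = b"\xAA\x99\x55\x66"  # illustrative; happens to be the Xilinx sync word
    if not flash.startswith(MAGIC):
        raise ValueError("not a valid bitstream")
    config_memory = bytearray(flash[len(MAGIC):])  # the FPGA's SRAM config cells
    return bytes(config_memory)

# "Altering the design" is just altering the bytes handed to this loader:
bitstream = b"\xAA\x99\x55\x66" + bytes(16)
loaded = load_configuration(bitstream)
```

The point of the sketch is that nothing is physically fused: whoever controls the bytes in flash, or the loader itself, controls the resulting hardware.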

The problem boils down to problems we're already aware of: authenticating the configuration data (which is akin to authenticating the OS running on a general-purpose CPU), ensuring that the FPGA's configuration matches what was actually programmed into the non-volatile memory, and so on.
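The authentication part can be sketched with a toy MAC-then-load scheme. This is purely illustrative: real FPGAs use vendor-specific mechanisms (typically RSA/ECDSA signatures or AES-GCM with keys in eFuses or battery-backed RAM), not a bare HMAC with a key sitting in software:

```python
import hashlib
import hmac

KEY = b"device-unique-secret"  # illustrative; real keys live in eFuses/BBRAM

def seal(bitstream: bytes) -> bytes:
    """Append an authentication tag at programming time."""
    return bitstream + hmac.new(KEY, bitstream, hashlib.sha256).digest()

def verify_and_load(blob: bytes) -> bytes:
    """Refuse to configure the device unless the tag verifies."""
    bitstream, tag = blob[:-32], blob[-32:]
    expected = hmac.new(KEY, bitstream, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bitstream authentication failed")
    return bitstream

recovered = verify_and_load(seal(b"\x00\x01\x02"))  # round-trips cleanly
```

Which is exactly the shape of the secure-boot problem on a CPU: the scheme is only as trustworthy as the (closed) code doing the verification.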

The configuration data loader is, to the best of my knowledge, a pretty trivial piece of code at the moment, with the exception of high-end devices for sensitive applications (which do include things like encryption, so that the bitstream cannot be retrieved in a useful form). But real-world requirements will soon provide a good excuse for inflating it to a level of complexity where backdoors can be hidden.

It's also important to realize that much of the hardware that ends up on an FPGA isn't really arbitrary: it comes in the form of vendor-supplied IPs that are probably pretty easy to recognize. Implementing your own cryptography hardware is as bad an idea as writing your own cryptography code. I don't think it would be too hard to backdoor a loader so that, under specific circumstances, it alters the bitstream to weaken its crypto modules.
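A sketch of that attack, assuming the premise above holds (that a vendor IP core leaves a recognizable fingerprint in the bitstream): the malicious loader pattern-matches the core and patches it before configuring the device. Both byte patterns here are invented placeholders, not real IP signatures:

```python
# Hypothetical fingerprint of a strong RNG core, and a weakened replacement.
# In reality a fingerprint would be a much longer, position-dependent pattern.
STRONG_RNG_PATTERN = b"\xDE\xAD\xBE\xEF"
WEAKENED_RNG_PATCH = b"\x00\x00\x00\x00"

def backdoored_load(bitstream: bytes) -> bytes:
    """Looks like a pass-through loader, but silently weakens the crypto IP."""
    return bitstream.replace(STRONG_RNG_PATTERN, WEAKENED_RNG_PATCH)

patched = backdoored_load(b"\x11\xDE\xAD\xBE\xEF\x22")  # core silently replaced
```

Because the substitution only triggers when the recognizable pattern is present, every other design passes through untouched, which is what makes this kind of backdoor hard to spot.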
