Homomorphic encryption allows a provider to run computations over encrypted data. Thus we do not necessarily have to expose sensitive data to a provider in order to manipulate this data and derive insights from it.
The tradeoff comes in the form of increased computation time and increased complexity in the storage, retrieval, and manipulation of the data - the provider has some general sense of what type of data is stored where, but cannot simply decide what to do by looking at plaintext values.
A few companies/research groups are working on bridging the gap between the extremes (unencrypted/fully homomorphic encryption) in order to come up with a good compromise between security and cost.
> Both the CryptDB and Monomi publications cited approximately 20% performance impact and neither offers full, standard SQL functionality.
With stricter regulatory standards on the near horizon - healthcare and insurance in particular - research in this area holds a lot of potential, especially if these systems can get the overhead closer to 0% and find a way to safely tag data for manipulation without exposing it to re-identification.
> A few companies/research groups are working on bridging
> the gap between the extremes (unencrypted/fully
> homomorphic encryption) in order to come up with a good
> compromise between security and cost.
Fully vs. partially homomorphic encryption is a trade-off between supported operations and cost. Fully homomorphic systems allow arbitrary computations, while partially homomorphic systems support only certain operations (e.g. addition or multiplication, but not both) - hence the lack of support for standard SQL.
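To make the partially homomorphic case concrete, here is a toy sketch of the Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. The tiny primes are for illustration only, not remotely secure parameters:

```python
import math
import random

def keygen(p, q):
    # Toy Paillier key generation (demo-sized primes, NOT secure)
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                 # standard simplification for the generator
    mu = pow(lam, -1, n)      # with g = n+1, L(g^lam mod n^2) = lam mod n
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    L = (pow(c, lam, n * n) - 1) // n   # the L(x) = (x-1)/n function
    return (L * mu) % n

pub, priv = keygen(1009, 1013)
c1, c2 = encrypt(pub, 42), encrypt(pub, 58)
# Additive homomorphism: multiplying ciphertexts adds the plaintexts
assert decrypt(pub, priv, (c1 * c2) % (pub[0] ** 2)) == 100
```

Note that you can add encrypted values without ever decrypting them, but there is no corresponding way to multiply two encrypted values - exactly the kind of restriction that rules out arbitrary SQL.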
Happy to answer any questions or chat about cool uses of PHE!
Craig Gentry's FHE schemes (and all current ones that I know of) are based on the idea of bootstrapping: each operation performed introduces some additional noise, and every so often one must work to limit the noise and re-extract the signal. As more operations are performed, more and more of the effort goes into managing the noise rather than computing on the signal.
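As a rough illustration of why that noise management is unavoidable, here is a toy Python model. The constants and the multiplicative noise-growth rule are assumptions purely for illustration; real schemes track noise far more carefully:

```python
# Toy model of noise growth in an FHE ciphertext (illustrative only)
NOISE_LIMIT = 2 ** 20   # assumed bound beyond which decryption fails
FRESH_NOISE = 2 ** 4    # noise in a freshly encrypted ciphertext

def multiply(noise_a, noise_b):
    # Multiplying ciphertexts roughly multiplies their noise terms
    return noise_a * noise_b

def bootstrap():
    # Bootstrapping homomorphically re-encrypts the ciphertext,
    # resetting noise to a small (though not fresh) level
    return FRESH_NOISE * 4

noise, ops = FRESH_NOISE, 0
while multiply(noise, FRESH_NOISE) < NOISE_LIMIT:
    noise = multiply(noise, FRESH_NOISE)
    ops += 1
# Only a handful of multiplications fit before the budget runs out;
# after a bootstrap, computation can continue
noise = bootstrap()
print(ops)
```

In this toy model only a few multiplications are possible before the noise budget is exhausted, which is why every known FHE scheme has to pay the bootstrapping cost periodically.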
But let's say that I had an FHE scheme in which I could do a limited (and relatively small, on the order of 100) number of operations, but no more. I suspect this has a name, but I'm not familiar with it. If I had to name it, I would give it the confusing name Limited FHE, or perhaps Finite FHE.
Then my question is: what sorts of things could we do with Limited FHE? Is this a question that people have thought about?
Enigma uses secure addition and multiplication protocols to construct a fully secure interpreter. A user throws a problem to a bunch of decentralized nodes running this secure interpreter that are incentivized by fees to perform these computations and then post the results to a public blockchain, where the validity of the computations can be verified by an auditor.
It also includes (unlike Bitcoin) a distribution protocol so that the "miners" aren't all solving the same problem in a redundant fashion, although I don't think the "network reduction" protocol is specified in the whitepaper.
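For intuition, the secure-addition building block that Enigma's interpreter relies on can be sketched with plain additive secret sharing: each input is split into random shares that sum to the secret, the nodes add their shares locally, and only the reconstructed result reveals anything. The modulus and share count below are arbitrary choices for the sketch:

```python
import random

P = 2 ** 61 - 1  # a prime modulus (illustrative choice)

def share(secret, n):
    # Additive secret sharing: n-1 random shares, plus one share
    # chosen so that all n shares sum to the secret mod P
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Two users split their inputs among 3 nodes; each node adds the
# shares it holds, locally, without learning either input
a_shares, b_shares = share(25, 3), share(17, 3)
sum_shares = [(a + b) % P for a, b in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == 42  # no single node ever saw 25 or 17
```

Secure multiplication requires an interactive protocol between the nodes rather than purely local work, which is why the whitepaper treats addition and multiplication gates differently.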
This is probably the most relevant part to your question:
> Code evaluated in our system is guaranteed not to leak any
> information unless a dishonest majority colludes (t ≥ n/2).
> This is true for the inputs, as well as any interim variables
> computed while the code is evaluated. An observant reader would
> notice that as a function is evaluated from inputs to outputs,
> the interim results generally become less descriptive and more
> aggregative.
> For simple functions or functions involving very few inputs,
> this may not hold true, but since these functions are fast to
> compute - no additional steps are needed.
> However, for computationally expensive functions, involving many
> lines of code and a large number of inputs, we can dynamically
> reduce the number of computing nodes as we progress, instead of
> having a fixed n for the entire function evaluation process.
> Specifically, we design a feed-forward network (Figure 5) that
> propagates results from inputs to outputs. The original code is
> reorganized so that we process addition gates on the inputs
> first, followed by processing multiplication gates. The interim
> results are then secret-shared with N/c nodes, and the process
> is repeated recursively.
Now imagine doing RSA with a 100k-200k bit key. That's the amount of slowdown FHE incurs.
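One back-of-envelope way to feel that scale, assuming schoolbook modular exponentiation whose cost grows roughly cubically in the bit length of the modulus (the key sizes are just the figures from the analogy above):

```python
# Rough cost ratio of 150,000-bit vs. 2,048-bit RSA, assuming
# modular exponentiation scales ~O(k^3) in the bit length k.
# Purely illustrative; real big-integer libraries do better.
ratio = (150_000 / 2_048) ** 3
print(f"~{ratio:,.0f}x")  # several hundred thousand times slower
```

Even this crude estimate lands in the hundreds of thousands, which is consistent with FHE being far too slow for general-purpose workloads today.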