

A working implementation of fully homomorphic encryption [pdf] - hadronzoo
http://eurocrypt2010rump.cr.yp.to/9854ad3cab48983f7c2c5a2258e27717.pdf
Bootstrapping technique: http://eprint.iacr.org/2010/145.pdf
======
trotsky
What is Homomorphic Encryption, and Why Should I Care?

<http://blogs.teamb.com/craigstuntz/2010/03/18/38566/>

~~~
Groxx
I was just going to link to this, but ya beat me to it!

It's an interesting pair of articles, but the series seems to have stopped
rather abruptly, and has been quiet for quite a while now...

------
hadronzoo
Bootstrapping technique: <http://eprint.iacr.org/2010/145.pdf>

------
davidj
If I understand this right, it would mean you could now do all the
calculations on your data in a cloud-based solution without disclosing the
data or the results, a huge breakthrough for securing sensitive data in the
cloud. Another example: you could run a Google search and Google would return
the results without knowing what you searched for or what it returned. This
kind of reminds me of the Tor network. On the other side, it would make it
impossible to know whether your cloud system was infected with spyware,
because you could never prove what it was doing. But on the other hand, that
could actually be a solution to a larger problem: it could help against
pirated software by limiting crackers' ability to decipher the
registration-code algorithm of a software product.

------
djcapelis
Does anyone know anything about the availability of this implementation?
(Before anyone gets excited: For poking at, not production use, being able to
use the thing would help me understand their paper better...)

------
zitterbewegung
Bruce Schneier on homomorphic encryption

<http://www.schneier.com/blog/archives/2009/07/homomorphic_enc.html>

Side note from me: if this is shown to be actually secure, then this would be
very interesting. Note that he states an efficient algorithm could take 40
years.

------
sweis
This is a related paper on eprint:
<http://news.ycombinator.com/item?id=1784602>

------
partition
I have a question about homomorphic encryption and information leakage.

Do I have this right: A homomorphic encryption is a function

        e :: P -> C

between groups (P, -) and (C, +), where P is the set of plaintexts and C is
the set of gibberish, and -, + are the respective group operations in either
space, such that e is a homomorphism in the group-theoretic sense?

Because if so, the following basic properties must hold:

        e(id_P) === id_C
        e(inv(p)) === inv(e(p))

Suppose you are a malicious cloud services provider and you are manipulating
encrypted data. Couldn't you use these group properties to make much more
informed observations about the encrypted data without ever having to decrypt
it?
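As a concrete aside (my own toy check, not from the thread): textbook unpadded RSA is multiplicatively homomorphic, so it acts as a group homomorphism on (Z_n*, ·), and the two properties above can be verified directly on a tiny instance. The parameters below are illustrative only, far too small to be secure.

```python
# Toy textbook-RSA instance: a group homomorphism (Z_n*, *) -> (Z_n*, *).
# Parameters are purely illustrative, not secure.
n, e_pub = 61 * 53, 17                # n = 3233

def e(m):                             # deterministic "encryption"
    return pow(m, e_pub, n)

def inv(x):                           # group inverse in Z_n* (Python 3.8+)
    return pow(x, -1, n)

assert e(1) == 1                      # e(id_P) == id_C
assert e(inv(42)) == inv(e(42))       # e(inv(p)) == inv(e(p))
assert e(6 * 7 % n) == (e(6) * e(7)) % n   # the homomorphism property itself
```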

A very simple example: as a malicious cloud services provider, if you see a
bunch of gibberish strings x, y1, y2 ... yn, and it turns out that x + y1 ==
y1, x + y2 == y2, ... x + yn == yn, wouldn't you at least have a sneaking
suspicion that x is the identity element? And in a situation where you could
make very damaging (to the other party) decisions based on this information,
that would seem to defeat the purpose of homomorphic encryption.
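The probe just described can be run in a few lines against any deterministic homomorphic scheme. A toy demonstration (mine, not from the thread) using textbook unpadded RSA, which is deterministically multiplicatively homomorphic:

```python
# The provider's identity probe against a deterministic multiplicatively
# homomorphic scheme (textbook RSA, toy parameters, illustration only).
n, e_pub = 61 * 53, 17                 # n = 3233

def enc(m):
    return pow(m, e_pub, n)

# Ciphertexts the provider sees; one of them encrypts the
# multiplicative identity 1.
cts = [enc(m) for m in (1, 42, 99, 1000)]

# x behaves like e(identity) if x * y == y (mod n) for every ciphertext y.
suspects = [x for x in cts if all((x * y) % n == y for y in cts)]
# suspects == [1] == [enc(1)]: the identity leaks without any decryption
```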

Moreover, recall that the cloud services provider is technically also allowed
to run any additional calculations using the group operation on the data you
give it, not just the ones you requested. It seems that, in general, even
with very unstructured data, you could use the properties of homomorphisms to
de-anonymize data very easily.

How is this issue addressed?
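One standard answer (my note, not from the thread): schemes designed to be semantically secure randomize encryption, so two encryptions of the same plaintext look unrelated, and the "x + y_i == y_i" probe has no stable ciphertext to latch onto. A minimal sketch using the Paillier cryptosystem, which is additively homomorphic and randomized; the parameters are toy-sized, purely for illustration:

```python
import math
import random

# Toy Paillier parameters -- far too small to be secure, illustration only.
p, q = 17, 19
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)         # mod. inverse (3.8+)

def enc(m):
    # Fresh randomness r on every call: equal plaintexts yield
    # (almost always) unequal ciphertexts.
    while True:
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

c1, c2 = enc(0), enc(0)       # two encryptions of the additive identity
assert dec(c1) == dec(c2) == 0
# c1 and c2 look unrelated, so the identity probe fails -- yet the
# homomorphism still works: adding an encrypted 0 is invisible.
assert dec(c1 * enc(5) % n2) == 5
```

Note that this only hides which ciphertext is the identity; it does not by itself stop a provider from running extra computations on your data, which is the separate concern raised above.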

~~~
Groxx
From what I remember of an ACM article on it from a couple of months back,
you can also run the operations themselves after they have been encrypted,
which prevents this to some degree. I don't remember if there's a way to
_require_ encrypted operations, though.

In doing this, you could still brute-force operations and watch for
statistically significant results... but you'd have to guess at what valid
operations _are_, which amounts to the same problem as cracking the
encryption in the first place. Though it seems to me it would be an attack
vector that weakens the encryption as a whole. To a significant degree?
Don't know / doubt it.

Just don't ask me to explain the math of it. I don't know it, though I plan on
studying enough to grasp it. Should take a few years.

edit: meh to the above.

Now that I've thought about it a bit longer, I do remember a decent chunk of
the article dealing with information leakage, and how algorithms would need to
be designed to touch as much information as possible (ideally, everything) to
prevent leakage. So, it seems a map/reduce setup which combines all datapoints
(removing unimportant ones further down the chain, thus hiding their
influence) would be valuable.

~~~
partition
OK, I should check out the ACM article.

It definitely seems like one needs to pare down the interactions with the
provider to some bare minimum. Limiting it to single, uncorrelated map-reduce
(or even just reduce) steps seems like it would remove a lot of potential for
abuse.

But who knows---if service providers tend to take in and operate on more data
than clients (especially from multiple clients), it seems there is a
fundamental information imbalance, and obfuscating techniques by clients can't
possibly do as well as deanonymizing techniques by the provider in the long
run.

I wonder whether fully homomorphic encryption might _increase_ the potential
for such information leakage relative to a partially homomorphic encryption:
the more algebraic structures for which an encryption function is a
homomorphism, the more vulnerable it is. If a fully homomorphic encryption
captures the + and * operations of a ring, you can exploit two identity
elements, one for + and one for *. The next step in 'badness' would then be
something like R-modules, where in addition to the ring you have another
group (and another operation and identity element).
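To make the two-identities point concrete, here is a deliberately insecure stand-in "scheme" (my toy, not a real cryptosystem): the CRT ring isomorphism Z_15 -> Z_3 x Z_5 preserves both + and *, so a provider gets two identity probes instead of one:

```python
# Insecure toy "encryption": the CRT ring isomorphism Z_15 -> Z_3 x Z_5.
# Because it preserves both ring operations, BOTH identities leak.
def enc(m):
    return (m % 3, m % 5)

def add(c1, c2):
    return ((c1[0] + c2[0]) % 3, (c1[1] + c2[1]) % 5)

def mul(c1, c2):
    return ((c1[0] * c2[0]) % 3, (c1[1] * c2[1]) % 5)

cts = [enc(m) for m in (0, 1, 7, 11)]
add_ids = [x for x in cts if all(add(x, y) == y for y in cts)]
mul_ids = [x for x in cts if all(mul(x, y) == y for y in cts)]
# add_ids == [enc(0)] and mul_ids == [enc(1)]: two leaks, not one
```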

