A Guide to Fully Homomorphic Encryption (iacr.org)
66 points by kushti on Dec 17, 2015 | 27 comments



The last time I tried to play with encrypted circuit code it was extremely, extremely, extremely slow. To put this into context: just adding and subtracting 64-bit numbers on a garbled circuit was considered quite a complex operation, taking several seconds to several minutes to evaluate.

Needless to say, I was very disappointed. I had originally started looking into FHE, functional encryption, and homomorphic encryption as a novel solution to several trust problems in the Bitcoin world (as you could potentially write fully self-contained, encrypted ECDSA signing and verification programs that could operate as fully decentralized autonomous entities). So when researchers say that FHE is holy-grail-type stuff, they really aren't joking.

My question is: how long until any of this stuff will work for large algorithms?


FHE will probably take a while to become practical. Remember, the first scheme was published only in 2009, which is very recent by cryptography standards.

Secure multi-party computation can achieve similar objectives (computing on encrypted data sets), but is much more mature research-wise (the first schemes date back to the 70s) and can be practical for large algorithms today. See http://enigma.media.mit.edu/ (disclosure: I'm one of the founders).


Garbled circuits are a different cryptographic primitive to FHE. Modern GC implementations are super quick and pretty practical. FHE is another story, though, as you state.


The efficiency of application to long algorithms and large datasets is a function of available computing power. Certain problems in coding theory can be reduced to homomorphic calculation, but it takes petabytes of data to represent the solution space.

I would recommend Moore's Law as an approximate formula for guessing what year you will be able to run these programs at home on your personal computer. A conservative estimate these days is that the number of transistors on a chip doubles every two years.
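
As a back-of-the-envelope sketch of that formula (assuming the two-year doubling period above; the years_until helper is just something I made up for illustration):

  import math

  # Years until available hardware is roughly `speedup` times today's,
  # assuming transistor counts double every `doubling_period_years`.
  def years_until(speedup, doubling_period_years=2):
      return doubling_period_years * math.log2(speedup)

  print(years_until(1000))   # ~20 years for a 1000x increase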


That was true in an age when single-threaded performance also saw similar improvements, but those days are gone. The power wall and the end of transistor scaling have eaten into those factors.

Today, the increases in compute capability that enable new technology require new architectures matched to the problem, similar to how GPUs enabled deep learning.


Can you elaborate on those coding-theory problems reducible to homomorphic calculations? I'm very interested. I thought I had heard the converse: homomorphic calculations built using codes (my rough understanding of Learning With Errors).


How well do you understand the term "homomorphism"?

Search engines are the most common example, in general.


Just to be clear, you're referring to coding theory (https://en.wikipedia.org/wiki/Coding_theory) in the sense of geometric codes from information theory, right? I fail to see an obvious way search engines fit in.


This may be a silly question, but what's the current point of creating more efficient methods of encryption when 20-30 years down the road we'll have things like quantum computing with 10^N processing?

Wouldn't that then essentially make modern encryption methods obsolete?

I'd like to hear a more educated viewpoint on this, because most of the sources I've read gloss over everything and make it seem like magic and this seems like a good thread to ask.

Edit: Thanks for the responses, I think I get it a bit better now :D


To address just one of your points, quantum computation is not a silver bullet. It does not work the way one might think it works. Not many researchers are saying that they will replace classical computers, and for good reason. For many, or even most, computational tasks there is no way to get a large quantum speedup. (And classical computers have had a lot more R&D put into them.) Although easy integer factoring will break quite a few popular crypto schemes, like RSA, the risks of quantum computers are heavily overplayed. For example, I don't believe there is any known quantum attack against AES, which is a good start in a post-quantum world. (I can't think of an asymmetric encryption scheme that I'm sure is resistant to quantum attacks off the top of my head, since I'm no expert, but I'm positive some do exist.)


The (provably) best "general purpose" quantum algorithm is Grover's algorithm. The problem it solves is as follows:

Given an arbitrary function f(x), and a desired output k, find the unique input z such that f(z)=k.

We can see that using classical computers, this problem is O(n), where n is the size of the domain of f. However, Grover's algorithm can solve this problem in O(sqrt(n)). It has been shown that O(sqrt(n)) is the best possible solution to this problem on a quantum computer.

This means that quantum computers effectively halve the key-size for a brute force attack (and probably most other types), but doing any better than this would require exploiting some structure of the cryptosystem you are trying to break. To my knowledge, no such structure has been demonstrated for any major symmetric encryption algorithms.
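
To make that halving concrete, here is a minimal sketch of the query counts involved (plain arithmetic, no quantum machinery):

  # Rough brute-force cost for a k-bit key:
  #   classical exhaustive search: ~2^k cipher evaluations
  #   Grover's algorithm:          ~2^(k/2) quantum evaluations
  for k in (128, 192, 256):
      print(f"{k}-bit key: classical ~2^{k}, Grover ~2^{k // 2}")

Which is why the usual post-quantum advice for symmetric ciphers is simply to double the key size.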


That's really interesting! I have only a passing education in quantum computers, which is why my comment is so devoid of detail... I really should learn more.


Wait, it has been proved that QP != EXP?

That's great! Do you have any pointers?

EDIT: Wikipedia has pointers. It's not exactly what I thought at first. The paper: http://www.cs.berkeley.edu/~vazirani/pubs/bbbv.ps


AES (and other symmetric ciphers) are vulnerable to Grover's algorithm (https://en.wikipedia.org/wiki/Grover%27s_algorithm), which effectively cuts key sizes in half. AES-128 would be reduced to 64-bit security. This isn't a big problem in practice, since we can just switch to 256-bit ciphers like AES-256 and ChaCha20.

Public-key schemes based on factoring and discrete logarithms are undone by Shor's algorithm (https://en.wikipedia.org/wiki/Shor%27s_algorithm), but there are asymmetric systems not known to be vulnerable to quantum algorithms. They are less mature, but researchers are working on it.

There's some good high-level information at http://pqcrypto.org/ and in this paper: http://pqcrypto.eu/docs/initial-recommendations.pdf.


Another way to explain this is that quantum computing would be the perfect answer to some extremely-naive kinds of cryptography, reducing the decryption time-complexity quite harshly (though not all the way to O(N) as one might expect)—but modern, practical cryptography does things fairly differently than spherical-cow cryptography.

For example, if cryptography basically consisted of taking a set of plaintext "blocks" and a key, permuting the key separately and deterministically for each block in O(1) time, expanding the key into a one-time pad again in O(1) time, and XORing each block with its respective pad—then you could use a quantum computer to quickly search the keyspace for a key that decrypts the blocks into something sensible according to some heuristic. You can make a quantum algorithm that "searches" a static keyspace, given that you can map a particular ciphertext block to particular plaintext block in O(1) time—this mapping effectively becomes a "projection" of the keyspace, and the quantum computer searches that.

The problem with this approach is that, in reality, subkeys in a "stream" aren't generated independently per-block (as happens in the much-derided "ECB mode" of block ciphers), but rather are generated serially by feeding in the previous subkey in a chained operation; and that each cipherblock is then created by "expanding" the subkey through a CSPRNG, which itself uses iterated hashing.

You can't make a quantum algorithm that searches a keyspace, given an encryption algorithm that is defined recursively or "statefully" on its input stream. There's no such thing as a "quantum for-loop" that magically makes an O(N)-step process into a single linear projection of the keyspace—and without this, there's nothing to usefully search through.
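
For what it's worth, a minimal sketch of the "spherical-cow" scheme described above might look like this (SHA-256 is standing in for the O(1) per-block key permutation and pad expansion; toy code, not a real cipher):

  import hashlib

  def naive_block_encrypt(key: bytes, blocks: list) -> list:
      # Each block's pad is derived only from (key, block index),
      # independently of every other block, then XORed in.
      out = []
      for i, block in enumerate(blocks):
          pad = hashlib.sha256(key + i.to_bytes(4, "big")).digest()[:len(block)]
          out.append(bytes(p ^ q for p, q in zip(block, pad)))
      return out

Because every block depends only on the key and its own index, a single known plaintext/ciphertext pair gives a static test function over the keyspace, which is the kind of mapping the parent says a quantum search could exploit.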


Yeah, we have some public key schemes that are conjectured to be resistant to quantum attacks. This one in particular is fun because of the connection to machine learning. (https://en.wikipedia.org/wiki/Learning_with_errors)

Basically, it builds a cryptosystem on the result that it should be 'hard' to learn a linear function when the samples carry a sufficient noise rate.
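
As a rough illustration, LWE samples are noisy linear equations in a secret vector (toy parameters chosen for readability, far too small to be secure):

  import random

  q, n = 97, 8
  s = [random.randrange(q) for _ in range(n)]        # secret vector in Z_q^n

  def lwe_sample():
      a = [random.randrange(q) for _ in range(n)]    # public random vector
      e = random.choice([-2, -1, 0, 1, 2])           # small error term
      b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
      return a, b

  samples = [lwe_sample() for _ in range(20)]        # recovering s from these is the hard problem

Without the error term e, Gaussian elimination would recover s easily; with it, no efficient algorithm, classical or quantum, is known.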


Also note that most homomorphic encryption schemes proposed so far are based on LWE and are therefore also conjectured to be resistant to quantum attacks.


Also, homomorphic encryption isn't about making encryption more efficient; it's about being able to compute any function over the encrypted dataset and get the same result as if it were computed over the unencrypted version. This is pretty cool if it works, because you could, say, send over an encrypted version of a program to let someone else test it without giving away any details of how it works.

from the paper:

  The purpose of homomorphic encryption is to allow computation on encrypted
  data. Thus data can remain confidential while it is processed, enabling useful
  tasks to be accomplished with data residing in untrusted environments. In a
  world of distributed computation and heterogeneous networking this is a hugely
  valuable capability.
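
As a toy illustration of the "compute on ciphertexts" idea, textbook unpadded RSA is homomorphic for a single operation (multiplication); FHE schemes extend this to arbitrary circuits. Deliberately tiny, insecure demo key:

  n, e = 3233, 17          # demo modulus (61 * 53) and public exponent
  d = 2753                 # matching private exponent

  enc = lambda m: pow(m, e, n)
  dec = lambda c: pow(c, d, n)

  m1, m2 = 7, 6
  c = (enc(m1) * enc(m2)) % n        # multiply the ciphertexts only
  assert dec(c) == (m1 * m2) % n     # decrypts to the product of the plaintexts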


For one, quantum computing may not happen? Like, we're pretty sure it will work, but we aren't actually sure we can make it happen. The closest we've gotten is D-Wave, and it's still controversial.


The best way to articulate that criticism is to quote me (or at least paraphrase Daniel Lidar) and say:

"We have quantum information processing, but no computers as of yet."

And what we already have is so darned expensive that it is less useful than classical computation unless you are building some exotic kind of sensor.


NTRU is another quantum-secure (i.e. thought to be quantum-secure) cryptosystem. It can do most things we demand from public-key cryptosystems. More generally, there is no known quantum attack that significantly breaks lattice-based cryptosystems.


While true, LLL basis reduction is quite effective against it. To counter that, your key must be much larger, and that in turn multiplies the length of the ciphertext, IIRC (or something gets bigger...).


I want to qualify that and say there is no known quantum device that can successfully break lattice-based cryptosystems.

Even if there was such a machine, the number of people who could write code for it is less than or equal to ten.


My simple answer is: encryption works by creating a function that is vastly cheaper to compute in one direction (encryption) than the other (cracking).

For any available computing power, if you apply x seconds of computation to encrypt with a complex key, it will take some enormous multiple of x years to crack it.

As long as that multiple holds (and cryptology keeps working to improve it), simply applying more computing power will never make modern encryption methods obsolete.


It is worth noting that modern cryptography generally considers a system secure only if it takes exponentially longer to crack than it does to encrypt. That is to say, the cryptosystem defines some security parameter y, and legitimate operations can be done in O(y) time, while attacks require O(b^y) time for some b > 1.

Even quantum computers require superlinear time to run Shor's algorithm.
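
A small sketch of that gap, with b = 2 and an attacker speed assumed purely for illustration:

  OPS_PER_SECOND = 1e12              # assumed attacker throughput
  SECONDS_PER_YEAR = 3.15e7

  for y in (64, 128, 256):
      years = 2.0 ** y / OPS_PER_SECOND / SECONDS_PER_YEAR
      print(f"y = {y}: legitimate work ~{y} steps, brute force ~{years:.2e} years")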


homeopathic encryption!

oh.. :(


I thought it said homophobic encryption...

My devious mind!



