

Cryptography Breakthrough Could Make Software Unhackable - esalazar
http://www.wired.com/wiredscience/2014/02/cryptography-breakthrough/

======
tptacek
Previous discussion:
[https://news.ycombinator.com/item?id=7153657](https://news.ycombinator.com/item?id=7153657)

~~~
acqq
And the most interesting comment for me is from pbsd:

[https://news.ycombinator.com/item?id=7155371](https://news.ycombinator.com/item?id=7155371)

"It's worth reiterating that this is very much not practical by any standard,
and doesn't look anything like the usual software obfuscation used today, even
white-box. This thing requires at the very least that multilinear maps and
fully homomorphic encryption are anywhere near practical, which is not the
case today (and may end up being like quantum computing, always 10 years
away)."

~~~
tptacek
If you're interested in crypto, you should probably set a bookmark on 'pbsd
and read all his comments. Especially if you find me annoying, because I have
a habit of setting him up for great comments by being wrong about things.

~~~
pbsd
I'm not sure I can handle that kind of pressure :)

------
dvanduzer
The papers are over my head, but they both at least mention homomorphic
encryption. As far as I can tell, program obfuscation is a related, but
distinct mathematical problem.

Has anyone read and understood the impossibility proof[0] from 2001 they
linked? My lay understanding is that we've got addition tackled for
homomorphic encryption, but multiplication eludes us so far (i.e. algebraic
groups but not fields).

If program obfuscation is related to homomorphic encryption, does the
impossibility proof of "black box obfuscators" say anything about that
multiplication problem?

[0]
[http://link.springer.com/chapter/10.1007%2F3-540-44647-8_1](http://link.springer.com/chapter/10.1007%2F3-540-44647-8_1)

edit: thanks tptacek, the previous discussion suggests that this would be a
more ambitious result than "mere" homomorphic encryption.

~~~
pbsd
It's obfuscation that is dependent on fully homomorphic encryption (FHE), not
the other way around. Therefore, Barak's impossibility proof is irrelevant to
FHE.

With respect to FHE, before 2009 we already had some tools. We had RSA, which
is multiplicatively homomorphic (this property is the reason behind the need
for good padding!). We also had Paillier, which is additively homomorphic. We
even had Boneh-Goh-Nissim, where you could do any number of additions plus
_one_ multiplication! In 2009 Gentry completed the picture with a partially
homomorphic system, based on lattices, where you could do a number of
additions _and_ multiplications before losing "precision". Perhaps more
importantly, he also gave us the bootstrapping notion, which allows you to
achieve a fully homomorphic encryption scheme from a partially homomorphic
one, provided you have enough "precision".
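The multiplicative homomorphism of RSA mentioned above is easy to see with textbook (unpadded) RSA: multiplying two ciphertexts yields a valid encryption of the product of the plaintexts. A minimal sketch with toy parameters (real RSA uses enormous primes and padding precisely to kill this malleability):

```python
# Textbook RSA with tiny illustrative primes -- insecure, for demonstration only.
p, q = 61, 53
n = p * q                  # modulus
e = 17                     # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)        # private exponent (modular inverse, Python 3.8+)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

m1, m2 = 7, 9
# Multiplying ciphertexts multiplies plaintexts: E(m1) * E(m2) mod n = E(m1 * m2)
c_product = (enc(m1) * enc(m2)) % n
assert dec(c_product) == (m1 * m2) % n   # 63
```

This is exactly the property that makes unpadded RSA malleable, hence the remark that it is "the reason behind the need for good padding". Paillier gives the analogous additive property.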

Now, what the article talks about is not strictly covered by Barak's theorem,
since it's a different notion (also defined in Barak's paper):
indistinguishability obfuscation. Barak proved that black-box obfuscation is
impossible in the general case (i.e., for every possible program). Black-box
obfuscation is the usual intuitive notion we have of obfuscation: given an
obfuscated program, you can't obtain any more information from it than if you
were just querying it as a black-box.

Indistinguishability obfuscation is more subtle: given two different programs
_that compute the same function_, you can't tell whether an obfuscated
program is the obfuscation of the first or the second one. This at first
sounds pretty useless as an obfuscation scheme, since it gives us few
guarantees, but the authors took advantage of this notion to get some useful
functionality out of it (namely functional encryption, and later other
things).
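The premise of indistinguishability obfuscation is easy to illustrate: two syntactically different programs that compute the same function. A toy sketch (the function names are my own, purely for illustration):

```python
# Two different programs computing the same function (sum of 0..k-1).
# An iO scheme guarantees an attacker, given an obfuscation of one of
# them, cannot tell which of the two was obfuscated.

def prog_a(k):
    # naive loop
    total = 0
    for i in range(k):
        total += i
    return total

def prog_b(k):
    # closed form: same function, very different code
    return k * (k - 1) // 2

assert all(prog_a(k) == prog_b(k) for k in range(100))
```

The guarantee says nothing about hiding *what* the program computes (both versions compute it openly here), which is why it at first looks useless as obfuscation.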

Even more impressive is the fact that they now have managed to prove that a
variant of the original scheme is actually black-box secure [1]. How is this
compatible with Barak's result, you ask? The black-box proof of [1] is in the
multilinear model, which means it is under the assumption that the attacker
can only do a limited set of (algebraic) operations. A poor analogy would be
to show that some code obfuscation scheme is secure, but only as long as the
attacker is limited to using static analysis.

[1] [https://eprint.iacr.org/2013/631](https://eprint.iacr.org/2013/631)

------
lvh
Okay, let's calm down. While the result is great, this article does a pretty
bad job describing it. "Could" is pretty darn important in the title. We're
nowhere near applications here. For those not willing to read the papers, the
original Quanta article[1] is better.

I'm flagging this because it's so horribly editorialized.

[1]
[https://www.simonsfoundation.org/quanta/20140130-perfecting-the-art-of-sensible-nonsense/](https://www.simonsfoundation.org/quanta/20140130-perfecting-the-art-of-sensible-nonsense/)

------
tekproxy
I don't understand, at all. If two programs are semantically equivalent then
traditional deobfuscation techniques like symbolic execution, constant
propagation + folding, dead code removal, etc. should take you pretty far.
Throw in some peephole and pattern based rules specifically targeting this
obfuscator and you're good to go.
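Constant propagation and folding of the kind mentioned above can be sketched in a few lines over Python's own AST (a toy pass, not any real deobfuscator's API):

```python
# Toy constant-folding pass: reduce arithmetic on literals, one classical
# deobfuscation/compiler-optimization step among those listed above.
import ast

class Folder(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold children first (bottom-up)
        if isinstance(node.left, ast.Constant) and isinstance(node.right, ast.Constant):
            value = eval(compile(ast.Expression(node), "<fold>", "eval"))
            return ast.copy_location(ast.Constant(value), node)
        return node

def fold(src):
    """Constant-fold a Python expression and return the simplified source."""
    tree = ast.parse(src, mode="eval")
    tree = ast.fix_missing_locations(Folder().visit(tree))
    return ast.unparse(tree)  # Python 3.9+

print(fold("(1 + 2) * (10 - 3)"))   # -> 21
```

Such passes work well against syntactic obfuscation; the catch with this construction is that the "obfuscation" is cryptographic (evaluating an encrypted circuit), not a pile of rewritable junk code, so there is no cleartext semantics for these techniques to recover.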

The main link is pretty shit so I may be missing something important. I plan
to read the other links provided when I have time, but feel free to help me
out by explaining how wrong I am, if that's the case.

------
WalterSear
This is great and all, but what we really need is cryptography to make the
titles on science reporting unlinkbaitable.

------
mox1
For those interested in a simple introduction to Zero Knowledge Proofs, check
out:
[http://pages.cs.wisc.edu/~mkowalcz/628.pdf](http://pages.cs.wisc.edu/~mkowalcz/628.pdf)

------
udev
This would be a major boon for video game console makers and game publishers.

~~~
herbig
It's much bigger than that. Read past the third paragraph, it's really
interesting.

------
csmithuk
A lot of assumptions and hype in the article.

Usually a bad sign.

------
iterable
Wow, this could be huge.

