
A New Design for Cryptography’s Black Box - 0cool
https://www.quantamagazine.org/20150902-indistinguishability-obfuscation-cryptographys-black-box/
======
aruss
We've come a long way since the first IO result came out. Since then, we've
gotten a couple more multilinear map candidates (though most are now broken),
and some simpler constructions, but we're still really far from IO with a
security proof. This is primarily because of the underlying multilinear map
being used. The Gentry et al. result that proves IO secure in the generic
multilinear model isn't that useful yet, simply because there have been so
many non-generic attacks against mmap candidates, especially when they're
used in IO. That is, at the moment there's no reason to believe that the
generic multilinear model is even a good way to think about IO security.

What would be a really big result is finding IO that doesn't rely on
multilinear maps.

------
0x0
Is this about data encryption, or is it about hiding the inner workings of an
executable binary (malware packing/copy protection)?

~~~
munin
what's the difference? ;)

~~~
derefr
To expand on that: what the article is basically talking about is "DRM that
actually works"—the ability to send someone some encrypted data embedded in a
wrapper program. You can run the wrapper program and interrogate it all you
like on its own terms—but beyond what the wrapper program's code paths choose
to reveal, there's no way to get decrypted data out.

If the data is dumb content like text, this amounts to regular DRM content
encryption, except that there's no decryption key to be found in the wrapper
program or anywhere else; the key is "baked into" the logic of the program in
a non-recoverable way. (This would allow for things like "true" TPM chips
that can store your keys beyond the reach of forensic recovery.)

If, on the other hand, the data is itself a program for which the wrapper
serves as an interpreter, this amounts to a mathematical basis for a real
"Trusted Computing Base", enabling all manner of things, like simple
distributed computation on untrusted hardware, or mathematically-strong anti-
cheating protection for an MMO game, or satisfying cell carriers' desires for
a protected "baseband processor" under their control without that needing to
be instantiated as a physical chip.

Effectively, creating a wrapper VM (the "bootstrap program" in the article's
terminology) would allow a processor to run a "binary" through the VM that is
literally opaque to it; code that, even in its operation _as instructions on
the CPU_, the CPU is incapable of comprehending or interfering with (beyond
simply terminating/interrupting the wrapper VM, or restricting its hardware
access). Not only would the interpreted program's code itself be opaque; its
working state (the contents of the wrapper program's memory, the processor's
registers, and whatever else) would be opaque too. The only place you could
see such a program's intent realized would be in the I/O it does—and that
might be just encrypted network traffic sent to peers.

Such a software process, if given a full CPU hypervisor slot rather than
having to make system calls to an OS, would be for the first time a "first-
class citizen" on a computer, functioning more like[1] a flashable FPGA
coprocessor connected to the CPU than a series of instructions that the CPU
can rewrite at its whim. The CPU could _ignore_ such a coprocessor—choose to not
interact with it or power it (not emulate it, in other words), or tell the
IOMMU to remove the coprocessor's access to peripherals, etc. But the CPU
couldn't reach inside the coprocessor to fiddle with it, even though it's a
virtual coprocessor residing entirely within "the mind of" the CPU. [The CPU
_could_ arbitrarily corrupt the memory the coprocessor was using for its
state—but with good encryption, that would just immediately crash the wrapper
VM with an assertion failure, rather than leaking any info.]

\---

[1] Note that this is _just_ an analogy from the CPU's perspective; we already
have flashable coprocessors, but that doesn't help us any, because while the
CPU can't poke into them, people can. Indistinguishability Obfuscation means
that _we're_ in the position the CPU is in; we can no more see into the VM or
its state than the CPU can reach over and take apart a coprocessor.
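The point above about corrupting the coprocessor's memory (it should crash the wrapper VM rather than leak anything) is essentially authenticated encryption of the VM's working state. Here's a toy encrypt-then-MAC sketch in Python, purely for illustration—the "cipher" is a homemade SHA-256 counter-mode keystream, not something to use in practice:

```python
import hmac
import hashlib
import os

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Toy keystream: SHA-256 in counter mode (illustration, not a real cipher)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def seal(enc_key: bytes, mac_key: bytes, state: bytes) -> bytes:
    """Encrypt-then-MAC the VM's working state."""
    nonce = os.urandom(16)
    ct = bytes(p ^ k for p, k in zip(state, keystream(enc_key, nonce, len(state))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def unseal(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    """Refuse to decrypt (i.e. 'crash') if the blob was tampered with."""
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise AssertionError("state corrupted")  # halt instead of leaking
    return bytes(c ^ k for c, k in zip(ct, keystream(enc_key, nonce, len(ct))))
```

Any bit flip in the sealed blob fails the MAC check before decryption is even attempted—which is exactly the "crash with an assertion failure, rather than leaking any info" behavior described above.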

~~~
munin
There are "good" and "bad" uses for this technology.

Bad uses: Netflix will put their video stuff into this and now you will never
jack content from their software.

Good uses: Your IM and email can live in this and no compromise of your host
operating system can leak your information. Your computer can be hacked by
every hacker on Earth simultaneously and your secrets are safe.

~~~
venomsnake
> Netflix will put their video stuff into this and now you will never jack
> content from their software.

Yep. It is good I cannot record what is on the screen. Or in the video buffer.

------
tshadwell
I remember pretty clearly reading a comment by an author of the 'unbreakable
obfuscation' paper in which he said the paper had been greatly
misrepresented: it proved a result in a specific problem domain that isn't so
applicable to real software.

I'm pretty sure it was posted on HN at some point. I don't remember the term
IO being used, so it may have been a different kind of obfuscation. There were
some allusions made to an unsolvable jigsaw puzzle.

------
ingenter
I'd like to note that IO does not, by itself, guarantee that embedded keys
cannot be extracted.

AFAIK, the definition of IO is: take two programs that perform the same
computation. After we apply IO to both programs, no efficient adversary can
tell which obfuscated program corresponds to which original.

However, there is a catch: programs encrypting data under different keys are
performing _different_ computations, so such a pair doesn't satisfy the
definition's precondition.

So the IO definition does _not claim_ that IO is able to hide the key.
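A toy illustration of that scope in Python (the "encryption" here is a hypothetical single-byte XOR, chosen only to make the point): the first pair below satisfies the definition's precondition (same function, different source), while the keyed pair does not.

```python
# Two syntactically different programs computing the SAME function:
# exactly the kind of pair the IO definition talks about.
def double_add(x):
    return x + x

def double_shift(x):
    return x << 1

# Two programs embedding DIFFERENT keys (toy single-byte-XOR "encryption"):
# these compute different functions, so the IO guarantee says nothing here.
def enc_under_k1(m):
    return m ^ 0x2A

def enc_under_k2(m):
    return m ^ 0x37

equivalent = all(double_add(x) == double_shift(x) for x in range(256))   # True
distinct = all(enc_under_k1(m) != enc_under_k2(m) for m in range(256))   # True
```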

~~~
phkahler
>> So IO definition does not claim that IO is able to hide the key.

From what I've read, that doesn't even matter. The obfuscated program IS
effectively the key. A copy of that obfuscated program is still a copy of the
key. It's still not clear to me what the advantage is supposed to be.

~~~
tromp
The obfuscated program only uses its embedded key in the ways its own logic
permits. You cannot make it sign arbitrary statements with the key.
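A rough sketch of that idea in Python (the key and policy below are made up for illustration; ordinary Python obviously exposes the constant to anyone reading the source—the obfuscation is what would make it unrecoverable):

```python
import hmac
import hashlib

# Hypothetical embedded key; under IO it would be baked into the program's
# logic rather than sitting in a readable constant like this.
SECRET_KEY = b"demo-embedded-key"

def constrained_mac(message: bytes):
    """Authenticate a message only if it matches the program's own policy."""
    if not message.startswith(b"receipt:"):
        return None  # the key is never exercised on arbitrary input
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
```

So even with the full program in hand, an attacker gets a signing oracle for `receipt:`-prefixed messages and nothing else; the key is only ever exercised on inputs the program itself approves.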

