
Fully Homomorphic Encryption: Secret Key Homomorphic Encryption Over Integers - madrafi
https://radicalrafi.github.io/posts/more-homomorphic-encryption/
======
pieguy

       (c mod p) mod 2 = ((p * q + 2 * r + m) mod p) mod 2 = (2 * r + m) mod 2 = m
    

This breaks if 2 * r > p. Even if you choose r to be small during encryption,
the r values accumulate with each homomorphic operation and will eventually
grow too big. The only restriction stated is that r is "from a different
interval than the private key one"; this should be made clearer.
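To see the noise ceiling concretely, here's a toy sketch (parameters are made up and far too small to be secure; they only illustrate the growth) showing decryption failing once the accumulated noise exceeds p:

```python
# Toy DGHV-style secret-key scheme with made-up, insecure parameters.
p = 1001  # odd secret key

def enc(m, q, r):
    return p * q + 2 * r + m

def dec(c):
    return (c % p) % 2

c1 = enc(1, q=5, r=10)   # noise term 2*10 + 1 = 21 < p
assert dec(c1) == 1      # fresh ciphertext decrypts fine

c2 = c1 * c1             # homomorphic multiply: noise ~ 21 * 21 = 441 < p
assert dec(c2) == 1      # still correct

c3 = c2 * c1             # noise ~ 441 * 21 = 9261 > p, past the ceiling
assert dec(c3) == 0      # wrong! 1 * 1 * 1 should decrypt to 1
```

Each multiplication multiplies the noise terms, so only a bounded number of operations work before p is overwhelmed; this is exactly what bootstrapping was invented to fix.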

~~~
jMyles
> Even if you choose r to be small during encryption, the r values accumulate
> with each homomorphic operation and will eventually be too big

As @tuxxy points out, there is a metaphor for describing this - a "noise
ceiling". To see this phrase used in context, see, for example:
[https://eprint.iacr.org/2011/277.pdf](https://eprint.iacr.org/2011/277.pdf)

------
8bitsrule
Top of article mentions 'In the previous post...' (which is an intro to
homomorphic encryption)

That post is _here_ : [https://radicalrafi.github.io/posts/homomorphic-
encryption/](https://radicalrafi.github.io/posts/homomorphic-encryption/)

~~~
madrafi
It was posted on HN 14 days ago. Thanks!

------
admax88q
Homomorphic encryption is interesting from a mathematics perspective, but in
practical terms it seems like an awful lot of effort being invested to move
even more computing off of your own devices and onto the "cloud."

~~~
derefr
Homomorphic encryption + smart contracts = an immortal, non-subpoena-able
lawyer holding your signing keys in trust, who can continue to act “as you”
even after your death. That’s useful, no?

Add a cryptocurrency wallet for which the signing keys are the private keys,
and now it’s more like an immortal, non-subpoena-able _corporation_ executing
your desires using its treasury assets. (Which can include hiring real people
to do real-world things, and even—given that you could provision crypto mining
capacity—making income to keep said corporation’s actions [relatively] self-
sustaining.)

~~~
elcritch
William Gibson called, I believe he wishes to discuss a new novel with you.

But seriously, while amazing in potential, having semi-intelligent contracts
running around indefinitely would eventually cause havoc, either through the
“Paperclip Maximizer” effect [1] or just through the accumulated weight of
smart contracts dragging down the economy.

1:
[https://wiki.lesswrong.com/wiki/Paperclip_maximizer](https://wiki.lesswrong.com/wiki/Paperclip_maximizer)

~~~
philsnow
Daniel Suarez wrote it already. Check out Daemon
[https://amzn.com/0451228731](https://amzn.com/0451228731) .

~~~
dsnuh
Seconded! And also the sequel, Freedom. The basic premise of the story seems
entirely plausible in the near future to me.

------
archi42
I used HE in my informatics B.Sc. thesis to do privacy-preserving
surveillance: store data on multiple servers and require a server majority to
reconstruct (non-HE), perform face recognition on encrypted data, and then use
(S?)HE to query a database for whether that face is in it - of course without
the DB learning anything about the face data. So, it turns out just throwing
some math at the problem works (I just applied some previous work; you know
what they say about the shoulders of giants) and gives you the advantages of
surveillance with less potential for abuse - but the necessary computational
power is absurd :(

(And yeah, HE "noise" is a pain)

~~~
x220
Care to share a link to your thesis?

~~~
archi42
Here it is:
[https://www.dropbox.com/s/9zv0xjmf4bz602a/thesis_web.pdf?dl=...](https://www.dropbox.com/s/9zv0xjmf4bz602a/thesis_web.pdf?dl=0)

We did some basic research on this for a seminar, and I wanted to figure out
how far it could be pushed in a more "real life" setting with the four roles
mentioned in the introduction. I ran into slight time problems, since the
implementation was more difficult than I expected, and due to some stuff
outside of university.

Anyway, maybe things have changed in the past 6 years and my conclusion from
back then doesn't hold anymore, so this could be more feasible now. Let me
know what you think ;-)

------
ddtaylor
> c is odd if m = 1 c is even if m = 0 ( Yes 0 is even ).

If c is the ciphertext, then can't someone simply take it mod 2 and "decrypt" it?
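For the record, the parity claim in the article applies to c mod p, not to c itself. Since p is odd, c mod 2 = (q + m) mod 2, and q is freshly random per encryption, so the ciphertext's own parity reveals nothing. A quick sketch with made-up small parameters:

```python
import random

p = 1001  # odd secret key (made-up, insecure size)

def encrypt(m):
    q = random.randint(1000, 2000)  # fresh random q each call
    r = random.randint(1, 10)       # fresh random noise
    return p * q + 2 * r + m

# c mod 2 = (p*q + 2*r + m) mod 2 = (q + m) mod 2 because p is odd,
# and q's parity is random, so both parities appear for each plaintext:
assert {encrypt(0) % 2 for _ in range(200)} == {0, 1}
assert {encrypt(1) % 2 for _ in range(200)} == {0, 1}

# the parity that *does* reveal m is the parity of c mod p, which
# requires knowing the secret p:
assert (encrypt(1) % p) % 2 == 1
```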

------
zebra9978
is this the same technology that Numerai uses... or is it multi party
computation ([https://mortendahl.github.io/2017/04/17/private-deep-
learnin...](https://mortendahl.github.io/2017/04/17/private-deep-learning-
with-mpc/)) ?

~~~
darawk
It is a virtual certainty that Numerai is lying about using homomorphic
encryption. The only known algorithms for FHE require specialized
algorithms to perform computations on the encrypted data. You can't just run a
standard neural net over some homomorphically encrypted data and expect an
interpretable result. Yet Numerai claims that this is exactly what's possible
with their data. This is clearly false. They are probably obfuscating their
private signals in some extremely trivial way.

~~~
madrafi
There's also the unlikely possibility of discovering a truly homomorphic
encryption scheme with no constraints on operations.

~~~
tzahola
About as likely as discovering Fermat's purported proof.

------
olliej
I am unclear on this - it looks like it operates one bit at a time, so if I
have a sequence of encrypted bits I can do frequency analysis. Clearly that is
not the case, so what bone-headed misunderstanding am I making?

~~~
sp332
P stays the same, but Q and R are randomly chosen for each bit. That wasn't
clear to me at first, but see the commented-out definition of the "encrypt"
function in the code block.

~~~
madrafi
Nice catch. I wanted to make the values small so the reader can follow along
with pen and paper, since that's the way I learned how it works.

~~~
sp332
So to be clear - do you need to store all those Q's and R's somewhere to
decrypt, or does it still work if you throw them away because of all the
mod's?

~~~
madrafi
No, the point of secret-key encryption here is that you only need p to encrypt
and decrypt. In the code example, q and r are random in each run and can be
thrown away.
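A minimal sketch of that point (made-up, insecure parameters): p is the whole secret, while q and r are drawn fresh per encryption and discarded, which is also why identical plaintexts give different ciphertexts.

```python
import random

p = 1001  # the only secret; made-up, insecure size

def encrypt(m):
    q = random.randint(100, 1000)  # fresh per ciphertext, never stored
    r = random.randint(1, 10)      # fresh noise, never stored
    return p * q + 2 * r + m

def decrypt(c):
    # mod p removes p*q, mod 2 removes 2*r -- no need to remember q or r
    return (c % p) % 2

for m in (0, 1):
    assert decrypt(encrypt(m)) == m

# randomized encryption: repeated encryptions of the same bit differ
# (with overwhelming probability), defeating frequency analysis
assert len({encrypt(1) for _ in range(100)}) > 1
```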

------
gesman
The best way to encrypt any data is to make adversary think the data does not
exist.

For everything else rubber hose cryptanalysis will work.

~~~
maxbond
If your adversary decides the easiest way to get your data is to kidnap &
torture you, you're probably doing a really good job.

This sort of thing is called "security through obscurity" and the consensus is
that it doesn't work. It can be a deterrent to adversaries lacking in skill or
motivation, but it isn't a very strong layer. Attackers quickly discover that
one company looks much like the next, and they develop an intuition for what
sort of data a company needs to collect to accomplish their tasks.
Additionally, there's tons of sensitive data that companies in certain
industries are required to collect & retain, and those laws are a matter of
public record. You aren't going to outfox them into thinking you don't have
blueprints of your widgets and evaluations of your employees on file
somewhere; focus on keeping them away from them.

~~~
Semirhage
Obscurity is a valid layer, it’s just not valid as a sole means of security.

~~~
maxbond
Don't think of it as a security layer, it won't serve you. Don't link to your
admin interface from your homepage, that's asking for trouble, but don't
expend additional effort to create obscurity thinking you're strengthening
your outermost defenses. You'll waste your time while creating a false sense
of security.

The problem with obscurity is that it doesn't really impose asymmetric costs
on the attacker. You know how much effort you spent creating a layer of
obscurity, but there is no way to know how much effort the attacker has to
spend to break it. Do they find your secret URL on accident? Were they an ex-
employee who simply knew? Are you just not nearly as tricky as you imagine you
are? You can easily work yourself to the bone creating a "layer" which is as
effective as the Maginot line.

~~~
crankylinuxuser
Then I posit a philosophical question: what exactly is "obscure"?

Is it 1 in 10?

Is it 1 in 10^6?

Is it 1 in 10^77 (2^256)?

Is it 1 in 10^154 (2^512)?

Is it 1 in 10^1233 (2^4096)?

Where is the threshold at which it's no longer "security by obscurity" but
security? Is a simple password enough? What about a login/password? What about
login/password/2FA? Or is a 4096-bit key acceptable as security instead of
obscurity?

> The problem with obscurity is that it doesn't really impose asymmetric costs
> on the attacker.

If you don't know the "number", and all you can do is guess, adding another
binary digit doubles the keyspace. I can add bits faster than you can guess
them. Mine scales linearly; yours scales exponentially.

> Do they find your secret URL on accident?

Does the same apply if they find a 4096-bit key by "accident"? Or let's take a
ZKP - if I successfully make 128 correct guesses at 4096 bits each, is that
just a "lucky guess"? By any gambling odds, that's essentially a 0% chance of
pure guessing.

> Were they an ex-employee who simply knew?

And the employee should have been deactivated. This specific secret should
never have been memorable or copy-able.

~~~
maxbond
It's an interesting question. For me, the difference is that obscurity
strategies are ad-hoc and unproven and perhaps unprovable. We should be able
to make a strong argument that our systems have particular security
properties, such as asymmetric costs.

So, asking how much entropy is "obscurity" and how much is "security" is the
wrong question. If you can measure the amount of entropy, you're already in
the "security" sphere, and you're talking about security and insecurity.

For instance, if you invent your own passwords rather than using a password
generator, and you use an ad-hoc strategy without employing any sort of
reasoning about how much entropy you're generating, I think it is fair to say
you're employing obscurity. For the initiated, it is not reasonable to expect
this strategy to do better than "hunter2". "Security", in this case, would be
using a password generator or some other strategy that we can reasonably
believe is sound.

You seem to be arguing for something provable which you can reason about
mathematically, and not something ad-hoc which we cannot be certain of.

If you happen to see my response and read the whole thing, then I pose to you
a second question; is creating a fake copy of your data, which you do not
protect as carefully as your real data, a security or obscurity strategy? Or
something else entirely?

