(c mod p) mod 2 = ((p*q + 2*r + m) mod p) mod 2 = (2*r + m) mod 2 = m
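That identity can be sanity-checked with a toy sketch of the somewhat-homomorphic integer scheme (parameters here are tiny and purely illustrative, nothing like the real thing; the only invariant that matters is that the noise 2*r + m stays well below p):

```python
import random

def keygen():
    # Secret key: a random odd integer p (toy-sized for illustration)
    return random.randrange(10**5, 10**6) * 2 + 1

def encrypt(p, m, q_bound=10**8, r_bound=100):
    # c = p*q + 2*r + m, where the noise 2*r + m is much smaller than p
    q = random.randrange(1, q_bound)
    r = random.randrange(0, r_bound)
    return p * q + 2 * r + m

def decrypt(p, c):
    # (c mod p) mod 2 recovers m, as long as the noise is below p
    return (c % p) % 2

p = keygen()
for m in (0, 1):
    assert decrypt(p, encrypt(p, m)) == m
```

Note that c mod 2 on its own reveals nothing: since p is odd, c mod 2 equals (q + m) mod 2, and q's parity is uniformly random.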
As @tuxxy points out, there is a metaphor for describing this - a "noise ceiling". To see this phrase used in context, see, for example: https://eprint.iacr.org/2011/277.pdf
That post is here: https://radicalrafi.github.io/posts/homomorphic-encryption/
Add a cryptocurrency wallet for which the signing keys are the private keys, and now it’s more like an immortal, non-subpoena-able corporation executing your desires using its treasury assets. (Which can include hiring real people to do real-world things, and even—given that you could provision crypto mining capacity—making income to keep said corporation’s actions [relatively] self-sustaining.)
But seriously, while amazing in potential, having semi-intelligent contracts running around indefinitely would eventually have to cause havoc. Either by the "Paperclip Maximizer" effect, or just by a huge weight of accrued smart contracts dragging down the economy.
But, assuming that one does, ever, want someone else to do a proprietary computation on one's data, it'd be real nice if that didn't require giving up plaintext.
(And yeah, HE "noise" is a pain)
We did some basic research on that for a seminar, and I wanted to figure out how far this can be pushed in a more "real life" setting with the four roles mentioned in the introduction.
I ran into some time problems, since the implementation was more difficult than I expected, and due to some stuff outside of university.
Anyway, maybe things have changed in the past 6 years and my conclusion from back then doesn't hold anymore. So this could be more feasible now. Let me know what you think ;-)
If c is the ciphertext, then can't someone simply take it mod 2 and "decrypt" it?
For everything else, rubber-hose cryptanalysis will work.
This sort of thing is called "security through obscurity" and the consensus is that it doesn't work. It can be a deterrent to adversaries lacking in skill or motivation, but it isn't a very strong layer. Attackers quickly discover that one company looks much like the next, and they develop an intuition for what sort of data a company needs to collect to accomplish its tasks. Additionally, there's tons of sensitive data that companies in certain industries are required to collect & retain, and those laws are a matter of public record. You aren't going to outfox attackers into thinking you don't have blueprints of your widgets and evaluations of your employees on file somewhere; focus on keeping them away from that data.
The problem with obscurity is that it doesn't really impose asymmetric costs on the attacker. You know how much effort you spent creating a layer of obscurity, but there is no way to know how much effort the attacker has to spend to break it. Do they find your secret URL on accident? Were they an ex-employee who simply knew? Are you just not nearly as tricky as you imagine you are? You can easily work yourself to the bone creating a "layer" which is as effective as the Maginot line.
Is it 1 in 10?
Is it 1 in 10^6?
Is it 1 in 10^77 (2^256)?
Is it 1 in 10^154 (2^512)?
Is it 1 in 10^1233 (2^4096)?
Where is the point at which it's no longer "security by obscurity" but actual security? Is a simple password enough? What about a login/password? What about login/password/2FA? Or is a 4096-bit key acceptable as security instead of obscurity?
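As a quick sanity check on the powers of ten above, a keyspace of 2^bits can be converted into the "1 in 10^d" odds framing like this (a back-of-the-envelope conversion, assuming the intent is the decimal order of magnitude of 2^n):

```python
import math

def decimal_digits(bits):
    # Order of magnitude of 2^bits in base 10: floor(bits * log10(2))
    return math.floor(bits * math.log10(2))

for bits in (256, 512, 4096):
    print(f"2^{bits} is roughly 10^{decimal_digits(bits)}")
```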
> The problem with obscurity is that it doesn't really impose asymmetric costs on the attacker.
If you don't know the "number", and all you can do is guess, adding another binary digit increases the keyspace by 2x. I can add 2's faster than you can guess. Mine scales linearly. Yours scales exponentially.
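That asymmetry can be made concrete with a toy cost model (the specific costs are an illustrative assumption, not a benchmark; the point is only the shape of the two curves):

```python
# Defender pays a constant amount per extra key bit; the attacker's
# worst-case brute-force work doubles with every bit added.
def defender_cost(bits):
    return bits          # linear: one unit of work per bit

def attacker_cost(bits):
    return 2 ** bits     # exponential: worst-case brute-force tries

for bits in (8, 16, 32, 64):
    print(f"{bits} bits: defender {defender_cost(bits)}, attacker {attacker_cost(bits)}")
```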
> Do they find your secret URL on accident?
Does the same apply if they find a 4096-bit key by "accident"? Or let's take a ZKP - if I successfully make 128 correct guesses at 4096 bits each, is that just a "lucky guess"? By any measure of odds, that's essentially a 0% chance of guessing it.
> Were they an ex-employee who simply knew?
And the employee should have been deactivated. This specific secret should never have been memorable or copy-able.
So, asking how much entropy is "obscurity" and how much is "security" is the wrong question. If you can measure the amount of entropy, you're already in the "security" sphere, and you're talking about security and insecurity.
For instance, if you invent your own passwords rather than using a password generator, and you use an ad-hoc strategy without employing any sort of reasoning about how much entropy you're generating, I think it is fair to say you're employing obscurity. For the initiated, it is not reasonable to expect this strategy to do better than "hunter2". "Security", in this case, would be using a password generator or some other strategy that we can reasonably believe is sound.
You seem to be arguing for something provable which you can reason about mathematically, and not something ad-hoc which we cannot be certain of.
If you happen to see my response and read the whole thing, then I pose to you a second question; is creating a fake copy of your data, which you do not protect as carefully as your real data, a security or obscurity strategy? Or something else entirely?
but shifts the effort from:
"I have this and you cannot have it!"
to:
"I don't have it"