Homomorphic encryption (wikipedia.org)
106 points by azujus on Aug 2, 2019 | 83 comments



There is a decent-sized effort to build a system that runs (a restricted, but hopefully useful, subset of) Julia programs fully homomorphically (as well as supporting various sorts of secure multiparty computation protocols). At JuliaCon two years ago, the Galois folks talked about their initial prototype of this work: https://www.youtube.com/watch?v=_KLlMg6jKQg (fun to watch even if you don't care about Julia, to see FHE "in action"). This effort was recently funded with the goal of extending the prototype into a full robust system, so I'm hoping for some good news here over the next couple of years.


If anyone is interested in playing with Fully Homomorphic Encryption, we (NuCypher YC S16) built NuFHE (https://github.com/nucypher/nufhe/). It's written in Python and has excellent documentation, so you can try building some circuits and playing around with it. It requires a GPU to run, but it's also the fastest implementation of FHE in the world (that I know of).

Let me know what you think! :)


Is there some kind of interoperability with other libraries? Or does it support CPU encryption/decryption? For example, one can expect clouds to have GPUs to perform computations, but encryption and decryption are typically done by clients on various devices where portable code is expected.


This is mostly a research library, so we haven't put our limited effort into CPU operations yet, but it's definitely possible if someone wanted to take the time to expose it in the library.


Seriously one of the most important areas of mathematics for democracies in an online world.

Homomorphic encryption promises a hidden and verifiable online voting system that does not rely on trusting a third party.


Any political voting system will need a trusted third party to run the voter registration/identity system, so I doubt the lack of practical homomorphic encryption is blocking this. There are other voter-verifiable systems that don't rely on HE for trustworthy counting:

https://www.chaum.com/publications/AccessibleVoterVerifiabil...

The major problem with online voting is that people can be coerced into voting against their wishes outside the watchful eye of election authorities. This may be worth the increase in voting ease, but it's where the real debate is.


How does online voting differ from mail voting?

The only difference I see is that the mail is sent via the postal service and the online vote is sent via my personal computer and internet connection.

To get around this, the government could issue verified voting tablets that are locked down and use secured connections.

Otherwise, people can already force me to vote differently without the authorities noticing.


I don't know that there is a difference, and I find it a problem that mail voting is becoming more widespread. There could already be a nontrivial number of coerced or paid voters. Voting by mail should be a tiny percentage of the vote, largely consisting of people who are overseas. Instead, we're starting to see a lot of elections decided by mail-ins.


> How does online voting differ from mail voting?

You cannot easily encrypt your voting information when sent by regular mail. If you have a unique unforgeable id, like a private key, and a secure voting device then your vote can be submitted and counted securely online. Granted, you could print your encrypted vote and mail it in.


> The major problem with online voting is that people can be coerced into voting against their wishes

The main problem is guaranteeing one vote per eligible voter.

Coercion is a related but smaller problem. It's much harder to coerce most of the people most of the time than it is to stuff the ballot.


Worth mentioning that ballot stuffing is a problem with the people counting the votes/running the polls, not the voters. So it would be more accurate to say that the problem is preventing the entity that organizes the vote from accessing discrete votes.


I don't think that is a major problem, unless I am misunderstanding. Oregon for instance is all vote by mail, outside the watchful eye of any government authority.


What do you mean by "outside the watchful eye of any government authority"?

Do you just mean the ballots are filled out at home where a government authority is not looking over my shoulder? Because everything else is controlled by the government. The ballots and booklets are printed by the government (who authorize what can be on the ballot and in the booklet), are mailed by a government agency, are checked by a government authority, etc.


If anyone has the ability to confirm your vote, either without you or through you, you can be compelled or paid for it.

Imagine constructing a system that can thwart an abusive, tyrannical father who insists that his wife and children vote for a particular candidate (to make it concrete). If you can get past him, your voting system passes the first test. Now imagine someone is offering $50 if you vote in a particular way. If there's no way to figure out how someone would claim it, it passes the second test.

The abusive father can literally just fill out all of his family's ballots, and the $50 could be claimed by filling out the ballot in front of the buyer. You could thwart this by allowing multiple votes but only accepting the first, but then the father or buyer could just have the ballots filled out immediately at the first legal moment.

I don't know that it's a thing that can be done without a totally private environment around the voter and the record, meaning that the actions of the voter cannot be observed.


Those are cause for concern, but let's be realistic: the percentage of coerced ballots must be very low, I'd guess less than 1%. I think the pros of mail-in voting (getting a greater percentage of the population to vote because they can do so at their leisure, don't have to take time off of work, don't have to stand in lines, etc.) outweigh the cons (such as potential coercion or selling of votes).


There are countries, or regions, or municipalities, or neighborhoods where the number of coerced ballots can easily be 50% or more. Voting by mail is a complete no-no in those situations.

There are also countries where turnout is consistently above 70% and there is no mail voting. In the US the obstacle to voting is not having to go to a polling station; it's voter registration (due to not having a federal ID), voting on a Tuesday rather than during the weekend, gerrymandering due to political bodies being able to affect the redistricting process, and so on.


It's hard to look over someone's shoulder to make sure they're making the "right" vote if they're in a public voting booth.


I was guessing that the OP meant literally looking over your shoulder. If you fill out the ballot at home, it is feasible you could be coerced to vote a particular way.

I don't think this is currently happening, so I don't think it is a major issue.



I still think paper voting is the only way, no matter the algorithm; i.e. no matter how good the system is, it's still just a black box at the end of the day.

Imagine trying to hack the British general election, it would be impossible without hiring millions.


How does computation on encrypted data relate to voting systems?


Homomorphic encryption would allow tallying the ballots without decrypting them.

Helios [1], for instance, uses a homomorphic scheme.

There are alternatives to it, though, which preserve voter privacy but still allow vote tallying. Shuffling is one of them: Cothority [2] implements an e-voting scheme based on Neff shuffles.

1. https://heliosvoting.org/ 2. https://github.com/dedis/cothority/tree/master/evoting

P.S. I contributed to the latter
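To make "tallying without decrypting" concrete, here is a minimal sketch using a toy Paillier-style additively homomorphic scheme (the systems above use their own schemes; the parameters here are tiny and the randomness is not cryptographic, so this is illustration only). Multiplying ciphertexts adds the hidden votes, and only the final tally is ever decrypted:

    import math, random

    p, q = 1000003, 1000033            # toy primes -- far too small for real use
    n = p * q
    n2 = n * n
    g = n + 1                          # standard Paillier generator
    lam = math.lcm(p - 1, q - 1)

    def L(x):
        return (x - 1) // n

    mu = pow(L(pow(g, lam, n2)), -1, n)

    def encrypt(m):
        r = random.randrange(1, n)     # should be coprime to n; overwhelmingly likely here
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        return (L(pow(c, lam, n2)) * mu) % n

    # Each voter encrypts 1 for "yes" or 0 for "no".
    ballots = [encrypt(v) for v in [1, 0, 1, 1, 0, 1]]

    # The tallier multiplies ciphertexts, which adds the plaintexts underneath,
    # without ever seeing an individual vote.
    tally_ct = 1
    for c in ballots:
        tally_ct = (tally_ct * c) % n2

    print(decrypt(tally_ct))           # -> 4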


It’s possible that OP meant multiparty computation.


I'm wondering if this could be applied for zero-knowledge training of AI, ensuring complete privacy while training a model.


It promises more than that. If we could actually have fast homomorphic execution we could have blind cloud computing.


It also means undebuggable black box computations running on your machine (DRM, javascript).


If this interests you, a related concept with similar applications as HE is functional encryption: https://en.m.wikipedia.org/wiki/Functional_encryption


Here is a decent-looking Haskell library that implements functional encryption concepts: https://github.com/cpeikert/Lol


The technology behind all this progress was a huge discovery in 2009. But what if it is a dead end, and nothing originating from that discovery will ever be practical?

Like wouldn't it be preposterous if someone said, "Here Craig Gentry, take $1 billion to run enough computers for the current FHE schemes. What is the snazziest demo you can run?"


FHE isn't the only option. Somewhat Homomorphic Encryption can be fast and stupendously valuable for a lot of statistical operations where you can figure out how to compute your function with only a small number of multiplications.
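To give a feel for what "a small number of multiplications" buys you, here is a sketch (plain Python values standing in for ciphertexts) that just counts multiplicative depth: an encrypted mean needs no ciphertext-ciphertext multiplications at all, and an encrypted variance needs only one level, which is exactly the regime where somewhat/levelled homomorphic schemes are fast:

    # A plaintext stand-in that counts multiplicative depth, to illustrate why
    # statistics like mean and variance fit in a somewhat-homomorphic scheme.
    class Ct:
        def __init__(self, value, depth=0):
            self.value, self.depth = value, depth
        def __add__(self, other):
            return Ct(self.value + other.value, max(self.depth, other.depth))
        def __mul__(self, other):
            return Ct(self.value * other.value, max(self.depth, other.depth) + 1)

    xs = [Ct(v) for v in [4.0, 7.0, 1.0, 6.0]]
    n = len(xs)

    total = xs[0]
    for x in xs[1:]:
        total = total + x               # additions are "free": depth stays 0
    mean = total.value / n              # division by a public constant, done after decryption

    sq_sum = xs[0] * xs[0]
    for x in xs[1:]:
        sq_sum = sq_sum + x * x         # one multiplication per element: depth 1
    variance = sq_sum.value / n - mean ** 2

    print(mean, variance, total.depth, sq_sum.depth)   # 4.5 5.25 0 1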


Some of the newer schemes are much faster. The recent progress feels like deep learning in 2010, right before everyone realized it worked


> The recent progress feels like deep learning in 2010, right before everyone realized it worked

Does it work, though?


https://www.microsoft.com/en-us/research/publication/crypton... (2016)

> We demonstrate CryptoNets on the MNIST optical character recognition tasks. CryptoNets achieve 99% accuracy and can make more than 51000 predictions per hour on a single PC. Therefore, they allow high throughput, accurate, and private predictions.


It's starting to, yes, in particular for machine learning. There is a yearly competition called iDASH where people show the performance of their homomorphic schemes. This year should be very interesting.


If they keep that name it will be a dead end.


That’s life

It will join the graveyard of technologies. Was that rhetorical?


A very casual (layman's?) introduction to Homomorphic Encryption - https://news.ycombinator.com/item?id=13450015


Why do people always talk about arbitrary computation in relation to homomorphic encryption? What I really want is a homomorphic encryption system which allows me to arbitrarily slice and concatenate strings without knowing their contents. This would be immensely useful for implementing end-to-end encrypted collaborative editing of documents. Is homomorphic encryption there yet?


You can do this with a TFHE implementation, if I understand your use case correctly. You encrypt bits and then you can operate on and manipulate those individual encrypted bits.

I referenced NuFHE in a comment, but you should give it a try and see if it will do what you're wanting. See https://github.com/nucypher/nufhe/. We also have a discord channel where you can ask questions on using it in the #nufhe channel -- https://discord.gg/rmSafk
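To sketch why bitwise FHE covers the slicing/concatenation use case (plain booleans below stand in for encrypted bits; this is not the NuFHE API, just the circuit-level idea): once each character is a list of encrypted bits, concatenation and slicing at public positions are just list operations on ciphertexts, and choosing between encrypted values under an encrypted condition comes down to a MUX gate per bit:

    # Conceptual sketch only: plain booleans stand in for TFHE-style encrypted bits.
    def AND(a, b): return a and b      # stand-ins for homomorphic gate evaluations
    def XOR(a, b): return a != b
    def MUX(sel, a, b):                # homomorphic select: sel ? a : b
        return XOR(AND(sel, XOR(a, b)), b)

    def enc_str(s):                    # 8 "encrypted" bits per character
        return [[bool(ord(ch) >> i & 1) for i in range(8)] for ch in s]

    def dec_str(bits):
        return "".join(chr(sum(b << i for i, b in enumerate(ch))) for ch in bits)

    a, b = enc_str("hello "), enc_str("world")

    # Concatenation and public-index slicing are just list operations on ciphertexts.
    combined = a + b
    sliced = combined[6:]              # the server never learns it spells "world"
    print(dec_str(combined), dec_str(sliced))

    # Selecting between two encrypted characters under an *encrypted* condition
    # needs a MUX per bit -- this is where the gate-level circuits come in.
    cond = True                        # imagine this is an encrypted bit
    picked = [MUX(cond, x, y) for x, y in zip(a[0], b[0])]
    print(dec_str([picked]))           # 'h' if cond else 'w'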


I'm dying for this. My team builds ML models on text corpora. Most of this data is sensitive. My company has very strict data privacy policies and it's a pain to even share the data with other teams in the department. I've made it part of my long-term goals to facilitate secure sharing of sensitive data across the organization. Numerical data seems to be the easiest to anonymize (randomized response, etc), but I have yet to find any techniques for text other than generating synthetic data.


Hi, I've been replying to other people in this thread. I work at NuCypher doing some research and cryptography engineering. I work on Proxy Re-Encryption and Fully Homomorphic Encryption.

Do you mind sending me an email with your use case and needs? I'd love to have a chat with you.

john@nucypher.com


This guy right here, this guy knows what's up. gl john.


So you want to slice and concatenate strings without you yourself and any other collaborators knowing what the string is? what about hashing each word? you could slice and concat on whitespace boundaries if that's the case.

i'm not sure how this helps e2e encrypted collaborative editing though. why not just use asymmetric encryption? what am i missing?


Asymmetric encryption is great, but it means that all rebasing/transforming of edits has to be done client side. Having a homomorphic system would allow us to do some of this work server-side without revealing the documents themselves.


eh, I'm still not getting it. why not have a single or maybe multiple admins that all need to coordinate and decrypt before values can be decrypted for anyone else? upload their subkeys to the server for hashing the words, create shortened aliases for the hashes and then allow you to see them all. then you can do whatever you want with individual words that you want.

did you want to transform within words or something?


For a layman like me it sounds really cool, almost like magic. Consider a trivial operation like finding a maximum value in a list. How is that supposed to work on encrypted values while simultaneously providing strong encryption? So something like adding N to everything in the list is not an acceptable encryption.


Just like a Laplace transform maps differential equations into algebraic equations and convolution into multiplication - or Fourier for that matter - it's not so hard to imagine that there are encryption maps (that are hard to invert) but where something like a sum operation becomes a feasible operation in the encrypted domain. A max operation can similarly have an equivalent operation in the encrypted space.

I guess your concern is that the output is "one of the encrypted input" values and hence identified, although not decrypted. Subsequently, all the input values would be fed into the "max" module and their complete order could be determined by the one running the homomorphic server.

In that case we will need to have an output where all inputs are returned. Perhaps a map with indices and values (all encrypted) as input and as output would be sufficient.
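For a concrete (if limited) example of such an encryption map, textbook RSA without padding is multiplicatively homomorphic: multiplying two ciphertexts gives a ciphertext of the product of the plaintexts, so the server can compute on values it cannot read. (Toy parameters below; unpadded RSA is insecure in practice, and it only preserves multiplication. Fully homomorphic schemes extend this to both addition and multiplication, which is enough for arbitrary circuits.)

    # Textbook RSA is multiplicatively homomorphic: multiplying ciphertexts
    # multiplies the hidden plaintexts. Toy parameters, no padding -- illustration only.
    p, q = 1000003, 1000033
    n = p * q
    e = 65537
    d = pow(e, -1, (p - 1) * (q - 1))

    def enc(m): return pow(m, e, n)
    def dec(c): return pow(c, d, n)

    c1, c2 = enc(6), enc(7)
    print(dec((c1 * c2) % n))   # -> 42, computed without decrypting c1 or c2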


Today is the first time I heard of homomorphic encryption, so I have zero knowledge about this. But just to show this is not magic: you can provide N*N lists where each list has totally different values, and then get the max index for each list as a return. Since you know which list was the real one, you can keep that result and discard the rest.


Not sure I'm following you. Would you transmit in plain text N-1 random lists along with the real one? I would not consider that encryption.

I guess one brute force way to do it is making encryption unnecessary. For an input of N bits, have the results calculated/returned for all 2^N possibilities. Does not sound very practical.


You can do polynomial approximation to get a compatible function
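A sketch of that idea, assuming inputs have been scaled into [-1, 1]: since FHE circuits can only add and multiply, you replace the non-polynomial part of max(a, b) = (a + b + |a - b|) / 2 with a polynomial fit of |x|, trading exactness for FHE-compatibility (this is the usual trick in approximate-arithmetic schemes such as CKKS):

    import numpy as np

    xs = np.linspace(-1, 1, 1001)
    coeffs = np.polyfit(xs, np.abs(xs), deg=8)     # polynomial approximating |x|

    def approx_abs(x):
        return np.polyval(coeffs, x)               # only + and * once expanded

    def approx_max(a, b):                          # inputs assumed scaled into [-1, 1]
        return (a + b + approx_abs(a - b)) / 2

    print(approx_max(0.3, 0.7), max(0.3, 0.7))     # ~0.7 vs exact 0.7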


I've run into a few people working on this over the last five years or so, but they've been a bit cagey about discussing their use cases and customers.

Any public applications outside of blockchain?


When I encountered FHE as a potential solution, it was in designing authentication and payment tokens.

Use case was you need to be able to verify that the output of a program was also a proof of the integrity of that program.

E.g. I receive a payment token from you, and I can verify that this token was produced by a program I could verify as being the "real" program, personalized to your identity, on a device also personalized to your identity, that you physically hold and verify yourself to.

Pretty good with a chip/PIN combination, but on a mobile general-purpose computer with lots of other code on it, it's a hard problem. With some handwaving, FHE would ostensibly have enabled the secure personalization of the program and the signing of those token outputs. It was a variation on: https://en.wikipedia.org/wiki/Direct_Anonymous_Attestation as well.

FHE was the DRM holy grail where suddenly you can "tokenize" information. Other applications are in selling and metering software use.

In the case of health information, the ability to open up data sets to researchers to query and analyze without the risk of losing control of the data is huge. We know that de-identification of data is (information theoretically) impossible, but an ideal FHE scheme would facilitate queries against data that would mitigate much of the risk associated with it.

The other use case is in highly regulated environments where there are legal firewalls between lines of business. Basically wherever there is a use case for de-identification, FHE is a potential solution in that domain. In that regulatory case, it's sort of ironic that it's a solution for, "ok, we won't commit a crime, but we need the hypothetical output of that crime, so let's use cryptography to facilitate that outcome without explicitly breaking the law whose effect is to prevent this outcome."

Perhaps that's why people working in it seem so cagey.


There are as many use cases as there is sensitive data! Some of the obvious ones are automated medical diagnosis, genomics, biometric authentication, fraud detection, etc. What has prevented those use cases from happening at scale is the performance of homomorphic schemes.


Cloud Computing. Use the compute from a big company, without that company possibly knowing what they are computing on.

Machine Learning. Apply neural networks to encrypted data and avoid privacy issues.


Online voting.

That's the big one.


To address the inevitable “what is this useful for” questions, my go-to example is cryptographic voting mechanisms.

The idea is that you segment a large integer into a couple of different bins by its bitwise representation. So you have a 60-bit integer and you segment it into four 15-bit bins. You use one of those to randomize what the encrypted versions are going to be, and you use the other three for different vote tallies of three candidates for some office.

You can then hand people three numbers each corresponding to a different candidate, and ask them to commit to one as their vote. Public authorities can then aggregate votes which they cannot actually see, and we don't decrypt until we get to some large enough context where your vote has been anonymized among ten thousand others, and you can check that the random seeds have been properly added, or other such things.

This also allows you to create a big online database where anybody can see their vote was counted, but nobody can figure out who someone else voted for.

There is a slight difficulty in that you cannot see directly what your numbers are actually voting for. The machines you vote with therefore need to be able to decrypt a ballot for you and then immediately destroy it, so you can verify it was what you thought it was and trust that your three numbers do not all happen to vote for the same person. If someone tried that on any scale that could affect an election (even poisoning only 1% of ballots in a 500-person district), and everyone burns one ballot to test the system, the fraud gets discovered at least once with 99.3% certainty. But the point is that all of these other issues can be handled "out-of-band" once you protect the important stuff.
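Here is a minimal sketch of the bin-packing trick described above, with plain integers standing in for the additively homomorphic ciphertexts (the real system would perform the same addition under encryption): four 15-bit bins in one integer, one per candidate, so adding ballots adds every candidate's tally at once, as long as no bin exceeds 2^15 ballots:

    BIN = 15

    def ballot(candidate, noise=0):
        # bin 0 would hold per-ballot randomness (left at 0 in this sketch);
        # bins 1..3 hold a one-hot vote for the chosen candidate
        return noise | (1 << (BIN * candidate))

    def unpack(total):
        return [(total >> (BIN * i)) & (2**BIN - 1) for i in range(4)]

    votes = [ballot(1), ballot(3), ballot(1), ballot(2), ballot(1)]
    tally = sum(votes)                 # the only operation the tallier ever performs

    print(unpack(tally)[1:])           # -> [3, 1, 1]: candidate 1 got 3 votes, etc.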


I'd think there's a simpler way to accomplish what you said above (though in both cases, any voting mechanism that lets the voter verify their vote after the fact also runs into the problem of people complaining about encouraging vote buying).

i.e.

imagine every polling place outputs to you (after you voted) a random number in the 128-bit space.

the votes are recorded with this random number. We can verify after polls close that the voting machine has an appropriate number of votes (i.e. not more or fewer than the people who came through the booths).

all this vote data is aggregated into the public record. You can look up your random number after the fact and see that it matched who you voted for. No encryption needed (beyond the technology that goes into making a secure RNG).


Well, no, if you do it cryptographically, at least with the proper mechanism, you can prevent votes from being buyable. In your case, if someone wants to buy your vote, they can ask you to text the number to them before it has appeared as a matter of public record—and if you voted for the Right Person they will pay you. The 128-bit number makes this very hard to forge, whereas to destroy vote-buying you want to make something very easy to forge.

Suppose that you receive a ballot from a machine which tears it down the middle: on the right hand side are bar codes containing the voting numbers; on the left-hand-side are candidates' metadata—names, parties, etc. So from the very moment I hand you the ballot, you can see that there is a connection between these numbers and those names, but as long as I provide a supply of other left-hand-sides in other orders, it becomes very easy for you to fake it when displaying it to someone else. That ease-of-forgery is the key to making it impossible to buy votes.


maybe I'm missing something, but I can't see any system that allows me to verify my vote after the fact not enabling a vote buying mechanism.

As I understand it (perhaps incorrectly), the primary thing that makes vote buying financially difficult is the fact that a person's vote can't be verified. How does homomorphic encryption enable me to verify my own vote but prevent anyone else from using the same info I'd use to verify it?


Just the way I said in the comment you are replying to. Well, actually, I know two ways, that is just one of them.

Let me put it a different way. Let us suppose that you are in New York State in 2016, voting for the US president, and let's ignore the strange things that can happen with write-ins. After a random shuffle your ballot might look like this:

                         | BALLOT #5846
                         |
    1. Hillary Clinton   |  [  ]  [barcode]
       Democratic Party  |
    2. Jill Stein        |  [  ]  [barcode]
       Green Party       |
    3. Gary Johnson      |  [  ]  [barcode]
       Libertarian Party |
    4. Donald Trump      |  [  ]  [barcode]
       Republican Party  |
As this ballot is being presented to you, it is being cut by a sharp blade along that line through the center. So you have these two halves, and you know that they once belonged to the same piece of paper.

The right hand side is scanned and it is what we make public. Everyone can confirm that you voted in this past election, and you punched the third (say) square in your ballot. But we also make it really easy for you to take, outside of the voting booth, any of a number of other left-hand sides in other random permutations. So if you wanted a left-hand side that said "Trump, Johnson, Stein, Clinton" that is easily available for you to take out of the booth.

Now after the election you can keep either or both papers and go to a government-run website and confirm that that right-hand side corresponds to who you voted for, and you can start a political watchdog group to make sure that the homomorphic operations were properly done on all of these peoples' right-hand-sides-of-ballots. But that web site is not saying "Oh hi it's you, you voted for Gary Johnson," it's saying "Oh hi it's you, you voted for the third person on your ballot." You know that the left-hand side you have says that candidate #3 was Gary Johnson, you saw the paper cut with your own eyes. But to everyone else, that left-hand-side is just a piece of paper.

So: we have made it very easy for you to forge any other vote, as far as any other party would be able to verify. Nobody else can confirm the connection between the piece of paper you hold in your hand and the piece of paper that has been scanned and appears in the public database. And since this is very easy to forge, it is worthless as a piece of information for vote-buying purposes.

So that stuff is all really straightforward. The only dodgy thing is, what if I were to hand you a ballot like this where every vote on the right hand side happened to be a bar code for Jill Stein? Since the number is encrypted, that is not something you would otherwise have access to.

And the solution there is burning ballots on-demand. You can make requests to the election authority asking to decrypt a ballot during the election; indeed we print a lot of extra ballots expecting folks to do this and we declare it their civic duty. When you do so, you get to reveal the "true" left-hand-side for a given right-hand-side and confirm that they are the same—but that ballot is thereafter invalidated and cannot participate in the election. As more and more people do this, it becomes more and more costly to do less and less vote-rigging in this way. So you get an implicit assurance that no tampering has happened in the process of getting this ballot to you, if you can trust that your communication pipeline to the decryption authority is secure and they are not compromised. (And if they are compromised there is very little you can do in any case.)

(The other mechanism just has a ballot which is two pieces of paper attached above each other with labels on the one piece of paper and holes that let you punch out the other piece of paper -- you can go online after the election and verify that the hole which was punched was the one you punched, but your ability to get other front-sheets at the voting booth makes it very easy for you to forge a ballot for say your employer where you appear to have publicly voted for their preferred candidate but secretly you voted for another one.)


re burning ballots.

If it's possible to burn a ballot (i.e. associate the set of bar codes to actual candidates), shouldn't it be possible to "burn" a ballot after the fact as well?

i.e. we have 4 barcodes, I need a way to associate each barcode with a candidate to burn it, so why couldn't this happen after the fact as well?

I assume homomorphic encryption might help here, I just am missing it.


Homomorphic encryption does not affect that problem... It's just down to policy. If the decryption authority “stays open” after the election and no longer insists on checking ballots to see if they have already been cast, then yeah, you can abuse the system to decrypt placed ballots.

If the keys are destroyed after a valid election, as one would expect, then there is no possibility for that.

One way to better ensure the keys are destroyed is to use secret-sharing schemes so that multiple parties that are adversaries would have to lie similarly about destroying the keys, then conspire to work together to decrypt ballots after the fact. But I hope you see that this is all chasing social problems that must be solved as a precondition to have fair elections in the first place.


well, you would have to somehow "close" the ability to "decrypt" a used ballot instantly, otherwise someone could "decrypt" it while the election is ongoing.

Though I tend to agree, it's more of a social issue that technology can't really solve. That's why I'm more concerned about a voter (and hence others) being able to verify that their vote was recorded correctly than about doing our utmost to discourage "vote buying" schemes. At the end of the day, I don't think technology can really solve that problem, and having more faith in the electoral system as a whole by making it individually verifiable has more value (even if it can make vote buying more common). But I understand I might be in the minority on that.


Right, the protocol is essentially that you have a central server which supports in essence two SQL queries,

    -- burn an unused ballot so its contents can be revealed to the voter
    UPDATE ballots
    SET status = 'burned'
    WHERE contents = :ballot AND status = 'unused'
which, if it succeeds, then sends the ballot to the decryption oracle with the private key, to be decrypted and sent back to the user; and

    -- cast an unused ballot for this voter in their region
    UPDATE ballots
    SET status = 'used'
      , voter_id = :voter
      , choice = :choice
    WHERE contents = :ballot
      AND status = 'unused'
      AND region = :region
which, if it succeeds, then sends back a confirmation that this user has been logged with that ballot and made that choice for that ballot.

If you allow people to access the decryption oracle without going through that first pathway, which simultaneously checks if the ballot was not spent and immediately spends it into the "burning" pathway, then either of those opens up the space to attacks which decrypt individual ballots. With that said, just about any auditing mechanism applied to the decryption oracle would be revealing the existence of those attacks anyway so you can still get a measure of security without this.

You can potentially even distribute the database (e.g. over a blockchain among several political parties), but as far as I can tell the decryption authority would still need to be centralized and could be a single-point of failure. (In this case it would be a program which is watching that blockchain and interacting with it via some “I publish a burned ballot onto the ledger after I think the blockchain has passed N blocks ahead of the ledger request to burn that ballot” algorithm, and nodes in the network need to reject requests to cast ballots that they think have been requested to be burnt.)


at the end of the day, this still relies on a heavy level of trust (i.e. on the infrastructure itself to do the right thing and that no one has a copy of the db).

As discussed, I'd prefer a system that increases trust without relying on trusted components (by making the vote verifiable after the fact) even if that can incentivize vote buying (but that's mostly because I view trusting the infrastructure as a bigger threat than being worried about vote buying, but I might be wrong about that)


I don't understand why you think these things are necessarily opposed.

You can have a system where everyone has a copy of the database. That is not hard, it just requires the separation of what a ballot means, from what is stored in the database. That is just these two-sided ballots with encrypted values on the right-hand-sides: so that the fact that I voted for #1 on my ballot does not tell those who hold the database who I voted for.

You can have a system where encrypted ballots are known by the people to have the values that they say they have. That is not hard, it just requires a challenge-response scheme. If I give you a box and claim there is a pony figurine inside, you can be suspicious: if I give you twenty thousand boxes and claim that they all have pony figurines inside of them, and you ask me to open ten thousand of them which you choose randomly, then for me to omit one pony I am facing a 50% detection rate; for two, a 75% detection rate; to disenfranchise even 10 people from their ponies I will be caught in the act 99.9% of the time, and even then I can only disenfranchise 0.1% of the boxes.
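The detection numbers in that pony example are easy to check, assuming 10,000 of the 20,000 boxes are opened uniformly at random:

    def p_caught(total=20_000, opened=10_000, bad=1):
        # probability that at least one of the `bad` boxes is among the opened ones
        escape = 1.0
        for i in range(bad):
            escape *= (total - opened - i) / (total - i)   # all bad boxes stay unopened
        return 1 - escape

    for k in (1, 2, 10):
        print(k, round(p_caught(bad=k), 4))                # ~0.5, ~0.75, ~0.999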

So I can have great confidence that my vote was recorded for the first person on my ballot (I can see the database), and I can have great confidence that the first person on my ballot was Alice and not Bob or Carol (because they passed my challenge/response test).

You can also have a system where nobody can pay substantial sums of money for votes. That is also not hard, it just requires the things that users take home with them out of the voting booth to be easily forged, so that they cannot prove that they did not forge the thing.

Absolutely none of this requires homomorphic encryption; homomorphic encryption just streamlines some of the process around the decryption oracle: with HE tallying and anonymization happen outside of it, so that its internal structure simplifies drastically.


You open yourself to vote buying and voter coercion attacks, historically the most common voting fraud mechanisms in the States.


any verifiable voting mechanism opens oneself up to vote buying.

if I can use homomorphic encryption to verify my vote, I can give the same info needed (say, this 128-bit number) to someone else.


Define "verifiable". There are absolutely schemes where I can't verify what my vote was after the fact, but can verify that it was counted.


verify that my vote was counted correctly, i.e. that my vote wasn't changed or fraudulently presented to me.


You can do that with quite a few pen-and-paper systems today, without being able to verify what you voted for after the fact.


This could allow us to build a universal distributed network of compute resources and a real-time auction system that allows agents to bid for time on it. No need to worry that the hardware owners are snooping on your computation. Of course, economics rarely favor distributed networks, and the "no need to worry" is really just "diminished need to worry", but it's an interesting thought nevertheless.


So conceptually, I get sent a locked voting box, I slip my vote in, return the box. No one can open the box until the election and nothing is identifiable about the tallies at the end. Ok...

What is stopping me from putting in multiple votes?

What's stopping someone from checking my single-vote difference? (i.e., skipping the anonymization-through-aggregation part)


1. The box only fits one ballot. This is easier to do with bits than with real boxes, of course. "This is box number 12345" and that number is present also within the homomorphically-encrypted payload and we can confirm that the sum of the box numbers in the public database is the same as the sum of the decrypted box numbers. And of course I can tie you-the-person with the box number that you voted with publicly, to prevent you from sending multiple boxes to be counted.

2. The tallies are added without opening the boxes, so anyone can confirm that computations to add together the tallies for a region were all done properly. But we don't give everyone the ability to decrypt ballots ad-hoc.

The only big question here is about key compromise at the end; that is a matter of properly destroying the decryption key at the end of the decryption of the tallies, so that this key cannot be leaked out to someone to try and decrypt individual votes. There are some options for making this part more robust—open-source software and secret sharing schemes—but I mean there can be very fundamental issues of trust at the highest level and if those issues are sufficiently pervasive then no amount of cryptography can protect the election; you just have a dictator who is prepared to fix it at all costs or so.


If any such decryption key exists that can decrypt single votes, it's already a failure. Not only can we not trust that it will stay secret, we must also ensure it is kept secret from the vote counters themselves.


Could a fully homomorphic cpu architecture with fully encrypted cache be immune to Spectre and similar side channel attacks? Could this be tested on an FPGA?


Unfortunately, FHE doesn't work this way. You're operating on encrypted data, so performing branched operations doesn't work, due to the IND-CPA security.

I.e.: you have a value on which you need to do `if <condition> then <statement> else <other statement>`.

Problematically, if you could branch on that condition, it would violate the confidentiality of the encrypted value, thus breaking the CPA security. There are some workarounds and methods for getting around this problem sometimes, but in many cases it's not possible.
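One of those workarounds, sketched with plain integers standing in for ciphertexts: evaluate both branches and blend them arithmetically with the (encrypted) 0/1 condition, so the control flow never depends on the secret. You pay for both branches, but nothing about the condition leaks:

    def select(cond_bit, then_val, else_val):
        # cond_bit is an encrypted 0 or 1; only additions and multiplications are used
        return cond_bit * then_val + (1 - cond_bit) * else_val

    a, b = 42, 7
    print(select(1, a, b), select(0, a, b))   # -> 42 7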


Thanks for your explanation. When I read [1]:

> A cryptosystem that supports arbitrary computation on ciphertexts is known as fully homomorphic encryption (FHE). Such a scheme enables the construction of programs for any desirable functionality, which can be run on encrypted inputs to produce an encryption of the result

I thought that meant the program itself could be fully encrypted, but after a second look it seems that it is just the inputs that are encrypted. Still, other areas of the wiki talk about support for boolean gates and even arbitrary gates. I don't know what to think, but it is motivating me to revisit coding theory :-)

[1] https://en.m.wikipedia.org/wiki/Homomorphic_encryption#Fully...


So any unit of work in the FHE scenario is necessarily a basic block with no branching?


In most situations, yes. Like I said there are methods and exceptions, but it's complex to get into.


My school is working on this right now. Seriously awesome.


Are these schemes theoretically resistant against quantum computing?


Yes, all the fully homomorphic schemes are lattice-based and thus thought to be quantum-resistant.



