Hacker News | sjudson's comments

The main problem with this paper is that this is not the work that federal judges do. Technical questions with straight right/wrong answers like this are given to clerks who prepare memos. Most of these judges haven't done this sort of analysis in decades, so the comparison has the flavor of "your sales-oriented CTO vs. Claude Code on setting up a Python environment."

As mentioned elsewhere in the thread, judges focus their efforts on thorny questions of law that don't have clear yes or no answers (they still have clerks prepare memos on these questions, but that's where they do their own reasoning versus just spot-checking the technical analysis). That's where the insight and judgment of the human expert comes into play.


This is something I hadn’t considered. Most of the “mechanical” stuff is handed off to clerks - who, in turn, get a ringside seat to the real work of the judiciary, helping to prepare them to one day fill those shoes. (So please don’t get any ideas about automating away clerkships!)

Right. Clerks do the grunt work of this sort of analysis, which could easily be handed off to agents. They do this in order to get access to their real education: preparing and then defending to the judge the memos on those thorny legal questions. It would probably be a good thing for both clerks and judges to automate the sort of analysis this paper considers (with careful human verification, of course). That's not where the meat of anyone's job actually is.

As I understand the scheme, I inform the mint I will have, say, $200 wired to it. It gives me a code which I give to my bank when it does the transfer, so that the mint knows the money I told it would be arriving did actually arrive from me. I then get a cryptographic assertion that I have $200 in my wallet. Anonymous purchases work because, when I go to buy something, all the information I need to give the seller is the assertion that I have the available amount of money; essentially, the seller bills the mint for the cost of the game, which the mint transfers to the seller's bank. The first question is how bookkeeping is done so that money can't be spent twice. One option would be that when I go to buy, say, a $60 video game, the system cryptographically binds the purchase to my wallet, so if I go to another retailer it shows that I have spent $60 of the original $200 in the wallet and therefore have $140 left. The other option would be that any transfer into my wallet can only be used once, and in full. The discussion does say the wallet holds a transaction history, but that doesn't delineate which of the two modes of operation is the case.
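A toy sketch of those two bookkeeping modes (the class names and structures here are my own illustration, not Taler's actual data formats):

```python
# Illustrative sketch (not Taler's real protocol): two ways a mint
# could prevent double-spending of a $200 deposit.

class BoundWallet:
    """Option 1: purchases are bound to the wallet, which tracks a balance."""
    def __init__(self, deposit):
        self.balance = deposit
        self.history = []          # proofs-of-purchase bound to this wallet

    def spend(self, amount, seller):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        self.history.append((seller, amount))

class OneUseToken:
    """Option 2: each transfer yields a token spendable once, in full."""
    def __init__(self, value):
        self.value = value
        self.spent = False

    def spend(self, seller):
        if self.spent:
            raise ValueError("token already spent")
        self.spent = True
        return (seller, self.value)

w = BoundWallet(200)
w.spend(60, "game-store")       # $140 left for later purchases
assert w.balance == 140

t = OneUseToken(200)
t.spend("game-store")           # the entire $200 is consumed in one go
assert t.spent
```

The fraud-resistance trade-offs between the two modes are what the rest of this comment is about.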

There are a few issues with the "wallet-bookkeeping" option, where proofs-of-purchase are bound to the wallet. First, it doesn't protect against dishonest sellers. Say a customer comes in with a wallet that originally held $200 but has since spent $80. The seller can still bill the mint for $150, and there is no way for the mint to confirm that the customer didn't have that much left over from their transfer to the mint. Second, the proof-of-purchase presumably must be cryptographically bound to the wallet, or else the buyer will simply discard it. With it bound, discarding the proof-of-purchase also discards the remaining money in the wallet. But for this to work, there must be no recourse for the buyer to recoup the money in a lost wallet; yet the user can do exactly that by backing up (which the discussion of Taler notes can be done), discarding the version of the wallet that records a payment and returning to the one without it.
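A minimal sketch of that rollback attack, assuming a hypothetical wallet that keeps its own spending history:

```python
import copy

# Illustrative sketch (hypothetical wallet model, not Taler's format) of
# the backup/rollback problem: if the spending record lives only in the
# wallet, restoring a pre-purchase backup "unspends" the money.

wallet = {"balance": 200, "history": []}
backup = copy.deepcopy(wallet)          # back up before the purchase

wallet["balance"] -= 60                 # buy a $60 game
wallet["history"].append(("game-store", 60))
assert wallet["balance"] == 140

wallet = backup                         # restore the backup
assert wallet["balance"] == 200         # the $60 purchase has vanished
assert wallet["history"] == []
```

Nothing on the buyer's side can prevent this; only mint-side bookkeeping can.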

Because of this, it seems likely that Taler would make any transfer one-use only, to prevent these issues. In that case, the user would transfer the exact amount of funds needed (or something close, hoping to get change in cash, which wouldn't be sustainable for sellers if all their buyers are using Taler) right before making their purchase. It then becomes entirely feasible for the mint to link purchases to bank transfers, since there will only be so many $273.67 transfers (remember, the mint knows who the transfers are from) followed within the next five minutes by $273.67 purchases (and the mint will know the name of the seller).
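A rough illustration of that linking attack, assuming the mint logs deposits and purchases with amounts and timestamps (all names, amounts, and times here are hypothetical):

```python
from datetime import datetime, timedelta

# Illustrative sketch: if buyers deposit the exact purchase amount just
# before buying, the mint can link deposits to purchases by matching
# amount and time, without ever seeing wallet contents.

deposits = [  # (depositor, amount, time) -- known to the mint
    ("alice", 273.67, datetime(2014, 1, 1, 12, 0)),
    ("bob",    50.00, datetime(2014, 1, 1, 12, 1)),
]
purchases = [  # (seller, amount, time) -- also known to the mint
    ("game-store", 273.67, datetime(2014, 1, 1, 12, 3)),
]

def link(deposits, purchases, window=timedelta(minutes=5)):
    links = []
    for seller, amt, t in purchases:
        candidates = [d for d, a, t0 in deposits
                      if a == amt and timedelta(0) <= t - t0 <= window]
        if len(candidates) == 1:       # unique match => linkable
            links.append((candidates[0], seller, amt))
    return links

print(link(deposits, purchases))   # [('alice', 'game-store', 273.67)]
```

The more unusual the amount, the fewer candidate deposits there are, and the stronger the link.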

Fundamentally, the issue with all these schemes is that for the mint to be unable to track purchases, it must decouple the process of taking deposits from the process of paying out to sellers. It can do this either by making the money holder responsible for the bookkeeping, which opens the door to fraud, or by making each certification one-time use only (as Chaum's DigiCash did), which incentivizes buyers to transfer funds as needed. That allows at least partial linkability; probably not something that would stand up in court in and of itself, but likely sufficient circumstantial evidence in a number of cases.

I'm wondering if I'm misunderstanding something, since these issues have all been pointed out with previous anonymized e-cash attempts, and it seems unlikely GNU would set out to build a scheme with these well-known weaknesses. Hopefully I'm wrong, and there is some additional information here that I'm misunderstanding or that has yet to be presented. Also, for a useful read on this type of thing, see Brands' criticism of Chaum's pseudonym approach in "Building in Privacy: Rethinking Public Key Infrastructures and Digital Certificates" (pp. 25-32, although the whole intro is very useful).

EDIT: Making my points more clear, got pretty jumbled in my original post...


I tend to see Menezes and Koblitz as making three different arguments against the common focus on provable security, in roughly increasing order of importance:

(1) "Provable security" is a misleading term. Those well versed in the field understand that it is a computational proof about a mathematical object, not in any way a statement of the practical security of an algorithm when implemented in computer code. However, many people misunderstand it to mean "unbreakable" or "perfect". Menezes and Koblitz see it as too frequently acting as a marketing gimmick, letting vendors of security software say "it's based on provably secure algorithms" without anyone understanding what that means. To them, the misleading quality of the term outweighs its value. I generally disagree on this point, not because the point isn't well taken, but because it is far too common for phrases from academic discourse to be misunderstood outside the field (e.g., teleportation in quantum mechanics) for such a semantic argument to carry much weight in the specific case of provable security.

(2) Provable security proofs have frequent gaps, or their authors fail to recognize important assumptions they are operating under. Their "Another look..." papers focus on these cases, and the problem is fairly widespread. Most of the concerns are completely academic and don't open any practical attack vectors; the point, however, is that computational proofs depend on assumptions that allow a positive proof of a negative, and if the methods of the proofs obscure those assumptions to where practitioners frequently miss them, the validity of the arguments is called into question. The best example is probably the discussion, in the original 'Another look at "provable security"' article, of the proof of Optimal Asymmetric Encryption Padding (OAEP, required since textbook RSA and other asymmetric encryption algorithms leak information about the plaintext; in RSA's case, the Jacobi symbol). It's hard to argue with their evidence, and their work has definitely led to a much more critical eye on the proof justifications and assumption discussions in newer articles.
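As a small aside on that parenthetical, here is a toy demonstration (toy parameters, not a real implementation) of the leak that motivates padding schemes like OAEP: textbook RSA exposes the Jacobi symbol of the plaintext, since (c|N) = (m|N)^e = (m|N) whenever e is odd.

```python
# Toy demonstration that textbook (unpadded) RSA leaks the Jacobi
# symbol of the plaintext: anyone who sees c and N can compute (c|N),
# which equals (m|N) because the public exponent e is odd.

def jacobi(a, n):
    """Jacobi symbol (a|n) for odd n > 0, via quadratic reciprocity."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a:
        while a % 2 == 0:          # pull out factors of 2
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                # reciprocity step
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

p, q, e = 61, 53, 17               # toy RSA modulus N = 3233
N = p * q
for m in (42, 1234, 2718):
    c = pow(m, e, N)               # textbook RSA encryption, no padding
    # An eavesdropper who sees only c and N still learns (m|N):
    assert jacobi(c, N) == jacobi(m, N)
```

One bit per message may sound harmless, but any reliable leak already violates the semantic-security definitions that the provable-security literature works with.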

(3) "Provable security" misses the point. To Menezes and Koblitz, the goals of cryptography mean it shouldn't be treated as a discipline of mathematics. While academic cryptographers chase the abstract goal of provable security and debate the philosophical strengths of computational vs. information-theoretic arguments, "real-world" cryptographers, including those working for the intelligence establishments, focus on useful goals like efficiency and ease of implementation in software and hardware. To them, building cryptosystems from primitives with strong heuristic arguments for their security that are easy to implement is far more beneficial than the most perfectly crafted mathematical algorithm. In addition, although the academic field of cryptography can easily adapt to any new result that challenges basic assumptions (say, a result attacking the hardness of the DLP or factoring), all the deployed systems relying on those assumptions would be broken or greatly damaged, so holding those assumptions as central to total security is problematic. As someone particularly interested in how cryptographic systems delegate and understand trust, this argument carries a lot of weight with me, and is part of the reason that, although I consider a solid understanding of the theory and practice of provable security a necessary piece of a greater understanding of the field, my short- and longer-term interests in cryptography focus on the secondary qualities of cryptosystems and their primitives.

