Thinking about “traceability” (cryptographyengineering.com)
92 points by grappler 55 days ago | 13 comments



One field of cryptography that only recently became available is deniable cryptography. Basically how it works is: for every message from sender->receiver, either the sender or the receiver can "sign" it, but nobody can tell which. That way, you always know the (signed) messages you receive are real, but you can't prove their authenticity to anyone else. You know they're real because the message could only be from the sender or from the receiver, and you're the receiver, and you would remember if it was you, so it must be from them. But you can't turn around and show that message to anyone else to incriminate the sender - it could just as easily have been forged by you to frame them. (And actually the best deniable cryptography schemes are even better than that, for reasons that are difficult to explain briefly.)

Right now, if you really want to message securely, first of all you need to make sure your messaging app has E2E encryption and perfect forward secrecy. Even better, it should be configured to automatically delete the perfect-forward-secrecy keys after some time, to limit the damage if your phone gets seized (only the messages from the past x days could be recovered). Signal has an option for that and it's really nice.

But I think the next wave is actually going to be deniability - not just at a crypto level, but at a UI level, so users can easily add fake messages from themselves or their conversation partner at any point in the past that are indistinguishable from real ones, making the practice of "leaking screenshots" irrelevant (or at least no better than he-said-she-said). You don't strictly need deniable cryptography to implement this, but it would be the icing on the cake.
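A minimal sketch of the shared-key version of this idea, using only Python's stdlib hmac module (the names here are illustrative, not from any real protocol). Because both sides hold the same key, a valid tag convinces the receiver but proves nothing to a third party:

    import hmac, hashlib, os

    key = os.urandom(32)  # shared by sender and receiver, e.g. via Diffie-Hellman

    def tag(message: bytes) -> bytes:
        # symmetric MAC: anyone who holds `key` can compute this
        return hmac.new(key, message, hashlib.sha256).digest()

    def verify(message: bytes, t: bytes) -> bool:
        return hmac.compare_digest(tag(message), t)

    msg = b"meet at noon"
    t = tag(msg)
    assert verify(msg, t)
    # The receiver knows msg is authentic: only the two key holders could
    # have produced the tag, and the receiver knows it wasn't them. A third
    # party learns nothing, since the receiver could forge the same tag.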


> One field of cryptography that only recently became available is deniable cryptography. Basically how it works is: for every message from sender->receiver, either the sender or the receiver can "sign" it, but nobody can tell which.

If you mean what I think you do, Dan Bernstein described how to do that back in 2001:

https://groups.google.com/g/sci.crypt/c/73yb5a9pz2Y/m/LNgRO7...


Deniability through forgery doesn't really affect the traceability issue. If the police have good reason to think you originated a particular message, they can just seize your phone and do forensics on it. Chances are they will find traces. Failing that, they can just find a single person who will testify that you sent them a particular message, which, even with forgeability, they can state with certainty (they know they didn't forge it themselves).

A prudent rumour-monger would simply not sign the messages in the first place...


Most (interactive) protocols have non-non-repudiation (i.e., deniability), because it's easier and faster to implement. If you wanted to tie messages to the sender or the receiver individually, you'd need to sign them, which is way more expensive than a MAC or authenticated encryption.
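For a rough sense of the cost gap, here is a toy comparison, not a benchmark; it assumes the third-party `cryptography` package is installed. An HMAC is a couple of hash invocations, while an Ed25519 signature is public-key math:

    import hmac, hashlib, os, time
    from cryptography.hazmat.primitives.asymmetric import ed25519

    msg = b"hello world" * 10
    mac_key = os.urandom(32)
    signing_key = ed25519.Ed25519PrivateKey.generate()

    t0 = time.perf_counter()
    for _ in range(10_000):
        hmac.new(mac_key, msg, hashlib.sha256).digest()   # deniable tag
    t1 = time.perf_counter()
    for _ in range(10_000):
        signing_key.sign(msg)                             # non-repudiable signature
    t2 = time.perf_counter()

    print(f"10k HMAC-SHA256 tags:   {t1 - t0:.3f}s")
    print(f"10k Ed25519 signatures: {t2 - t1:.3f}s")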


You usually want to authenticate the sender, both to make it harder for attackers to learn anything by sending you chosen ciphertexts and because users actually want to know who they're talking to. And if you're doing that, then that authentication will be non-repudiable unless you go out of your way to make it repudiable.


Unless your users are the protagonist of the Hollywood movie "Memento", they already know whether they sent a message, so you can simply arrange that both parties have the keys needed to authenticate any message. The user knows they didn't send it, so the apparent sender must have. But nobody else can reach the same conclusion.

That is, if you show me a message purporting to be from Steve to Jenny in a well-designed modern protocol and give me access to Jenny's cryptographic materials, I can't be sure whether it is really a message Steve sent, or merely a message Jenny forged pretending to be from Steve.
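Concretely, a toy MAC-based sketch of this (the names and message format are made up for illustration): once you have Jenny's key material, a forged "message from Steve" verifies exactly like a real one:

    import hmac, hashlib, os

    shared_key = os.urandom(32)  # held by both Steve and Jenny

    def authed(sender: str, body: bytes):
        data = sender.encode() + b"|" + body
        return data, hmac.new(shared_key, data, hashlib.sha256).digest()

    real   = authed("steve", b"I really sent this")  # made by Steve
    forged = authed("steve", b"I never sent this")   # minted by Jenny

    for data, t in (real, forged):
        expected = hmac.new(shared_key, data, hashlib.sha256).digest()
        assert hmac.compare_digest(expected, t)  # both verify identically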


Is this new? I feel like OTR had a version of this, where after some time both parties would advertise their keys, allowing anyone to create forgeries. Maybe a slightly different implementation, but it feels like a very similar overarching concept.
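For reference, a heavily simplified sketch of that OTR trick (the real protocol does much more): each MAC key is published in the clear once it has rotated out, so any eavesdropper could retroactively forge messages under it:

    import hmac, hashlib, os

    old_mac_key = os.urandom(20)  # used for earlier messages, now retired
    transcript = [(b"old message",
                   hmac.new(old_mac_key, b"old message", hashlib.sha1).digest())]

    # ...keys rotate; the old MAC key is then revealed in plaintext...
    published_key = old_mac_key

    # anyone who recorded the traffic can now extend it with forgeries
    fake_body = b"never actually said"
    fake = (fake_body, hmac.new(published_key, fake_body, hashlib.sha1).digest())
    transcript.append(fake)  # indistinguishable from the genuine entry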


Very nice article describing the challenges of traceability in an end-to-end encrypted messaging platform, in this case WhatsApp.

Quote from the end of the article -

What is scientifically interesting is whether we can build systems that actually prevent abuse, either by governments that misuse the technology or by criminals who steal keys. We don't really know how to do that very well right now. This is the actual scientific problem of designing law enforcement access systems — not the "access" part, which is relatively easy and mostly consists of engineering. In short, the scientific problem is figuring out how to prevent the wrong type of access.


Having trouble thinking of an example of a closed-loop system that has sufficient means to prevent its own abuse. Maybe I'm way off, but the qualitative concept of abuse is necessarily an exogenous condition to any thing or system. The only way to reduce abuse is to reduce the domain of function of a system so that it only works with some kind of external key - and then you've just popped your abuse problem up out of the system to an encompassing logical layer, where you have to manage that conceptual key problem.

The basic security problem is not about whether a thing has sufficient internal rules to prevent abuse, but how a thing relates to other things, like arbitrary substrates and abstractions - security itself is a logical layer problem where you are managing a dynamic between essentially arbitrary and incompatible categories and types.

I was about to argue that novel surveillance systems are not discovering anything about nature or advancing scientific knowledge, and that they are just laundering a political surveillance application through the prestige and respect of the discovery process - but that argument ignores all the amazing experimental techniques with general applications derived directly from cracking enemy codes (computers), side channel attacks (signal processing), data visualization (applications of graphs, fractal geometry), codifying and tooling applications of statistical techniques (Bayesian everything), etc.

Engineering simply does not provide general solutions to the problems of abuse and oppression, because it begins with the flawed assumption that they are concrete problems and not open dynamics. Instead of engineering anti-abuse rules into a system, the absolute best we can do from a security perspective is to engineer costs on inputs to the system, which create incentives for its use and abuse. The rest is just adding internal complications to hide them. In this sense, security is the study of incentives, a kind of economics by other means: the analysis of systems through the lens of incentives, and the development of technologies that optimize for them.

If security essentially reduces to a shell game or infinite three-card monte, adding arbitrary complications to systems should be seen for what it is, a qualitative distraction from the encompassing problem of whose interest prevails.

I wanted to agree with Green and add that those academics he criticized for designing yet another surveillance system were not advancing science or knowledge, but I'm not sure that's the right objection. I think it's that they're just assholes.


Not only that. How can you even deny access if the interface has no idea about the type of access? Unless a request comes in signed by a court/judge with some attestation of intent, you still don't know whether it's a legitimate one.


This whole debate on tracing the origin of offensive messages misses an important point —

In some cases, the actors who make such unscrupulous content go viral can have as big a part to play as the content creator themselves, or bigger.

Content becomes more dangerous the more viral it becomes — the content creator usually has little control over virality, and is usually not the most "popular" figure in the chain.


I appreciate Matthew Green's reminding cryptographers and security researchers that _your research is plausibly not neutral_ and that misuse is sometimes not just a "side effect" but inherent in the type of work you are proposing.

A gem on the topic is this presentation (there is also an essay of the same name) by Phillip Rogaway: The Moral Character of Cryptographic Work https://www.usenix.org/conference/usenixsecurity16/technical...


The article fails to mention that WhatsApp already has central knowledge of shared content.

Something very hidden is the fact that group messages are encrypted using Facebook-owned keys. Also, the recently introduced business accounts are completely owned and controlled by Facebook in terms of encryption keys.

Basically, what India and other countries are pushing is not groundbreaking academic concepts, but just plain old sovereign rights to demand data the foreign company already has.



