If you mean what I think you do, Dan Bernstein described how to do that back in 2001:
A prudent rumour-monger would simply not sign the messages in the first place...
That is, if you show me a message purporting to be from Steve to Jenny in a well-designed modern protocol and give me access to Jenny's cryptographic materials, I can't be sure whether it is really a message Steve sent, or merely a message Jenny forged pretending to be from Steve.
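A minimal sketch of why (my illustration, not any specific protocol): symmetric MACs authenticate with a key both parties hold, so a transcript can never prove authorship to a third party. Whoever can verify a tag can also forge one.

```python
import hashlib
import hmac
import os

# One shared key, known to BOTH Steve and Jenny.
shared_key = os.urandom(32)

def tag(key: bytes, message: bytes) -> bytes:
    """Compute an HMAC-SHA256 authentication tag."""
    return hmac.new(key, message, hashlib.sha256).digest()

# Steve authenticates a message to Jenny:
msg = b"from Steve: meet at noon"
steve_tag = tag(shared_key, msg)

# Jenny can verify it came from someone holding the key...
assert hmac.compare_digest(steve_tag, tag(shared_key, msg))

# ...but because Jenny holds the same key, she can mint an equally
# "valid" tag for any message she invents. The tag authenticates the
# message *to Jenny*, but it proves nothing to a judge.
forged = b"from Steve: I confess to everything"
forged_tag = tag(shared_key, forged)
assert hmac.compare_digest(forged_tag, tag(shared_key, forged))
```

This is exactly the property Bernstein pointed at: the verifier's ability to check a tag is inseparable from the ability to forge one, so unsigned (MAC-authenticated) messages are deniable by construction.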
Quote from the end of the article -
What is scientifically interesting is whether we can build systems that actually prevent abuse, either by governments that misuse the technology or by criminals who steal keys. We don’t really know how to do that very well right now. This is the actual scientific problem of designing law enforcement access systems — not the "access" part, which is relatively easy and mostly consists of engineering. In short, the scientific problem is figuring out how to prevent the wrong type of access.
The basic security problem is not about whether a thing has sufficient internal rules to prevent abuse, but about how a thing relates to other things, like arbitrary substrates and abstractions. Security itself is a logical layer problem, where you are managing a dynamic between essentially arbitrary and incompatible categories and types.
I was about to argue that novel surveillance systems are not discovering anything about nature or advancing scientific knowledge, and that they are just laundering a political surveillance application through the prestige and respect of the discovery process. But that argument ignores all the amazing experimental techniques with general applications derived directly from cracking enemy codes (computers), side channel attacks (signal processing), data visualization (applications of graphs, fractal geometry), codifying and tooling applications of statistical techniques (Bayesian everything), etc.
Engineering simply does not provide general solutions to the problems of abuse and oppression, because it begins with the flawed assumption that they are concrete problems and not open dynamics. Instead of engineering anti-abuse rules into a system, the absolute best we can do from a security perspective is engineer costs on inputs to a system, which create incentives for its use and abuse. The rest is just adding internal complications to obscure them. In this sense, security is the study of incentives, a kind of economics by other means: the analysis of systems through the lens of incentives, and the development of technologies that optimize for them.
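One concrete instance of "engineering costs on inputs" is a hashcash-style proof-of-work stamp (my illustration, not from the comment above; the `DIFFICULTY` parameter is an assumption). The system imposes no rule about *what* you may send; it simply makes each input expensive to produce and cheap to check, and lets the resulting incentives do the work:

```python
import hashlib

DIFFICULTY = 12  # leading zero bits required; chosen for a fast demo

def pow_stamp(message: bytes) -> int:
    """Search for a nonce so that sha256(message || nonce) has
    DIFFICULTY leading zero bits. Expensive for the sender."""
    nonce = 0
    while True:
        digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0:
            return nonce
        nonce += 1

def check_stamp(message: bytes, nonce: int) -> bool:
    """Verify the stamp. Cheap for the receiver: one hash."""
    digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0

nonce = pow_stamp(b"hello")
assert check_stamp(b"hello", nonce)
```

Note what this does and does not achieve: it prices bulk abuse (spam, flooding) without encoding any judgment about content, which is the point of the comment above. The incentive is the mechanism; everything else is complication.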
If security essentially reduces to a shell game, an infinite three-card monte, then adding arbitrary complications to systems should be seen for what it is: a qualitative distraction from the encompassing problem of whose interest prevails.
I wanted to agree with Green and add that those academics he criticized for designing yet another surveillance system were not advancing science or knowledge, but I'm not sure that's the right objection. I think it's that they're just assholes.
In some cases, the actors that make such unscrupulous content go viral can have as big a part to play as the content creator themselves, or bigger.
Content becomes more dangerous the more viral it becomes — the content creator usually has little control over virality, and is usually not the most "popular" figure in the chain.
A gem on the topic is this presentation (also an essay of the same name) by Phillip Rogaway:
The Moral Character of Cryptographic Work
Something very hidden is the fact that group messages are encrypted using Facebook-owned keys. Also, the recently introduced business accounts are completely owned and controlled by Facebook in terms of encryption keys.
Basically, what India and other countries are pushing is not groundbreaking academic concepts, but just plain good old sovereign rights to demand data the foreign company already has.