
On Ghost Users and Messaging Backdoors - kandarpck
https://blog.cryptographyengineering.com/2018/12/17/on-ghost-users-and-messaging-backdoors/
======
merlincorey
Apparently some researchers from GCHQ in the UK are proposing that
"secure" messaging systems like iMessage and WhatsApp, which manage group chats
centrally in a manner that bypasses end-to-end encryption, should:

- Add "ghost" users/devices to existing chats

- Suppress notifications of these additions to users

This would perpetuate a currently known bug in secure communication protocols,
effectively turning it into a "feature" for law enforcement.

Fortunately, systems like Signal and Briar have already moved past this flaw.

~~~
goblin89
Keybase Teams—also featuring e2e-encrypted group chat—appears to be proof
against the ghost-user-based attack.

It would be interesting to compare its implementation to how Signal does group
messaging in TextSecure v2.

[0] https://keybase.io/blog/introducing-keybase-teams#anyway-teams-have-signature-chains

~~~
maxtaco
Thanks for the mention! We designed Keybase with these exact attacks in mind.

------
tptacek
"Ghost users" are also a class of protocol and UX bug in secure messengers
that are worth looking for; you will find secure chat programs where it's
possible to add a member to a group with a very strong chance of not alerting
other members of the group, whereupon the E2E encryption scheme of the system
does all the work of decrypting the messages for you.
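
A sketch of that bug class (all names hypothetical): a client that encrypts to whatever roster the central server hands back will happily fan out to a silently added ghost, while a client that diffs the roster against its last known state surfaces the addition first.

```python
def send_naive(message, server_roster, encrypt_for):
    # trusts the server's roster: no check, no notification --
    # a ghost member gets a ciphertext like everyone else
    return {member: encrypt_for(member, message) for member in server_roster}

def send_checked(message, server_roster, known_roster, encrypt_for, alert):
    # safer client: any roster addition is surfaced to the user
    # before a single ciphertext is produced for the newcomer
    added = set(server_roster) - set(known_roster)
    if added:
        alert(added)      # e.g. "unknown device joined the group"
        return None       # refuse to send until the user approves
    return {member: encrypt_for(member, message) for member in server_roster}
```

The difference is purely client-side behavior; the E2E layer is identical in both cases, which is why the bug is as much UX as protocol.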

------
tjoff
Instead of creating a ghost user account and attempting to join a chat, why not
just copy the key of one of the participants?

~~~
bigiain
In a "properly designed system", the service only ever sees public keys, never
private keys.
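
To illustrate with a toy Diffie-Hellman exchange (parameters are illustrative only, not any real app's group; real systems use vetted groups or X25519): the private key is generated on-device and the server's database only ever holds the public half, so there is nothing to copy.

```python
import secrets

# Illustrative-only parameters: a Mersenne prime modulus and a small
# generator. Do not use these in a real system.
P = 2**127 - 1
G = 3

def make_keypair():
    private = secrets.randbelow(P - 2) + 2   # generated on-device, never uploaded
    public = pow(G, private, P)              # the only thing the server stores
    return private, public

alice_priv, alice_pub = make_keypair()
bob_priv, bob_pub = make_keypair()

# registration: the service's database holds public keys only
server_db = {"alice": alice_pub, "bob": bob_pub}

# each client derives the same shared secret from its own private key
# plus the peer's public key fetched from the server
shared_a = pow(server_db["bob"], alice_priv, P)
shared_b = pow(server_db["alice"], bob_priv, P)
assert shared_a == shared_b   # agreement, yet the server never saw a private key
```

Compromising the server yields only `server_db`; without a private key there is no participant key to copy.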

~~~
tjoff
If the point of a law is to circumvent encryption you shouldn't be surprised
that it doesn't satisfy anyone who wants the encryption to be safe.

Either the backdoor works and the system is bad, or the backdoor doesn't work
and the system is illegal. At least if the law doesn't have a loophole.

So I'm not sure why the article complains about the design when the intent and
goal are the real issue. Seems like the design works as intended.

~~~
maxerickson
Consider that the article may be written for people on the law
enforcement/policy side of the debate.

~~~
tjoff
Not sure what that changes. If I was law enforcement / pro backdoor I'd say
that this article supports my view.

The main complaint seems to be that _Over time what seems like a “modest
proposal” could lead us to world where GCHQ becomes the ultimate architect of
Apple and Facebook’s communication systems._

But that is of course inevitable if the proposal is to be successful. What
other solution would the author propose?

------
RcouF1uZ4gsC
With misinformation and rage spread through messaging apps having literally
resulted in people getting killed by mobs (see for example
https://www.nytimes.com/interactive/2018/07/18/technology/whatsapp-india-killings.html),
maybe we should re-evaluate our belief that making it impossible for
governments to see what is spreading through messaging apps is an unmitigated
good?

~~~
pariahHN
How people use a tool is not the fault of the tool; there is an underlying
issue that drives that behavior. It would be like mandating that hammers be
soft enough that they can't damage a skull because people use them to bash in
people's heads, which, yes, would prevent hammers from being used as weapons,
but would also render them ineffective at their original purpose.

~~~
burkaman
I don't think that's entirely true; sometimes tools have only one purpose. It
would not be ethical to manufacture nukes and sell them to people, for
example.

Even with a messaging app, imagine that you created a new one, and then found
that for some reason 90% of your user base is hitmen communicating with their
clients. Maybe that's not your fault, but I think you would be ethically
obligated to shut it down, or significantly modify it to stop enabling hitmen.

Obviously these are contrived examples, and often in real life it's impossible
to make a tool that can't be used for evil. But I don't think you're devoid of
responsibility just because you didn't intend for your creation to be abused.
If you accidentally created something dangerous, you have an obligation to
take reasonable measures to mitigate the danger.

~~~
pariahHN
I think a large factor in this is the range of intended uses - in the example
of a nuke, it can only be used for one thing which is evil, and so there is no
downside to banning it or mandating changes to the properties inherent to its
existence. But tools like private messaging and hammers have a huge potential
for being used for good (due to the same properties that make them useful for
evil) and targeting their properties to reduce viability for evil also reduces
the amount of good they can do.

All that being said, I do agree that in some cases there is a definite ethical
burden on a creator to consider the impact of his creation - I just think that
in many cases the best solution is not to change the tool to avoid misuse but
to figure out why the misuse occurs/would occur in the first place and try to
solve that. I would conjecture that the misuse more often than not points to a
deeper social issue that is, for some reason, not being properly dealt with,
but which is actually a really big deal that no one wants to confront. I can
think of a few examples, but I think that level of exploration may be better
suited to a blog post than a comment.

