A lot of the problems were caused by the legacy crypto implementations. The misery comes not from ripping them out and replacing them with matrix-rust-sdk crypto, but from that replacement (unavoidably) taking a while.
The risk being accepted is that a homeserver can currently add members to a group, with the group being notified when this happens.
This risk will be removed completely once TOFU and signed control events are implemented, which is planned (and was planned before this research). It's just more work than could fit in the disclosure timeline, especially because it's a large change needing ecosystem coordination.
I don't think "the group being notified" that an unauthorized member can decrypt all their messages is quite the mitigation that Matrix advocates think it is.
This is the fundamental task of any secure group messenger. It really has one job: don't let unauthorized people read messages meant for the group. Here, Matrix has apparently accepted the risk that their group messenger can't do that job if the server is compromised. If you know where to look and your group is small enough, you can constantly watch to see whether your homeserver has decided to stop protecting your group, but either way: your homeserver can spontaneously decide to stop protecting your group. Matrix, you had one job!
At the point where you accept this risk, you might as well just use Slack.
You're suggesting Slack, where a similar compromise would reveal your entire account message history...? Just enforce proper key verification for now and you're fine.
I genuinely do not understand the impulse people have to rationalize stuff like this. This is a devastating research result. It might be the most damaging paper ever published on a secure messaging system; I'd have to think about that.
For what it's worth, I just did a quick survey of other secure messaging systems to see how they manage group membership. These days Signal uses zkgroups, as per https://signal.org/blog/signal-private-group-system; it looks like Wire is somewhere in a transition to MLS for client-managed group membership (although historically membership looks to have been entirely controlled by the server). I dread to think what WhatsApp or iMessage do (anyone know if membership is server-controlled or not?)
So yes: we should switch to client-controlled membership management, and we've already started the work to do so. However, the Matrix spec and its implementations have always been transparent that it's up to the user to verify the membership of the room - after all, if they don't bother verifying users, then all bets are off anyway. For instance https://element.io/blog/e2e-encryption-by-default-cross-sign... explicitly says: "You’ve verified this user, but they have risky unverified sessions logged in! A room containing any red users is shown as red." I'm not sure this exactly counts as a research result, let alone a devastating one.
However, totally agreed that we can improve on this, and we're on the case.
No, of course not: it's part of the premise of a secure group messenger that the server can't control the groups. Which is what makes it so incredible that Matrix screwed this up so completely.
It's true Matrix today doesn't implement end-to-end auth for room control messages, but it's a bit of a stretch to say you can spy on conversations in this way.
In a room where participants are verified with each other, you'd be warned of this with a loud red shield with an exclamation mark in the room header. Additionally, if you're extra worried about a room, there's a "Never send encrypted messages to unverified sessions in this room from this session" setting you can flip in the Element clients.
That said, this can and will be improved in the future, by signing room state events and implementing TOFU (trust-on-first-use) for user identities, so that you can have a large amount of protection even before you perform manual verification with other users.
> In a room where participants are verified with each other, you'd be warned of this with a loud red shield with an exclamation mark in the room header.
Really? Are you sure this banner would appear in the case of a malicious device being added to an existing user's account in the room, rather than a malicious user being added?
> While the Matrix specification does not require a mitigation of this behaviour, when a user is added to a room, Element will display this as an event in the timeline. Thus, to users of Element this is detectable. However, such a detection requires careful manual membership list inspection from users and to participants, this event appears as a legitimate group membership event. In particular, in sufficiently big rooms such an event is likely to go unnoticed by users.
Looks like it would only be "likely to go unnoticed" by users who regularly disregard the massive, annoying warnings about unverified devices and don't enforce verification.
This link doesn't say anything. The paper explains the mitigations Matrix took and their limitations, and those limitations are obvious, and have been explained here as well. All you're doing is re-stating what the limited mitigations are, and then asserting without evidence that they're adequate. But they're obviously not adequate: this is a secure group messenger that will allow unauthorized people to decrypt messages to a group, and the mitigation is "you can notice that there are unauthorized people decrypting your messages if you watch very carefully".
You mean, allowed to decrypt unless following the discussed mitigations? I suspect you don't regularly use the client, which is fine, but these warnings and notifications are very annoying and essentially impossible to ignore. You are highly incentivised to resolve them. Obviously, I agree the exploit is bad. I just think the millions of users would appreciate practical discussion of the very practical mitigations instead of all the unnecessary doomsaying happening surrounding this.
The paper goes into detail on the errors and how they compare to the normal experience of using Element, but I think that discussion kind of dignifies the situation, doesn't it? We're talking about a warning that essentially says "an unauthorized person is now decrypting your messages". This isn't a reasonable thing to "warn" people about; in a secure messenger, your job is to prevent it from happening at all.
It's weird that we're even discussing this. In Matrix, group membership is key distribution, and it's controlled by the server! That's not OK!
It may be a serious question, but I don't see how it relates to the part you quoted. It still is a dumb idea, regardless of the answer to your question.
You'll be thrilled to know this pretty much isn't true anymore, even with Synapse. My instance is running on a cheap Hetzner VPS with a not very powerful CPU and is currently using about 700M RSS and not much CPU. And I'm in a lot of rooms, some of them quite large and high in traffic.
I'm also not even using Synapse workers at all, just a monolithic instance. Splitting the setup into workers would buy me an additional speedup if things got overly slow.
Realistically, when would you be in a hundred-person encrypted group? Mostly this is the case when you're a member of some kind of organization, and there are ideas for how to solve this case without pairwise verifying all participants (e.g. by delegating trust through a single trusted person such as the CEO, reducing the number of verifications necessary from N(N-1)/2 to N). Even without this, fully verified E2EE is still feasible and useful for smaller groups.
And even if you own the homeserver, you still want E2EE since you don't want the data to rest in plaintext server-side.
How important is it for these kinds of groups to be E2E encrypted, though? If you're sending a message to 100 people then you probably ought to consider it de facto public even if only the intended recipients receive it.
"You can fool some people sometimes, but you can't fool all the people all the time."
If you don't verify the keys, e2ee is basically meaningless against targeted surveillance. As long as some fraction of people verify keys, it is still effective against mass indiscriminate surveillance.
How is e2e better against mass indiscriminate surveillance than just normal TLS? The only time when e2e is meaningfully different than https is when the server you're talking to (i.e. your personal Matrix homeserver) is compromised. In that case, aren't you already in the realm of targeted surveillance?
Some homeservers are larger than others (e.g. matrix.org). They don't all need to be compromised to enable mass surveillance. It also depends on where TLS is terminated. If you're running a homeserver on AWS or something behind their load balancer, there's a difference.
Generally, I'd argue that E2EE provides defense in depth against "unknown unknowns" if server infrastructure is compromised by any means. Although I do acknowledge it adds one more level of complexity, and often another 3rd party dependency (presuming you're not going to roll your own crypto), so it's not a strict positive.
> The only time when e2e is meaningfully different than https is when the server you're talking to (i.e. your personal Matrix homeserver) is compromised.
Only if everyone's running their own personal homeserver, which seems pretty unlikely for regular people. You could've said the same thing about email (it's not meaningfully different unless your personal email server is compromised), but in reality the NSA ran mass surveillance on gmail and picked up a lot of data that way.
Serious question: if a surveillance organization had control of a certificate authority trusted by your client, would that allow them access to traffic whose security relied on a certificate from that authority?
By that logic, the vast majority of users of WhatsApp, Signal, and most other e2ee protocols/apps use them in a useless way, right? Most people I know who use these apps (even the security-conscious ones) have never verified a key.
Signal tells you outright when someone's key has changed, though. It's usually pretty trivial to establish that the original conversation is authentic when you're just talking with people you know in real life (where an impersonation attempt would likely fail for numerous reasons), and you can assume that device is still theirs until their key changes.
There is still a risk that someone is running a MITM attack. The initial conversation would be authentic, but the key belongs to someone else who is just forwarding the messages. Your communications would no longer be private and they could switch from passive eavesdropping to impersonation at any point without changing the key.
Most people rotate their Signal keys every time they rotate their phone hardware (which is inexplicably often for some people, apparently), because keys are just auto-accepted everywhere, so there is no real incentive to bother moving them. In larger groups there's always someone.
It isn't helped by the fact that the backup process is a bit obscure and doesn't work across operating systems. For the select few who care, verifying keys is effective against attackers who aren't Signal themselves, Google, or in control of the Play Store. Just make sure to keep an eye out for that "key changed" warning; it's easy to miss.
But this only solves the problem of magic constants expected in the input. If the check depends on dynamic properties of the input or happens deeper in the code after the input's already been through some transformations, it can't be solved like this. There are other techniques to help with this, though. One of the earlier attempts to solve such types of more complex checks is called laf-intel (https://github.com/AFLplusplus/AFLplusplus/blob/stable/instr...) and boils down to transforming a more complex check into a nested series of simpler checks. This makes it more probable that the fuzzer's random mutation will be able to solve the outer check and hence hit new coverage, enabling the fuzzer to detect the mutation as productive.
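To make that concrete, here's a hand-written sketch of the transformation (laf-intel actually does this automatically at the compiler IR level; the magic constant here is made up for illustration):

    #include <stdint.h>

    /* Original check: all-or-nothing. The fuzzer gets no coverage
       feedback until all four bytes of x match at once. */
    int check_magic(uint32_t x) {
        return x == 0xDEADBEEF;
    }

    /* Split form: each byte comparison is its own branch, so guessing
       any single byte correctly yields new coverage, guiding the
       fuzzer toward the full constant byte by byte. */
    int check_magic_split(uint32_t x) {
        if (((x >> 24) & 0xFF) != 0xDE) return 0;
        if (((x >> 16) & 0xFF) != 0xAD) return 0;
        if (((x >> 8)  & 0xFF) != 0xBE) return 0;
        if (((x >> 0)  & 0xFF) != 0xEF) return 0;
        return 1;
    }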
The problem of checksums is at times also solved by simply modifying the target so that the checksum check is neutralized and always succeeds - which is especially easy if you have access to the source code.
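In source, that can be as simple as guarding the check behind a fuzzing-only macro. A toy sketch (the checksum itself is a made-up stand-in; FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION is the macro conventionally defined for fuzzing-only builds in libFuzzer/AFL++ setups):

    #include <stddef.h>
    #include <stdint.h>

    /* Toy stand-in for the project's real checksum function. */
    static uint32_t toy_checksum(const uint8_t *buf, size_t len) {
        uint32_t sum = 0;
        for (size_t i = 0; i < len; i++)
            sum = sum * 31 + buf[i];
        return sum;
    }

    static int packet_checksum_ok(const uint8_t *buf, size_t len,
                                  uint32_t expected) {
    #ifdef FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION
        /* Fuzzing build: pretend the checksum always matches, so mutated
           inputs reach the parsing logic behind it instead of being
           rejected here. */
        (void)buf; (void)len; (void)expected;
        return 1;
    #else
        return toy_checksum(buf, len) == expected;
    #endif
    }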
As for the problem of fuzzing stateful things like the double ratchet, one way of tackling the problem is to think of the input to the fuzzer as not only the raw bytes that you'll be passing to the program you're fuzzing, but as a blueprint specifying which high-level operations you'll be performing on the input. Then you teach your fuzzer to be smarter and be able to perform a bunch of those operations.
So, let's say you take 512 bytes as the input to the fuzzer. You treat the first 256 bytes as the message to decode and the latter 256 bytes as the high-level cryptographic operations to perform on this message, each byte specifying one of those operations. So you could say a byte of value 1 represents the operation "ENCRYPT WITH KEY K1", 2 represents "ENCRYPT WITH KEY K2", 3 represents "DECRYPT WITH KEY K1", 4 represents "DECRYPT WITH KEY K2", 5 represents "PERFORM SHA2" and so on. Now you can feasibly end up with a sequence which will take a message encrypted with key K1, decrypt it, modify the message, then re-encrypt with key K2. Or, in the case of the double ratchet algorithm, have it perform multiple successive encryption steps to evolve the state of the ratchet and be able to fuzz more deeply.
Of course, the encoding needs to be rather dense for this to work well, so that ideally every low-level bit mutation the fuzzer makes on an input still encodes a valid sequence of high-level operations.
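Here's a minimal libFuzzer-style harness sketching this scheme. The lib_* functions are toy stand-ins for whatever library you'd actually be fuzzing (e.g. a double-ratchet API), and the modulo keeps the encoding dense, so every opcode byte maps to some valid operation:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define MSG_LEN 256

    /* Toy stand-ins for the library under test; a real harness would
       call into the actual implementation. */
    static void lib_crypt(uint8_t *msg, size_t len, uint8_t key) {
        for (size_t i = 0; i < len; i++)
            msg[i] ^= key;                   /* XOR "cipher" placeholder */
    }
    static void lib_hash(uint8_t *msg, size_t len) {
        for (size_t i = 0; i + 1 < len; i++)
            msg[i] ^= msg[i + 1];            /* mixing placeholder */
    }

    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
        if (size < 2 * MSG_LEN) return 0;

        uint8_t msg[MSG_LEN];
        memcpy(msg, data, MSG_LEN);          /* first half: the message */
        const uint8_t *ops = data + MSG_LEN; /* second half: opcodes    */

        for (size_t i = 0; i < MSG_LEN; i++) {
            switch (ops[i] % 6) {            /* every byte is a valid op */
                case 1: lib_crypt(msg, MSG_LEN, 0x11); break; /* ENCRYPT K1 */
                case 2: lib_crypt(msg, MSG_LEN, 0x22); break; /* ENCRYPT K2 */
                case 3: lib_crypt(msg, MSG_LEN, 0x11); break; /* DECRYPT K1 */
                case 4: lib_crypt(msg, MSG_LEN, 0x22); break; /* DECRYPT K2 */
                case 5: lib_hash(msg, MSG_LEN);        break; /* SHA2 stand-in */
                default: break;                               /* 0: no-op */
            }
        }
        return 0;
    }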
https://mitogen.networkgenomics.com/ansible_detailed.html