I think about trying to hide the metadata of who is communicating...
I wonder about a public stream of end-to-end encrypted messages.
Anyone can add a message to the stream.
Everyone reads all of the messages, and tries to decrypt all of them.
There are lots of variants to this, lots of ways to optimize it, probably lots of ways to implement it. But that's the core idea.
One variant is that what everyone downloads is just enough of a message metadata identifier to see if they're the intended recipient (something about Bloom Filters or PGP Signatures or something, I dunno). Then, if you are the intended recipient, you request the message contents itself. To obscure which messages were for you, you also download some very large number of other messages.
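The decoy-download variant can be sketched in a few lines of Python. This is a toy with invented names, not any real protocol: the sender attaches a short HMAC tag that only someone holding the shared key can recompute, and the recipient fetches its matching messages plus random decoys as cover traffic:

```python
import hashlib
import hmac
import random

def recipient_tag(shared_key: bytes, nonce: bytes) -> bytes:
    # 8-byte tag the sender attaches; only holders of shared_key can link it.
    return hmac.new(shared_key, b"tag|" + nonce, hashlib.sha256).digest()[:8]

def select_downloads(stream, shared_key, decoy_count=3):
    """stream is a list of (nonce, tag) pairs. Returns indices to fetch:
    the messages addressed to us, plus random decoys for cover."""
    mine = {i for i, (nonce, tag) in enumerate(stream)
            if hmac.compare_digest(tag, recipient_tag(shared_key, nonce))}
    others = [i for i in range(len(stream)) if i not in mine]
    decoys = random.SystemRandom().sample(others, min(decoy_count, len(others)))
    return sorted(mine | set(decoys))
```

An observer watching downloads sees real fetches and decoys mixed together; the tradeoff is bandwidth, which is exactly the "very large number of other messages" cost mentioned above.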
Something about microtransaction fees to pay for all of it. Maybe something about distributed ledger. Mumble, mumble. Maybe messages only live for X days or something.
I believe there is a Usenet newsgroup, somewhere under alt.binaries, that's effectively a numbers station: it's just GPG-encrypted (but not signed) blobs with no titles. Anyone can post, anyone can listen, everyone has to download everything to figure out which things they can personally decrypt.
Sadly, googling related keywords doesn't seem to pull up the name of the newsgroup. I believe I read about it during a discussion on a Tor onion-site forum, on "why people keep getting caught doing illegal things on Tor, and what real OPSEC looks like."
I want to like bitmessage, but the last few times I looked it wasn't ready for prime-time. It is (was?) text-only, and I can't communicate without memes.
> something about Bloom Filters or PGP Signatures or something, I dunno
Would this not defeat the purpose? Once an individual was tied to a unique piece of data, they'd be tied to all data in the stream.
I think such a system would definitely require guaranteed expiration (impossible?). Or some sort of rotating keys or the metadata piece would still be uniquely identifying.
I like this idea, as a concept, but I have no idea how it would actually work in real life with bad actors who can and would download all messages as they appear.
I wonder if there's some way to enforce expiration?
rickycook responded to this part with a proposal, but I have my own take on some of it:
For the "tied to a unique piece of data," that's why I want Bob to download lots of messages, hiding the fact that the person at 14.85.101.86 is the user with the recipient ID of ntULzh2AeEgPH9bKxrn3gUL. Bob should also be rotating his IDs all the time. Maybe they're single-use. And if Bob wants Alice to be able to send him messages, then he (out of band) has to give her a huge list of IDs he'll be watching for, in sequence. If they arrive out of sequence, he knows to be supremely suspicious. Also, yes, I recognize that key management is THE PROBLEM. And I'm essentially inventing dead drops. But in my defense, I'm trying to come up with a way to make it easy for a lot of people to use, thus making it easier for everyone to hide in plain sight.
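The single-use ID list could be derived rather than stored: a rough sketch (a hypothetical construction, not any deployed system) where one out-of-band seed yields the whole sequence, so Bob only has to hand Alice the seed, and both sides can generate the IDs in order:

```python
import hashlib

def id_chain(seed: bytes, count: int):
    """Derive `count` single-use recipient IDs from one out-of-band seed.
    Bob watches the public stream for these IDs in order; an ID showing up
    out of sequence is the "be supremely suspicious" signal."""
    ids, h = [], seed
    for _ in range(count):
        h = hashlib.sha256(b"next-id|" + h).digest()
        ids.append(h[:12].hex())  # short identifier; no visible link between IDs
    return ids
```

An outside observer sees unrelated-looking identifiers, while both parties can regenerate the exact sequence deterministically from the seed.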
For the "guaranteed expiration," I am actively assuming bad actors would download and archive all messages. I only propose a limited number of days to lessen the storage costs.
For the rotating keys, as I understand it, there's Perfect Forward Secrecy, but it's very chatty (think of it as "online"). There's also a weaker form of Perfect Forward Secrecy (think of it as "offline"), but the risk is that if the communication is broken at any moment, you can't recover from within that channel - meaning you'd need to go back to the person out of band and restore communication. I'm probably summarizing it very poorly, but my mental model for it is roughly: at the end of every message I send you, I tell you what new password I will use when I send you the next message. It's actually way smarter than that, as I understand it, but that gives me enough of a mental model to work with it as a lay person.
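That "tell you the next password" mental model is roughly a symmetric hash ratchet. A toy sketch (names invented): each step derives a one-time message key plus the next chain key, and deleting old chain keys is what keeps old messages unreadable if the current state later leaks:

```python
import hashlib

def ratchet_step(chain_key: bytes):
    """Derive (message_key, next_chain_key) from the current chain key.
    Deleting each old chain key after use is what makes already-sent
    messages unrecoverable even if the current key is later stolen."""
    message_key = hashlib.sha256(b"msg|" + chain_key).digest()
    next_chain_key = hashlib.sha256(b"chain|" + chain_key).digest()
    return message_key, next_chain_key
```

Real designs (e.g. Signal's double ratchet) also mix in fresh Diffie-Hellman values; a pure one-way chain like this can't resynchronize if the two sides ever fall out of step, which matches the "go back to the person out of band" failure mode described above.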
I wonder if there's a cryptographically secure way to build a known "stream" of one-use tokens (addresses, if you will) based on known "public key." For metadata security, you only hand that public key out to those you trust.
Another thought is the ability to attempt decryption of every message (as you already alluded to). Encode some well-known bytes at the beginning of every message and see if your key can decrypt and match them. I'm not certain that protects against metadata snooping, since I don't understand the cryptography enough to know whether that well-known text would always encrypt to the same ciphertext for a target key.
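The trial-decryption idea can be sketched with a toy XOR stream cipher (stdlib only, no authentication; a real design would use an AEAD, where a failed decryption itself means "not yours"). The key point for the snooping worry: a fresh random nonce per message means the well-known bytes never produce the same ciphertext twice, so an observer can't pattern-match on them:

```python
import hashlib
import os

MAGIC = b"MSGv1\x00"  # well-known plaintext prefix

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Toy counter-mode keystream built from SHA-256; illustration only.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def seal(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)  # fresh nonce: same plaintext, different ciphertext
    data = MAGIC + plaintext
    ks = _keystream(key, nonce, len(data))
    return nonce + bytes(a ^ b for a, b in zip(data, ks))

def try_open(key: bytes, blob: bytes):
    """Trial decryption: plaintext if the magic prefix checks out, else None."""
    nonce, ct = blob[:16], blob[16:]
    ks = _keystream(key, nonce, len(ct))
    data = bytes(a ^ b for a, b in zip(ct, ks))
    return data[len(MAGIC):] if data[:len(MAGIC)] == MAGIC else None
```

Everyone runs `try_open` on every message in the stream; the ones that yield the magic prefix are theirs.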
> wonder if there's a cryptographically secure way to build a known "stream" of one-use tokens (addresses, if you will) based on known "public key."
This is what Bitcoin's BIP-47[1] does, but you can hand that "public key" to anyone[2]. The communication layer in this case is the Bitcoin blockchain.
I recall there was a proposal to use "chaffing" as an alternative to encryption; partly motivated by cryptographic signatures not being export controlled.
The basic idea is to split a message into very small pieces, say individual bytes or even bits. Then sign each bit and, iirc, add a sequence number. You end up with a triple: sequence number, data, signature. Then you generate random triples - and distribute the lot: the recipient orders by sequence number and keeps the bits with valid signatures.
I'm not sure about how ordering was achieved, but it was a clever idea.
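As I remember it described, the scheme can be sketched like this (a toy, with MACs standing in for signatures): every real ("wheat") triple gets a companion chaff triple with the bit flipped and a random, invalid MAC; the recipient winnows by MAC validity and orders by sequence number:

```python
import hashlib
import hmac
import os
import random

def _mac(key: bytes, seq: int, bit: int) -> bytes:
    return hmac.new(key, f"{seq}|{bit}".encode(), hashlib.sha256).digest()[:8]

def chaff(key: bytes, bits):
    """For each real (seq, bit, mac) triple, also emit a chaff triple with
    the bit flipped and a random (invalid) MAC, then shuffle the lot."""
    packets = []
    for seq, bit in enumerate(bits):
        packets.append((seq, bit, _mac(key, seq, bit)))  # wheat
        packets.append((seq, 1 - bit, os.urandom(8)))    # chaff
    random.SystemRandom().shuffle(packets)
    return packets

def winnow(key: bytes, packets):
    """Keep triples whose MAC verifies; order by sequence number."""
    good = {seq: bit for seq, bit, tag in packets
            if hmac.compare_digest(tag, _mac(key, seq, bit))}
    return [good[seq] for seq in sorted(good)]
```

Nothing is encrypted, yet an eavesdropper without the MAC key can't tell wheat from chaff - which was the export-control angle.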
Ah, here's Wired's coverage of Ronald Rivest's idea in '98:
You basically create a hash table over public identities (bucket = pubkey % n) with an n chosen such that you get a bunch of unrelated people in each bucket but not so many that peers are overwhelmed by the message load on their bucket. Messages can be as simple as ES-SS-DH (basically Noise_X), same properties as e.g. PGP, i.e. no forward secrecy, no KCI resistance.
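The bucketing step is just a hash mod n. A minimal sketch (hashing the pubkey first rather than treating it as an integer, same idea):

```python
import hashlib

def bucket(pubkey: bytes, n: int) -> int:
    """Map a public identity to one of n buckets. Each peer subscribes to
    its own bucket, so the unrelated identities sharing it act as cover."""
    return int.from_bytes(hashlib.sha256(pubkey).digest(), "big") % n
```

Tuning n is the whole game: too small and every peer drowns in its bucket's traffic, too large and buckets thin out until they identify individuals.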
use hash(best block + sender id + recipient id + sequence number) then only the senders and receivers would be able to get the metadata... though recipient would need to “check” their whole contact list to pull the data... maybe this is where the bloom filter is?
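A sketch of that derivation (parameter names invented): both parties can recompute the tag, the best-block hash rotates every tag each epoch, and nobody else can link tags to identities:

```python
import hashlib

def rendezvous_tag(best_block_hash: bytes, sender_id: bytes,
                   recipient_id: bytes, seq: int) -> bytes:
    """Tag under which message #seq from sender to recipient is published.
    Only someone knowing both IDs can recompute it; the block hash
    re-randomizes all tags whenever a new block appears."""
    material = best_block_hash + sender_id + recipient_id + seq.to_bytes(8, "big")
    return hashlib.sha256(material).digest()[:16]
```

The receiver recomputes one tag per contact per expected sequence number and checks the stream for matches - which is exactly where a Bloom filter over the published tags could cut the scanning cost.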
ricochet[1] is my preferred option for situations that would require something like tor messenger (which is very few situations, but I digress). I like that the UX has a built-in threat model (e.g. "do you really want to click on this?")
TAILS users can't use it because tor-over-tor is weird (ricochet uses its own tor process). but it looks like it's getting close.[2]
I wish the page had screenshots. That's usually a good measure of how the software is maintained. Currently the page mentions that it's "experimental".
As far as I can see currently the only widely used, secure protocols are Matrix and XMPP with OMEMO.
Well, that placeholder conversation in the screenshot sure made me cringe. That being said, I look forward to it being integrated and working with Tails.
Hi, I don't speak for @special, but I think some context might be useful.
There is currently a bunch of (official|unofficial) work going on in the wider ecosystem - most of it related to factoring out the base library[1] into Go to make it more useful, as well as porting the main client to Go[2].
There are also a few experiments regarding running on mobile[3] and using ricochet and these other libraries for other things besides traditional IM[4].
Most of this wider work has been updated within the last couple of weeks, ricochet is still actively being worked on & used - just not in the most visible places right now.
Some of the features you have pointed to are sitting waiting for someone to come along and design/write them. For most there is a lot of UX work that needs to be done in order for these features (layered crypto, hidden service auth, multiple profiles etc.) to be secure & useable.
In many cases, adding them to the new Go code might be preferable in terms of ease of implementation. That is partly why work is being put into the libraries there, to make new changes to ricochet as easy and as useful as possible. These libraries are also in development right now, and it may take a little while before everything comes together.
So the best thing people can do right now if they want to move these things along is to contribute to the discussions on the feature threads if they have ideas about how those features could work & to submit code/pull requests to the various code bases.
The issue mentioned of reusing nodes doesn't seem like much of a concern considering how the onion routing works (even if the routes are all the same nodes).
This is interesting to ponder; it is clearly over my head, because I am not sure why this would cause an issue even if they reused some of the same nodes. In theory, if anything could be gained with Tor-over-Tor, why would that same information not be attainable in a single Tor instance? Why is information considered exposed if any node is reused?
>It is not clear if this is safe. It has never been discussed.
> Running Retroshare over Tor has a number of definite advantages: it does not require firewall management (Tor does it for you); you do not need a DHT to find your friends (Tor does it for you), and whatever code is tied to ensuring security of your IP information is not needed anymore.
For what it's worth, they've provided a version that can use Tor for a while now. What's new is that they provide a portable version that ONLY uses Tor. Pretty handy!
I guess the idea is that rather than a user@host identifier - which is looked up first via DNS, then at the protocol level on the host (eg look up the MX record for <host>; attempt RCPT TO <user> via SMTP), or via a DHT protocol - one could simply use a Tor node identifier as the user identifier. Which might make rotating keys hard - but at least it makes sense; onion "addresses" are unique and "secure".
Would like to read, but it looks like my work is blocking access to torproject.org. I had not realized that this sort of blocking was in place. Gauntlet thrown. My project for today is now to gain access to Torproject on my work machine. Bonus points for installing and running Tor without elevated privileges.
Reading the material on other pages is cheating. I'm trying to bypass the blockade altogether, disproving its utility. Similarly, the issue isn't slipping the tor traffic through the firewall but actually installing the software on a machine theoretically configured to prevent installation of software.
Send an email or XMPP message to gettor@torproject.org, or a Twitter DM to @get_tor, to receive links to download Tor via GitHub, Dropbox and Google Drive.
The download is a zip file that can be extracted and run anywhere without installation.
Include the word 'linux' or 'osx' in the body of the message to get a binary for those platforms.
Perhaps try running TAILS in a VM and connecting through bridges? Tor project's hidden service is at expyuzz4wqqyqhjn.onion
You certainly should not need elevated privileges for Tor Browser but I realize that accessing their download site in the first place is the issue. I'd post a magnet link but doubt that follows the rules here.
Matrix.org/Riot.im has all the encryption you could wish for, a modern, useful interface, and a federated model in which everyone can run their own server and talk to everyone else, just like email.
Matrix doesn’t protect metadata on the server currently - so you have to trust the admins that run the servers you are participating in. In the longer term we want to fix this (https://matrix.org/~matthew/2016-12-22%20Matrix%20Balancing%...) but we haven’t started on it yet.
As far as I remember you needed quite big servers if you wanted to "federate" with others, e.g. join a big chatroom, because Matrix will try to replicate the history and keep it in sync. Is it still the case?
Yes, if you want to participate in rooms with >10K users or >500 servers you need quite a large box (several GB of RAM) - although over the last few weeks we've had several massive algorithmic performance breakthroughs which should help this a lot. These are currently being tested and implemented in Synapse (the Python impl).
What metadata mitigations does matrix have (the point of Tor Messenger)? We already have a federated protocol called XMPP. Sometimes you are interested in not revealing who you are talking to.
That won't be enough for the average Tor Messenger user. Email's failings were the impetus behind both instant messaging and Tor. Users don't want/need federated models. Security aside, they want a convenient little app that will receive messages instantly while online but doesn't have to remain online 24/7.
I think the world of secure messaging is in an odd-way at the moment. It feels a bit like competing standards at this point[1]. I'm personally still using signal as the metadata shared by Wire is way too much imho.
Even more interestingly, the EFF has stopped trying to recommend the best one and instead is encouraging users to do their own research (even redirects old urls[2]).
Signal is great; except there's also tonnes of metadata.
If I'm trying to talk to someone anonymously, having to give them my phone number somewhat defeats that anonymity. Even having it installed is potentially dangerous; it scans your phone book and suggests other signal users (thereby outing you as a user in the first place).
I'll defend Signal here. This is all about your threat model:
My threat model includes:
- kids in my house
- Facebook selling my data to insurance companies
- future employers googling me
- etc
It does not include:
- NSA
- local police (in 2018)
I'll still try to give away as little as possible as while I trust local authorities now I've no reason to be sure I can trust them in 5, 10 or 20 years (see Turkey).
In my case Signal seems reasonable for some things and for now.
Personally I'm also annoying all crypto experts here by using Telegram for some communication and I might even use postcards for other communication (and there might even be communication channels I use but never talk about).
I believe you can deny the request for contacts list access. You can certainly add a Signal contact within the app by typing in their phone number.
I realise this doesn't solve your issue with having to share your phone number with them, or people with your number seeing you use Signal, but if you want anon communication XMPP+OTR+burner accounts is still the way to go, AFAIK.
I tried it a while back but never really liked it. Clunky UI, and the project in general seemed to have a lot of problems. The kickstarter project was basically a ripoff, and the project management is (or at least was) scattered and basically non-existent.
I have much better experience with Matrix[1]/Riot[2].
Matrix is an open protocol with end-to-end encryption (still beta IIRC) and is federated (like IRC) rather than fully distributed.
Matrix is now a stable project with funding, and Riot has a business plan to continue development.
I rather support KeyBase or Wire (Open Source back-end exists and I think the clients are open source too!) as an alternative. I'm leaning cleanly toward Wire, though everyone I've suggested KeyBase to enjoys it. I like the free storage of KeyBase... sue me.
Note: The interesting part is not the vulnerability itself, that is relatively minor. The interesting part is where the tox developers explain that they don't really understand their code.
I think the most interesting part is irungentoo's (only) response in that thread:
"You are fucked if you get your key stolen. There are so many more fun things you can do if you steal someones key that I simply didn't bother trying to handle that case because it would not provide any actual security."
This seems like a pretty flippant attitude in a thread where other collaborators have already built anticipation for your response. I suppose it's possible irungentoo noticed this flaw and explicitly thought "this is outside of the scope of our security model, so I'll just leave that in there by design," but it seems much more likely that they hadn't considered it at all and are simply rationalizing after the fact. After all, if you recognize the negative security implications of a specific design decision and choose not to address it you are not really writing "secure" software. I think "I didn't consider what might happen if a secret becomes compromised" is obviously a bad look for security software.