Briar Project (briarproject.org)
350 points by fishmaster on Aug 2, 2020 | 185 comments



In an authoritarian regime with large masses of human and technological resources determined to have control over its population, nothing is really secure. Sending a message that can't be read by a third party? You're suspect. Have an illegal app installed on your registered "report to big brother" phone? Expect an unfriendly visit from the big brother police. Don't have a "big brother" phone? There are various ways of sniffing you out. The bottom line is that while technology can help in the process, technology can't bring freedom from oppressive regimes. That is only achieved when a large enough, coordinated group of people is willing to change things even at great personal risk. Authoritarian regimes know this, and thus put a lot of effort into using fear of consequences to suppress any hint of such a development.


This is precisely why it's important to make these tools (protocols/applications) part of the core layer of how businesses operate consumer-facing services online; using them only marks you as a suspect if the ratio of "interesting" communication over these channels is high enough.

In an alternative timeline where ISPs were more strictly regulated and trusted and everything was cleartext HTTP, I'm certain that HTTPS/TLS would face pushback from regulators. There's no way it can be banned today, though. Similarly, you won't be marked as suspicious just for opening an encrypted TLS connection over port 443 to an arbitrary endpoint.

I don't think it's too late. There is a significant probability that today's centralized incumbents will Myspace at some point in the future. These federated, decentralized and secure solutions could be the next iteration after that.


Oppressors can (and I think some already do) subvert HTTPS by mandating installation of government-issued certs so they can do their MITM.
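
One hedged, minimal sketch of a client-side mitigation (certificate pinning): compare the certificate the network hands you against a fingerprint obtained out of band, so a government root CA installed in the OS trust store can't silently MITM you. The host and expected fingerprint below are placeholders, not real values:

  # Sketch: pin a known certificate fingerprint instead of trusting
  # whatever root CAs are installed on the device. Host and fingerprint
  # below are placeholders.
  import hashlib, socket, ssl

  HOST = "example.org"                               # placeholder endpoint
  EXPECTED = "replace-with-known-sha256-hex-digest"  # obtained out of band

  def leaf_fingerprint(host, port=443):
      # Fetch the leaf certificate without chain validation, then hash it.
      ctx = ssl.create_default_context()
      ctx.check_hostname = False
      ctx.verify_mode = ssl.CERT_NONE
      with socket.create_connection((host, port), timeout=10) as sock:
          with ctx.wrap_socket(sock, server_hostname=host) as tls:
              der = tls.getpeercert(binary_form=True)
      return hashlib.sha256(der).hexdigest()

  if leaf_fingerprint(HOST) != EXPECTED:
      print("WARNING: fingerprint mismatch - possible interception")
  else:
      print("pinned certificate matches")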


That is an irrelevant detail. The comment you are responding to isn’t saying that https is foolproof. They are saying that it would help people to be more free of digital control if privacy respecting technology is so common that it isn’t suspicious when you use it.


I agree that it helps (though I'm probably more pessimistic about how much it helps). I wanted to counter the assertion that "There's no way it can be banned today, though" (which I should have quoted). Even a disconcertingly non-trivial number of western lawmakers make regular noises about requiring all encryption to be backdoored.


My sibling response was apparently already out of touch when I wrote it.

https://www.zdnet.com/article/china-is-now-blocking-all-encr...


I know there have been attempts (was it Iran and Kazakhstan that were in the news last year?), but is anyone aware of this actually being done in practice today? My understanding is that they were forced to roll back for practical reasons (which highlights my point).


If only Google and Apple would make a fully end-to-end encrypted chat platform to take the place of SMS, fully federated and not controlled by a single entity, something that the likes of Signal and other chat apps could support / join in on. When you turn crypto into something the masses use seamlessly, it gets a little more complicated to figure out who the suspects are. Also, default to not syncing to the cloud, and explain why syncing to the cloud could be compromised.


From what I can tell, this is the goal of matrix.org - though full federation and identity portability are not there yet.


If spying on you is their business model, why would they build an app to prevent you from being spied on?


Spying on you is not Apple's business model.


It is, alongside limiting what you can do with their leased equipment that people think they bought.


This made me jump. You make one excellent point, with appropriately shocking language:

When I can’t do what I want with my phone, I may as well be leasing it. Hmmm.

But I don’t believe Apple’s business model is to spy on me.


> But I don’t believe Apple’s business model is to spy on me.

"Location-Based Apple Ads: Your iPhone will send your location to Apple in order to provide you with geographically relevant ads on Apple News and in the App Store."

More: https://support.apple.com/en-us/HT207056

So not exactly "business model", but does it matter?


> But I don’t believe Apple’s business model is to spy on me.

Apple's business is not spying on you, but Apple's business is much easier to do if they do spy on you. Meaning they do it anyway, don't worry.


Almost every app in the App Store, per Apple's guidelines, has tons of spyware in it. Apple asserts that you agreed to this experience when you accepted the TOS of the App Store.

It's impossible to use an iPhone with any popular apps and not be constantly spied on. Insofar as Apple's business model is to make the (full of spyware) App Store successful, it's Apple's business model to spy on you.


Oh indeed it is. I spent years reading apple reports and my conclusion was that they want the data for themselves so they can sell it. Devices don't make much profit when you factor in how much is spent buying up almost all old devices that hit the market.


How much does Apple spend buying up old devices?

My impression was that Apple did trade ins to acquire stock to refurbish and resell in India, and to incentivize users to upgrade.

I would be pretty surprised to learn they were losing money on trade ins, and I've never heard of any kind of direct buy back.


> I spent years reading apple reports

Which ones?


Do you have any evidence of Apple selling user data?


It's right in Apple's privacy policy, see Disclosure to Third Parties section: https://www.apple.com/legal/privacy/en-ww/

They obviously sell user data in aggregate - not at a personal level, but then which of the big tech companies does sell personal data? (Maybe FB / Cambridge Analytica?)

Also, Apple has Google as the default search engine which Google pays billions for. Is that selling your personal data?


You mean the disclosure to third parties section that explicitly says "Apple does not sell personal information"?

I can't see anything in that section that says that they sell information to third parties, personally identifiable or aggregate (I would consider the latter to be "personal" data as well fwiw). Is there a specific sentence you're thinking of?

It seems to be talking about the necessary sharing of data that happens when Apple contracts with third party services to run their own business. E.g. when they ship you a product they need to provide your address to the courier company. Or when they pay an advertising company to run ads for Apple products targeting certain markets/their own existing customers (not the same thing as selling personal data to an advertiser so that they can run ads for other products using said data - that would be selling personal data)

As far as I can tell you're either using a definition of "sell" that is different to mine, or you're claiming Apple is using weaselly language to make it sound like they don't when they do (which is not unheard of of course). But you haven't provided enough information for me to really know what you're talking about - which is it, and why?

Also no, making the default search engine google is not selling personal data.


They target ads to your interests, default on:

"Ads that are delivered by Apple’s advertising platform may appear in Apple News and in the App Store. If you do not wish to receive ads targeted to your interests from Apple's advertising platform, you can choose to enable Limit Ad Tracking, which will opt your Apple ID out of receiving such ads regardless of what device you are using. If you enable Limit Ad Tracking on your mobile device, third-party apps cannot use the Advertising Identifier, a non-personal device identifier, to serve you targeted ads. You may still see ads in the App Store or News based on context like your search query or the channel you are reading. In third-party apps, you may see ads based on other information."


A third party being able to list ads on Apple's ad platform that target some collection of desired user data is not the same thing as said third party obtaining user data.

Third parties are buying ad listings, not user data. They have no way to extract user data from the ad platform, unless there's some kind of data leak.

If you think Apple is harvesting data off of bought back phones to improve their ad targeting that would also be a scandal (that I would expect some evidence of - otherwise it's just baseless speculation), but referring to it as "selling user data" is just obscuring what you're actually trying to communicate.


Unless those ads contain anything that is loaded from the ad-creating company, which is then loaded in the user's browser. Of course, no platform would allow requests to a third party …


None of the big tech companies sell user data.


Idk if you consider Twitter a big tech company, but they do: https://developer.twitter.com/en/pricing. I don't believe Google or FB does though.


That isn't selling user data. All that is on the site itself, they are just making it easier to access. I'm talking about their click streams and other things that are invisible to the public. That data no one sells because it is how they target their ads.


Twitter is literally selling data (tweets) that users generate via API.


The data is public and available to everyone. They are selling an API to it.


Funny that you should mention Signal as example for joining a federated service when they have actively and vocally moved away from federation.

https://signal.org/blog/the-ecosystem-is-moving/


Anyone who reads the above comment and thinks 'Oh well, it is a risk the user is willing to take by using these apps in an authoritarian regime' should think again.

Targeting users who rely upon secure apps is becoming common in flawed democracies as well, and more countries are eager to join that list. Several people, including minors, were arrested in Kashmir when police found VPN apps on their phones during routine checks[0]. The government's logic being: 'Why use a VPN if you are not a terrorist?'

At the same time, journalists and activists are heavily dependent upon secure apps like this to make their voices heard outside. That's all the more reason for those of us lucky enough not to have the gestapo knocking on our doors for using a VPN to watch PornHub to make the usage of such secure apps (messaging, email, VPN etc.) very common.

[0]https://scroll.in/article/954711/in-kashmir-a-spree-of-arres...


I've pondered the merits of someone spreading a virus that just sends (small, but relatively constant and random) amounts of encrypted data (maybe to other infected devices, and various other endpoints). Spread it widely enough and everyone gets to communicate privately with plausible deniability.

Of course it doesn't even need to be real data, random gibberish would work too.
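
Setting the delivery question aside, the cover traffic itself would be trivial to generate — roughly a sketch like this, where the endpoints, blob size and timing are made-up placeholders:

  # Sketch: emit fixed-size blobs of random bytes at jittered intervals.
  # To a passive observer they are indistinguishable from ciphertext of
  # the same size. Endpoints, blob size and timing are placeholders.
  import os, random, socket, time

  ENDPOINTS = [("203.0.113.10", 9999), ("203.0.113.11", 9999)]
  BLOB_SIZE = 512

  def send_cover_blob():
      host, port = random.choice(ENDPOINTS)
      blob = os.urandom(BLOB_SIZE)          # random bytes look like ciphertext
      with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
          s.sendto(blob, (host, port))

  while True:
      send_cover_blob()
      time.sleep(random.uniform(5, 60))     # jittered send interval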


Perhaps. Back in the 80's, folks on Usenet would add words like "nuclear", "bomb", "spy", "communist" to their email footers and posting signatures, in hopes of overloading the (suspected/expected) NSA monitoring of traffic. I'm going to guess that sophisticated filters dealt effectively with such things back then, and even more sophisticated filters would deal with your random encrypted bits too.


Curious anecdote: some underground Japanese P2P networks have files whose contents are filled with references to Tiananmen, Tibet, the Uyghurs, etc., probably to attract attention from the CCP and deter Chinese users from downloading them.


Looks like this app uses Tor for long-range communication.

Although it's anonymous, your connection to the network is detectable and would trigger red flags in government monitoring software...


Yup. In communist Poland even having a typewriter was a big no-no unless you were a registered writer (because why would you have one if not to make leaflets, and if you're making leaflets you are clearly anti-government).

If you're getting mail from abroad you're already on a list. If you're getting encrypted mail - you'll probably disappear.


Authoritarian regimes are often far less technologically advanced than free countries. They will often import technology from free countries and be constrained by whatever freedom respecting decisions they've made.


China is an authoritarian regime that is as technologically advanced as the US and Western Europe, and they will gladly export freedom restricting technology.


This is like saying that the weapons advanced countries sell to less advanced countries are more "life-respecting".


Imagine a world where a gun is sold by the US to a poor war-torn nation. When the trigger is pulled, it does facial recognition to figure out who it is aimed at, and if it detects someone friendly to US interests, it will refuse to hit them.


> Imagine a world where a gun is sold by the US to a poor war-torn nation

There is no need to imagine it.


I've been looking for secure messengers during the last few weeks. I use WhatsApp, Signal, and Telegram. Telegram isn't very secure, WhatsApp is owned by Facebook and even Signal - while very secure - requires a cell phone number... Briar seems great in this regard but isn't available on iPhone and has no support for images, calls, voice messages, etc. Apparently they're going to support images and a desktop client, though.

In short, I just don't know what to use.

Edit: Session looks great but is not fully released yet: https://getsession.org/ This might be what I'm looking for in the future.


FWIW, I think you can just get a Google Voice or other short-term burner number to sign up for Signal, and then never worry about it again. Signal is an order of magnitude more trustworthy than the other messaging players and has built out a good base of features at this point. (Telegram specifically is a joke... proprietary closed-source encryption is a recipe for disaster.)

I would strongly advise against picking a tiny newcomer without some serious research beforehand... They're not battle-hardened, so they will typically be less reliable than any of the existing larger players.


I don't think "never worry about it again" is quite correct:

  What if someone registers with my old number? 
  If someone were to register with your old number on a new phone, then they will have an empty message history. Your contacts will also be made aware of a safety number change if they start messaging with the old number.

https://support.signal.org/hc/en-us/articles/360007062012-Ne...


I've had no luck getting people to use Signal. My family thinks I'm a freak because I try to make them understand that in socialist Denmark your communication is not secure from the government.

The app is easy to use, but people are not using it. They use SMS and FB etc. to message friends.


Have you tried Jabber with the Conversations or Pix-Art Android app? It has end-to-end encryption (OMEMO), support for sharing media, voice and video calling, a multitude of desktop clients, a (less secure) web client, etc.

Most importantly, you can host the server yourself without cutting yourself off from the network.


Signal is working on getting rid of the cell # requirement, but it'll take a while.


It can't happen soon enough. I installed Signal a few years ago, and the first thing it did was notify a bunch of people I had in my contacts, that I was now using Signal...

...Including the unstable frenemy-guy who was only in my contacts so I'd recognize the number if he called and I'd know not to answer...

....who immediately PM'd me on Signal to push his latest delusion and make sure I didn't disagree.

Great, just great. For a privacy-focused product, that's a pretty colossal fuckup.


I agree. I found that behaviour disgusting, and that's why I avoid Signal.

Anyone who says Signal has good privacy is just wrong. When Signal say they have good privacy, that's false advertising.

Of course, Telegram does the same thing. I have Telegram installed, and I use it, but it wasn't really a choice. I needed to access a forum which is only on Telegram :/ Unfortunately that meant I had no choice but to have people who know me notified that I installed it. Someone messaged me about 30 seconds after I installed it to say hi. I'm not comfortable about this, but as I say, I didn't really have a choice.

Then there's WhatsApp. I was surprised to read an article which recommended that, of the three, WhatsApp is probably the most privacy-respecting of the apps. I have WhatsApp installed after a long period of avoiding it, because, of all people, recruiters started expecting it. Hmph. I still refuse to grant it access to my contact list, because I'm not handing that over to Facebook. Which means every message is associated with a raw phone number only, and I have to guess who it is from the content :-)

You can tell WhatsApp doesn't reveal so much by the number of people who have sent WhatsApp messages without being told that the recipient doesn't have WhatsApp installed and won't see the message. I've known a few people this happened to. One installed WhatsApp and found they'd been sent a message a year earlier, from someone they thought wasn't talking to them. They were talking, but the sender had assumed of course the recipient would have WhatsApp.


You might want to read WhatsApp's terms of service, in particular the part where they claim copyright, and even the right to make derivative works, on anything you transmit over their platform.


Oh don't get me wrong. I don't want to use WhatsApp. I'm only on it at all because I need to speak with people who are using it without antagonising them. I still haven't granted it access to my contacts DB.

All three of Signal, Telegram and WhatsApp make me a bit uneasy about using them, for various reasons. None of them are what you'd call "user's privacy first".

As it is, I'm currently having occasional confidential chats (at someone else's request) on Telegram secret chats, and at least that probably is what it says it is.

I don't think any of these three apps are awful.

They are pretty slick, and useful.

I don't feel too bad actually using them, any more than using, say, MSN, Yahoo or Freenode.

They just don't meet the advertised bar of respecting individual privacy first. And I find that really misleading in the case of Signal and Telegram in particular, which emphasise the privacy angle and then, without letting you know, spray everyone you ever interacted with outside Signal with a notification, including professional contacts, customer service agents, people you don't like, spammers, etc.


That's because most privacy design is done by nerds who think "CIA/NSA/GCHQ/Mossad/FSB etc. might secretly whisk me away to Gitmo for my thoughtcrimes" is a far more pressing problem in people's lives than "my partner is a coercive, controlling domestic abuser", or "my employer might fire me for trying to set up a union"...


Signal can't fix social problems. Sounds like you probably ought to cut the toxic person out of your life. Or if that's too confrontational, there's always the old, "Sorry, I just got this number, don't know who you're talking about."


I understand your frustration, but Signal didn't notify your contacts because you installed it. It notified the other person because he had your phone number.

Your local Signal installation regularly checks whether any of your contacts (with the phone numbers you have for them) are registered with the Signal servers - and then lets you know, so that you can text that contact securely.


The technical details do not really matter.

Many people might have my phone number, possibly from long ago. But the number itself is pretty safe -- there is no way to tell if this phone is in use or not.

Signal breaks that assumption -- it immediately tells every other user that this number is alive, valid, and can be contacted right now.

This is a terrible idea to do by default, especially if one cannot disable it.


But in this case they do matter. When you give someone your phone number, you give them the possibility to contact you. And then it should be done securely, if possible.

A check for "aliveness" of a number can be done without Signal. Just call and see if it rings. You get the same information. Your Signal profile name and picture, however, will only be shared when you accept it.


Although WhatsApp lacks a notification, it's just as easy to check a phone number for 'aliveness' there. When you select 'new conversation' you get a list of your contacts with a name (from your address book), a picture and a tagline.

This is all based on your address book, so it doesn't matter whether the other party knows your number or not.


doesn't make a difference. it shouldn't do that without user confirmation


It still leaks all the connection metadata. It's very easy to correlate TCP connections to and from the server.


How exactly does Element leak metadata?


I'm surprised nobody mentioned Jami https://jami.net/


Self hosted matrix? It has e2e encryption.


Even better, I'm pinning my hopes on peer-to-peer matrix via dendrite.


It still leaks all the connection metadata. It's very easy to correlate TCP connections to and from the server.


Also, very soon Signal won't require a phone number, as the PIN feature (which is present now for storage encryption) will be extended to serve as an account ID.


Have you looked at 'threema'? I recently installed it and I'm actually pleasantly surprised.

However, all of these bloody messengers mean that my contacts list is spread across a multitude of programs: we need the iOS/Android equivalent of pidgin.


Threema is closed source which is something I don't really like when it comes to security as there are no independent audits.

Edit: Generally, Threema seems interesting feature-wise, but I think the price (4€) will prevent my contacts from using it...


Interesting thought experiment though: if you're not paying for the development and hosting, who is?


Threema claims to have no identifiable information, yet the account IDs are just unsalted hashes of your telephone number.
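
For anyone wondering why that matters: the space of valid phone numbers is tiny by brute-force standards, so an unsalted hash is about as private as the number itself. A rough sketch (SHA-256 and the number range are assumptions for illustration, not Threema's actual scheme):

  # Sketch: "anonymizing" phone numbers with an unsalted hash is reversible
  # by enumeration. SHA-256 and the number range are illustrative
  # assumptions, not Threema's actual hashing scheme.
  import hashlib

  def hash_number(number):
      return hashlib.sha256(number.encode()).hexdigest()

  def recover(target_hash, prefix="+44770090", digits=4):
      # Enumerate every number under the prefix and compare hashes.
      for i in range(10 ** digits):
          candidate = "%s%0*d" % (prefix, digits, i)
          if hash_number(candidate) == target_hash:
              return candidate
      return None

  leaked = hash_number("+447700900123")   # what a server might store
  print(recover(leaked))                  # prints the original number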


Pidgin is exactly what we need... and each messenger needs to be a pluggable module. Then we don't need to hound friends and family to switch messenger apps; they just use Pidgin.


But why would something like Facebook open up their walled garden? Does it even count as a walled garden when you have 2 billion people on the platform?


Obviously yes. Does any wall count if there are more entities on one side of it than the other?


The walls, I think, refer to transparency?

So it applies regardless of user count.


What did Riot, the Matrix client, get renamed to?


"Element". had to look it up, forgotten it already. :/


It is a really forgettable name, to be fair.


> Telegram isn't very secure

Can someone elaborate?


There are different aspects to this. The first and the easily verifiable one is that they default to client-server-client connections, not end-to-end encryption. If you want to have an end-to-end encrypted channel, you have to explicitly open a "secret chat". However, this removes the convenience of cross device syncing.

The second one is more difficult to evaluate. If you use the above mentioned "secret chat" feature, Telegram employs their own closed-source encryption scheme. That's usually an indicator to be cautious from the get-go. Since it's closed source, it can't really be trusted.

See Wikipedia for a timeline regarding its security: https://en.wikipedia.org/wiki/Telegram_(software)#Security


Telegram clients are open source. I downloaded and built the macOS version recently - it was very straightforward.

https://telegram.org/apps#source-code

Encryption for secret chats doesn't involve server, so technically it can be analyzed.

It's a pity Telegram decided to roll their own encryption scheme. I use Telegram a lot for daily business because it's the superior desktop messenger product. I would gladly participate if somebody started a crowd-funding campaign for a security and encryption audit of Telegram.


> Encryption for secret chats doesn't involve server, so technically it can be analyzed.

Except if you are on desktop, you have no secret chats at all. And "desktop" includes GNU/Linux phones.


To add to my sibling's comment: Contact lists also get synced with the servers (in contrast to e.g. how Signal handles this).


The project is no longer maintained, but Oversec had a cool concept of hiding your own encrypted text as non-printing chars in Android text fields in any other app, e.g. WhatsApp.

https://github.com/oversecio/oversec


I'm a big fan of Keybase https://keybase.io/. The team was acquired by Zoom, but there's a new website and development is still going on.


"Wire" is the only one, AFAIK, that ticks all those boxes.


Care to elaborate on how Telegram is insecure? Their fat bug bounty program has yielded no security issues for years, and I'm unaware of any issues with MTProto.


Their server backend code is closed source and they're running custom crypto. Also groups and desktop clients don't support end-to-end encryption.


iMessage? It's native on your phone/macOS and end-to-end encrypted. I never really saw a need for yet another messaging service. I think I'd trust Apple to do the right thing over any of these.


Great, unless you or someone you want to communicate with doesn't use an Apple device. Which covers the majority of the population.

I've also had terrible reliability issues with iMessage in the past: messages not delivering, not showing up, errors sending, showing up on one device but not another, etc. It was a mess that caused a lot of confusion.


Guess it depends on your circles. For my group, everyone has an iPhone or a Mac. For the outliers, everyone has cell phones, and iMessage supports SMS.


>iMessage supports SMS.

Which isn't end-to-end encrypted, which defeats the whole point in terms of this conversation.


Thank you for identifying that.


The Briar Project (and other projects like Signal and Tor) are funded by the Open Technology Fund.

The OTF is being killed by the current US government, and this will affect all of these projects!

https://en.wikipedia.org/wiki/Open_Technology_Fund

https://saveinternetfreedom.tech/

https://saveinternetfreedom.tech/updates/


Now that I think about it, why aren't most messaging apps peer-to-peer? Shouldn't that be the standard? I mean, it's literally the point of messaging: sending from one person to another.


Because:

a) Often the intended recipient isn't online when the message is sent, and it may happen that there is never a time when both sender and recipient are online simultaneously (e.g. sender's device only turns on to send the occasional message, receiver's device is usually off but turns on occasionally to check if there are messages)

b) Often one or both devices can only connect to, but can't be connected to (because behind a NAT, a mobile device, firewall, etc.)

c) Communication is often desired between accounts rather than devices -- I may want to send/receive on my work computer, home computer, phone, and watch
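
To make (a) concrete, here is a toy sketch of the store-and-forward pattern most messengers use instead: a relay queues opaque ciphertext for offline recipients and flushes it on their next connection. The class and names are invented for illustration, not any real app's design:

  # Toy store-and-forward relay: the server queues (opaque) ciphertext for
  # offline recipients and delivers it when they next come online.
  from collections import defaultdict

  class Relay:
      def __init__(self):
          self.mailboxes = defaultdict(list)   # recipient_id -> queued ciphertexts
          self.online = {}                     # recipient_id -> delivery callback

      def send(self, recipient_id, ciphertext):
          if recipient_id in self.online:
              self.online[recipient_id](ciphertext)            # deliver now
          else:
              self.mailboxes[recipient_id].append(ciphertext)  # hold for later

      def connect(self, recipient_id, deliver):
          # Register the device and flush anything queued while it was offline.
          self.online[recipient_id] = deliver
          for ciphertext in self.mailboxes.pop(recipient_id, []):
              deliver(ciphertext)

      def disconnect(self, recipient_id):
          self.online.pop(recipient_id, None)

  relay = Relay()
  relay.send("bob", b"<ciphertext 1>")                 # bob offline, queued
  relay.connect("bob", lambda c: print("bob got", c))  # queued message delivered
  relay.send("bob", b"<ciphertext 2>")                 # bob online, delivered now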


In before someone suggests storing encrypted messages in a public blockchain. This is a really good overview of the challenges with these messaging systems. Store and forward has been our MO since email and Usenet were invented because always on always connected devices that aren’t restricted by some network obstacle are not really feasible or even desirable most of the time. I do wonder what alternatives we have to something like a trusted online service to store and forward messages or a public blockchain. Some kind of crypto based system where nobody but the owner of a private key can even locate the message? A decentralized system where multiple copies of multiple fragments of your message are stored so nobody can piece together your (encrypted) message without controlling the majority of the nodes?


It seems like this might be the eventual intent of the Scuttlebutt protocol, and so far that's also the furthest along in approaching such a solution.


Isn’t matrix basically all that’s needed? It even has out-of-band verification of your friend’s keys.


If only the ecosystem had been built to use E2EE by default, always. They fucked up the design by allowing bridges and bots, left E2EE for later, and now they're in a vicious circle of downgrade attacks until all major clients switch to E2EE with no insecure fallback option.


there aren't downgrade attacks. we turned on E2EE by default in May for private rooms, and there's no negotiation involved. if you're on a client that supports E2EE (i.e. almost all major ones, now) and you try to DM someone, they simply won't be able to read you unless they support E2EE. i.e. they can't downgrade the convo.


That's good. The last time I had a look at Matrix clients it was a mess. IIUC, E2EE isn't enabled by default in the old Riot client; only RiotX and Riot web have it.

What happens if someone with the old Riot client creates a room and someone with e.g. RiotX joins it: will it force E2EE on? Or will it fall back to non-E2EE messaging?


The creator and admins of the room pick the encryption preferences, IIUC: if you have a client that doesn't support E2EE, you might be able to create an encrypted room (?) but it would be pretty useless. The clients all clearly mark the encryption status of the room you're in.


So if an ignorant/malicious user creates a room without E2EE and doesn't care to enable it even when requested, all users are forced to converse in effectively plaintext, and the solution is "clients tell users it's not E2EE".

IMO it should be the case that it's always E2EE, no other options. Until that's the case I think Matrix ecosystem isn't keeping up with centralized solutions like Signal.


E2EE is really annoying, in lots of ways: if the users in the room want encryption, I’d rather they just create a new room.


Yeah, I really like the ability to have an unencrypted channel: easy bots and bridges are one of the main advantages for me of matrix vs. IRC.

My only big issue is that the iOS client doesn’t support multiple simultaneous identities.


I should have said “easy bots and bridges are one of the main advantages for me of matrix as a successor to IRC”


I'm really intrigued by the Scuttlebutt protocol, but in practice it's super hard to get plugged into the community because, as a new user, nobody follows you. I haven't figured out how to just engage people in conversation -- I reply to their posts but nobody sees my replies.

If there are other applications that can run over the protocol, I'm interested in learning about them.


Yeah, that behaviour's designed to counter spam and unwanted bots, but it does mean newbies need to be invited into a community. Meanwhile it's lonely talking into the void.

You could connect to a pub — an automatically-friendly bot account; see a list at https://github.com/ssbc/ssb-server/wiki/Pub-Servers — scuttle.space seems to be active right now.

If you're happy posting your SSB ID publicly, I'll follow you, and that may help. Or you can use the #new-people tag if you want to introduce/announce yourself :)


The dealbreaker with Scuttlebutt for me was the inability to delete messages.


Do you have an opinion on a Skype-like centralised coordinator that then sets up P2P connections?


This really isn’t my field, I am more of a full stack developer who mostly works on web apps with a heavy interest in networking. So definitely not an authority on the subject.

I think that’s basically the sort of system most places employ. It’s nice because it’s easy to set up, but you really need to trust Skype. Say they actually try to use end-to-end encryption. How does that work? Well, you could say “I am Alice and I want to establish a connection to Bob. Here is my public key he can use to send me messages, and I’d like his public key so I can send messages to him.” That of course would need to happen in addition to establishing a network connection. So now, if Skype is a good actor, they will pass my public key to Bob, get his and send it to me, as well as coordinate us establishing a direct connection.

But what if Skype is a bad actor? Well, they could take my public key and send Bob one of their own. Then they could also send me one of their own. Now they can listen in on my conversation. They can also, in a similar fashion, make it seem like I’m connected directly to Bob’s networked device but really just relay the connection through their servers. Neither Bob nor I would have any way of knowing that without having exchanged public keys beforehand and verified them. So this system is basically insecure against Skype wanting to listen to my conversations, or being compelled to do so by a state actor.
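
A tiny sketch of the usual countermeasure hinted at above — comparing key fingerprints out of band — with random byte strings standing in for real public keys; none of this is Skype's actual protocol:

  # Sketch: why out-of-band fingerprint verification defeats a malicious
  # coordinator. Keys here are just random byte strings standing in for
  # real public keys; the point is only the comparison logic.
  import hashlib
  import os

  def fingerprint(pubkey):
      # Short human-comparable digest, e.g. read aloud over a phone call.
      return hashlib.sha256(pubkey).hexdigest()[:16]

  alice_pub = os.urandom(32)        # Alice's real public key
  mallory_pub = os.urandom(32)      # coordinator's substituted key

  # The coordinator relays *something* to Bob, claiming it is Alice's key.
  relayed_to_bob = mallory_pub      # an honest coordinator would relay alice_pub

  # Alice reads her fingerprint to Bob over a separate channel (in person,
  # phone call, QR code). Bob compares it with what the coordinator gave him.
  if fingerprint(relayed_to_bob) != fingerprint(alice_pub):
      print("Fingerprint mismatch: the coordinator substituted the key (MITM).")
  else:
      print("Fingerprints match: Bob really has Alice's key.")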


Is IPv6 likely to be a practical solution to the router/NAT issue? Are routers assigning globally-routable IPs to their clients, is that already a thing?


IPv6 solves one of the reasons to use NAT. One more intractable reason is that many players (cellphone network operators, corporate networks) consider it a positive thing that individual devices are not globally accessible.


For cellphone networks it's not just a provider-side incentive: inbound traffic will drain the battery and you'd have no good way to stop it. But this only requires some sort of spam filter in front of the cellular link, for example a friends-based system where you keep connections to some friends' online nodes when you lock your phone, and both sides just exchange IP+port info and send a UDP packet to each other's IP+port from their own IP+port, punching through their firewalls. Theoretically you might even actively control your firewall, allowing you to close it off while maintaining some permanent open entries for your friends to reach you with no indirection from their usual network(s). A provider could make money by selling (quota for) provider-side, user-controlled firewall entries, and your OS could hand that quota out to apps.

It's feasible once you reach the point where it's worth the effort of implementing.
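
Roughly, the punching step described above could look like this sketch, assuming both sides have already learned each other's public IP:port through some rendezvous (the addresses and port are placeholders):

  # Sketch of UDP "hole punching": both peers already know each other's
  # public IP:port (learned via a rendezvous service not shown here) and
  # simply start sending datagrams to each other, which opens a mapping in
  # each side's NAT/firewall for the return traffic.
  import socket
  import time

  LOCAL_PORT = 40000
  PEER_ADDR = ("203.0.113.7", 40000)     # peer's public IP:port (placeholder)

  sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  sock.bind(("0.0.0.0", LOCAL_PORT))
  sock.settimeout(1.0)

  for _ in range(10):
      sock.sendto(b"punch", PEER_ADDR)   # outbound packet opens our NAT mapping
      try:
          data, addr = sock.recvfrom(1024)
          print("hole punched, got", data, "from", addr)
          break
      except socket.timeout:
          time.sleep(1)                  # peer may not have started punching yet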


> Are routers assigning globally-routable IPs to their clients, is that already a thing?

Yes, but a sensible router will also firewall all incoming connections unless the port is explicitly opened.


The NATs will be gone, but they'd be just replaced by firewalls with "default deny incoming" policies for most users. Some users might change this, but there would be enough people using defaults that one could not rely on p2p connectivity.

(The current networks are often not set up to handle malicious incoming internet traffic, and new protocol is not going to change this)


> why aren't most messaging apps peer to peer [...] it's literally the point of messages: sending from one person to another.

Messaging apps are more like postal services — "please deliver this message to $person" — you're describing driving across town to drop something in a mailbox directly. A peer-to-peer messaging system wouldn't have many benefits over an E2E-encrypted one (in a centralized E2E-encrypted service you already enjoy technical guarantees that the courier can't peek inside the metaphorical envelope), but would have several usability drawbacks that would drive away casual users, which the sibling comments mention.

Driving away casual users has its own problems: you might drive them away to insecure services ("ah fuck it, this thing doesn't work, I'll just DM them on Twitter"), and the lack of casual users will make your remaining users stand out in traffic analysis (e.g. state agency says "hmm, askxnakjsn is using SuperEncryptoP2PMessenger, better go make sure they aren't a dissident").


For multiple reasons but first of all because it would require both devices to be connected and online.

The common alternative to overcome this is to pass the messages through the server and encrypt/decrypt them on the device (aka e2e encryption). I acknowledge that might not be secure enough for certain use cases.


P2P requires both clients to be running at the same time in order to communicate. If your friend's phone is off when you send a message, they won't get it when they turn their phone on.


Seems to me like there should be a DHT way to solve this. When you boot up, you take your place in the table and query your neighbors for messages. If someone's unreachable when a message is sent, you hand the message to their neighbors to hold it until they appear.
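
That is more or less how Kademlia-style DHTs assign custody: the k live nodes whose IDs are closest (by XOR distance) to the recipient's ID hold the message until the recipient reappears. A toy sketch of the node-selection part (the names and k are arbitrary):

  # Sketch of Kademlia-style custody: when the recipient is offline, hand
  # the message to the k live nodes whose IDs are closest (by XOR distance)
  # to the recipient's ID; the recipient queries those same nodes later.
  import hashlib

  def node_id(name):
      return int.from_bytes(hashlib.sha256(name.encode()).digest(), "big")

  def closest_nodes(target, nodes, k=3):
      t = node_id(target)
      return sorted(nodes, key=lambda n: node_id(n) ^ t)[:k]

  live_nodes = ["carol", "dave", "erin", "frank", "grace"]
  custodians = closest_nodes("bob", live_nodes)   # who holds bob's mail
  print(custodians)
  # Bob, on reconnect, computes the same list and asks those nodes for messages.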


Which requires you to trust your neighbours. To which you might say: aha! Just use end to end encryption! And sure, you can. But at that point, what benefits are you getting over using E2E with a centralised system? Very few. And you’re getting a bunch of drawbacks in terms of reliability too.


E2EE only protects content. Metadata is also very important, and whereas p2p apps like Briar, Ricochet, Cwtch and TFC hide it from everyone, centralized and decentralized apps have one or more weak points that allow eavesdropping on larger amounts of metadata.


I dunno, a centralized server means a centralized off-switch, that's a pretty huge drawback in terms of reliability.


Which is why centralized vs. peer-to-peer is a false dichotomy; the solution to many of the problems of both is (forkable) federated networks.


Depending on whether you expect IM to work like physical mail or a phone call, this may be the expected behaviour.

It seems the majority here expect the former, although I think of IM as more like the latter: if the recipient is not available, then the message is simply dropped, much like I can't call someone who is not answering the phone.


I don’t have an answer, but a slightly different perspective. Many different segments have a deep interest in using highly secure encrypted communications: politicians working on deals within/between governments (that should be auditable, but many try to avoid that), whistleblowers, organizers operating in adverse governments, dissidents, terrorists, pedophiles with a lot to lose (similar to Epstein’s network), healthcare professionals trying to talk to patients or other doctors in a HIPAA world, illegal transaction networks, attorneys with clients, VCs trying to debate the future of the world, companies trying to preserve trade secrets, you name it. It takes one of the egregious bad actors using the system to commit a crime worthy of public attention before the entire system is justifiably unpacked, banned, or considered a signal of bad intentions.

How can a system be made decentralized, but able to self-police against legitimately, publicly agreed upon bad behavior? If the system is able to agree upon and exclude legitimately bad behavior automatically, governments would not have a claim to needing to police it, and regular users would probably find it beneficial as well.

How could the self policing possibly happen?

Maybe you have a blockchain of anonymized encrypted messages that is read by open source scanning bots - if enough independent bots flag a message, then a group of anonymous judges can adjudicate to ban those user accounts?

Encryption is one challenge, but if you want true ubiquitous privacy, you need to deliver internal safety to prevent the need for external policing of activity. Social creatures of any species from dolphins to macaques have evolved some kind of internal behavior policing mechanism or trust is lost, and as such the system of value exchange grinds to a halt.


This is the same way of thinking many politicians subscribe to: "There has been one terrorist attack, which killed 20 people; quickly now, surveil everyone and everything! Think of the dangers!"

Throwing out the baby with the bathing water (does this proverb exist in English?) is not going to do society much good. Just because there are some bad actors, one does not need to discard the whole idea of encryption.

Also, the dehumanized way of checking for bad content will not help. Bad actors can pre-encrypt or disguise content, whatever you do. Furthermore, when the bots have the key for decryption, the backdoor is built into the system. Bad actors and politicians will try to make use of that.


The English idiom is precisely "throwing out the baby with the bath water." So, as they say, you "hit the nail on the head."


> It takes one of the egregious bad actors using the system to commit a crime worthy of public attention before the entire system is justifiably unpacked, banned, or considered a signal of bad intentions.

Banning encryption because bad actors use it is not justifiable.


The ideal solution will not throw away encryption, but enable self-policing. It's the difference between HN and 4Chan. AI with safety vs Pure AI unleashed.


As much as I don't like Epstein's network, it is better for him to go free than for everyone to live in an authoritarian state with locked-down protocols that control what someone can and can't do.

Someone like Epstein could easily communicate with coded messages. Or send someone to convey messages in person.


What I don't understand about Briar is how it can scale. Surely it can't know ahead of time which users are going to "travel to another part of town" and should therefore have messages pre-loaded onto their devices. Therefore to me this seems like it must use some kind of broadcast delivery model and so would be vulnerable to flooding attacks.

Edit: seems there are some thoughts about this already https://code.briarproject.org/briar/briar/-/issues/511


Earlier this year, I finally took the time to revisit the state of instant messaging services. My requirements:

- open source

- cross-platform (linux, mac, windows, ios, android)

- group chats

- end-to-end encryption

- well-understood crypto ciphers & protocols

- mature enough for a reasonable expectation of security & privacy

- easy enough for most computer users

- some way to protect metadata (e.g. self-hosting)

- signup without real-world ID

- offline message delivery

I ended up choosing the Matrix network. The reference client is called Element[1] (formerly Riot). There are things I dislike about the client, but they're pretty minor compared to the benefits of the underlying protocol, and lots of alternative clients are in development[2][3].

On top of meeting my requirements, all signs indicate that development is both active and moving in the right directions. Reading the team's weekly reports and issue tracker convinced me that they are making very sound decisions.

[1]: https://element.io/

[2]: https://matrix.org/clients-matrix/

[3]: https://matrix.org/clients/

Here's what I didn't like about the others:

Briar: Lacked cross-platform support and (iirc) offline messaging. Tor brings baggage that not everyone is ready to accept.

Cwtch: Not mature yet.

Jami: Very fragile code base in my experience, which was also true when it was called Ring, and when it was called SFLphone. Only about 25% of the builds I've tried over the years actually worked. I was unable to determine whether it had offline messaging.

Keybase: Now owned by Zoom, which is a privacy nightmare.

Ricochet: Same problems as Briar.

RocketChat: Crypto is not mature yet.

Session: Not mature yet. Small limit on number of group chat participants.

Signal: Required phone number for signup. Required Google Play Services (aka spyware) for quite a long time. Weak cross-platform support. Some of that is finally changing, but Moxie will surely make more intolerable design decisions, and refuse to fix them for years, again.

Telegram: Homebrew crypto.

XMPP: Most clients are hard to use (or to teach others to use). Good servers are hard to find. Protocol standards are a mess. I couldn't find a real-world e2ee group chat implementation.

Everything else: Failed to meet my requirements even before I looked closely, mostly due to closed code and/or problematic corporate interests. (For example, I will not use an app from Facebook or any of its subsidiaries.)


"some way to protect metadata (e.g. self-hosting)"

From whom are you trying to protect metadata? Briar distinguishes itself as a platform that doesn't leak it to anyone. Matrix always has at least one central point for metadata eavesdropping, and that's the device the entities interested in your communication will hack first. Or maybe the threat to the group is on the inside -- John, the creepy IT guy of the peer network who has a crush on Karen and is jealously eavesdropping on her every action, including content when E2EE is disabled for some chats.

Thanks, but no thanks. I'd much rather just centralize the trust to a known crypto-anarchist like Moxie who doesn't know me in person, and if I can't trust anyone I'll just use Briar despite the lack of offline messages. It's not like my phone isn't on 24/7 anyway.

Wrt. Session, it's not at all clear how anonymous their onion routing network is, whether there are enough nodes, etc.


I think it's important to distinguish mass surveillance from targeted surveillance. They present very different threat models.

I need a general-purpose chat tool for use with friends, family, and business contacts. Protection from targeted surveillance by a state actor (or someone with equivalent resources) is neither a priority nor realistic today in light of my other requirements. I'm okay with using a separate tool if I ever need that kind of special-purpose protection.

Roughly stated, the goal is to regain the convenience of older tools like talk, ytalk, irc, ICQ, AIM, Yahoo Messenger, Facebook Messenger, and Google Talk, without being inaccessible to swaths of the computer-using population, and without exposing us all to mass surveillance any more than necessary. Matrix succeeds at this admirably, and continues to get better at it over time. (You might want to look at their in-progress P2P work.)

Briar fails unless you only talk to people using smartphones.

MoxieTalk fails because it exposes people to mass surveillance. In multiple ways. Over and over again. (Also, I've never seen a good linux client for it.)

I acknowledge that both those tools look very useful for certain purposes, and I have a good deal of respect for Moxie because of his contributions to the crypto/comms community, but neither tool does what I need.


"I think it's important to distinguish mass surveillance from targeted surveillance. They present very different threat models."

Targeted attacks against centralized points that enable wide-scale surveillance are mass surveillance. Imagine the NSA claiming "A fiber optic splitter at the bottom of the ocean is a targeted attack against one device (a repeater), or one inch of glass wire; it's not mass surveillance".

It's vital that we define targeted surveillance as something where the target is a single entity. Hacking Moxie's phone is targeted surveillance. Hacking Signal server is not. Hacking every visitor of a CP site is mass surveillance https://www.eff.org/deeplinks/2016/09/playpen-story-fbis-unp...

"You might want to look at their in-progress P2P work."

It will be nice to have, sure, but I think P2P should work exclusively via Tor if you want to hide metadata. Wrt that, you might find my work interesting: https://github.com/maqp/tfc

"Briar fails unless you only talk to people using smartphones."

A picture is worth a thousand words

https://twitter.com/Amlk_B/status/1286642831239647232/photo/...

"MoxieTalk fails because it exposes people to mass surveillance."

Jabs like these aren't really appreciated. Extraordinary claims require extraordinary evidence.



I didn't, but since you linked to it, I took a peek. I couldn't find any linux, macos, or windows support within a couple minutes of visiting the site, so it fails my "cross-platform" and "easy enough" requirements.

It seems to be married to Ethereum. That's mildly interesting. It raises questions about its relationship to cryptocurrency and blockchain tech, but until it meets my requirements, I'm not inclined to spend time investigating the answers.


The desktop client (linux, macos, windows) is a work in progress; alpha builds are available. See the third link in my comment to which you replied. I could have been more clear on that point — I replied on my phone just before going to bed.

The wallet functionality is tied to Ethereum, but the chat functionality works separately.

Originally Status used the Whisper protocol, which used to ship with some Ethereum clients but never gained real traction. Status has switched to a protocol named Waku that's in development but progressing nicely.

If at some point you're interested and have questions, let us know! (I'm on the team developing the desktop client)


I've been out of the loop a bit on Status recently; do you know if it's currently utilizing Matrix protocol or if it's on the roadmap? I take their partnership and $5M investment into New Vector in 2018 as an indication of such intentions. Or perhaps they're intending to just bridge Whisper and Matrix.

https://matrix.org/blog/2018/01/29/status-partners-up-with-n...


No, that investment was for establishing a good relationship. There were no attempts or discussions to marry or bridge the protocols.


Huh. They do mention this as "potential obvious advantage" in the post I linked:

> Bridging between Matrix and Whisper (Ethereum's own real-time communication protocol) - exposing all of the Matrix ecosystem into Ethereum and vice versa

But maybe this is just meaningless marketing fluff and something that's effectively left to "the community".


Quick unrelated comment from the peanut gallery:

Every time any cryptocurrency-related messaging app is published, I think it should be mandatory to immediately explain what the currency and/or blockchain bring to the table. Is it a paid app? Does it store ciphertexts indefinitely on the blockchain? Or public keys?


While I understand where you're coming from, Status is an interesting project even if you completely disregard their (IMO shoehorned utility-) token.

The main thing they have right now is an app acting as a wallet (ETH and Ethereum-based tokens) and IM app (Whisper protocol).

It's standard practice that what you request is answered in a whitepaper, which is also the case for Status: https://status.im/whitepaper.pdf


Jesus Christ, that must be the fourth or fifth "whitepaper" I've seen from the cryptocurrency community. Here's what proper whitepapers should look like:

https://signal.org/docs/specifications/x3dh/x3dh.pdf

https://signal.org/docs/specifications/doubleratchet/doubler...

Status' whitepaper looks like marketing material for investors, not a technical description for infosec professionals. I find the content almost repulsive.


> Status' whitepaper looks like marketing material for investors

TBF, that is the more commonly understood meaning of "white paper".

https://en.wikipedia.org/wiki/White_paper#In_business-to-bus...


Oh, TIL. I'm still unable to find the information in the white paper that was promised to be there.


Status core contributor here. Which information in particular are you looking for?

In answer to your questions above:

What [do] the currency and/or blockchain bring to the table?

Decentralized messaging tech aims at making 1-1 and group messaging more secure. It is not inherently tied to blockchain / cryptocurrency, though for an economically incentivized network of mailserver nodes it could make sense to realize the incentivization mechanisms via cryptocurrency transfers.

See: https://vac.dev/vac-overview

The Waku protocol, currently used by the Status mobile and desktops apps, is being developed by the Vac team, which is part of Status.

The Status client includes an Ethereum wallet because many people interested in the messaging technology are also interested in cryptocurrency.

Is it a paid app?

No, it is free to download and free to use, and all the software developed by Status is open source: https://github.com/status-im/

Does it store ciphertexts indefinitely on the blockchain? Or public keys?

Sending and receiving messages does not involve transactions on the blockchain; messages are not stored on the blockchain.

You may be interested to read more in the technical FAQ:

https://status.im/technical/FAQs.html


Key exchange algorithm, symmetric cipher and mode of operation, is it forward secret, future secret, how are keys generated (CSPRNG algorithm), how are the ciphers tested (test vectors, links to test vector sources and to unittests doing test vectors), rationale for primitive choices. How are key changes handled, what kind of warning is displayed when public key changes, fingerprint encoding, cryptographic protocol descriptions, threat model (the more transparent the better), if blockchain is used to provide e.g. key authenticity.

"The Status client includes an Ethereum wallet because many people interested in the messaging technology are also interested in cryptocurrency."

So it's an encrypted messenger with a wallet, OK.

"Decentralized messaging tech aims at making 1-1 and group messaging more secure."

So you should discuss the monetary incentives for people to host a decentralized server, and also discuss how malicious state entities running servers is not economically viable, if that's the case, or tell that it's not the case. You should also discuss how it makes it more secure.

So tl;dr create an article for cryptographers/infosec folks that need to understand how the security works (and please remember most people in those circles are really allergic to marketing language and buzzwords unless it's a technical property).


I think most of the information you're looking for is documented in the specs:

https://github.com/status-im/specs/tree/master/docs

https://github.com/vacp2p/specs/tree/master/specs

Monetary incentivization for running a mailserver is being researched and (afaik) is not yet implemented or specified. There is a relevant discussion forum:

https://forum.vac.dev/

Activity there is currently light because most core contributors' time, at present, is being spent on other pressing tasks.

See also: https://discuss.status.im/



Status now uses the Waku protocol, Whisper is pretty much dead.


You might want to look at Wire

https://wire.com/


"some way to protect metadata (e.g. self-hosting)"

is incompatible with Wire. Let's not just scream product names without understanding whether they fit the threat model in question. If you're here to promote Wire then I perfectly understand why you'd recommend it anyway.


Yeah, Wire failed my requirements, but I probably should have mentioned it. (I simply forgot about it when I was posting.)

There's the issue you mentioned, and there's also the issue of them violating their published policy (either by the letter or in spirit) when they accepted new owners/investors.[1] Even if it met my requirements, I would be leery, and reluctant to suggest that others invest their time and build their communications network on such a foundation.

Of course, things can change over time. Maybe Wire will do things differently in the future, and become more appealing. That doesn't solve a problem for me today, though.

For the record, there's some discussion of Wire and other apps scattered about the privacytools.io issue tracker[2]. The signal:noise ratio there isn't great, but some folks here might find it interesting. As long as I'm posting links, their main site is worth a look, and the section about instant messengers[3] relates directly to this thread.

[1] https://nitter.net/Snowden/status/1194396764293550080

[2] https://github.com/privacytools/privacytools.io/issues

[3] https://www.privacytools.io/software/real-time-communication...


"when they accepted new owners/investors."

IMHO we should be able to determine the amount of trust we can put in an app from the client alone. If the FOSS client uses E2EE, then no matter what the server starts doing when the service changes ownership, it won't have an effect on message content. Of course the new owner could e.g. start selling user metadata, but that's something you should kind of assume the service is doing anyway (just because they can), and if you can't take the risk, you should use something that prevents it by design (like Briar/Cwtch/Ricochet/TFC).


I agree in principle, and I look forward to the day when all my requirements can be met without self-hosting or trusting another party with metadata. After all, most people don't have the means to self-host. Multiple projects (including Matrix) are working in that direction, but I'm not holding my breath; metadata exists at multiple layers, and is a hard problem to solve.

Until that day, a public host with the right incentives and track record remains valuable, even if only to include people who don't have tech-savvy friends to host for them.

Regardless of all that, given the choice between rewarding an organization with good behavior vs. one with bad behavior, I choose the former.


What's the issue with self-hosting? It's not that expensive to run a netbook 24/7 with an onion service. I'm doing that for my FreedomBox with Radicale etc. at the moment.

There are no overhead costs like a static IP or hostname.

I can agree with the point that the average user lacks the skills, so that's something that needs good tutorials.

But then there are apps like Briar that just run on your phone, which bring the complexity down for those users.


I have nothing to do with Wire, I just thought it was close enough to his requirements that he would benefit knowing about it.


Just installed it. How do direct messages work? Is it just a room with two people in it?


Yes. The reference clients recently started displaying those differently from group conversations, but the visual distinction has been pretty minor so far.


Support for a TEMPEST mode of communication would be a killer feature. Perhaps vibrate mode on one phone being picked up by the accelerometer of another?

In our hypothetical dystopian future The Regime will probably jam 2 GHz to 5 GHz in public spaces. TEMPEST mode would also force them to install vibrators into all coffee shop tables.


> Support for a TEMPEST mode of communication would be a killer feature.

You are misusing the term. TEMPEST is an attack.

> Perhaps vibrate mode on one phone being picked up by the accelerometer of another?

At very close distance vibrating our vocal cords and eardrums would be much easier and works without battery.


I can't exactly modulate my vocal cords to send a file. But I think you're right; using some sort of high-frequency beacon tone, like the ones used for those creepy tracking identifiers, might be an option.
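
As a rough illustration of the beacon-tone idea, here is a sketch that encodes bytes as bursts of two near-ultrasonic frequencies (binary FSK) and writes them to a WAV file; the frequencies, bit duration and sample rate are arbitrary choices, and a real acoustic modem would also need synchronization and error correction:

  # Sketch: encode bits as short bursts of two audio frequencies (binary FSK)
  # and write them to a WAV file. Parameters are illustrative only.
  import math
  import struct
  import wave

  RATE, BIT_SECONDS = 44100, 0.05
  F0, F1 = 17000, 18500           # near-ultrasonic carrier frequencies

  def tone(freq, seconds):
      n = int(RATE * seconds)
      return [int(32767 * 0.5 * math.sin(2 * math.pi * freq * i / RATE)) for i in range(n)]

  def encode(data):
      samples = []
      for byte in data:
          for bit in range(8):
              samples += tone(F1 if (byte >> bit) & 1 else F0, BIT_SECONDS)
      return samples

  with wave.open("payload.wav", "wb") as w:
      w.setnchannels(1)
      w.setsampwidth(2)            # 16-bit PCM
      w.setframerate(RATE)
      frames = encode(b"hi")
      w.writeframes(struct.pack("<%dh" % len(frames), *frames))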


What’s TEMPEST? Something that communicates by vibrating a table?

In my dystopian-future theory, a government that has no issue jamming the 2 and 5 GHz channels to keep people from talking would probably notice two people at a table continually picking up and dropping their phones. Why wouldn't they simply whisper to each other in that scenario?


TEMPEST is a generic term for extracting data that emanates from channels which were not supposed to carry data. It’s usually an attack, used to spy on people.

The classic example is pointing a high speed camera at an office window across the street and recording the brightness of the walls. Even if the office computer is hidden out of sight the attacker can reconstruct what’s on screen by analysing subtle changes in brightness reflected off the wall.


The more classic example is that display cables like VGA, DVI and HDMI emanate signals that can be used to reconstruct the image displayed. This doesn't work for newer high-speed HDMI, though.

Old BBC documentary on the topic

https://www.youtube.com/watch?v=mcV6izFG3vQ


> The classic example is pointing a high speed camera at an office window across the street and recording the brightness of the walls. Even if the office computer is hidden out of sight the attacker can reconstruct what’s on screen by analysing subtle changes in brightness reflected off the wall.

Is this feasible now?



That was specific to CRTs. You can still exploit diffuse reflections, though that's much harder.


Since they are on their phones via factory rootkit, good luck.


Use of wifi during a blackout? Wifi does not work during a blackout. Only over-the-air comms are secure. Any wired connection is tapped.


>Any wired connection is tapped

That's what encryption[1] is for.

[1] https://en.wikipedia.org/wiki/Encryption


Oh; wow. Not all heroes wear capes. Install now!


What the fuck ever happened to communicating through plain old radios? Impractical for someone to track you, trivial to speak in codes.


Distance is an issue. And it's unlawful in the United States to encrypt ham radio traffic. No one's really monitoring CB much anymore though.


It’s unlawful to protest violently as well, I don’t see the point in obeying ham radio laws there.


And the way to agree on the code over a radio that anyone can eavesdrop on is?


many of my friends are activists, and i'm hesitant to disclose to them which technologies they could use. 95% chance they're going to use it for getting drugs, or avoid monitoring to organize gatherings, which, without law enforcement protection always have potential to turn violent. i just don't want to take responsibility for these actions.

then you have heavy stuff, people trafficking, bomb threats, suicide threats, organ trade, child abuse, and crypto seriously limits the options for a response. as long as we're talking about functioning democracies, it does more bad than good.


Criminal conspiracy as a service. I don’t think I’d invest my money.

Edit: to clarify their marketing is transparently targeting organizers of street violence. I have no problem with encryption and don’t think government forbidding it is a good idea.


No, it's being marketed to protesters; it's not helping violent people any more than oxygen is. Let's ban oxygen for street thugs too? The cops are already doing a great job of that.

Also, the privacy this app helps to protect is a fucking human right too. You're not welcome here, please leave.


You're going to need to be more specific. People are downvoting you because it's not really clear what you're talking about, and I'm getting serious alt-right conspiracy vibes.


I can assure you that you cannot in fact read minds and any "vibes" you are experiencing are autogenerated.

Also, please remember "Please don't comment about the voting on comments. It never does any good, and it makes boring reading"[1].

[1] https://news.ycombinator.com/newsguidelines.html


> their marketing is transparently targeting organizers of street violence.

The cops already have radios.


One of the core contradictions of liberal democracy is that all of the freedoms we hold up as advantages of it were obtained by people violently protesting it: labor rights, LGBT rights, environmental legislation, and obviously we are still fighting..

So, congratulations, I guess, on being privileged enough that your interests have always aligned with the interests of the state.



