Signal threatens to dump US market if EARN IT act passes (pcmag.com)
862 points by tzm 46 days ago | 343 comments



1. The police are either lazy or incompetent if they say they cannot trace criminals because of E2E secure chat.

2. You don't need to know the contents of a chat to glean massive amounts of metadata. Even with FB Messenger and WhatsApp truly E2E encrypted, FB (and anyone serving them with warrants) will still know in real time who is talking to whom, what their IP addresses are, and possibly their real location (if they are using the app on their phone). This can be used to create a signature profile... many Pakistanis and Yemenis have died from Hellfire missile strikes because they matched a pattern of activity. Google "signature strike" for more info.

3. The terrorists and pedophiles that are the most dangerous are using far more sophisticated means of communication than Wire, Signal, WhatsApp, Wickr, etc. Saying that this is "for the children" or "for our safety" is complete bullshit and anyone saying otherwise needs to prove it.


> The terrorists and pedophiles that are the most dangerous are using far more sophisticated means of communication

The "most dangerous" part is doing a lot of work there. Just like I think law enforcement needs to admit what they can and cannot do (e.g. they cannot protect a golden key), I think we need to admit some things too. A lot of dangerous criminals are stupid. Maybe not the most dangerous ones, sure. But if law enforcement has a tactic that lets them catch, say, the stupidest 30% of terrorists, that's an extremely valuable tactic that probably saves a lot of lives in practice. It would be wrong to claim that society loses nothing by engineering away that tactic.

I think this sort of thing leads to a lot of frustration on both sides. As a programmer, I find it very frustrating that law enforcement and the media consistently get some of the most basic details wrong about how communication and encryption work, and about the negative side effects of the new laws they're proposing. But I assume that law enforcement folks also feel frustrated about how people like me have no idea how they actually get their jobs done day-to-day, or the negative side effects of the technologies we're building.


> A lot of dangerous criminals are stupid.

The nice thing about stupid criminals is that they tend to be indiscriminately stupid. The ones who don't use encrypted messaging are the same ones who proceed to brag about their crimes in front of strangers, and have their phones turned on and with them during the commission of their crimes, and post incriminating pictures on Facebook, and choose equally stupid and unreliable criminal partners.

They are the low-hanging fruit, so you don't need powerful and invasive tools to catch them because they're practically self-incarcerating. When there are 100 other ways to catch them, there's no point in paying a high price just to have 101.

It's the non-stupid criminals that they have trouble catching, but those are the ones this won't catch either. So you're still paying a high price for really nothing in return.


I think you may be missing a large group of criminals in the middle. As with ordinary humans in a non-criminal context, you have a group of indiscriminately stupid people, a group of very smart people, and a large group - I think the majority - that just parrots whatever everyone around them is doing or recommending, with very little individual thought given.

You can compare it to COVID-19 reactions among the people you know. Almost everyone now keeps distance in public, because everyone knows they should and are expected to. But how many people don't connect this with the fact that they should absolutely not meet up with their friends now? Or that they should absolutely not visit their families this Easter? Or that it would be wise to wash groceries and deliveries?

We could say this parroting group is doing cargo-cult OPSEC. They may know they shouldn't brag about their crimes in person or on social media, and yet at the same time they could easily trip up using communication tools they don't understand - unless the industry goes out of its way to make such tripping impossible. I think this is the group law enforcement is talking about. Not the idiot criminals, not the smart criminals - just regular ones, who don't understand the world they live in very well, and occasionally make mistakes.


The group in the middle is the group I'm talking about. At the far edges of stupidity are the sort of criminals who break into an electronics shop to steal GPS tracking devices or try to stick up a police station. The far extremes give you 1000 ways to catch them instead of 100.

The guy who carries his phone with him during the commission of the crime is the guy at the median.

It also doesn't hurt that the average criminal skews dumber than the average law-abiding citizen to begin with. But even for the somewhat above average criminal who gives you ten ways to catch them instead of a hundred, you still don't need eleven because you only need one.

What percentage of criminals do you suppose are so diligent that default-insecure communications are the only way to catch them, yet wouldn't have chosen a secure alternative regardless?


>It also doesn't hurt that the average criminal skews dumber than the average law-abiding citizen to begin with.

Is this true? I'd be interested to see the research for this. I would believe that the average convict is dumber than the average law-abiding citizen, but how many criminals are lumped in with the law-abiding citizens simply because they don't say "oh yeah, I break the law all the time"?


You're going to have the Three Felonies A Day problem there, where in practice everybody commits crimes all day long and the people "not getting caught" is really everybody, even including people currently incarcerated who are still guilty of many other crimes they haven't been convicted of.

But if you want to talk about, shall we say, "real" crimes then that's another story. The solve rate for murders is actually pretty high (because they're given significant investigative resources), to the point that the population of convicts is probably not a terribly unrepresentative sample, and the lower intelligence of the convicts is pretty well established.

It also depends how you measure intelligence. The IQ of people who commit politically-motivated bombings is often significantly above average, but they also choose to commit a crime that attracts a hugely disproportionate level of investigative resources and correspondingly has quite a high solve rate despite the perpetrators' supposed intelligence, so maybe there are different kinds of stupid too.


Even if it caught 100%, it would not be worth violating the privacy rights of all of the people who are not criminals.

Signing up to be law enforcement comes with an implicit acceptance of the frustration caused by mechanisms designed to prevent infringing upon the rights of the innocent. It’s part of the job to work hard for a long time and sometimes have to let the criminal go free.

Unfortunately, many prosecutors and cops never learned this, and are all too happy to pursue illegal and invasive methods, or to employ parallel construction to conceal illegal methods.


> Even if it caught 100%, it would not be worth violating the privacy rights of all of the people who are not criminals.

Sounds nice, but have you really thought that through? I think you might be surprised what people would be willing to give up to live in a crime-free society.


It is not possible to imprison 100% of criminals without imprisoning some innocent people by accident (or perhaps intent, as is the case in the USA today). What you are describing is a totalitarian society without the presumption of innocence.

I don't think the "well it wouldn't happen to me" delusion is strong enough for people to actively want that, no.


> I think you might be surprised what people would be willing to give up to live in a crime-free society.

1. Very often, they're quite willing to give up _other people's_ privacy.

2. Have you considered what many people are willing to give up to live in a mass-surveillance-free society? Probably not, because we're never given these options to seriously consider and choose between. It's a false dilemma - the state makes the decision, eats away at our privacy, and uses things like pedophilia as the excuse because it's scary.

3. Let's start with making some sacrifices to prevent criminal behavior by elected officials (Trump family, Biden family, Bush family, Clinton family - I'm looking at you people), and in high finance (2008 crisis racketeers who never faced any criminal action) and once that's sorted out, then let's talk about what more needs to be done to achieve a "crime-free society".


You're making several false assumptions:

1. Presumption of guilt: Law enforcement doesn't go after "terrorists" or "criminals"; they go after _suspects_ in acts of terror or crime. Part of the norms in non-totalitarian states is that people don't get subjected to coercive, violent and otherwise harmful action as though they are guilty of anything - until they are formally proven guilty.

2. The assumption that what the state legally defines as "terrorism" is indeed terrorism, i.e. "the calculated use of violence to create a general climate of fear in a population and thereby to bring about a particular political objective. " There is a definite tendency to broaden the operative definition in many states in the world beyond the dictionary definition.

3. The assumption that the state, and its law enforcement organizations, always have the moral high-ground legitimizing its pursuit of terrorists. This is often not the case, as many states engage in terrorism against populations or groups they are hostile towards, while at the same time facing terrorism from those groups.

4. The assumption that the state, and its law enforcement organizations and personnel, don't misuse their capabilities to spy, harass or harm people who are not suspected of committing "terrorism" or any other crime for that matter.


It doesn't seem like they're catching most of the stupid ones now. Companies report child pornography tens of millions of times a year: https://www.nytimes.com/2020/02/07/us/online-child-sexual-ab...

Most of which is Facebook posts, which is perhaps the worst platform to use if you wanted to keep your crime secret.


The worry, though, is that the state grows too powerful. A lot of things in our society are built on the foundation of curbing state power (it's actually about curbing absolute power - the state is part of the solution to that). Constitutions serve that function. Every time we let a state erode those kinds of protections we take another step towards the state gaining more control. That usually doesn't end well. Countries that were part of the Soviet Union are still recovering 30 years later.

Speaking of the Soviet Union, you have to remember that it was "law enforcement" that carried out the oppression by the government. Limiting law enforcement seems reasonable to me.


> and pedophiles that are the most dangerous are using far more sophisticated means of communication

Tiktok?

That has a ton of content that can be considered CP, if reviews on reddit/yt are to be believed. Which, given its sordid past as Musical.ly, is totally believable.


The 1993 WTC bombers got caught when they tried to recoup the deposit on the rented van they blew up. OTOH, we tapped bin Laden's sat phone.


Law enforcement needs to understand that they live under the rule of law.

Otherwise, it would just be another police state: a shadow dictatorship.

Either the rule of law is universal or the country is not free.

Freedom of communication means freedom to hide it (I must be able to use a one-time-pad with whomever I choose).


>But if law enforcement has a tactic that lets them catch, say, the stupidest 30% of terrorists, that's an extremely valuable tactic that probably saves a lot of lives in practice.

Let's mix time frames. Should police be able to catch the laziest/stupidest 30% of people who sell weed? Of people who marry across racial boundaries? Of people who traffic freed property back north? Should the police be able to catch the 30% laziest gays?

This is a ridiculous argument - now. But these laws will still be on the books in 30 years, when social norms may be completely different and people in the US may no longer want the police doing that job.


The stupidest 30% are walking around with phones that are already easily tracked.


Anecdotally, I can assure you that upwards of 70% of criminal defendants have some form of tracking-capable phone on them when a crime happens. I don't think this makes them stupid; it's just a general reflection of a society in which such tracking isn't something anyone can see, so people assume it doesn't happen.


It makes them at least careless, and that borders on stupid.


If you have a tactic that catches 30% of the stupidest terrorists you have multiple tactics that catch 30% of the stupidest terrorists, because they are the stupidest.

The actual problem is not being able to catch the smart ones who every now and then do something stupid or lazy or expedient (since even the smartest of humans have moments where they are not at their best).


Engineering society around the police is how you end up in a police state.

>I think this sort of thing leads to a lot of frustration on both sides.

The police can be frustrated with the fact that catching the bad guy is hard sometimes. I can live with that.


Aren't most child sexual abusers (90%+) close to the victim, with half of them being family members?

Odd for government to go after chat apps and online encryption when they can't stop child sexual abuse in those places where it happens the most.


> Odd for government to go after chat apps and online encryption when they can't stop child sexual abuse in those places where it happens the most.

No, it's totally consistent with the State's MO; the 'Helen Lovejoy' argument [1] is entirely specious reasoning when even the most superficial analysis of the perpetrators of said crime is done... but it's not meant to appeal to reason. Rather, it's meant to create a knee-jerk reaction against anyone who tries to refute it before it's coaxed down the collective population's throat.

It's so easy and simple to say 'what, do you want pedophiles to use this tech now?' and end any semblance of coherent logical discourse on the matter: and that's the aim, to end any discussion or counter-arguments before it's enacted and privacy and civil liberties are further eroded.

When I really started to delve into the whys and hows of cryptocurrency, I came to the conclusion that after Wikileaks/Assange got cut off from the legacy financial system in 2010, we were already in the 2nd Crypto War (Julian is a key target, and it shows [2]: he's been treated like a POW), the 1st having ended when Zimmermann's PGP project succeeded.

I'm a Signal user and I'm not entirely sure what 'dumping the US market' would entail. Will they pull Signal from the app stores? If so, couldn't I just sideload it while connecting through a VPN, or compile it myself on a PC?

1: https://www.youtube.com/watch?v=RybNI0KB1bg

2: https://www.washingtontimes.com/news/2020/apr/9/australian-p...


> I'm a Signal user and I'm not entirely sure what that 'dumping the US Market' would entail

Yeah, it's a decidedly weird turn of phrase, since Signal is (a) open source and (b) not something they try to monetise.

> will they pull Signal from an app store?

I don't really see what the app store has to do with Signal - it's just a way of distributing it. It's not like you need the app store to avoid compiling it - there are other avenues.

The risk for them is they or their servers come under some pressure from the US Law Enforcement Agencies. Given their programmers and servers are based in the US, that seems like it could be a real risk. Withdrawing from that would involve moving themselves and presumably families out of the US. It sounds like an almost impossible ask.


It's a bad faith argument. They don't care about pedophiles.

You're right. It's usually a teacher, neighbor, pastor, uncle, etc. "Stranger danger" is mostly BS unless you live in a really dangerous neighborhood, and there the risk is more likely to be simple robbery with incidental harm to the child.

Child sex abuse is also under-prosecuted and under-sentenced. Your average child rapist serves less time than people convicted of selling small amounts of drugs. It's really bad if the abuser is wealthy and can really put up a fight. Google Jeffrey Epstein's original indictment and the non-punishment he received.

If they really cared about child abusers they'd prosecute them more aggressively and sentence them more severely.


> [...] serves less time than people convicted of selling small amounts of drugs.

Did you mean to compare this to racial biased drug sentencing?


Generally when people say “90% of child sexual abusers are known to the victim” they are referring to “contact sex offenders.” All child abusers take advantage of vulnerable children, but children are more likely to be physically vulnerable around a trusted, known adult. In the last decade it has become much more common for children to be psychologically vulnerable to online predators as many more children, disproportionately those who are vulnerable for other reasons, have private access to the internet via smartphone 24 hours a day. Some predators use the internet to groom children and then commit contact offences against them. Others manipulate children into creating more child pornography.

In 2014, Aslan and Edelmann [1] undertook “a comparison of sex offenders convicted of possessing indecent images of children, committing contact sex offences or both offences” and, while expressing caution about the “contradictory findings” of previous studies, examined a data set of “230 offenders who had been convicted either of possessing indecent images (Internet offenders n = 74) or committing actual direct abuse of children (contact offenders n = 118) or committing both offences (Internet-contact offenders n = 38).” They found:

> There were significant differences between the three groups of offenders in the way the victim was found. Internet-contact offenders (45%) were more likely to target their victims online and use downloaded indecent images to help recruit their victims … Only 15% of Internet offenders initiated online contact, grooming their victims then requesting indecent images without physically coming into contact with the victim. The majority of contact sex offenders (87%) were known to their victims … Internet-contact offenders were more likely to target stranger victims than contact offenders.

[1] https://dx.doi.org/10.1080/14789949.2014.884618

This data reflects the offences that are detected and prosecuted, so you could read it as suggesting that law enforcement (in London) is focusing on internet offending at the expense of contact offending. It’s hard to say. The data also says nothing about whether anti-encryption laws are needed. However, it does indicate that there is a substantial amount of internet-enabled child sexual abuse and that law enforcement bodies should use some of their finite resources to address it.

What is proportionate is certainly debatable. There is often a fundamental difference of values between civil liberties advocates on the one hand, and victims’ advocates and law enforcement on the other, with respect to the seriousness of internet-based non-contact offences, including the possession of child pornography. When these offenders are counted among child sexual abusers, the proportion who are known to their victims is much less than 90%.


It is hard to tell there, because someone who commits a physical crime is much more likely to get caught than someone who does not, but it does seem like some resources should be devoted to the dangerous ones.

This doesn't excuse the government trying to destroy security for everyone else. One of the biggest problems highlighted by the NYTimes is insufficient funding leading to an inability to apprehend culprits, not the widespread use of end-to-end encryption.


On 3, I don't know about pedophiles but terrorists do indeed use consumer apps; they're 'good enough' and the traffic doesn't stand out. Many (most?) cases are broken open because law enforcement turns a human source or manages to place someone undercover.

Of course, those apps are not all that they use. There are definite advantages to things like encrypted digital radio vs IP communications, and concomitant downsides such as standing out like a sore thumb in the RF spectrum or being more vulnerable to zero-days against niche platforms.


The idea that if all consumer apps played ball with law enforcement (and as always, I would like to point out that it isn't clear which nation's law enforcement agencies are supposed to get access), there would suddenly be no choice but to roll your own encryption tools is naive. It is born from a politician's mindset in which communication between people happens via an 'app', and an 'app' means there is a large company - one that invariably wants to deal with the US or the EU - that can be pressured into building a backdoor. And for 99% of today's messaging apps they are right (which is a nice mess we're in, by the way).

But anyone can use OpenPGP (or any other tool) today, and anyone can tomorrow, even if such a project stops completely. The source is out there, and so is the source for hundreds of other related tools. There will also be people with — subjectively, depending on whom and where you ask — non-nefarious reasons to have their communications end-to-end encrypted who will find ways to provide such software in a decentralised manner without the point of failure that laws like EARN-IT target.


Maybe the terrorists. Anyone who's seen "to catch a predator" knows that most pedophiles are borderline mentally handicapped and are way more likely to get caught by their own incompetence; no extra laws necessary.

But you're otherwise right that people running CP rings are probably using more sophisticated means that can't be stopped by conventional means.


>Anyone who's seen "to catch a predator" knows that most pedophiles are borderline mentally handicapped and are way more likely to get caught by their own incompetence; no extra laws necessary.

I wouldn't be surprised to learn that pedophilia correlates with lower intelligence, but a more accurate conclusion to arrive at after watching TCAP is that most people who fall for a fairly obvious sting operation (in some cases, after having watched the show themselves) are borderline mentally handicapped.


> The terrorists and pedophiles that are the most dangerous are using far more sophisticated means of communication

Terrorism is mostly opportunistic radicals communicating via YouTube and Twitter and Fox News, or national / quasinational governments that are brazen and flagrant and don't need to worry about being noticed.


Sometimes these idiots have posted on Facebook about their planned attacks. And we still did not manage to stop them.


They're just in it for the fame. Or the lulz.


Perhaps I'm not hip enough but I'm pretty sure there is nothing more sophisticated than Signal.


Session -- it just doesn't have as many features.

BTW, one of Signal's weaknesses is that you MUST use a phone number with it. If you're savvy, you realize this can be a Twilio number you control, making your account immune to SIM hijacking. However, unless you override a bunch of defaults, Signal is not immune to other attack vectors, like attempting to unfurl a URL sent in a message -- which can expose your true IP address -- or generating a thumbnail of a video -- which can launch a malware attack. The latter is the method alleged to have been used by Saudi intelligence to hijack Jeff Bezos' phone (via an E2E encrypted WhatsApp message, no less). A more sophisticated messenger would turn off lots of "convenience" features by default, let me pick a random username, and NOT make me enter a phone number or email address. People who care about security don't need a way to reset their randomly generated 128-character passwords.


> BTW, one of Signal's weaknesses is that you MUST use a phone number with it.

This isn't a weakness, it is a tradeoff. You use phone numbers (downside), but the server does not have to store any information about who is talking to whom (upside). Other tools reverse this choice: they don't use phone numbers, but they do need to maintain the communication metadata.


It's not a tradeoff, it's a weakness by design. All the features you mention are 100% doable without a phone number.


Sure, and Signal is already working on usernames. Here's the kink: When you have low latency (video) calls, you can't route via Tor. When you can't route via Tor, you leak your IP to the server. When you leak your IP you're not anonymous, and when you're not anonymous, the server having the hash of your phone number isn't adding too much data to them.

When the server knows who you are, the app can use your existing contact list to discover contacts. This means unlike e.g. Telegram, Signal server doesn't store your contact list.

For example, I constantly see people whose phone numbers I've already deleted appear in my Telegram contact list as "X joined Telegram". Telegram knows I had the number at some point. This would never happen with Signal.


> the server having the hash of your phone number isn't adding too much data to them.

Wait, how big is the hash of the phone number?

If it's enough bits (e.g., a full SHA hash), then hashing isn't really secure at all. 10^10 or even 10^11 is just 10 or 100 billion. I can easily try all phone numbers until I find the one that matches the hash.

It maybe protects against attacks against lots of people, but it really doesn't protect an individual.
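To make the point concrete, here's a toy sketch of that brute force in Python. It assumes an unsalted SHA-1 truncated to 10 bytes (the scheme described elsewhere in this thread); the phone number, prefix, and search-space size are illustrative, with the search deliberately narrowed to the last four digits so it runs in milliseconds. The full 10^10 space is only a few orders of magnitude more work.

```python
import hashlib

def truncated_hash(phone: str, nbytes: int = 10) -> bytes:
    # Unsalted SHA-1 truncated to nbytes: identical inputs always map
    # to identical outputs, so the hash can be inverted by search.
    return hashlib.sha1(phone.encode()).digest()[:nbytes]

def invert(target: bytes, prefix: str = "+1555555", digits: int = 4) -> str:
    # Exhaustively try every candidate in the (here deliberately small)
    # input space until the truncated hash matches.
    for n in range(10 ** digits):
        candidate = prefix + str(n).zfill(digits)
        if truncated_hash(candidate) == target:
            return candidate
    return ""

secret = "+15555551234"
assert invert(truncated_hash(secret)) == secret  # hash inverted
```

The 80-bit output size is irrelevant here: the attacker enumerates inputs, not outputs, and the input space is tiny.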


It's 10 bytes, so 80 bits.

You are correct that using a hash does not protect an individual from other users discovering that they can contact them with Signal, which is to be expected because that's the purpose of this feature. If you suspect that Bob, with phone number +15555551234 has Signal installed, you can verify that by... typing Bob's phone number into your contacts list and installing Signal so you can send messages to Bob.


For the purposes of entropy, you need only consider 10 valid choices for each symbol of a phone number, so it's closer to 33.22 bits (10 * log2(10)) for a 10-digit number, and smaller still when discarding impossible area, trunk & subscriber numbers.


So given that 80 bits is much bigger than 30-40 bits, if I know someone's hash I can very easily narrow down their phone number to one, or sometimes two, candidates.


No, I'm describing the truncated hash. Signal truncates SHA1 to 10 bytes of output.

No matter if your phone number is six digits or sixteen, Signal uses 10 bytes (80 bits) of the hash.


And then a bit larger again when including foreign phone numbers.


The point isn't that hashes anonymize you; the point is that you're already leaking your IP in most cases, so the phone number doesn't really reveal anything additional.


I'd much rather leak an ip than a phone number.


Under what threat model?


Pretty much anything I can think of. If someone gets my phone number, they can bug me or identify me much more concretely/completely.

What threat model has a IP be worse to leak than a phone number?


> Here's the kink: When you have low latency (video) calls, you can't route via Tor.

Sure, but you can use VPNs. Or Orchid, which is a multi-hop VPN that routes through multiple VPN providers.

Or you can just use VoIP, which can be done via Tor, as long as you can force TCP mode.


That system a) has a paytrail, b) involves companies that can be coerced / hacked with relative ease, c) is a paid system and d) is quite a bit for the average user to handle.

Also, if you're going to stay anonymous, you need something that is extremely hard to misconfigure. I use wireguard on my Android and I've set the VPN to activate automatically, and I only allow connection via VPN, but I'd never imagine any of the apps I'm running are properly anonymized.

Also, since you're apparently working for or affiliated with VPN providers[1], you might want to be more transparent about possible vested interests.

[1] https://www.ivpn.net/privacy-guides/what-is-a-vpn


I've never hidden the fact that I've worked for IVPN and Restore Privacy. But they pay me by the word, so I gain nothing by promoting them.

I haven't actually used Orchid, because there's no Linux app. But I did buy some of their Ethereum-based currency. And I recall no money trail. As I recall, I converted well-mixed ~anonymous Bitcoin to plain-vanilla Ethereum, and then to Orchid's currency.

But whatever, I'm not going to defend Orchid.

Anyway, I use nested VPN chains. It's like a multihop VPN, except that each hop is a different VPN service, and each of them is leased with a different pool of well-mixed Bitcoin. I do all the Bitcoin mixing via Tor, in Whonix instances. That way, I don't need to trust any of them, only that an adversary won't manage to compromise or coerce all of them. It's the same logic as Tor uses, based on Chaum.

If you want to read more, just search "mirimir" on IVPN's and Restore Privacy's sites. There's also https://github.com/mirimir/vpnchains which is pretty over the top. And I've also played with something like that which routes VPNs via Tor.


I'm not an expert on cryptocurrency so I can't say how well you managed to anonymize the paytrail but the problem of logs and the lifetime of the chain concerns me.

When you start to chain VPN nodes you gain latency, so you might as well use Tor. These days Tor has enough bandwidth to play 720p video with ease, and there's less hassle. Also, once you hit three nodes you won't really benefit from a longer chain, so mixing VPN with Tor isn't really beneficial unless you're evading censorship of Tor.


OK, fair enough. I'm no expert on Orchid. I rather lost interest, after it became clear that it was useless to me.

You're wrong about nested VPN chains, however. Depending on geographical distribution, each VPN adds 50-100 msec rtt. And bandwidth doesn't drop that much after the first VPN.

I use both nested VPN chains and Tor to mitigate the risk of Tor circuits being compromised. The lesson of CMU's "relay early" exploit for the FBI was sobering. Given that lesson, only fools use Tor without protection.


Bad guys might rather hack different servers in different countries and use something like a chain of SSH tunnels, after making sure to patch the security vulnerability they used to get in.

Add in some routing through Tor.

That would be harder to beat for a single law-enforcement agency.

Particularly if the countries involved are not friendly towards each other.


> I e.g. constantly see people whose phone number I've already deleted appear on my Telegram contact list "X joined Telegram". Telegram knows I had the number at some point. This would never happen with Signal.

This literally happens with Signal. And it makes sense too, the message that Signal gets telling it someone is now on Signal is presumably the same one letting it know it can use encryption rather than SMS to talk to that person.


Signal is not built for anonymity. It's built for message privacy. It's a lot like PGP in that the government knows who emailed whom, but they cannot read the email. That's the whole point. If you are trying to hide your phone number, Signal is not going to help you, and it's not meant to.


PGP doesn't hide metadata; anonymous remailers hide metadata. Add a sufficient volume of dummy messages and all of a sudden nobody can do traffic analysis, either. Think ATM (Asynchronous Transfer Mode): there's a constant volume of "cells" but only some of them are actually carrying anything.

That, or blasting your message to a huge number of people, only one or a few of whom actually receive it because it's encrypted and then steganographically hidden in spam. Again, use dummy messages and there's no way to predict anything by divining the ebb and flow of spam volumes.
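The dummy-traffic idea is simple enough to sketch. Here is a toy Python version (the fixed cell size, tag byte, and framing are all made up for illustration, not any real remailer's wire format): an observer counting or sizing cells in an interval sees exactly the same thing whether zero or four of them carried real messages.

```python
import os

CELL_SIZE = 512  # every cell on the wire is exactly this many bytes

def pad_to_cell(message: bytes) -> bytes:
    """Real cell: 0x01 tag, 2-byte length, body, then random padding."""
    assert len(message) <= CELL_SIZE - 3
    header = b"\x01" + len(message).to_bytes(2, "big")
    return header + message + os.urandom(CELL_SIZE - 3 - len(message))

def dummy_cell() -> bytes:
    """Dummy cell: 0x00 tag, then random filler."""
    return b"\x00" + os.urandom(CELL_SIZE - 1)

def emit_interval(real_messages, slots):
    """Each sending interval emits exactly `slots` cells, real or not,
    so cell counts and sizes leak nothing about actual traffic."""
    cells = [pad_to_cell(m) for m in real_messages[:slots]]
    cells += [dummy_cell() for _ in range(slots - len(cells))]
    return cells
```

The receiver drops anything tagged 0x00; everyone else just sees a constant stream of identical-looking 512-byte blobs.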


I've never understood the point of privacy without anonymity. Or of plausible deniability. Both depend on rather idealistic assumptions about adversaries.

https://xkcd.com/538/


The practical upshot of Signal's deniable authentication is that a Signal message isn't proof of anything. It has zero weight because everybody can make fake Signal messages apparently from somebody else to them about anything.

If Alice tells Bob a secret via Signal, this means Alice cannot be worse off than if she'd used any other means of telling Bob. Can Bob reveal the secret? Yes. Can he claim Alice told him? Yes. Can he prove it? No.

This is a sharp contrast to something like PGP where Bob can prove Alice sent the message.
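For the curious, the principle is easy to demonstrate with a toy. This is not Signal's actual construction (its deniability comes from the X3DH key agreement plus MACs inside the Double Ratchet), but the core idea is the same: authenticate with a *shared* key rather than a signature, so either party could have produced any given tag.

```python
import hashlib
import hmac
import os

shared_key = os.urandom(32)  # known to both Alice and Bob

def tag(key: bytes, message: bytes) -> bytes:
    """Authentication tag over a message, keyed with the shared key."""
    return hmac.new(key, message, hashlib.sha256).digest()

# Alice sends an authenticated message; Bob verifies it came from
# someone holding the key, i.e. "Alice, or me".
msg = b"meet at noon"
alice_tag = tag(shared_key, msg)
assert hmac.compare_digest(alice_tag, tag(shared_key, msg))

# But Bob holds the same key, so he can forge a message "from Alice"
# that is bit-for-bit indistinguishable from a real one. A third party
# shown the transcript learns nothing about who wrote it.
forged = b"I confess to everything"
forged_tag = tag(shared_key, forged)
assert forged_tag == tag(shared_key, forged)
```

Contrast that with a digital signature, which only the holder of Alice's private key can produce, and which therefore proves authorship to anyone.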


I doubt that any part of law enforcement or worse parties will agree with you on the zero weight part.


That's nice. But choosing to believe nonsense won't make it true. The United States of America chose to believe that torturing people is an effective means of securing reliable intelligence. Because that's how it works in Hollywood movies, so how can reality be different? But of course the "intelligence" they obtained this way was not in fact reliable, because a person being tortured doesn't magically know the truth and you don't know if they're telling the truth, so they'll say whatever they think will make you stop hurting them, which is utterly useless.

The only way you can know if intelligence obtained is reliable is to actually test it. With systems like PGP you get proof. Did Alice send this message as Bob alleges? Yes, the message includes proof so he was telling us the truth.

With Signal all you have is Bob's word as I described.

Signal can't stop the Secret Police from torturing Bob, but they can ensure they don't have any way to know if he told them the truth. If the Secret Police were rational that's enough reason not to bother torturing Bob. But we can't make them rational, for some people just inflicting pain for no reason is their goal.


Wouldn't you need to clone their SIM or otherwise fake their number?


Nope. Signal's messages are relayed by Signal's servers over IP like anything else, your phone has no evidence this message ever came from anybody's phone, let alone that it was Alice's phone. If you use Signal Desktop it didn't come from a phone at all. Signal doesn't keep any proof that it got these messages from an "authentic" source. Either they check out as from Alice or they don't and in the latter case they clearly shouldn't be displayed at all.

The way you normally know a message is from Alice on Signal is that the message was sent using keys only you and Alice share†, and you know you didn't write the message. But a third party has no way to verify that last part. That's the entire trick (in layman's terms).

† Signal and similar systems provide a means to do out-of-band verification that the long term identity key for people you know matches. You probably don't use this with most people, but you can and it's made easy if you want to.


The vast majority of communications occur between people who are publicly known to have an association and have no need to deny the association. Some common examples:

1. Friends

2. Family members.

3. Members of a business.

If your life or freedom is on the line because of an association with someone then most systems out there are somewhat dangerous due to the weakness of the endpoints. You would want something like an airgapped computer with online or offline dead drops possibly hidden with steganography.


> You would want something like an airgapped computer with online or offline dead drops possibly hidden with steganography.

Well, "the best is the enemy of the good". That's the whole point of risk management. As a practical matter, I do the best that I can manage, or at least, that I can be bothered with on an ongoing basis. If I were as paranoid as you're advocating, I'd be cowering in a bunker. Also, for me there's the fact that I have little left to lose.


I believe it is both a weakness and a trade-off


Then why has nobody done it?


Beyond the (slightly behind-trend) enthusiasm for blockchains, Session is the same punt on contact discovery as lots of other systems that went nowhere. This works great for little secret decoder ring cliques but doesn't actually secure real people's day-to-day messages due to lack of discovery - your local butcher and the guy your sister went to college with never find out that you have the same secure messaging app, and so their messages to you aren't secured.

In contrast to your disinterest in convenience features, Session does have a bunch of things that presumably its principals felt were non-negotiable but clearly harm security. The "Open Groups" feature for example is basically "Eh, this is hard, we give up" for larger groups (500+ people). No end-to-end encryption, and you're given either a moderator tool that doesn't work ("Ban" pseudonymous people who can for zero cost just create a new pseudonym) or one that's onerous ("Invite" everybody manually).


"BTW, one of Signal's weaknesses is that you MUST use a phone number with it. If you're savvy you realize this can be a Twilio number you control making your account immune from SIM hijacking."

Does Signal not ever send messages from, or otherwise use, SMS shortcodes ?

I ask because no twilio number can receive an SMS shortcode (because no twilio number is classified as a "mobile" number).

Genuinely curious.


They do it once for the initial setup. But IIRC, one can also get an automated call for the PIN.


FTI's report (1) (the security company doing the forensics analysis) about Bezos' phone "hack" is a joke.

Not only do they not show anything, they use misleading terms in order to confuse the reader.

We do not even know if he was hacked. Right now it is just vague accusations.

I do not care about Saudi Arabia, they are a middle-ages, totalitarian and profoundly sick country. What I care about is misinformation.

(1) https://www.documentcloud.org/documents/6668313-FTI-Report-i...


To be fair, "Signal the App" and "Signal the Protocol" are two different things. If you were talking about the latter then your statement is quite possibly correct.


Signal is all about making good cryptography usable for the general public. If you actually use the "safety numbers" to verify the identity of who you are communicating with then you have real guaranteed end to end encryption. Unfortunately not everyone does that.
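A toy sketch of what a safety-number comparison amounts to: both parties derive the same short fingerprint from the pair of identity keys and read it aloud to each other. (Signal's real scheme uses iterated hashing of each identity key and identifier to produce a 60-digit number; the derivation and rendering below are made up for illustration.)

```python
import hashlib

def toy_safety_number(key_a: bytes, key_b: bytes) -> str:
    """Order-independent fingerprint of two identity keys, rendered
    as groups of digits that are easy to read over the phone."""
    digest = hashlib.sha256(b"".join(sorted((key_a, key_b)))).digest()
    digits = str(int.from_bytes(digest, "big") % 10**36).zfill(36)
    return " ".join(digits[i:i + 6] for i in range(0, 36, 6))

# Both sides compute the same string regardless of argument order,
# so they can compare it out of band; a man in the middle who swapped
# in his own key would produce a mismatching number.
a, b = b"alice-identity-key", b"bob-identity-key"
assert toy_safety_number(a, b) == toy_safety_number(b, a)
```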

People that really really need to be sure probably use something super simple like PGP after they take the time to learn how.


Why not Keybase?

https://keybase.io/


The lack of PFS is a big negative about keybase.


There's also a targeted attack that allows arbitrary keys to be linked to your keyring to DoS you.


You can set messages to expire in keybase:

https://keybase.io/blog/keybase-exploding-messages


That looks completely orthogonal to Perfect Forward Secrecy.


It's based on an ephemeral key schedule underneath. Here's the design doc: https://keybase.io/docs/chat/ephemeral
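At its simplest, an ephemeral key schedule can be sketched as a one-way hash chain. This is not Keybase's actual scheme (the design doc linked above has the real details), just the core forward-secrecy idea: each message key is derived from the previous one by a one-way step, and old keys are deleted.

```python
import hashlib

def ratchet(key: bytes) -> bytes:
    """Derive the next message key and throw the old one away.
    Because the step is a one-way hash, compromising today's key
    reveals nothing about yesterday's: that's forward secrecy."""
    return hashlib.sha256(b"ratchet-step" + key).digest()

k0 = hashlib.sha256(b"initial shared secret").digest()
k1 = ratchet(k0)
k2 = ratchet(k1)
# Each message is encrypted under its own key; after sending, the
# sender deletes k0, then k1, and so on. An attacker who later steals
# k2 cannot walk the chain backwards to recover k0 or k1.
assert k1 != k2 and ratchet(k1) == k2
```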


I was under the impression it's the same for Signal. Quick duckduckgo led me here: https://signal.org/blog/asynchronous-security/ The more you know!


I was just looking into keybase, and.... deleted. Thanks for the heads up.


I multihop VPN through service A. My criminal friends multihop VPN through service B, C, etc. All hops are through non-US friendly countries

We then communicate over a secure messaging platform like Signal, Telegram, etc.

Knowing just that I communicated with one or more people, how would you conduct your investigation to "trace" the participants in this conversation?

The feds would be really hard pressed to unravel this (and are on a daily basis), let alone the police.


Why are the feds watching these conversations in the first place? Has a crime been committed? If they’re investigating a crime, surely there are more avenues of investigation than Facebook chats that didn’t even exist ten years ago. Whatever happened to good old fashioned police work? Seems like they just expect everyone’s chats to be handed to them on a silver platter when they ask for it.


I'm responding to this statement and showing how it is rather ignorant:

   1. The police are either lazy or incompetent if they say they cannot trace criminals because of E2E secure chat.
As for the rest of your comments: The feds are watching criminals online because lots of crime is committed online. I do not think weakening encryption will help them in this pursuit.


>Whatever happened to good old fashioned police work?

That implies effort and people are lazy. "Hey, Mr Criminal, can you be so nice as to use App X when you plan to commit your crime so our automated system can mail us when you are going to break the law, and also set up an event in our calendar so we can come and arrest you. Please be nice, we can make each other's lives easier if we work together."


> The terrorists and pedophiles that are the most dangerous are using far more sophisticated means of communication than Wire, Signal, WhatsApp, Wickr

You'd be surprised how poor their opsec can be. Regular file transfer services for instance see this traffic, entirely in the clear, not even the slightest attempt at encryption is made.


> You don't need to know the contents of a chat to glean massive amounts of metadata.

Signal is actually working on fixing that (https://signal.org/blog/sealed-sender/).


Exactly. So they never caught any criminals before 1998? They investigated and got probable cause and then got warrants. Nothing needs to change now.


Yep, we know the CIA makes kill decisions based on metadata.


> anyone saying otherwise needs to prove it

Sorry pal, that's top secret intel. Just Trust Us™.


I agree with number 3. Laws are introduced "against terrorism and pedophiles" and then used against drug dealers and activists.


The term "extremist" is used for certain activists whose views skew too far outside of the usual range, although one man's "extremist" is another's "unorthodox" view. Depending on the era, a view may be perfectly reasonable or ridiculous; compare the idea of "protecting the environment" in the past.

Some countries do (or have) crack down on really outlandish views for a time. One country's views may also differ from another.

As a matter of principle, I don't much like terrorists as they operate under the goal of spreading terror. I have strong doubts cracking down on encryption would stop them, as they operate perfectly fine with fairly mundane tools and the "mass-surveillance" machine loses them in the noise.


It's hard to collect metadata if the traffic is inside Tor, I2P or behind some clever tunneling.

But government programs have other means of collecting data: OS level backdoors, flawed random number generators like DUAL_EC_DRBG, "unintended hardware bugs" in Intel's CPUs.

I guess they mostly rely on these alternative means. These "let's forbid strong encryption" pushes might just be dust in the eyes, to make their targets feel secure if they use apps with "strong encryption".


> far more sophisticated means of communication

Or far more simple means. It's trivial, really, to write your own app for encrypted communication or signaling. I bet I could build one in a day.

Even without programming skills, you could set up a shared drive containing only a keepass file. Download the file, use your key and password to open it, then read the message. Monitor the last updated timestamp to see if there have been any changes.

Securing your communications is not hard.
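The "monitor the timestamp" part really is a few lines of Python. A toy sketch (the file here stands in for the vault on the shared drive; names and timings are made up):

```python
import os
import tempfile
import time

def wait_for_update(path, last_mtime, timeout=5.0, interval=0.1):
    """Poll the dead-drop file's modification time; return True as soon
    as it changes, i.e. the other party may have dropped a message."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.getmtime(path) != last_mtime:
            return True
        time.sleep(interval)
    return False

# Demo against a temp file standing in for the shared-drive vault.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"old vault contents")
    path = f.name
before = os.path.getmtime(path)

with open(path, "wb") as f:  # the other party uploads a new vault
    f.write(b"new vault contents")
os.utime(path, (before + 1, before + 1))  # force the mtime forward

updated = wait_for_update(path, before)
os.unlink(path)
```

In practice you'd still open the vault with your key to read the actual message; the timestamp only tells you when to bother looking.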


> far more sophisticated means of communication than Wire, Signal, WhatsApp, Wickr

Like better apps, or something homebrewed?


What are those sophisticated means of communications?


I think it's better to just admit that freedoms / tech will always be misused by criminal actors, and that's just a price we agree to pay for privacy, security, and liberty. I don't think that's a controversial statement, and we make such trade-offs all the time unconsciously. The United States has largely agreed to accept a certain amount of criminal gun violence in the name of personal gun ownership. We agree that a certain amount of money laundering will occur due to shell corporations and foreign ownership of assets. We agree that police have to let a certain amount of crime go unpunished in order to protect against unreasonable search and seizure. The only difference between those things and this is that no one has the balls to stand up and admit that a certain amount of child abuse is an acceptable price given the stakes at hand, even though it is true.


> I think its better to just admit that freedoms / tech will always be misused by criminal actors, and that's just a price we agree to pay for privacy, security, and liberty.

It's possible for both things to be true at the same time.

If Signal exists and is secure, will criminals use it? Sure they will, criminals are people and people want private communications.

But if you ban honest citizens from using Signal, will criminals stop using secure communications? No, they have an unusually strong incentive to use them and will seek out alternatives. The percentage of criminals who switch to insecure communications will be lower than the percentage of honest people who do.

Which increases the amount of crime, because the amount you're helping law enforcement catch criminals is smaller than the amount you're helping criminals exploit victims. This is also compounded by the fact that there are more honest people than criminals.

There is a theory of bureaucracy ("an institution will attempt to preserve the problem to which it is a solution") that says law enforcement agencies will ask for this even when they know full well that it will increase the overall amount of crime, because more crime is good for them since it means more law enforcement.


I agree that criminals will use secure communications regardless of the law. I don't understand what you mean when you say it will increase crime though.

Regardless, I feel like there's a deeper motive from governments/law enforcement. It would allow them to claim that anyone using secure comms must have something to hide and is thus a criminal. Combine that with mass surveillance and anyone you see sending encrypted traffic can automatically be assumed to be a criminal. I'm not saying this is right, it's certainly not right. But I'm sure that's the argument that will be used by those trying to push it.

The only way to fix this is secure-by-default comms, such that all traffic looks the same and you cannot make any claims of criminality based on that alone.


> I don't understand what you mean when you say it will increase crime though.

Suppose you're a criminal organization or a foreign government. You break into AT&T or Amazon or whomever and get access to a bunch of data streams. If they're all E2EE, you have a bunch of inscrutable ciphertext. If they're not, you have everybody's passwords, trade secrets, credit card numbers, information useful for blackmail etc. Lack of strong encryption enables crime -- that's why honest people use strong encryption.


This is also a good reason to use a VPN and tools like Tor, even when you have nothing to hide. The more normal it becomes the less likely it can be used as presumption of guilt or probable cause.


> But if you ban honest citizens from using Signal, will criminals stop using secure communications? No, they have an unusually strong incentive to use them and will seek out alternatives.

This has been a 2nd Amendment argument for ages: "If we outlaw guns, only outlaws will have guns."


It has been a 2nd Amendment argument for ages because it's tautologically true.

It correctly identifies that the proponents need to justify the cost from substantially all law-abiding citizens following the law against the benefit from only the law-abiding criminals following it.

And say what you will about the benefits of law-abiding citizens carrying firearms, but if you want to seriously dispute the benefits of law-abiding citizens using encryption, try convincing a credit card company to let you accept credit cards on your website without encrypting the traffic.


It sounds like you accept the bill's authors' claim that EARN-IT is about protecting children.

I'd be very interested in hearing from child abuse investigators how the controls in the bill line up with how tech is used in abusing children. My expectation is that there is very little alignment, because "for the children" is most often the rallying cry of politicians who want something that is not in the best interests of the people they are supposed to represent.


> It sounds like you accept the bill's authors' claim that EARN-IT is about protecting children.

No, you're putting words in their mouth.

You have your head in the sand if you don't think people use perfectly legitimate encryption services to discuss illegal activity. But that is not a reason to ban encryption. The entire US constitution is built on the premise that people have rights.

But it has always been true that some people use their rights to avoid having their criminal activity detected. That doesn't make our rights any less important.


>“Our goal is to do this in a balanced way that doesn’t overly inhibit innovation, but forcibly deals with child exploitation,” US Senator Lindsey Graham (R-South Carolina) said last month in announcing the legislation.

Nobody is putting words in anybody's mouth.


Just because Lindsey Graham said that, doesn't mean Thriptic is agreeing with it.


> The entire US constitution is built on the premise that people have rights.

As much as I'm near-absolutist on civil liberties, I think it's also valuable to recognize that the intrinsic good of individual rights are only one part of the story; the other is the balance of power between government and the governed.

I recently heard Sam Harris opine that from a utilitarian perspective, an absolutist right to privacy pales in comparison to allowing harm to come to children, and so the tech community needs to flex a little on the privacy question, and meet law enforcement halfway. Through that reductionist lens, it's hard to find fault in the argument.

The problem isn't limited to privacy, though. Unbreakable digital locks exist, and they aren't going anywhere. [0] And there is power in the ability to keep secrets. You can bet the Feds have little interest in a Panopticon, where they too are obstructed from keeping digital secrets, as "meeting us halfway" for some greater good. Rather, they want to hoard that asymmetric power as their exclusive purview. No matter how well-intentioned, that asymmetry of raw power is something We The People have a vested interest in taking seriously, far beyond some abstract notion of "I want to Google ${CONSENTING_ADULT_SEXUAL_ACTIVITY} without worrying the neighbors will find out".

[0] https://www.youtube.com/watch?v=VPBH1eW28mo


If privacy isn't an issue, we could insert tracking chips into the children and give them identifying tattoos, then track their locations.


I don't know about US but in EU electronic passports and electronic IDs are becoming mandatory. So all people will have an RFID device with them all the time. And let's not forget the mobile phones which can be localized with high accuracy even without GPS, usually because the device can be seen by more than 3 base stations at a time.

The Chinese made mass surveillance even simpler: they have lots of cameras and face detection.

We don't have much privacy these days.


Just because it's possible to use something as a source of information, it doesn't mean it is used as part of a massive dragnet. Yes, it's possible to track phones, but most countries don't have a dragnet implemented based on this information, as far as I'm aware. It's not a lost battle and we still need to push back to ensure it is not.


There is never any logical reason to suppose that the right solution lies in between 2 extremes. If the question is the answer to 2 + 2 the answer isn't halfway between 0 and 9000.

Secondly when a party consistently pushes for an extreme position if you meet them halfway as a matter of policy you will shortly find yourself within spitting distance. The only productive position is extreme obstinacy.


Yes! Sometimes if you compromise, you still lose - just more slowly.


> I recently heard Sam Harris opine that from a utilitarian perspective, an absolutist right to privacy pales in comparison to allowing harm to come to children, and so the tech community needs to flex a little on the privacy question, and meet law enforcement halfway. Through that reductionist lens, it's hard to find fault in the argument.

I'd say it's pretty easy. For utilitarianism to make sense, it has to take the future into account. And what looks like an absolutist right to privacy might be a utilitarian argument of the type that if you grant a monopoly of power (private or public) the right to make use of your private information, then it could well use that private information against you later.

An integral utilitarian might then say "it's worth some harm to children today to ensure there won't be great harm tomorrow". That kind of being able to trade off different scenarios of harm without regard to absolute principle is pretty much what characterizes (act) utilitarianism.


I don't believe that. I'm simply saying that if the stated logic for this bill is that we need to regulate encryption because there is an unacceptable risk of misuse, then my response is that I actually accept the current level of misuse risk given the current level of regulation. Instituting further controls in the form of regulation would cost us more than the perceived reduction of risk that it affords.

Obviously this bill is about more than that, but I think that statement pretty much torpedoes their main public argument.


I completely agree with you about acceptable risk; sorry I misread your last sentence above.


It's a difficult question to answer because most of what HN complains about is speculation based on assuming bad faith, and doesn't seem to line up with what is actually in the bill (from what I can tell).

What specific controls are you asking about?


Just because an Ethernet cable can be used to strangle someone doesn't mean that failing to stand in opposition to network wiring is to accept a certain amount of murder by strangulation. Don't focus on the tool being used for the crime but on the tool committing the crime.


I think this depends on the tool. Certainly we could see the tool being a problem if it was a mini nuke or Anthrax (I don't for the record think encryption rises to this level).

I'm very concerned that technology will put something devastating (at scale) in people's pockets and then we're kind of screwed (do we choose big brother and all that entails, or indescribable mass destruction?). I don't have a solution but it keeps me up some nights.


There are degrees to which tools are useful for committing crimes, and it's naive to pretend otherwise. Encryption is obviously an incredibly useful tool for committing a number of crimes, and I think it's better to argue that it's worth it than to act like there's no connection.


The government wants to expand surveillance so that way potentially disruptive social movements can be monitored and disrupted. Activists use signal too.

In case you hadn't noticed, the government is currently on its backfoot and disruptive social policy reforms are back on the table. They want to make sure that corporations get everything and the people get nothing.

The encryption fight has been going on for decades, but at root their complaints about terrorists and child trafficking are covers for expanding a lazy version of COINTELPRO. Lazy meaning that they can just sit in an office and see everything. Let's not forget the FBI's role in trying to get MLK to commit suicide. These shadowy agencies are not in any way the good guys.


A crowbar is also an incredibly useful tool for committing a number of crimes, and yet I don't see any legislators pushing to ban Home Depot from selling them, or to ban me from buying them.


As is a car and yet no one proposes banning or wiretapping cars to make sure they're not planning to run people over.


Cars used to be a good example, but this is quickly changing. Modern cars relay OBD-II (unofficially OBD-III, not entirely ratified) data over cellular networks. Most electric cars and especially self driving cars are sending and receiving telemetry data and software updates all the time. Some people are even voluntarily adding OBD-II cellular dongles to their car to get lower insurance rates. This includes real time GPS coordinates and speed. Some regions are already considering making this a requirement for cars sold after {n} date (date to be determined) so they can see your smog emission data real time. This almost happened in California, but car manufacturers were not ready and successfully pushed back, for now. I would suggest that within a decade or so, a majority of cars will be wiretap devices.


Category error. Encryption isn't a tool for committing a crime, it is a tool for concealing a crime.

It's more like saying "sellers can put whatever crap they want in food they sell, but they have to disclose it accurately".


This is my view, 100%. Yes there are downsides to strong e2e comms, but the downsides of not having strong e2e comms are far worse.


Truly, this is a stance we have to have for everything.

If we want criminal justice reform, too, for example, we have to agree that some criminals will come out of prison after their shorter sentences and they will get into positions and jobs where they will cause harm.

Any lightening of sentences will come with bad people getting through and hurting others. But, this is an acceptable price to pay to allow the other felons redemption in this world.


> I think its better to just admit that freedoms / tech will always be misused by criminal actors, and that's just a price we agree to pay for privacy, security, and liberty.

Yes! Also, one sure way to know that we have "privacy, security, and liberty" is that criminals are abusing them. And, as an added benefit, efforts to identify and apprehend criminals help identify weaknesses and OPSEC failures.


And you need a warrant to go through a person's mail. How is that not de facto policy for digital privacy?


The EARN IT law enables warrants for digital privacy. The problem is that the choice is between "warrants are impossible due to encryption" and "warrants can be skipped by misbehaving actors".

There's no way to guarantee a middle ground.


Well, this is not truly de facto - if it's less than six months old, sure. There's some ancient history that complicates it. In practice I'm pretty sure Google and other providers will fight for a warrant (citing US v. Warshak), but technically speaking anything older than six months could be gotten with an administrative subpoena.

Of course, there's a whole 4th Amendment discussion there. And IANAL, so feel free to fact check whatever.

This has seen attempts at being fixed but dies in the Senate each time: https://en.wikipedia.org/wiki/Email_Privacy_Act#Background_a...


Third-party doctrine. It is awful but well-established.

If you want a good grounding in the legal precedents - both laws and decisions - that have gotten us here, read Habeas Data. Great book laying out all the terrible implications.


There is also no rule requiring people to write mail in English. They are legally allowed to use made-up languages, codes, encryption, etc.


> ...and that's just a price we agree to pay for privacy, security, and liberty.

I think this is fine here, but I am compelled to point out and remind, given the amount of concurrence in the thread:

In a more rigorous discussion, I think this is a particularly dangerous line of thinking to the Stallman-level advocate and their campaign down the line.

Edit (oops. Chopped off a long version of this paragraph when I edited down the post): Privacy, security, and liberty are maintained by the advocate to be the natural rights that are paid in price for justice.

This isn't to speak of those in agreement here or of myself (and not just of said advocate), but of anyone who uses such framing, for risk of it becoming massively normalized, even if I find it an artistically made point.


(This subthread was originally a child of https://news.ycombinator.com/item?id=22825957)


> The only difference between those things

You listed two things that easily and obviously line up with a Bill of Rights amendment... not sure there is one of those for encryption. Unless I’m just blanking...


An argument to tie encryption to the fourth amendment: https://cyberlaw.stanford.edu/blog/2020/03/earn-it-act-uncon...


2nd Amendment has precedent given that ITAR considers encryption and secure communication and control apparatuses a form of armament.


Not controversial?

Liberty is what wars are fought over.


Maybe flame wars. It would be nice if people believed in abstract principles that strongly, or rather, almost that strongly would be perfect. Empirically wars are fought over which groups get to control resources.


What else is liberty used for, if not resources?


It depends on your definition of resources. In some cases it may be a mountain or river near my hometown, while in others it might be my house or my husband.


If you haven't already, please take the time to email your federal representatives. The EFF's tool [1] only takes a few clicks to use.

[1] https://act.eff.org/action/protect-our-speech-and-security-o...


I was going to contact my senators. One of them is Dianne Feinstein, and... ugh, why is she always on the worst side when it comes to privacy? She's actually a sponsor of this thing.

I've written her enough that I can already write my own reply from her office. Shorter Feinstein: "Thank you for your concerns, but you're wrong."


> why is she always on the worst side when it comes to privacy

Because she is a terrible Senator. Please, please, please stop voting for her already.


She's 86 now and the next time she'll be up for reelection is in 2024. There's a good chance she won't be around long enough to ever lose reelection.


On the other hand, the late Senator Strom Thurmond didn't leave office until he retired at age 100.


I know this is a day old now, but Pelosi was born March 26, 1940, so she is 80 (not 86).

https://en.wikipedia.org/wiki/Nancy_Pelosi


This discussion is about Dianne Feinstein, not Pelosi.


LOL. Sorry about that. Not sure where I got my wires crossed there. Must have gotten distracted somewhere in the middle while looking it up. I also didn't realize that Feinstein was that old.


> There's a good chance she won't be around long enough to ever lose reelection.

No need to be so negative. Isn't it nicer to say, "There's a good chance that she won't win reelection*."?


I think that phrasing is more ambiguous, not nicer (nor meaner.) "Won't win reelection" is phrasing that's compatible with a 'loses the election' scenario, which I consider implausible (If she's still alive in 2024, she'll undoubtedly win.) Adding ambiguity to my comment doesn't make it nicer; it only increases the chance that I might be misunderstood.


She's a Democrat from the district including SF. She could eat a baby on live TV and still win. It's like being a Republican from a heavily rural Texas or Utah district. Congressmen and senators from hard single party regions are basically tenured.


She’s basically a moderate republican:

https://projects.fivethirtyeight.com/congress-trump-score/

She used to vote with Trump more than any other Democrat. Now she’s number 2 (and she is much further right wing than many republican senators).

I don’t understand how she keeps winning in places like SF. Even a California republican would probably be further to the left than she is.


You aren't sorting by the proper column. She's right in the middle of the Democrats.


> The EFF's tool [1] only takes a few clicks to use.

Your input is discounted at least in direct proportion to how little you sacrificed in order to provide it. If you really want to make an impression, telephone your representative.


> Your input is discounted at least in direct proportion to how little you sacrificed in order to provide it.

One of my college roommates works for a congresscritter. He says, at least for his guy, written letters still have the most impact, followed by telephone calls. He didn't mention faxes.

E-mail and social media are waaaay down on the list because they take the least effort and can be gamed so easily.


Many of my reps have stopped providing phone numbers on their websites. Kinda cowardly, but it allows them to validate emails with addresses that come via their website from actual constituents.


Yet they all have phone numbers that can be found with a cursory search. Do your due diligence.


I'll probably send certified letters in this case.


Hand delivered.


I'll break out my calligraphy pen, ink, and sealing wax.


I use my owl.


There's an interesting age bias there. My parents write letters, my generation much less so. I don't know if I currently have stamps. Our votes count the same though.


Don't let that discourage you if you've only got time to tap a few buttons. Better to send a weak signal than none.

In either case, contact instructions are here: https://www.usa.gov/elected-officials/


I take issue with the premise that anyone doesn't have time to send a better signal. It takes all of about 4 minutes to call the Capitol offices of your two representatives in Congress. They'll take your name and address, and you can make it as quick as "I just wanted to let Rep./Sen. so-and-so know that I am for/against HB/SB 1234." and it's done. You will absolutely spend more time looking up their phone numbers than you will on the phone.

You can do this while walking out of the office to the parking lot or metro station.


I've heard this so many times but I'm not sure it is true.

I helped with processing the results of a large government RFC for a large government aid bill (Farm Bill 201?) and the exact opposite was true. There were too many responses to individually read each one so the responses just got bucketed and counted. You could be fine with a one off response but it would be less likely to be bucketed correctly and would still only be counted once per bucket at most.

To cover your bases I would always do the easy one click option and then write the handwritten letter as well.


If you really want to make an impression, create a SuperPAC and donate millions to their campaigns.


No, don't! After you've spent the money they'll do whatever they want. Instead, threaten to donate to their opponent if they don't bow to your will, then after roll call you can wire the money to them.


Hehe, the only difference between what you and parent said, is that he is wiring it to their _next_ election campaign. Do it preferably in smaller sums so you can strong arm them multiple times before the next election.


> Do it preferably in smaller sums so you can strong arm them multiple times before the next election.

Really, you want to have a steady stream of payments flowing from you to them. That way, they're accustomed to it, and you always have the implicit threat of suspending the payments. This basically mirrors the structure of an ordinary ongoing personal relationship.

One-time donations, which would reflect an ordinary commercial relationship, don't work well, since the thing that makes them work outside of politics -- conditioning payment on receipt of the good purchased -- is illegal in politics.


> Your input is discounted at least in direct proportion to how little you sacrificed in order to provide it.

If this were true, corporations would be completely ignored when they provided a measly few million dollars in campaign contributions...


I attempted to call them today. All of their offices were closed due to COVID-19. I was unable to leave a message.


I was a little apprehensive, but decided to try this. I called my representative as well as both senators. In all cases (3:30pm on a thursday) I just got a voicemail. I left a short message in each case. Nothing could be easier.


I just filled it out! I didn't realize from the previous HN post Signal was threatening to leave the U.S. market altogether! I don't recall if they have ever done that before. So I'm taking this seriously.


I tried to use it, but it is very confusing. I added a personalized message at the start of it, hit submit, and it comes back saying "Please check that all required fields are completed and try again." The only thing I hadn't checked was the "Yes I want to join EFF's mailing list". I gritted my teeth, checked it, hit submit, and got the same message.

FF 74.0.1, 64b, windows 10


Thanks for the link! I never knew it was that easy. Submitted!


For reps that require it, which topic should we select for this - Science/Technology or Communications/Telecommunications/FCC?


It would depend on the representative - if they're on committees related to one or the other, I'd select that one. If it's 50/50 I'd probably choose Telecom/FCC as that seems more technically accurate.


EARN IT will affect all encryption software, not just Signal. This bill is just the newest way Congress is trying to enforce required backdoors in all apps/devices. Last time it was under the guise of protecting us from terrorists, this time it's under the guise of protecting the children from pedophiles. I wonder what they'll try next time, when this inevitably fails again.


I feel like as soon as someone uses a "think of the children" argument they immediately invalidate any point they may have had. It's a total cop out argument. I wish more people could see through it.


There are a million better ways to help children.


Like food, health care, and education. Encryption is not the reason human trafficking exists. Poverty is the reason it exists. The separation of wealth is the reason it exists.


> when this inevitably fails again

May I ask where your confidence comes from?

I’ll actually be more surprised if this doesn’t go through, at least in some form.


To be fair, even if they get what they think they want, it will fail, and then they'll pout and try to move the goalposts again, like how the DMCA failed to stop piracy or DRM from being cracked.

Of course, indulging their utter folly leaves us all worse off, so we need to stop them. Notably, I haven't gotten even a reply after sending an email calling out EARN IT as downright nationally suicidal, given how much of the US economy depends on secure cryptography, the obvious relationship between GDP and power, and that if they gave a damn about the children they would be investing more in social services and investigation instead of trying to seize more power.

I'm not sure if I reached them, or if a staffer put it in a proverbial circular file (or on an "enemies list"), but the fact that they didn't send a "for the children" form letter is somewhat reassuring: it suggests the message reached a real human, and recognizing even one case of "too angry for the form letter" is a small victory. Racking up enough negative tick marks to say "this is a bad plan" is the current win condition.

Of course a large victory would be dropping from sponsorship but that would be near impossible even if I was a connected great speaker who called him out in person.


That said, does DRM come under E2E messaging?


No, because if the government wants to inspect DRM-encrypted media for some reason they can simply play it like any other customer, or order the company that encrypted it to provide an unencrypted version.


With so many eyeballs locked up at home, bored and not paying attention to Congress, I think this is definitely much more concerning.


Not paying attention to Congress...until a big player gets taken down by this bill and makes a loud fuss about it.


They just need to word it correctly:

"This product is designed with the highest levels of security in order to keep you safe from criminals and other illicit actors on the internet. Because of this, it has been deemed inappropriate for use by citizens of the USA by the EARN IT act. Until this changes, it is only available outside of US jurisdiction. Please contact your congressional representatives for more information"


The federal government enjoys a freely accessible and wide open back door to our entire financial system under the guise of protecting us from terrorists. What makes you so sure the same trick won't work again?

Most Americans don't seem to know enough about how the government uses the backdoor to care.


That’s a good point. I would like to plug GNU Taler here. As far as I know, there is no technical reason why the federal government needs access to all our financial information.

https://en.wikipedia.org/wiki/GNU_Taler


> I wonder what they'll try next time, when this inevitably fails again.

We're at a major disadvantage, so I'm not sure where that optimism is coming from.

We have to stop it every time, and in every variation. On the other hand, they can keep trying over and over again.

I'd much rather see the EFF and others working with Congress to introduce laws that _prevent_ this kind of thing, saving us the long sequence of future fights as this resurfaces under new names. One of those fights, we're bound to lose.


I hope it's not against people who vote "wrong".


"guise" implies misdirection. What is their true intention?


If everybody who cares doesn't take a strong, unrelenting stand against it, it will eventually pass. Hitler had only minority support, but got his way using backroom politics.


It's not really a "threat". I don't think Signal could legally operate in the US with this act in place. More like saying: "If you effectively ban end-to-end encryption, we can't offer our end-to-end encrypted chat app in your jurisdiction any more."


> I don't think Signal could legally operate in the US with this act in place.

Of course they could operate. They would just have to backdoor their encryption. Which, presumably, is what this legislation wants to achieve.

They don't want a world with no chat apps, they want a world with chat apps they can listen to.

What Signal is saying in this blog post is that they would rather give up the US market than weaken their encryption. Which is worth saying, because it's probably not true for most other apps. Most corporations would not give up the US market, no matter what compromises they have to make.


> Of course they could operate. They would just have to backdoor their encryption.

Is it even possible to have end-to-end encryption (in the technical sense of the term) with a backdoor? If your product's marquee feature is security via end-to-end encryption your product is a non-starter in a jurisdiction that bans end-to-end encryption, no?


Spot on. The thing is, content is still valuable, and companies would like to access it on behalf of the government, but they now have to compete with private messaging apps. The big tech companies want the government to mandate the backdoor so they can make more profit on user data while pointing to the law. If this were something the tech companies didn't want, they'd be spending billions lobbying for the human right to privacy.


> It's not really a "threat". I don't think Signal could legally operate in the US with this act in place. More like saying: "If you effectively ban end-to-end encryption, we can't offer our end-to-end encrypted chat app in your jurisdiction any more."

Could they operate, so long as they implemented a mechanism to scan for and report child pornography? Assuming (optimistically) that the government committee that the EARN IT act mandates adopts reasonable standards.

I think this article gives a good background on the problem: https://blog.cryptographyengineering.com/2020/03/06/earn-it-...

I (personally) think that client-side photo hashing and automated comparison against one of the child abuse databases should be sufficient. Alternatively, Signal could probably just disable features for sharing images in the US.


> Could they operate, so long as they implemented a mechanism to scan for and report child pornography?

Signal's model is that their servers are never able to understand any user content. You can't effectively scan for prohibited content on the client side for several reasons:

A) someone who wants to send or receive prohibited content could alter the client to skip the checks.

B) shipping the check to the clients makes it possible for distributors to run the checks and alter their content until it passes the checks.

If client side filtering was effective, the ask should be for Google, Microsoft, and Apple to scan and report prohibited content on their operating systems, which together cover the vast majority of user terminals.


> You can't effectively scan for prohibited content on the client side for several reasons:

I disagree. I think these scanners can be good but never perfect, so they're mainly effective against technically unsophisticated abusers. Weaknesses that are only exploitable by someone with advanced technical skills are not actually a problem.

> A) someone who wants to send or receive prohibited content could alter the client to skip the checks.

That's true in any kind of scanner. Server side checks could be defeated pretty trivially by using any encoding scheme not anticipated by the scanner's authors (e.g. sending an image as text messages encoded with rot13 Base64). No scanner can be robust against even a mildly technically savvy opponent unless the scanner has complete end-to-end control over everything, including the clients.
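
The rot13-plus-Base64 trick is trivial to sketch. This is a hypothetical toy, not anything a real provider runs: a naive server-side filter that matches known byte patterns is blind to a losslessly re-encoded payload.

```python
import base64
import codecs

def naive_scan(payload: bytes, signatures: list) -> bool:
    """Toy server-side scanner: flag messages containing a known byte pattern."""
    return any(sig in payload for sig in signatures)

secret = b"example image bytes"   # stand-in for any binary attachment
signatures = [b"example image"]   # pattern the scanner knows about

# The sender trivially re-encodes the payload before sending.
encoded = codecs.encode(base64.b64encode(secret).decode(), "rot13")

assert naive_scan(secret, signatures)                 # raw bytes are flagged
assert not naive_scan(encoded.encode(), signatures)   # re-encoded bytes slip through

# The receiver reverses the transform losslessly.
decoded = base64.b64decode(codecs.decode(encoded, "rot13"))
assert decoded == secret
```

Any reversible encoding the scanner's authors didn't anticipate works the same way, which is the point: without control of the clients, the scanner is always one transform behind.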

> B) shipping the check to the clients makes it possible for distributors to run the checks and alter their content until it passes the checks.

My understanding is those databases and algorithms are not secret information, but are publicly available to provide low barriers to implementation, so someone could download one and do what you propose now.


> I disagree. I think these scanners can only be good, but never perfect, so they're mainly effective against technically unsophisticated abusers

Assuming the checks are not hash-based (literally any mutation to a file makes these worthless, and the libraries of hashes of illegal content run to gigabytes and growing), the computing power required for client-side matching is infeasible in a product intended for modest consumer hardware.
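
The fragility of exact-hash matching is easy to demonstrate with a toy example (real lists hash actual files; the contents and blocklist here are purely illustrative):

```python
import hashlib

def file_hash(data: bytes) -> str:
    """Exact-match fingerprint: a plain SHA-256 digest of the file bytes."""
    return hashlib.sha256(data).hexdigest()

# Stand-in for a library of hashes of known prohibited files.
blocklist = {file_hash(b"known-bad-file-contents")}

original = b"known-bad-file-contents"
mutated = original + b"\x00"   # append a single byte

assert file_hash(original) in blocklist      # an exact copy is caught
assert file_hash(mutated) not in blocklist   # any one-byte change evades the list
```

The lookup itself is a cheap set membership test; the weakness is that the tiniest re-encode, crop, or appended byte produces a completely different digest.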

Let's assume this is limited to child pornography only. You first need to store some perceptually-encoded version of _every_ illegal image on the user's device (in such a way that it's impossible to reverse-engineer one of the images back out). Then you need to try to match the image being sent against each of the encoded versions of each of those images. On a server farm, that's _maybe_ practical. On someone's crappy Samsung Galaxy phone from 2013, it would take days or weeks to process a single image.
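
For flavor, here is a toy perceptual hash: a simple "average hash" over an 8x8 grayscale grid, nothing like the real (non-public) PhotoDNA algorithm. Small perturbations leave the fingerprint close in Hamming distance, which is exactly why matching requires a nearest-neighbor search over the whole database rather than a cheap exact lookup.

```python
def average_hash(pixels):
    """Toy aHash: one bit per pixel, set if the pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return sum(1 << i for i, p in enumerate(flat) if p > avg)

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Synthetic 8x8 grayscale gradient as a stand-in image.
img = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
tweaked = [row[:] for row in img]
tweaked[0][0] += 3   # tiny perturbation, analogous to re-encoding noise

# Unlike an exact hash, the perceptual fingerprints stay close.
assert hamming(average_hash(img), average_hash(tweaked)) <= 4
```

A match threshold on Hamming distance tolerates minor edits, but it means every candidate image must be compared against every database entry (or an approximate-nearest-neighbor index of them), which is where the per-image cost comes from.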

Let's assume it _was_ some hash based check. People complain that the Facebook Messenger app is over a hundred megabytes. Do you think someone is going to download the Signal client onto their phone with a gig of file hashes so that they can get reported to the police in the event that one of their images has a prohibited hash? No, that's crazy.

And even if it _was_ feasible, Signal is open source [0]. It would take a single person maybe a day of work, tops, to create a version without those restrictions and throw an APK onto a static website.

[0] https://github.com/signalapp/Signal-Android


You're right that it wouldn't work technically. But legal compliance doesn't always make things work the regulators want them to.


It’s not just that. Without safe harbour they are wide open to copyright infringement lawsuits, defamation lawsuits, and being charged with being accessory to, or aiding and abetting any fraud, scams or anything else illegal that takes place on their network.


> I don't think Signal could legally operate in the US with this act in place.

I could do that, because nobody knows who I am.

But then, I'm not technical enough. And I couldn't do that as Mirimir, because that persona has existed too long, and has been far too public.

The point, though, is that I'm confident that it's doable.


It is a threat. Signal could still operate; they would just be at risk of death by a thousand cuts.


Isn’t Signal funded indirectly by the CIA? That’s not a joke.


The state of respect from law and corporations upon consumers is already the single most depressing thing and now earnit. Grew up wanting to live in the future now i just want out. Remember that 15 year joke ‘dont be evil’?

I believe i could self immolate a million times over in front of a variety of scenes and meanings, people could call, write and click, teach and learn. There is however an absolute, it seems, that there is no profitable path for relatively infinite powers (politicians and corporations) to allow any meaningful movement towards the more humanitarian, civil/passionate version of a culture.

Instead we will visibly or not be corralled into a highly monitored and monetized form of drone happiness. Its cool.. as long as zoom always works, right? In a sort of twisted ‘we will do things to them but it wont happen to us’. Perhaps quarantine brain is boiling over into my comment style.


> The state of respect from law and corporations upon consumers is already the single most depressing thing and now earnit.

After five decades of the bloody War on Drugs, I have zero respect for the rule of law.


> Perhaps quarantine brain is boiling over into my comment style.

Or you’re channeling Dostoevsky.

https://en.wikipedia.org/wiki/The_Grand_Inquisitor


[flagged]


I appreciate your sharing this view... but i lack background as to why. This tone and line of thinking is quite regular with the exception of a few friends who prefer surrendering privacy for safety.

Tell me more about your views? Basically im trying to get at does this non psychologist have valid insight or is this just a knee jerk disagreement+quarantine comment?

And to better clarify my boiling over thing it is really to say that with the added time on our hands we all have so much time to read and think about our lives.

Just in case you’re right... hello from loony town. Haha. Sorry not funny.


>Basically im trying to get at does this non psychologist have valid insight or is this just a knee jerk disagreement+quarantine comment?

Neither. I am only reacting to your writing style, which reminds me very much of some schizophrenic people I have known. If I had to describe it, I would say it is characterized by disjointedly jumping around a theme, often using sentence fragments instead of complete sentences. It makes sense to you, but it is difficult for others (well, me) to follow. Again I don't mean this as an attack at all, just as an encouragement to reach out.

I don't have a lot to say about the actual content of your comment, except to say that it sounds awfully pessimistic and that life can surprise us with history's twists and turns. I'm sure things felt similarly hopeless in the early 20th century with the robber barons, or during the plague that immediately preceded the enlightenment. Chin up!


Thanks. I am guilty of being either loved or hated for my communication style. Whether it is or isnt an emerging psychological problem i can only proceed to read always and try to write carefully without impeding my mental rhythm which some may call add. Anyway i really appreciate your thoughtful reply.

Cheers


> this comment sounds like you're entering a schizophrenic episode

> I am not a psychologist

This is wildly inappropriate.


That does appear to be the crowd consensus. However I have recently witnessed, on IRC, someone who was genuinely entering a schizophrenic episode be brought to reality and convinced to seek help by a kindly and understanding channel. Knowing that this is possible, I find it difficult to remain silent if I think there's even a small chance my words might actually make a difference for someone, and the harm inflicted seems to me to be relatively minor; I seem to have upset the crowd here considerably more than the person I was actually addressing.

However, the overwhelming negative feedback suggests there might be a flaw in my logic. So I would appreciate some feedback - why is this so very objectionable?


> why is this so very objectionable?

I think the objections are similar to those against Pascal's wager. Of the four quadrants, the "Believe in god, but god doesn't exist" lists an outcome of "no downside". But that isn't true -- from the perspective of a non-believer, there are downsides to living the life of a believer.

Similarly, your position seems to be "If I give a diagnosis of schizophrenic, but they aren't schizo, there's no downside" (the charitable interpretation of your motivation being "it's better to err on the side of caution"). What you are missing is that it is harmful to tell someone they are schizophrenic when they are not.

Edit: try to put yourself in their position and understand how they would feel. How would you feel if someone at a party pulled you aside and said "I think you might have mental retardation" or "I think you might have cancer". How would a mother feel if a babysitter told her "I think your son has Autism"? Even if they believe these things to be true, they aren't professionals and it isn't their place to give such a diagnosis, and it is inappropriate to do so because of the potential harm caused by an incorrect diagnosis.

Edit2: Also, your post didn't contain any actual help, e.g. "here's the number of a hotline you should call". It was just "Here's my diagnosis".


I understand where you're coming from.

I think in all those circumstances, the delicacy of the delivery, disavowance of unearned authority, and presence of actionable advice makes the difference between "appropriate" and "inappropriate". I would not be in the slightest bit offended if someone at a party said to me "Listen, I'm no dermatologist, but that mole you said wasn't there last month looks a hell of a lot like one my co-worker had, and it turned out he had skin cancer. Couldn't hurt to get it checked out." Broadly, that tone was what I was trying to achieve.

While I don't disagree there's a negative effect from a "false positive", I think it's likely very small. The worst you'll likely do is offend someone. And while your Pascal's wager analogy is astute, I don't think the same objections quite apply - this case is a much more straightforward one of "high probability of very low harm, vs low probability of very high good", closer to buying a lottery ticket than Pascal's dubious infinities. Precisely calculating the expected return isn't possible, so you have to apply your best estimates.

My post did in fact contain actionable advice - "call your loved ones" - which has the useful benefit of working for many psychological issues besides schizophrenia, and also just being a nice thing to do generally. It was the best advice I could come up with. However, I can see now that in this instance I could probably have achieved the same result by giving the advice without the "diagnosis".


> and I am not a psychologist, but

You probably should have just stopped there.


Are companies afraid that opposing the Anti-Encryption Bill will automatically label them as in favor of online child exploitation?

I'm honestly curious about why there's no widespread opposition to the bill yet.


In general they seem to be afraid of standing up to the administration on virtually everything. Facebook in that regard seems particularly embarrassing, with Thiel on the board apparently writing Facebook policy.


Because they want to be regulated in this fashion. It increases the amount of resources a competitor will need just to start a business, and they don't really give a shit about E2E encryption.


Which companies? Most companies don't use e2e encryption because they read your data for ads. Apple, maybe?

Big companies don't generally make ethical stands, and small companies can't afford to. Apple makes some stands but only to be competitive against Android.


There are other methods of lobbying than just public, visible disagreement. They probably are registering their disagreement in private talks with people in congress.

Facebook publicly coming out against this might not be helpful: most people just don’t care. Those that (potentially) do care are far more likely to be mobilized by the EFF or ACLU, which they tend to trust. Facebook isn’t the most trusted brand name in privacy, as far as I can tell. Their support might actually be detrimental for the cause.

An open split between Silicon Valley and Republicans would also “politicize” the issue. Almost instantly, you’d have the 35% of Trump supporters galvanizing around the bill, even if they were previously ignorant of or lukewarm on it. See the recent train wreck around quinine-against-COVID for a great example of this effect.


The Internet Association which represents them wrote a letter opposing it to Congress, although there hasn't been much other noise out of them, except for a minor statement from Facebook.


Senator Feinstein (D-CA) is a do’s-onshore of the bill. Here’s the form to contact her office and encourage her to not support the bill: https://www.feinstein.senate.gov/public/index.cfm/e-mail-me


do’s-onshore = co-sponsor?

Thanks for the link, I sent an email with it.


What is wrong with the wording of the title? The first line is "Signal is warning that an anti-encryption bill circulating in Congress could force the private messaging app to pull out of the US market." Being forced out of the market is different than "threatening to dump the market".


It might be a bit hyperbolic, but the end result is the same. Rather than compromising the integrity of their app, they'd rather no longer offer it to an entire country's market. Whether it is "dumping" the users or "pulling" out of the market, what's the difference? Lavabit shut their entire operation down once they were forced to compromise their system. While Lavabit didn't have much notice, Signal is signaling their intent to their users. If that signals their users to take action by contacting their congress critters to put pressure, then it seems like a good idea.


The bill seems like it would result in forcing e2e out of the market. Each product that offers e2e would then need to make a choice. Remove e2e or keep e2e. If they keep e2e then either they proactively dump the US market or they face legal peril. It seems like the same thing to me.

They don't want to offer a product that doesn't support e2e.


Thread of the blog post (source of the article): https://news.ycombinator.com/item?id=22815112


The sheer irony being that Federal workers have started using Signal instead of other apps, because it's encrypted.


They achieved this in Australia by saying "we don't care how you achieve both security and putting backdoors in, just have a 'capability'". If you don't have the ability to open a backdoor for them you've committed an offence

The best counterargument I came up with at the time is the security of our children. Who the hell knows what teenagers are sending to each other these days? Do we even want to know? I don't, and it's weird that Attorney General Barr wants to open this door. Why risk letting the wrong person sneak into a position where they can see all of our children's messages, everyone deserves real security


If Signal were federated, there would be no single entity to shut down. Alas...


I get where you are coming from, but given the walled garden there is no need to kill the servers. Merely blocking all clients from the App Store and the Play Store would accomplish the same thing, federated or not.


That's why it is essential for end user to have an ability to sideload apps to the phone (i.e. to bypass vendor's store).


Which they do. Well, on Android phones anyway.


Android sideloading has been progressively weakened. Google’s plans for AOSP are that sideloading will soon require using the ADB debugging bridge, which would require enabling developer mode on Android versions that allow it (which would scare most ordinary people away) and would be impossible on Android versions where the manufacturer has forbidden it.


Signal recommends downloading through the play store. They don't endorse downloaded apks


While Signal would prefer that most people with Play store use Play store, downloaded APKs are not entirely unendorsed. Firstly, Signal continues to offer an APK for download. Secondly, if you install Signal from the downloaded APK, then Signal acts as its own app store in order to prompt the user to install updates when available.


And for a very good reason. They're also offering the APK for those that need it, and they might make it more accessible e.g. for US users if EARN IT passes.


Given the amount of open source code already, it should be possible to clone.

Edit: see below, server code is open. Keeping original text below:

IIRC the server code is proprietary, but the clients are open. That's a decent starting point.

https://github.com/signalapp/Signal-Android


The server is also open source https://github.com/signalapp/Signal-Server


Am I mistaken or isn't there some way in which Signal effectively prevents anyone from running their own server? I seem to recall hearing this.

(I mean, there's the obvious practical problem that the official server URL is hardcoded into the app, so if you wanted to use your own server you'd have to build your own copies of the app for you and your communicants, but other than that...?)


A pile of separate Signal clones = zero interoperability = zero functionality. So that's why there aren't any.

You could solve that by Federating, except... Federation would be lovely if you could actually deliver Signal's goals and do federation for free, but what we always see from proponents of Federation is that was their goal and so they're done. Oh you wanted security? Sorry, we federated everything, so you'll need to get every single member of the federation on board with every single change you need, we know you can't get that done but that's fine because our priority was federating stuff, so we are successful, shame about your goals.

As an example, somebody earlier in this thread mentions you can "just" know who is communicating with who anyway. Signal got rid of that, because they can, and it's a security improvement, so they put all the work in and did it. Now even Signal's own servers don't know who sent most messages! "Sealed Sender" means Signal has no idea who is sending this message to my friend Steve. Maybe it's me? No idea. It just has to be somebody who Steve allows to send him messages. Could be Steve loves spam and so it's a spammer. Could be Steve loves the AfD and so it's a Nazi. No way to know without reading the message which only Steve's Signal client can do.

Now imagine trying to roll that out to a federated system. After years of effort maybe you switch it on, and then you find a bug and have to switch it off again for a few years while you fix that. Hopeless.


> You could solve that by Federating, except... Federation would be lovely if you could actually deliver Signal's goals and do federation for free, but what we always see from proponents of Federation is that was their goal and so they're done. Oh you wanted security? Sorry, we federated everything, so you'll need to get every single member of the federation on board with every single change you need, we know you can't get that done but that's fine because our priority was federating stuff, so we are successful, shame about your goals.

I have a lot of serious criticisms of Matrix, to the point where I don't recommend it to friends (yet?), but this feels like an unfair and unserious criticism. I don't think you can fault their motives.

And as another user points out, if Signal goes down in the United States because of legislation, so much for the supposed convenience of your non-federated central server approach! If that happens I'll take Matrix over nothing, thanks.


But conversely, if legislation really succeeds in killing Signal in the entire US (and the EU won't be far behind!), to the point where they're forced to use geo-IP blocks, the end result is still strictly worse.


Your comment makes zero sense, let me explain: most people use Signal through the iOS app. It is very easy to shut down an iOS app.

Hope you got it!


> Your comment makes zero sense, let me explain: most people use Signal through the iOS app. It is very easy to shut down an iOS app.

If Apple users actually controlled the software running on their devices that wouldn't be an issue.

A want for federated services complements a want for control over our computing.


If you care so much about an uncensorable, resilient service, you probably already use either jailbroken iOS or Android. And if you don't, then start. iOS has a 13% market share worldwide anyway.

Hope you got it!


> iOS has a 13% market share anyways.

Not in the US, where as of March 2020 it maintains a 60.1% share.


That'd be easy to fix, if Apple wanted to.


Why the stupid downvotes? Signal's walled garden can be its demise.


Signal is open source. If you want to develop and host your own Signal, go right ahead. You’d just be opening yourself up to the same problem facing the Signal Foundation. As it is, the Signal Foundation would suddenly be open to lawsuits, and they’re the main developers of Signal.


> If you want to develop and host your own Signal, go right ahead.

...and have an ecosystem of users that cannot communicate with users on Signal. That's what lock-in is, and "go right ahead" is just not enough.

> You’d just be opening yourself up to the same problem facing the Signal Foundation

citation needed


So don't be discoverable!

