FBI's ability to legally access secure messaging app content and metadata [pdf] (propertyofthepeople.org)
562 points by sega_sai 58 days ago | 450 comments

Some FBI agents came to my house once and told me that my home Internet had been used to visit Islamic Extremist websites. They brought a local police office with them and a 'threat assessment' coordinator from my workplace. They asked me if my family was Muslim and wanted to know if we had been radicalized.

We are not religious (at all). We do not attend church, synagogue or mosque. We are lower middle class white Americans born and raised in the USA and have never traveled outside the country.

I have no idea why they thought this about us. Maybe it was an IP mix-up, but it was very disturbing. I feared that I may lose my job. I became very afraid of the FBI that day. I think this could happen to anyone at anytime.

> "a 'threat assessment' coordinator from my workplace" ... "I feared that I may lose my job."

I understand that the police/FBI have to conduct investigations. What I don't understand is the involvement of the employer; it's extremely disturbing. You have not been convicted, you have not been charged, you are not even a suspect or accused of anything at this point. How is your private life the business of your employer?

Why is your privacy being breached and your livelihood being placed at risk?

Surely the FBI is not allowed to publicise random dirt they find on innocent people?

The FBI still has buildings named after J Edgar Hoover. That should tell you everything you need to know about their institutional respect for justice and due process.

For non-americans, what was J Edgar Hoover known for?

I'm also not an American - but as far as I've read - massive abuse of power in using the FBI to spy on political rivals, illegal wiretapping, illegal surveillance of US congressmen and even presidents, running the FBI while they were doing extremely controversial programs like COINTELPRO and programs and investigations that tried to hinder the civil rights movement, etc.

As a non American, I think COINTELPRO is the single most anti-democratic abuse of power ever done by US government.



Turning the FBI into a blackmail operation.

Being in the pocket of the mob

Cross dressing

> Surely the FBI is not allowed to publicise random dirt they find on innocent people?

If they're doing an investigation, they very likely got the employer involved in order to get more information on the person they're investigating, and companies have liaisons for law enforcement, as well. If the FBI comes knocking and says, "we think you've hired a terrorist," it's going to ruffle some feathers at the company no matter how unfounded or untruthful the claim is.

It isn't just the suspicion of terrorism that might have law enforcement or the FBI knocking at an employer's door. If someone is suspected of any type of cyber crime, the FBI will be coming for all of their computers and electronic devices, including the ones they use at work.

"If they're doing an investigation, they very likely got the employer involved in order to get more information on the person they're investigating"

What is an employer going to contribute, realistically? "Oh yeah, he always carries potassium nitrate and makes explosions during lunch breaks!"

Depending on the company, they would likely audit his activities in case the company itself was a vector, assuming that terrorists also require intelligence networks.

This is par for the course for FBI intimidation tactics, along with interviewing everyone you've regularly conversed with. It serves a double purpose: investigation, while simultaneously making you radioactive to be around.

Thereby isolating the person during a period of high emotional anxiety.

The employer might have been a defense contractor. Most jobs without clearance don't even have a "threat assessment coordinator".

> Most jobs without clearance don't even have a "threat assessment coordinator"

The title may vary from place to place but all companies have people filling this role, even if you've never met them.

Normally falls somewhere under a team like Global Intelligence, Workplace Security, Business Continuity, etc.

No, most places do not have Global Intelligence or Workplace Security positions. Business Continuity is most often an IT business function ...

Companies that employ software engineers likely are divided into those that have that role and those that don't have it yet.

You deserve to always be presumed innocent until proven guilty, and you have to be proven guilty to be found guilty; realistically speaking, though, those premises are extremely technical.

You don't have to be found guilty to be punished; look up "case load". They can keep you on probation and monitoring for as long as they want to draw out the case, and the whole time you are required to make monthly payments or risk going to jail.

In the US, the process IS the punishment.

This is one of the principal arguments for the "speedy trial" clause of the US 6th Amendment, and for similar rights in other jurisdictions.

Note that this US right does not apply to noncriminal processes: civil lawsuits or other elements of law.

How about a State felony case that has taken nearly two years?

How about it?

Without specifics, or some indication of who is triggering the delay (e.g., defendants may request delays), I couldn't possibly comment.

Given that law and legal processes are not my bailiwick, I'd probably not be able to comment intelligently regardless. But you've posed a null-content question.

The State Attorney General is dragging the case out because they refuse to look at it. They also filed it under the wrong statute, so their arguments are incorrect.

That seems like possible grounds for a challenge. The entire case can be dismissed if the right is denied.



Not during COVID times...

He is already being punished.

I'm reading a book where the main character receives a subpoena for an interview with the political police of the Portuguese dictatorship. Nothing has happened to him (so far), but everybody in the hotel where he is staying starts to treat him differently.

Who will be first in line when a firing is necessary? Probably the guy who has problems with the FBI.

It's (scarily) interesting that they react with actual personal attendance based purely on a very limited set of electronic information.

From your further description:

> We are not religious (at all). We do not attend church, synagogue or mosque. We are lower middle class white Americans born and raised in the USA and have never traveled outside the country.

Wouldn't the FBI have been able to do any amount of background searching (read: further electronic information gathering) that would be less effort-intensive than arranging a 'threat assessment' coordinator from throw_away_dgs' actual workplace and a local police officer for an in-person door-knock? If such background checks were performed, then either they don't have much data or their threat weightings are set to red-scare levels of paranoia. Either way, it's scary.

Unless there's more to the story.

I think what he experienced is another manifestation of the same phenomenon as zero-tolerance policies in schools: institutions ask their enforcers to suspend common sense and strictly enforce the letter of the law/guideline/etc., even in situations where any reasonable person would decide it made no sense. They do this because such common sense and gut feelings are how bias and prejudice might creep into their oh-so-perfect system.

It used to be that if a teacher saw a kid get bullied and then punch his bully back, the teacher was empowered to evaluate the situation using their best judgement, and punish the bully while congratulating the bullied kid who stuck up for himself. The system sees a problem with that; the teacher's perception of the incident might have bias and prejudice. The system's solution is to have zero tolerance for any violence and punish both students equally. The system's solution to the possibility of prejudice against one student is to ensure prejudice against both students.

At my school it was worse than that. Anyone "involved" in a physical altercation would be suspended. Someone could walk up and punch you, and you would be suspended for it. This obviously had a chilling effect on reporting. No more bullying. Problem solved.

Such policies also justify and encourage excessive retribution. If you’re getting suspended whether you fight back or not, may as well cause some real damage to earn it.

> the teacher's perception of the incident might have bias and prejudice.

I mean that's not entirely wrong either. Bullying was still a thing before zero tolerance policies.

Not to say zero tolerance policies are the right solution, but personal bias _is_ a big problem when it comes to enforcement.

Of course. Bias and prejudice are always a real concern. In situations where the teacher gets it wrong and punishes the bullied kid, the kid learns an unfortunate but useful lesson: that some agents of the system cannot be relied on.

But the zero tolerance response to this circumstance ensures the bullied student is prejudiced against, judging him guilty before considering the facts of the individual circumstance. What does that teach the kid? That the system itself cannot be relied on.

to be fair, that's a pretty valuable lesson to learn out here. it would be neat if we had a system we could rely on.

Was about to comment the same thing. I teach future teachers, and I always say that everyone forgets their school math and chemistry lessons after cramming for the test. What sticks is learning how to survive in an unequal, dysfunctional system where you're the oppressed class, fighting among each other while you can't touch the people in power.

This is how 95% of the world works. In most countries, people are conditioned to "join" the rulers from a very young age, and people who use critical thinking are a tiny minority (often invisible)

Bullying is still a thing.

But they did not establish how legislation has elevated itself above that.

They are right that everyone is biased; what they completely fail to establish is how they improved on their own perception. Actions justified merely by the presence of bias and prejudice closely mirror religious dogma when judged by a more objective metric.

> It's (scarily) interesting that they react with actual personal attendance based purely on a very limited set of electronic information.

Either their intel is better than they let on and they didn't think they would be walking into an ambush, or they are more stupid than we think.

Actually, I think they had no intel. You NEED intel for a judge to order a subpoena, and if a subpoena had been issued, the ISP would open their firehose and overwhelm the FBI with evidence suggesting that there's nothing to investigate. And having visited extremist sites a handful of times, even deliberately, is probably not going to meet the threshold for a subpoena.

If the FBI visited me and casually asked about my web history, I would casually ask them to pound sand (as should everyone!). But if the agent was accompanied by someone from my employer, I would eagerly cart up every single device in my home and offer to carry it out to their vehicle (as I fear most would).

It smells like someone is taking massive investigative shortcuts, at very significant cost to the accused. Then again, I can’t even fathom the upside for the FBI.

My gut reaction is simply speed. Why sit at my desk for a few hours reading documents when I can make a couple of phone calls and be scary for 20 minutes to feel secure in saying "yep, not terrorists"?

Or - you know - “weeeelp, I’ve been sitting at this desk all morning, let’s go talk to someone”.

As commenter below says: Power.

Why spend the extra time and effort, let's just hit the road and totally and completely fuck at least one citizen's opinion of the entire system upon which their life and livelihood depends.

Saves me a couple of hours, and the sun's out. Sold!

Ironically, maybe this will actually radicalise the people they're investigating for radicalisation.

> Then again, I can’t even fathom the upside for the FBI.

The upside is power.

You yourself said as much: "If the agent was accompanied with someone from my employer, I would eagerly cart up every single device in my home and offer to carry it out to their vehicle."

You fear them. Rightly so. The FBI has incredible power, backed by the full might of corporate media. To cross them is to be crushed.

Why would they need a warrant, when Apple and Google climb over each other to volunteer every scrap of your private information? Why take the time for a trial, when justice can more efficiently be served by both your employer and your union gleefully ruining you financially upon request?

People have been demanding[1] this for years. Now it's here.

[1] https://xkcd.com/1357/

Apple have famously refused FBI requests.


>If such background checks were performed, then they either don't have much data or their threat weightings are set to red-scare levels of paranoia. Either way, it's scary.

They're not gonna have anything happen to them if they go tough on (and fuck over) an innocent guy.

They're gonna look bad if they miss a terrorist.

So they have no incentive to not have "red-scare levels of paranoia".

That's true. I still remember the fact that the Boston Bombers were on international watch lists, and their home country warned the US (whichever TLA; it may have been an issue of crossed wires) that these guys were on the move, and it was all ignored.

Now, visit a 'bad' website, or somehow be mistaken for someone that visited a 'bad' website, and you'll get some deep personal treatment.

Feds can't win, but it seems to be through their own laziness or incompetence or lack of interagency cooperation.

Or maybe it's about motives, and what level of capture they have over their 'customers'? Seems pretty simple to me. They have a monopoly on service, and the only retribution people can take is political, which means everything is done on appearances.

Imagine being a Muslim in such a case. Trying to convince them that this could be a mix-up (which can easily happen) won't be successful.

Imagine being a Muslim and e.g. having a kid who visits those sites.

Or just going there out of intellectual curiosity, like how a leftie might read Mein Kampf to check what that shit is.

You can end up in a very bad position...

A couple years after 9/11, my father and I had donated to the Holy Land foundation.

The IRS proceeded to audit me (16 years old) and the $8k-a-year wood-selling business I had with my dad. You tell me.

That's absolutely fucked. The whole story of the Holy Land Foundation being railroaded and labeled as terrorists when all they did was advocate for human rights of Palestinians...it's an incredibly chilling story. To hear that those who merely donated to a worthy cause were also then audited...the outrageous injustice makes my blood boil.

That’s better than what happened to my friends, who were having their phones randomly tapped.

I’ll be the dissenting voice and say this reads like a “sow discord in the US 101”. Why on earth would the FBI bring both the police and a “threat assessment” coordinator from your work to interview you? Why would your workplace ever agree to it? That screams lawsuit waiting to happen.

And on that note, why didn’t you sue your workplace for harassment? Whether you’re religious or not isn’t any of their business and is a protected class.

A decade ago the FBI harassed me at my home, waking me from sleep twice, and earlier at a past employer, on entirely unfounded claims.

They didn't care what the consequences were for targeting someone innocent.

They also made nasty threats like "Someone has to go down for this, and if you help us collect intel on the industry peers we suspect, then someone else can be that person."

I told them politely to go die in a fire, because I was not about to help them harass other innocent people, but it was terrifying nonetheless that they seemingly had the power to end my whole universe.

I became convinced through that ordeal that the FBI is a deeply corrupt organization that creates pressure to close cases by any means needed.

The OPs post seems totally believable and consistent with stories I have heard from others, particularly if they work for an organization that has the US government as a customer like a defense contractor.

> Someone has to go down for this

The so called "justice" system, I guess.

You're incredibly naive if you think this kind of stuff doesn't happen all the time since 9/11. I personally know several people with similar stories in the US.

You know several people whose employers sent someone to their house with FBI agents to harass them about their religious beliefs?? And none of them sued?

I’m not surprised at all that the FBI is harassing people, I find it incredibly hard to believe a private business would touch the situation with a 4,000 foot pole. They have absolutely nothing to gain and massive liability.

Is it prohibited to visit those websites? I was once interested in understanding the way radicals think and reading about their arguments, so I spent some time hanging around some radical websites.

I was visited by the FBI for doing security research that made them at least pretend to assume I was a blackhat they wanted to take down.

Use the Tor Browser if you are going to research anything a criminal might, regardless of how pure your motives are.

If you so much as want to research lock picking, use Tor.

ISP traffic logs can and will be twisted against you in a court of law.

I openly participate in locksport communities and I haven't had any visits from the FBI.

I'm fairly confident that those agencies use context in an automated manner to get any meaningful results.

So "keyword" (could be a word, domain or some other pattern) X may trigger only if Y and Z was already triggered. And some keyword A may only trigger if B was NOT present.

This way you can distinguish doctors, reporters or people studying history or chemistry from those who plan something.

Or e.g. ML applied to patterns over time. Globally.
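The context-gated triggering described above could be sketched roughly like this; the rule structure, rule names, and terms are all invented for illustration, not anything these agencies are known to actually use:

```python
# Hypothetical sketch of context-gated keyword rules: a rule fires only when
# all of its required terms co-occur and none of its excluding terms appear.

def fired_rules(rules, observed_terms):
    """Return the names of rules triggered by a collection of observed terms."""
    observed = set(observed_terms)
    return [
        name
        for name, rule in rules.items()
        if rule["requires"] <= observed and not (rule["excludes"] & observed)
    ]

# Invented example rule: flag precursor chatter, but suppress the flag when
# exculpatory context (a student, a licensed hobbyist) is also present.
rules = {
    "flag_precursor": {
        "requires": {"potassium nitrate", "detonator"},
        "excludes": {"chemistry homework", "pyrotechnics license"},
    },
}

print(fired_rules(rules, ["potassium nitrate", "detonator", "chemistry homework"]))
# -> []  (the exculpatory term suppresses the rule)
print(fired_rules(rules, ["potassium nitrate", "detonator"]))
# -> ['flag_precursor']
```

The same structure extends naturally to thresholds or scores per rule instead of hard set membership, which is presumably closer to what any real system layered with ML would do.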

And yes, I do not like it at all. HN is full of people who might well research some kind of bomb, religion, or whatever else out of pure curiosity, but since there are not many such people, it can be a problem in court one day.

Mix in some Snowden, your hardware stack, gag orders, and the fact that we have more laws than anybody can read, and you may feel like just watching some stupid memes instead.

OPSEC is about lowering the probability of things going sideways, there are no guarantees either way

To quote a Dartmouth history professor who taught a class on the subject: "if you don't get randomly selected for a search on your next flight you aren't doing your homework"

It's not prohibited, but they notice, and they subject you to harassment at every interaction with every part of the system that is integrated with their database.

This seems like hyperbole.

Did they have a warrant? Never talk to the police without counsel, refuse all searches without warrants, "we might think you went on a website" is not probable cause, you have a right to an attorney and silence.

Do you work for a government agency or contractor? That might explain why they contacted your employer so readily.

This is why you and everyone should use DNS over HTTPS (DoH).

The next day they might visit you to ask why you are visiting an opposition party's website.

How exactly is DoH a protection? Wouldn't they just see that as a red flag? Then, get the data from cloudflare or whomever.

Most of the time they log your plaintext DNS queries, but DoH is encrypted, so they won't be able to log them. Cloudflare is not the only DoH provider; there are many. If you want, you can grab several lines of PHP code and create your own DoH endpoint in another country. Because DoH is HTTPS, they cannot distinguish it from normal HTTPS traffic. Of course, if they use deep packet inspection tools they will still know what website you are visiting (e.g. from the TLS SNI field), but those are not widely deployed and are used to target specific people. To sum up: DoH is better than plaintext DNS queries.
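For the curious: a DoH request is just an ordinary DNS wire-format message carried over HTTPS (RFC 8484). Here is a minimal sketch of building such a message using only the Python standard library; the resolver URL in the comment is only an example, and nothing here is specific to any one provider:

```python
import struct

def build_dns_query(hostname, qtype=1):
    """Build a DNS wire-format query message (qtype=1 is an A record lookup)."""
    # Header: ID=0, flags=0x0100 (recursion desired), 1 question, no other
    # records. RFC 8484 recommends ID=0 for DoH so responses are cacheable.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # Question section: the name as length-prefixed labels, terminated by a
    # zero byte, followed by QTYPE and QCLASS=IN (1).
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)

# This message would be sent as the body of an HTTPS POST to a DoH resolver,
# e.g. https://cloudflare-dns.com/dns-query, with the request header
# Content-Type: application/dns-message. An on-path observer then sees only a
# TLS connection to the resolver, not the name being looked up.
query = build_dns_query("example.com")
```

The point being: the DNS payload itself is unchanged; all DoH does is wrap it in a TLS session that looks like any other HTTPS traffic to the network.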

Please disambiguate acronyms in the absence of context.

> and a 'threat assessment' coordinator from my workplace.

What was the reason for this? What type of workplace?

I'm assuming any workplace which requires a government security clearance to enter and work in.

That's extremely disturbing. Accessing some random website should never cause police to show up. They should never even know what you did. That's like keeping tabs on what books people read and raiding somebody's house because they looked up how bombs are made.

I never use a VPN. That changes today.

You should use Tor instead. With VPN, you just shift your browsing history from one place to another.

Or worse yet, the VPN provider can sell your data.

Build your own VPN https://github.com/hwdsl2/setup-ipsec-vpn

> We are lower middle class white Americans born and raised in the USA and have never traveled outside the country.

I am most curious why you believe that is a defense against radicalization. In the US that is perhaps the most common demographic for radicalization of any type.

"Radical" to him apparently only goes in front of "Islamic terrorism".

It absolutely can.

The only words you should ever say to the FBI are: "On advice of counsel, I am taking the Fifth."

This is awful advice for this specific situation.

OP apparently managed to clear up the mistake without much bother by speaking to them (although they were understandably shaken up by the experience). This presumably wouldn't have happened if they'd done what you suggest.

Not speaking to law enforcement outside the presence of your attorney is excellent advice. There's no downside to having the attorney there, and potentially life shattering downsides to attempt otherwise.

On the other hand, they could accuse OP of lying (something that's highly subjective), which is a serious federal crime.

They left off one very popular messenger, SMS:

* Message content: All

* Subpoena: can render all message content for the last 1-7 years

* 18 U.S.C 2703(d): can render all message content for the last 1-7 years

* Search warrant: can render all message content for the last 1-7 years

* Vague suspicion plus a small fee to the carrier: can render all message content for the last 1-7 years

There's also:

* Law enforcement simply asks nicely: can render all message content for the last 1-7 years

The Stored Communications Act makes disclosing the contents of messages without a search warrant unlawful

Just like the NSA spying on Americans is unlawful [0], the FBI terrorizing political movements is unlawful [1], and the CIA operating in the US is unlawful [2].

Yet, I'm pretty sure all these are still happening, to a certain degree, to this day.

[0] https://www.reuters.com/article/us-usa-nsa-spying-idUSKBN25T...

[1] https://en.wikipedia.org/wiki/COINTELPRO

[2] https://en.wikipedia.org/wiki/Operation_CHAOS

The larger point would be that if it's obtained unlawfully, it can't be used in a court of law against you.

Recent cases:



That is little consolation in the court of public opinion, where FBI management and the Justice Department have demonstrated willingness and capability to hold mob court and manipulate public opinion outside the formal legal system. They will SWAT you themselves if they like, on live TV.

Parallel Construction makes this a technicality/nuisance, not a show stopper.

came here to say this.

Generally only if you have the means to hire a good lawyer.

The NSA doesn't need to illegally spy on Americans when an ally can do it for them and then share the data legally.



That's not really how it works. Sure, it is also a way to circumvent such local legislation, but for that to work, American allies would need to run actual surveillance infrastructure on the US mainland proper, out in the open.

You know, like the US does in the countries of its "allies", like Germany [0].

Do you really think the US would allow German intelligence agencies to build whole complexes, plugged right into the US's largest IXP?

That's why this situation is not nearly as "symbiotic" as it's often made out to be. At best that applies to Five Eyes countries, and even there only to a very limited degree, as no Five Eyes member has as much foreign presence as the US.

[0] https://en.wikipedia.org/wiki/ECHELON#Examples_of_industrial...

To this rhetorical question, a resounding “yes” answer. There is credible suggestion that GCHQ has been invited to operate US facilities on US soil for this explicit purpose.


The people responsible for investigating and prosecuting such crimes have some not so great incentives to avoid doing so and keep the whole thing secret though, don't they?

And then when they get caught, they do this:


Sounds like an easy way to have your case tossed out in court.

It's funny how much this differs from my own personal experience with law enforcement. The friends I know are timid as hell and don't do anything without a warrant just to stay on the safe side- even if they probably don't need one.

Good luck with that. In my case there were a ton of violations of the SCA. Violations of the SCA are only actionable if they are "constitutional" in nature. (That essentially means that if the government indicts you based on information they illegally gathered by violating the SCA, but the information did not belong to you, say it belonged to your wife or business partner, then you can't get the information suppressed/excluded in court.)

In my case the government did violate the SCA and my constitutional rights, but two judges have looked at it and both stated the same answer - the police must be allowed to commit crimes to gather evidence. Next stop: appeal courts.

Yep, the courts side with law enforcement. The whole 'truth comes out in a fair fight' is completely undermined by this. The system protects itself above all else.

I was involved with a case that sounds similar: the judges don't care about your rights and blatantly misapply the law. Also, magistrates are complete BS and don't even know basic legal stuff. I had one think I called him prejudiced when requesting a case be dismissed with prejudice... Complaints do nothing. There's no real oversight, leading to a completely incompetent system.

> There's no real oversight, leading to a completely incompetent system.

It's the system working as intended. If you want something that looks like justice, you'll need substantial wealth to get it.

You have to generally assume that the FBI and other government agencies are competent. My baseline, starting assumption is that if everyone in the US was too scared to use programs like PRISM, they wouldn't have been built.

So these kinds of claims just don't make any sense in a world where we know that government has conducted surveillance without a warrant, and where we know that the FBI has built entire programs designed to make it easier for them to conduct surveillance without a warrant.

From the article posted that you're replying to:

> What Administration officials tend to obscure is that what they seek is not immunity for future cooperation with lawful surveillance, but rather telecom immunity for assisting with unlawful surveillance conducted from October 2001 through January 17, 2007, as part of the warrantless wiretap program initiated by the White House.

I'm not sure I understand what your implication is. I don't understand how it's possible to respond to an article that is about telecoms seeking immunity for previous unlawful actions by saying, "the government/businesses would be way too scared to do anything unlawful." I mean... obviously not, they sought immunity for it. They wouldn't just randomly do that, the most likely explanation is that they made immunity a pressing issue because they thought they needed it.

It does not seem to me that the optimistic world you describe and the observable actions and lobbying efforts of companies/administrations line up with each other.

I'm just glad you're here to stick up for your friends without any corroboration or linking story. It's just a good thing to do.

Being charitable, let’s assume his friends work as homicide or theft detectives. If so, they need a high standard for admissible evidence to build their case.

If on the other hand his friends are street cops tasked with clearing a corner of drug dealers because some neighbor complained to their council person who complained to the police chief then those cops don’t necessarily care about extrajudicial activities.

Having been harassed by street cops and interacted with homicide detectives, I can tell you they vary tremendously in professionalism.

They definitely need a high standard for admissible evidence, but that doesn't stop them from purchasing large amounts of data from all-too-willing communications companies and using parallel construction to build their case once they find out what happened via warrantless spying.

They can also query these messages to see if there is something on the dealers they get paid from and then warn them if something comes up. It works both ways, no?

Cybercrime. Lots of scams and child abuse.

The really smart cops get the tips using “less than legal” means, then walk back and reconstruct using legal evidence.

"Sounds like an easy way to have your case tossed out in court."

This is terribly naive in my experience.

Imagine a world where the entire law enforcement complex followed the law. What a world.

Let's be honest: how often do people tell their pals about how they commit crimes, or are less than scrupulous, at work (assuming their pals aren't criminals as well)? People tend to keep things like that secret, even from people who are close to them.

EO12333 makes it lawful without a warrant.

> EO12333

An EO making it lawful for a federal agency to collect doesn't mean it is lawful for a private company to disclose, it doesn't change when a company is permitted to disclose the content of messages under the SCA

I mean, this whole discussion is moot since nobody will enforce things like this, especially against themselves.

You are correct. There's also varying 2-party/1-party consent required, depending on the state, in the absence of a warrant. But unless you're targeting the devices, you will not get much at all from service providers. They simply don't keep it, contrary to what I read here.

The reality is that many times the only barrier to sensitive information is a shared login which many people know and a statement that users represent that they have legal authority to access that info.

Tell that to the FAANG companies that provide white glove access to authorities any time they ask.

* Law enforcement wants to stalk ex-girlfriend: can render all message content for the last 1-7 years

Companies also sell data to law enforcement.

Many tech companies even develop nice portals for law enforcement to use where they can request and view data, with or without a warrant or subpoena.

Major service providers do not maintain SMS history beyond 24 hours, let alone 1-7 years (as of the last time I worked a case). They're transparent about it as well: look up the LE liaison contacts on their sites and they'll clearly list what is and isn't available. That's why it's crucial to get the actual devices themselves. Reason: the infrastructure to manage SMS content for every customer for 7 years, with zero business justification/use case, would be phenomenal. They'd spend most of their time responding to civil and criminal subpoenas/warrants. That would be a feat the NSA would be proud of. Been there and done that 100 times. (This also aligns with certain VPN providers refusing to keep logs. It's a cost that provides zero returns, so they cut it as a business decision, not because they're trying to stick it to the man.)

I went to a major cell provider and asked them nicely for access to SMS for all their customers and they happily took money and gave me an API.

This was for a startup.

I have no doubt they do the same for governments.

If I understand this correctly you’re saying a major cell provider is selling you access to subscriber SMS message content?

They sold access to send or send/receive messages for use cases where customers would legitimately consent, e.g. a wireless Bluetooth accessory that wants to access and reply to SMS message content on Apple devices that Apple won't grant access to.

Still. It meant a very powerful API key had to be protected and never abused.

I can only imagine others obtain God SMS access like this with less than ethical intentions.

I'm surprised to hear this has changed so significantly since the Snowden leaks, especially after the blatant attack on Qwest CEO Joseph Nacchio for refusing to spy. It was established then that the major mobile telcos in the USA (T-Mobile, AT&T, Verizon, etc.) were keeping and providing full SMS data for 2-5 years.

and also that the government was subsidizing the programs when the companies complained about the added costs.

There's no reason for them to keep those records, other than for law enforcement's sake. No use case for calling up your operator to ask about that text message you got "from Fred at 4am one day a couple years ago."

> Major service providers do not maintain SMS history beyond 24 hours, let alone 1-7 years

Nobody should make decisions based on this comment.

Agreed. Do your own due diligence.

You forgot email... and they don't need a warrant for messages older than 180 days if in the cloud (and they never delete them): https://www.consumerreports.org/consumerist/house-passes-bil...

IIRC the only reason this amendment was made was because the 180 day limit was found unconstitutional anyway by an appellate court. So, technically the amendment did nothing.

It doesn't matter where your data is held, locally or cloud, (if you are an American resident and your data is in the USA) as it is _your_ data and it is unconstitutional for them to read it without a warrant. In theory.

> It doesn't matter where your data is held, locally or cloud

In the US it does


This ruling has been adopted by the US Supreme Court: https://privacylaw.proskauer.com/2007/06/articles/electronic...

Look at the link I posted above; it is 10 years newer than yours.

If they are local and encrypted... oops, forgot the encryption key.

Source is a few years old, but I suppose we can make another FOIA request to find out how long carriers store text messages these days - it was basically 0-5 days a decade ago:


Idk... back in the mid 2000s my parents managed to get a transcript of all of my (minor) sister's SMS messages going back a few months (as part of a billing dispute).

You'll be lucky if it's any longer than 24 hours now. There's no business use case for building and maintaining the technological infrastructure to manage it for years. It's private info, and they can't sell it to anyone without legal liability. If LE gave them the funds to build this infrastructure and use it for retention, then the service provider is essentially an agent of the state at that point.

You're overstating the technical difficulty of archiving and retrieving text.

I can only imagine that the scale of all US SMS messages is absolutely staggering. It probably eclipses all other text formats combined in terms of daily production. Here's a blog post from a few years ago estimating it at 26 billion text messages per day and rising: https://www.textrequest.com/blog/how-many-texts-people-send-...

Not counting media and assuming they are all 160-byte messages, that's 4 terabytes per day, or about 200 Wikipedias per day. I guess that's not too bad in terms of storage requirements, certainly a manageable amount of data for a telecom to store. But assuming that you want those indexed and easily retrievable somehow, it could get very burdensome to manage and interact with, and that tends to balloon the size at least a little bit as well.
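The back-of-envelope arithmetic above checks out; here's a quick sketch, taking the 26-billion-messages-per-day figure from the linked post and assuming every message is a worst-case 160-byte single-segment SMS:

```python
# Rough check of the storage estimate above, assuming ~26 billion daily
# US text messages, each a full 160-byte single-segment SMS (real
# averages are lower, and MMS/media would add far more).
MESSAGES_PER_DAY = 26_000_000_000
BYTES_PER_MESSAGE = 160  # single-segment SMS limit

daily_bytes = MESSAGES_PER_DAY * BYTES_PER_MESSAGE
daily_tb = daily_bytes / 10**12          # decimal terabytes per day
yearly_pb = daily_bytes * 365 / 10**15   # decimal petabytes per year

print(f"~{daily_tb:.1f} TB/day, ~{yearly_pb:.1f} PB/year of raw text")
# → ~4.2 TB/day, ~1.5 PB/year of raw text
```

So raw storage alone is modest by telecom standards; as the comment notes, it's the indexing, retrieval, and liability that make retention burdensome.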

The liability and legal issues around it (both externally and internally - don't want employees spying on their exes, leaking data from celebs, in addition to the policing issues, etc) makes it pretty undesirable to store though.

It's about secure messaging

This seems like a good place to say that I strongly recommend Yasha Levine's book Surveillance Valley (https://www.goodreads.com/book/show/34220713-surveillance-va...), where he suggests that all of this is working as intended, going all the way back to the military counter-insurgency roots of the ARPANET, first in places like Vietnam and then back home against anti-war and leftist movements. The contemporary themes that are relevant are the fact that current privacy movements like Tor, Signal, OTF, BBG are fundamentally military funded and survive on government contracts. It diverts the needed political discourse into a technological one, where "encryption is the great equalizer" and everyone can resist Big Brother in their own way on the platforms the government has built. Encryption does exist, but it also distracts from other vectors: vulnerabilities (which led to Ulbricht getting caught), what services you would e2e connect to, how you get the clients to connect to those services, what store can push binaries for said clients, etc.

Yasha Levine is a conspiracy theorist hack. There’s really no other way to say it. His narrative is attractive to a left leaning audience with shallow knowledge in this area, but the reality is that without publicly funded software like Tor, Signal, OTF, and my own Lantern, our world would be more fully saturated with corporate control of the internet. We need more public funding for open source software (with public security audits, mind you), not less. Without them, we’d basically be left with Wikipedia as the only popular entity on the internet outside of corporate control.

All of these projects are more properly grouped with government funding in other spheres, such as the BBC or PBS in media, than they are with the surveillance state or the NSA. Levine overlooks basic details, such as reproducible builds, that quickly collapse the house of cards that is his narrative. He tries to paint them all with the NSA brush, when, in fact, they’re simply projects that have historically received some of their funding from the government while fulfilling missions with extraordinary humanitarian benefits. Levine’s own knowledge and experience in this area is shallow. Look elsewhere.

I don't disagree with what you're saying. I'm not sure your statement is in disagreement with mine either? I don't think he's saying less OSS is better or anything dogmatic? All he's saying is that using Tor/Signal shouldn't be the end all be all of your surveillance concerns.

> would be more fully saturated with corporate control of the internet

You might disagree. His point was that the "corporate controllers of the internet" support projects like Tor because A) it gives a (somewhat ineffective) channel for people to focus on rather than political recourse, and B) there's no real threat to the corporate model. What would you do in this e2e-encrypted internet without corporate services?

> such as reproducible builds

Seems like a tangential point. You can have an untampered copy of a client with a vulnerability.

> funding from the government while fulfilling missions with extraordinary humanitarian benefits

I don't think this is in disagreement with anything either

> from the government while fulfilling missions with extraordinary humanitarian benefits

Ahh yes, the famed operation Condor, operation Gladio, operation iceberg and so many other famed "humanitarian" projects

At the end of the day, all that you mentioned comes back to a post-facto "it is good because *we* do it". I would go so far as to say that most people here on HN are well aware of the start of Google, when it was funded by US intelligence as a way to parse Vietnam-era datasets, or of how US intelligence uses Radio Free Asia to destabilize enemy countries abroad. But again, it is only good/not bad when *we* do it.

Apologies for a rather low-quality comment, but these types of people handwaving away the actual structure behind all of this really get on my nerves, especially when I have had family members tortured as a consequence of these US activities.

I’m certainly not defending all US government actions. That’s exactly the point. Levine tries to lump all of this in with surveillance. The US government funds the NSA, that is true. It also funds food stamps. And torture. The trick is to untangle it.

> The trick is to untangle it.

USAID is specifically designed and named that way so as to tangle it. Tell me: how would your average Joe understand that USAID is an intelligence-agency spinoff designed to sound "good" while doing evil all over the world, rather than what its name suggests? You know... aid?

The NSA, CIA, extraordinary rendition, and so many other things don't exist there by accident. If said """government""" wishes to spend such amounts of money and resources to enact such evil under the veil of security, then I don't know about you, but to me and several other people that just reads as "US gov being flat-out evil".

Do remember that there was *wide* support and acceptance back in the Kennedy days for just dissolving the CIA.

> Levine tries to lump all of this in with surveillance.

I am not particularly kind to the guy, but he's merely looking at it at a holistic system-design level; any programmer-minded person would do the exact same thing when presented with a black-box problem.

But as far as the food stamps go, wouldn't it be great if the system were set up in such a way that food stamps were not needed to begin with? And on the flip side, why would "the government" allow such a societal structure, where the maintenance of "food stamps" is necessary for the organization of the nation? I see that last bit in particular, if anything, as a national security problem...

As Clintonites would say: "It's the economy, stupid."

It seems obvious that USAID is an intelligence front (I've encountered a few instances where it was mentioned that someone worked for USAID at the time, while it was simultaneously obvious that it would make way more sense if they were Intelligence), but is there any concrete evidence for that?

> any concrete evidence

What do you mean by "concrete evidence"?

None of this is disputed; they even have their own Wikipedia pages for their different operations and branches within USAID.


*Especially* since we are talking about USAID. In the case of NED, for example, things get slightly murkier, because then it is a matter of private rather than public record, but it still works as a tool for managing semi-clandestine operations and operations which need plausible deniability from the CIA's end, or at least as much deniability as it can muster. Though these days they prefer to work with shell groups and other associated partners, such as the Atlas Network; Radio Free Asia also falls in that category, same with Voice of America.

If you are interested in books, both Killing Hope by William Blum and Legacy of Ashes by Weiner are very, very, very good authoritative sources on the matter.

If you prefer podcasts, Radio War Nerd has a couple of very good episodes on the National Endowment for Democracy, though they both quote excerpts of the books above.

Radio War Nerd EP 274 — National Endowment for Democracy, Part 1 https://podcastaddict.com/episode/121232504

Radio War Nerd EP 275 — National Endowment for Democracy, Part 2 https://podcastaddict.com/episode/121522126

Yes, there is concrete evidence. Specifically, the Office of Public Safety mentioned by Cyanbird was an official cover given to CIA personnel to train local and national police forces in puppet countries in how to fight a 'counterinsurgency'. This included setting up national ID cards to track everyone, NSA-style signals intelligence, and extensive use of torture. One of their favorite methods was to use portable US Army telephones: they had a hand-crank generator capable of producing enough current/voltage to torture but were unlikely to cause cardiac arrest; they had an obvious non-torture use case, so ordering them was not suspicious; and they had very fine wires that could be inserted up the urethra or stuck between teeth to deliver very painful electric shocks to sensitive areas.

Dan Mitrione was a USAID OPS guy who was killed in South America in the 70s (Uruguay, I believe) in retaliation for his role in abuse and torture, and who was known for abducting homeless people upon whom his trainees could practice their torture techniques. The 1980 documentary "Inside the Company" about the CIA lays this out very well. It's long but worth a watch, and I have seen no comparable films exposing this level of CIA activity since.

Vietnam and the Phoenix Program is another classic example. John Manopoli was officially working for OPS in USAID but was in fact CIA. He first implemented the national ID card program they used to generate the lists of thousands of names of folks to abduct, torture, and either imprison or kill, and he was instrumental in that part of the plan as well. Almost the only references to John Manopoli are in books about torture in the Phoenix Program, listings in USAID OPS phone books, or a handful of official OPS papers showing he did the same type of work in a handful of other countries.

While those programs certainly existed, this is a blatant false equivalence: you can still have humanitarian programs while being a military hegemon. It's not one or the other.

This is in fact a distinct reason the CIA/NSA won't accept recruits who have previously served in the Peace Corps (and vice versa), among other reasons.

This comment is an incredibly naive attempt at a smear.

> Without them, we’d basically be left with Wikipedia as the only popular entity on the internet outside of corporate control.

Wikipedia is absolutely not "outside of corporate control". It is trivially astroturfed to advance special interests.

> All of these projects are more properly grouped with government funding in other spheres, such as the BBC or PBS in media

Both BBC and PBS routinely publish outright disinformation to advance the special interests of their corporate/government clients, including the intelligence community. For example, look at PBS Frontline's ridiculous puff piece for the violent extremist group HTS last year.

> Levine overlooks basic details, such as reproducible builds

Reproducible builds are also easily circumvented by selectively deploying backdoors and other malware, based on IP or other fingerprints.

If there are good reasons to dispute Levine's investigative journalism, they're not here.

Um, ok. All of the above projects not only use reproducible builds for many platforms; they're all open source, and they all have public security audits. Those three pillars are about as good as it gets. Is there something you would add?

I’m not claiming PBS and the BBC are perfect entities, but they do offer an alternative source of information that runs against the grain of corporate media. You would prefer…what exactly?
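For what it's worth, the "reproducible builds" pillar mentioned above boils down to a simple check that anyone can run. A toy sketch (the `build()` function here is a stand-in for a real deterministic compiler pipeline, not any actual project's build system):

```python
# Toy illustration of the reproducible-builds idea: if the build is a
# pure function of its inputs, an independent auditor can rebuild from
# source and byte-compare the result against the published artifact.
import hashlib

def build(source: bytes) -> bytes:
    # Stand-in for a real compiler: any deterministic transform works
    # for demonstrating the hash-comparison step.
    return source.upper()

source = b"int main(void) { return 0; }"

# What the project ships, and the hash it publishes alongside it.
published_hash = hashlib.sha256(build(source)).hexdigest()

# An independent auditor rebuilds from the same source...
rebuilt_hash = hashlib.sha256(build(source)).hexdigest()

# ...and a match means the shipped binary really came from this source.
assert rebuilt_hash == published_hash
print("hashes match: build is reproducible")
```

The counterargument elsewhere in this thread (selectively serving different binaries per IP) attacks distribution, not the build itself, which is why the check only helps if auditors actually fetch and compare what users receive.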

> Is there something you would add?

Let's start with "not being created/funded by the State Department or Pentagon".

> You would prefer…what exactly?

Again, let's start with "not being blatant propaganda produced by warmongers".

First, there's a vast difference between the State Department and the Pentagon. Lumping those two together just reflects an unsophisticated understanding of the federal government. Signal has never received any State Department or Pentagon money. Tor had a significant early contribution from a researcher at the Naval Research Laboratory; that's the extent of any Pentagon funding. They have received significant State Department funding, but to call the State Department "warmongers" is just not accurate.

Please stop spreading misinformation. From the Tor Project's public IRS documents:


Yes they’ve received funding from DARPA. I realized I forgot that after I posted. Good catch. To my knowledge, that funding is for new anti-censorship transports to sneak traffic in and out of censored countries.

And the State Department are definitely warmongers.

SecState Kissinger orchestrated the incineration of Laos, Cambodia and Vietnam.

SecState Powell orchestrated the flattening of Iraq.

SecState Clinton orchestrated the butchering of Libya.

SecState Pompeo tried and failed to orchestrate the annihilation of Iran by assassinating top officials and drawing them into war.

And so on and so forth. These aren't even theories. The State Department is closely involved in destabilizing sovereign governments through the full spectrum of means, including war, to advance Washington's interests.

>my own Lantern

Brilliant riposte, but I am curious: what software are you referring to here?

A quick look through their comment submissions points at https://www.getlantern.org/:


Signal isn't funded by the military, by OTF/BBG, or any branch of the USG government. People who claim otherwise are confused (deeply) about a program OTF ran that sponsored third-party security reviews and development projects (summer-of-code style), none of which was mediated through OTF --- it was just a bucket of money.

You should be extremely skeptical about people who bring OTF/BBG up in these discussions. I have complicated feelings about Tor stemming mostly from culture and effectiveness concerns and would push back on claims that it's co-opted by the Navy or corporate interests, but at least I can see a clear (if silly) line connecting Tor to these supposed conflicts of interest.

> Signal isn't funded by the military

Correct, it is not funded by "the military", but this is incorrect

> any branch of the USG government

Because Signal/TextSecure received considerable amounts of seed capital from Radio Free Asia, which is a CIA spinoff, with the explicit aim of funding the development of the cryptography at the grassroots level. Not per se to have full control of it, like the NSA would have done, but because having strong cryptography on such platforms (Telegram might be another) is highly effective against perceived US enemies like, well... Iran, or Syria, and it allows their assets/agents to communicate more easily while abroad without bulky extra proprietary phones or software.

All of that above is mentioned at length on Surveillance Valley btw

It's like you read 4 words from my comment and stopped.

As I understand it, the technology behind Tor is strengthened by an arms race. You want several different well-funded entities running nodes, because that makes the service better for everybody. Even if some of those entities are hostile, they still help unless one entity controls a large portion of interior nodes; and even then, by using Tor you're only giving metadata to that single entity (whichever it is), not anybody else - which is better than you're going to do with alternative technologies.

This analogy unfortunately cuts both ways: if you've got technology that undermines the majority government/power structure in a secure fashion, you'll always have the ability to come in as an intelligence agency and foment an insurgency movement.

Which also unfortunately points to them having exploits no one has discovered yet in said technology tools.

They can still maintain generalized situational control via additional superiority vectors(MASINT, HUMINT, GEOINT, OSINT, FININT etc.)

Ulbricht was caught via poor OPSEC and not via a Firefox/Tor 0day, afaik. Though there was/is speculation that a Firefox/Tor 0day was used to bring down some Tor markets and possibly to locate the Silk Road's server. Silk Road 2.0 was brought down in a few months, which could indicate such a 0day existed. Or that it was run by some former Silk Road staff members who got doxed when Silk Road 1.0 was shut down.

Ulbricht was caught because an FBI agent, who would read things slowly and twice, recognized these 4 letters: heyy.

That's how Ulbricht sometimes spelled hey, and the agent had seen that particular spelling before in his investigation, in an email from Ulbricht's student email address.

Nick Bilton's book “American Kingpin: The Epic Hunt for the Criminal Mastermind Behind the Silk Road” is a great read, highly recommended.

It strikes me as extremely naive to take this at face value. See https://www.reuters.com/article/us-dea-sod-idUSBRE97409R2013...

Much more likely: SIGINT tooling was applied to identify Ulbricht, bulk metadata was turned over for his comms history, and it was pored over for things they could connect with SR to get warrants. IMO, at least.

But getting to claim you're such a sharp investigator that you figured it out by noticing the word "heyy" makes for a much better story to tell an author.

It was more complicated than just heyy, but I won't spoil the book.

It's been awhile since I've read it, but my impression was that solving the case was mostly traditional casework, and a lot of it, by many different people/agents/agencies.

That Reuters article certainly gives pause. Thanks for the link.

That's what they want you to think. He was caught because nothing can match the surveillance arsenal of the NSA.

That's not what I think, that's what Nick Bilton thinks. The quality of his book makes me partial to his thesis, of course, but NSA conspiracy blah adds nothing.

Also, lots more went into catching him than just heyy, but that was the lucky break that got him caught. Now he shares a prison with Dr. Unabomber Kaczynski.

That could be the story but since parallel construction is routinely used to hide the existence of surveillance tools and back doors it’s not unreasonable to doubt it.

I thought I had heard it was stackoverflow, is that looped in somehow?

I don't recall StackOverflow being mentioned, no, but it's been a few years since I've read it.

Correction: He was transferred to a penitentiary in Tucson, Arizona.

Have to admit, I was impressed with the US government's ability to recover bitcoin ransoms paid for cyberattacks. I'm not sure if impressed is the right word.

Wtf, who doesn’t add extra y’s to hey sometimes? That wasn’t evidence.

I don't want to spoil the book; but, yes, that detail got him caught.

It’s not fiction you’re spoiling, but a factual conversation about events that you’re not going into due to spoilers. It is an odd defence that kills the conversation when other people bring up good points.

The parallel construction argument seems way more plausible if there’s nothing else besides “heyy”. If there is more, please say what it is instead of mentioning it exists but refusing to say it.

Where is any evidence of Tor being a military surveillance project? I find it hard to believe an open source project like this has been infiltrated. Yes, there is suspicion some ECC curves are compromised, but only the ones provided by NIST. I'd really like to see evidence of Tor.

The seed for that line of thinking is the fact that a US Navy lab built it.[0] Having said that, I believe that's the only basis and is a far cry from making the theory convincing or even probable.

[0] https://en.m.wikipedia.org/wiki/Tor_(network)

Wow, I feel like an idiot. All this time I had no idea the Navy built it, when a simple Wiki search would have pointed that out. Thanks!

“The Navy built it” is a bit of an exaggeration. Paul Syverson did early work on it at the Naval Research Lab, and Roger Dingledine and Nick Mathewson added to the collaboration at approximately the same time, with neither having anything to do with the Navy. That’s the extent of the military connection - some relationship in the first year or so of an 18 year or so project.

There's been a suspiciously large number of ephemeral hidden services raided or internationally taken down on the Tor network for it to be mere circumstance.

No one takes much notice, since they regularly host the worst content on the internet.

Could as well be insiders though and operations that were planned for years.

Did you even click on the link? Signal gives away NOTHING.

Thank you. I never knew the source of the ridiculous theory that the internet sprang from spying attempts on the Vietnamese. I am always looking for keywords to filter conspiracy weirdos. Yasha Levine added

"are the fact that current privacy movements like Tor, Signal, OTF, BBG are fundamentally military funded and survive on government contracts."

Are those "facts" available for investigating, without having to buy the book?

(That Tor is partly US administration funded is known, but Signal? And what are OTF and BBG?)


Funded by Open Technology Fund (OTF) https://en.wikipedia.org/wiki/Open_Technology_Fund

Which is funded by Radio Free Asia (RFA) https://en.wikipedia.org/wiki/Radio_Free_Asia. It has had a few reboots, but it was created as a CIA program in 1951 (https://en.wikipedia.org/wiki/Radio_Free_Asia_(Committee_for...) to blast shortwave into China from Manila to try to overthrow the Chinese government. It was rebooted more recently with the advent of the Great Firewall of China.

Wow, that is so thin it is transparent. If this is the sort of 'proof' that we are going to find then I am glad you posted the ref here so that I could add yet another kook to the list of those whose privacy/security rantings and books I can ignore. The biggest danger to long-term privacy projects is not the risk of taking advantage of an opportune partnership with a government agency when incentives align, it is conspiracy nutjobs poisoning the well with their paranoia and delusions.

And Signal?

The main tool, used for private communication?

So if you have something to hide, don't use iCloud backup.

And WhatsApp will give them the target's full contact book (that was to be expected), but also everyone who has the target in their contact list. That last one is quite far-reaching.

> if you have something to hide

Most people don't realize that most people have something to hide. The USA has so many laws on its books. Many of which are outright bizarre[0] and some of which normal people might normally break[1].

And that's only counting current/past laws. It wasn't that long ago a US President was suggesting all Muslims should be forced to carry special IDs[2]. If you have a documented history being a Muslim, it could be harder to fight a non-compliance charge.

[0] https://www.quora.com/Why-is-there-a-law-where-you-can-t-put...

[1] https://unusualkentucky.blogspot.com/2008/05/weird-kentucky-...

[2] https://www.snopes.com/fact-check/donald-trump-muslims-id/

I always liked this one I found in the Illinois statutes - it basically criminalizes every person online:

Barratry. If a person wickedly and willfully excites and stirs up actions or quarrels between the people of this State with a view to promote strife and contention, he or she is guilty of the petty offense of common barratry[.]


Barratry typically implies that this is specifically being done with lawsuits and other legal instruments, not in the general case.

There is a renaissance of such laws regarding causing offense. That would basically cover anybody whose face you don't like. I wonder how much consideration goes into suggestions like this. Side effects should normally hit your face like a truck.

Did you even read the Snopes article you referenced before making what seems like a definitive claim that Trump was suggesting Muslims carry special IDs? Because Snopes's own rating is a "Mixture" of truth and falsehood, and if you read the assessment, it is grasping at straws to even reach that conclusion.

Yes, "mixed" means you have to read the nuance. I think I accurately captured the reality. If you have a correction to offer, please do.

EDIT: Ultimately, the nuance in that history is not relevant to the point that criminal law changes to include new categories in unexpected ways.

Sure, I can accept there is some nuance, but the phrasing and definitive manner of your original statement is very misleading. I'm not the biggest fan of the guy, but casually claiming that he suggested the idea when in actuality it was posed by a reporter is bad faith, in my opinion.

> “Certain things will be done that we never thought would happen in this country in terms of information and learning about the enemy,” he added. “We’re going to have to do things that were frankly unthinkable a year ago.”

> “We’re going to have to look at a lot of things very closely,” Trump continued. “We’re going to have to look at the mosques. We’re going to have to look very, very carefully.”

That's all he said to the interviewer. The interviewer was asking the hypothetical and suggested the special identification! He wouldn't take the bait, so since he didn't answer the hypothetical, they said "he wouldn't deny it" and wrote the campaign of hit-piece articles anyway. Whatever response they got, they would have written that same piece. If he had answered one way, they would have quoted him out of context. Since he responded generically, it's obviously drummed up. The fact check is hilarious. "Mixture", lol.

Never answer a hypothetical, it's always a trap.

Your last sentence just made me freak out, thinking that I've previously done something that stupid in front of a "law officer". I never for one second thought it could be a trap; I was overly willing to cooperate and truthfully respond to a "theoretical" inquiry. Damn, it hurts in retrospect.

> That's all he said to the interviewer

And then the next day, he clarified:

Reporter: "Should there be a database or system that tracks Muslims in this country?"

Trump: "There should be a lot of systems, beyond databases. I mean, we should have a lot of systems."

And then he tried to backpedal. Decided it was a watch list, not a database, etc. Basically the usual shtick of his where he tries to say everything and nothing at the same time.

Again that's a generic response:

> There should be a lot of systems, beyond databases. I mean, we should have a lot of systems

Beyond databases. What does that mean? That could be analog systems, that could be anything not stored in a computer.

Nothing to do with identification, which would need a database. It's a generic answer to avoid a hypothetical. It's a non-answer.

He said nothing, not everything. You are attributing the reporters question to him. The reporter is posing the hypothetical that they created in the first place by the initial interview.

My main point was that hypotheticals are always a trap (unless among friends!), but that's a great example of an obvious one.

The usual shtick is to say nothing, because the usual journalistic shtick is to ask gotcha hypotheticals.

You're kind of quibbling over details. The below quote is already bad enough:

> "We’re going to have to look at the mosques. We’re going to have to look very, very carefully."

I already do not trust the person who has said that. Does it really matter if he proposed a full-fledged ID system? He still proposed monitoring mosques. He still proposed surveillance based on religious identity.

The correct answer to that question, "should Muslims be subject to special scrutiny" is a simple "no". I don't really get the debate about hypotheticals; this a question that does have a straightforward, right answer. And the implications here in regards to surveillance and ordinary people having stuff to hide -- those implications are all the same regardless of whether or not Trump actually proposed a literal database.

He was open to increased surveillance of Americans based on their religious identity; he didn't immediately shut the idea down.

Details are important. The media campaigns claimed he wanted Muslim identification, a system THEY proposed in their hypothetical. When he didn't confirm it, they said "he wouldn't deny it" as their proof of support.

> The below quote is already bad enough. He still proposed surveillance based on religious identity.

He said nothing about citizens or monitoring them based on religious identity. He said look at mosques, that's all. Mosques are often the target of attacks.


Are you proposing that increased surveillance of mosques is to protect them? That requires a certain level of imagination given the full context of the quote:

> "Certain things will be done that we never thought would happen in this country in terms of information and learning about the enemy," he added. "We’re going to have to do things that were frankly unthinkable a year ago."

> "We’re going to have to look at a lot of things very closely," Trump continued. "We’re going to have to look at the mosques. We’re going to have to look very, very carefully."


And once again, it kind of doesn't matter. An increased focus on monitoring places of worship is monitoring people based on their religious identity. I don't know a single Christian who would argue to me that monitoring churches isn't the same thing as monitoring Christians.

Mosques and churches are not abstract concepts that are divorced from the people inside of them. When you monitor an institution, you are necessarily monitoring the people inside of it, and it is reasonable for them to be concerned about the government taking an interest in their religious-identity. To argue otherwise requires someone to completely divorce religious identity from the practice of religion, and that's just not a reasonable argument to make.


> Details are important.

Not in the context of the original statement, "ordinary people often do have something to hide, and should care about privacy." Look, whatever, you trust Trump. You shouldn't, but you do. Fine.

Do you trust Biden? Do you trust the current government not to attempt to monitor you based on your vaccine status?

You're fighting over the idea that "your guy" wouldn't surveil ordinary people, but this also kind of doesn't matter because your guy isn't in the White House right now, and I can guarantee you that Republicans are never going to have permanent power over the government. No party wins forever. You have as much reason as anyone else to care about personal privacy, so why are you fighting over who specifically is a threat? Does it change anything about the overall privacy debate?

> Again that's a generic response

Like I said, he always manages to say exactly the right things so the people who support him will read between the lines, but leave just enough ambiguity so those same people can quibble constantly over whether that was what he really meant.

> hypotheticals are always trap

He could have just said "No." Or "I have no such plans at this time." if he wanted to sound like a typical politician. His circumlocution is legendary, because it allows everyone to believe what they want to believe. Politicians all have this problem, but Trump elevates it to a whole new level.

You and the person you are communicating with must both not use iCloud backup. And since Apple pushes the backup features pretty heavily, you can be reasonably sure that the person you are communicating with is using backups. I.e., you cannot use iMessage.

I got off all Apple products when they showed me their privacy stance is little more than marketing during the CSAM fiasco, but IIRC the trouble with iCloud backup is it stores the private key used to encrypt your iMessages backup. Not ideal to be sure, but wouldn't iMessage users be well protected against dragnet surveillance, or do we know that they're decrypting these messages en masse and sharing them with state authorities?

You wouldn't think most large states have hacked Apple's iCloud backup servers 20 times over at this point?

iCloud backup can back up your whole phone, specifically the files section. iOS and OS X users can save anything to that.

Has Apple made any public statements regarding iCloud's lack of privacy features? It takes the wind out of their privacy marketing, which effectively hurts ad tech but doesn't truly protect consumers from state-level actors with data access.

Kind of. These details are indeed publicly written on their website[0]. Do many users ever read this page? Probably not.

[0] https://support.apple.com/en-us/HT202303

Here is an excerpt. The language makes it sound like encryption is enabled, and the chart lists iCloud features as protected on the server and in transit. Seems like smoke and mirrors, then.

> On each of your devices, the data that you store in iCloud and that's associated with your Apple ID is protected with a key derived from information unique to that device, combined with your device passcode which only you know. No one else, not even Apple, can access end-to-end encrypted information.

E2EE for backups was in the iOS 15 beta but was removed (it did not land in the release) after they changed the timetable of the CSAM scanning feature. So we will see if we get E2EE backups once that image scanning lands.

Can you turn that off if you have iCloud, or do you need to not use iCloud altogether?

Yes, and you can delete old backups on iCloud - and then switch to local, automatic, fully encrypted backups to a Mac or PC running iTunes.

HN tends to get very frothy-at-the-mouth over Apple and privacy, but the reality is that iPhones can be easily set up to offer security and privacy that is best in class. They play well with self-hosted sync services like Nextcloud, and unlike the Android-based "privacy" distros you're not running an OS made by a bunch of random nameless people, you can use banking apps, etc.

The only feature I miss is being able to control background data usage like Android does.

You can turn it off individually just for Messages, but you're still left not knowing the state of the setting on the other end.

It says Telegram has no message content. Isn't Telegram not E2EE by default, instead requiring explicit steps to make a conversation encrypted?

Either way, it looks like Signal wins by a lot. The size of its spot is so small, it seems almost squeezed in. But only because they have nothing to share.

For Signal users this means the messages of course do exist on your phone, which will be the first thing these agencies seek to abscond with once you're detained, as it's infinitely more crackable in their hands.

As a casual reminder: the Fifth Amendment protects your speech, not your biometrics. Do not use face or fingerprint to secure your phone. Use a strong passphrase, and if in doubt, power down the phone (Android), as this offers the greatest protection against the offline brute-force and side-channel attacks currently used to exploit running processes on the phone.

My advice if you’re not on the level where three letter agencies are actively interested in your comings and goings:

- Use a strong pass phrase

- Enable biometrics so you don’t need to type that pass phrase 100 times per day

- Learn the shortcut to have your phone disable biometrics and require the pass phrase so you can use it when police is coming for you, you’re entering the immigration line in the airport etc. - on iPhone this is mashing the side button 5 times

> Learn the shortcut to have your phone disable biometrics and require the pass phrase

On my Pixel (Android), it's hold the power button for ~2 seconds then select Lockdown.

In case anyone with an Android is confused because they don't see the option: I believe that you have to explicitly enable the Lockdown option in Android's system settings before it shows up.

There are a couple of apps that will also lock down instantly with a tap; I'm sorry, I forget the names (I have been using an iPhone too long now), but they're handy if you have the phone in hand and "open". You can put a shortcut on every "page" of your Android and tap it, and it enforces locking the phone by passcode. So on most phones it would be a swipe and a tap, probably less than 200 milliseconds if you practiced it.

On recent iPhones, the way to disable biometrics is to hold the side button and either volume button until a prompt appears, then tap cancel. Mashing the side button 5 times does not work.

Not sure how recent you're talking but I have an iPhone 11 Pro and I just tested pressing the side button 5 times and it takes me to the power off screen and prompts me for my password the same way that side button + volume does.

Apple's docs also say that pressing the side button 5 times still works.

> If you use the Emergency SOS shortcut, you need to enter your passcode to re-enable Touch ID, even if you don't complete a call to emergency services.


Pressing it five times starts the emergency SOS countdown (and requires the passcode next time) on my iPhone XS. Maybe you have the auto-calling disabled?

It doesn't on my 2nd Gen iPhone SE (2020). That said, anything that causes the "swipe to power off" screen to appear has the same effect, so essentially holding down the button for 5 seconds does the trick.

The side button 5 times thing is disabled by default, but can be enabled from Settings > Emergency SOS.

I just verified this on iOS 15.1 on an iPhone 12.

Works fine on my 11, my wife's 12, her backup SE gen 2 and my backup SE gen 1.

Just tested all of them

I’m on an iPhone 13 and the latest iOS and it does work here. But so does your method…

But I guess yours is the "official" way to do it indeed.


If you _are_ at the level where TLAs are interested in you they will not give you a chance to mash that button. You will have a loaded gun pointed at your head out of nowhere and you will freeze. From experience.

Is that a story you mind sharing?

He got popped for pedophilia if I remember correctly.

Not sure why this is downvoted; you are right.

I use this app on my phone


It locks the phone when a movement threshold is broken, and then requires the password instead of biometrics to unlock the phone.

So the snatch the phone when it is unlocked vector gets harder.

In most cases you are going to want to separately passphrase your messaging stuff so it is locked up when you are not using it. That makes everything else a lot easier. For example, there is a Signal fork that supports such operation:

* https://github.com/mollyim/mollyim-android

So you're saying I should have to type a secure passcode every single time I want to read or send a message on my phone?

No thanks.

I think that it would stay unlocked for a time, possibly until you locked it. Possibly such an arrangement would be more practical for something offline like encrypted email.

A compromise would be to just save the messages under a passphrase. You could use a public key so that you would only need the passphrase to read old messages. I haven't heard of anything that actually does this.

I just tried this and it does not work on my iPhone. Is it only on a certain iOS version? I am a bit behind on updates. Thanks

That's actually the old method for iPhone 7 and before. Now, you can activate emergency SOS by holding the power button and one of the volume buttons. Assuming you don't need to contact any emergency contacts or services, just cancel out of that and your passcode will be required to unlock.


Try: Hold "volume up" and "power" for 2 seconds

You'll feel a vibration, and biometric login will be disabled until you enter your passcode.

That did the trick, thanks. But ultimately I'm behind on updates, so my phone could probably be broken into trivially with the forensic tools available to most law enforcement. I'm going to update soon.

Don't have any family or friends, either. If you refuse to talk and invoke your rights the government will just threaten to hurt those you love until you break and give up your passwords. From experience.

I liked it in Wrath of Man where one guy is acting tough as fuck until they bring his girl into the room.

Also, if you can, if you are encrypting data, use a hidden volume inside the first - that way you can give the government the outer password and they'll be happy thinking they have everything.

Signal recently added 'disappearing messages' which lets you specify how long a chat you initiate remains before being deleted.

Not "recently". Disappearing messages have been there for at least 5 years.

Almost _all_ my Signal chats are on 1 week or 1 day disappearing settings. It helps to remind everyone to grab useful info out of the chat (for example, stick dinner plan times/dates/locations into a calendar) rather than hoping everybody on the chat remembers to delete messages intended to be ephemeral.

The "$person set disappearing messages to 5 minutes" has become shorthand for "juicy tidbit that's not to be repeated" amongst quite a few of my circles of friends. Even in face to face discussion, someone will occasionally say something like "bigiain has set disappearing messages to five minutes" as a joke/gag way of saying what used to be expressed as "Don't tell anyone, but..."

(I just looked it up, https://signal.org/blog/disappearing-messages/ from Oct 2016.)

Maybe it was only added recently on the desktop client.

Keep in mind that any time a message is on flash storage there might be a hidden copy kept for flash technical reasons. It is hard to get to (particularly if the disk is encrypted) but might still be accessible in some cases.

I think encrypted messengers should have a "completely off the record" mode that can easily be switched on and off. Such a mode would guarantee that your messages are never stored anywhere that might become permanent. When you switch it off then everything is wiped from memory. That might be a good time to ensure any keys associated with a forward secrecy scheme are wiped as well.
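A minimal Python sketch of that idea, with invented names (and a big caveat: Python can't truly guarantee memory erasure, since the runtime may keep copies, so this is conceptual only):

```python
import secrets


class OffTheRecordSession:
    """Hypothetical 'off the record' mode: keys exist only in memory."""

    def __init__(self):
        # bytearray is mutable, so we can overwrite the bytes in place later.
        self.key = bytearray(secrets.token_bytes(32))
        self.active = True

    def close(self):
        # Best-effort wipe: overwrite the key material, then drop it.
        # A forward-secrecy ratchet's keys would be wiped here too.
        for i in range(len(self.key)):
            self.key[i] = 0
        self.key = bytearray()
        self.active = False


session = OffTheRecordSession()
# ... exchange ephemeral messages keyed by session.key ...
session.close()
assert session.key == bytearray()  # nothing left to seize afterwards
```

The hard part a real implementation would face is everything below this layer: flash wear-leveling, swap, and OS-level caches can all retain copies the app never sees.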

And a screenshot, or another camera, or a rooted phone can easily defeat that.

The analog hole ALWAYS exists. Pretending it doesn't is ridiculous.

> And a screenshot, or another camera, or a rooted phone can easily defeat that.

Not if the message has already been deleted. Auto-deleting messages are so the recipient doesn't have to delete them manually, not so the recipient can't possibly keep a copy.

Exactly this. Even more: auto-deleting messages are also so that the sender doesn't have to delete them manually. Most people do not understand this. I even had a discussion with an open-source chat app implementer who insisted on not implementing disappearing messages because they couldn't really be enforced.

That's a different threat model, no messaging app is trying to protect the sender from the receiver. Disappearing messages are meant to protect two parties communicating with each other against a 3rd party who would eventually gain access to the device and its data.

Wickr has a "screenshot notification to sender" feature (which of course, can be worked around by taking a pic of the screen without Wickr knowing you've done it).

What made you think I was pretending it doesn't?

Also, iOS has a panic button. Hit the main/screen button (on the right) five times really fast and Face ID/Touch ID is disabled and the passcode is required.

Your statement on the 5th amendment is no longer accurate broadly, but the matter still has some cross-jurisdictional disagreement: https://americanlegalnews.com/biometrics-covered-by-fifth-am...

District courts don't make law. Magistrates working for those district courts even less so. The case this news article cites has no precedential value anywhere - not even within N.D.Cal. - and should not be relied upon.


Agreed. That decision is unlikely to be repeated by any appellate court. IMO, all the rulings on biometrics not being testimonial are constitutionally correct, even if that sucks. A lot of constitutional rulings suck.

The real solution is for a federal statute to require warrants.

> do not use face or fingerprint to secure your phone

but can't they force you to put your password in that case, instead of your finger?

In general, no.

The contents of your mind are protected because you must take an active part in disclosing them. Of course, they can still order you to give them the password and stick you in jail on contempt-of-court charges if you don't.

Check out Habeas Data. It's a fascinating/horrifying book detailing much of this.

To err on the side of caution, it's best to make all your passcodes themselves an admission to a crime.

"Your honor, the state agrees to not prosecute on any information inferrable from the text of the password."

"Understood. The defendant's Fifth Amendment right to protection from self-incrimination is secured. As per the prior ruling, the defendant will remain in custody for contempt of court until such time as they divulge the necessary password to comply with the warrant."

I don't know why you're being downvoted. For a start, if it was a third party that had the passcode and refused to divulge it they can be held in jail until they release it, e.g. if your wife knows it. (There are many cases where people have been sentenced to years or decades in prison for not testifying)

If it is you not divulging your own passcode, then legally the judge can't give you contempt, but in reality they could give you contempt until you fought it through the appellate court. Contempt is a special type of thing - certainly here in Illinois you have no right to a jury trial on contempt charges. You're just fucked.

I believe judges can, in fact, hold a defendant for refusing to give up their own passwords, and that the contempt could be indefinite. This is a point of law that is not settled at the federal level yet, and at the state level it varies from jurisdiction to jurisdiction.

In one case, the federal appellate court simply refused to hear a case that had been decided at the state supreme court level.


They don't actually need your passphrase to unlock your phone - they just need somebody with the passphrase to unlock it for them. And if there's any doubt about who that is, then having that passphrase counts as testimonial; but if there's not - it might not count as testimonial.

Although there are apparently a whole bunch of legal details that matter here; courts have in some cases held that defendants can be forced to decrypt a device when the mere act of being able to decrypt it is itself a foregone conclusion.

(If you want to google a few of these cases, the all writs act is a decent keyword to include in the search).

The defendant never needs to divulge the passphrase - they simply need to provide a decrypted laptop.

We really should up our game on encryption, perhaps some kind of time-based crypto rotation that inherently self-destructs rendering the data unusable if you don't authenticate with it every so often. If you are physically unable to unlock a device you can't be compelled to do so.

My passwords are so obscene it's a crime to write them down.

great, so they'll just be able to hit you with lewd charges on top of everything else they are filing.

I think a fingerprint is easier to get if you’re not willing to cooperate. However, I think if they really, I mean really want your password, they will probably find a way to get it out of you. I think it also depends if it’s the local sheriff asking for your password or someone from the FBI while you’re tied up in a bunker somewhere in Nevada.

Apple should allow for 2 PWs, one the real PW, the other triggers a "self-destruct" mode.

Knowing that is possible law enforcement would then hesitate to ask.

Using such a self-destruct mode would be a certain way of getting yourself charged with destroying evidence/contempt of court/... though.

This would be difficult to prove. They would have to know for certain the evidence was on there to begin with. I don't see the prosecutor easily meeting their burden of proof on this charge.

This is how the statute is worded here in Illinois:

"A person obstructs justice when, with intent to prevent the apprehension or obstruct the prosecution or defense of any person, he or she knowingly commits any of the following acts: (1) Destroys, alters, conceals or disguises physical evidence."

Ugh. It's a vague law. I don't even know how they would prosecute that for virtual evidence held on a device that they didn't already have a view inside of.

I was under such duress that I was shaking so badly that I made typos in my 30-character password 10 times. The loss of evidence is not my fault; it is the fault of the people putting me under that duress. Don't think it'll hold up, though.

No 5th Amendment protection? If you spoke the command / "password", would it matter?

FaceID can already prevent a device from unlocking if someone is sleeping. In theory devices could detect if they were being unlocked "under duress" by using biometrics to look at facial expressions, heartbeat, etc, and then wipe themselves. I don't know how practical in reality but perhaps it could be a feature you turn on in a sensitive environment.

How? They can physically overpower you and place the sensor against your finger, or in front of your eye and pry it open without your consent and gain access with 0 input from you. How do they similarly force you to type something that requires deliberate, repeated concrete actions on your part?

In my case they threatened to harm my wife if I didn't stop refusing. After my case is over I'll happily release the video tapes so you can see how this shit works.

Please do. Very few people realize just how bad things can get with law enforcement.

No. The Fifth Amendment has been read weirdly by the Supreme Court.

The fifth amendment doesn't protect either speech or biometrics. Nor does it protect passwords.

You are wrong. It protects passwords as speech, as they are testimonial, per many court rulings. It does not protect biometrics based on law that basically says the police can force you to give up your fingerprints for their records, so they can sure as fuck force your finger onto a reader.

(Oh and by the way, as I mentioned in the comment you replied to, the fifth amendment DOES NOT PROTECT SPEECH. That's the FIRST amendment. The FIFTH amendment protects AGAINST SELF-INCRIMINATION.)

> It protects passwords as speech, as they are testimonial, per many court rulings.

Not true.

https://www.reuters.com/business/legal/us-supreme-court-nixe... for example.

Can they force someone to LOOK at the phone? FaceID with attention check will need you to look before it opens.

Arguably, yes. That's why it's important to know the shortcut on iOS to render faceid inoperable until you give it the password - mash the power button five times fast!

Telegram is encrypted OVER THE WIRE and AT REST by default with strong encryption no matter what you do. It's E2EE if you select private chat with someone.

Lots of FUD out there about Telegram not being encrypted that's just not true. There's nothing either side can do to send a message in clear text / unencrypted.

"Encrypted OVER THE WIRE and AT REST" means that telegram has easy and unfettered access to chat logs. So they can give it up to authorities. (I don't argue that they DO, just that they very much CAN).

This is proven by an extremely simple experiment: you log in on your new phone, enter password and instantly see all chats.

Another simple experiment suggests the chats are unlikely to even be encrypted at rest: Telegram has extremely fast server-side message search. You log into a web client, and half a second later you can type a search query and uncover chats from years ago.

It kinda depends on if images and videos are encrypted separately and only indexed at first.

How much data is there in your chats? 1 megabyte is around one thick book in plaintext.

AES-CBC, as an example method, decrypts at more than 2 gigabits per second with hardware opcodes (on a 2012 processor); see for example this data: https://www.bearssl.org/speed.html

At this scale, it is impossible to tell from search delay alone whether there is encryption of the plaintext.

Encryption over the wire and at rest is a basic expectation of any web service today. They would meet that criterion just by using SSL and disk encryption on their servers. E2EE is a much stronger criterion.

> It's E2EE if you select private chat with someone.

And it's not E2EE if you fail to select private chat.

What this means is that any conversations where you do select E2EE are the ones the "authorities" will take interest in, even if only to the extent of metadata.

That's the fundamental problem with E2EE-by-exception, rather than by default. It calls attention to specific data, even if its not cleartext, rather than obscuring everything.

(How) does the Telegram server prevent unencrypted content?

Also curious - how does Telegram support encryption for chatrooms without the parties being known in advance? Or are those chats not encrypted?

Telegram only uses end to end encryption for secret chats. All other chats are only encrypted on the wire with Telegram's keys. Your comment was encrypted on the wire to HN but that's not going to do anything to keep it away from the FBI. The majority of all Telegram messages are only secured by Telegram's unwillingness to cave to outside pressure. It's in plaintext as far as they're concerned.

For somebody who isn't super cryptography-savvy, what's the difference between over the wire and E2EE? Does the former mean that Telegram itself can read non-private-chat messages if it so chooses?

> For somebody who isn't super cryptography-savvy, what's the difference between over the wire and E2EE?

E2EE: As long as it is correctly set up and no significant breakthrough happens in math, nobody except the sender and the receiver can read the messages.

> Does the former mean that telegram itself can read non-private-chat messages if it so chooses?

Correct. They say they store messages encrypted and store keys and messages in different jurisdictions, effectively preventing themselves from abusing it or being coerced into giving it away, but this cannot be proven.

If your life depends on it, use Signal; otherwise use the one you prefer and can get your friends to use (preferably not WhatsApp though, as it leaks all your connections to Facebook and uploads your data unencrypted to Google for indexing(!) if you enable backups).

Edited to remove ridiculously wrong statement, thanks kind SquishyPanda23 who pointed it out.

> nobody except the sender, the receiver and the service provider can read the messages

E2EE means the service provider cannot read the messages.

Only the sender and receiver can.

Thanks! I edited a whole lot and that came out ridiculously wrong! :-)

Haha, no problem. I do that a lot too :)

Forgot to upvote you yesterday, done now ;-)

Yeah, if you connect to https://facebook.com and use messenger, it's encrypted over the wire because you're using HTTPS (TLS). But it's not E2EE.

Pretty much. End-to-end uses the encryption keys of both _users_ to send. Over the wire has both sides use the platform's keys, so the platform decrypts, stores in plain text, and sends it encrypted again to the other side. Over the wire is basically just HTTPS.

Over the wire is when it's encrypted during transmission between the user and Telegram's servers: HTTPS or SSL/TLS, etc. At rest is when it's encrypted in their DBs or on hard drives, etc. Theoretically, Telegram can still read the contents if they wished to do so, if they set up the appropriate code or tools in between these steps.

E2EE means that the users exchange encryption keys, and they encrypt the data at the client, so that only the other client can decrypt it. Meaning Telegram can never inspect the data if they wanted to.
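The architectural difference can be sketched with stdlib-only toy crypto. The XOR keystream below is NOT a real cipher (a real system would use an AEAD like AES-GCM); it only stands in for "encryption under whoever holds the key":

```python
import hashlib
import secrets


def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher for illustration only -- NOT secure."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))


message = b"meet at noon"

# Over-the-wire (transport) encryption: the link is protected, but the
# SERVER holds the key, so after decrypting it sees plaintext.
server_key = secrets.token_bytes(32)
on_the_wire = keystream_xor(server_key, message)
plaintext_at_server = keystream_xor(server_key, on_the_wire)  # server reads it

# E2EE: only the two users share the key; the server merely relays
# ciphertext it has no key for.
user_shared_key = secrets.token_bytes(32)
ciphertext = keystream_xor(user_shared_key, message)
# server stores/forwards `ciphertext`, cannot decrypt it
```

In the first case a subpoena to the service yields `plaintext_at_server`; in the second it yields only `ciphertext`.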

I very much doubt that Telegram really does encrypt messages "at rest": their server-side full-text search works extremely fast.

That's a fair assessment, I didn't make the original claim, just answered the definitions of the encryption states.

I haven't dug enough to know what telegram does or claims to do.

Yes. Worth remembering also that even with E2EE, an ad-tech-driven company could have endpoints determine marketing segments based on the content of conversations and report those to the company to better target ad spend.

Also, as is the case with WhatsApp, they siphon off your metadata and even have the gall to make an agreement with Google to upload message content unencrypted to Google when one enables backups.

Are you trolling? Telegram (and therefore the FBI) has full access to the content of every message, unless you use private chat, which nobody does, and which isn't even available on desktop. I use it, but it's about as private as Discord. Which is to say, not at all.

> the FBI's ability to legally access secure content

Maybe there are laws preventing legal access to message content? Maybe related to wherever Telegram is incorporated.

> Maybe there are laws preventing legal access to message content?

Well sure. A lot of laws require a court order. In the U.S. that's usually not too difficult.

It helps that Telegram is HQ'd in the UK and the operational center is in Dubai.

Does it? The UK and Dubai are US partners in intelligence gathering and have worked together several times.

Biggest example as of late: https://www.bbc.com/news/world-middle-east-58558690

I don't know whether Telegram is E2EE by default (probably not.) When you do a call on telegram you are given a series of emoji and they are supposed to match what the person on the other side has, and that's supposed to indicate E2EE for that call.

Verification in band seems pretty meaningless, approaching security theatre.

For voice? It's hard to fake the voice of someone you know.

You don't have to fake the voice, just MITM and record cleartext.

But they have to fake the voice, if I call the other person and say "my emoji sequence is this, this and that" for the other person to verify and vice-versa.

Person A calls you. I intercept the call, so person A is calling me, and then I call you (spoofing so I look like Person A). When you pick up, I pick up, then I transmit what you're saying to Person A (and vice versa).

How do you know I'm intercepting the transmission? Does the emoji sequence verify the call, perhaps?

The emoji sequence is a hash of the secret key values generated as part of a modified/extended version of the Diffie-Hellman key exchange. The emoji sequence is generated and displayed independently on both devices before the final necessary key exchange message is transmitted over the wire, so a man-in-the-middle has no way of modifying messages in flight to ensure that both parties end up generating the same emoji sequence.

I'm not a cryptographer, but that's what I glean from their explanation: https://core.telegram.org/api/end-to-end/video-calls#key-ver...

The emoji sequence represents the secret key exchange between you and the other party. If you intercept the call, you are making one key exchange with person A, and another key exchange with person B. Due to the mathematics involved, there is no way for you to force both key exchanges to yield the same result.

For a "standard" DH key exchange it would be possible to brute force the emoji sequence to be the same (since it's too short to be resistant to brute forcing), but the protocol that Telegram uses specifically defends against that by having both sides commit to their share of the key ahead of time, so they cannot try different numbers.
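The commit-then-reveal structure can be sketched in Python with toy Diffie-Hellman parameters (the tiny prime, the 8-word "emoji" alphabet, and the 4-symbol fingerprint length are all stand-ins for illustration, not Telegram's actual protocol):

```python
import hashlib
import secrets

# Toy finite-field DH parameters -- far too small for real use.
P = 2**127 - 1  # a Mersenne prime, chosen only for the sketch
G = 5

WORDS = ["cat", "fox", "key", "boat", "star", "tree", "drum", "fish"]


def fingerprint(shared_secret: int, length: int = 4) -> list[str]:
    """Map the shared secret to a short human-comparable sequence."""
    digest = hashlib.sha256(str(shared_secret).encode()).digest()
    return [WORDS[b % len(WORDS)] for b in digest[:length]]


# Alice COMMITS to her DH share before seeing Bob's, so a MITM can't
# grind through candidate values to force a matching fingerprint.
a = secrets.randbelow(P - 2) + 2
A = pow(G, a, P)
commitment = hashlib.sha256(str(A).encode()).hexdigest()  # sent first

b = secrets.randbelow(P - 2) + 2
B = pow(G, b, P)  # Bob replies with his share

# Alice then reveals A; Bob checks it matches the earlier commitment.
assert hashlib.sha256(str(A).encode()).hexdigest() == commitment

# Both sides derive the same secret, hence the same word sequence.
alice_view = fingerprint(pow(B, a, P))
bob_view = fingerprint(pow(A, b, P))
assert alice_view == bob_view
```

A man in the middle runs two separate exchanges and so ends up with two different secrets; because Alice's share was committed before Bob's was sent, the attacker cannot retry values to make the two fingerprints collide.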


So person A and person B are going to see different emojis no matter what you do. To fake a phone verification while performing a man-in-the-middle attack you'd also have to fake their voices to each other. That's hard.

Both connections would show different emojis on both sides then. So you would need to somehow deep fake the voice of the one telling their emojis to the other one.

If I'm talking to a person I know in person, I'd recognise their voice.

Real privacy is too burdensome for most users, so they feel just fine if the service owner promises in a stern voice that their chats are really secure.

It is not necessary to provide real security, do fingerprint verification, etc if the users are already happy with the level of security they are promised.

The emoji comparison thing is mathematically solid. Assuming the clients aren't backdoored (and the Telegram client is open source, so that's not that easy), there is no way for an attacker to make both sides show the same emoji. If they want to convince two users that they have an E2EE connection while performing a man-in-the-middle attack, they'd have to fake their voices to each other to change what emoji sequence they each read out. That's hard, and therefore this is real, meaningful privacy.

Telegram could perform a MITM at any time and generate matching emoji for both sides of the conversation, since you can't really trust that the app binary matches the code they put on GitHub. Building it yourself would reduce the risk, but nobody does that because blind trust is much easier.

This is true, and IMHO it's somewhere app stores could help build trust for distributed OSS apps.

What I'm envisioning is a 'build hash' that is reproducible from the public source code with a given set of compiler settings (i.e. the same settings used for the published build). The system's app-management widget could then display this build hash in the app-check menu.

This would likely require more care in packaging, as well as some form of secure config API that allows companies to provide certain bits of configuration (i.e. remote servers to contact) without impacting the build output. This would mean that yes, people would still need to audit the code, but at least it's easy for anyone to canary out to the internet that the hashes are mismatching, same for when someone does find something on an audit.
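The verification side of this is trivial once builds are actually reproducible; the hard part is the packaging. A sketch of the check in Python (the file names are hypothetical stand-ins for a locally reproduced build and the store-distributed binary; here I just write two identical dummy files so the example runs):

```python
import hashlib

def artifact_hash(path):
    """SHA-256 of a build artifact, streamed so large APKs fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in artifacts; in practice these would be your local reproducible
# build and the binary pulled from the app store (hypothetical names).
with open("local_build.bin", "wb") as f:
    f.write(b"identical build output")
with open("published_build.bin", "wb") as f:
    f.write(b"identical build output")

match = artifact_hash("local_build.bin") == artifact_hash("published_build.bin")
print("MATCH" if match else "MISMATCH -- investigate")
```

Any single mismatching byte flips the hash, which is exactly what makes the "canary out to the internet" idea cheap for anyone to do.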

OTOH, I'm sure Telegram's competitors in the chat space would love a reason to de-legitimize them, so it wouldn't surprise me if -someone- out there was already doing some sort of compare on published builds.

It is not, by default, and none of the group chats are.

This chart is showing what messaging providers are willing to give to law enforcement, not a reflection of the technical capabilities of the messaging provider.

I assume what they're showing for Telegram (basically no data except IP/phone data if Telegram decides it's for a legit counter-terrorism activity) is a matter of Telegram business policy.

Signal gives the limited information they do because I assume they are subject to warrants from U.S. courts. Telegram is run, to my understanding, from jurisdictions where enforcing a U.S. court order would be difficult-to-impossible, and they keep the private keys to decrypt their stored message content split between servers in relatively non-overlapping legal jurisdictions, so even a successful seizure of data in one wouldn't be enough to decrypt message content.

That's all well and good -- and I appreciate Telegram for setting things up that way -- but that means at any time Telegram could make a policy decision to cooperate with law enforcement and provide much more than what is shown on this chart. Signal, on the other hand, could choose to cooperate as much as they want but not have the technical capability to provide more information. (Barring them updating their client to intentionally build in a backdoor, etc., but I'm basing this on what the current implementation is.)

The other important thing about this chart: this is the unclassified version. Is there another classified document out there which says "we have a secret relationship with Telegram/whomever and they give us all the message content we want" but they don't advertise to the law enforcement community at large? They secretly use it to aid in parallel construction so they don't ever have to reveal that a messaging vendor is giving them message content in court? We have no idea.

tl;dr: Telegram looks great on this chart because of policy, not technology. I love Telegram, but I'm under no illusions that it's appropriate for talking about things I wouldn't want law enforcement to have access to. Luckily, I haven't found myself needing to talk to my friends about illegal activity.

>Telegram looks great on this chart because of policy, not technology.

This is what puzzles me about Apple: they absolutely have the capability to MITM iMessage pretty discreetly. Because Apple completely hand-waves away key distribution and can silently add and remove keys at their leisure, it's largely only policy that underpins their security. They're not Telegram; they aren't structured to be in a position to ignore demands from the justice system to assist with some agent's latest fishing expedition. How are they getting away with not providing stuff that they obviously have access to? The PDF lists "Pen Register: no capability".

TLDR: Telegram depends on trusting Telegram. Signal is trustless.

Telegram isn’t E2EE by default.

My bet is on the fact that they are based in Russia, so they don’t give a shit about a US warrant or subpoena.

Telegram isn't based in Russia (anymore). The company is incorporated in Dubai since 2017 [0]. They opposed Russian warrants in the past, resulting in the blocking of the app in the territory for some time [1].

[0] https://www.bloomberg.com/news/articles/2017-12-12/cryptic-r... [1] https://en.wikipedia.org/wiki/Blocking_Telegram_in_Russia

That is correct. By default, all messages sent over Telegram are stored permanently on their servers unencrypted.

Not exactly. Non-secret chats are stored encrypted on Telegram's servers, and separately from keys. The goal seems to be to require multiple jurisdictions to issue a court order before data can be decrypted.

https://telegram.org/privacy#3-3-1-cloud-chats https://telegram.org/faq#q-do-you-process-data-requests

"Not exactly" means "completely incorrect" now?

Telegram doesn't store your messages forever, and they are encrypted; seizing the servers won't allow you to decrypt them unless you also seize the correct servers in another country.

Of course they store your messages forever... They've kept all of my messages for over 7 years now.

If you really think that kind of shit will float...

Source for Telegram storing the information unencrypted at rest?

It is widely known and confirmed by Telegram themselves that your messages are encrypted at rest by keys they possess.

This is a similar process to what Dropbox, iCloud, Google Drive, and Facebook Messenger do. Your files with cloud services aren’t stored unencrypted on a hard drive - they’re encrypted, with the keys kept somewhere else by the cloud provider. This way somebody can’t walk out with a rack and access user data.

How do they provide near-instant full text search on server side if the chats are "encrypted at rest"?

Encrypted at rest means the data is encrypted as stored on disk, not that they do not have access to the keys. That would be end-to-end encryption.

What Telegram claims to have done is set this up in a way that makes it very hard for a single party/state to get these keys. It's not possible to make this completely impossible (if you have a server processing user data, it will have the keys loaded at some point, and there is always some way to physically attack it), but it is possible to make it very hard (physical tamper detection on the servers, secure boot tied to machine identity credentials required to access key material, etc - it's hard, but not impossible, to make this too difficult for any nation state to bypass). We don't know how good their set-up is, but it's certainly possible to do a good job at doing what they claim to be doing.
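The "multiple jurisdictions" part of the claim is easy to illustrate, independent of how well Telegram actually implements it. A minimal 2-of-2 key split in Python (assuming a symmetric data key; the XOR split is the textbook construction, not anything Telegram has published):

```python
import secrets

def split_key(key):
    """Split a key into two shares: store one per jurisdiction.
    Either share alone is a uniformly random string and reveals
    nothing about the key (one-time-pad argument)."""
    share_a = secrets.token_bytes(len(key))
    share_b = bytes(x ^ y for x, y in zip(key, share_a))
    return share_a, share_b

def recombine(share_a, share_b):
    return bytes(x ^ y for x, y in zip(share_a, share_b))

key = secrets.token_bytes(32)   # data-encryption key for chats at rest
a, b = split_key(key)
assert recombine(a, b) == key   # both jurisdictions' shares needed
assert a != key and b != key    # a single seizure learns nothing
```

Of course, as the parent notes, the key still has to be assembled in memory on some server to actually serve chats and search, which is why this protects against legal seizure in one jurisdiction but not against Telegram itself cooperating.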

It doesn't matter at all if you consider the risks of the FBI (or FSB) accessing your chat logs. Telegram can produce your unencrypted chats to them, whether they are encrypted at rest or not.

I just don't see why they would make development harder for themselves, given how often Durov lies. He claimed that all Telegram developers are outside of Russia, but then it turned out they were working one floor away from his old VK company office, right in Saint Petersburg.
