Zoom says it won’t encrypt free calls so it can work more with law enforcement (twitter.com/nicoagrant)
1094 points by sneak on June 3, 2020 | 458 comments

This series of tweets from Alex Stamos has more specific information and tradeoffs being considered: https://twitter.com/alexstamos/status/1268061790954385408?s=...

That was a good thread, thanks.

The relevant bits I got from it were that there are a bunch of bad actors who create new Zoom identities and host meetings a few times before moving on, and Zoom needs a means by which they “can, if they have a strong belief that the meeting is abusive, enter the meeting visibly and report it if necessary.” Their E2EE design will make it impossible for Zoom employees to enter E2EE meetings without permission, so giving E2EE to the free tier will enable people to use Zoom meetings to facilitate abuse.

Looking back on this comment now I realize it can be construed as condoning Zoom’s decision here. I was not intending to pass judgement with that comment, but perhaps I should:

I think Zoom is wrong.

End-to-end encryption should be available to everybody no matter who they are. This means making it available to bad actors too. Expanding the scope of human communication should not be used to justify state surveillance. Privacy is a fundamental human right and one we should not be forced into giving up because it annoys the government.

I so want to be on the same side of this discussion but the argument is nuanced.

The reason "think of the kids" works so well to justify blocking E2E all the time is because child abuse happens literally all the time.

When someone solves this problem, and I don't think any of us really believe it to be solvable, we can move on. I don't want the government in my private conversations, but I don't want my kids in someone else's either.

To extend this - we recognise a duty of care to our users and their privacy when we build these systems, but if those users plan and carry out an act of terrorism, did we not also have a duty of care to their victims not to aid their killers in planning the murder?

We can't shunt this responsibility forever, the public will not take our side down the road - because we are ignoring the counter-argument even if we wedge our fingers in our ears.

Child abuse, terrorism, drug trafficking and professional crime in general are a needle in a haystack compared to boring petty crime, let alone normal communication.

Law enforcement entities that try to prosecute these kinds of crimes don't do it by building haystacks of data and then combing through them looking for needles, because that's a waste of their resources relative to the results obtained. They do it by attacking the endpoints where the abuse, terrorism, or other professional crime has to actually happen. They find a terrorist, a child abuser, or a drug trafficker, or they find evidence of their handiwork, and then work from there. They see where they get their money, their bombs, their drugs, etc., and follow the links as far as they can. When law enforcement is actually trying to target crime, they don't go fishing, digitally or in meatspace, because that's not an efficient way to obtain results if the goal is to go after some genre of professional or organized crime.

Running a mass operation with no specific target (like speed traps in meatspace or dragnet operations in the digital world) is great for padding stats because you can say "look, we got X pounds of meth off the street" or whatever, but it doesn't actually do much to target professional crime, because professional criminals take steps to avoid being caught by lowest-common-denominator policing.

Neutering encryption (so that cops can continue to run surveillance dragnets) doesn't do anything to help the cops catch real criminals, that's just a talking point made up by the people who want the government to have the ability to put any arbitrary person under a microscope.

Users choose their technology. Technology doesn't choose its users. There's no way to make it impossible for criminals to use some communications service. The same technology that protects the lawful person will protect the terrorist and drug dealer. There's no solution available that doesn't also involve sacrificing the safety of upstanding citizens.

In case anyone doubts the above fact: government agents abuse their surveillance powers to spy on their loved ones.


This article was posted here recently, a reminder of how easy it is to become the target of warrantless government surveillance:


There's no reason to believe the government is any better than these criminals. Cryptography must be strong enough to defeat even intelligence agencies, as well as ubiquitous, so that legal limitations or bans are hard if not impossible to enforce.

> child abuse happens literally all the time... I don’t want my kids in someone else's [private conversations] either

93% of the time, the perpetrator knows the child. If you’re seriously worried your children might be victims of abuse, then your first line of defense should be against your own family and friends.


The statistics for children are in the paragraph after the first graph.

Yes, this is correct. Child bullying is one such example.

And this is exactly the case where we want to be able to have digital evidence.

So the solution is to record teens’ private video chats?

What could possibly go wrong?

You will always be able to communicate with someone else in an encrypted manner, if you both want to do so, and no legislation that forces popular platforms to go unencrypted can change that. So, no illegal activity will be harmed.

Apparently it's okay for Zoom to shunt this responsibility for its paid users? Even if I were to accept your premise that omitting E2EE is a legitimate trade-off to detect abuse, Zoom's choice to selectively apply this standard for its free users suggests that this is NOT why Zoom chose to do this.

> I don't want the government in my private conversations, but I don't want my kids in someone else's either.

Easy: don't let your kid join zoom meetings without your permission/supervision until he/she understands that there are bad people out there.

on the other hand, zoom has a vested interest in identifying the people in the call (say to allow linking to a linkedin profile or other revenue-generating reasons).

I think you are wrong. The reason is that people seem to keep extrapolating reasonable privacy laws that were originally meant for the physical world to the virtual world. In the physical world, however, there are always reasonable ways to break through these privacy barriers if there is a suspicion of crime. In the virtual world, there is often no possible way to break a strong encryption barrier even if everyone agrees there needs to be a check on what's inside for the public good.

As an illustration, if we get reasonable evidence suggesting that someone is growing marijuana on their ranch, we can get a warrant and go inside. There's not too much the owner can do to stop it. However, a perfectly encrypted iPhone cannot be broken into, no matter if the entire world agrees that there's evidence of crime in it.

From what I can see, no one argues against warranted searches of personal property in the physical world, except maybe some sovereign-citizen crazies. Given this, why can't we strive for a similar system in the virtual world as well? I too agree that warrantless or unfettered government surveillance of technology is bad, but that's a policy failing, not a technology one. We should focus on how we can hold governments responsible instead of making fully protected crime caves for anyone who can't whip up a conscience.

I agree privacy should be a right, but not at the expense of many people enduring a life of hell in these cordoned spaces for that cause.

> why can't we strive for a similar system on the virtual world as well

because any crime in the virtual world can be uncovered by good police work. nobody has perfect operational security, including the government. so the solution to law enforcement is hard work by the law enforcers.

consider: prior to electronic communication, all private discussions were perfectly encrypted, because if you weren't there, you didn't hear what was happening. And society continued to function.

You simply can't trust the government to respect boundaries that they created but have the ability to breach, especially when it can be done completely surreptitiously.

We need to learn the lessons of Snowden, and fight tooth and nail to prevent anything less than complete, unfettered privacy of communications between human beings. Anything that falls short of that will eventually become complete, unfettered surveillance, because there is no metastable equilibrium point in the middle.

consider: prior to the internet, /all/ telephone conversations could be monitored by the government. And society continued to function.

The controls on surveillance are not technical, they are political. The technology was the same, yet the Stasi listened to every call they could; other governments did not.

Fix the politics, because it /will/ win in the end. Learn the lessons of Germany and China.

No, they could not monitor all conversations. They could only listen to as many calls as they had agents to listen to them. It was not possible for them to listen to everyone at once, nor could they use this as means of discovery. They had to suspect someone in the first place in order to decide to expend the human resources to listen to their calls.

This is fundamentally different from modern technology where they can have a computer listen to every single call, pick out whatever keywords they're looking for, and flag it for later review. Technology now makes it possible for them to truly listen to everyone at once. This is why end-to-end encryption is necessary for everyone.
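The scale argument above can be sketched in a few lines of Python: a single loop "listens" to every transcript at once and flags matches for later human review, the step that used to require one agent per conversation. The keywords and transcripts here are invented purely for illustration.

```python
# Toy illustration of why mass monitoring is now cheap: one loop scans every
# conversation and flags keyword hits for later human review. The watchlist
# and transcripts are made up for this sketch.
KEYWORDS = {"password", "shipment"}

def flag_for_review(transcripts: dict[str, str]) -> list[str]:
    """Return the IDs of conversations containing any watched keyword."""
    flagged = []
    for call_id, text in transcripts.items():
        words = set(text.lower().split())
        if words & KEYWORDS:           # any overlap with the watchlist
            flagged.append(call_id)
    return flagged

calls = {
    "call-001": "see you at the game tonight",
    "call-002": "the shipment arrives on friday",
    "call-003": "happy birthday grandma",
}
print(flag_for_review(calls))  # ['call-002']
```

The marginal cost per additional conversation is near zero, which is exactly the asymmetry the comment describes versus one-agent-per-wiretap surveillance.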

Politics is not going to solve this problem. A lot of what America's police and intelligence agencies do is already illegal. They don't care. They're going to do anything they can with the technology.

If you can't fix the politics it's _not going to matter_. The politics will just make the technology illegal. That's what's happening in China.

It's a weirdly blinkered concept to say "America's agencies already do illegal things and their politics is broken but what will save us is American corporations deploying technology".

(The "we need universal E2E to protect our freedoms even if there are downsides" is not, in logical form, a million miles different from 'we need guns everywhere to protect us from the government and damn the negative consequences of having guns everywhere', frankly)

What if we fix the politics and forget about the technology, then the politics later become broken again? We won't be able to take back those private unencrypted conversations that could be used to retroactively incriminate us.

I actually believe that technologies such as strong encryption are creating important checks and balances that make our democracy stronger. They are not subverting it like you are implying.

>The "we need universal E2E to protect our freedoms even if there are downsides" is not, in logical form, a million miles different from 'we need guns everywhere to protect us from the government and damn the negative consequences of having guns everywhere', frankly

I agree, and I agree with both of those. Giving up freedom/privacy for safety is almost always a losing bet.

The actual trade-off is giving up safety to gain the illusion of freedom.

With guns, the state will always outgun you. So the gun-riddled society sees children in its schools murdered staggeringly often, while its (supposedly free) citizens are tear-gassed with impunity by a state for nothing more than a photo opportunity.

That was not a winning bet for that society.

It's similar with E2E. It can't protect you from the government, because the protection is illusory – it protects you only so long as the state wants it to. When it no longer wants it to, it makes it illegal. Administrations are already heading in this direction.

Meanwhile E2E enables a number of proven harms, from lynchings to child abuse. Is that a worthwhile trade-off just for the protections it gives from corporate or illegal privacy invasion? Would it lose all of those benefits if legitimate law enforcement were allowed access? There is at least a debate to be had, there.

I see it as the exact opposite: giving up freedom for the illusion of safety. Using the tear gassed protesters as an example, when there have been protests where a large number of protesters were openly carrying firearms, nobody gets tear gassed. Neither the cops nor the protesters get remotely violent.

The people with the guns aren't attending the current protests, and you can see how that has worked out.

You can't do physical harm with encryption (unless you want to count superficial burns acquired from touching a Bitcoin-mining GPU), though. The presence of guns is a necessary and pretty much sufficient condition for certain classes of physical harm, which in the eyes of many _does_ make them qualitatively different.

One of the defences Facebook uses when confronted with WhatsApp-orchestrated lynchings in India is that e2e encryption means it can't know what people are talking about or help police track the source of the messages.


Your point? If those lynchings had been orchestrated by people meeting up in person instead, nobody could know what people are talking about or help police track the source of the messages either.

In either case, to actually lynch someone, you still need to go there physically and actually do the deed. WhatsApp chats don't kill; dudes with weapons do.

The point is the scale. Law enforcement was scaled and equipped to meet the challenge of in-person lynch mob formation. In-person meetings are risky, finding like-minded people can be a challenge, etc.

Encrypted comms give a huge asymmetric scale benefit to those who orchestrate these crimes. What hasn't scaled is the ability of law enforcement to respond. And that's a choice, one which is open to criticism.

>No, they could not monitor all conversations. They could only listen to as many calls as they had agents to listen to them. It was not possible for them to listen to everyone at once, nor could they use this as means of discovery. They had to suspect someone in the first place in order to decide to expend the human resources to listen to their calls.

I think you're taking this a bit too lightly. As a side topic, I am surprised at the extent to which state surveillance was a thing here in the telephone era.

The secret police had about 50k full-time agents, 600k double agents, and about 400k-500k informants. Out of a population of 18 million, that's about 1 in 18. Consider a typical family: you have a brother or a sister, two parents, four aunts or uncles, and four grandparents. Odds were in favor of at least one of them being an informant.

For your community? There definitely was an informant or double agent among them. Just knowing that the threat is there has a massive effect in how people communicate and bond with each other, effects that can still be felt to this day.

You're saying this was in America? Sounds more like Cuba or the former Soviet states.

You are correct. This is in a former Soviet state.

We can work on fixing politics AND fix technology. We don’t have to choose between them.

What stopped the Stasi, until politics did, was technology. And I think the encryption used helped bring about the political change. If the Stasi had what Zoom is offering, then perhaps the wall wouldn't have fallen for 10, 20, 30 more years.

Consider: prior to the telephone, to monitor a conversation government had to actually send people to where the conversation happened, and that meant that they could barely monitor any conversations - and yet society continued to function.

The government still can send people to watch people use their phones or computers. On the other hand, it seems hard to dispute that all our most efficient examples of totalitarian states are post-telephone.

> You simply can't trust the government to respect boundaries that they created but have the ability to breach, especially when it can be done completely surreptitiously.

No, but in a functioning democracy we can vote them out. Democratic governments by definition have a large concentration of power; otherwise they can't fulfill their functions.

But this is bound by laws, time, and the ballot box. Surreptitious (warrantless) government surveillance should absolutely be illegal. Searches with a legal warrant (through an accountable, non-abusive, warrant-granting judicial system) are absolutely necessary for gathering evidence and prosecuting crimes. Without trustworthy investigation and prosecution of crimes, the social contract will fail; this has already started happening in many areas, as we are seeing in a way right now.

However, this goes both ways - the populace should get far more transparency into the functioning of the criminal legal system, especially into the training and conduct of physical law enforcement (police officers).

I think your analogy fails in many ways.

In the real world one will notice law-enforcement breaking into their ranch, in the virtual world, they won't (and comparing growing marijuana to voice/video over Zoom is wrong).

In the real world law-enforcement wouldn't have access to the complete history of a conversation, in the virtual world they would. Even to anything in the past which is irrelevant to the topic.

Events in the real world are often ephemeral; we don't expect our friendly conversations to last forever. In the virtual world, however, they can be recorded and stored forever.

Basically you should compare spying/wire-tapping in the real world vs. spying/wire-tapping in the virtual world.

I think there is a debate here that we need to have, as a society.

But I think that broader society is not going to understand the technical issues, and is going to be swayed by overly emotional appeals to "think of the children" and similar.

Therefore I think that we, as engineers - the people who will be asked to implement the results of any such debate, need to have this debate ourselves so we can take responsibility for our actions.

I can see both sides of this debate.

There is a legitimate need in society to gather evidence to discover the guilt or innocence of accused criminals. We cannot have a system of justice that assumes innocence until proven guilty but provides no method for gathering incriminating evidence.

There is also a legitimate basic human right to privacy. We must not be subject to constant surveillance by the state.

We have to find a middle path between the two extremes.

> Given this, why can't we strive for a similar system on the virtual world as well?

Because there is no technical solution that allows something similar. Such a solution:

1. Must be exclusive to lawful authorities; a criminal cannot get a search warrant.

2. Must have some reasonable per-instance cost to prevent overreach.

Requirement 1 is very hard in the tech space, if even possible: backdoors can always be used by other parties.

But even if requirement 1 is possible, by the nature of digital surveillance it is very cheap and relatively easy to do mostly in secret, leading to things like the NSA literally inspecting all internet traffic.

Yes, theoretically you could rein this in if there were overwhelming political will, but there isn't, and the general public doesn't care.

So in the end, it's better to encrypt everything.

You make a good point, but ultimately encryption is just a tool. The virtual and the physical are both domains whose different natures offer different tools. I don't think you can protect anything in the physical domain with the same certainty and mathematical elegance that's available to digital files, but if you could, I wouldn't be opposed to it.

Imagine if there were a safe that couldn't be opened by anyone but the owner without destroying its contents. Would you be opposed to that? What if the design mechanism of this safe were as easy to implement as the encryption protocols are? Yes, one day some expert safe-cracker might break it. And in the even farther future the advent of "quantum safecracking" would perhaps make the safe as secure as a luggage lock. In the meantime the police would have to resort to their traditional methods.

Unfortunately all kinds of damning evidence have been lost to time. Fire is older than paper.

I beg to differ on one point.

In physical world two people are talking.

If the police suspect they are committing a crime, they can request a warrant to install a listening device, and only then can they listen.

In Zoom-like scenarios, any third party (like technology companies using the law as an excuse) can listen without a warrant (and they will say something like "no one is listening", since training an AI is not considered "someone").

Communications should therefore be encrypted with asymmetric cryptography where only the warrant-giver can decrypt them (not the warrant-giver handing the private key to law enforcement, but decrypting the symmetric per-session key and giving that to law enforcement). And this goes for phones too.

And quite frankly, I don't care if police with a warrant are listening to my conversations. I don't want any company listening to them, as they are not doing it for law enforcement but to profit from my data (quite possibly against my interest), and that is something completely different.

This is the scenario where technology gives people MORE privacy: it prevents illegal police wiretaps (those without the warrant-giver's consent), prevents technology-provider wiretaps, and on the other side still allows legal wiretapping authorized by the warrant-giver.

But interestingly, no one has any interest in doing it. Guess why?
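The escrow design sketched in this subthread - a fresh symmetric key per call, with a copy wrapped for a judge who alone can unwrap it under a warrant - can be illustrated with a deliberately insecure toy in Python. Everything here (textbook RSA with tiny primes, an XOR keystream cipher, all names) is my own illustration of the data flow, not any real system:

```python
# Toy sketch of the judge-supervised key-escrow scheme described above:
# each call uses a fresh symmetric session key; a copy of that key is
# encrypted to the judge's public key and stored as the "escrow record".
# The provider never holds the plaintext session key; only the judge's
# private key can unwrap it.
# DELIBERATELY INSECURE primitives (textbook RSA with tiny primes, a
# SHA-256 keystream cipher) - this shows the data flow only.
import hashlib
import secrets

# Judge's key pair: toy textbook RSA (p=61, q=53 => n=3233, e=17, d=2753).
N, E, D = 3233, 17, 2753

def stream_cipher(key: bytes, data: bytes) -> bytes:
    """XOR with a SHA-256 based keystream; encrypts and decrypts alike."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

def escrow_session_key(session_key: bytes) -> list[int]:
    """Wrap each key byte under the judge's public key (E, N)."""
    return [pow(b, E, N) for b in session_key]

def judge_unwrap(escrow: list[int]) -> bytes:
    """Only the holder of the private exponent D can recover the key."""
    return bytes(pow(c, D, N) for c in escrow)

# --- A call takes place ---
session_key = secrets.token_bytes(16)            # shared by the participants
ciphertext = stream_cipher(session_key, b"meet at noon")
escrow_record = escrow_session_key(session_key)  # stored by the provider

# The provider holds only ciphertext + escrow_record and cannot read the
# call. Under a warrant, the judge unwraps the key and hands it over:
recovered_key = judge_unwrap(escrow_record)
assert recovered_key == session_key
print(stream_cipher(recovered_key, ciphertext))  # b'meet at noon'
```

A real design would use an authenticated cipher and padded RSA or ECIES for the key wrap; the point of the sketch is only the flow: participants hold the session key, the provider stores only ciphertext plus the escrow record, and the judge's private key is the sole path to a lawful decrypt.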

Do you see a fundamental difference between Zoom and telephone companies here? Or do you think how we've handled telephony over the past century has been a clear failure? If the latter, do you think most people would agree?

I don't really care about telephone companies, as heads would roll if they dared to intercept my phone calls without a court order. We had one case just 2 weeks back where one of the mobile phone/internet providers installed some security "firewall" that was doing MITM on HTTPS; they are now under investigation and facing possible criminal prosecution. They had the system in place for less than a week. I am protected against them by laws.

So to answer your question: telephone companies are a failure in the USA (wild west and lawlessness); in my country they need to obey laws. Corporations don't obey any laws outside their home country (which they select based on inefficient laws) and need to be harshly regulated.

My personal favorite would be legislation mandating e2e encryption that must not be backdoored by anyone, except that law enforcement can obtain a warrant; the private keys stay under a judge's supervision without the possibility of giving them away (in a PKCS#12 manner) and can only be used to decrypt the communication when the judge presses the big red button. Quite frankly, you want to be able to wiretap organized crime.

So open source solutions should be banned? I should not be allowed to use or create a program that allows me to talk with e2e encryption? Finding someone in possession of undisclosed keys should be a crime?

Care to see what happens then? Check China. They are implementing this very thing. For the children, I suppose.

Those are not simple debates, and you are just treating them as black and white, offering one solution (e2e) while making huge issues of the other side: organized crime, corrupt politicians (if I understand you correctly, you are most worried about them - China?). The "think of the children" and "terrorists" angles are the least problematic topics here.

Judge-only access prevents mass data gathering by law enforcement and three-letter agencies (at least in my country), and enables oversight by further institutions. Secret and hidden backdoors (Crypto AG, Dual_EC_DRBG, ...) or corporations bribed by government deals are the worse solution here: they don't prevent access to the data by either corporations or secret agencies, while they may or may not hold off law enforcement, and they surely enable mass data gathering from everyone without any supervision or control. The real issue here is that no one is mentioning any court orders; everyone just wants access to everything. Now THAT is an issue.

I was talking about legal entities operating in the same manner as telcos did. Also, in the real world you can invent your own one-time-pad-encoded speech and no one will understand you even if they wiretap the communication; the mafia has historically used slang to cover up its communication. The same goes for open source.

Anyway, do you communicate over your "secret encrypted communication channel" covered with a rag, to prevent recording of your lips, laser measurement of the shaking of your window glass, tracking of your face muscles, IR recording, and probably the next 100 methods I am not even aware of? These are issues you also face once a warrant is issued. I guess not. So the police (or a warrant) don't look like an issue for you.

Then there are the three-letter agencies: apart from "warrant" methods, they will use rubber-hose cryptanalysis to break you and any of your e2e communication, and you might actually wish they were able to read your communication without contacting you in person. So e2e doesn't change anything for you here either.

I refuse to treat open-source solutions that you install on your own server in the same manner as corporate entities that use their solutions to wiretap everyone's communications so they can earn more money from the information they gather.

And I also think that "encrypted Apple" phones (and everyone else doing any business with the government) and the whole FBI story are just a charade to bait people who are hiding something into an ecosystem where agencies that CAN issue gag orders can simply access the information. The whole story surely looks like a counter-espionage operation from the 1970s. Time will tell if I am right.

It's not black and white. And I am not offering e2e as THE solution to privacy and freedom, but as a part of it and an important metric of whether a solution is actually working right. Just because encryption does not protect me from EVERYTHING, like physical surveillance, that does not mean we should abandon it - THAT is black and white thinking.

Having the law able to access encrypted communications at any time will trample the examples I brought up, which came up with zero effort, no matter what you put into your proposed solution. If the goal is to prevent crime, and there are solutions out there that allow e2e communication, the goal does not stand. You can't ban a corporation from e2e but allow any random dude to spin up a secure communication platform without any keys compromised - what are you even banning then?

It amazes me that "corrupt politicians" is shrugged off just like that, when corrupt officials of any kind are exactly what everyone needs defenses against, by ANY means. In China, they are in the process of legislating exactly what you propose - no private encryption key to be withheld from the law - and yes, you did not misunderstand, it's at the scale this implies: total control and the ability to observe all traffic and data at rest at any time. Even setting aside all that is happening now, that leaves little unattended by the law there.

Now what, China is a "bad example"? An "exception"? I'd say this attitude coming from governments is the norm around most of the world - places where people are at real risk from what they say over the net.

Out of all such countries, let's take China. Do you believe China should reverse its course and allow encrypted communication for its citizens? Based on your words and thoughts, I'd say you would answer "no". It's doing exactly what you propose, after all - now the only tiny step needed to totally suit your proposal is for it to use its powers for "good"! Right? And it is indeed using them for good, according to its own legislation.

Because if you nonetheless said "yes, China should allow e2e in favour of its citizens' rights", you would in essence be saying that "freedom-loving Western countries" should give the law total access to any information (they will always use it only when needed, of course!), while those same countries should pressure "totalitarian regimes" to maintain their citizens' rights, including encryption. That's contradictory, at least if you think about it for a bit.

There's a correlation between these things. Any power given is sure to be abused. If that is not prevented and pushed back against, it will not stop but worsen. Trying to find a formula that grants absolute power and restricts it at the same time is just fooling around; it's the core assumptions that matter. Unless you really think that some governments are somehow immune to becoming corrupt and totalitarian when meeting no resistance - their people must be saints indeed! - in which case, I am sorry to say, I can only chuckle.

Read what my proposal was and stop beating the strawman (I won't attribute this to malice, as you clearly haven't read any of it).

With my proposal, law enforcement can access the unencrypted data far less than they can now (under the radar), and when they do access it, they are under the scrutiny of judges, while corporations are prevented from accessing it at all.

Maybe take the time to think about what a country is, what a government is and whom it serves, what a corporation is and whom it serves; maybe ask yourself what law enforcement is and whom it serves. If you dare go further: what if there were no law enforcement? Do you have the muscles for that?

Or chuckle mindlessly on. I think your whole statement demands the advantages of a system in which someone else takes care of you, so that you can avoid thinking about the disadvantages.

> As an illustration, if we get reasonable evidence suggesting that someone is growing marijuana on their ranch, we can get a warrant and go inside. There's not too much the owner can do to stop it. However a perfectly encrypted iPhone cannot be broken into, no matter if the entire world agrees that there's evidence of crime in it.

These examples are talking about a different thing, we should be careful to not mix them up since the arguments for and against can be different.

The discussion prior to your comment was about protecting data in transit (end-to-end encryption); both your examples are about data at rest (full disk encryption).

With encrypted data in transit, not only can it be broken into by intercepting at the endpoints (in the case of video or audio calls, even through the physical world by pointing a camera and a microphone at the user's device), but also the end result of an end-to-end encrypted connection is much closer to a physical world private conversation (can be "broken into" only by intercepting the endpoints, that is, pointing a camera and a microphone at the persons involved).

With encrypted data at rest, the best physical analogy is a diary written in code; even if the whole world agrees that it contains evidence of embezzling, it cannot be decoded without the help of its owner's mind.

> In the physical world, however

With regards to security analysis, the only difference between the physical world and the digital world is proximity (hops) between agents, or evidence, in a conversation, and convenience of access. Software developers tend to think purely in terms of controls and exploits, which is a tiny subset of security. Even conversations in the physical world can be encrypted; for example, if two people are speaking Pashto I would have no idea what is said. If it isn't recorded for later translation, it's encrypted forever.

Those few distinctions are important from a legal perspective where agents of digital concerns are more likely operating across political boundaries at any given moment.

> There's not too much the owner can do to stop it.

They can store the evidence in a physical safe with tamper-proof mechanics as a fail-safe. Breaking such a safe would destroy its contents in the process, much like attempting to break into an iPhone with supposedly perfect encryption.

Since you are talking about surveillance another common misconception I have noticed many software developers make is equating the terms: security, privacy, anonymity which are all distinct. Privacy and anonymity are both aspects of confidentiality but privacy is concerned with hiding the contents of a message where anonymity is concerned with hiding the agents of the message. Those two do not overlap. Confidentiality is one of three aspects of security, though from a legal perspective privacy is available in many contexts without application of security controls.

I don't think that any of the surveillance powers that the state is demanding with respect to electronics actually map that neatly to what was possible before electronics emerged. We're talking about conversations rather than physical effects, and it's not like you could obtain a warrant to retroactively obtain the contents of a conversation a marijuana dealer had with his client yesterday: once the vibrations were gone from the air, that data has been erased irretrievably. To listen in on the conversation you actually had to go there, which naturally forces you to be judicious with your surveillance powers by virtue of limited resources, whereas the electronic version scales indefinitely. On the other hand, as long as the people who are of interest to law enforcement still exist in meatspace themselves, everything that used to be possible is still possible: just as you could obtain a warrant to bug someone's room to listen in on a conversation, you can obtain a warrant to bug someone's room to observe their phone (or bug the phone itself, with physical access! Maybe that would be one rationale to finally force Apple to make its phones "repairable" by individuals :)).

You mean Zoom employees will present a warrant?

> As an illustration, if we get reasonable evidence suggesting that someone is growing marijuana in their ranch, we can get a warrant and go inside. There's not too much the owner can do to stop it. However a perfectly encrypted iphone cannot be broken into, no matter if the entire world agrees that there's evidence of crime in it.

Is that a terrible thing? It's not like they are hiding guns in their iphone. While there could be evidence in there, at some point there is physical evidence in the real world. Just making it easier to convict them is not a solid argument for weakening protections for everyone.

I think this argument makes me more sympathetic to law enforcement’s desire than any other I’ve heard.

I can really flip my brain around and see how this desire for non encrypted communication to be the standard could come from a good place.

That said, I still come back to my default stance: crimes need to exist outside of the private communication to be a crime. At least under US law, it's very hard for pure communication alone to be a crime.

So go investigate whatever it is that is an actual crime and causing actual harm. Making communication not private has tremendous potential chilling effects on actual thought, because people think by talking!

> Given this, why can't we strive for a similar system on the virtual world as well?

1. Encryption is an indispensable part of pretty much everyone's life. I can't imagine there are many people in our society who go more than a few days without using it.

2. If encryption can be broken by the police, it can be broken by other actors. Full stop.

2.1. It has been shown to be impossible for our government to keep a secret like a master key.

2.2. Math

> As an illustration, if we get reasonable evidence suggesting that someone is growing marijuana in their ranch, we can get a warrant and go inside. There's not too much the owner can do to stop it. However a perfectly encrypted iphone cannot be broken into, no matter if the entire world agrees that there's evidence of crime in it.

let's wait until something like a "perfectly encrypted" phone actually exists before we go down this road. AFAIK, the feds have eventually been able to break into the phone in every high profile case where the issue has come up. it's not impossible, they just don't want to pay what it costs.

If Zoom doesn't encrypt, bad actors will use (or build) another app that does. All they are doing is giving a free peek to Chinese or Russian spies.

Maybe they need weaker encryption for the masses who pay nothing, like DES

Speaking more broadly, surely there is a point between zero privacy and 100% surveillance that we can all move to. If we take the encrypt-everything (E2EE) approach to all aspects of our lives, i.e. we should be able to protect our faces from video recording when walking into banks, then surely the system would be more open to abuse by bad actors. Accountability in society is what drives good behavior; if we take that away then chaos reigns. Therefore there must be a balance, which is why the whole 'let's encrypt everything and protect everyone' stance sometimes feels like it goes too far and borders on zealotry. Yes, I feel that personal freedoms are important, but so is the state's role in maintaining peace.

I think it's fine to have cameras in a shop or in a bank or whatever the virtual equivalent becomes because they serve a clear security purpose.

The problem with not end-to-end encrypting private communications is that during a lockdown people have nowhere they can go for a private conversation. If you invite someone over to your house for drinks or for dinner you feel you can talk freely because the government doesn't have cameras in your house; that would be an invasion of privacy. Where is the virtual equivalent of that once Zoom is no longer private?

Remember your freedom will be taken an inch at a time. Not all at once.

I'm not completely sure what the answers are here yet, but I do agree that it can be very psychologically reassuring to know that the only people involved in a conversation are the genuine invitees.

That said, abusive people do exist and are a legitimate problem, given the damage they can cause. Their abuse may be overt (threats, violence, noise, etc), or it can be subtle (for example, manipulation over long periods of time).

Some of that abuse may come from prior anger and frustration outside their control, and perhaps it's good to allow people to let that out -- as long as it doesn't end up harming other people in the process.

Would the situation be improved if the service provider could only step into the meeting when explicitly requested by participant(s)?

To follow your analogy, that could be seen as the equivalent of someone experiencing a medical emergency during dinner at your house and requiring outside assistance.

All these options would be gamed and misused, as they are during existing use cases in real life. Some people over-react, many people under-react, and society itself changes so it's important to build in flexibility for transparent and accountable change.

If you somehow accidentally invited an abusive person to dinner and they started acting abusive you would ask them to leave. Then when they don't leave you call the police. Really a stretch to imagine that happening more than once a lifetime.

In video calls you don't even have to do that, you can just kick them from the call. You don't need Zoom to step in you just kick them.

I really don't understand what you are getting at with the abusive people thing. What sort of situation are you imagining exactly?

Yep, kicking the participant would be a good option in many cases.

To answer your question: phishing scams could be one example. I'm sure there are many others.

Phishing is already handled by email. Don't click the zoom meeting link in the email from a Nigerian Prince and you will be fine. In general I don't see this being a problem with Zoom but rather a problem with clicking links from dodgy sources, zoom meeting links just happen to be one of many.

If there are others please list them because I'm struggling to understand the overarching thing you are getting at and examples would help with that.

You know that Zoom is not the only video chat software, right?

Exactly, which makes it worse. Real bad actors will just switch meanwhile normal people get surveillance.

True, and I don't support Zooms decision. You said:

> Where is the virtual equivalent of that once zoom is not longer private?

What I meant by my comment is that there are good alternatives that are E2E and free, like FaceTime (ok you need an iPhone or Mac), WhatsApp, Signal etc. So people can just use that if they don't want to pay for Zoom (again, I don't think they're making the right decision either).

People will choose convenience over privacy when they can't see the threat. Zoom is convenient for video calls because it allows you to see all the people in the call on one screen, you can schedule meetings, the connection quality is good, it supports screen sharing and it's fully cross platform.

You already disqualified FaceTime because realistically some of your friends and family have Android and Windows. WhatsApp connection quality is flaky, same with Skype. Does Signal support group calls? If it does, maybe it could replace Zoom. Maybe. But all these options existed before lockdown and people still settled on Zoom because it's more convenient.

Either some massive scandal has to happen to make the public more privacy conscious, or there needs to be E2E encryption by default, as a standard. Whether or not there are invisible state-surveillance cameras shouldn't factor into which dining table I buy.

> People will choose convenience over privacy when they can't see the threat.


> WhatsApp connection quality is flaky

Disagree. Might be anecdotal, but at least in Europe, most of my friends use WhatsApp for personal video calls and not Zoom.

Generally agree with you that we need E2E as a default (as we need SSL as a default).

>that are E2E and free, like FaceTime (ok you need an iPhone or Mac), WhatsApp, Signal etc. So people can just use that if they don't want to pay for Zoom (again, I don't think they're making the right decision either).

https://twitter.com/alexstamos/status/1268199863054811136 2) None of the major players offer E2E by default (Google Meet, Microsoft Teams, Cisco WebEx, BlueJeans). WebEx has an E2E option for enterprise users only, and it requires you to run the PKI and won't work with outsiders.

Any E2E shipping in Zoom will be groundbreaking.

You're totally wrong! Cisco does offer E2EE even for free accounts... You can choose whether you want to use E2EE or not (via the website).

Make a free account and test it.

I'm referring to personal use (i.e., friends and family) as to per the parent comment's argument.

It'll stop feeling like zealotry as governments develop more and more stasi-like tendencies.

The trouble is, by the time they do (and they will, China is halfway there), it will be too late to protest.

Actually, the government already has, and has always had, stasi-like tendencies, they just happen to mostly target people who are not given a platform to talk about it.

Police and other law enforcement are already using legal powers to infiltrate and monitor 'radical' political groups such as Black Lives Matter, just like they have in the past with civil liberties groups. In fact, as we discovered in the COINTELPRO leaks, they were going way beyond the legal limits, having been complicit in the assassination of Malcolm X, and having tried to blackmail MLK into committing suicide.

Of course, it could be that such things don't happen anymore. Or, seeing how the police in Minneapolis are actively targeting journalists, it is significantly more likely that we just don't know about it yet.

No large state has ever tolerated real dissent to any great extent. The state doesn't have to be as paranoid about dissent as China or the USSR (which almost require(d) enthusiastic support) for police powers to be abused against the legitimate interests of citizens.

Fair point with regards to government overreach. I feel like this is both a practical and political question we would need to answer as a society. What level of surveillance would be acceptable? Zero? Some? Maybe only for temporary accounts with no identity attached?

Inherently, there has always been some form of surveillance in society. When we left our homes, people around us could see and hear what we were doing and thereby report suspicious behavior. Now we are adjusting to a new way of life with new forms of surveillance which are harder to detect. I completely get it, and I'm more for encryption than not. I guess I am also challenging myself to see both sides and think about a middle ground.

While Western governments could be better behaved, I feel like comparisons with the stasi are somewhat extreme and out of whack. I live in the UK and generally speaking I'm happy with the government here when it comes to surveillance. Maybe the US has a greater focus on security, but they are a long way from the stasi.

They might be a long way from the stasi now, but you don't get there in a big leap; you get there slowly, justifiable inch by justifiable inch, until there is enough surveillance that you don't have to justify it anymore because people are too afraid to protest.

If you want to discuss surveillance then yes of course it's a matter of degree. Putting cameras in a bank vs putting cameras in a pub vs putting cameras in your home. As you can tell in the real world it's clearer when it's a step too far. In the digital world we need to be more careful because it's unmapped territory.

You need to think hard about why it's an invasion of privacy to put cameras in your home. It may seem obvious but it's not. Once you understand the reasons why that is an invasion of privacy then you can start to draw analogies to the digital world and understand what is going too far and what is not. The problem is people don't have a deeper understanding of the reason we need privacy so they are easily sold security in the form of digital surveillance without understanding the eventual consequence.

I don't think it's as practical to focus on the amount of surveillance as on its nature and whether or not it can justify itself.

We already live in a society where widespread aggressive authoritarian surveillance that doesn't justify itself is commonplace. Snowden proved this. Your emails are read. Your naked selfies looked at. Personal data is used frequently to crack down heavily on legitimate dissent. These are unquestionable and it's getting worse and more entrenched, not less and it hasn't caught a single act of terrorism like it was set up (ostensibly) to do. The question is, how do we personally react to the unchecked growth of stasi-friendly surveillance infrastructure?

I think arguing that western governments could be better behaved is a fair point. The stasi also could have been better behaved. Frequent appeals to moderation didn't make them behave though and they haven't and won't make western governments behave either, though.

Appeals to moderation have a null effect because if the goalposts keep being moved, so does the moderate position. If you want your opinion to never matter at all, always pick the moderate, middle ground opinion.

That's equivalent to saying that there is a point that provides suitable authentication/privacy/… for me to ask my bank about things and instruct my bank to do stuff on my behalf, and also provides little enough privacy that any criminal goings-on can be surveilled.

Since money is key to many crimes, and finding out who controls the money is an important way to investigate crimes, this in turn means that that point of agreement has to secure me from surveillance by bad guys when I talk to the bank, and permit surveillance of the same bad guys when they talk to the same bank in the same way.

This might perhaps be possible but the word "surely" seems inappropriate.

The point of view depends on the experience.

When you actually see the horrors of abuse, helped along by the internet, and you realize that there are voluntary walls protecting these people (encryption, for instance, but others too), you may have a different position. I would willingly give that up just to see children (or whoever) saved.

You may not, that's a choice. I would just like to know whether you have seen what actually happens in these circles before making a decision.

Also, I live in a normal country where this concern (state surveillance) is less of an issue.

Sounds as if you could also say that all photos people ever take, should be accessible to the government and police, to protect the children. And things people say in their homes, need to be accessible to the police, to maybe rescue kidnapped children.

> When you actually see the horrors of ...

I heard about someone working as a nurse in an emergency room who, because of witnessing injuries from traffic accidents, decided to never be in a car again. I can understand that; I think that decision makes sense.

But not handing over people's communication to people like Trump and Putin etc and their men.

If you read the whole thread, you will come across this: https://twitter.com/alexstamos/status/1268199863054811136

2) None of the major players offer E2E by default (Google Meet, Microsoft Teams, Cisco WebEx, BlueJeans). WebEx has an E2E option for enterprise users only, and it requires you to run the PKI and won't work with outsiders.

Any E2E shipping in Zoom will be groundbreaking.

The tweet you linked is newer than this discussion. And it's also misleading. The major players in the business world aren't E2EE today, so Zoom is breaking ground in that way, but as far as free offerings go, FaceTime is E2EE and that's certainly not a niche service. WhatsApp is also E2EE. And there's a platform called Wire which I'm not particularly familiar with but which claims to be E2EE. It's also a paid service, which suggests it's targeting businesses. I guess it just doesn't count as a "major player".

I forgot, there’s also Google Duo which is E2EE as well.

Wrong. Cisco WebEx does have E2EE for free users!

Yup, I was a bit confused about what you were trying to say. Thanks for clarifying.

I'm having trouble reading between these lines.

What are a few good examples of "abusive" meetings?

dick flashers, for one thing, and the pattern of connections or attempted connections should be a partial indicator of this sort of abuse. perhaps there should be some sort of flag that could be set on such an account for inappropriate behaviour, or even, thinking liberally, an adults-only adult-activity flag to set.

this is a hydra that shows up every time someone creates a video chat; there is a problem with sausage parties and sexual blackmail that needs workarounds

> dick flashers for one thing

How are these people invited to meetings and why aren't they kicked from the meeting? How would anyone handle this in a real world meeting?

luring people to invite them via clickbait methods, using obfuscation so they are accidentally invited, straight-up blackhat hacking to manipulate the system. early on, public meetings were getting bombed by flashers

these types are handled by identifying and outing them, or chasing them away. the internet has to go beyond the "your IP number is" thing and demonstrate that there is real knowledge of who they are, and that they are bothering people and it isn't going unnoticed.

Even if identifying dick flashers is their primary concern, the situation can be helped by improving security, not decreasing it: non-guessable meeting IDs, passwords, maybe unique invite links so it's possible for meeting organizers to identify who invited the flasher.

If someone sends clickbait invites to an abusive meeting, then victims can trivially report it, possibly with screenshots.
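For illustration, a minimal sketch of what non-guessable IDs and per-invitee links could look like (the token sizes, URL, and registry here are invented for the example, not Zoom's actual scheme):

```python
import secrets

def new_meeting_id() -> str:
    # 128 bits of randomness: infeasible to enumerate, unlike short numeric IDs
    return secrets.token_urlsafe(16)

def new_invite(meeting_id: str, invitee: str, registry: dict) -> str:
    # Mint a per-invitee token so the organizer can later see which invite
    # an unwanted participant joined through, without touching call content.
    token = secrets.token_urlsafe(16)
    registry[token] = (meeting_id, invitee)
    return f"https://example.invalid/join/{meeting_id}?invite={token}"
```

The point is that abuse tracing happens at the invite layer, entirely outside the encrypted media path.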

They didn't specify, but I assume from context that they mean meetings conducted expressly for the purpose of aiding illegal activity.

> Their E2EE design will make it impossible for Zoom employees to enter E2EE meetings without permission

If it is acceptable to view a meeting without permission under the pretense of abuse, then it's also possible to do so when someone is doing nothing wrong.

Even worse, if their system is compromised, a bad actor could monitor free users' meetings without any protections. And what is stopping those bad actors from getting a paid subscription, or maliciously gaining access to a paid account? Or these bad actors could use another system that doesn't compromise (Signal) or host their own.

Security comes in layers and logs. A system without these layers and accountability isn't secure. Zoom isn't secure, and is using law enforcement as a scapegoat and pretense to keep their security low.

Zoom has said that employees can enter a meeting, but there's no way to do that without being seen on the participant list and there is no way to record a meeting secretly. They've also said they wouldn't build these things.


>We also do not have a means to insert our employees or others into meetings without being reflected in the participant list. We will not build any cryptographic backdoors to allow for the secret monitoring of meetings.

I don't get it. So they're fine with abuse, as long as you're paying them for it? Or do they have some sort of E2EE backdoor (probably, since they manage the keys), that they want to selectively apply, but can't do so if people are constantly using burner accounts, and thus want E2EE users to be somewhat anchored to their payment information?

Specifically, Zoom is selling Abetting as a Service. For a fee, Zoom will take active steps to shield criminal activity from law enforcement.

Also I could pay for the service and still be abusive.

That's a significantly better explanation, and they're really between a rock and a hard place.

They get to pick between headlines like the current one, and claims that they support child porn rings (he isn't saying it explicitly, but everything I saw looks like that is the problem they're trying to fight).

Personally I think the whole topic could just have been avoided by not saying anything at all.

Zoom needs a business model and saying "if you want encryption you need to pay for it", to me sounds like a reasonable approach to making money.

Once you start dragging other reasons into it, you need to start defending them.

I think it's perfectly reasonable to have multiple reasons for doing something. Making money and deterring malicious users are both valid reasons. Some forums have paid fees for the purpose of deterring unwanted users (MetaFilter, Something Awful, Bitcoin Wiki in the past), so it's certainly a strategy with a precedent.

Yes, you can have as many reasons as you want, but the more you list them off to the public, the more time you have to spend defending them individually.. was the point that I was making.

There is a damn good reason why good PR people often refuse to comment. People remember stupid responses, but forget "Zoom declined to comment for this article" very quickly.

> child porn rings (he isn't saying it explicitly

Well, he didn't use the word "rings" but he did say CSAM (Child Sexual Abuse Material).

What, like a child porn ring can't pay for the version to get E2EE?

The whole CSAM thing is terrible because it provides a great excuse for surveillance at every turn. Even though it's a tiny minority of people participating in the abuse, it is so bad that people are willing to give up their own privacy over it.

There are other ways to track these criminals and we should be using those. We know they are smart enough to stop using Zoom once it's no longer encrypted. Meanwhile normal people will be left holding the bag of surveillance.

A few bad apples, right? And it’s technologists and privacy advocates that are shielding their behavior.

I don’t think this argument really holds but I think it’s funny how quick we are to downplay our own “bad apples” and say that encryption is more important.

No one is defending child sex offenders. Just find another way to catch them that doesn't involve invading private conversations. It's the first step down a really bad path that you only have to look once at China to see the results of.

No obviously not, but you are creating a system that allows them to operate and shields them from being discovered. And if we're going to turn the political tide on E2EE from being the thing the "bad people" use to "the standard for private communications" then we have to have a better answer to this.

You don't win any political battles by being the preferred tool for child molesters and then telling the gov't to pound sand when they come asking for help finding them.

E2EE plus a client that looks up images in the NCVIP database and refuses to send/receive matches would at least be something.

If the FBI comes knocking looking for access to a particular user's messages, then have a system that kicks that user off the network until they agree to add the FBI's key into all their chats for a specified time. Make it a bright-line, visible action to the user being monitored. You have PFS, right? So they can't see old messages, and once the FBI's access is revoked you can prove that all your chats are private again.
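Structurally, that proposal fits a fan-out E2EE design where each message key is wrapped once per recipient, so adding a monitor just means adding one more visible recipient. A toy sketch (no real cryptography; the `wrap_key_for` stub and all names are invented for illustration):

```python
import os

def wrap_key_for(recipient_id: str, key: bytes) -> bytes:
    # Stub: a real system would encrypt `key` to this recipient's public key.
    return key

class Conversation:
    def __init__(self, participants):
        self.participants = set(participants)
        self.monitors = set()  # shown in the roster alongside participants

    def add_monitor(self, who: str):
        # The bright-line action: monitoring appears in the visible roster.
        self.monitors.add(who)

    def remove_monitor(self, who: str):
        self.monitors.discard(who)

    def roster(self):
        return sorted(self.participants | self.monitors)

    def new_message_envelope(self) -> dict:
        # Fresh key per message (PFS-style): revoking a monitor cuts off
        # future messages only; old messages were never readable to them.
        key = os.urandom(32)
        return {r: wrap_key_for(r, key) for r in self.roster()}
```

Because the monitor is just another recipient, hiding the monitoring would require changing the roster logic itself, which is exactly the kind of backdoor Zoom says it won't build.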

This won't work because real child sex offenders would never give the key to the FBI. The reason I don't provide a solution is that there isn't a good compromise. However you slice it, normal people will end up getting their communications monitored while the real bad actors stay one step ahead. You also can't effectively ban E2E encryption, because new software will always pop up to do it. The end result is always that normal people get surveilled and the real criminals are still shielded. That is why I'm saying it's a red herring. They will not be any closer to catching these people by opening up one app to surveillance. Maybe they catch one or two lazy ones, but then it will dry up real quick.

If as a layman I had to guess another way to catch them, it would be to go to the source. Follow cases of missing children. Investigate reported child abuse. Once you have caught one of them, you are free to seize their computer and use it to honeypot all their contacts, with E2E encryption, so the contacts believe it's the person you just caught.

Right! The point isn't to ban E2EE; it's to design your chat system in such a way that it's less effort for the worst actors to go somewhere else, and to pay lip service to the FBI. I don't think any of these would actually solve the problem, just that we might have a popular E2EE chat service that could survive the political fight.

Excuse me? I'm not actively shielding abusers who I have witnessed.

But it's okay if you're passively shielding abusers? I mean that's the crux of the argument here. "Sure, we're the preferred tool for child abusers and sex trafficking but a few bad actors don't invalidate the need for private communications" is an argument I would accept as a technologist but doesn't seem to fly with the public. It doesn't mean that E2EE is DoA it just means that you can't just throw up your hands and say that doing something is impossible.

Have the client check the FBI's CP database and refuse to send pictures that match. Sure, it's open source and abusers could recompile it, but they won't. In the same way that blocking the default curl user agent stops 99% of spam at my company: would-be attackers could change it, but they don't.
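The shape of that client-side check is roughly this (exact-match hashing only for simplicity; real deployments like PhotoDNA use perceptual hashes to catch re-encoded images, and the blocklist here is hypothetical):

```python
import hashlib

# Hypothetical client-side blocklist of hashes of known-bad images.
BLOCKED_HASHES: set = set()

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def may_send(image_bytes: bytes) -> bool:
    # The check runs on the client, before the image is encrypted and
    # sent, so the E2EE pipeline itself never needs a backdoor.
    return sha256_hex(image_bytes) not in BLOCKED_HASHES
```

Note the trade-off: the blocklist ships to the client, so a determined abuser can strip the check, which is exactly the "they could recompile it, but they won't" bet.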

Zoom is not encrypted today.

Uhhh yeah since the announcement. That's what we are here to discuss right?

The parent post said “once it is no longer encrypted”, implying that zoom is encrypted today.

This was really interesting, thanks for linking it.

When he puts it in the context of their typical abuse pattern - anonymous emails, VPNs, and just a few meetings - this decision makes much more sense.

I hope they expand on their thought process in a blog post at some point, I'd love to read more.

I like the explanation, and it seems a fair trade-off to me for complying with laws. Meanwhile, I think Zoom's PR team sucks big time compared to Google's or Microsoft's, which is why they are being pushed again and again on privacy issues.

The thread is very good and offers some reasonable points putting their decision in perspective.

I'm not sure I understand the trade off w.r.t. possible abuse.

For harassment / offensive content, if anything E2EE will make it easier to prove where offensive content came from. You've got a cryptographic chain leading to the source, after all. All you need is a button to record and report things (which admittedly seems to be exactly what they intend to build). The E2EE aspect doesn't really change things, except that Zoom can't record things themselves, which they claim they didn't do in the first place (although they might have relied on the small server-side buffer they had, but that's an iffy solution at best).

Also not sure what to think of Zoom's Trust and Safety team breaking into a private conversation when they think some kind of abuse is going on. Yes E2EE would make it impossible, but why on earth would Zoom want that kind of role?

I feel like "we need to protect the children" is the new reductio ad hitlerum. This is such an easy way to shut down a conversation.

Do you want to:

a) accept lack of E2EE


b) admit that you hate children?

Pick one.

Hurry up, your precious internet points™ are at stake here.

Of course, in this case they’ve set up the counter move easily:

“So what you’re saying is that Zoom is fine protecting pedophiles from law enforcement, as long as you profit?”

This is why you shouldn’t play such dumb games as a company, there is only downsides from a PR perspective.

I thought it was the Four Chans of the Infopocalypse? shrug

Nice, the upside of posting a comment with >10 upvotes (not a common occurrence in my case) is that:

1. you end up getting more responses

2. more responses === a higher probability of seeing gems like this wikipedia wormhole I'm about to get sucked into:) I had no idea about the horsemen/chans or that May identified the reason behind the alpha particle problem. Cheers.

I didn't know there was a specific term for this concept. Thanks!

We are rarely given more than two choices. There is no room for nuance in 140 characters.

> I feel like "we need to protect the children" is the new reductio ad hitlerum.

At the risk of being pedantic: it’s not new at all. That’s been part of almost every political campaign in the past century. Technically, “reductio ad hitlerum” is the new “protect the children” argument.

I don't think history supports that. There were essentially zero childrens' rights until post-war. In some locations child abusers were originally prosecuted via animal-protection laws.

The “protect the children” argument has little to do with actual children protection laws. It’s about using pathos of the crowd to manipulate opinion.

Regarding history, I don’t know about the US specifically, but European countries have had legal child protections since the end of the 19th or beginning of the 20th century (depending on the country).

See https://en.wikipedia.org/wiki/Declaration_of_the_Rights_of_t... for a pre-WW2, international effort.

These fall into the class of 'thought-terminating clichés':


With all of the calls for platform intermediation of content for the protection of the disenfranchised, b) is just turning into 'do you hate?'

It's not even new, "the children" have been routinely trotted out since the 90s

And the Satanic Panic of the 80s.

As one of said children in the 80's that stuff freaked me out. Riding bikes at night with friends and hearing weird noises in the woods would get the ol' heart (and legs) pumping.

I was just the opposite. It just sounded so bizarre to me, that when the rumors started flying in my area of certain group's activities at certain locations, my friends and I would sneak off to investigate. It wasn't that we were interested in said activities, or that we were necessarily investigating the validity of the rumors. We were just curious. It turns out, it was all false and made up. Put serious doubts in my mind at an early age about "believing everything you hear", but also seriously lowered the credibility of the people repeating the information in my mind. Had it not been for my interest in computers, I might have turned into some sort of investigator.

And the Communists of the 50s.

And the prohibitionists of the 1890's.

Good point, let me rephrase it to "reductio ad hitlerum du jour". If we're scoring the internet points, we might as well be a bit pretentious.

It’s not new. Schneier has been saying this for a long time.

Uhm, it's definitely way older than Hitler even being a person.

Not clear from history.

You think it's just cover for it being a hard technical problem for zoom?

But... what the fuck?

Also... I have a paid account. How can I tell if my connection is encrypted or not? Is it only if all other parties have paid accounts? Is there an indicator?

Under an "encrypt some calls" approach, if even paid users can't tell easily and reliably if they have an encrypted connection... basically nobody can count on it.

I don't think it's a hard problem, they're already doing it for paid calls. It's probably more work to do it for some calls but not others. From what I understand, zoom has the encryption keys anyway, and they decrypt and encrypt on their server during routing, which can have some benefits. But, that means it's more processor intensive (costly) to use encryption. So, I think they're not encrypting free calls just because it's an added expense (which really adds up at scale).

Working with law enforcement might be true, but it doesn't make sense that it has anything to do with free calls. Again, they have the encryption keys so they could decrypt any calls that they want to work with law enforcement on. This might even be a really poor attempt at upselling to paid accounts.
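The key-custody distinction being made here can be sketched in a few lines (a toy model: XOR with a random keystream stands in for AES, and all names are made up — the point is only who holds the key, not the cipher):

```python
# Toy illustration of server-mediated encryption vs. end-to-end encryption.
# XOR keystream stands in for a real cipher; do not use this for anything real.
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"hello from alice"
key = secrets.token_bytes(len(msg))

# Zoom-style transport encryption: the server holds the key too,
# so it can decrypt, inspect, and re-encrypt traffic in the middle.
server_side_keys = {"meeting-123": key}
ciphertext = xor(msg, key)
seen_by_server = xor(ciphertext, server_side_keys["meeting-123"])
assert seen_by_server == msg  # the provider can read the plaintext

# E2EE: only the endpoints hold the key; the server relays opaque bytes.
e2e_key = secrets.token_bytes(len(msg))
relayed = xor(msg, e2e_key)          # all the server ever sees
assert xor(relayed, e2e_key) == msg  # only endpoints can recover it
```

The "encrypted" claim is true in both cases; the difference is entirely in who can produce `seen_by_server`.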

If you're concerned about security, I don't think zoom is the conference tool of choice -- maybe they've fixed everything I mentioned, but they still have among the worst track records.

Security aside, the feature set and user experience is attractive. Except for one thing, why does it take two clicks to end a call? That's awkward every time. If people are accidentally leaving calls, that's a different problem and two clicks is a lazy solution.

I don't think zoom provides e2e encryption. iirc zoom decrypts all messages at their server before encrypting it again and forwarding it to the destination.

Huh, right.. so this makes it all even more confusing.

Are they saying that WHEN they implement true e2e encryption, it will only be for paid accounts?

Or are they saying the encryption they've already got, which they are inaccurately calling "e2e" when it is not, was formerly enabled for free accounts, but no longer will be?

Or something else?

(Who would have thunk that lying and calling something "e2e" that wasn't would end up confusing!)

I also still don't understand if you get the encryption (whichever one they are disabling for free accounts) if the 'host' is a paid account but some/all of guests are not...

Exactly this. End to end encryption does NOT mean "Encrypted on each end but not in the middle".

On the desktop app there's a green icon at the top left that says encrypted when you hover it. Not sure about mobile.

A thought:

Sometimes the distinction between physical and digital security is brought up in these discussions, the idea that physical security is imperfect (you can always break a lock) but that digital security may truly be impenetrable. This is a false dichotomy.

If people have a conversation in a pub or on a park bench, then law enforcement can surveil them individually or bug the venues in a targeted manner.

But the same methods can also be applied to digital communication. This is opsec 101 right - if one happens to be a high value target, one would totally expect their house/apartment to be surveilled - no amount of digital privacy can make up for a pinhole camera installed on the wall behind one's monitor, LE doesn't even need the keys, they see the content directly.

I think the argument that digital security is 'too perfect' falls apart if you take into account the reality that physical security is a component of that. "If you control the physical hardware" and all that.

TL;DR Digital security is just a subset of physical security. You can always just drill through the side of the safe.

I think it's pretty clear that unencrypted communication with law enforcement access allows for the discovery of individuals you'd never find via traditional means. Over the years, I've seen this argument brought up a number of times, but oddly seldom explicitly stated:

It seems like law enforcement wants to be able to use digital communication to discover criminals, and

privacy experts want law enforcement to rely on HUMINT, a traditional warrant, and physical access.

I believe the second method is far more just, but I seldom see anyone acknowledge that it's almost certainly less effective.

Does it need to be acknowledged? I consider it to be a priori knowledge.

The distinction is between targeted and untargeted surveillance.

Digital communication is so easy to monitor, particularly by a state-level actor, that if it's unencrypted, it's pretty much all being hoovered up by someone by definition.

That's not the case for physical security, even if everyone leaves their doors unlocked, their windows open, and their notes on the kitchen table; everyone is not automatically a suspect, so most people aren't being put under the microscope.

The government likely has the ability to know, instantly, within milliseconds, everything I've ever done on the Internet that's unencrypted.

By contrast, they will likely never see the contents of the love note on my kitchen table. Well, if that pinhole camera isn't there, anyway. ;)

All of the approaches applicable to physical communications apply to digital communications too.

It's just that the _additional_ level, which in the physical world would be equivalent to knowing the contents of all of the conversations/interactions that people are having in person, is something that people wish to fight against and prevent from becoming normalised.

> Does it need to be acknowledged? I consider it to be a priori knowledge.

I think you have a good point. The reason I'd like to see it acknowledged is because the two sides of the argument often talk past each other. Police power should not be unlimited, and it's clear that our constitution intended for the power of the state to be limited, with the intent of maximizing liberty.

However, for years people made the claim that the "liberty vs. security" argument was a false premise. ie, that ultimate liberty and ultimate security are both possible. I don't believe this is correct. (Broadly I think liberty is more important than security, but everyone has their set of exceptions to this rule) I might just be dating myself. People had this debate constantly in the years after 9/11. Maybe this argument is not getting made any longer?

In either case, I often hear these two sides talking past each other. I wish instead that both sides were more overt. Digital information can make police work broader and more effective, but we should treat it with quite a bit of caution. We don't want police effectiveness to encroach on liberty in most cases.

> but oddly seldom explicitly stated

In democratic societies, law enforcement usually has no right to run "criminal discovery" processes like those. That's why they don't cite their intentions, because it's illegal (more often than not, a crime).

Notice that limits on crime policing are a very important factor on maintaining a democracy.

Which also explains why they keep trying to classify encryption as prohibited munitions, making users of it automatically criminal.

Is it less effective, though? From what I remember of successes in the news over the last few years, e.g. against organized crime or other groups that operated partially or totally over the internet, it was mostly simple, dull ~~physical~~ police work that leveraged mistakes in operational security, not an evaluation of minable data (even when we learned in hindsight that it existed).

E.g. looking at the logs of relevant servers and waiting for someone to login without their VPN at some point.

Of course strong encryption and privacy protections makes surveillance less effective. The problem with mass surveillance is that it's very easy to abuse, and the abuse potential is very scary. The abuse potential is unacceptable. I'd rather get less effective surveillance.

The existing judicial system is calibrated for HUMINT. False accusations are costly, but rare.

Using digital communications to discover criminals can accidentally sweep in many more innocents, who would then have to hire lawyers and carry all kinds of other costs to defend themselves.

Not to mention, the court system in the US is already at capacity, even though the vast majority of cases are settled before reaching a decision. Sweeping in more cases resulting from a flood of new information won't make it any easier to defend oneself.

Then there are the unintended outcomes. What does correctness look like for crimes discovered from bits in a sea of untapped information, once Bayes' theorem is applied to an entire populace? And what if crimes are prosecuted before being verified with the real-world investigation methods already in use?
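The base-rate problem behind that Bayes point can be made concrete with rough numbers (population size, offender count, and error rates below are all assumed for illustration):

```python
# Base-rate sketch: even a very accurate classifier applied to a whole
# populace is dominated by false positives when the target class is rare.
population = 330_000_000
actual_criminals = 1_000          # hypothetical rare target class
sensitivity = 0.99                # P(flagged | criminal), assumed
false_positive_rate = 0.001       # P(flagged | innocent), generously low

true_hits = actual_criminals * sensitivity
false_hits = (population - actual_criminals) * false_positive_rate
precision = true_hits / (true_hits + false_hits)

# Roughly 330,000 flagged innocents for every ~990 real hits.
print(f"flagged innocents: {false_hits:,.0f}, precision: {precision:.4%}")
```

Under these assumptions more than 99% of those swept in are innocent, which is the "accidentally sweep in many more innocents" point in quantitative form.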

Youtube generates several minutes of video per wall clock second. Now many of those videos are innocuous, but one must assume that occasionally someone uploads a video of a street fight or something more grotesque that is of interest to law enforcement or intelligence apparatuses.

That's public. You can analyze all of those. The NSA is free to pull them just as much as you and I.

And they don't as far as we can tell. Is it the cost of analyzing that much content? Is it that the NSA doesn't care? Is there something difficult about stripping audio off a video for keyword spotting?

Well I have a theory, and the theory is based off what little comes out of that side of the community. The theory is that the NSA can't meaningfully process the data it ingests. There's too much, it's too hard to query and they hit the same roadblocks of telling the difference between an actual crime and a videogame or fiction story.

So then we must ask, why do they want more? They have more data than they can analyze, why even bother ingesting more? It's not because it helps their mission, it's not because there's some value to it.

Well, why do we see regular businesses fall into this trap? A billion points of analytics data that they can't make sense of. When I see it, it's because it's easier to blame a lack of data than to explain the difficulty of the problem. You can always say "Well, I just don't have enough data," but it's much harder to explain that a bunch of crappy, error-filled data isn't good for anything except wild goose chases. Adding more bad data doesn't improve the quality of your data, it just adds more of it.

I think it would feel pretty good to have a database of potentially incriminating evidence against a wide swath of the population that could be used if a person became a high-profile target. For example, if you're in one of those videos and then run for public office 10 years later you better hope the intelligence agencies like your positions and don't want to tank your chances.

So, no, they can't process all of it. But they can more easily trawl it for specific data they need. Especially 10 years from now.

It's a fine supposition, but these suppositions often get passed around as if they're true and self evident. The reality is you don't have distinct information about what the government is collecting. Instead, what you have is information about what's probably possible.

From that standpoint it makes sense to err on the side of caution, and assume it's all being collected. But, while this is an effective risk calculus, it's different from having access to the ground truth.

> And they don't as far as we can tell.

That's wishful thinking which you have no evidence for. But let's assume that you're correct - eventually they will have a way to analyse it en masse.

There are, then, two things we need to bear in mind:

- is the time horizon likely to be close enough that data currently collected will be relevant then?

- if we allow the collection now, will it be easy to roll back that collection later when the threat is on the horizon?

The answer to both of those questions is yes. Similarly, we use high strength encryption now, even if we think 128-bit is fine, because in time it won't be.

The above is theoretical. The next bit isn't - they will _always_ be able to decide that agent A should look at video B from N years ago.

They can't do that for a letter on the hypothetical table, or a message stored with strong encryption that stands the test of time - it won't exist in N years.

> The above is theoretical. The next bit isn't - they will _always_ be able to decide that agent A should look at video B from N years ago.

Unless they didn't collect and store it, as the parent suggests.

Why in the world would the NSA care about a street fight?

I have the opposite opinion: it is trivial and inexpensive to create and store an indexed archive of text from speech in audio, and to run image recognition models on video and pictures. There's value in having that data archived, so that they can go back and go through it should whoever created the data become a target in the future.

However, I doubt the NSA would waste resources investigating a street fight, but I'm pretty sure the video would be mined of any valuable data that could be gleaned from it.

>So then we must ask, why do they want more?

How do you know they want more?

[edit] Or was this meant as a rhetorical? ie, "who would want more in this case?"

I cannot quite discern whether you intended it, but "law enforcement wants to be able to use digital communication to discover criminals" sounds quite ominous, even if someone has never read 1984.

The primary problem, even at the best of times, is that law enforcement wants to create criminals whenever it fits their fancy. Even more ominously, if police had any greater command of the voluminous criminal codes and the incentive structure changed, they could basically charge or lock up most people they ever come across for any number of arbitrary violations of convoluted laws.

Maybe this is being a bit anxious, but with the full-on surveillance state unfolding right before our eyes, where wrongthink gets you "cancelled", we seem to be racing headlong into something not all that different from what Orwell envisioned as the consequences of self-righteously benevolent tyranny … for our own good, of course.

Without warrants, and thus probable cause, searches are highly discriminatory.

This is evidenced by stop-and-frisk, which was effective only in finding criminality among select individuals.

> seems like law enforcement wants to be able to use digital communication to discover criminals,

Nope. Please remember these words. The surveillance system is about control, not security (finding criminals).

William Binney and thinthread are a great starting place to understand this.

I think that's reductive. I agree that it causes control, and may even be motivated by control, but there's clearly a law enforcement incentive here.

I think the evidence disagrees. I mean I get the on the face justifications, which in Aspen Institute type circles revolves around the fall of the nation state as the threat actor and the move towards a reality where a single non-state actor can be a viable threat, but thats just the surface level justification that makes it palatable to the average person and policy maker.

I think we just have to look at the history of surveillance not just since 9/11 to understand this. Forest and trees and all that.

If someone is a high value target, LE needs a warrant to bug their homes and to listen to their conversation, which implies they should have some indication of a wrong doing after which a judge grants them a warrant. With digital communications being non encrypted, it could increasingly be used for just surveillance in the name of national security, and as a way of finding out who is indulging in a crime vs just getting more evidences to prove that someone has committed a crime.

If the same physical system were to work in the digital age, a company could share a special encryption key with LE for the evidence-collecting part, provided they get a legitimate warrant for it. Physical security was never perfect, but we aspired to make it as close to perfect as we could. The same applies to digital.
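One way to sketch that warrant-gated escrow idea is a two-party XOR key split, where neither share alone reveals anything about the key (a toy construction for illustration, not a production escrow scheme, and not something Zoom has proposed):

```python
# Toy key escrow: split a meeting key into two XOR shares, e.g. one held
# by the provider and one released by a court only under a warrant.
import secrets

def split_key(key: bytes) -> tuple[bytes, bytes]:
    share_a = secrets.token_bytes(len(key))
    share_b = bytes(k ^ a for k, a in zip(key, share_a))
    return share_a, share_b  # each share alone is indistinguishable from noise

def recombine(share_a: bytes, share_b: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share_a, share_b))

meeting_key = secrets.token_bytes(32)
provider_share, court_share = split_key(meeting_key)

# Decryption requires both parties to cooperate; the provider alone cannot.
assert recombine(provider_share, court_share) == meeting_key
```

The hard, unsolved parts are of course not the arithmetic but the governance: who holds the shares, how warrants are verified, and how abuse of the recombination path is detected.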

I can only break so many physical locks in an hour. My computer can break thousands digital ones at the same time

Operators must go to the premises and covertly install the equipment needed to monitor the target. This fact alone limits the scope of surveillance operations. Usually there's a lot more oversight.

Unencrypted communications will be intercepted by default with no warrant, no oversight, no limitations on its processing and on a world-wide scale.

"If people have a conversation in a pub or on a park bench, then law enforcement can surveil them individually or bug the venues in a targeted manner."

My home internet connection could have spies from 30 different countries all over it and I won't see anything. If I'm sat on a park bench, then anyone with a Russian accent asking for directions to Salisbury Cathedral is going to stand out somewhat.

That's not how HUMINT works. Most likely an acquaintance, friend or co-worker would be the one that actually collects the information from you.

In this scenario the person on the park bench with you.

Do you really believe that Russian spies have Russian accent?

I don't, but it's amusing that this comment has a Russian accent.

If you watch enough English language spy movies you will know full well that a Russian spy has a Scottish accent. So do Russian submarine captains. Funnily enough a British spy and a Russian submarine captain were played by the same actor - Sean Connery. Russian spies are played by less posh Scots.

Hollywood and co. doesn't get out much or something.

The 256 bit encryption is no good when they zip tie you to a chair and cut off body parts until you give up the passphrase.

Precisely. This is about mass surveillance vs targeted surveillance, not obtaining evidence on known criminals.

I know right, someone think of the children!

This all seems rather moot:

1. There is no way to verify that you are actually connected to a particular person. i.e. Zoom has no identity management.

2. The client is closed source and can't be verified.

3. Zoom can trivially impersonate any participant as they control the servers. They can MITM at will and they won't get caught at it.

This discussion is like talking about the security of the bank vault door when you are planning to make the vault out of drywall.

Not being E2EE, it doesn't matter what Zoom logs or does or doesn't know, packet captures can definitely record this information. This is basically extortion from Zoom, pay us or information will be in the clear that you probably don't want to be.

They encrypt "end point to end point" now. It's just that they have easy access to the keys. I was suggesting that what is being proposed for the paying customers might not have any real value over what they are doing now.

The craziest thing with this is that this is even a discussion.

US citizens should have a 'right to privacy'. But that's been stripped away due to post-9/11 reforms, among others.

Whether or not you have a 'right to privacy' does not mean Zoom has to provide it. You can choose a provider who allows you to exercise that right.

Yes, but the quote says:

“Free users for sure we don’t want to give that because we also want to work together with FBI, with local law enforcement in case some people use Zoom for a bad purpose,”

So they want to keep the data unencrypted so they can give it to the feds. That doesn't sound like privacy to me.

edit: So I mean, something like that should not be allowed by law. It's rather the FBI that is breaking the law here, but Zoom explicitly says they want to work together with them. So that means they approve of that injustice, making them also unjust. If they encrypted their data to protect their users' privacy, they would not be unjust in this respect.

It's not obvious to me why Zoom should have any less right than, say, a hotel to block people using its service for criminal activity.

A hotel that suspects you're taping child porn in one of its rooms is well within its rights to call the police. If Zoom has reason to believe you're distributing child porn in a Zoom room, why shouldn't it be allowed to take action, too?

But you're arguing for the ability for hotels to install peephole cameras in every room to make sure you're not up to no good.

The point of encryption is that no one knows what you're doing, because they can't see it. Just like no one can see what you're doing in a hotel, most of the time.

No one can see what you're doing inside your hotel room, but they know when you enter and exit, who you're with, and they can hear you if you make noise. They can track what you watch on TV, and when you're on the Internet. Housekeeping goes in every day and sees all your stuff, and rearranges some of it.

So they can't literally see you every moment, but they have a lot of visibility into what you're up to.

US citizens have never had a right to privacy when it comes to commiting crimes. That's why court-authorized wiretaps have existed for decades. Long before 9/11.

Do you think courts shouldn't be allowed to wiretap the phones of suspected criminals? Because Zoom is just a modern phone.

Now might be a good time to mention that if you join the Free Software Foundation as an associate member you gain an account on their Jitsi server.


How well do you find jitsi works? For multi-party video? Large number of parties like zoom?

Dunno what you consider large, but it works well for me with 20 participants.

> […] There will not be a backdoor to allow this.


Eh, am I supposed to trust you just like that? If history has taught us anything, it's that there will be.

Current encryption implementations (AES-GCM) will not be downgraded. Meetings will still be encrypted, and meeting content is still not going to be used for tracking users.

E2E will be an opt in choice for paying users who are willing to sacrifice some features for the benefits from additional security.

See this thread for more details: https://twitter.com/alexstamos/status/1268061790954385408

Edit to credit vjeux for the thread link

Do I have any way to verify this as a user?

Sure. Do a PhD in cryptanalysis and reverse engineering.

If you want to look at something now, the white paper for the E2E protocol design is public and open right now: https://github.com/zoom/zoom-e2e-whitepaper

On a more serious note, until there is a protocol and implementation available then we can't say anything for sure. Us Security folks aren't magicians.

That was uncalled for. Yes, it is hard or impossible to do in Zoom.

If these tools use open standards and well documented protocols this will not be a problem.

I can verify, without a PhD in cryptanalysis and reverse engineering, that my browser is running a secure connection to a website and that the certificate is signed by the source (for sites enabled with FS and HSTS).

Don't get me started on browser certificates. That's a whole week of my life I'll never get back.

The short version of it is, your browser trusts CAs to say whether a certificate is valid. But CAs often trust other CAs who may not actually be that trustworthy. Those CAs then trust other CAs who definitely are not as trustworthy... etc.

So that certificate/padlock picture in your browser may not be as trustworthy as you think. It's an active problem.
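That chain of delegated trust can be modeled as simple graph reachability (a toy model: real X.509 path validation also checks signatures, name constraints, and expiry, and every name below is invented):

```python
# Toy model of transitive CA trust: anything reachable from the root
# store ends up trusted, even CAs the browser never vetted directly.
trusts = {
    "browser-root-store": ["RootCA-A", "RootCA-B"],
    "RootCA-A": ["IntermediateCA-1"],
    "IntermediateCA-1": ["SketchyCA"],   # cross-signed, less scrutinized
    "SketchyCA": ["evil-example.com"],
}

def is_trusted(anchor: str, subject: str) -> bool:
    # Depth-first search over the delegation graph.
    stack, seen = [anchor], set()
    while stack:
        node = stack.pop()
        if node == subject:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(trusts.get(node, []))
    return False

# The browser never chose to trust SketchyCA, yet its certs still pass.
assert is_trusted("browser-root-store", "evil-example.com")
```

Certificate Transparency attacks this by making every issuance publicly auditable rather than by shrinking the graph.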

Mandatory Certificate Transparency solves the problem of trust in CAs quite well, though.

In terms of an actually relevant reply that's not bemoaning browser certs...

Yes I was a bit harsh. But I was trying to demonstrate a point - no one knows for sure until we can look at this stuff in detail. Until the researchers get to pull it apart then no one can verify anything. The little green tick on a zoom call is practically worthless until some external work is done.

The protocol is documented and open. I linked to it in my comment.

The open protocol for RTC today is WebRTC. Zoom does not use WebRTC. If the protocols are open, like HTTP, then I can build my own client and do not have to use theirs (just like you can build your own Hacker News app). Zoom will not use WebRTC for this precise reason. If they and all others did, I could choose my client: a client I trust and that will give me my green tick, open source or not.

Google supported Jabber in chat for a long time, and Slack supported IRC (both have since dropped the support), but while they did, you could use any IRC client with Slack or use Google chat over Jabber with any client.

If open protocols were used for video, like email (although email is not a good example for encryption), it would not matter who your service provider is: you could verify they are secure, or move to another one.

Today I have more than 10 video conferencing apps on my devices (Zoom, Hangouts, Meet, Webex, Teams, GoToMeeting, Chime, Skype, SfB, FaceTime, Signal, Telegram, RingCentral, and UberConference...) because a customer, partner, friend, or family member uses one of them. I have only one email client and one browser, though, and they don't have to be open source at all. People happily pay for closed-source Gmail or O365 without worrying whether their mail will reach you, while still using the official client or a client of their choice.

Sorry, to clarify, when i say protocol I specifically mean the E2E encryption protocol.

Also, have you thought about asking your clients/whatever to use one app to communicate with you? Even if you get half of them onboard, it sounds like it would save you a lot of mental bother.

I wish. Every company has their own app to use, and they will invite you to their conference tool by default; asking 10 people on a call to change for 1-2 is not feasible.

Many of them cannot install any new native application on their desktop/phone without IT approval, or their VPN does not allow traffic to consumer apps like Hangouts. They also need to record for compliance, and pre-Covid some apps like Webex were connected to their conference-room bridges using dedicated lines and hardware, etc.

Family and friends do not use biz tools, and it is not easy to convince Apple users who like FaceTime; Messenger is popular in a few countries, WeChat in China, WhatsApp in other places.

It is easier to install another app than to get your grandma to switch from the one thing someone installed on her phone that she has learnt to use.

Well, even if all of your software was open source, do you have the time to validate all of it, from the app to the OS? what about the CPU?

This has been a well-known issue since Ken Thompson's "Trusting Trust" paper, and it is not what I am getting at.

It is degrees of trust. Trust is not absolute; neither is security. Depending on your threat model, you have to secure yourself. More transparency improves security; it does not solve all the problems, it just makes an attack costlier. If the cost outweighs the benefit, an attacker will not attempt it.

HTTPS does not magically make your communication 100% secure; however, the number of people who can issue a certificate from a compromised root CA, or control one, is considerably smaller than the number of people who can monitor your plaintext traffic.

I like your tone, and it led me to think:

Any sufficiently advanced cryptography is indistinguishable from magic.

Which isn’t entirely untrue from a layperson’s perspective.

Edit: fixed a word. I’d accidentally written “is” rather than “isn’t”.

Go open source.

as the colloquialism goes: Talk is cheap...

How will you show this going forward?

Reverse engineer/packet sniff the implementation and perform some sort of cryptanalysis to see if the implementation works according to the published protocol. It's pretty standard.

That's sort of what happened with the ECB mode stuff that kicked this whole thing off in the first place. See section 4 from the below for more info.
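The ECB finding is a good example of what ciphertext-only analysis can reveal: under ECB, identical plaintext blocks encrypt to identical ciphertext blocks. A sketch of the detection idea, with a stand-in block "cipher" (truncated HMAC, not AES — the repetition pattern, not the cipher, is the point):

```python
# Sketch of spotting ECB mode from captured ciphertext alone.
import hashlib
import hmac

def toy_ecb_encrypt(key: bytes, plaintext: bytes, block: int = 16) -> bytes:
    # Deterministic per-block encryption, the defining property of ECB.
    out = b""
    for i in range(0, len(plaintext), block):
        chunk = plaintext[i:i + block].ljust(block, b"\0")
        out += hmac.new(key, chunk, hashlib.sha256).digest()[:block]
    return out

def looks_like_ecb(ciphertext: bytes, block: int = 16) -> bool:
    blocks = [ciphertext[i:i + block] for i in range(0, len(ciphertext), block)]
    return len(blocks) != len(set(blocks))  # any repeated block is a giveaway

key = b"secret key"
frame = b"\xff" * 64  # media frames are full of repeated runs like this
assert looks_like_ecb(toy_ecb_encrypt(key, frame))
```

This is why researchers could flag Zoom's original AES-ECB use from traffic captures without ever seeing a key.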


That doesn't show that someone on Zoom's end isn't decrypting data in an invisible way, without a warrant, with the key they have access to, in the non-e2e case.

This whole discussion is about the E2E case. That's what the CEO was referring to. There's a twitter thread link somewhere.

I think I read that Zoom do decrypt AES-GCM server side already. They have to so they can put the little green box around the person currently speaking. EDIT - this is incorrect for AES-GCM, it's not decrypted server side.

Edit - at least e2e is the angle I'm approaching it from as it's the new information.

This is what you said:

> Meetings will still be encrypted and meeting content is still not going to be used for tracking users.

And the person responding to you asked "how will you show this?".

Which is a fairly ambiguous question and could be interpreted several ways so I went mainly for the E2E case (as it's the new thing).

Could also be interpreted as how can we show only paid users can access it? Or that certain features will be disabled with E2E?

What I replied with covers both E2E and the current state equally tbh (the linked article did it before with ECB). There are always limitations to what is possible.

I could break into the Zoom servers to make sure everything is kosher. But that's illegal.

If WhatsApp started transmitting E2E keys back to their servers people would find that out client side through network packet inspection, not server side.

Security researchers are limited in the tools/methods they can use. We have to work with what we've got at our disposal.

> Security researchers are limited in the tools/methods they can use. We have to work with what we've got at our disposal.

Which is exactly why "trust us, we're not going to do anything with these keys" is a ridiculous state of affairs and shouldn't be tolerated. We can't show that they're actually doing what they say, and it'll be years after they implement mass surveillance on the behest of law enforcement before someone leaks something.

Perfect security doesn't exist. If humans are involved at any point, it's not perfectly secure. The one-time pad is a great testament to that.

Should we work towards an ideal? Sure. Should we stress out that things aren't perfect? Probably not.

It's an interesting technical idea though. It would be interesting to see if any existing systems have a "canary" element to them.

There's a very simple solution to "the provider cannot prove that it will not misuse its access to the stream" - it's to use e2e encryption.

> I think I read that Zoom do decrypt AES-GCM server side already. They have to so they can put the little green box around the person currently speaking.

Why is this not a client thing?

Edit: Civility

* it used to be with ECB. not anymore. My bad.

> Matthew Green, a cryptographer and computer science professor at Johns Hopkins University, points out that group video conferencing is difficult to encrypt end to end. That’s because the service provider needs to detect who is talking to act like a switchboard, which allows it to only send a high-resolution videostream from the person who is talking at the moment, or who a user selects to the rest of the group, and to send low-resolution videostreams of other participants. This type of optimization is much easier if the service provider can see everything because it’s unencrypted.
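Roughly speaking, that switchboard logic might look like the following (the names and the audio-level heuristic are my own illustration, not Zoom's actual code):

```python
def pick_streams(audio_levels, pinned=None):
    """Selective forwarding: relay the active (or user-pinned) speaker in high
    resolution and everyone else in low resolution.  The server has to know
    who is talking to make this choice, which is exactly what end-to-end
    encryption makes hard."""
    speaker = pinned or max(audio_levels, key=audio_levels.get)
    return {name: ("high" if name == speaker else "low") for name in audio_levels}

levels = {"alice": 0.9, "bob": 0.1, "carol": 0.2}
print(pick_streams(levels))                  # alice is sent in high resolution
print(pick_streams(levels, pinned="bob"))    # a pinned participant overrides it
```

Under E2EE the server can't read the media to compute `audio_levels`, so clients have to report activity out-of-band or the server has to forward everything, both of which cost something.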


This was 2 months ago so their new white paper clarifies the current situation:

> For use cases such as meeting real-time content (video, voice, and content share), where data is transmitted over User Datagram Protocol (UDP), we use AES-256 GCM mode to encrypt these compressed data streams. Additionally, for video, voice, and content share encrypted with AES, once it’s encrypted, it remains encrypted as it passes through Zoom’s meeting servers until it reaches another Zoom Client or a Zoom Connector, which helps translate the data to another protocol.


Ah, so their proprietary codec has come back to bite them in the ass.

(I realize codec is not the correct term.)

That's what you get when you are a MITM-by-design solution.

In short: Zoom's E2EE will eventually encrypt corporate conferences, but it won't solve their privacy problems, because their structure stays the same.

It seems it will be a feature just to give customers the feeling that they are safe (and to make them pay more, indeed).

But, as usual, they are not.

Poor Keybase.

Keybase got acquihired, which is a much better outcome for both founders and investors than that which befalls most startups that never invent a viable revenue model.

Don’t shed any tears for anyone making hundreds of thousands of dollars per year while a third of the US has approximately zero income.

I'm certainly sad to see a company that took security seriously and "did it right" being bought by a company that arguably did not and got large mostly by trading security for convenience, even going as far as saying they were E2EE when they were actually not [1].

How do you compete with that?

Most users don't care about security/privacy and maybe that's fine but it was nice to see a company that seemed to genuinely care about these things.

[1] going by the commonly used definition

Not touching for a moment whether or not Keybase “did it right”, but, very simply: without a good revenue model, their only other option was to eventually go out of business. They were default-dead, near as I can tell from outside looking in.

You can do all the open source e2e crypto trendiness you like, but unless you’re a nonprofit like Signal that can generate a stream of donations, if you don’t eventually get people to pay you for the service, you’re not going to be able to stick around.

This was the best possible outcome for them, given the circumstances.

Skype was safe before it got acquired by Microsoft.

In a way, any platform able to evade government surveillance will be deemed an exception, and anyone using or building such a platform will be a suspect.

Ah, so if I understand the Zoom CEO correctly, "protecting" pedophiles is fine as long as they pay Zoom. It's only an issue when they try to use Zoom for free.

The issue is that the free tier makes creating and disposing of burner accounts easy. If you pay, they have credit card info, etc. That makes it easy for law enforcement to link the Zoom account to a real person if it turns out that a particular account belongs to a pedophile.

That’s how I understand it. What disadvantaged groups would be using Zoom for free that law enforcement would want to target? Protest organizers maybe?

May I recommend https://jitsi.org/ for meetings of 5 or fewer people. Easy to deploy on any cloud provider.

Does it bog down with more than 5? For it to be a serious alternative, it needs to support meetings with dozens of participants (even if it requires beefy hardware).

It fares pretty well in my opinion. I was taking Japanese classes before this COVID situation, and they implemented remote classes using a self-hosted Jitsi instance. We're 15 people in the Jitsi room for a five-hour class and it's pretty stable. I imagine there are more rooms running simultaneously too, since ours was not the only class taking place before this.

Unfortunately yes, I mean, the software allows for thousands of participants but the stream would be terrible. For 5-50 people, use BigBlueButton instead.

The scaling of bandwidth with the number of people _actually transmitting video_ is not perfect, so your results will vary with internet uplink speeds.
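A back-of-the-envelope for that scaling, assuming a selective-forwarding server and a placeholder per-stream bitrate (both assumptions are mine, not measurements of Jitsi):

```python
def client_bandwidth_kbps(active_senders, stream_kbps=800):
    """In an SFU topology each sending client uploads one stream and downloads
    one from every other active sender: uplink stays flat, while downlink
    grows linearly with the number of cameras that are on."""
    uplink = stream_kbps
    downlink = (active_senders - 1) * stream_kbps
    return uplink, downlink

print(client_bandwidth_kbps(5))   # (800, 3200)  -- comfortable on most links
print(client_bandwidth_kbps(20))  # (800, 15200) -- why turning cameras off helps
```

Simulcast (senders uploading multiple resolutions) changes the constants but not the linear shape of the downlink.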

Jitsi works great for me with 20 participants, never tried more.

As Zoom now owns Keybase, I am really worried about the future of Keybase, especially after statements like these.

This also makes little or no sense - if this is actually just about cooperating with law enforcement, why would encrypting corporate (or paying) calls be any better? The bad actors referred to in the statement could just get a paid plan.

Keybase is dead. In the PR they were explicit they were hired to work on Zoom and not continue working on Keybase. Luckily there is keys.pub

Is keybase fully open source? Or is the server closed source?

The server is still closed, but some people are saying the server source isn't needed(!) to trust the platform.


Why (!)?

Of course servers are untrusted. If you think you need to see the server source then any trust you have is mistaken.

Same as with HTTPS. If you think you need to trust the MITM, you've already lost

I think you don't need to trust the server only if you can audit the frontend and verify that it doesn't share any sensitive information with the server side. If they properly apply the concepts of data minimisation, decentralisation, and distribution, there are fewer risks involved.

But if the server manages sensitive information, yes, it is preferable to audit the server code to understand how they handle the lifecycle of that information.

"If you think you need to see the server source then any trust you have is mistaken."

Sorry but I don't agree. I trust in systems I can verify. Trust without verifying is not trust, it is faith.

OK, but that doesn't help you when they shut down the server, which I think is what this thread was about? That zoom purchased it as an aquihire to have staff work on zoom, and isn't committed to the platform.

Agreed, but a server that's turned off carries a lower risk of leaking information. And I think they bought the team's expertise to improve Zoom more than the product itself. It will take some months to answer the question of what will really happen to Keybase.

Clearly, a product which has been discontinued has less of a chance of violating your privacy, that's true.... I think there are a couple different non-intersecting conversations going on here...

But was it discontinued?

I was looking into their terms (https://keybase.io/docs/terms) and they should notify users before doing so. And it seems my account is still up over there.

Paying leaves a verified trace though.

True, but they can't know the content at all.

-No worries, they'll just assume the content is malicious, then. (/s. I hope.)

It sounds like bullshit. The reason to avoid Keybase isn't necessarily that this calls its security against the authorities into question, but it does call into question whether a company that would say something this impulsive and nonsensical is one you want to trust.

Interesting Zoom timing, coinciding with the announcement that the DEA has been authorized to surveil protestors. https://news.ycombinator.com/item?id=23397868

Next up: Zoom meeting attendees raided for unlawful assembly. https://www.persecution.org/2020/05/24/wuhan-preacher-taken-...

Catch up: Identifying influencers from sampled social networks. https://ui.adsabs.harvard.edu/abs/2018PhyA..507..294T/abstra...

Note that warrantless and illegal bulk military intelligence gathering and “law enforcement” in the US are now practically indistinguishable: several US federal law enforcement agencies, including the DEA and FBI, receive intelligence products from the military’s domestic spying apparatus, which they then use to conduct what’s known as “parallel construction”: building a seemingly independent evidentiary trail for court, because both the illegal spying and any evidence derived from it would be inadmissible.

Additionally, federal “law enforcement” (and concentration camp-operating) organizations like the CBP are engaging in domestic mass spying using aircraft to collect mobile phone identity data from millions, even for peaceful protests and the like. There isn’t really a line between “state surveillance” and “law enforcement” anymore in the US.

The title of this item as I submitted it to HN prior to its edit by mods ended in “to aid in state surveillance”, which I think is a more plain, accurate, and unbiased description of the practice, as I think that pretending that this illegal military spying practice (PRISM et al) has anything to do with legitimate “law enforcement” is basically state propaganda at this point.

I stand by my previous snarky, HN-rulebreaking flame of Zoom’s announcement of end to end encryption support from a month ago:

> I'm sure the result of this will be lots of good and secure trustworthy software that I'll be eager to install on my computer.


The latest upgrade of their macOS app broke for me, and I have to use Zoom in the browser. We have a paid account and started to record our meetings and post them on YouTube; although I've increased the quality setting, it's still 360p! In general, it's the most complex and confusing app, it's expensive, and the quality of the software is the worst. Unfortunately, Google Hangouts Meet isn't better, and it has much higher internet connection requirements, so our nonprofit was forced to use Zoom. Unfortunately, Jitsi Meet is even worse. It's kind of ridiculous that Zoom has no popular alternative in 2020.

WebEx and Skype are the popular alternatives. And if you want to talk about poor software quality I think Zoom was created because WebEx is such poor software from a user's perspective.

Skype absolutely wrecks my MacBook, not sure if the experience is similar on other laptops but I can't even open a PDF while on a Skype call. It would be impossible for me to give a presentation over Skype with my current setup. The software on its own seems reasonably user friendly but if the resource usage is a common issue I don't see Skype being a viable alternative right now.

I would add Google Meet... I think it has the nicest interface and works better than the others cross-platform.

It's a pity there is no cross-platform communication standard - it looks like this space will evolve like messaging did, with dozens of companies.

We have kids with Chromebooks and older laptops and on meeting with 10+ kids, Hangouts Meet becomes intolerable. Not to mention the lack of control - kids mute the teacher, can speak at any time, there's no raising hands, etc. We have to install a bunch of extensions so that we can have a basic functionality in place like the Nod Chrome extension.

Yeah, there's actually tons of alternatives. Everyone and their cousin has a chat app, and video conferencing is a basic feature.
