Zoom says it won’t encrypt free calls so it can work more with law enforcement (twitter.com/nicoagrant)
1094 points by sneak on June 3, 2020 | 458 comments



This series of tweets from Alex Stamos has more specific information and tradeoffs being considered: https://twitter.com/alexstamos/status/1268061790954385408?s=...


That was a good thread, thanks.

The relevant bits I got from it were that there are a bunch of bad actors who create new Zoom identities and host meetings a few times before moving on, and Zoom needs a means by which they “can, if they have a strong belief that the meeting is abusive, enter the meeting visibly and report it if necessary.” Their E2EE design will make it impossible for Zoom employees to enter E2EE meetings without permission, so giving E2EE to the free tier will enable people to use Zoom meetings to facilitate abuse.


Looking back on this comment now I realize it can be construed as condoning Zoom’s decision here. I was not intending to pass judgement with that comment, but perhaps I should:

I think Zoom is wrong.

End-to-end encryption should be available to everybody no matter who they are. This means making it available to bad actors too. Expanding the scope of human communication should not be used to justify state surveillance. Privacy is a fundamental human right and one we should not be forced into giving up because it annoys the government.


I so want to be on the same side of this discussion but the argument is nuanced.

The reason "think of the kids" works so well to justify blocking E2E all the time is because child abuse happens literally all the time.

When someone solves this problem, and I don't think any of us really believe it to be solvable, we can move on. I don't want the government in my private conversations, but I don't want my kids in someone else's either.

To extend this - we recognise a duty of care to our users and their privacy when we build these systems, but if those users plan and carry out an act of terrorism, do we not also have a duty of care to their victims not to aid their killers in planning their murder?

We can't shunt this responsibility forever; the public will not take our side down the road, because we are ignoring the counter-argument even if we stick our fingers in our ears.


Child abuse, terrorism, drug trafficking and professional crime in general are a needle in a haystack compared to boring petty crime, let alone normal communication.

Law enforcement entities that try to prosecute these kinds of crimes don't do it by building haystacks of data and then combing through them looking for needles, because that's a waste of their resources relative to the results obtained. They do it by attacking the endpoints where the abuse, terrorism, or other professional crime has to actually happen. They find a terrorist, a child abuser, or a drug trafficker, or they find evidence of their handiwork, and then work from there. They see where they get their money, their bombs, their drugs, etc., and follow the links as far as they can. When law enforcement is actually trying to target crime, they don't go fishing, digitally or in meatspace, because that's not an efficient way to obtain results if the goal is to go after some genre of professional/organized crime.

Running a mass operation with no specific target (like speed traps in meatspace or dragnet operations in the digital world) is great for padding stats, because you can say "look, we got X pounds of meth off the street" or whatever, but it doesn't actually do much to target professional crime, because professional criminals take steps to avoid being caught by lowest-common-denominator policing.

Neutering encryption (so that cops can continue to run surveillance dragnets) doesn't do anything to help the cops catch real criminals, that's just a talking point made up by the people who want the government to have the ability to put any arbitrary person under a microscope.


Users choose their technology. Technology doesn't choose its users. There's no way to make it impossible for criminals to use some communications service. The same technology that protects the lawful person will protect the terrorist and drug dealer. There's no solution available that doesn't also involve sacrificing the safety of upstanding citizens.

In case anyone doubts the above fact: government agents abuse their surveillance powers to spy on their loved ones.

https://en.wikipedia.org/wiki/LOVEINT

This article was posted here recently, a reminder of how easy it is to become the target of warrantless government surveillance:

https://schmud.de/posts/2020-06-02-mlk.html

There's no reason to believe the government is any better than these criminals. Cryptography must be strong enough to defeat even intelligence agencies, and ubiquitous enough that legal limitations or bans will be hard if not impossible to enforce.


> child abuse happens literally all the time... I don’t want my kids in someone elses [private conversations] either

93% of the time, the perpetrator knows the child. If you’re seriously worried your children might be victims of abuse, then your first line of defense should be against your own family and friends.

https://www.rainn.org/statistics/perpetrators-sexual-violenc...

The statistics for children are in the paragraph after the first graph.


Yes, this is correct. Bullying among children is one such example.

And this is exactly the case where we want to be able to have digital evidence.


So the solution is to record teens’ private video chats?

What could possibly go wrong?


You will always be able to communicate with someone else in an encrypted manner, if you both want to do so, and no legislation that forces popular platforms to go unencrypted can change that. So, no illegal activity will be harmed.


Apparently it's okay for Zoom to shunt this responsibility for its paid users? Even if I were to accept your premise that omitting E2EE is a legitimate trade-off to detect abuse, Zoom's choice to selectively apply this standard for its free users suggests that this is NOT why Zoom chose to do this.


> I don't want the government in my private conversations, but I don't want my kids in someone elses either.

Easy: don't let your kid join zoom meetings without your permission/supervision until he/she understands that there are bad people out there.


On the other hand, Zoom has a vested interest in identifying the people in the call (say, to allow linking to a LinkedIn profile, or for other revenue-generating reasons).


I think you are wrong. The reason is, people seem to continue extrapolating reasonable privacy laws that were originally meant for the physical world to the virtual world. In the physical world, however, there's always reasonable workarounds to break into these privacy barriers if there's a suspicion of crime. In the virtual world, often, there's no possible way to break strong encryption barriers even if everyone agrees there needs to be a check on what's inside for the public good.

As an illustration, if we get reasonable evidence suggesting that someone is growing marijuana in their ranch, we can get a warrant and go inside. There's not too much the owner can do to stop it. However a perfectly encrypted iphone cannot be broken into, no matter if the entire world agrees that there's evidence of crime in it.

From what I can see, no one argues against warranted search of personal property in the physical world, except maybe some sovereign citizen crazies. Given this, why can't we strive for a similar system on the virtual world as well? I too agree warrantless or unfettered govt surveillance of technology is bad, but that's a policy failing not a technology one. We should try to focus on how we can hold governments responsible instead of making fully protected crime caves for anyone who cannot whip up a conscience.

I agree privacy should be a right, but not at the expense of many people enduring a life of hell in these cordoned spaces for that cause.


> why can't we strive for a similar system on the virtual world as well

because any crime in the virtual world can be uncovered by good police work. nobody has perfect operational security, including the government. so the solution to law enforcement is hard work by the law enforcers.

consider: prior to electronic communication, all private discussions were perfectly encrypted, because if you weren't there, you didn't hear what was happening. And society continued to function.

You simply can't trust the government to respect boundaries that they created but have the ability to breach, especially when it can be done completely surreptitiously.

We need to learn the lessons of Snowden, and fight tooth and nail against anything less than complete, unfettered privacy for communications between human beings. Anything that falls short of that will eventually become complete, unfettered surveillance, because there is no metastable equilibrium point in the middle.


consider: prior to the internet, /all/ telephone conversations could be monitored by the government. And society continued to function.

The controls on surveillance are not technical, they are political. The technology was the same, yet the Stasi listened to every call they could; other governments did not.

Fix the politics, because it /will/ win in the end. Learn the lessons of Germany and China.


No, they could not monitor all conversations. They could only listen to as many calls as they had agents to listen to them. It was not possible for them to listen to everyone at once, nor could they use this as means of discovery. They had to suspect someone in the first place in order to decide to expend the human resources to listen to their calls.

This is fundamentally different from modern technology where they can have a computer listen to every single call, pick out whatever keywords they're looking for, and flag it for later review. Technology now makes it possible for them to truly listen to everyone at once. This is why end-to-end encryption is necessary for everyone.

Politics is not going to solve this problem. A lot of what America's police and intelligence agencies do is already illegal. They don't care. They're going to do anything they can with the technology.


If you can't fix the politics it's _not going to matter_. The politics will just make the technology illegal. That's what's happening in China.

It's a weirdly blinkered concept to say "America's agencies already do illegal things and their politics is broken but what will save us is American corporations deploying technology".

(The "we need universal E2E to protect our freedoms even if there are downsides" is not, in logical form, a million miles different from 'we need guns everywhere to protect us from the government and damn the negative consequences of having guns everywhere', frankly)


What if we fix the politics and forget about the technology, then the politics later become broken again? We won't be able to take back those private unencrypted conversations that could be used to retroactively incriminate us.

I actually believe that technologies such as strong encryption are creating important checks and balances that make our democracy stronger. They are not subverting it like you are implying.


>The "we need universal E2E to protect our freedoms even if there are downsides" is not, in logical form, a million miles different from 'we need guns everywhere to protect us from the government and damn the negative consequences of having guns everywhere', frankly

I agree, and I agree with both of those. Giving up freedom/privacy for safety is almost always a losing bet.


The actual trade-off is giving up safety to gain the illusion of freedom.

With guns, the state will always outgun you. So the gun-riddled society sees children in its schools murdered staggeringly often, while its (supposedly free) citizens are tear-gassed with impunity by a state for nothing more than a photo opportunity.

That was not a winning bet for that society.

It's similar with E2E. It can't protect you from the government, because the protection is illusory – it protects you only so long as the state wants it to. When it no longer wants it to, it makes it illegal. Administrations are already heading in this direction.

Meanwhile E2E enables a number of proven harms, from lynchings to child abuse. Is that a worthwhile trade-off just for the protections it gives from corporate or illegal privacy invasion? Would it lose all of those benefits if legitimate law enforcement were allowed access? There is at least a debate to be had, there.


I see it as the exact opposite: giving up freedom for the illusion of safety. Using the tear gassed protesters as an example, when there have been protests where a large number of protesters were openly carrying firearms, nobody gets tear gassed. Neither the cops nor the protesters get remotely violent.

The people with the guns aren't attending the current protests, and you can see how that has worked out.


You can't do physical harm with encryption (unless you want to count superficial burns acquired from touching a Bitcoin-mining GPU), though. The presence of guns is a necessary and pretty much sufficient condition for certain classes of physical harm, which in the eyes of many _does_ make it qualitatively different.


One of the defences Facebook uses when confronted with WhatsApp-orchestrated lynchings in India is that e2e encryption means it can't know what people are talking about or help police track the source of the messages.

https://www.wired.com/story/how-whatsapp-fuels-fake-news-and...


Your point? If those lynchings had been orchestrated by people meeting up in person instead, nobody could know what people are talking about or help police track the source of the messages either.

In either case, to actually lynch someone, you still need to go there physically and actually do the deed. WhatsApp chats don't kill; dudes with weapons do.


The point is the scale. Law enforcement was scaled and equipped to meet the challenge of in-person lynch mob formation. In-person meetings are risky, finding like-minded people can be a challenge, etc.

Encrypted comms give a huge asymmetric scale benefit to those who have these crimes committed. What hasn't scaled is the ability of law enforcement to respond. And that's a choice, one which is open to criticism.


>No, they could not monitor all conversations. They could only listen to as many calls as they had agents to listen to them. It was not possible for them to listen to everyone at once, nor could they use this as means of discovery. They had to suspect someone in the first place in order to decide to expend the human resources to listen to their calls.

I think you're taking this a bit too lightly. As a side topic, I am surprised by the extent to which state surveillance was a thing here in the telephone era.

The secret police had about 50k full-time agents, 600k double agents and about 400k-500k informants. From a population of 18 million, that's about 1 in 18. Consider a typical family: you have a brother or a sister, two parents, 4 aunts or uncles and 4 grandparents. Odds were in favor of one of them being at least an informant.

For your community? There definitely was an informant or double agent among them. Just knowing that the threat is there has a massive effect on how people communicate and bond with each other, effects that can still be felt to this day.


You're saying this was in America? Sounds more like Cuba or former soviet states.


You are correct. This is in a former soviet state.


We can work on fixing politics AND fix technology. We don’t have to choose between them.

What stopped the Stasi, until politics did, was technology. And I think the encryption used helped to bring about the political change. If the Stasi had had what Zoom is offering, then perhaps the wall wouldn't have fallen for 10, 20, or 30 more years.


Consider: prior to the telephone, to monitor a conversation government had to actually send people to where the conversation happened, and that meant that they could barely monitor any conversations - and yet society continued to function.

The government still can send people to watch people use their phones or computers. On the other hand, it seems hard to dispute that all our most efficient examples of totalitarian states are post-telephone.


> You simply can't trust the government to respect boundaries that they created but have the ability to breach, especially when it can be done completely surreptitiously.

No, but in a functioning democracy we can vote them out. Democratic governments by definition have a large concentration of power, otherwise they can't fulfill their functions.

But this is bound by laws, time, and the ballot box. Surreptitious (warrantless) government surveillance should absolutely be illegal. Searches with a legal warrant (through an accountable, non-abusive, warrant-granting judicial system) are absolutely necessary for gathering evidence to prosecute crimes. Without trustworthy investigation and prosecution of crimes, the social contract will fail, and this has already started happening in many areas, as we are seeing in a way right now.

However, this goes both ways - the populace should get far more transparency into the functioning of the criminal legal system, especially into the training and conduct of physical law enforcement (police officers).


I think your analogy fails in many ways.

In the real world one will notice law-enforcement breaking into their ranch, in the virtual world, they won't (and comparing growing marijuana to voice/video over Zoom is wrong).

In the real world law-enforcement wouldn't have access to the complete history of a conversation, in the virtual world they would. Even to anything in the past which is irrelevant to the topic.

The events in the real world are often ephemeral: we don't expect our friendly conversation to last forever. In the virtual world, however, it can be recorded and stored forever.

Basically you should compare spying/wire-tapping in the real world vs. spying/wire-tapping in the virtual world.


I think there is a debate here that we need to have, as a society.

But I think that broader society is not going to understand the technical issues, and is going to be swayed by overly emotional appeals to "think of the children" and similar.

Therefore I think that we, as engineers - the people who will be asked to implement the results of any such debate, need to have this debate ourselves so we can take responsibility for our actions.

I can see both sides of this debate.

There is a legitimate need in society to gather evidence to discover the guilt or innocence of accused criminals. We cannot have a system of justice that assumes innocence until proven guilty but provides no method for gathering incriminating evidence.

There is also a legitimate basic human right to privacy. We must not be subject to constant surveillance by the state.

We have to find a middle path between the two extremes.


> Given this, why can't we strive for a similar system on the virtual world as well?

Because there is no technical solution that allows something similar. Such a solution:

1. Must be exclusive to lawful authorities - a criminal cannot get a search warrant
2. Must have some reasonable per-instance cost to prevent overreaching

1 is very hard in a tech space, if even possible; backdoors can always be used by other parties.

But even if 1 is possible, by the nature of digital surveillance it is very cheap and relatively easy to do mostly secretly, leading to things like the NSA literally inspecting all internet traffic.

Yes, theoretically you could address the second of these if there were overwhelming political will, but there isn't, and the general public doesn't care.

So in the end it's better to encrypt everything.


You make a good point, but ultimately encryption is just a tool. The virtual and the physical spaces are both domains whose different natures offer different tools. I don't think you can protect anything in the physical domain with the same certainty and mathematical elegance that's available for digital files, but if you could, I wouldn't be opposed to it.

Imagine if there were a safe that couldn't be opened by anyone but the owner without destroying its contents. Would you be opposed to that? What if the design mechanism of this safe were as easy to implement as the encryption protocols are? Yes, one day some expert safe-cracker might break it. And in the even farther future the advent of "quantum safecracking" would perhaps make the safe as secure as a luggage lock. In the meantime the police would have to resort to their traditional methods.

Unfortunately all kinds of damning evidence have been lost to time. Fire is older than paper.


I beg to differ on some points.

In the physical world, two people are talking.

If the police suspect that they are committing a crime, they can request a warrant to install a listening device, and only then can they listen.

In Zoom-like scenarios, any third party (like technology companies using the law as an excuse) can listen without a warrant (and they will say something like "no one is listening", as training an AI is not considered "someone").

As such, communications should be encrypted with asymmetric cryptography where only the warrant giver can decrypt them (not the warrant giver handing the private key to law enforcement, but decrypting the symmetric per-session key and giving that to law enforcement). And this goes for phones too.
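
For what it's worth, here is a minimal sketch of the escrow idea described above, using the third-party Python `cryptography` package. The names and flow (judge_private, start_session, execute_warrant) are hypothetical illustrations of the concept, not any real Zoom or telco design:

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Escrow keypair held by the court; the private half never leaves the judge.
    judge_private = rsa.generate_private_key(public_exponent=65537, key_size=3072)
    judge_public = judge_private.public_key()
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    def start_session(plaintext: bytes):
        # Fresh symmetric key per call; the escrow blob wraps only that key.
        session_key = AESGCM.generate_key(bit_length=256)
        nonce = os.urandom(12)
        ciphertext = AESGCM(session_key).encrypt(nonce, plaintext, None)
        escrow_blob = judge_public.encrypt(session_key, oaep)
        return ciphertext, nonce, escrow_blob

    def execute_warrant(escrow_blob: bytes) -> bytes:
        # On a valid warrant the court releases this one session key,
        # never the judge's private key itself.
        return judge_private.decrypt(escrow_blob, oaep)

    ciphertext, nonce, blob = start_session(b"call media goes here")
    released_key = execute_warrant(blob)
    assert AESGCM(released_key).decrypt(nonce, ciphertext, None) == b"call media goes here"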

And quite frankly I don't care if police with a warrant are listening to my conversations. I don't want any company listening to them, as they are not doing it for law enforcement but to profit from my data (quite possibly against my interest), and that is something completely different.

This is the scenario where technology gives people MORE privacy: it prevents illegal police wiretaps (those without the warrant giver's consent), prevents technology-provider wiretaps, and on the other side still allows legal wiretapping authorized by the warrant giver.

But interestingly, no one has any interest in doing it. Guess why?


Do you see a fundamental difference between Zoom and telephone companies here? Or do you think how we've handled telephony over the past century has been a clear failure? If the latter, do you think most people would agree?


I don't really care about telephone companies, as heads would roll if they dared to intercept my phone calls without a court order. We had one case just 2 weeks back where one of the mobile phone/internet providers installed some security "firewall" that was doing MITM on HTTPS; they are now under investigation and being considered for criminal prosecution. They had the system in place for less than 1 week. I am protected from that by law.

So to answer your question, telephone companies are a failure in the USA (wild west and lawlessness); in my country they need to obey laws. Corporations don't obey any laws outside their home country (which they select based on inefficient laws) and need to be harshly regulated.

My personal favorite would be legislation mandating E2E encryption that must not be backdoored by anyone except law enforcement with a warrant, where the private keys stay under a judge's supervision with no possibility of giving them away (in a PKCS#12 manner) and can only be used to decrypt communications when the judge presses the big red button. Quite frankly, you want to be able to wiretap organized crime.


So open source solutions should be banned? I should not be allowed to use or create a program that allows me to talk with e2e encryption? Finding someone in possession of undisclosed keys should be a crime?

Care to see what happens then? Check China. They are implementing this very thing. For the children, I suppose.


Those are not simple debates; you are taking them as black and white, offering one solution (E2E), while there are huge issues on the other side (organized crime, corrupt politicians - if I understand you correctly, you are most worried about them: China?). The "think of the children" and "terrorists" angles are the least problematic topics here.

Judge-only access prevents mass data gathering by law enforcement agencies and three-letter agencies (at least in my country), and enables oversight by further institutions. Secret and hidden backdoors (Crypto AG, Dual_EC_DRBG, ...) or corporations bribed through government deals are the worse solution here, as they don't prevent access to the data by either corporations or secret agencies, while they may or may not keep out law enforcement, and they surely enable mass data gathering from everyone without any supervision or control. The real issue here is that no one is mentioning any court orders; everyone just wants access to everything. Now THAT IS an issue.

I was talking about legal entities operating in the same manner as telcos did. Also, in the real world you can invent your own one-time-pad-encoded way of speaking and no one will understand you even if they wiretap the communication. The mafia has historically used slang to cover up its communication. You can do the same with open source.

Anyway, do you communicate over your "secret encrypted communication channel" covered with a rag, to prevent recording of your lips, laser measurement of the shaking of the window glass, your face muscles, IR recording, and probably the next 100 methods I am not even aware of? Those are also issues you have once a warrant is issued. I guess not. So it looks like the police (or a warrant) are not an issue for you.

Then there are the three-letter agencies: besides "warrant" methods, they will use rubber-hose cryptanalysis to break you and any of your E2E communication, and you might actually wish they were able to read your communication without contacting you in person. So E2E doesn't change anything for you here either.

I refuse to treat open source solutions that you install on your own server the same way as corporate entities that use their solutions to wiretap everyone's communications so they can earn more money from the information they gather.

And I also think that "encrypted Apple" phones (and everyone else doing any business with the government) and the whole FBI story are just a charade to bait people who are hiding something into an ecosystem where agencies that CAN issue gag orders can simply access the information. The whole story surely looks like a counter-espionage operation from the 1970s. Time will tell if I am right.


It's not black and white. And I am not offering e2e as THE solution to privacy and freedom, but as a part of it and an important metric of whether a solution is actually working right. Just because encryption does not protect me from EVERYTHING, like physical surveillance, that does not mean we should abandon it - THAT is black and white thinking.

Having the law able to access encrypted communications at any time will trample on the examples I brought up (examples that came up with zero effort), no matter what you try to put into your proposed solution. If the goal is to prevent crime, and there are solutions available out there that allow E2E communication, the goal does not stand. You can't ban a corporation from E2E but allow any random dude to spin up a secure communication platform without any keys compromised - what are you even banning then?

It amazes me that "corrupt politicians" is shrugged off just like that, when corrupt officials of any kind are exactly what everyone needs defenses against, by ANY means. In China, they are in the process of legislating exactly what you propose - no private encryption key may be withheld from the law - and yes, you did not misunderstand, it's at the scale this implies: total control and the ability to observe all traffic and data at rest at any time. Even setting aside everything that is happening now, that leaves little unattended by the law there.

Now, what, China is a "bad example"? An "exception"? I'd say this attitude coming from governments is the norm around most of the world - places where people are at real risk from what they say over the net.

Out of all such countries, let's take China. Do you believe China should reverse its course and allow encrypted communication for its citizens? Based on your words and thoughts, I'd say you would answer "no". It's doing exactly what you propose, after all - now, the only tiny step needed to fully suit your proposal is for them to use their powers for "good"! Right? And they are indeed using them for good, according to their own legislation.

Because if you nonetheless said "yes, China should allow E2E in favour of its citizens' rights", you would in essence be saying that "freedom-loving Western countries" should give the law total access to any information (they will always use it only when needed, of course!), but the same countries should pressure "totalitarian regimes" to maintain their citizens' rights, including encryption. That's contradictory, at least if you think about it for a bit.

There's a correlation between these things. Any power given is sure to be abused. If that is not prevented and pushed back, it will not stop but worsen. Trying to find a formula to give absolute power and restrict it at the same time is just fooling around; it's the core assumptions that matter. Unless you really think that some governments are somehow immune to becoming corrupt and totalitarian when meeting no resistance - their people must be saints indeed! - in which case, I am sorry to say, I can only chuckle.


Read what my proposal was and stop beating the strawman (I won't attribute this to malice, as you clearly haven't read any of it).

With my proposal, law enforcement can access unencrypted data far less than they can now (under the rag), and when they do access it they are under the scrutiny of judges, while corporations are prevented from accessing it.

Maybe take the time to think about what a country is, what a government is and whom it serves, what a corporation is and whom it serves; maybe ask yourself what law enforcement is and whom it serves; and if you dare go further, what if there were no law enforcement? Do you have the muscles for that?

Or chuckle mindlessly on. I think your whole statement is demanding the advantages of a system where someone else takes care of things for you, allowing you not to think about the disadvantages.


> As an illustration, if we get reasonable evidence suggesting that someone is growing marijuana in their ranch, we can get a warrant and go inside. There's not too much the owner can do to stop it. However a perfectly encrypted iphone cannot be broken into, no matter if the entire world agrees that there's evidence of crime in it.

These examples are talking about a different thing, we should be careful to not mix them up since the arguments for and against can be different.

The discussion prior to your comment was about protecting data in transit (end-to-end encryption); both your examples are about data at rest (full disk encryption).

With encrypted data in transit, not only can it be broken into by intercepting at the endpoints (in the case of video or audio calls, even through the physical world by pointing a camera and a microphone at the user's device), but also the end result of an end-to-end encrypted connection is much closer to a physical world private conversation (can be "broken into" only by intercepting the endpoints, that is, pointing a camera and a microphone at the persons involved).

With encrypted data at rest, the best physical analogy is a diary written in code; even if the whole world agrees that it contains evidence of embezzling, it cannot be decoded without the help of its owner's mind.


> In the physical world, however

With regards to security analysis, the only difference between the physical world and the digital world is proximity (hops) between agents, or evidence, in a conversation and convenience of access. Software developers tend to think purely in terms of controls and exploits, which is a tiny subset of security. Even conversations in the physical world can be encrypted; for example, if two people are speaking Pashto I would have no idea what is said. If it isn't recorded for later translation, it's encrypted forever.

Those few distinctions are important from a legal perspective where agents of digital concerns are more likely operating across political boundaries at any given moment.

> There's not too much the owner can do to stop it.

They can use fail-safes that store the evidence in a physical safe with tamper-proof mechanics. Breaking such a safe would destroy the contents in the process, much like attempting to break into an iPhone with supposedly perfect encryption.

Since you are talking about surveillance another common misconception I have noticed many software developers make is equating the terms: security, privacy, anonymity which are all distinct. Privacy and anonymity are both aspects of confidentiality but privacy is concerned with hiding the contents of a message where anonymity is concerned with hiding the agents of the message. Those two do not overlap. Confidentiality is one of three aspects of security, though from a legal perspective privacy is available in many contexts without application of security controls.


I don't think that any of the surveillance powers that the state is demanding with respect to electronics actually map that neatly to what was possible before electronics emerged. We're talking about conversations rather than physical effects, and it's not like you could obtain a warrant to retroactively obtain the contents of a conversation a marijuana dealer had with his client yesterday: once the vibrations were gone from the air, that data has been erased irretrievably. To listen in on the conversation you actually had to go there, which naturally forces you to be judicious with your surveillance powers by virtue of limited resources, whereas the electronic version scales indefinitely. On the other hand, as long as the people who are of interest to law enforcement still exist in meatspace themselves, everything that used to be possible is still possible: just as you could obtain a warrant to bug someone's room to listen in on a conversation, you can obtain a warrant to bug someone's room to observe their phone (or bug the phone itself, with physical access! Maybe that would be one rationale to finally force Apple to make its phones "repairable" by individuals :)).


You mean Zoom employees will present a warrant ?


> As an illustration, if we get reasonable evidence suggesting that someone is growing marijuana in their ranch, we can get a warrant and go inside. There's not too much the owner can do to stop it. However a perfectly encrypted iphone cannot be broken into, no matter if the entire world agrees that there's evidence of crime in it.

Is that a terrible thing? It's not like they are hiding guns in their iphone. While there could be evidence in there, at some point there is physical evidence in the real world. Just making it easier to convict them is not a solid argument for weakening protections for everyone.


I think this argument makes me more sympathetic to law enforcement’s desire than any other I’ve heard.

I can really flip my brain around and see how this desire for non encrypted communication to be the standard could come from a good place.

That said, I still come back to my default stance: crimes need to exist outside of the private communication to be a crime. At least under US law, where it's very hard for pure communication alone to be a crime.

So go investigate whatever it is that is an actual crime and causing actual harm. Making communication not private has tremendous potential chilling effects on actual thought, because people think by talking!


> Given this, why can't we strive for a similar system on the virtual world as well?

1. Encryption is an indispensable part of pretty much everyone's life. I can't imagine there's many people in our society that go more than a few days without using it.

2. If encryption can be broken by the police, it can be broken by other actors. Full stop.

2.1. It has been shown to be impossible for our government to keep a secret like a master key.

2.2. Math


> As an illustration, if we get reasonable evidence suggesting that someone is growing marijuana in their ranch, we can get a warrant and go inside. There's not too much the owner can do to stop it. However a perfectly encrypted iphone cannot be broken into, no matter if the entire world agrees that there's evidence of crime in it.

let's wait until something like a "perfectly encrypted" phone actually exists before we go down this road. AFAIK, the feds have eventually been able to break into the phone in every high profile case where the issue has come up. it's not impossible, they just don't want to pay what it costs.


If Zoom doesn't encrypt, bad actors will use (or build) another app that does. All they are doing is giving a free peek to Chinese or Russian spies.


Maybe they need weaker encryption for the masses who pay nothing, like DES


Speaking more broadly, surely there is a point between zero privacy and 100% surveillance that we can all move to. If we take the encrypt-everything (E2EE) approach to all aspects of our lives - i.e. we should be able to protect our faces from video recording when walking into banks - then surely the system would be more open to abuse by bad actors. Accountability in society is what drives good behavior; if we take that away then chaos reigns. Therefore there must be a balance, which is why the whole 'let's encrypt everything and protect everyone' approach sometimes feels like it goes too far and borders on zealotry. Yes, I feel that personal freedoms are important, but so is the state in maintaining peace.


I think it's fine to have cameras in a shop or in a bank or whatever the virtual equivalent becomes because they serve a clear security purpose.

The problem with not end-to-end encrypting private communications is that during a lockdown people now have nowhere they can go for a private conversation. If you invite someone over to your house for drinks or for dinner, you feel you can talk freely because the government doesn't have cameras in your house; that would be an invasion of privacy. Where is the virtual equivalent of that once Zoom is no longer private?

Remember your freedom will be taken an inch at a time. Not all at once.


I'm not completely sure what the answers are here yet, but I do agree that it can be very psychologically reassuring to know that the only people involved in a conversation are the genuine invitees.

That said, abusive people do exist and are a legitimate problem, given the damage they can cause. Their abuse may be overt (threats, violence, noise, etc), or it can be subtle (for example, manipulation over long periods of time).

Some of that abuse may come from prior anger and frustration outside their control, and perhaps it's good to allow people to let that out -- as long as it doesn't end up harming other people in the process.

Would the situation be improved if the service provider could only step into the meeting when explicitly requested by participant(s)?

To follow your analogy, that could be seen as the equivalent of someone experiencing a medical emergency during dinner at your house and requiring outside assistance.

All these options would be gamed and misused, as they are during existing use cases in real life. Some people over-react, many people under-react, and society itself changes so it's important to build in flexibility for transparent and accountable change.


If you somehow accidentally invited an abusive person to dinner and they started acting abusive you would ask them to leave. Then when they don't leave you call the police. Really a stretch to imagine that happening more than once a lifetime.

In video calls you don't even have to do that, you can just kick them from the call. You don't need Zoom to step in you just kick them.

I really don't understand what you are getting at with the abusive people thing. What sort of situation are you imagining exactly?


Yep, kicking the participant would be a good option in many cases.

To answer your question: phishing scams could be one example. I'm sure there are many others.


Phishing is already handled by email. Don't click the zoom meeting link in the email from a Nigerian Prince and you will be fine. In general I don't see this being a problem with Zoom but rather a problem with clicking links from dodgy sources, zoom meeting links just happen to be one of many.

If there are others please list them because I'm struggling to understand the overarching thing you are getting at and examples would help with that.


You know that Zoom is not the only video chat software, right?


Exactly, which makes it worse. Real bad actors will just switch meanwhile normal people get surveillance.


True, and I don't support Zoom's decision. You said:

> Where is the virtual equivalent of that once Zoom is no longer private?

What I meant by my comment is that there are good alternatives that are E2E and free, like FaceTime (ok you need an iPhone or Mac), WhatsApp, Signal etc. So people can just use that if they don't want to pay for Zoom (again, I don't think they're making the right decision either).


People will choose convenience over privacy when they can't see the threat. Zoom is convenient for video calls because it allows you to see all the people in the call on one screen, you can schedule meetings, the connection quality is good, it supports screen sharing and it's fully cross platform.

You already disqualified FaceTime because realistically some of your friends and family have Android and Windows. WhatsApp connection quality is flaky, same with Skype. Does Signal support group calls? If it does, maybe it could replace Zoom. Maybe. But all these options existed before lockdown and people still settled on Zoom because it's more convenient.

Either some massive scandal has to happen to make the public more privacy conscious, or there needs to be E2E encryption by default, as a standard. Whether or not there are invisible state-surveillance cameras shouldn't factor into which dining table I buy.


> People will choose convenience over privacy when they can't see the threat.

Agree.

> WhatsApp connection quality is flaky

Disagree. Might be anecdotal, but at least in Europe, most of my friends use WhatsApp for personal video calls and not Zoom.

Generally I agree with you that we need E2E as a default (as we need SSL as a default).


>that are E2E and free, like FaceTime (ok you need an iPhone or Mac), WhatsApp, Signal etc. So people can just use that if they don't want to pay for Zoom (again, I don't think they're making the right decision either).

https://twitter.com/alexstamos/status/1268199863054811136

> 2) None of the major players offer E2E by default (Google Meet, Microsoft Teams, Cisco WebEx, BlueJeans). WebEx has an E2E option for enterprise users only, and it requires you to run the PKI and won't work with outsiders.

> Any E2E shipping in Zoom will be groundbreaking.


You're totally wrong! Cisco does offer E2EE even for free accounts... You can choose whether you want to use E2EE or not (via the website).

Make a free account and test it.


I'm referring to personal use (i.e., friends and family), as per the parent comment's argument.


It'll stop feeling like zealotry as governments develop more and more stasi-like tendencies.

The trouble is, by the time they do (and they will, China is halfway there), it will be too late to protest.


Actually, the government already has, and has always had, stasi-like tendencies, they just happen to mostly target people who are not given a platform to talk about it.

Police and other law enforcement are already using legal powers to infiltrate and monitor 'radical' political groups such as Black Lives Matter, just like they have in the past with civil liberties groups. In fact, as we discovered in the COINTELPRO leaks, they were going way beyond the legal limits, having been complicit in the assassination of Malcolm X, and having tried to blackmail MLK into committing suicide.

Of course, it could be that such things don't happen anymore. Or, seeing how the police in Minneapolis are actively targeting journalists, it is significantly more likely that we just don't know about it yet.

No large state has ever tolerated real dissent to any great extent. The state doesn't have to be as paranoid about dissent as China or the USSR (which almost require(d) enthusiastic support) for police powers to be abused against the legitimate interests of citizens.


Fair point with regards to government overreach. I feel like this is both a practical and a political question we would need to answer as a society. What level of surveillance would be acceptable? Zero? Some? Maybe only for temporary accounts with no identity attached?

Inherently, there was always some form of surveillance in society. When we left our homes, people around us could see and hear what we were doing and thereby report suspicious behavior. Now we are adjusting to a new way of life with new forms of surveillance which are harder to detect. I completely get it, and I am more for encryption than against it. I guess I am also challenging myself to see both sides and think about a middle ground.

While Western governments could be better behaved, I feel like comparisons with the Stasi are somewhat extreme and out of whack. I live in the UK and generally speaking I'm happy with the government here when it comes to surveillance. Maybe the US has a greater focus on security, but they are a long way from the Stasi.


They might be a long way from the Stasi now, but you don't get there in one big leap; you get there slowly, justifiable inch by justifiable inch, until there is enough surveillance that you don't have to justify it anymore because people are too afraid to protest.

If you want to discuss surveillance then yes of course it's a matter of degree. Putting cameras in a bank vs putting cameras in a pub vs putting cameras in your home. As you can tell in the real world it's clearer when it's a step too far. In the digital world we need to be more careful because it's unmapped territory.

You need to think hard about why it's an invasion of privacy to put cameras in your home. It may seem obvious but it's not. Once you understand the reasons why that is an invasion of privacy then you can start to draw analogies to the digital world and understand what is going too far and what is not. The problem is people don't have a deeper understanding of the reason we need privacy so they are easily sold security in the form of digital surveillance without understanding the eventual consequence.


I don't think it's as practical to focus on the amount of surveillance as on its nature and whether or not it can justify itself.

We already live in a society where widespread, aggressive, authoritarian surveillance that doesn't justify itself is commonplace. Snowden proved this. Your emails are read. Your naked selfies are looked at. Personal data is used frequently to crack down heavily on legitimate dissent. These are unquestionable facts, and it's getting worse and more entrenched, not less, and it hasn't caught a single act of terrorism like it was (ostensibly) set up to do. The question is, how do we personally react to the unchecked growth of stasi-friendly surveillance infrastructure?

I think arguing that Western governments could be better behaved is a fair point. The Stasi also could have been better behaved. Frequent appeals to moderation didn't make them behave, though, and they haven't made and won't make Western governments behave either.

Appeals to moderation have a null effect because if the goalposts keep being moved, so does the moderate position. If you want your opinion to never matter at all, always pick the moderate, middle ground opinion.


That's equivalent to saying that there is a point that provides suitable authentication/privacy/… for me to ask my bank about things and instruct my bank to do stuff on my behalf, and also provides little enough privacy that any criminal goings-on can be surveilled.

Since money is key to many crimes, and finding out who controls the money is an important way to investigate crimes, this in turn means that that point of agreement has to secure me from surveillance by bad guys when I talk to the bank, and permit surveillance of the same bad guys when they talk to the same bank in the same way.

This might perhaps be possible but the word "surely" seems inappropriate.


The point of view depends on the experience.

When you actually see the horrors of abuse, helped along by the internet, and you realize that there are voluntary walls protecting these people (encryption, for instance, but others too), you may have a different position. I would willingly give that up just to see children (or whoever) saved.

You may not, that's a choice. I would just like to know whether you have seen what actually happens in these circles before making a decision.

Also, I live in a normal country where this concern (state surveillance) is less of an issue.


Sounds as if you could also say that all photos people ever take should be accessible to the government and police, to protect the children. And things people say in their homes need to be accessible to the police, to maybe rescue kidnapped children.

> When you actually see the horrors of ...

I heard about someone working as a nurse in an emergency room who, because of witnessing injuries from traffic accidents, decided to never be in a car again. I can understand that; I think that decision makes sense.

But not handing over people's communication to people like Trump and Putin etc and their men.


If you had read the whole thread, you would have come across this: https://twitter.com/alexstamos/status/1268199863054811136

> 2) None of the major players offer E2E by default (Google Meet, Microsoft Teams, Cisco WebEx, BlueJeans). WebEx has an E2E option for enterprise users only, and it requires you to run the PKI and won't work with outsiders.

> Any E2E shipping in Zoom will be groundbreaking.


The tweet you linked is newer than this discussion. And it's also misleading. The major players in the business world aren't E2EE today, so Zoom is breaking ground in that way, but as far as free offerings go, FaceTime is E2EE, and that's certainly not a niche service. WhatsApp is also E2EE. And there's a platform called Wire which I'm not particularly familiar with but which claims to be E2EE; it's also a paid service, which suggests it's targeting businesses. I guess it just doesn't count as a "major player".


I forgot, there’s also Google Duo which is E2EE as well.


Wrong. Cisco WebEx does have E2EE for free users!


Yup, I was a bit confused about what you were trying to say. Thanks for clarifying.


I'm having trouble reading between these lines.

What are a few good examples of "abusive" meetings?


dick flashers for one thing, and the pattern of connections or attempted connections should be a partial indicator of this sort of abuse. Perhaps there should be some sort of flag that could be set on such an account for inappropriate behaviour, or, thinking liberally, an adults-only adult-activity flag to set.

This is a hydra that shows up every time someone creates a video chat: there is a problem with sausage parties and sexual blackmail that needs workarounds.


> dick flashers for one thing

How are these people invited to meetings and why aren't they kicked from the meeting? How would anyone handle this in a real world meeting?


Luring people into inviting them via clickbait methods, using obfuscation so they are accidentally invited, straight-up blackhat hacking to manipulate the system. Early on, public meetings were getting bombed by flashers.

These types are handled by identifying and outing them, or chasing them away. The internet has to go beyond the "your IP number is" thing and demonstrate that there is real knowledge of who they are, that they are bothering people, and that it isn't going unnoticed.


Even if identifying dick flashers is their primary concern, the situation can be helped by improving security, not decreasing it: non-guessable meeting IDs, passwords, maybe unique invite links so it's possible for meeting organizers to identify who invited the flasher.

If someone sends clickbait invites to an abusive meeting, then victims can trivially report it, possibly with screenshots.
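
As a rough illustration of those mitigations (non-guessable IDs plus per-invitee invite links), here is a hypothetical Python sketch; the URL format and function names are invented for the example and are not Zoom's actual scheme:

    import secrets

    def new_meeting_id() -> str:
        # ~128 bits of entropy, URL-safe, so meeting IDs cannot be found by scanning.
        return secrets.token_urlsafe(16)

    def new_invite_link(meeting_id: str, invitee: str, registry: dict) -> str:
        token = secrets.token_urlsafe(16)      # unique per invitee
        registry[token] = invitee              # mapping kept by the organizer only
        return f"https://meet.example.com/j/{meeting_id}?invite={token}"

    def who_invited(token: str, registry: dict) -> str:
        # If a flasher shows up, the organizer can see whose link was used or leaked.
        return registry.get(token, "unknown / unregistered token")

    registry = {}
    meeting = new_meeting_id()
    link = new_invite_link(meeting, "alice@example.com", registry)
    print(link)
    print(who_invited(link.rsplit("invite=", 1)[1], registry))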


They didn't specify, but I assume from context that they mean meetings conducted expressly for the purpose of aiding illegal activity.


> Their E2EE design will make it impossible for Zoom employees to enter E2EE meetings without permission

If it is acceptable to view a meeting without permission under the pretense of abuse, then it's also possible to do so even when someone is doing nothing wrong.

Even worse, if their system is compromised, a bad actor could monitor free users' meetings without any protections. And what is stopping those bad actors from getting a paid subscription, or maliciously gaining access to a paid account? Or these bad actors could use another system that doesn't compromise (Signal) or host their own.

Security comes in layers and logs. A system without these layers and accountability isn't secure. Zoom isn't secure, and is using law enforcement as a scapegoat and pretense to keep their security low.


Zoom has said that employees can enter a meeting, but there's no way to do that without being seen on the participant list and there is no way to record a meeting secretly. They've also said they wouldn't build these things.

https://blog.zoom.us/wordpress/2020/05/07/zoom-acquires-keyb...

>We also do not have a means to insert our employees or others into meetings without being reflected in the participant list. We will not build any cryptographic backdoors to allow for the secret monitoring of meetings.


I don't get it. So they're fine with abuse, as long as you're paying them for it? Or do they have some sort of E2EE backdoor (probably, since they manage the keys), that they want to selectively apply, but can't do so if people are constantly using burner accounts, and thus want E2EE users to be somewhat anchored to their payment information?


Specifically, Zoom is selling Abetting as a Service. For a fee, Zoom will take active steps to shield criminal activity from law enforcement.


Also I could pay for the service and still be abusive.


That's a significantly better explanation, and they're really between a rock and a hard place.

They get to pick between headlines like the current one, and claims that they support child porn rings (he isn't saying it explicitly, but everything I saw looks like that is the problem they're trying to fight).


Personally I think the whole topic could just have been avoided by not saying anything at all.

Zoom needs a business model and saying "if you want encryption you need to pay for it", to me sounds like a reasonable approach to making money.

Once you start dragging other reasons into it, you need to start defending them.


I think it's perfectly reasonable to have multiple reasons for doing something. Making money and deterring malicious users are both valid reasons. Some forums have paid fees for the purpose of deterring unwanted users (MetaFilter, Something Awful, Bitcoin Wiki in the past), so it's certainly a strategy with a precedent.


Yes, you can have as many reasons as you want, but the more you list them off to the public, the more time you have to spend defending them individually... which was the point I was making.


There is a damn good reason why good PR people often refuse to comment. People remember stupid responses, but forget that “Zoom declined to comment to this article” very quickly.


> child porn rings (he isn't saying it explicitly

Well, he didn't use the word "rings", but he did say CSAM (Child Sexual Abuse Material).


What, like a child porn ring can't pay for the version to get E2EE?


The whole CSAM thing is terrible because it provides a great excuse for surveillance at every turn. Even though only a tiny minority of people participate in the abuse, it is so bad that people are willing to give up their own privacy over it.

There are other ways to track these criminals and we should be using those. We know they are smart enough to stop using Zoom once its no longer encrypted. Meanwhile normal people will be left holding the bag of surveillance.


A few bad apples, right? And it’s technologists and privacy advocates that are shielding their behavior.

I don’t think this argument really holds but I think it’s funny how quick we are to downplay our own “bad apples” and say that encryption is more important.


No one is defending child sex offenders. Just find another way to catch them that doesn't involve invading private conversations. It's the first step down a really bad path; you only have to look at China to see where it leads.


No obviously not, but you are creating a system that allows them to operate and shields them from being discovered. And if we're going to turn the political tide on E2EE from being the thing the "bad people" use to "the standard for private communications" then we have to have a better answer to this.

You don't win any political battles by being the preferred tool for child molesters and then telling the gov't to pound sand when they come asking for help finding them.

E2EE plus a client that looks up images in the NCVIP database and refuses to send/receive matches would at least be something.

If the FBI comes knocking looking for access to a particular user's messages, then have a system that kicks that user off the network until they agree to add the FBI's key into all their chats for a specified time. Make it a bright-line, visible action to the user being monitored. You have PFS, right? So they can't see old messages, and once the FBI's access is revoked you can prove that all your chats are private again.
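Mechanically, "adding an extra key" just means wrapping the per-message key for one more recipient. A minimal sketch, assuming PyNaCl; the library choice and every name below are mine for illustration, not anything Zoom actually implements:

    # Hedged sketch: wrap a fresh per-message key for every participant, plus an
    # optional, visible "exceptional access" public key. Illustration only.
    import nacl.utils
    from nacl.secret import SecretBox
    from nacl.public import PrivateKey, SealedBox

    def encrypt_for(participant_pubkeys, plaintext, extra_pubkey=None):
        msg_key = nacl.utils.random(SecretBox.KEY_SIZE)   # fresh key per message
        ciphertext = SecretBox(msg_key).encrypt(plaintext)
        recipients = list(participant_pubkeys)
        if extra_pubkey is not None:                       # the bright-line extra recipient
            recipients.append(extra_pubkey)
        wrapped_keys = [SealedBox(pk).encrypt(msg_key) for pk in recipients]
        return ciphertext, wrapped_keys

    alice, bob = PrivateKey.generate(), PrivateKey.generate()
    ct, keys = encrypt_for([alice.public_key, bob.public_key], b"meeting media")

The important property is that the extra recipient is an explicit entry in the key list, which a client could display, rather than a hidden decryption path.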


This won't work because real child sex offenders would never give the key to the FBI. The reason I don't provide a solution is because there isn't a good compromise. However you slice it, normal people will end up getting their communications monitored while the real bad actors will be one step ahead. You also can't effectively ban E2E encryption, because there will always be new software that pops up to do it. The end result is always that normal people get surveilled and the real criminals are still shielded. That is why I'm saying it's a red herring. They will not be any closer to catching these people by opening up one app to surveillance. Maybe they catch one or two lazy ones, but then it will dry up real quick.

If as a layman I had to guess another way to catch them, it would be to go to the source. Follow cases of missing children. Investigate reported child abuse. Once you have caught one of them, you are free to seize their computer and use it to honeypot all their contacts, with E2E encryption, so the contacts believe it's the person you just caught.


Right! The point isn't to ban E2EE; it's to design your chat system in such a way that it's less effort for the worst actors to go somewhere else, and to pay lip service to the FBI. I don't think any of these would actually solve the problem. Just that we might have a popular E2EE chat service that could survive the political fight.


Excuse me? I'm not actively shielding abusers who I have witnessed.


But it's okay if you're passively shielding abusers? I mean that's the crux of the argument here. "Sure, we're the preferred tool for child abusers and sex trafficking but a few bad actors don't invalidate the need for private communications" is an argument I would accept as a technologist but doesn't seem to fly with the public. It doesn't mean that E2EE is DoA it just means that you can't just throw up your hands and say that doing something is impossible.

Have the client check the FBI's CP database and refuse to send pictures that match. Sure, it's open source and abusers could recompile it, but they won't. In the same way that blocking the default curl user agent stops 99% of spam at my company: would-be attackers could change it, but they don't.
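For what that looks like in practice, here is a toy sketch of a client-side "refuse to send known-bad images" check. Real deployments use perceptual hashes (e.g. PhotoDNA) rather than exact hashes, and the blocklist here is a stand-in, not a real database:

    import hashlib

    # Stand-in blocklist (this entry is just the SHA-256 of the bytes b"test").
    BLOCKLIST = {"9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"}

    def ok_to_send(image_bytes: bytes) -> bool:
        digest = hashlib.sha256(image_bytes).hexdigest()
        return digest not in BLOCKLIST

    assert ok_to_send(b"holiday photo")   # unknown bytes: allowed
    assert not ok_to_send(b"test")        # hash matches the blocklist: refused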


Zoom is not encrypted today.


Uhhh yeah since the announcement. That's what we are here to discuss right?


The parent post said “once it is no longer encrypted”, implying that zoom is encrypted today.


This was really interesting, thanks for linking it.

When he puts it in the context of their typical abuse pattern - anonymous emails, VPNs, and just a few meetings - this decision makes much more sense.

I hope they expand on their thought process in a blog post at some point, I'd love to read more.


I like the explanation, and it seems like a fair trade-off to me for complying with the law. Meanwhile, I think Zoom's PR is far weaker than Google's or Microsoft's, which is why they keep getting pushed again and again on privacy issues.


The thread is very good and offers some reasonable points putting their decision in perspective.


I'm not sure I understand the trade off w.r.t. possible abuse.

For harassment / offensive content, if anything E2EE will make it easier to prove where offensive content came from. You've got a cryptographic chain leading to the source, after all (see the sketch at the end of this comment). All you need is a button to record and report things (which admittedly seems to be exactly what they intend to build). The E2EE aspect doesn't really change things, except that Zoom can't record things themselves, which they claim they didn't do in the first place (although they might have relied on the small server-side buffer they had, but that's an iffy solution at best).

Also not sure what to think of Zoom's Trust and Safety team breaking into a private conversation when they think some kind of abuse is going on. Yes E2EE would make it impossible, but why on earth would Zoom want that kind of role?
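To illustrate the "cryptographic chain" point above: if clients sign what they send, a participant who hits "report" can hand over content with proof of who sent it, without Zoom ever holding the media. A minimal sketch, assuming PyNaCl Ed25519 signatures; this is my own illustration, not Zoom's published design:

    from nacl.signing import SigningKey
    from nacl.exceptions import BadSignatureError

    sender = SigningKey.generate()
    signed = sender.sign(b"reported frame bytes")   # what the reporting participant received

    # The report handler checks the signature against the sender's public verify key
    # before acting, so reports are attributable and hard to forge.
    try:
        sender.verify_key.verify(signed)
        print("content verifiably came from this sender")
    except BadSignatureError:
        print("signature does not check out; discard the report")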


I feel like "we need to protect the children" is the new reductio ad hitlerum. This is such an easy way to shut down a conversation.

Do you want to:

a) accept lack of E2EE

or

b) do you hate children? Pick one.

Hurry up, your precious internet points™ are at stake here.


Of course, in this case they’ve set up the counter move easily:

“So what you’re saying is that Zoom is fine protecting pedophiles from law enforcement, as long as you profit?”

This is why you shouldn't play such dumb games as a company; there are only downsides from a PR perspective.



I thought it was the Four Chans of the Infopocalypse? shrug


Nice, the upside of posting a comment with >10 upvotes (not a common occurrence in my case) is that:

1. you end up getting more responses

2. more responses === a higher probability of seeing gems like this wikipedia wormhole I'm about to get sucked into:) I had no idea about the horsemen/chans or that May identified the reason behind the alpha particle problem. Cheers.


I didn't know there was a specific term for this concept. Thanks!


We are rarely given more than two choices. There is no room for nuance in 140 characters.


> I feel like "we need to protect the children" is the new reductio ad hitlerum.

At the risk of being pedantic: it's not new at all. It has been part of almost every political campaign of the past century. Technically "reductio ad hitlerum" is the new "protect the children" argument.


I don't think history supports that. There were essentially zero childrens' rights until post-war. In some locations child abusers were originally prosecuted via animal-protection laws.


The “protect the children” argument has little to do with actual children protection laws. It’s about using pathos of the crowd to manipulate opinion.

Regarding history, I don't know about the US specifically, but European countries have had legal child protections since the end of the 19th or the beginning of the 20th century (depending on the country).

See https://en.wikipedia.org/wiki/Declaration_of_the_Rights_of_t... for a pre-WW2, international effort.


These fall into the class of 'thought terminating cliches':

https://en.wikipedia.org/wiki/Thought-terminating_clich%C3%A...


With all of the calls for platform intermediation of content for the protection of the disenfranchised, b) is just turning into 'do you hate?'


It's not even new, "the children" have been routinely trotted out since the 90s


And the Satanic Panic of the 80s.


As one of said children in the 80's that stuff freaked me out. Riding bikes at night with friends and hearing weird noises in the woods would get the ol' heart (and legs) pumping.


I was just the opposite. It just sounded so bizarre to me, that when the rumors started flying in my area of certain group's activities at certain locations, my friends and I would sneak off to investigate. It wasn't that we were interested in said activities, or that we were necessarily investigating the validity of the rumors. We were just curious. It turns out, it was all false and made up. Put serious doubts in my mind at an early age about "believing everything you hear", but also seriously lowered the credibility of the people repeating the information in my mind. Had it not been for my interest in computers, I might have turned into some sort of investigator.


And the Communists of the 50s.


And the prohibitionists of the 1890's.


Good point, let me rephrase it to "reductio ad hitlerum du jour". If we're scoring the internet points, we might as well be a bit pretentious.


It’s not new. Schneier has been saying this for a long time.


Uhm, it's definitely way older than Hitler even being a person.


Not clear from history.


You think it's just cover for it being a hard technical problem for zoom?

But... what the fuck?

Also... I have a paid account. How can I tell if my connection is encrypted or not? Is it only if all other parties have paid accounts? Is there an indicator?

Under an "encrypt some calls" approach, if even paid users can't tell easily and reliably if they have an encrypted connection... basically nobody can count on it.


I don't think it's a hard problem; they're already doing it for paid calls. It's probably more work to do it for some calls but not others. From what I understand, Zoom has the encryption keys anyway, and they decrypt and re-encrypt on their servers during routing, which can have some benefits. But that means it's more processor-intensive (costly) to use encryption. So I think they're not encrypting free calls simply because it's an added expense (which really adds up at scale).

Working with law enforcement might be true, but it doesn't make sense that it has anything to do with free calls. Again, they have the encryption keys so they could decrypt any calls that they want to work with law enforcement on. This might even be a really poor attempt at upselling to paid accounts.

If you're concerned about security, I don't think zoom is the conference tool of choice -- maybe they've fixed everything I mentioned, but they still have among the worst track records.

Security aside, the feature set and user experience is attractive. Except for one thing, why does it take two clicks to end a call? That's awkward every time. If people are accidentally leaving calls, that's a different problem and two clicks is a lazy solution.


I don't think Zoom provides E2E encryption. IIRC Zoom decrypts all messages at their servers before encrypting them again and forwarding them to the destination.


Huh, right.. so this makes it all even more confusing.

Are they saying that WHEN they implement true e2e encryption, it will only be for paid accounts?

Or are they saying the encryption they've already got, which they are inaccurately calling "e2e" when it is not, was formerly enabled for free accounts, but no longer will be?

Or something else?

(Who would have thunk that lying and calling something "e2e" that wasn't would end up confusing!)

I also still don't understand if you get the encryption (whichever one they are disabling for free accounts) if the 'host' is a paid account but some/all of guests are not...


Exactly this. End to end encryption does NOT mean "Encrypted on each end but not in the middle".


On the desktop app there's a green icon at the top left that says encrypted when you hover it. Not sure about mobile.


A thought:

Sometimes the distinction between physical and digital security is brought up in these discussions, the idea that physical security is imperfect (you can always break a lock) but that digital security may truly be impenetrable. This is a false dichotomy.

If people have a conversation in a pub or on a park bench, then law enforcement can surveil them individually or bug the venues in a targeted manner.

But the same methods can also be applied to digital communication. This is opsec 101 right - if one happens to be a high value target, one would totally expect their house/apartment to be surveilled - no amount of digital privacy can make up for a pinhole camera installed on the wall behind one's monitor, LE doesn't even need the keys, they see the content directly.

I think the argument that digital security is 'too perfect' falls apart if you take into account the reality that physical security is a component of that. "If you control the physical hardware" and all that.

TL;DR Digital security is just a subset of physical security. You can always just drill through the side of the safe.


I think it's pretty clear that unencrypted communication with law enforcement access allows for the discovery of individuals you'd never find via traditional means. Over the years, I've seen this argument brought up a number of times, but oddly seldom explicitly stated:

It seems like law enforcement wants to be able to use digital communication to discover criminals, and

privacy experts want law enforcement to rely on HUMINT, a traditional warrant, and physical access.

I believe the second method is far more just, but I seldom see anyone acknowledge that it's almost certainly less effective.


Does it need to be acknowledged? I consider it to be a priori knowledge.

The distinction is between targeted and untargeted surveillance.

Digital communication is so easy to monitor, particularly by a state-level actor, that if it's unencrypted, it's pretty much all being hoovered up by someone by definition.

That's not the case for physical security, even if everyone leaves their doors unlocked, their windows open, and their notes on the kitchen table; everyone is not automatically a suspect, so most people aren't being put under the microscope.

The government likely has the ability to know, instantly, within milliseconds, everything I've ever done on the Internet that's unencrypted.

By contrast, they will likely never see the contents of the love note on my kitchen table. Well, if that pinhole camera isn't there, anyway. ;)

All of the approaches applicable to physical communications apply to digital communications too.

It's just that the _additional_ level, which in the physical world would be equivalent to knowing the contents of all of the conversations/interactions that people are having in person, is something that people wish to fight against and prevent from becoming normalised.


> Does it need to be acknowledged? I consider it to be a priori knowledge.

I think you have a good point. The reason I'd like to see it acknowledged is because the two sides of the argument often talk past each other. Police power should not be unlimited, and it's clear that our constitution intended for the power of the state to be limited, with the intent of maximizing liberty.

However, for years people made the claim that the "liberty vs. security" argument was a false premise. ie, that ultimate liberty and ultimate security are both possible. I don't believe this is correct. (Broadly I think liberty is more important than security, but everyone has their set of exceptions to this rule) I might just be dating myself. People had this debate constantly in the years after 9/11. Maybe this argument is not getting made any longer?

In either case, I often hear these two sides talking past each other. I wish instead that both sides were more overt. Digital information can make police work broader and more effective, but we should treat it with quite a bit of caution. We don't want police effectiveness to encroach on liberty in most cases.


> but oddly seldom explicitly stated

In democratic societies, law enforcement usually has no right to run "criminal discovery" processes like those. That's why they don't cite their intentions, because it's illegal (more often than not, a crime).

Notice that limits on crime policing are a very important factor in maintaining a democracy.


Which also explains why they keep trying to classify encryption as prohibited munitions, making users of it automatically criminal.


Is it less effective, though? From what I remember of the successes reported in the news over the last few years, e.g. against organized crime or other groups that operated partially or entirely over the internet, it was mostly simple, dull ~~physical~~ police work leveraging mistakes in operational security, not the evaluation of minable data (even when we learned in hindsight that such data existed).

E.g. looking at the logs of relevant servers and waiting for someone to login without their VPN at some point.


Of course strong encryption and privacy protections makes surveillance less effective. The problem with mass surveillance is that it's very easy to abuse, and the abuse potential is very scary. The abuse potential is unacceptable. I'd rather get less effective surveillance.


The existing judicial system is calibrated for HUMINT. False accusations are costly, but rare.

Using digital communications to discover criminals can accidentally sweep in many more innocents, who would then have to hire lawyers and carry all kinds of other costs to defend themselves.


Not to mention, the court system in the US is already at capacity, even though the vast majority of cases are settled before reaching a decision. Sweeping in more cases resulting from a flood of new information won't make it any easier to defend oneself.

Then there are the unintended outcomes. What does the error rate look like for crimes discovered from bits in a sea of untapped information, when Bayes' theorem is applied to an entire populace? And what if crimes are prosecuted before being verified using the real-world investigation methods already in use?
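To put rough numbers on that base-rate problem (the figures below are made up purely for illustration):

    # Illustrative only: a hypothetical classifier with 99% sensitivity and a 1%
    # false-positive rate, applied to traffic where 1 in 100,000 items is criminal.
    p_crime = 1 / 100_000
    sensitivity = 0.99
    false_positive_rate = 0.01

    p_flag = sensitivity * p_crime + false_positive_rate * (1 - p_crime)
    p_crime_given_flag = sensitivity * p_crime / p_flag
    print(f"{p_crime_given_flag:.2%}")   # roughly 0.1%: most flags land on innocent people

Even a very accurate detector, applied to an entire populace, produces mostly false accusations.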


Youtube generates several minutes of video per wall clock second. Now many of those videos are innocuous, but one must assume that occasionally someone uploads a video of a street fight or something more grotesque that is of interest to law enforcement or intelligence apparatuses.

That's public. You can analyze all of those. The NSA is free to pull them just as much as you and I.

And they don't as far as we can tell. Is it the cost of analyzing that much content? Is it that the NSA doesn't care? Is there something difficult about stripping audio off a video for keyword spotting?

Well I have a theory, and the theory is based off what little comes out of that side of the community. The theory is that the NSA can't meaningfully process the data it ingests. There's too much, it's too hard to query and they hit the same roadblocks of telling the difference between an actual crime and a videogame or fiction story.

So then we must ask, why do they want more? They have more data than they can analyze, why even bother ingesting more? It's not because it helps their mission, it's not because there's some value to it.

Well, why do we see regular businesses fall into this trap? A billion points of analytics data that they can't make sense of. When I see it, it's because it's easier to blame a lack of data than to explain the difficulty of the problem. You can always say "Well, I just don't have enough data", but it's much harder to explain that a bunch of crappy, error-filled data isn't good for anything except wild goose chases. Adding more bad data doesn't improve the quality of your data, it just adds more of it.


I think it would feel pretty good to have a database of potentially incriminating evidence against a wide swath of the population that could be used if a person became a high-profile target. For example, if you're in one of those videos and then run for public office 10 years later you better hope the intelligence agencies like your positions and don't want to tank your chances.

So, no, they can't process all of it. But they can more easily trawl it for specific data they need. Especially 10 years from now.


It's a fine supposition, but these suppositions often get passed around as if they're true and self evident. The reality is you don't have distinct information about what the government is collecting. Instead, what you have is information about what's probably possible.

From that standpoint it makes sense to err on the side of caution, and assume it's all being collected. But, while this is an effective risk calculus, it's different from having access to the ground truth.


> And they don't as far as we can tell.

That's wishful thinking which you have no evidence for. But let's assume that you're correct - eventually they will have a way to analyse it en masse.

There are, then, two things we need to bear in mind:

- is the time horizon likely to be close enough that data currently collected will be relevant then

- if we allow the collection now, will it be easy to roll back that collection later when the threat is on the horizon

The answer to both of those questions is yes. Similarly, we use high strength encryption now, even if we think 128-bit is fine, because in time it won't be.

The above is theoretical. The next bit isn't - they will _always_ be able to decide that agent A should look at video B from N years ago.

They can't do that for a letter on the hypothetical table, or a message stored with strong encryption that stands the test of time - it won't exist in N years.


> The above is theoretical. The next bit isn't - they will _always_ be able to decide that agent A should look at video B from N years ago.

Unless they didn't collect and store it, as the parent suggests.


Why in the world would the NSA care about a street fight?

I have the opposite opinion: it is trivial and inexpensive to create and store an indexed archive of text from speech in audio, and to run image recognition models on video and pictures. There's value in having that data archived, so that they can go back and go through it should whoever created the data become a target in the future.

However, I doubt the NSA would waste resources investigating a street fight, but I'm pretty sure the video would be mined of any valuable data that could be gleaned from it.


>So then we must ask, why do they want more?

How do you know they want more?

[edit] Or was this meant as a rhetorical? ie, "who would want more in this case?"


I cannot quite discern whether you intended it, but "law enforcement wants to be able to use digital communication to discover criminals" sounds quite ominous, even if someone has never read 1984.

Even at the best of times, the primary problem is that law enforcement wants to create criminals whenever it fits their fancy. Even more ominously, if police had any greater command of the voluminous criminal codes and the incentive structure changed, they could basically charge or lock up most people they ever come across for any number of arbitrary violations of convoluted laws.

Maybe this is a bit anxious, but with the full-on surveillance state unfolding right before our eyes, where wrongthink gets you "cancelled", we seem to be racing headlong into something not all that different from what Orwell envisioned would be the consequences of self-righteously benevolent tyranny … for our own good, of course.


Without warrants, and thus probable cause, searches are highly discriminatory.

This is evidenced by stop-and-frisk, which was effective only in finding criminality among select individuals.


> seems like law enforcement wants to be able to use digital communication to discover criminals,

Nope. Please remember these words. The surveillance system is about control, not security (finding criminals).

William Binney and thinthread are a great starting place to understand this.


I think that's reductive. I agree that it causes control, and may even be motivated by control, but there's clearly a law enforcement incentive here.


I think the evidence disagrees. I mean I get the on the face justifications, which in Aspen Institute type circles revolves around the fall of the nation state as the threat actor and the move towards a reality where a single non-state actor can be a viable threat, but thats just the surface level justification that makes it palatable to the average person and policy maker.

I think we just have to look at the history of surveillance not just since 9/11 to understand this. Forest and trees and all that.


If someone is a high-value target, LE needs a warrant to bug their home and listen to their conversations, which implies they should have some indication of wrongdoing before a judge grants them a warrant. With digital communications being unencrypted, they could increasingly be used for blanket surveillance in the name of national security, and as a way of finding out who is committing a crime rather than just gathering more evidence to prove that someone has committed one.

If the same physical-world system were to work in the digital age, a company could share a special encryption key with LE for the evidence-collection part, provided they get a legitimate warrant for it. Physical security was never perfect, but we aspired for it to be as close to perfect as we could. The same applies to digital.


I can only break so many physical locks in an hour. My computer can break thousands of digital ones at the same time.


Operators must go to the premises and covertly install the equipment needed to monitor the target. This fact alone limits the scope of surveillance operations. Usually there's a lot more oversight.

Unencrypted communications will be intercepted by default, with no warrant, no oversight, no limitations on their processing, and on a worldwide scale.


"If people have a conversation in a pub or on a park bench, then law enforcement can surveil them individually or bug the venues in a targeted manner."

My home internet connection could have spies from 30 different countries all over it and I wont see anything. If I'm sat on a park bench then anyone with a Russian accent asking for directions to Salisbury Cathedral is going to stand out somewhat.


That's not how HUMINT works. Most likely an acquaintance, friend or co-worker would be the one that actually collects the information from you.

In this scenario the person on the park bench with you.


Do you really believe that Russian spies have Russian accent?


I don't, but it's amusing that this comment has a Russian accent.


If you watch enough English language spy movies you will know full well that a Russian spy has a Scottish accent. So do Russian submarine captains. Funnily enough a British spy and a Russian submarine captain were played by the same actor - Sean Connery. Russian spies are played by less posh Scots.

Hollywood and co. doesn't get out much or something.


The 256 bit encryption is no good when they zip tie you to a chair and cut off body parts until you give up the passphrase.


Precisely. This is about mass surveillance vs targeted surveillance, not obtaining evidence on known criminals.


I know right, someone think of the children!


This all seems rather moot:

1. There is no way to verify that you are actually connected to a particular person. i.e. Zoom has no identity management.

2. The client is closed source and can't be verified.

3. Zoom can trivially impersonate any participant as they control the servers. They can MITM at will and they won't get caught at it.

This discussion is like talking about the security of the bank vault door when you are planning to make the vault out of drywall.


Since it's not E2EE, it doesn't matter what Zoom logs or does or doesn't know; packet captures can definitely record this information. This is basically extortion from Zoom: pay us, or information that you probably don't want to be in the clear will be.


They encrypt "end point to end point" now. It's just that they have easy access to the keys. I was suggesting that what is being proposed for the paying customers might not have any real value over what they are doing now.


The craziest thing with this is that this is even a discussion.

US citizens should have a 'right to privacy'. But that's been stripped away due to post-9/11 reforms, among others.


Whether or not you have a 'right to privacy' does not mean Zoom has to provide it. You can choose a provider who allows you to exercise that right.


Yes, but the quote says:

“Free users for sure we don’t want to give that because we also want to work together with FBI, with local law enforcement in case some people use Zoom for a bad purpose,”

So they want to keep the data unencrypted so they can give it to the feds. That doesn't sound like privacy to me.

edit: So I mean, something like that should not be allowed by law. Though it's arguably the FBI that is breaking the law here, Zoom explicitly says they want to work together with them. That means they approve of that injustice, which makes them unjust as well. If they encrypted the data to protect their users' privacy, they would not be unjust in this respect.


It's not obvious to me why Zoom should have any less right than, say, a hotel to block people using its service for criminal activity.

A hotel that suspects you're taping child porn in one of its rooms is well within its rights to call the police. If Zoom has reason to believe you're distributing child porn in a Zoom room, why shouldn't it be allowed to take action, too?


But you're arguing for the ability for hotels to install peephole cameras in every room to make sure you're not up to no good.

The point of encryption is that no one knows what you're doing, because they can't see it. Just like no one can see what you're doing in a hotel, most of the time.


No one can see what you're doing inside your hotel room, but they know when you enter and exit, who you're with, and they can hear you if you make noise. They can track what you watch on TV, and when you're on the Internet. Housekeeping goes in every day and sees all your stuff, and rearranges some of it.

So they can't literally see you every moment, but they have a lot of visibility into what you're up to.


US citizens have never had a right to privacy when it comes to commiting crimes. That's why court-authorized wiretaps have existed for decades. Long before 9/11.

Do you think courts shouldn't be allowed to wiretap the phones of suspected criminals? Because Zoom is just a modern phone.


Now might be a good time to mention that if you join the Free Software Foundation as an associate member you gain an account on their Jitsi server.

https://www.fsf.org/blogs/community/fsf-gives-freedom-respec...


How well do you find jitsi works? For multi-party video? Large number of parties like zoom?


Dunno what you consider large, but it works well for me with 20 participants.


> […] There will not be a backdoor to allow this.

https://twitter.com/alexstamos/status/1268061796339814400

Eh, am I supposed to trust you just like that? If history has taught us anything, it's that there will be.


Current encryption implementations (AES-GCM) will not be downgraded. Meetings will still be encrypted, and meeting content is still not going to be used for tracking users.

E2E will be an opt in choice for paying users who are willing to sacrifice some features for the benefits from additional security.

See this thread for more details: https://twitter.com/alexstamos/status/1268061790954385408

Edit to credit vjeux for the thread link


Do I have any way to verify this as a user?


Sure. Get a PhD in cryptanalysis and reverse engineering.

If you want to look at something now, the white paper for the E2E protocol design is public and open right now: https://github.com/zoom/zoom-e2e-whitepaper

On a more serious note, until there is a protocol and implementation available then we can't say anything for sure. Us Security folks aren't magicians.


That was uncalled for. Yes, it is hard or impossible to do in Zoom.

If these tools used open standards and well-documented protocols, this would not be a problem.

I can verify, without a PhD in cryptanalysis and reverse engineering, that my browser is running a secure connection to a website and that the certificate is signed by the source (for sites with FS and HSTS enabled).


Don't get me started on browser certificates. That's a whole week of my life I'll never get back.

The short version of it is, your browser trusts CAs to say whether a certificate is valid. But CAs often trust other CAs who may not actually be that trustworthy. Those CAs then trust other CAs who definitely are not as trustworthy... Etc.

So that certificate/padlock picture in your browser may not be as trustworthy as you think. It's an active problem.
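You can at least inspect what certificate and issuer your own client actually negotiated, which is a start even if it doesn't fix the CA trust problem. A minimal sketch in Python; the hostname is just an example:

    import socket, ssl

    host = "example.com"
    ctx = ssl.create_default_context()       # uses your platform's trust store
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            print("issuer: ", cert["issuer"])
            print("expires:", cert["notAfter"])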


Mandatory Certificate Transparency solves the problem of trust in CAs quite well, though.


In terms of an actually relevant reply that's not bemoaning browser certs...

Yes I was a bit harsh. But I was trying to demonstrate a point - no one knows for sure until we can look at this stuff in detail. Until the researchers get to pull it apart then no one can verify anything. The little green tick on a zoom call is practically worthless until some external work is done.

The protocol is documented and open. I linked to it in my comment.


The open protocol for RTC today is WebRTC. Zoom does not use WebRTC. If the protocols are open, like HTTP, then I can build my own client and don't have to use theirs (just like you can build your own Hacker News app). Zoom will not use WebRTC for this precise reason. If they and everyone else did, I could choose my client, and I could choose one I trust to give me its green tick, open source or not.

Google supported Jabber in chat for a long time, and Slack supported IRC (both have dropped the support), but while they did you could use any IRC client with Slack, or use Google chat over Jabber with any client.

If an open protocol for video were used, as with email (though email is not a good example for encryption), it would not matter who your service provider is: you could verify that they are secure, or move to another one.

Today I have more than 10 video conferencing apps on my devices (Zoom, Hangouts, Meet, Webex, Teams, GoToMeeting, Chime, Skype, SfB, FaceTime, Signal, Telegram, RingCentral and UberConference...) because some customer, partner, friend or family member uses each of them. I have only one email client and one browser, though; they don't have to be open source at all. People happily pay for closed-source Gmail or O365 without worrying whether their mail will be delivered to you, while still using the official client or a client of their choice.


Sorry, to clarify, when i say protocol I specifically mean the E2E encryption protocol.

Also, have you thought about asking your clients/whatever to use one app to communicate with you? Even if you get half of them onboard, it sounds like it would save you a lot of mental bother.


I wish. Every company has their own app they use, and they will invite you to their conference by default; asking 10 people on a call to change for 1-2 is not feasible.

Many of them cannot install any new native application on their desktop/phone without IT approval, or their VPN does not allow traffic to consumer apps like Hangouts. They also need to record for compliance, and pre-Covid some apps like Webex were connected to their conference room bridges using dedicated lines and hardware, etc.

Family and friends do not use business tools; it's not easy to convince Apple users who like FaceTime, Messenger is popular in a few countries, WeChat in China, WhatsApp in other places.

It is easier to install another app than to get your grandma to switch from the one thing someone installed on her phone and she has learnt to use.


Well, even if all of your software were open source, would you have the time to validate all of it, from the app to the OS? What about the CPU?


This has been a well-known issue since Ken Thompson's "trusting trust" paper, and it's not what I am getting at.

It is about degrees of trust. Trust is not absolute, and neither is security. Depending on your threat model, you have to secure yourself accordingly. More transparency improves security; it does not solve all the problems, it just makes an attack costlier. If the cost outweighs the benefit, attackers will not attempt it.

HTTPS does not magically make your communication 100% secure; however, the number of people who can issue a certificate from a compromised root CA, or who control one, is considerably smaller than the number of people who can monitor your plaintext traffic.


I like your tone, and it led me to think:

Any sufficiently advanced cryptography is indistinguishable from magic.

Which isn’t entirely untrue from a layperson’s perspective.

Edit: fixed a word. I’d accidentally written “is” rather than “isn’t”.


Go open source.


as the colloquialism goes: Talk is cheap...


How will you show this going forward?


Reverse engineer/packet sniff the implementation and perform some sort of cryptanalysis to see if the implementation works according to the published protocol. It's pretty standard.

That's sort of what happened with the ECB mode stuff that kicked this whole thing off in the first place. See section 4 of the link below for more info.

https://citizenlab.ca/2020/04/move-fast-roll-your-own-crypto...
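One concrete black-box signal from that ECB episode: ECB encrypts identical plaintext blocks to identical ciphertext blocks, so researchers can look for repeated 16-byte blocks in captured traffic. A minimal sketch (the sample payload is a stand-in for a real capture):

    from collections import Counter

    def repeated_block_ratio(ciphertext: bytes, block_size: int = 16) -> float:
        blocks = [ciphertext[i:i + block_size]
                  for i in range(0, len(ciphertext) - block_size + 1, block_size)]
        counts = Counter(blocks)
        repeats = sum(c - 1 for c in counts.values() if c > 1)
        return repeats / max(len(blocks), 1)

    sample = b"\x00" * 64 + b"\x01" * 16      # stand-in for bytes pulled from a pcap
    print(repeated_block_ratio(sample))       # a high ratio is an ECB-shaped red flag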


That doesn't show that someone on Zoom's end isn't decrypting data in an invisible way, without a warrant, with the key they have access to, in the non-e2e case.


This whole discussion is about the E2E case. That's what the CEO was referring to. There's a twitter thread link somewhere.

I think I read that Zoom do decrypt AES-GCM server side already. They have to so they can put the little green box around the person currently speaking. EDIT - this is incorrect for AES-GCM, it's not decrypted server side.

Edit - at least e2e is the angle I'm approaching it from as it's the new information.


This is what you said:

> Meetings will still be encrypted and meeting content is still not going to be used for tracking users.

And the person responding to you asked "how will you show this?".


Which is a fairly ambiguous question and could be interpreted several ways so I went mainly for the E2E case (as it's the new thing).

Could also be interpreted as how can we show only paid users can access it? Or that certain features will be disabled with E2E?

What I replied with covers both E2E and the current state equally tbh (the linked article did it before with ECB). There are always limitations to what is possible.

I could break into the Zoom servers to make sure everything is kosher. But that's illegal.

If WhatsApp started transmitting E2E keys back to their servers people would find that out client side through network packet inspection, not server side.

Security researchers are limited in the tools/methods they can use. We have to work with what we've got at our disposal.


> Security researchers are limited in the tools/methods they can use. We have to work with what we've got at our disposal.

Which is exactly why "trust us, we're not going to do anything with these keys" is a ridiculous state of affairs and shouldn't be tolerated. We can't show that they're actually doing what they say, and it'll be years after they implement mass surveillance on the behest of law enforcement before someone leaks something.


Perfect security doesn't exist. If humans are involved at any point, it's not perfectly secure. The One time pad is a great testament to that.

Should we work towards an ideal? Sure. Should we stress out that things aren't perfect? Probably not.

It's an interesting technical idea, though. It would be interesting to see if any existing systems have a "canary" element to them.


There's a very simple solution to "the provider cannot prove that it will not misuse its access to the stream" - it's to use e2e encryption.


> I think I read that Zoom do decrypt AES-GCM server side already. They have to so they can put the little green box around the person currently speaking.

Why is this not a client thing?

Edit: Civility


* It used to be, with ECB. Not anymore. My bad.

> Matthew Green, a cryptographer and computer science professor at Johns Hopkins University, points out that group video conferencing is difficult to encrypt end to end. That’s because the service provider needs to detect who is talking to act like a switchboard, which allows it to only send a high-resolution videostream from the person who is talking at the moment, or who a user selects to the rest of the group, and to send low-resolution videostreams of other participants. This type of optimization is much easier if the service provider can see everything because it’s unencrypted.

https://theintercept.com/2020/03/31/zoom-meeting-encryption/

This was 2 months ago so their new white paper clarifies the current situation:

> For use cases such as meeting real-time content (video, voice, and content share), where data is transmitted over User Datagram Protocol (UDP), we use AES-256 GCM mode to encrypt these compressed data streams. Additionally, for video, voice, and content share encrypted with AES, once it’s encrypted, it remains encrypted as it passes through Zoom’s meeting servers until it reaches another Zoom Client or a Zoom Connector, which helps translate the data to another protocol.

https://zoom.us/docs/doc/Zoom%20Encryption%20Whitepaper.pdf
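For reference, at the API level AES-256-GCM usage looks roughly like the sketch below (using Python's cryptography package). The key and nonce handling is deliberately simplified and is not Zoom's actual key management:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)
    aesgcm = AESGCM(key)

    nonce = os.urandom(12)                      # a GCM nonce must never repeat under one key
    payload = b"compressed video frame"
    header = b"transport header"                # authenticated but sent in the clear

    ct = aesgcm.encrypt(nonce, payload, header) # ciphertext plus a 16-byte auth tag
    assert aesgcm.decrypt(nonce, ct, header) == payload

Note that "encrypted with AES-GCM on the way through Zoom's servers" is still distinct from end-to-end encryption, since whoever holds the key can decrypt.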


Ah, so their proprietary codec has come back to bite them in the ass.

(I realize codec is not the correct term.)


That's what you get when you are a MITM-by-design solution.

In short: Zoom E2EE will eventually encrypt corporate conferences, but it will not solve the privacy problems they have, because their structure stays the same.

Seems it will be a feature just to make customers have the feeling they are safe (and pay more, indeed).

But, as usual, they are not.

Poor Keybase.


Keybase got acquihired, which is a much better outcome for both founders and investors than that which befalls most startups that never invent a viable revenue model.

Don’t shed any tears for anyone making hundreds of thousands of dollars per year while a third of the US has approximately zero income.


I'm certainly sad to see a company that took security seriously and "did it right" being bought by a company that arguably did not and got large mostly by trading security for convenience, even going as far as saying they were E2EE when they were actually not [1].

How do you compete with that?

Most users don't care about security/privacy and maybe that's fine but it was nice to see a company that seemed to genuinely care about these things.

[1] going by the commonly used definition


Not touching for a moment whether or not Keybase “did it right”, but, very simply: without a good revenue model, their only other option was to eventually go out of business. They were default-dead, near as I can tell from outside looking in.

You can do all the open source e2e crypto trendiness you like, but unless you’re a nonprofit like Signal that can generate a stream of donations, if you don’t eventually get people to pay you for the service, you’re not going to be able to stick around.

This was the best possible outcome for them, given the circumstances.


Skype was safe before it got acquired by Microsoft.

In a way, any platform that is able to evade government surveillance will be deemed an exception, and anyone using or building such a platform will be a suspect.


Ah, so if I understand the Zoom CEO correctly, "protecting" pedophiles is fine as long as they pay Zoom. It's only an issue when they try to use Zoom for free.


The issue is that the free tier makes creating and disposing of burner accounts easy. If you pay, they have credit card info, etc. That makes it easy for law enforcement to link the Zoom account to a real person if it turns out that a particular account is a pedophile.


That’s how I understand it. What disadvantaged groups would be using Zoom for free that law enforcement would want to target? Protest organizers maybe?


May I recommend https://jitsi.org/ for meetings of 5 or fewer people. Easy to deploy on any cloud provider.


Does it bog down with more than 5? For it to be a serious alternative, it needs to support meetings with dozens of participants (even if it requires beefy hardware).


It fares pretty well in my opinion. I was taking Japanese classes before this COVID situation and they implemented remote classes using a self-hosted Jitsi instance; we're 15 people in the Jitsi room, we have the class for 5 hours and it's pretty stable. I imagine there are more rooms running simultaneously too, since we were not the only class taking place before this.


Unfortunately yes, I mean, the software allows for thousands of participants but the stream would be terrible. For 5-50 people, use BigBlueButton instead.


The scaling of bandwidth with the number of people _actually transmitting video_ is not perfect, so your results will vary with internet uplink speeds.


Jitsi works great for me with 20 participants, never tried more.


As Zoom now owns Keybase, I am really worried about the future of Keybase, especially after statements like these.

This also makes little or no sense: if this is really just about cooperating with law enforcement, why would encrypting corporate (or paying) calls be any better? The bad actors referred to in the statement could just get a paid plan.


Keybase is dead. In the PR they were explicit they were hired to work on Zoom and not continue working on Keybase. Luckily there is keys.pub


Is keybase fully open source? Or is the server closed source?


The server is still closed, but it seems there are people saying the server side is not needed(!) to trust the platform.

https://www.reddit.com/r/Keybase/comments/77c241/keybase_why...


Why (!)?

Of course servers are untrusted. If you think you need to see the server source then any trust you have is mistaken.

Same as with HTTPS. If you think you need to trust the MITM, you've already lost


I think you don't need to trust the server only if you can audit the frontend and ensure it does not share any sensitive information with the server side. If they apply the concepts of data minimisation, decentralisation and distribution properly, there are fewer risks involved.

But, if the server manages sensitive information, yes. It is preferable to audit the server code to understand how they handle the lifecycle of the information.

"If you think you need to see the server source then any trust you have is mistaken."

Sorry but I don't agree. I trust in systems I can verify. Trust without verifying is not trust, it is faith.


OK, but that doesn't help you when they shut down the server, which I think is what this thread was about? That Zoom purchased it as an acquihire to have the staff work on Zoom, and isn't committed to the platform.


Agreed, but a server that's turned off has a lower risk of leaking information. And I also think they bought the team's expertise to improve Zoom, more than the product per se. It will take some months to answer the question of what will really happen to Keybase.


Clearly, a product which has been discontinued has less of a chance of violating your privacy, that's true.... I think there are a couple different non-intersecting conversations going on here...


But was it discontinued?

I was looking into their terms (https://keybase.io/docs/terms) and they should notify before it. And seems my account is up over there.


Paying leaves a verified trace though.


True, but they can't know the content at all.


-No worries, they'll just assume the content is malicious, then. (/s. I hope.)


It sounds like bullshit. I don't think the reason to avoid Keybase is necessarily that this calls its security against the authorities into question, but it does make you wonder whether a company that would say something that looks like impulsive nonsense is one you want to trust.


Interesting Zoom timing co-incident with the announcement that the DEA has been authorized to surveil protestors. https://news.ycombinator.com/item?id=23397868

Next up: Zoom meeting attendees raided for unlawful assembly. https://www.persecution.org/2020/05/24/wuhan-preacher-taken-...

Catch up: Identifying influencers from sampled social networks. https://ui.adsabs.harvard.edu/abs/2018PhyA..507..294T/abstra...


Note that warrantless and illegal bulk military intelligence gathering and "law enforcement" in the US are now practically indistinguishable: several US federal law enforcement agencies, including the DEA and FBI, receive intelligence products from the military's domestic spying apparatus, which they then use to conduct what's known as "parallel construction": building an ostensibly independent evidentiary trail for use in court, so that neither the spying (which is illegal) nor the evidence originally derived from it ever has to be disclosed.

Additionally, federal “law enforcement” (and concentration camp-operating) organizations like the CBP are engaging in domestic mass spying using aircraft to collect mobile phone identity data from millions, even for peaceful protests and the like. There isn’t really a line between “state surveillance” and “law enforcement” anymore in the US.

The title of this item as I submitted it to HN prior to its edit by mods ended in “to aid in state surveillance”, which I think is a more plain, accurate, and unbiased description of the practice, as I think that pretending that this illegal military spying practice (PRISM et al) has anything to do with legitimate “law enforcement” is basically state propaganda at this point.

I stand by my previous snarky, HN-rulebreaking flame of Zoom’s announcement of end to end encryption support from a month ago:

> I'm sure the result of this will be lots of good and secure trustworthy software that I'll be eager to install on my computer.

https://news.ycombinator.com/item?id=23103578


The latest upgrade of their macOS app bricked me, and I have to use Zoom in the browser. We have a paid account and started to record our meetings and post them on YouTube; although I've increased the quality setting, it's still 360p! In general, it's the most complex and confusing app, it's expensive, and the quality of the software is the worst. Unfortunately, Google Hangouts Meet isn't better and it has much higher internet connection requirements, so our nonprofit was forced to use Zoom. Unfortunately, Jitsi Meet is even worse. It's kind of ridiculous that Zoom has no popular alternative in 2020.


WebEx and Skype are the popular alternatives. And if you want to talk about poor software quality I think Zoom was created because WebEx is such poor software from a user's perspective.


Skype absolutely wrecks my MacBook, not sure if the experience is similar on other laptops but I can't even open a PDF while on a Skype call. It would be impossible for me to give a presentation over Skype with my current setup. The software on its own seems reasonably user friendly but if the resource usage is a common issue I don't see Skype being a viable alternative right now.


I would add google meetings... I think it has the nicest interface and works better than the others cross platform.

It's a pity there is no cross-platform communication standard - it looks like it will evolve like messaging, with dozens of companies.


We have kids with Chromebooks and older laptops, and in meetings with 10+ kids, Hangouts Meet becomes intolerable. Not to mention the lack of control - kids mute the teacher, can speak at any time, there's no raising hands, etc. We have to install a bunch of extensions like the Nod Chrome extension just to have basic functionality in place.


Yeah, there's actually tons of alternatives. Everyone and their cousin has a chat app, and video conferencing is a basic feature.


I have cancelled my (paid) Zoom subscription. I will not finance companies that feel that protection from oppression is only for those that can pay.


Same, but strangely, when it asked for my reason for cancelling, there was no radio button for "I find your system of morals incompatible with my own".

Unrelated to this, I read yesterday that Jitsi Meet now supports E2E encryption so I look forward to trying that out.



Good grief this one is complicated.

That Bloomberg article is boring except for the sentence at the end. The submitter made the title be about the last sentence, but we changed it back to the original, which didn't satisfy anybody because the only interesting thing about the article is the last sentence.

The current post seems at first glance to be a garden-variety tweet picking up on that sentence, but someone pointed out to me that it's actually by the author of the Bloomberg article, suggesting that he might be at odds with the Bloomberg editors about what aspects of the story are significant. Suddenly that's interesting.

Given that Alex has been tweeting in response to this in detail (https://twitter.com/alexstamos/status/1268061790954385408), it seems like there's enough information here to support a substantive thread.

Given that sneak's post was the first on this and that it links to the statement by the reporter about the only thing that anyone here cares about, it seems clear that this is the post we should leave up. So I merged the comments from the other thread hither.


Thank you for your work!


Thanks for the merge!


The optics on this are truly awful. They want to "work more with law enforcement" - at this time?

Now?

As the police are moving in to cities across the US military-style?


Commenting on the PR-friendliness of something is generally the least valuable of contributions.


I understand your point, but need to disagree: PR-friendliness can have real-world implications, such as a stronger negative response of Zoom users.

Another important thing to consider: Zoom is probably aware of the risks involved in this decision and, despite the PR risks, decided to go ahead. Why? Most of us here can come up with a couple of reasons.


That, and most zoom users don't care. We use zoom at work, in spite of my repeated warnings to the contrary and my efforts (and demonstrations) of Jitsi. But they like zoom. It's convenient. It's sort of cross platform, and when it isn't, you can just dial in using a phone.

Plot twist? The company I work for is a software company. We're all software engineers. And yet I'm unable to make them have a negative response to zoom. I can't imagine the greater public giving an iota.


One more reason not to use this garbage. I keep recommending Signal to everyone I know.


Last time I checked, Signal doesn't support group video calling.

Also Group video calling features in apps like Facetime and Facebook Messenger are in a slightly different category from meeting centric apps like Zoom and Microsoft Teams.


I hope we get group video calls and video calls on desktop soon


I work at Signal on calling.

Non-group desktop calls are coming soon (hopefully hitting beta in a few weeks).

We're working on desktop group calls as well. That will take a bit longer, though. I don't have a good estimate of when it will be available


Excellent news. It would also be helpful if we could also get (missed) call notifications on desktop. Right now if someone calls and I don't have my phone with me, I don't see their (missed) attempt on the desktop.


Thanks for your work and looking forward to testing it out!


I guess this is the SF version of "pay us or else..."


So a group of corporate employees could conspire over Zoom in full confidence?

Gotta protect that source of revenue I suppose.


Can they encrypt the calls at all if they allow people to call in? They would be the ones receiving the phone call, so they would need to be able to decrypt the zoom call in order to send it over the phone. I guess they could encrypt calls only if you disable dialing in or something.


Seems like E2E will be unavailable for meetings where there is a dial in participant. There's several other features which would need to be disabled as well to opt into E2E.

https://twitter.com/alexstamos/status/1268061790954385408


Aside from the PR fiasco, this is bad for their paying users.

I can’t verify that Zoom actually encrypts my calls, I have to trust that they’re telling the truth. When I find out that they’re willing to turn off encryption for some calls to make spying on their users easier, the idea of holding business meetings on their platform becomes unpalatable.


They're not turning off encryption; they're allowing law enforcement decryption. Whether or not you think that's okay, it's not the same thing.


It's the same thing. If it can be turned off, it will be accidentally turned off, accidentally not turned on, or a myriad other things in this general direction.

I witnessed a cellular carrier discovering that they had all encryption disabled for several months. An honest mistake, but one that should have been impossible by design.


They're also maybe allowing decryption by anyone else who wants to, and by law enforcement without a warrant - there's no cryptographic proof anywhere that Zoom will only hand the key to police on presentation of a warrant, and never use it themselves.

In the meantime, anyone who actually cares about their privacy can use an e2e encrypted group chat today with e.g. Wire. All this gets us is that unwitting people who can't afford to pay are in the position to get spied on.


Handing law enforcement the keys is functionally the same, from my perspective.

And what reassurance do I have that they won’t do that to me, a paying customer? All I have is a little icon that says “encrypted” and Zoom’s word.


How is that different from rot13?


Secure by default is no longer a staple of communication apps?

I thought we established that standard. Oh well, ride your wave Zoom, don’t get mad when the inevitable funded competitors start showing up with security as a default.


I spend less running a Jitsi Meet cloud server than on a single Zoom host account... Two-party e2e is still only experimental in Jitsi, but it's coming.


You are missing the point. Zoom is popular because it's easy to use and "just works". Enterprise orgs don't care about cost.


I test drove Jitsi with a few different groups (including a not particularly technical friend group) with no more instruction than "Here's a link to a video meeting. If you're on mobile it'll prompt you to download the app. The password is 'foobah'", and it "just works" too.

> Enterprise orgs don't care about cost.

In my experience, they also don't care too much about ease-of-use either. Purchasing departments and managers are looking for someone who'll convince them that they won't personally take the blame when something goes wrong, which is why expensive proprietary services win over open source so often... "Sure, everything's gone TITSUP[1], but it's Microsoft's/Salesforce's/Zoom's fault. I've logged a ticket." is a magical career- and face-saving phrase...

1 "Total Inability To Supply Useful Product" - hat tip to El Reg...


> Here's a link to a video meeting. If you're on mobile it'll prompt you to download the app.

You can disable forcing mobile users into an app, too.


Do you know a good guide for setting up a Jitsi Meet cloud server?


This works: https://aws.amazon.com/blogs/opensource/getting-started-with...

(And you don't need anything like as big an instance as a t3.large - a t3.small works fine for small workloads...)

Everything from the "Install and configure Jitsi" section works just fine on Digital Ocean (and Raspberry Pi) too, if you translate the AWS specific setup stuff in the previous parts to suit...


"Secure by default is no longer a staple"

It never was. It never was.

Security was built over time, with a lot of lessons learned in between.

Browsers didn't magically start supporting SSL overnight. MSN Messenger never had any encryption, for example.

One of the first protocols to support some form of encryption was SILC, but who uses it?


Is there any reason to believe that, if you pay, Zoom can't listen to your conversations anyway?


Same reason as with Windows probably...


Sometimes, this company can’t seem to help but shoot itself in the foot.

The best thing is to encrypt everything, in order to make all the traffic look noisy and randomized.

Then, for paying customers, they can use stronger encryption that's tougher to crack.

Ideally, all the traffic being sniffed will look randomized, with paying customers having the tougher encryption.


So, they'll let the Epsteins of the world pay and get E2EE, but those "Thug" protestors who want to organize and maybe can't justify the cost need to be surveilled.

Set against the events of the past week, I strongly feel this message is quite tone deaf and we're continuing to see two classes. Those exempt from police authority and those who cannot afford to be.

Edit: Authority isn't the right word. Oppression?


Definitely a strange thing opening access to only free subscribers. There's probably an argument to be made on both sides, but putting this as an 'added value' for paying? Tone deaf for sure.


Of course, if you can't justify the cost to a company, you will be stripped of its services. Is that not expected of private corporations?


It's not stripping them of service though, is it? It's providing them with the Zoom service, but it's specifically removing encryption to expose the conversations of non-paying people to the police. This isn't about saving money; it's about them intentionally choosing to hand over non-paying customers' chats to law enforcement.

Also, from a customer satisfaction / PR perspective I am hard pressed to think of a worse time for a company to announce this.


I am not applauding this. I'm just pointing out how corporations work now.


Sure, we can ignore that the Zoom CEO mentioned anything at all about law enforcement and focus solely on the financial aspect. If we do that, then why have a free segment at all? Go ahead and make it completely for pay.


It's all up to financial incentives. I am not saying it is right. The whole system is not right.


1) Encryption costs are marginal.

2) He didn't say it was a costly operation that they're charging for. They said it was for LEO purposes.


Oh, you won't have to be rich to purchase E2EE. I'm sure that Zoom will make it surprisingly affordable.


If they are willing and able to enable/disable it over the matter of a few dollars (any amount of dollars, really), can one actually trust that it is enabled?


It's okay to use Zoom for a bad purpose as long as you pay?


It sounds like nearly all of the people using Zoom to facilitate abuse are doing so on the free tier (with untraceable throwaway identities).


There is some logic to that. Paying requires you provide something traceable to a physical identity.

I still think it's a dumb move. Imagine if "HTTPS for paid users only" were a thing.


Or https for everyone, root cert validation for paid users only


You want to hash out a bad business plan - do it on our paid version of a product. Gotta spend money to burn money.


Also, wouldn't that logic imply you have something to hide?

- Special Agent Smith here, FBI - here, judge, sign this order?

- What is it?

- It's someone we suspect of doing bad stuff, AND they're using the paid version of Zoom.

- Oh, that's the encrypted one, right?

- Yes, your Honor, the free one is unencrypted; you have to pay to hide your convo.

- Here is the paper, good luck.


There are plenty of legit reasons to want end-to-end encryption. What if you are discussing a patent that hasn't been filed yet?

And if there is a backdoor for police there is a backdoor for more than the police.


I guess the hopes of the Keybase acquisition leading to better privacy in Zoom are dead.


It just led to Keybase's death.


Keybase died when they added bitcoins or whatever it was. Zoom kicked the corpse.


Why so dramatic? I understand the desire to use Zoom for free, but why should they care much about someone not paying? Video conferencing is still mostly not p2p, which means they are giving away a lot of bandwidth for free to a lot of people. If you have extra requirements, why not pay?

The acquisitions and hires they have made have made sense if what they are aiming for is a more secure and private service. I don't doubt their intention. What I don't understand is why people expect them to provide all benefits for free.

Edit: does any service do multi-party e2e conferencing? What trade-offs are there?


A lot of ordinary people don't care about Zoom's 'dark' deals and still use it out of habit/comfort/etc. Hopefully many mainstream news and blogging platforms will spread the word about this.


I think civil unrest will set back privacy efforts. Basically, when people are scared, they want the government to protect them and theirs, and privacy concerns are pushed to the back burner.


> Basically, when people are scared, they want the government to protect them

That works only until the people you're scared of are those the government sends "to serve and protect"...


>when people are scared, they want the government to protect them and theirs

Should someone tell him?


“Free users for sure we don’t want to give that because we also want to work together with FBI, with local law enforcement in case some people use Zoom for a bad purpose.”

He could be a 'PR Genius' gently coaxing people into becoming paying customers, but I don't think that's it; rather, this is just a mind-blowing, gigantic lack of PR self-awareness verging on disaster. To just say it as he articulated it, publicly ... my gosh, man.

From a communications perspective this is like comedy.


Why's there still no open-source peer-to-peer standards-based encrypted chat and video meeting software? I mean, that works well and isn't shady.


You simply can't do P2P video meetings in a performant way because videoconferencing a) still requires STUN and TURN servers due to NAT, firewalls, mobile, etc. and b) requires a server for bandwidth reasons once you get beyond 2-5 participants, since otherwise bandwidth is N^2 for N participants.

It's that simple.

Now, if someday NATs and firewalls die so every device can receive connections from anywhere, and packet multicast across the internet becomes a thing, then this could probably change. But I don't think anybody sees either of those happening anytime in the next decade (or ever), for both technical and security reasons.
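
To put rough numbers on that bandwidth point, here's a back-of-the-envelope Python sketch (the 1.5 Mbps per-stream figure is purely illustrative, not specific to any product):

    # Rough upload-bandwidth comparison: full-mesh P2P vs. a central SFU server.
    # Assumes every participant sends one ~1.5 Mbps video stream (illustrative only).

    STREAM_MBPS = 1.5

    def mesh_upload_per_person(n):
        # In a full mesh each participant uploads their stream to every other peer.
        return (n - 1) * STREAM_MBPS

    def sfu_upload_per_person(n):
        # With a server (SFU), each participant uploads their stream exactly once.
        return STREAM_MBPS

    def mesh_total_streams(n):
        # Total streams crossing the network grows quadratically: n * (n - 1).
        return n * (n - 1)

    for n in (2, 5, 10, 25, 50):
        print(f"{n:>2} people: mesh upload {mesh_upload_per_person(n):6.1f} Mbps each, "
              f"SFU upload {sfu_upload_per_person(n):.1f} Mbps each, "
              f"{mesh_total_streams(n)} mesh streams total")

By 10 participants a full mesh already asks each person to upload more than a typical home connection can sustain, which is why everyone routes through a server.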


Because you haven’t written it yet!


It is a hard problem to solve. People earn their bread making and scaling this stuff.


Thoughts, in order:

1) Good, I guess I'll be using and promoting Zoom as much as I can

2) Well, I guess it doesn't really matter, since bad actors will have other encrypted software they can use

3) Well, I guess I'll still use it and recommend my friends use it, just to avoid false-positive risks of flags from law enforcement

Yes, I know this is effectively an "if you have nothing to hide" mindset. I'm okay with that.


I wonder if we could do overlay encryption like this: some software encrypts the audio and video streams from your camera, then apps like Zoom transfer the encrypted streams (which look like noise), and the end-user software decrypts them. This way you could use any middleware app as long as both users have the encryption overlay. Just a crazy idea.
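
To make the idea concrete, here's a minimal Python sketch using AES-GCM from the `cryptography` package. The big (and probably fatal) assumption is that the conferencing app carries the frame bytes through losslessly; real video codecs are lossy, so the ciphertext would be mangled in transit:

    # Overlay-encryption sketch: encrypt raw frame bytes before the conferencing
    # app ever sees them, decrypt on the receiving side. Assumes a pre-shared key
    # and a lossless transport, which real (lossy) video pipelines don't provide.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # would be pre-shared out of band
    aead = AESGCM(key)

    def encrypt_frame(frame_bytes: bytes) -> bytes:
        nonce = os.urandom(12)                  # fresh nonce per frame
        return nonce + aead.encrypt(nonce, frame_bytes, None)

    def decrypt_frame(blob: bytes) -> bytes:
        nonce, ciphertext = blob[:12], blob[12:]
        return aead.decrypt(nonce, ciphertext, None)

    frame = b"\x00" * 1024                      # stand-in for raw pixel data
    assert decrypt_frame(encrypt_frame(frame)) == frame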


The FBI are slightly above the police concerning abuse of authority. Once I reported a serious crime that police in my area weren't willing to investigate, and I filed it with the FBI, but I was never contacted again even though I was told I would be. Seeing how cops behave with the protests makes me skeptical that any law enforcement in the USA is legitimate unless you're loaded with money or social influence.



The cops are definitely not learning from this not to abuse black people. They are instead learning how to cut off lines of movement and communication, how to kettle and gas people, and they are learning that in a crisis, nobody will stop them from doing it.


"Pay us or we share your information with law enforcement".

Does this legally qualify as extortion?


No. Blackmail maybe.


Well, respect for coming out and saying it directly.


I think this is the right call. Putting up a financial barrier so you have numerically fewer trolls is the right solution (also, the credit card used can help track down the identity of Zoom bombers).


What are some reliable open source alternatives for non-commercial use?


Jitsi Meet



This statement from Zoom makes me believe that even for encrypted calls they may give authorities a back door (i.e. something that Apple and others have been very vocal about not doing).

Do enough people care about privacy to warrant a "Signal for video conferencing" Zoom competitor?


I work at Signal on video calls, and we're working on adding video conferencing so that Signal can be the "Signal of video conferencing".


Best news I've seen all week. Keep it up! And please keep us posted.


There was already a controversy over Zoom marketing their calls as "end to end encrypted" when in fact they're only encrypted on the way to and from Zoom's servers, where they're unencrypted (and easily wiretap-able).

There are a variety of better Zoom alternatives, from fully end-to-end encrypted peer-to-peer ones like GNU Jami (which requires an install) to Jitsi Meet, which runs in a browser with better quality¹ and IMO better ease of use than Zoom but requires running everything through a central server (though it at least allows you to host it on your own).

1: https://www.nytimes.com/wirecutter/reviews/best-video-confer... See also https://en.wikipedia.org/wiki/Jitsi


Zoom is popular because it can support 50+ person conferences. P2P really can’t support that many users.


That's part of why I mentioned Jitsi and not just Jami. The former supports partial p2p on one-on-one calls, but uses a central server to support 50+ person calls (though they do recommend 75 max on the free instances they provide IIRC).


Jitsi is a great replacement


Sadly, not for all use cases, in particular not for large meetings (>50 people) with video.


And, ironically, it doesn't seem to work in ChromeOS (Chromebooks).


It does.

The administrative tools of my local school district block the official community instance https://meet.jit.si/ on school-issued Chromebooks (as well as probably millions of other sites, at one point they even blocked a subdomain of the school district's own website), but the same service hosted at the other official instance https://8x8.vc/ works wonderfully.

Edit: detail


It is safe to assume that lawful access is possible on all major commercial platforms. In theory I don't see an issue with this if it is controlled by warrants and judicial process/oversight. Unfortunately the US did away with this with the Patriot Act, and the CCP and Russia don't even have such a concept, as mass surveillance is the law there.


The point of E2E encryption is to make this mathematically impossible. Anything labeled E2E that doesn’t provide that guarantee should be pointed out.


So it's official then, huh? It's worth $20/month to keep the FBI out of your shit?

Do they _really_ think terrorists, counterintelligence agents, or criminal organisations can't afford $20/month???


When you pay for something, you are very traceable; it's a big difference.


Stolen credit cards are a thing, as are proxy purchases.


What's a proxy purchase?

Edit: never mind, Google told me. It's getting the older kids to buy you alcohol.


Or get a homeless guy/junkie to buy 10 SIM cards/prepaid phones for $20.


And you know - stolen credit cards might even give them an actual conviction they can make stick "Al Capone style", instead of trying to convince a jury that shittalking in a Zoom chat was a real threat...


Right, but there is always something to go on there: who made the purchases, where the card was stolen, etc.


At which point you find a trail leading to some Russian hackers who stole the credit card and then sold it for cryptocurrency over the internet to the person you're looking for, who they have never met.

It's square one. It's no easier to trace the anonymous buyer of a stolen credit card than the anonymous Zoom user to begin with.


> When you pay for something, you are very traceable, its a big difference.

So in order to have privacy you have to do a thing that violates your privacy. Rather problematic for people who need privacy.

If I'm organizing one of these anti-police brutality protests, I don't think I want the associated purchases to be tagged with my name in some police-accessible database.

Meanwhile the actual terrorists and foreign governments can just commit identity theft or similar.


No, in order to have privacy you can choose applications which appeal to a different user base and have different tradeoffs. Like Signal, or even WhatsApp.

Zoom appeals to a different user base and offers different features as selling points. Please read the twitter thread mentioned below to understand their perspective.


> or even WhatsApp

Ummm, no.

(I mean, sure, it sometimes uses Signal protocol, but seriously a _Facebook_ product recommendation in a privacy discussion???)


What do you mean by "sometimes", does WhatsApp generate a random number to decide what protocol to use?

Do you have better recommendations which use E2E and are also already used by the masses?


Originally only two-person chats were e2e encrypted, then they added e2e to group chats, but last time I checked (admittedly a year or more back) if one participant in a group chat had an old version of the app, the entire group chat was unencrypted, without an obvious user interface indication of that.

I use (and trust) Signal. I believe Wickr is trusted by people who have the resources to know whether it's trustworthy (though it's closed source, so :shrug:).

If you need "used by the masses", then you're gonna be stuck with, I dunno - Facebook public posts? Gmail? Slack? Smoke signals? Nothing that's fit for purpose if you value privacy...


Set up a server with TeamSpeak, which a ton of millennials and zoomers already use for games, and nuke the server afterwards?


> No, in order to have privacy you can choose applications which appeal to different user base and have different tradeoffs.

In other words, don't use Zoom.

> Please read the twitter thread mentioned below to understand their perspective.

Their perspective is they don't want to use end to end encryption so they can turn their users in.

Even for ordinary users, this yields no advantage to the user. For users who have reason to fear oppression by the authorities it's quite problematic, and in general everyone else should try to avoid using such things out of solidarity.


Security does not equal privacy.

They are two distinct concepts. Security with encryption is about trust guarantees. I send message X to Alice and I know Bob can't read the plaintext of message X because I encrypted it with Alice's key.

Privacy can benefit from additional security. But it's generally a whole other ball game. For example, Bob will still know I sent a message to Alice and can hit me with a wrench until I reveal the plaintext of message X.
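
A tiny Python sketch of that distinction, using PyNaCl sealed boxes (the names and message are purely illustrative):

    # Encryption gives a confidentiality guarantee, not privacy: anyone in the
    # middle can't read the plaintext, but still sees that a message went to
    # Alice, when, and roughly how big it was. PyNaCl sealed boxes; illustrative.
    from nacl.public import PrivateKey, SealedBox

    alice = PrivateKey.generate()

    # I encrypt message X with Alice's public key...
    ciphertext = SealedBox(alice.public_key).encrypt(b"message X")

    # ...so Bob, sitting in the middle, only ever sees an opaque blob.
    print(len(ciphertext), "opaque bytes visible to Bob")

    # Only Alice's private key recovers the plaintext (that's the security part)...
    assert SealedBox(alice).decrypt(ciphertext) == b"message X"

    # ...but Bob still learned I sent *something* to Alice (the privacy gap).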


Isn't that part of security? I mean, you lack the physical security to prevent Bob from using a $5 wrench on you.

If security protocols make it difficult to trace that you sent a message at all, i.e. they also take care of metadata along with the content, you also get privacy.


With physical security, it would be more like "I trust Bob isn't going to hit me with a wrench because he told me he wouldn't".

Bob isn't very trustworthy.

Sure, we could encrypt everything ever created by any device at all times. But some sort of communication is sent from one IP address to another IP address. Even if it's encrypted, some form of something was sent.

This is where privacy then becomes a thing. You could start sending random noise out constantly. Then, when your encrypted data is sent out, it looks like the rest of the random noise.

That would be like "I sent out 4999 random messages to 4999 random people and 1 encrypted message to Alice. Now Bob can't work out I sent my encrypted message to Alice".

That's the difference. Privacy is about hiding the existence of something from Bob. Security is about keeping something safe from Bob.
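
A toy Python sketch of that cover-traffic idea (the 5,000 figure is made up, and real designs like mixnets do this far more carefully):

    # Toy cover-traffic sketch: pad one real ciphertext into a batch of dummy
    # messages of identical size, so an observer can't tell which one mattered.
    # Entirely illustrative; real systems are far more careful than this.
    import os, random

    MSG_SIZE = 256          # every message padded to the same length
    BATCH = 5000            # 4999 decoys + 1 real message

    def make_batch(real_ciphertext: bytes):
        batch = [os.urandom(MSG_SIZE) for _ in range(BATCH - 1)]     # random noise
        real = (real_ciphertext + os.urandom(MSG_SIZE))[:MSG_SIZE]   # pad w/ noise
        batch.insert(random.randrange(BATCH), real)                  # hide it
        return batch

    batch = make_batch(b"\xde\xad\xbe\xef" * 8)   # stand-in for an encrypted msg
    # To a network observer, all 5000 look like equally sized random blobs.
    print(len(batch), "messages, uniform size:", all(len(m) == MSG_SIZE for m in batch))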


> That would be like "I sent out 4999 random messages to 4999 random people and 1 encrypted message to Alice. Now Bob can't work out I sent my encrypted message to Alice".

Sounds exactly like I2P darknet.


I'm full of great ideas that other people have already done.


> Security does not equal privacy.

It's all privacy here. End to end encryption provides privacy; the company can't view your conversations. Not paying with a credit card provides privacy; you don't have to give them your personal information. So if you can't have one without the other then you can't have privacy with Zoom. (Unless you're a terrorist with a stolen credit card.)


So by your logic then, WhatsApp provides privacy?

What about the troves of metadata they collect and make available to law enforcement? Plus the fact you have to register with a phone number which can tie the metadata to a specific individual.

Doesn't sound very private to me.

Privacy is full of tradeoffs. Security isn't. My messages on WhatsApp are safe from anyone I don't trust who doesn't have 10 supercomputers and a lot of patience.

But that doesn't mean everything I do on there isn't tracked to kingdom come. It doesn't mean the fact I sent a message to Alice is private.


> So by your logic then, WhatsApp provides privacy?

By your logic by my logic then Zoom provides privacy if you pay them. This is obviously the opposite of what I said. The most private solution is end to end encrypted and doesn't require a credit card and doesn't collect metadata etc.

> Privacy is full of tradeoffs. Security isnt.

Everything is full of trade offs. How can privacy be full of trade offs and security not when privacy is a subset of security?


> The most private solution is end to end encrypted and doesn't require a credit card and doesn't collect metadata etc.

The most private solution actually doesn't involve a computer at all. It's not very useful though.

> How can privacy be full of trade offs and security not when privacy is a subset of security?

See my initial comment:

> Security with encryption is about trust guarantees.

I'm not meaning "cybersecurity" when I say security here. I'm talking specifically about the security of encryption. Privacy and crypto security are two distinct concepts within cybersecurity.

Privacy hides the existence of a thing. Encryption (security) keeps the thing safe.


If they think you're a terrorist or want to pretend that you are, $20 won't keep them out of your calls. This is just a way to justify expanding surveillance and making a few bucks off it.


Yeah - it's even more Zuckerbergian "surveillance capitalism" than Facebook.

Advertisers are nickel and dime-ing cheapskates - gotta plug into the "War on Citizens" police state money pipe to get rich these days...


According to the article, they don't have end-to-end encryption yet. It's still under development.


Maybe it is not about affordability, but about the digital footprint (credit card, bank account, etc.) which these kinds of agencies would be wary of; after all, Zoom does not accept cash transactions.


Would compression techniques eliminate any possibilities of using steganography in real time video calls?


I will be surprised if Microsoft or Google encrypt their video calls such that LE cannot access them either.


Jitsi are doing some nice work on this...

https://jitsi.org/blog/e2ee/

"If Emil was a rogue service provider running the bridge for the meeting, he would no longer be able to eavesdrop on it and an attempt to do so would only yield, well we already said that: an endless stream of rubbish.

The only way for Emil to actually participate in the meeting would be if he was made privy to the e2ee key. In this case he was and once he enters it, everything goes back to normal"

(Sadly, Chrome only for now - so if Google and state actors are your adversary, "you're still gonna get mossad'd upon"...)


Lol, Emil should join my scrums. It's an endless stream of rubbish, encrypted or not.


Neither of them does end-to-end encryption of group video calls.


and both work with law enforcement as they are required to.


if paid == true and NotLaw == true then encryption = true else encryption = false


That’s the same as saying they want to aid foreign government corporate espionage.


AES GCM will still exist for free users.

> ... this is in reference to end-to-end encryption, but simply ran out of space in the tweet. ...

https://twitter.com/nicoagrant/status/1268020841054269440


Q: Won’t criminals use the paid tier?

A: No, because crime doesn’t pay

I’ll see myself out :-)


Bet they work with law enforcement on “encrypted” calls as well.


Free calls: unencrypted, content tappable, participant list subpoenaed

Subscriber calls: encrypted, subscriber list subpoenaed

No real thoughts on it; I never really expected Zoom to be private, only convenient.


pacman -R zoom


Who's placing puts on ZM now?


They really timed that well


How about Google Meet?


Bye bye Zoom.


The death of Zoom means the dominance of Teams. Hope we're happy with the trade.


fbi, chinese govt, top bidder... ever optional always optional


My French language classes, typically held in person after work, moved to online and over Zoom. Initially, we tried Google Meet and found it laggy and that we would often drop out. Then we moved to Zoom and it has been a much better experience. The interface is more intuitive, has a few more features and somehow the quality has improved and the connection is much smoother. I care deeply about my privacy, but I'm lucky that we don't discuss anything sensitive in our classes such that the security issues and the lack of encryption would become a big deal.

EDIT: We tried Meet, not Hangouts.


What about Google Meet? Did you try it? I heard that it's performing okay.


We're a Gsuite shop, so we've been using Meet since before WFH.

It's fine, not great, but the connection seems stable and we have not experienced issues with conferences. Zoom is all that plus much more intuitive and easier to use. The entire Zoom experience is great from start to finish; built-in background replacement is a really big draw, along with the full tile layout (Meet got tiles two weeks ago).

I would also say that Discord video has been great too. Its only downside is that you can only be in a session on one device. That is an extremely annoying limitation, as I prefer to be mobile on my phone headset and present or stream on the computer.


> It's fine, not great

Exactly what we've found with our GSuite and Google Meet. Works fine with 8-12 people on a call. Useable by technically adept people, but we've had to talk clients through the interface sometimes. (From today- client: "How do I share my screen" me: "click the [share screen] text in the bottom right" client: "Oh, yeah. Of course.")

(If you want background replacement enough, OBS and VirtualCam lets you do it... I did it for a gag the other week to put myself inside a Russian nuclear powerplant control room for standup. It's not something I'd recommend telling anybody who's then gonna ask you to help them set it up though...)


I don't understand this, though. I use both Google Meet and Zoom. On both platforms there are people who don't know how to use basic features labeled by buttons with descriptive text, and I don't even blame them, because in a meeting when everyone's listening to you it's easy to have "brain farts" like this. On Zoom there's even the additional issue of "joining computer audio". I think this has more to do with people's personalities (perhaps triggering a form of stage fright) than the software.


Thanks for giving this overview! Are you using the web version or the Zoom app?

(I suspect many people on HN are using the web version because Zoom pushes their app aggressively and in an abusive way, which immediately makes the more paranoid among us decide that they don't want it. And I've heard that the installed version is great while the web version is not.)


Last time I tried, the web version of Zoom required you to create an account (even when using Chromium and the trick of canceling the download twice to make it show the web version link). The app, on the other hand, does not require creating an account.


I use the iOS and Windows Clients.


That's true, and the only annoyance I have with Zoom is that it's bad for screen sharing. Too much jitter, and it's terrible for sharing code. Microsoft Teams is much better, most likely because it's P2P, I guess.


Shameless plug: https://goteam.video

Trying to provide an alternative to zoom/google.

I wrote the UI and my friend did the backend. One key point for me personally is doing privacy right. No identifiable data in logs, etc. Tricky because there will always be an element of "trust us".

Next weekend I hope to implement the experimental e2ee feature in WebRTC. It's Chrome-only so far, but maybe it'll take off in all browsers.


This is pretty neat. But please add a couple of screenshots of the app to the landing page.


Thanks for the idea! I'll forward it to my designer :)


Zoom is closed source, you could never trust their encryption to begin with.


Are they somehow obfuscating their binaries, or are they simply LLVM output? Is it your belief that straightforward compiler output is somehow infeasible to verify?


"Reverse engineering a specific version [every version?] of a private binary is a reasonable burden of proof for users to undertake in order to feel confident in the encryption used by their comms"

What? I usually agree with your takes but this seems out there (or did I misread/mischaracterize it)


You would still have to audit the code of every released version, right?

Reverse engineering is obviously more difficult but someone with the right skills and tools would find it a straightforward task.

Also - unless you are somehow also inspecting their CI/CD toolchain, there's no guarantee that the source being inspected matches the binaries being released - so you would need to reverse the binaries anyway.


It’s already too hard to find security bugs in clean source code. Analyzing a binary can confirm the existence of encryption and the overall crypto scheme, but not the absence of backdoors.


Help me understand how the exact opposite thing isn't true? The binary is the true record of what the platform is actually going to execute, unlike the source code.


What he means is that it's not practically feasible. Experts and a few talented crackers have the expertise to go through a binary with tools like IDA Pro and analyse it, but the process takes a lot of man-hours and is much harder than auditing source code. So yes, if there is a problem with Zoom the NSA and other state-run agencies will find and exploit it, but ordinary security professionals might not have the combination of manpower and expertise needed.

Since it's used so much now, I still hope that somebody reverse-engineers some of the clients. In contrast to auditing open source code, this might also pose legal problems, though.


If the build is reproducible, you can analyze the source, compile it and verify they're distributing the correct binary. If the build isn't reproducible for some reason, you can still do this on some subset of the code.

Now, of course, the binary is what gets executed, so it is the ground truth.
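
The verification step itself is trivial; here's a Python sketch, assuming you've already built the client yourself from the audited source (the file paths are hypothetical):

    # Reproducible-build check: hash the binary you compiled from the audited
    # source and the binary the vendor ships; if the build is truly reproducible,
    # the digests match and the source audit carries over to the shipped binary.
    import hashlib

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    local = sha256_of("build/zoom-client")        # built from the audited source
    shipped = sha256_of("downloads/zoom-client")  # what the vendor distributes

    print("match" if local == shipped else "MISMATCH - shipped binary differs")

The hard part, of course, is getting the build reproducible in the first place; the comparison is the easy bit.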


Because the analysis needs to be done by a human mind, and it's 10x more difficult to analyze a binary, even with some tools. And it is already hard to analyze even clean source code, because there can be hidden backdoors; they are not always called "function SendDataToFBIForTracking(data)".


I should have qualified my statement better. There aren’t many people in the world who can accurately audit a complex application. Evidence of the above fact is that there are approximately zero complex applications without a history of security issues, and there have even been successful attempts to maliciously add exploitable bugs in open source projects.

Among these people only a small minority is able to perform a similar audit on compiled code, and only at a much lower pace.

So I agree that it is possible to check a binary, but it is not feasible in a reasonable way.

PS. You are a well-known (maybe even famous in the community) software security expert. Your perception of the availability and competence of good experts may be skewed by the fact that you probably know most of the good ones.


>Your perception of the availability and competence of good experts may be skewed by the fact that you probably know most of the good ones

This is a bit disingenuous.

The fact that he knows them means that he can comment on the feasibility of that analysis.


He can maybe do it, and a few dozen other people. There simply aren't enough good experts to do it on any significant portion of popular software products.


So we are not worried about popular software products; we are concerned about critical pieces of software.

So seriously--how many are required?


Which opposites are you talking about?

I can't find any in the parent's comment.


>but not the absence of backdoors.

This is patently false. Binary code can be completely analyzed and understood with respect to back doors.


Their terms of service disallow reverse-engineering or even disassembling the binaries: https://zoom.us/terms

> You agree that You will not use, and will not permit any End User to use, the Services to: (i) modify, disassemble, decompile, prepare derivative works of, reverse engineer or otherwise attempt to gain access to the source code of the Services


Their E2E proposal design is open:

https://github.com/zoom/zoom-e2e-whitepaper


That's nice and all, but unless you can install their software by compiling the source yourself you have no chance of verifying that the implementation is actually correct. Even with the source available that's an extremely difficult task.


Difficult, but not impossible, and still practical. It is done quite frequently.


You say no chance, I say someone is probably already starting to work on it.

Don't forget we can sniff and inject packets at the network level. Just because we don't have sources doesn't mean we can't test whether it works or not.


Well all the more reason to quit Zoom.


Or pay for it. There is always a cost for free.


> There is always a cost for free.

There is? I'm not sure what my cost for clang, python, llvm, firefox, is for being free. Even wikipedia, mdn, openstreetmaps, ...


I donate to wikipedia. They need to pay for electricity, servers, etc as well.


Someone else is paying for that, because very little happens for free.


Apples and oranges. The Python Software Foundation can't be held liable if terrorists use their software.


Firefox is funded by Google, mostly. So it's free because you're the product.


Do you trust the sort of CEO who'd make those statements to "protect you" for the $20/month you're paying?

I suggest looking elsewhere instead of supporting Zoom.


If it's only a question of money, what happens if a three-letter agency or someone else pays more than you to access your unencrypted data?


Apple and Lavabit both say "Nope. Sorry"

Zoom clearly says "Sure! (how high would you like me to jump, officer?)"


You really shouldn’t hold Apple to such high standards. Just look at what they’re doing with iCloud in China.


While I understand the tradeoff they took, it also means there's no end-to-end encryption if any of the parties is not paying.

Imagine being in a call with 4 people: 3 of you are paying customers, and you'll need to ask the fourth to also pay Zoom to get encryption. It's a bad deal for the paying users as well.


Given their eagerness to work with the FBI why would anyone not assume they’d give the FBI encryption keys for paid sessions of interest to them?


Fair options:

All meetings recorded (for law agencies and similar future use) or paid version

*We take your privacy seriously.


alternatively, the FBI is sponsoring "free" for whoever wants it.


The more and more I see of Zoom, the less and less I think the organization is in tune with Liberal ideals and culture. At least this is less devious than copping an absolute security stance, then turning around and giving the government the keys anyway.


To clarify, are you referring to the american jargon for the left wing or contemporary liberalism?

Edit: downvoted on clarification is certainly a new experience.


I mean the historical concept. In a way, I think Zoom's organization is somewhat more in touch with the left wing and youth culture. I have a real distaste for how Zoom (the software) does things for you, as if it knows better.


I can't speak for others, but Zoom is super oriented toward corporate meetings, and they certainly aren't "in touch" with my culture (esp. compared to FaceTime or Facebook video chat). Its current popularity is just an artifact of being the most popular corporate group video software at the time the pandemic hit the US.


This comes just days after Trump tweeted that "ANTIFA" would be considered a terrorist organisation.

If this kind of attitude picks up, of labelling domestic protest groups as terrorists, then together with things like this there won't be much separating the USA from an oppressive state.


Can't Zoom encrypt everything and silently drop encryption with a court subpoena? That sounds like the right thing to do.


How is that the right thing to do? It would be the worst PR disaster the company has ever seen, which is a high bar.


Co-operating with police isn't a 'PR disaster' when directed by a court. Checks and balances establish the legal framework under which Zoom aids the police - a court order. It's not hard.


Silently putting in a back door subverts the trust of users and is worse than having no encryption at all. If they aren't upfront with having backdoors in their encryption, how can we trust that they haven't been using encryption to just establish a false sense of security and sell the keys the whole time? It's a huge pile of bad faith.


This is complete and utter BS.

EVERY communications provider has to comply with lawful intercept [1] regulation in all the regions where they operate. If they do not, they find themselves hauled before the regulator and fined or worse until they comply or go out of business.

'Encryption', while it would make it harder to snoop on your calls from third parties, will not 'protect' you from lawful intercept.

Furthermore specific to the US Zoom also has to comply with the Communications Assistance for Law Enforcement Act [2]. This does in no way mean Zoom can not encrypt its traffic. It just means it has to provide law enforcement the ability to covertly wiretap every and any communication. If Zoom provides end to end encryption to its paying customers, it still has to provide access to law enforcement to the content of those communications.

[1] https://en.wikipedia.org/wiki/Lawful_interception

[2] https://en.wikipedia.org/wiki/Communications_Assistance_for_...


> 'Encryption', while it would make it harder to snoop on your calls from third parties, will not 'protect' you from lawful intercept.

Yes it will, it will make the intercept so expensive that it will not make sense anymore for them to do it.

In the first case, all they need to do to intercept is call Zoom headquarters, or even just go to some pre-setup website, enter the identity of the user, and voilà, you have the data. The cost is one man-hour of an agent.

In the second case, Zoom doesn't have the technical ability to break the encryption, and to "lawfully intercept" they need to either break into your phone or physically break into your home to install some devices, or construct elaborate servers to trick your phone into thinking it's talking to the real Zoom, or use supercomputers to break some of the encryption, or use some hidden stashed 0-day and thus risk exposing it, making it unusable later for a real threat. The cost of this can be astronomical for breaking a single user. (And all of that is even more true for open-source solutions.)

Also, a lot of people disdain this not because they are terrorists. They disdain this type of surveillance because it has been shown many times that governments do not just track terrorists; they always end up abusing their power and tracking everyone, and there are agents who sometimes just make fun of people and read their emails, etc. Didn't you read any of the Snowden material that was published? Encryption prevents exactly this type of ABUSE of power by the government.


I'm not sure what your point is? Do you believe that all encryption is broken? If not, to the FBI, the tapped data should be indistinguishable from random bytes, from the definition of ciphertext. And that serves its purpose perfectly well. Whether or not the FBI has access to the ciphertext is immaterial.


>If Zoom provides end to end encryption to its paying customers, it still has to provide access to law enforcement to the content of those communications.

How do you think this works? If Alice and Bob generate private keys and share only their public keys with each other over the wire, how does the provider obtain the content of those communications to share? It never had it.

There is no non broken way to provide a backdoor for government while actually retaining meaningful security in the long run. See the extensive discussion on "key escrow"

Let's discuss a very old encrypted communication mechanism: PGP-encrypted email. If I, Bob, send an encrypted email to Alice, how does Gmail fulfill its obligations to law enforcement as stated previously? PGP came out in 1991 and CALEA is from 1994. In the 26 years since, I don't seem to recall anyone shutting down Gmail.
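
To make it concrete, here's a minimal Python sketch of why the provider has nothing to hand over: each side keeps a private key, only public keys cross the wire, and the server only ever relays ciphertext (X25519 + HKDF + AES-GCM via the `cryptography` package, purely illustrative):

    # End-to-end sketch: only public keys cross the wire, so a relay (or a
    # subpoenaed provider) never holds anything that can decrypt the traffic.
    import os
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    alice_priv, bob_priv = X25519PrivateKey.generate(), X25519PrivateKey.generate()

    def session_key(my_priv, their_pub):
        # Derive a shared 32-byte key from my private key + the peer's public key.
        shared = my_priv.exchange(their_pub)
        return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                    info=b"e2e demo").derive(shared)

    k_alice = session_key(alice_priv, bob_priv.public_key())
    k_bob = session_key(bob_priv, alice_priv.public_key())
    assert k_alice == k_bob   # both ends agree; the relay never sees this key

    nonce = os.urandom(12)
    wire = AESGCM(k_alice).encrypt(nonce, b"hi Bob", None)  # all the server relays
    assert AESGCM(k_bob).decrypt(nonce, wire, None) == b"hi Bob"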


"There is no non broken way to provide a backdoor for government while actually retaining meaningful security in the long run"

I actually agree with you on this. I was personally involved in the past to advocate for stronger encryption ( https://archive.nytimes.com/www.nytimes.com/library/cyber/we... [1997]).

However, law enforcement agencies around the world see "broken" ways to achieve this as a 'lesser of two evils' trade-off. Am I too paranoid (I hope I am) to envisage a scenario where Sam, our friendly US law enforcement officer, asks Sheila, his friendly AU counterpart, to request a copy of the conversation obtained through technical capability and assistance notices and share it back under Five Eyes agreements? Add the pending bilateral CLOUD Act for an extra spicy double exchange sandwich?


The reality is that this is being made out to be way more insidious than it is. Platforms should work with authorities so that bad actors like known terrorists can’t use a free platform to organize.

To the crowd who says “well you can do bad stuff if you pay” - If terrorists are paying for Zoom, they’re leaving a paper trail for the FBI. Free platforms are ripe for abuse because they’re free.


Can you please move into a glass house. I just need to make sure you’re not doing anything bad, and the FBI can’t see through your walls.

Also I’ll need a copy of your bank account statement. I just need to take a look. Make sure it’s ok. Unless someone else has recently.

The narrative that we should sack our privacy to help “law enforcement” is fucking braindead.


If you have a warrant, the police can both break into your house and see your records, no glass required.


People doing high-profile illegal activities will always be able to use simple open-source software, that cannot be stopped.

Terrorism is not a valid reason - well, perhaps unless you're going to take the jump to including a Chinese national that says 'Tiananmen Square' to their wife in a random call, as some 'Law Enforcement' will.


Unfortunately bad actors are ill defined and even that changes from time to time.


What happens when the bad actors are the police?


Ever heard of stolen credit cards


What if the terrorists use free open source software running on their own hardware?



