The relevant bits I got from it were that there are a bunch of bad actors who create new Zoom identities and host meetings a few times before moving on, and Zoom needs a means by which they “can, if they have a strong belief that the meeting is abusive, enter the meeting visibly and report it if necessary.” Their E2EE design will make it impossible for Zoom employees to enter E2EE meetings without permission, so giving E2EE to the free tier will enable people to use Zoom meetings to facilitate abuse.
I think Zoom is wrong.
End-to-end encryption should be available to everybody no matter who they are. This means making it available to bad actors too. Expanding the scope of human communication should not be used to justify state surveillance. Privacy is a fundamental human right and one we should not be forced into giving up because it annoys the government.
The reason "think of the kids" works so well to justify blocking E2E is that child abuse happens literally all the time.
When someone solves this problem, and I don't think any of us really believe it to be solvable, we can move on. I don't want the government in my private conversations, but I don't want my kids in someone else's either.
To extend this: we recognise a duty of care to our users and their privacy when we build these systems, but if those users plan and carry out an act of terrorism, did we not also have a duty of care to their victims, not to aid their killers in planning their murder?
We can't shirk this responsibility forever; the public will not take our side down the road, because we are ignoring the counter-argument even as we stick our fingers in our ears.
Law enforcement entities that prosecute these kinds of crimes don't do it by building haystacks of data and then combing through them looking for needles, because that's a waste of their resources relative to the results obtained. They do it by attacking the endpoints where the abuse, terrorism, or other professional crime actually has to happen. They find a terrorist, a child abuser, or a drug trafficker, or they find evidence of their handiwork, and then work from there. They see where they get their money, their bombs, their drugs, etc., and follow the links as far as they can. When law enforcement is actually trying to target crime, they don't go fishing, digitally or in meatspace, because that's not an efficient way to obtain results if the goal is to go after some genre of professional/organized crime.
Running a mass operation with no specific target (like speed traps in meat-space or dragnet operations in the digital world) is great for padding stats because you can say "look, we got X pounds of meth off the street" or whatever but it doesn't actually do much to target the professional crime because professional criminals take steps to avoid being caught in lowest common denominator type policing.
Neutering encryption (so that cops can continue to run surveillance dragnets) doesn't do anything to help the cops catch real criminals, that's just a talking point made up by the people who want the government to have the ability to put any arbitrary person under a microscope.
In case anyone doubts the above fact: government agents abuse their surveillance powers to spy on their loved ones.
This article was posted here recently, a reminder of how easy it is to become the target of warrantless government surveillance:
There's no reason to believe the government is any better than these criminals. Cryptography must be strong enough to defeat even intelligence agencies, and ubiquitous enough that legal limitations or bans become hard, if not impossible, to enforce.
93% of the time, the perpetrator knows the child. If you’re seriously worried your children might be victims of abuse, then your first line of defense should be against your own family and friends.
The statistics for children are in the paragraph after the first graph.
And this is exactly the case where we want to be able to have digital evidence.
What could possibly go wrong?
Easy: don't let your kid join zoom meetings without your permission/supervision until he/she understands that there are bad people out there.
As an illustration, if we get reasonable evidence suggesting that someone is growing marijuana on their ranch, we can get a warrant and go inside. There's not much the owner can do to stop it. However, a perfectly encrypted iPhone cannot be broken into, even if the entire world agrees that there's evidence of a crime in it.
From what I can see, no one argues against warranted search of personal property in the physical world, except maybe some sovereign citizen crazies. Given this, why can't we strive for a similar system in the virtual world as well? I too agree warrantless or unfettered government surveillance of technology is bad, but that's a policy failing, not a technology one. We should focus on how we can hold governments responsible instead of making fully protected crime caves for anyone who cannot whip up a conscience.
I agree privacy should be a right, but not at the expense of many people enduring a life of hell in these cordoned spaces for that cause.
because any crime in the virtual world can be uncovered by good police work. nobody has perfect operational security, including the government. so the solution to law enforcement is hard work by the law enforcers.
consider: prior to electronic communication, all private discussions were perfectly encrypted, because if you weren't there, you didn't hear what was happening. And society continued to function.
You simply can't trust the government to respect boundaries that they created but have the ability to breach, especially when it can be done completely surreptitiously.
We need to learn the lessons of Snowden, and fight tooth and nail for nothing less than complete, unfettered privacy in communications between human beings. Anything that falls short of that will eventually become complete, unfettered surveillance, because there is no stable equilibrium point in the middle.
The controls on surveillance are not technical, they are political. The technology was the same, yet the Stasi listened to every call they could; other governments did not.
Fix the politics, because it /will/ win in the end. Learn the lessons of Germany and China.
This is fundamentally different from modern technology where they can have a computer listen to every single call, pick out whatever keywords they're looking for, and flag it for later review. Technology now makes it possible for them to truly listen to everyone at once. This is why end-to-end encryption is necessary for everyone.
Politics is not going to solve this problem. A lot of what America's police and intelligence agencies do is already illegal. They don't care. They're going to do anything they can with the technology.
It's a weirdly blinkered concept to say "America's agencies already do illegal things and their politics is broken but what will save us is American corporations deploying technology".
(The "we need universal E2E to protect our freedoms even if there are downsides" is not, in logical form, a million miles different from 'we need guns everywhere to protect us from the government and damn the negative consequences of having guns everywhere', frankly)
I actually believe that technologies such as strong encryption are creating important checks and balances that make our democracy stronger. They are not subverting it like you are implying.
I agree, and I agree with both of those. Giving up freedom/privacy for safety is almost always a losing bet.
With guns, the state will always outgun you. So the gun-riddled society sees children in its schools murdered staggeringly often, while its (supposedly free) citizens are tear-gassed with impunity by a state for nothing more than a photo opportunity.
That was not a winning bet for that society.
It's similar with E2E. It can't protect you from the government, because the protection is illusory; it protects you only so long as the state wants it to. When it no longer wants it to, it makes it illegal. Administrations are already heading in this direction.
Meanwhile E2E enables a number of proven harms, from lynchings to child abuse. Is that a worthwhile trade-off just for the protections it gives from corporate or illegal privacy invasion? Would it lose all of those benefits if legitimate law enforcement were allowed access? There is at least a debate to be had, there.
The people with the guns aren't attending the current protests, and you can see how that has worked out.
In either case, to actually lynch someone, you still need to go there physically and actually do the deed. WhatsApp chats don't kill; dudes with weapons do.
Encrypted comms give a huge asymmetric scale benefit to those committing these crimes. What they haven't scaled is the ability of law enforcement to respond. And that's a choice, one which is open to criticism.
I think you're taking this a bit too lightly. As a side topic, I am surprised to what extent state surveillance was a thing here in the telephone era.
The secret police had about 50k full-time agents, 600k double agents, and about 400k-500k informants. Out of a population of 18 million, that's roughly 1 in 18. Consider a typical family: a brother or sister, two parents, four aunts or uncles, and four grandparents. The odds were close to even that at least one of them was an informant.
For your community? There was almost certainly an informant or double agent among them. Just knowing the threat is there has a massive effect on how people communicate and bond with each other, effects that can still be felt to this day.
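A quick back-of-the-envelope check of those odds (the roughly 1-in-18 rate and the eleven relatives listed above are the figures assumed here):

```python
# Rough sanity check of the informant odds described above.
# Assumed figures: ~1 million agents + informants in a population of
# 18 million (roughly 1 in 18), and 11 close relatives.
rate = 1 / 18
relatives = 11  # sibling + 2 parents + 4 aunts/uncles + 4 grandparents

p_none = (1 - rate) ** relatives          # chance no relative is an informant
p_at_least_one = 1 - p_none               # chance at least one is

print(f"P(at least one informant among {relatives} relatives) = {p_at_least_one:.0%}")
# -> roughly 47%, i.e. close to even odds
```

The independence assumption here is crude (recruitment surely clustered by region and workplace), but it shows the claim is about the right order of magnitude.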
What stopped the Stasi, until the politics changed, was technology. And I think the encryption used helped to bring about the political change. If the Stasi had had what Zoom is offering, perhaps the wall wouldn't have fallen for 10, 20, 30 more years.
The government still can send people to watch people use their phones or computers. On the other hand, it seems hard to dispute that all our most efficient examples of totalitarian states are post-telephone.
No, but in a functioning democracy we can vote them out. Democratic governments by definition have a large concentration of power; otherwise they can't fulfill their functions.
But this is bound by laws, time, and the ballot box. Surreptitious (warrantless) government surveillance should absolutely be illegal. Searches with a legal warrant (through an accountable, non-abusive, warrant-granting judicial system) are absolutely necessary for gathering the evidence needed to prosecute crimes. Without trustworthy investigation and prosecution of crimes, the social contract will fail; this has already started happening in many areas, as we are seeing right now.
However, this goes both ways - the populace should get far more transparency into the functioning of the criminal legal system - especially in to the training and conduct of physical law enforcement (police officers).
In the real world one will notice law enforcement breaking into their ranch; in the virtual world, they won't (and comparing growing marijuana to voice/video over Zoom is wrong).
In the real world law enforcement wouldn't have access to the complete history of a conversation; in the virtual world they would, even to anything in the past that is irrelevant to the topic.
Events in the real world are often ephemeral; we don't expect our friendly conversations to last forever. In the virtual world, however, they can be recorded and stored forever.
Basically you should compare spying/wire-tapping in the real world vs. spying/wire-tapping in the virtual world.
But I think that broader society are not going to understand the technical issues, and are going to be swayed by overly-emotional appeals to "think of the children" and similar.
Therefore I think that we, as engineers - the people who will be asked to implement the results of any such debate, need to have this debate ourselves so we can take responsibility for our actions.
I can see both sides of this debate.
There is a legitimate need in society to gather evidence to discover the guilt or innocence of accused criminals. We cannot have a system of justice that assumes innocence until proven guilty but provides no method for gathering incriminating evidence.
There is also a legitimate basic human right to privacy. We must not be subject to constant surveillance by the state.
We have to find a middle path between the two extremes.
There is no technical solution that allows something similar, because such a solution:
1. Must be usable exclusively by lawful authorities (a criminal cannot get a search warrant)
2. Must have some reasonable per-instance cost to prevent overreach
Requirement 1 is very hard in a tech space, if it's possible at all; backdoors can always be used by other parties.
But even if 1 were possible, digital surveillance is by nature very cheap and relatively easy to do mostly in secret, leading to things like the NSA literally inspecting all internet traffic.
Yes, in theory you could rein this in if there were overwhelming political will, but there isn't, and the general public doesn't care.
So in the end it's better to encrypt everything.
Imagine if there were a safe that couldn't be opened by anyone but the owner without destroying its contents. Would you be opposed to that? What if the mechanism of this safe were as easy to implement as encryption protocols are? Yes, one day some expert safe-cracker might break it. And in the even farther future, the advent of "quantum safecracking" might make the safe as secure as a luggage lock. In the meantime, the police would have to resort to their traditional methods.
Unfortunately all kinds of damning evidence have been lost to time. Fire is older than paper.
In the physical world, two people are talking.
If the police suspect they are committing a crime, they can request a warrant to install a listening device, and only then can they listen.
In Zoom-like scenarios, any third party (such as technology companies using the law as an excuse) can listen without a warrant (and they will say something like "no one is listening", as training AI is not considered "someone").
As such, communications should be encrypted with asymmetric cryptography where only the warrant giver can decrypt them (not the warrant giver handing the private key to law enforcement, but decrypting the symmetric per-session key and giving that to law enforcement). And this goes for phones too.
And quite frankly, I don't care if police with a warrant are listening to my conversations. I don't want any company listening to them, as they are not doing it for law enforcement but to profit from my data (quite possibly against my interests), and that is something completely different.
This is the scenario where technology gives people MORE privacy: it prevents illegal police wiretaps (those without the warrant giver's consent), prevents technology-provider wiretaps, and on the other side still allows legal wiretapping authorized by the warrant giver.
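The escrow scheme described above can be sketched roughly as follows. This is a toy illustration of the idea only: it uses textbook RSA with tiny primes and a hash-based XOR stream in place of real cryptography, and the "judge" keypair is a made-up example, not any real system's design.

```python
# Toy sketch of per-session key escrow: each session gets a fresh symmetric
# key; a copy of that key is sealed to the warrant giver's (judge's) public
# key, so only the judge can release it. NOT secure crypto -- illustration only.
import hashlib
import secrets

# Judge's keypair (classic textbook RSA example: n = 61 * 53 = 3233).
JUDGE_PUB = (3233, 17)     # (n, e)
JUDGE_PRIV = (3233, 2753)  # (n, d)

def keystream(key: int, length: int) -> bytes:
    """Derive a keystream from the session key (toy stream cipher)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(f"{key}:{counter}".encode()).digest()
        counter += 1
    return out[:length]

def encrypt_session(message: bytes):
    session_key = secrets.randbelow(3233)  # fresh per-session key
    ct = bytes(m ^ k for m, k in zip(message, keystream(session_key, len(message))))
    n, e = JUDGE_PUB
    escrow = pow(session_key, e, n)        # key copy sealed to the judge only
    return ct, escrow

def judge_unseal(escrow: int) -> int:
    """Only the holder of the judge's private key can recover the session key."""
    n, d = JUDGE_PRIV
    return pow(escrow, d, n)

ct, escrow = encrypt_session(b"meet at noon")
key = judge_unseal(escrow)
plaintext = bytes(c ^ k for c, k in zip(ct, keystream(key, len(ct))))
assert plaintext == b"meet at noon"
```

Note that the service provider in this design sees only `ct` and `escrow`; neither is readable without either endpoint keys or the judge's private key.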
But interestingly, no one has any interest in doing it. Guess why?
So to answer your question: telephone companies are a failure in the USA (wild west and lawlessness); in my country they need to obey laws. Corporations don't obey any laws outside their home country (which they select based on inefficient laws) and need to be harshly regulated.
My personal favorite would be legislation mandating E2E encryption that must not be backdoored by anyone, except that law enforcement can obtain a warrant; the private keys stay under a judge's supervision without the possibility of giving them away (in a PKCS#12 manner) and can only be used to decrypt communications when the judge presses the big red button. Quite frankly, you want to be able to wiretap organized crime.
Care to see what happens then? Check China. They are implementing this very thing. For the children, I suppose.
Judge-only access prevents mass data gathering by law enforcement agencies and three-letter agencies (at least in my country), and enables oversight by further institutions. Secret and hidden backdoors (Crypto AG, Dual_EC_DRBG, ...) or corporations bribed with government deals are the worse solution here, as they don't prevent access to the data by either corporations or secret agencies, while they may or may not hold back law enforcement. And they surely enable mass data gathering from everyone, without any supervision or control. The real issue here is that no one is mentioning any court orders. Everyone just wants access to everything. Now THAT is an issue.
I was talking about legal entities operating in the same manner as telcos did. Also, in the real world you can invent your own one-time-pad-encoded speech and no one will understand you even if they wiretap the communication. The mafia has historically used slang to cover up its communication. You can do the same with open source.
Anyway, do you communicate over your "secret encrypted communication channel" covered with a rag, to prevent recording of your lips, recording via laser measurement of window-glass vibrations, your face muscles, IR recording, and probably the next 100 methods I am not even aware of? These are issues you also face once a warrant is issued. I guess not. So it looks like the police (and warrants) are not the issue for you.
Then there are the three-letter agencies: besides "warrant" methods, they will use rubber-hose cryptanalysis to break you and any of your E2E communication, and you might actually wish they could read your communication without contacting you in person. So E2E doesn't change anything for you here either.
I refuse to treat open source solutions that you install on your own server in the same manner as corporate entities that use their solutions to wiretap everyone's communications so they can earn more money from the information they gather.
And I also think that "encrypted Apple" phones (and everyone else doing any business with the government) and the whole FBI story are just a charade to bait people who are hiding something into an ecosystem where agencies that CAN issue gag orders can simply access the information. The whole story surely looks like a counter-espionage operation from the 1970s. Time will tell if I am right.
Having the law able to access encrypted communications at any time runs into the examples I brought up, examples that came up with zero effort, no matter what you build into your proposed solution. If the goal is to prevent crime, and there are solutions out there that allow E2E communication, the goal does not stand. You can't ban a corporation from E2E but allow any random dude to spin up a secure communication platform without any keys compromised; what are you even banning then?
It amazes me that "corrupt politicians" is shrugged off just like that, when corrupt officials of any kind are exactly what everyone needs defenses against, by ANY means. In China, they are in the process of legislating exactly what you propose: no private encryption key to be withheld from the law. And yes, you did not misunderstand, it's at the scale this implies: total control and the ability to observe all traffic and data at rest at any time. Even setting aside everything happening now, that leaves little unattended by the law there.
Now what, China is a "bad example"? An "exception"? I'd say this attitude from governments is the norm around most of the world, where people are at real risk from what they say over the net.
Out of all such countries, let's take China. Do you believe China should reverse its course and allow encrypted communication for its citizens? Based on your words, I'd say you would answer "no". It's doing exactly what you propose, after all; now the only tiny step needed to fully suit your proposal is for them to use their powers for "good"! Right? And they are indeed using them for good, according to their own legislation.
Because if you nonetheless said "yes, China should allow E2E in favour of its citizens' rights", you would in essence be saying that "freedom-loving Western countries" should give the law total access to any information (they will only use it when needed, of course!), while the same countries pressure "totalitarian regimes" to maintain their citizens' rights, including encryption. That's contradictory, if you think about it even for a bit.
There's a correlation between these things. Any power given is sure to be abused. If that is not prevented and pushed back against, it will not stop but worsen. Trying to find a formula that grants absolute power and restricts it at the same time is just fooling around; it's the core assumptions that matter. Unless you really think that some governments are somehow immune to becoming corrupt and totalitarian when meeting no resistance (their people must be saints indeed!), in which case, I am sorry to say, I can only chuckle.
With my proposal, law enforcement can access unencrypted data far less than they can now (under the rag), and when they do access it they are under the scrutiny of judges, while corporations are prevented from accessing it at all.
Maybe take time to think about what a country is, what a government is and whom it serves, what a corporation is and whom it serves; maybe ask yourself what law enforcement is and whom it serves; and if you dare go further, what if there were no law enforcement at all? Do you have the muscles for that?
Or chuckle mindlessly on. I think your whole statement demands the advantages of a system where someone else takes care of things for you, allowing you not to think about the disadvantages.
These examples are talking about a different thing, we should be careful to not mix them up since the arguments for and against can be different.
The discussion prior to your comment was about protecting data in transit (end-to-end encryption); both your examples are about data at rest (full disk encryption).
With encrypted data in transit, not only can it be broken into by intercepting at the endpoints (in the case of video or audio calls, even through the physical world by pointing a camera and a microphone at the user's device), but also the end result of an end-to-end encrypted connection is much closer to a physical world private conversation (can be "broken into" only by intercepting the endpoints, that is, pointing a camera and a microphone at the persons involved).
With encrypted data at rest, the best physical analogy is a diary written in code; even if the whole world agrees that it contains evidence of embezzling, it cannot be decoded without the help of its owner's mind.
With regards to security analysis, the only differences between the physical world and the digital world are proximity (hops) between agents, or evidence, in a conversation, and convenience of access. Software developers tend to think purely in terms of controls and exploits, which is a tiny subset of security. Even conversations in the physical world can be encrypted; for example, if two people are speaking Pashto, I would have no idea what is said. If it isn't recorded for later translation, it's encrypted forever.
Those few distinctions are important from a legal perspective where agents of digital concerns are more likely operating across political boundaries at any given moment.
> There's not too much the owner can do to stop it.
They can make fail-safes available that store the evidence in a physical safe with tamper-proof mechanics. Breaking such a safe would destroy its contents in the process, much like attempting to break into an iPhone with supposedly perfect encryption.
Since you are talking about surveillance another common misconception I have noticed many software developers make is equating the terms: security, privacy, anonymity which are all distinct. Privacy and anonymity are both aspects of confidentiality but privacy is concerned with hiding the contents of a message where anonymity is concerned with hiding the agents of the message. Those two do not overlap. Confidentiality is one of three aspects of security, though from a legal perspective privacy is available in many contexts without application of security controls.
Is that a terrible thing? It's not like they are hiding guns in their iphone. While there could be evidence in there, at some point there is physical evidence in the real world. Just making it easier to convict them is not a solid argument for weakening protections for everyone.
I can really flip my brain around and see how this desire for non encrypted communication to be the standard could come from a good place.
That said, I still come back to my default stance: crimes need to exist outside of the private communication to be crimes. At least under US law, it's very hard for pure communication alone to be a crime.
So go investigate whatever it is that is an actual crime and causing actual harm. Making communication not private has tremendous potential chilling effects on actual thought, because people think by talking!
1. Encryption is an indispensable part of pretty much everyone's life. I can't imagine there's many people in our society that go more than a few days without using it.
2. If encryption can be broken by the police, it can be broken by other actors. Full stop.
2.1. It has been shown to be impossible for our government to keep a secret like a master key.
let's wait until something like a "perfectly encrypted" phone actually exists before we go down this road. AFAIK, the feds have eventually been able to break into the phone in every high profile case where the issue has come up. it's not impossible, they just don't want to pay what it costs.
The problem with not end-to-end encrypting private communications is that during a lockdown people have nowhere they can go for a private conversation. If you invite someone over to your house for drinks or dinner, you feel you can talk freely because the government doesn't have cameras in your house; that would be an invasion of privacy. Where is the virtual equivalent of that once Zoom is no longer private?
Remember your freedom will be taken an inch at a time. Not all at once.
That said, abusive people do exist and are a legitimate problem, given the damage they can cause. Their abuse may be overt (threats, violence, noise, etc), or it can be subtle (for example, manipulation over long periods of time).
Some of that abuse may come from prior anger and frustration outside their control, and perhaps it's good to allow people to let that out -- as long as it doesn't end up harming other people in the process.
Would the situation be improved if the service provider could only step into the meeting when explicitly requested by participant(s)?
To follow your analogy, that could be seen as the equivalent of someone experiencing a medical emergency during dinner at your house and requiring outside assistance.
All these options would be gamed and misused, as they are during existing use cases in real life. Some people over-react, many people under-react, and society itself changes so it's important to build in flexibility for transparent and accountable change.
In video calls you don't even have to do that; you can just kick them from the call. You don't need Zoom to step in, you just kick them.
I really don't understand what you are getting at with the abusive people thing. What sort of situation are you imagining exactly?
To answer your question: phishing scams could be one example. I'm sure there are many others.
If there are others please list them because I'm struggling to understand the overarching thing you are getting at and examples would help with that.
> Where is the virtual equivalent of that once zoom is not longer private?
What I meant by my comment is that there are good alternatives that are E2E and free, like FaceTime (ok you need an iPhone or Mac), WhatsApp, Signal etc. So people can just use that if they don't want to pay for Zoom (again, I don't think they're making the right decision either).
You already disqualified FaceTime because realistically some of your friends and family have Android and Windows. WhatsApp connection quality is flaky, same with Skype. Does Signal support group calls? If it does, maybe it could replace Zoom. Maybe. But all these options existed before lockdown and people still settled on Zoom because it's more convenient.
Either some massive scandal has to happen to make the public more privacy conscious, or there needs to be E2E encryption by default, as a standard. Whether or not it comes with invisible state-surveillance cameras shouldn't factor into which dining table I buy.
> WhatsApp connection quality is flaky
Disagree. It might be anecdotal, but at least in Europe, most of my friends use WhatsApp for personal video calls and not Zoom.
Generally agree with you that we need E2E as the default (just as we need SSL as a default).
2) None of the major players offer E2E by default (Google Meet, Microsoft Teams, Cisco WebEx, BlueJeans). WebEx has an E2E option for enterprise users only, and it requires you to run the PKI and won't work with outsiders.
Any E2E shipping in Zoom will be groundbreaking.
Make a free account and test it.
The trouble is, by the time they do (and they will, China is halfway there), it will be too late to protest.
Police and other law enforcement are already using legal powers to infiltrate and monitor 'radical' political groups such as Black Lives Matter, just like they have in the past with civil liberties groups. In fact, as we discovered in the COINTELPRO leaks, they were going way beyond the legal limits, having been complicit in the assassination of Malcolm X, and having tried to blackmail MLK into committing suicide.
Of course, it could be that such things don't happen anymore. Or, seeing how the police in Minneapolis are actively targeting journalists, it is significantly more likely that we just don't know about it yet.
No large state has ever tolerated real dissent to any great extent. The state doesn't have to be as paranoid about dissent as China or the USSR (which almost require(d) enthusiastic support) for police powers to be abused against the legitimate interests of citizens.
Inherently, there has always been some form of surveillance in society. When we left our homes, people around us could see and hear what we were doing and thereby report suspicious behavior. Now we are adjusting to a new way of life with new forms of surveillance which are harder to detect. I completely get it, and I'm more for encryption than against it. I guess I am also challenging myself to see both sides and think about a middle ground.
While Western governments could be better behaved, I feel like comparisons with the Stasi are somewhat extreme and out of whack. I live in the UK and, generally speaking, I'm happy with the government here when it comes to surveillance. Maybe the US has a greater focus on security, but they are a long way from the Stasi.
If you want to discuss surveillance then yes of course it's a matter of degree. Putting cameras in a bank vs putting cameras in a pub vs putting cameras in your home. As you can tell in the real world it's clearer when it's a step too far. In the digital world we need to be more careful because it's unmapped territory.
You need to think hard about why it's an invasion of privacy to put cameras in your home. It may seem obvious but it's not. Once you understand the reasons why that is an invasion of privacy then you can start to draw analogies to the digital world and understand what is going too far and what is not. The problem is people don't have a deeper understanding of the reason we need privacy so they are easily sold security in the form of digital surveillance without understanding the eventual consequence.
We already live in a society where widespread aggressive authoritarian surveillance that doesn't justify itself is commonplace. Snowden proved this. Your emails are read. Your naked selfies looked at. Personal data is used frequently to crack down heavily on legitimate dissent. These are unquestionable and it's getting worse and more entrenched, not less and it hasn't caught a single act of terrorism like it was set up (ostensibly) to do. The question is, how do we personally react to the unchecked growth of stasi-friendly surveillance infrastructure?
I think arguing that Western governments could be better behaved is a fair point. The Stasi also could have been better behaved. Frequent appeals to moderation didn't make them behave, though, and they haven't and won't make Western governments behave either.
Appeals to moderation have a null effect because if the goalposts keep being moved, so does the moderate position. If you want your opinion to never matter at all, always pick the moderate, middle ground opinion.
Since money is key to many crimes, and finding out who controls the money is an important way to investigate crimes, that point of agreement in turn has to secure me from surveillance by bad guys when I talk to the bank, while permitting surveillance of those same bad guys when they talk to the same bank in the same way.
This might perhaps be possible but the word "surely" seems inappropriate.
When you actually see the horrors of abuse, enabled by the internet, and you realize that there are deliberate walls protecting these people (encryption, for instance, but others too), you may have a different position. I would willingly give that up just to see children (or whoever) saved.
You may not, that's a choice. I would just like to know whether you have seen what actually happens in these circles before making a decision.
Also, I live in a normal country where this concern (state surveillance) is less of an issue.
> When you actually see the horrors of ...
I heard about someone working as a nurse in an emergency room who, because of witnessing injuries from traffic accidents, decided never to be in a car again. I can understand that; I think that decision makes sense.
But not handing over people's communications to people like Trump and Putin etc. and their men makes sense too.
2) None of the major players offer E2E by default (Google Meet, Microsoft Teams, Cisco WebEx, BlueJeans). WebEx has an E2E option for enterprise users only, and it requires you to run the PKI and won't work with outsiders.
What are a few good examples of "abusive" meetings?
This is a hydra that shows up every time someone creates a video chat: there is a problem with sausage parties and sexual blackmail that needs workarounds.
How are these people invited to meetings and why aren't they kicked from the meeting? How would anyone handle this in a real world meeting?
These types are handled by identifying and outing them, or chasing them away. The internet has to go beyond the "your IP number is" thing and demonstrate that there is real knowledge of who they are, and that they are bothering people and it isn't going unnoticed.
If someone sends clickbait invites to an abusive meeting, then victims can trivially report it, possibly with screenshots.
If it's acceptable to view a meeting without permission under the pretense of abuse, then it's also possible to do so when someone's doing nothing wrong.
Even worse, if their system is compromised, a bad actor could monitor free users' meetings without any protections. And what is stopping those bad actors from getting a paid subscription, or maliciously gaining access to a paid account? Or these bad actors could use another system that doesn't compromise (Signal) or host their own.
Security comes in layers and logs. A system without these layers and accountability isn't secure. Zoom isn't secure, and is using law enforcement as a scapegoat and pretense to keep their security low.
>We also do not have a means to insert our employees or others into meetings without being reflected in the participant list. We will not build any cryptographic backdoors to allow for the secret monitoring of meetings.
They get to pick between headlines like the current one, and claims that they support child porn rings (he isn't saying it explicitly, but everything I saw looks like that is the problem they're trying to fight).
Zoom needs a business model and saying "if you want encryption you need to pay for it", to me sounds like a reasonable approach to making money.
Once you start dragging other reasons into it, you need to start defending them.
Well, he didn't use the word "rings," but he did say CSAM (Child Sexual Abuse Material).
There are other ways to track these criminals and we should be using those. We know they are smart enough to stop using Zoom once its no longer encrypted. Meanwhile normal people will be left holding the bag of surveillance.
I don’t think this argument really holds but I think it’s funny how quick we are to downplay our own “bad apples” and say that encryption is more important.
You don't win any political battles by being the preferred tool for child molesters and then telling the gov't to pound sand when they come asking for help finding them.
E2EE plus the client looks up images from the NCVIP database and refuses to send/receive messages would at least be something.
If the FBI comes knocking looking for access to a particular user's messages then have a system that kicks that user off the network until they agree to add the FBI's key into all their chats for a specified time. Make it a bright-line visible action to the user being monitored. You have PFS right, so they can't see old messages, and once the FBI's access is revoked you can prove that all your chats are private again.
If, as a layman, I had to guess another way to catch them, it would be to go to the source. Follow cases of missing children. Investigate reported child abuse. Once you have caught one of them, you are free to seize their computer and use it to honeypot all their contacts, with E2E encryption so the contacts believe it's the person you just caught.
Have the client check the FBI's CP database and refuse to send pictures that match. Sure, it's open source and abusers could recompile it, but they won't. In the same way that blocking the default curl user agent stops 99% of spam at my company: would-be attackers could change it, but they don't.
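A minimal sketch of the client-side check described here, under the assumption that the client ships with a database of known-bad hashes. Real deployments use perceptual hashes (e.g. PhotoDNA) so near-duplicates also match; plain SHA-256 and the example digests below are used purely for illustration:

```python
import hashlib

# Hypothetical blocklist of hex digests distributed with the client.
# (This is sha256(b"foo"), standing in for a real blocklist entry.)
BLOCKLIST = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def allowed_to_send(payload: bytes) -> bool:
    """Refuse to send any payload whose hash appears in the blocklist."""
    return hashlib.sha256(payload).hexdigest() not in BLOCKLIST

print(allowed_to_send(b"foo"))    # False: blocked
print(allowed_to_send(b"hello"))  # True: allowed
```

Note that an exact-hash check like this is trivially defeated by changing a single byte, which is exactly why real systems rely on perceptual hashing instead.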
When he puts it in the context of their typical abuse pattern - anonymous emails, VPNs, and just a few meetings - this decision makes much more sense.
I hope they expand on their thought process in a blog post at some point, I'd love to read more.
For harassment / offensive content, if anything E2EE will make it easier to prove where offensive content came from. You've got a cryptographic chain leading to the source, after all. All you need is a button to record and report things (which admittedly seems to be exactly what they intend to build). The E2EE aspect doesn't really change things, except that Zoom can't record things themselves, which they claim they didn't do in the first place (although they might have relied on the small server-side buffer they had, but that's an iffy solution at best).
Also not sure what to think of Zoom's Trust and Safety team breaking into a private conversation when they think some kind of abuse is going on. Yes E2EE would make it impossible, but why on earth would Zoom want that kind of role?
Do you want to:
a) accept the lack of E2EE, or
b) hate children?
Pick one. Hurry up, your precious internet points™ are at stake here.
“So what you’re saying is that Zoom is fine protecting pedophiles from law enforcement, as long as you profit?”
This is why you shouldn’t play such dumb games as a company, there is only downsides from a PR perspective.
1. you end up getting more responses
2. more responses === a higher probability of seeing gems like this wikipedia wormhole I'm about to get sucked into:) I had no idea about the horsemen/chans or that May identified the reason behind the alpha particle problem. Cheers.
At the risk of being pedantic: it's not new at all. That has been part of almost every political campaign for the past century. Technically, "reductio ad Hitlerum" is the new "protect the children" argument.
Regarding history, I don't know about the US specifically, but European countries have had legal child protections since the end of the 19th or beginning of the 20th century (depending on the country).
See https://en.wikipedia.org/wiki/Declaration_of_the_Rights_of_t... for a pre-WW2, international effort.
But... what the fuck?
Also... I have a paid account. How can I tell if my connection is encrypted or not? Is it only if all other parties have paid accounts? Is there an indicator?
Under an "encrypt some calls" approach, if even paid users can't tell easily and reliably if they have an encrypted connection... basically nobody can count on it.
Working with law enforcement might be true, but it doesn't make sense that it has anything to do with free calls. Again, they have the encryption keys so they could decrypt any calls that they want to work with law enforcement on. This might even be a really poor attempt at upselling to paid accounts.
If you're concerned about security, I don't think zoom is the conference tool of choice -- maybe they've fixed everything I mentioned, but they still have among the worst track records.
Security aside, the feature set and user experience is attractive. Except for one thing, why does it take two clicks to end a call? That's awkward every time. If people are accidentally leaving calls, that's a different problem and two clicks is a lazy solution.
Are they saying that WHEN they implement true e2e encryption, it will only be for paid accounts?
Or are they saying the encryption they've already got, which they are inaccurately calling "e2e" when it is not, was formerly enabled for free accounts, but no longer will be?
Or something else?
(Who would have thunk that lying and calling something "e2e" that wasn't would end up confusing!)
I also still don't understand if you get the encryption (whichever one they are disabling for free accounts) if the 'host' is a paid account but some/all of guests are not...
Sometimes the distinction between physical and digital security is brought up in these discussions, the idea that physical security is imperfect (you can always break a lock) but that digital security may truly be impenetrable. This is a false dichotomy.
If people have a conversation in a pub or on a park bench, then law enforcement can surveil them individually or bug the venues in a targeted manner.
But the same methods can also be applied to digital communication. This is opsec 101 right - if one happens to be a high value target, one would totally expect their house/apartment to be surveilled - no amount of digital privacy can make up for a pinhole camera installed on the wall behind one's monitor, LE doesn't even need the keys, they see the content directly.
I think the argument that digital security is 'too perfect' falls apart if you take into account the reality that physical security is a component of that. "If you control the physical hardware" and all that.
TL;DR Digital security is just a subset of physical security. You can always just drill through the side of the safe.
It seems like law enforcement wants to be able to use digital communication to discover criminals, and
privacy experts want law enforcement to rely on HUMINT, a traditional warrant, and physical access.
I believe the second method is far more just, but I seldom see anyone acknowledge that it's almost certainly less effective.
The distinction is between targeted and untargeted surveillance.
Digital communication is so easy to monitor, particularly by a state-level actor, that if it's unencrypted, it's pretty much all being hoovered up by someone by definition.
That's not the case for physical security, even if everyone leaves their doors unlocked, their windows open, and their notes on the kitchen table; everyone is not automatically a suspect, so most people aren't being put under the microscope.
The government likely has the ability to know, instantly, within milliseconds, everything I've ever done on the Internet that's unencrypted.
By contrast, they will likely never see the contents of the love note on my kitchen table. Well, if that pinhole camera isn't there, anyway. ;)
All of the approaches applicable to physical communications apply to digital communications too.
It's just that the _additional_ level, which in the physical world would be equivalent to knowing the contents of all of the conversations/interactions that people are having in person, is something that people wish to fight against and prevent from becoming normalised.
I think you have a good point. The reason I'd like to see it acknowledged is because the two sides of the argument often talk past each other. Police power should not be unlimited, and it's clear that our constitution intended for the power of the state to be limited, with the intent of maximizing liberty.
However, for years people made the claim that the "liberty vs. security" argument was a false premise. ie, that ultimate liberty and ultimate security are both possible. I don't believe this is correct. (Broadly I think liberty is more important than security, but everyone has their set of exceptions to this rule) I might just be dating myself. People had this debate constantly in the years after 9/11. Maybe this argument is not getting made any longer?
In either case, I often hear these two sides talking past each other. I wish instead that both sides were more overt. Digital information can make police work broader and more effective, but we should treat it with quite a bit of caution. We don't want police effectiveness to encroach on liberty in most cases.
In democratic societies, law enforcement usually has no right to run "criminal discovery" processes like those. That's why they don't state their intentions: because it's illegal (more often than not, a crime).
Notice that limits on crime policing are a very important factor on maintaining a democracy.
E.g. looking at the logs of relevant servers and waiting for someone to login without their VPN at some point.
Using digital communications to discover criminals can accidentally sweep in many more innocents, who would then have to hire lawyers and carry all kinds of other costs to defend themselves.
Then there are the unintended outcomes. What does correctness look like for crimes found from bits in a sea of untapped information when Bayes' theorem is applied to an entire populace? And what if crimes are prosecuted before being verified using the real-world investigation methods already in use?
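The base-rate problem behind that Bayes point is easy to make concrete. With illustrative numbers (all assumed, not sourced from anywhere): even a classifier with 99% sensitivity and a 1% false-positive rate, applied to a population where the crime is rare, flags mostly innocent people.

```python
def p_guilty_given_flagged(sensitivity, false_positive_rate, base_rate):
    # Bayes' theorem: P(guilty | flagged)
    p_flagged = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
    return sensitivity * base_rate / p_flagged

# 99% sensitivity, 1% false positives, 1-in-100,000 base rate:
posterior = p_guilty_given_flagged(0.99, 0.01, 1e-5)
print(round(posterior, 4))  # ~0.001: over 99.9% of flagged people are innocent
```

The rarer the crime, the worse the dragnet performs, no matter how good the classifier is.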
That's public. You can analyze all of those. The NSA is free to pull them just as much as you and I.
And they don't as far as we can tell. Is it the cost of analyzing that much content? Is it that the NSA doesn't care? Is there something difficult about stripping audio off a video for keyword spotting?
Well I have a theory, and the theory is based off what little comes out of that side of the community. The theory is that the NSA can't meaningfully process the data it ingests. There's too much, it's too hard to query and they hit the same roadblocks of telling the difference between an actual crime and a videogame or fiction story.
So then we must ask, why do they want more? They have more data than they can analyze, why even bother ingesting more? It's not because it helps their mission, it's not because there's some value to it.
Well, why do we see regular businesses fall into this trap? A billion points of analytics data that they can't make sense of. When I see it, it's because it's easier to blame a lack of data than to explain the difficulty of the problem. You can always say "Well, I just don't have enough data," but it's much harder to explain that a bunch of crappy, error-filled data isn't good for anything except wild goose chases. Adding more bad data doesn't improve the quality of your data; it just adds more of it.
So, no, they can't process all of it. But they can more easily trawl it for specific data they need. Especially 10 years from now.
From that standpoint it makes sense to err on the side of caution, and assume it's all being collected. But, while this is an effective risk calculus, it's different from having access to the ground truth.
That's wishful thinking which you have no evidence for. But let's assume that you're correct - eventually they will have a way to analyse it en masse.
There are, then, two things we need to bear in mind:
- is the time horizon likely to be close enough that data currently collected will be relevant then
- if we allow the collection now, will it be easy to roll back that collection later when the threat is on the horizon
The answer to both of those questions is yes. Similarly, we use high strength encryption now, even if we think 128-bit is fine, because in time it won't be.
The above is theoretical. The next bit isn't - they will _always_ be able to decide that agent A should look at video B from N years ago.
They can't do that for a letter on the hypothetical table, or a message stored with strong encryption that stands the test of time - it won't exist in N years.
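The key-length margin point above is easy to quantify with rough arithmetic; as a sketch (the 10^18 guesses-per-second rate is a deliberately generous assumption about an attacker):

```python
def years_to_brute_force(key_bits, guesses_per_second=1e18):
    # Expected brute-force work is half the keyspace.
    seconds = (2 ** key_bits / 2) / guesses_per_second
    return seconds / (3600 * 24 * 365)

print(years_to_brute_force(128) > 1e12)  # True: trillions of years even for 128-bit
```

That's why 128-bit is "fine" today; the case for a larger margin rests on future cryptanalytic or hardware advances, not on raw brute force.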
Unless they didn't collect and store it, as the parent suggests.
I have the opposite opinion: it is trivial and inexpensive to create and store an indexed archive of text from speech in audio, and to run image recognition models on video and pictures. There's value in having that data archived, so that they can go back and go through it should whoever created the data become a target in the future.
However, I doubt the NSA would waste resources investigating a street fight, but I'm pretty sure the video would be mined for any valuable data that could be gleaned from it.
How do you know they want more?
Or was this meant rhetorically? i.e., "who would want more in this case?"
It is a problem even at the best of times that law enforcement wants to create criminals whenever it fits their fancy. Even more ominously, if police had any greater command of the voluminous criminal codes and the incentive structure changed, they could basically charge or lock up most people they ever come across for any number of arbitrary violations of convoluted laws.
Maybe this is being a bit anxious, but with the full-on surveillance state unfolding right before our eyes, where wrongthink gets you "cancelled", we seem to be racing headlong into something not all that different from what Orwell envisioned as the consequences of self-righteously benevolent tyranny… for our own good, of course.
This is evidenced by stop-and-frisk, which was effective only in finding criminality among select individuals.
Nope. Please remember these words. The surveillance system is about control, not security (finding criminals).
William Binney and thinthread are a great starting place to understand this.
I think we just have to look at the history of surveillance not just since 9/11 to understand this. Forest and trees and all that.
If the same physical system were to work in a digital age, a company could share a special encryption key with LE for the evidence-collection part, provided they get a legitimate warrant for it. Physical security was never perfect, but we aspired to make it as close to perfect as we could. The same applies to digital.
Unencrypted communications will be intercepted by default with no warrant, no oversight, no limitations on its processing and on a world-wide scale.
My home internet connection could have spies from 30 different countries all over it and I won't see anything. If I'm sat on a park bench, then anyone with a Russian accent asking for directions to Salisbury Cathedral is going to stand out somewhat.
In this scenario, the spy is the person on the park bench with you.
Hollywood and co. doesn't get out much or something.
1. There is no way to verify that you are actually connected to a particular person. i.e. Zoom has no identity management.
2. The client is closed source and can't be verified.
3. Zoom can trivially impersonate any participant as they control the servers. They can MITM at will and they won't get caught at it.
This discussion is like talking about the security of the bank vault door when you are planning to make the vault out of drywall.
US citizens should have a 'right to privacy'. But that's been stripped away due to post-9/11 reforms, among others.
“Free users for sure we don’t want to give that because we also want to work together with FBI, with local law enforcement in case some people use Zoom for a bad purpose,”
So they want to keep the data unencrypted so they can give it to the feds. That doesn't sound like privacy to me.
Edit: So I mean, something like that should not be allowed by law. Though it's rather the FBI that is breaking the law here, Zoom explicitly says they want to work with them. That means they approve of that injustice, making them also unjust. If they encrypted their data to protect their users' privacy, they would not be unjust in this respect.
A hotel that suspects you're taping child porn in one of its rooms is well within its rights to call the police. If Zoom has reason to believe you're distributing child porn in a Zoom room, why shouldn't it be allowed to take action, too?
The point of encryption is that no one knows what you're doing, because they can't see it. Just like no one can see what you're doing in a hotel, most of the time.
So they can't literally see you every moment, but they have a lot of visibility into what you're up to.
Do you think courts shouldn't be allowed to wiretap the phones of suspected criminals? Because Zoom is just a modern phone.
Eh, am I supposed to trust you just like that? If history has taught us anything, it's that there will be.
E2E will be an opt in choice for paying users who are willing to sacrifice some features for the benefits from additional security.
See this thread for more details: https://twitter.com/alexstamos/status/1268061790954385408
Edit to credit vjeux for the thread link
If you want to look at something now, the white paper for the E2E protocol design is public and open right now:
On a more serious note, until there is a protocol and implementation available, we can't say anything for sure. We security folks aren't magicians.
If these tools use open standards and well documented protocols this will not be a problem.
I can verify, without a PhD in cryptanalysis and without reverse engineering my browser, that it is running a secure connection to a website and that the certificate is signed by a trusted source (for sites with forward secrecy and HSTS enabled).
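That verification is the default in mainstream TLS stacks. As a small sketch, Python's standard-library client context shows the same posture a browser takes: chain validation and hostname checking are on by default, and a connection whose certificate doesn't validate fails the handshake (only the default settings are shown here, not a live connection):

```python
import ssl

# The default client context enables both certificate-chain validation
# and hostname checking; a handshake with an invalid or mismatched
# certificate raises ssl.SSLCertVerificationError.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```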
The short version of it is: your browser trusts CAs to say whether a certificate is valid. But CAs often trust other CAs who may not actually be that trustworthy. Those CAs then trust other CAs who definitely are not as trustworthy... etc.
So that certificate/padlock picture in your browser may not be as trustworthy as you think. It's an active problem.
Yes I was a bit harsh. But I was trying to demonstrate a point - no one knows for sure until we can look at this stuff in detail. Until the researchers get to pull it apart then no one can verify anything. The little green tick on a zoom call is practically worthless until some external work is done.
The protocol is documented and open. I linked to it in my comment.
Google supported Jabber in chat for a long time, and Slack supported IRC (both dropped the support), but while they did, you could use any IRC client with Slack or use Google chat via Jabber with any client.
If open protocols were used for video, like email (although email is not a good example for encryption), it would not matter who your service provider is: you could verify they are secure, or move to another one.
Today I have more than 10 video conferencing apps on my devices (Zoom, Hangouts, Meet, Webex, Teams, GoToMeeting, Chime, Skype, SfB, FaceTime, Signal, Telegram, RingCentral, and UberConference...) because a customer, partner, friend, or family member uses each of them. I have only one email client and one browser, though. They don't have to be open source at all; people happily pay for closed-source Gmail or O365 without worrying whether their mail will be delivered, while still using the official client or a client of their choice.
Also, have you thought about asking your clients/whatever to use one app to communicate with you? Even if you get half of them onboard, it sounds like it would save you a lot of mental bother.
Many of them cannot install any new native application on their desktop/phone without IT approval, or their VPN does not allow traffic to consumer apps like Hangouts. They also need to record for compliance, and pre-COVID some apps like Webex were connected to their conference room bridges using dedicated lines and hardware, etc.
Family and friends do not use business tools, and it's not easy to convince Apple users who like FaceTime; Messenger is popular in a few countries, WeChat in China, WhatsApp in other places.
It is easier to install another app than to get your grandma to switch away from the one thing someone installed on her phone that she has learnt to use.
It is degrees of trust. Trust is not absolute, and neither is security. Depending on your threat model, you have to secure yourself. More transparency improves security; it does not solve all the problems, it just makes an attack costlier. If the cost outweighs the benefit, they will not attempt it.
HTTPS does not magically make your communication 100% secure; however, the number of people who can issue a certificate from a compromised root CA, or who control one, is considerably smaller than the number of people who can monitor your plaintext traffic.
Any sufficiently advanced cryptography is indistinguishable from magic.
Which isn’t entirely untrue from a layperson’s perspective.
Edit: fixed a word. I’d accidentally written “is” rather than “isn’t”.
That's sort of what happened with the ECB mode stuff that kicked this whole thing off in the first place. See section 4 of the paper below for more info.
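The ECB problem is a property of the mode, not the cipher: identical plaintext blocks encrypt to identical ciphertext blocks, so structure leaks. A toy sketch of that (the keyed hash below stands in for one AES block operation, purely to show the mode-level issue; it is not a real cipher):

```python
import hashlib

BLOCK = 8  # toy block size in bytes

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Deterministic keyed transform standing in for a real block cipher.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def ecb_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # ECB: each block is encrypted independently, with no IV or counter.
    return b"".join(
        toy_block_encrypt(key, plaintext[i:i + BLOCK])
        for i in range(0, len(plaintext), BLOCK)
    )

ct = ecb_encrypt(b"k", b"ATTACKATATTACKAT")  # two identical 8-byte blocks
print(ct[:BLOCK] == ct[BLOCK:2 * BLOCK])     # True: the repetition is visible
```

Modes like GCM avoid this by mixing a per-message nonce and counter into every block, so repeated plaintext no longer produces repeated ciphertext.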
I think I read that Zoom does decrypt AES-GCM server side already. They have to, so they can put the little green box around the person currently speaking. EDIT: this is incorrect for AES-GCM; it's not decrypted server side.
Edit - at least e2e is the angle I'm approaching it from as it's the new information.
> Meetings will still be encrypted and meeting content is still not going to be used for tracking users.
And the person responding to you asked "how will you show this?".
Could also be interpreted as how can we show only paid users can access it? Or that certain features will be disabled with E2E?
What I replied with covers both E2E and the current state equally tbh (the linked article did it before with ECB). There are always limitations to what is possible.
I could break into the Zoom servers to make sure everything is kosher. But that's illegal.
If WhatsApp started transmitting E2E keys back to their servers people would find that out client side through network packet inspection, not server side.
Security researchers are limited in the tools/methods they can use. We have to work with what we've got at our disposal.
Which is exactly why "trust us, we're not going to do anything with these keys" is a ridiculous state of affairs and shouldn't be tolerated. We can't show that they're actually doing what they say, and it'll be years after they implement mass surveillance on the behest of law enforcement before someone leaks something.
Should we work towards an ideal? Sure. Should we stress out that things aren't perfect? Probably not.
It's an interesting technical idea, though. It would be interesting to see if any existing systems have a "canary" element to them.
Why is this not a client thing?
> Matthew Green, a cryptographer and computer science professor at Johns Hopkins University, points out that group video conferencing is difficult to encrypt end to end. That’s because the service provider needs to detect who is talking to act like a switchboard, which allows it to only send a high-resolution videostream from the person who is talking at the moment, or who a user selects to the rest of the group, and to send low-resolution videostreams of other participants. This type of optimization is much easier if the service provider can see everything because it’s unencrypted.
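The "switchboard" optimization Green describes is essentially a selective forwarding unit (SFU). A minimal sketch of that selection logic (the names and structure are illustrative, not Zoom's actual implementation):

```python
def plan_forwarding(participants, active_speaker):
    # For each viewer, forward the active speaker in high resolution
    # and everyone else in low resolution; nobody receives their own stream.
    return {
        viewer: {
            sender: ("high" if sender == active_speaker else "low")
            for sender in participants
            if sender != viewer
        }
        for viewer in participants
    }

plan = plan_forwarding(["alice", "bob", "carol"], active_speaker="bob")
print(plan["alice"])  # {'bob': 'high', 'carol': 'low'}
```

The E2EE difficulty is that this selection needs to know who is speaking, which the server can't tell from ciphertext alone; designs work around it by having clients signal activity out of band rather than letting the server inspect media.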
This was 2 months ago so their new white paper clarifies the current situation:
> For use cases such as meeting real-time content (video, voice, and content share), where data is transmitted over User Datagram Protocol (UDP), we use AES-256 GCM mode to encrypt these compressed data streams. Additionally, for video, voice, and content share encrypted with AES, once it's encrypted, it remains encrypted as it passes through Zoom's meeting servers until it reaches another Zoom Client or a Zoom Connector, which helps translate the data to another protocol.
(I realize codec is not the correct term.)
In short: Zoom E2EE will eventually encrypt corporate conferences, but it will not solve the privacy problems they have, because their structure stays the same.
It seems it will be a feature just to make customers feel they are safe (and pay more, of course).
But, as usual, they are not.
Don’t shed any tears for anyone making hundreds of thousands of dollars per year while a third of the US has approximately zero income.
How do you compete with that?
Most users don't care about security/privacy and maybe that's fine but it was nice to see a company that seemed to genuinely care about these things.
 going by the commonly used definition
You can do all the open source e2e crypto trendiness you like, but unless you’re a nonprofit like Signal that can generate a stream of donations, if you don’t eventually get people to pay you for the service, you’re not going to be able to stick around.
This was the best possible outcome for them, given the circumstances.
In a way, any platform that is able to evade government surveillance will be deemed an exception, and anyone using or building such a platform will be a suspect.
This also makes little or no sense: if this is really just about cooperating with law enforcement, why would encrypting corporate (or paying) calls be any better? The bad people referred to in the statement could just get a paid plan.
Of course servers are untrusted. If you think you need to see the server source then any trust you have is mistaken.
Same as with HTTPS. If you think you need to trust the MITM, you've already lost
But, if the server manages sensitive information, yes. It is preferable to audit the server code to understand how they handle the lifecycle of the information.
"If you think you need to see the server source then any trust you have is mistaken."
Sorry but I don't agree. I trust in systems I can verify. Trust without verifying is not trust, it is faith.
I was looking into their terms (https://keybase.io/docs/terms) and they should notify users before that. And it seems my account is still up over there.
Next up: Zoom meeting attendees raided for unlawful assembly. https://www.persecution.org/2020/05/24/wuhan-preacher-taken-...
Catch up: Identifying influencers from sampled social networks. https://ui.adsabs.harvard.edu/abs/2018PhyA..507..294T/abstra...
Additionally, federal “law enforcement” (and concentration camp-operating) organizations like the CBP are engaging in domestic mass spying using aircraft to collect mobile phone identity data from millions, even for peaceful protests and the like. There isn’t really a line between “state surveillance” and “law enforcement” anymore in the US.
The title of this item as I submitted it to HN prior to its edit by mods ended in “to aid in state surveillance”, which I think is a more plain, accurate, and unbiased description of the practice, as I think that pretending that this illegal military spying practice (PRISM et al) has anything to do with legitimate “law enforcement” is basically state propaganda at this point.
I stand by my previous snarky, HN-rulebreaking flame of Zoom’s announcement of end to end encryption support from a month ago:
> I'm sure the result of this will be lots of good and secure trustworthy software that I'll be eager to install on my computer.
It's a pity there is no cross-platform communication standard: it looks like this will evolve like messaging did, with dozens of competing companies.