1. Pre-COVID, Zoom claims it has E2E encryption for everyone.
2. During COVID, Zoom grows in popularity, which prompts journalists to discover that Zoom's E2E encryption claims are inaccurate.
3. Zoom admits that it never had true E2E encryption, but announces it will develop it and that it will only be available to paying customers.
4. Zoom gets another wave of criticism for restricting its new E2E encryption service, so it walks back to its original message that all accounts get E2E encryption.
Given their track record, I'd expect this timeline to repeat itself: after they release this E2E encryption feature, security researchers will discover that it's not true E2E encryption either.
Yes, we know it's easy to use.
The other thing that is wrong with this sentiment is that "privacy" is not a binary thing: you don't "have privacy" or "not have privacy". You don't "want it" or "not want it".
Privacy, and the need for it, is heavily dependent on context. You want more privacy when watching porn than when watching sports. You want more privacy when you are a minority than when you are part of the ruling class. And so on.
Edit: so a person wanting to escape religion and seeking online help on how to do so needs different privacy than three friends having an online beer. Everyone has moments and times where they need (some) privacy. So probably 99% of people (a made-up statistic) have a need, in some (rare) context, to replace software over privacy matters.
Though privacy is also something people want "afterwards".
Something you wish you'd taken care of once it's too late. When your identity is stolen and used to rack up hundreds of speeding tickets in your name. When your son's pictures were lifted off your Facebook to bully him after you had your 15 minutes of fame, and so on.
Gmail does not use end-to-end encryption. Yet it's overwhelmingly popular, and perceived to be secure. Google has a huge incentive to keep Gmail secure, and that's enough for most people.
With respect to Zoom, I am fully convinced that if the PRC (or the USA for that matter) wants Zoom to compromise a given account they'll do it. But that's irrelevant to the overwhelming majority of people. Sure, companies like Google and Microsoft should not use Zoom nor should activists or other people that might attract the ire of governments that have leverage over Zoom. But that is a substantial minority of use cases.
I agree, evidence shows most people are not willing to go very far out of their way to defend their privacy. But I also think privacy is a genuine virtue, and a desire for it is present and often untapped. Why else would Apple have run privacy-centric ad campaigns? Attempting to tap into the weak but widespread desire for privacy, I think.
Also, side note, LGBTQ people make up somewhere in the range of 2-7% of the population, not 1%.
It’s like the derogatory “social justice warrior.” What? It’s bad to give a damn about people and advocate on their behalf? If having empathy means I’m “an SJW” then I’ll gladly wear that moniker.
If you say you care about privacy but do nothing to protect your privacy, you don't really care about privacy in any way that matters. Since you mildly inconvenience yourself for the sake of privacy, I can conclude based on this very limited evidence that you mildly care about privacy, which is a whole lot more than most people care about it.
Apple markets itself as privacy conscious because that makes Apple look like a trustworthy company. That's a rare and beneficial appearance that a company can maintain to get more business.
Not all gay people care about their privacy. Plenty live in states where they feel more free to open up to their friends, family, and neighbors.
If you want to argue that your target does less to further that particular cause than you do, fine. If you want to argue the cause is misguided, fine.
But using the term is just lazy.
> Author Sarah Churchwell asserts that it was Woodrow Wilson who popularized the phrase 'fake news' in 1915, although the phrase had been used in the US in the previous century.
> The term actually dates from the late 19th century, when it was used by newspapers and magazines to boast about their own journalistic standards and attack those of their rivals. In 1895, for example, Electricity: A Popular Electrical Journal bragged that “we never copy fake news,” while in 1896 a writer at one San Jose, California, paper excoriated the publisher of another: “It is his habit to indulge in fake news. ... [H]e will make up news when he fails to find it.”
https://twitter.com/CraigSilverman/status/522179364767924224 and https://www.theverge.com/2014/10/22/7028983/fake-news-sites-... are good examples of its use pre-Trump. By the time Trump started applying it to actual news outlets, it was already in relatively common use.
In December 2016, Hillary Clinton decides to use the term in reference to recent events, like her losing the election. The media picks up on it and the term starts to lose its meaning. Then Donald Trump, always with a nose for catch-phrase politics, picks up the term and runs with it, a turn of events that someone on HN called a huge "own goal" by the media, and I can't say I disagree with that assessment.
For more analysis of this cultural specimen:
In your examples the word "fake" is used as an adjective to the noun "news site". As in "a fake news site".
"Fake news" as it is used today is a noun in its own right. As in "that's fake news".
See also "alternative facts".
Virtue signaling or whatever you want to call that sort of in-group circle jerk is fundamentally unproductive and self-rewarding because it's just a reiteration of existing group beliefs that basically everyone already knows and has. There's no term you can use to describe that behavior that will not make people uncomfortable when you call it out because at the end of the day you're calling the behavior unproductive and selfish.
For example, someone goes on Reddit and asks "should I put economy tire A or economy tire B on my 20yo, $2k car, also I have $250 to spend". The most popular answers will invariably be "you should have dedicated summer and winter tires" and "you should use jack stands when you change tires", plus many duplicates thereof, many of which will suggest that anyone who does not do these things is not someone of good character, deserves to rot in hell, is a burden upon society, etc, etc. Neither of these sideshows is productive discussion. Both of them serve purely to signal to the in-group that the signaler believes something the in-group already believes.

Of course everybody wants the greatest tires and nobody wants a car falling on them, but the former is out of scope for budgetary reasons and the latter is not a concern when simply changing tires, never mind that installing tires usually adds no cost over having them put on rims. The people circle jerking it to jack stands and fancy tires are negatively affecting the discussion for anyone who cared about economy tires. Virtue signaling takes legitimate relevant content and displaces it with low-quality junk. I'm more technical than I am cultured, but I see this kind of behavior across multiple topics, and I'm sure that any serious amateur film critic, book critic, etc. could come up with a similar example from their niche.
While that example is hypothetical, you can see similar ones play out across a multitude of topics, and I think this does legitimate damage to civil discourse. It's like how searching the internet for an appliance's service manual used to yield results, but now it yields a million sites that don't have what you're looking for but will try to sell you something. I see virtue signaling as having a similar quality-degradation effect, but on legitimate discussion. We can no longer have detailed discussions about complex issues, regardless of subject, because they all get derailed, bogged down, and diluted by people showing up to signal their virtue on a higher-level issue. Take for example the recent discussion about police. Many on the left and right have a litany of small points on which they agree. Any public discussion about common ground, like fewer MRAPs in the hands of suburban police departments, gets drowned out by riff raff showing up to tell the world which team they're on. At scale this is damaging to civil discourse.
In conclusion, I think that having a nice, short, two-word, pop-culture term to describe that behavior is useful because it makes it easier for those who may not otherwise be articulate enough to do a good job of calling it out when they see it. Of course some people are gonna abuse it, but I think that's just the nature of it being a negative thing, since all negative things become a name you can call someone or their actions.
"Signaling" started out as a term used in a few niche fields. We have that happen all the time. It's how language evolves.
The article says to use "showing off" instead of "virtue signaling." Right after that, its next argument is that you're assuming the person is disingenuous, which is exactly what you're doing if you say they're showing off. The two arguments can't both be used; they are arguments against different things.
ALWAYS dismissive and reductive. ALWAYS. It's not a good faith argument. If you think someone is all-talk-no-action, accuse them of being performative. If you think someone's cause is dumb, attack that. You can call ANYBODY advocating for a viewpoint a virtue signaller so it is an entirely meaningless attack.
Maybe some people are too quick with the term, there always will be people like that for any label. But just consider all the people just as tired of real conversation being shackled by "virtue signaling" as you are of being called a virtue signaler.
Also, you could write your disgruntled comments about any label. We are certainly way too quick to find racism where there isn't any. Yet there are also real racists out there complaining that the charge of racism is really getting on their nerves. Does that mean we stop charging people with racism when we see it? No, we're trying to call out unacceptable behavior.
If we're getting into pet peeves, mine is "gaslighting". Everything is "gaslighting" now. Accidentally spread a small inaccuracy? Gaslighting. Tell a public lie? You're now "gaslighting the world". Sharing my disagreement with you, like this comment? I'm gaslighting you.
If I say "don't be racist" and you call me a virtue signaller, you are implying that I don't actually care if you're racist, I just want social points for being "good". The problem is you don't know if I care or not. I might care deeply. I might dedicate my life to being anti-racist, but you can just label it virtue signalling and move on? That's an intellectually dishonest maneuver.
It can be incomparably frustrating as it focuses on non-issues or minor issues and misses the bigger picture.
I personally don't see it used outside of those contexts but it shouldn't be used to dismiss actual action like doing your part against climate change.
Buying CFL lightbulbs is performative climate action.
Buying CFL lightbulbs is virtue signalling climate action.
One of them is about the action - buying bulbs isn't effective, the other is about the person - if you buy bulbs you are fake. Perhaps "performative" fails at making this distinction clear enough, but I'd like a way to imply that someone is all-talk-no-action, without undermining their actual belief in something.
Don’t you realize how ridiculous and useless this argument is?
There are plenty of things that people agree with in the abstract, but don't do enough about to make a difference. Probably everyone agrees that the environment should be cleaner. If I agree with that, but "don't lift a finger" to make it better, how am I virtue signaling? It's the opposite.
"Virtue signaling" involves doing some token thing to get the kudos. Smugly saying you only use DDG for web searches. Putting a Tor sticker on your laptop. Etc.
Where virtue signalling can become dangerous, however, is in situations where something based on a mistaken notion of doing good (when in reality it does more harm) becomes a fad, and people who virtue-signal keep promoting it into wider popularity.
I don't know anything about Jitsi. Will research. Thanks.
At the start of the pandemic, my friend at this particular court was tasked with figuring something out. The initial plan was to choose a webcam and software combo for everyone to use. And then they'd be futzing with Windows boxes of unknown provenance, trying to do remote tech support, etc. Who's got time for all that? For instance, they told me one of the new Logitech webcams they tried doesn't have Windows 10 drivers.
I recommended they just buy a cheap iPad for each location and participant. Create an iCloud account for each device. Use FaceTime. Buy mounts or tripods as needed.
I haven't heard back what they finally decided.
FWIW, I since learned they were also having uninvited people join their confidential sessions, just like the naked guy showing up for online classes. Such a mess.
You think the backdoors that Zoom are likely creating for the Chinese government won't ever be found and used by malicious hackers?
You really think the Chinese-American human rights activists whose Zoom accounts were identified and banned from a private video call with Chinese-based allies and friends and family are thinking "what use is there for inferior products?"
Oh well, we live in free countries. You're free to open your company and possibly your Chinese colleagues/friends to China or whoever else is the boogeyman right now.
Downloading Signal doesn't signify privacy concerns. I have Signal because others do, not because I care about Signal's privacy.
Yes, you have read that right. I was forced to make a zoom call and hold my passport open to the camera. No, they wouldn't accept a scan and an email, or even a call through Jitsi. This is a major public institution with a 200+ strong IT department consuming millions of pounds a year. This is the moronic "enterprise" stuff those millions of pounds buy.
Chinese government border guards scan everybody's passport details.
It's unfortunate that a passport number is a form of ID.
Come on, the market wouldn't permit that to happen! They'd lose all their customers!
Edit: on a less sarcastic note, I'd be less critical of Zoom if their software were open source.
Meanwhile the companies in question universally refuse to acknowledge THEY NEVER ACTUALLY VERIFIED ANY of the claims around encryption. It would be hilarious if it weren't so terrifying. And oh, by the way, all of those companies refuse to admit they messed up so they ALSO haven't switched to another service, so Zoom is literally still selling on "If we weren't secure, these big guys wouldn't be paying for our service". It's insanity.
In most companies I've observed, the people deciding what products to buy are not capable of reviewing any of the product's claims. If they happen to have an employee who is capable, and that employee points out a problem, they are usually ignored. Especially if it would make someone in management look bad for spending money on something they shouldn't have, or, even worse, if it would make them lose their free lunches and golf trips with their vendor buddy.
Now these are small to medium-ish size companies (20-500 people), so maybe it's not a big deal to Zoom's marketing bottom line. But it's definitely a thing.
People are doing 50-200 person video meetings with Google? Has Meet really improved that much since March/April? That’s not that long ago. Otherwise it seems more like cost is the reason. Not anything relating to quality.
While we pay lip service to Zoom's super shitty security stance, we now run a video meeting service where it's trivially easy to click the "turn on captions" button, and see how good a job the world's biggest advertising agency is doing of transcribing all the audio in our most sensitive business WFH calls... :sigh:
(I have my own Jitsi Meet instance running on AWS, but there's me and about four of my tinfoil headwear sporting friends who care enough to bother using it...)
Teams or Meet works fine (unlike the train wreck of Skype), have improved recently and are already paid for.
Saving face by not admitting egregious mistakes, and even lying about making or not making them after the evidence is public and irrefutable, is just the human ego defending itself.
I'm starting to get past that sort of childishness in my own life, but having lived it for a long time I see it easily in others.
For the business model, it is the same as for other open-source projects: there are companies that want a tech insurance policy.
I disagree. Plenty of video chat software has comparable reliability and "just works"-ity.
Google Meet, Skype, Microsoft Teams, Discord...
Hell, Apple has had "just works" and "reliability" in their walled garden since FaceTime was introduced -- if you're willing to look only in their walled garden.
Zoom is almost as good on a laptop as dedicated Cisco gear.
I have been video conferencing mostly for work for almost a decade, and Zoom is the best solution for that that I've encountered.
I recently changed roles, and the use of Zoom was a small reason to go with the company I did.
Most of 8x8's revenue comes from sources and products unrelated to Jitsi, so I don't think you can say that it seems to work fine for them compared to Zoom.
The idea that a business needs a moat to be profitable is a problem endemic to business.
The value add would be in hosting services and support contracts. Video chat is needed by a lot of non-technical people. Further, plenty of people don't have sufficient bandwidth to host their own video chat, even with just their own friends or teams. Even technical people often don't understand how to write secure software.
I'm pretty econ-left philosophically (socdem short-term, mutualist/ancom long-term), so you can't get much argument from me here. :)
But I want to steelman the other perspective: so long as we live in a pre-post-scarcity market economy, having some kind of moat is part of how one gains bargaining leverage in a price negotiation. (Think of "moat" in this context as influencing cost/benefit incentives, rather than an absolute barrier: the customer could build a boat to cross it, or they could pay the toll to cross the bridge, with the latter being usually cheaper.)
One answer is as you describe: hosting services and support contracts, in a market ecosystem of interoperable commodity services. Sign me up! But: such an ecosystem has a free-rider problem when it comes to the non-trivial expense of creating and maintaining the client software (including the risk of front-loading the 0-to-1 effort of building it before you know it will be adopted). In a FOSS model, other players in that ecosystem can obviously contribute to that effort, but those who don't contribute will have a competitive advantage, since commodity markets tend to viciously compete until margins are as near-zero as possible.
There are "moats" / competitive advantages that have nothing at all to do with the software itself: superior support experience, brand reputation, efficient hosting services through economy of scale. So I don't at all claim your model is unworkable, and there are many successful companies who do just that.
I don't disagree that the world would be a better place (and the overall economy perhaps more efficient), if most/all software was FOSS, and business models required less centralized control. (Note that nothing has stopped us from a building a pure FOSS E2EE VC client with a comparable feature-set; we still could.) But say I'm a board member or an investor in Zoom, whether pre- or post-success: how would you pitch me on the business value of open-sourcing the expensive-to-produce client software?
Okay. Consider the following fantastical talk from technical me to you, oh dear fantastical board member:
Let's face a fact: yeah open source software is "free (as in free speech)" and it can also be "free (as in free beer)". Anyone can inspect it. Anyone can "steal" it so-to-speak and set up a competitor. That will always be the case. Just look at how many stolen software products end up in your favorite app store. Games are ripped off right down to their copyrightable artwork, malvertisements added, and reuploaded with a new name. But I think worrying about that is like worrying about the people brewing their own beer. I think that's preventing us from building a brewery.
Let's face another fact: what's expensive isn't software. That's pretty cheap. That's just man-hours. A kid in a garage can build video chat over a weekend or two. What's expensive is experience. Experience is basically an impossible-to-estimate number of man-hours. We'll never be able to pay one person or one team to understand all of the pieces and platforms and make it work for everyone.
Customers want to run Windows, Mac, Linux, iOS, Android... and all of that is hard to keep up with. Customers have a plethora of network and hardware configurations. Customers have crazy different bandwidth and latency profiles. It's really hard for us to make our software work in all of that. But some of our customers are experienced and they're curious and they are looking at our software with a fine toothed comb. We simply can't stop them from doing so. That's how we got into these repeated PR messes after all. So let's embrace that. I think there's a good chance that some of those customers would solve our problems for us if only there was a way they could contribute fixes.
Like I said, experience is expensive. With experience comes ideas. Ideas are gold. That's why we're worried about our competitors after all. We don't really have any solid ideas. Neither do they. Even if we did have a solid idea, they'd create a competing idea or even just outright steal ours. And we'll still be left holding the bag; we'll still have these PR debacles for not getting things right in the first place. So let's turn that on its head.
If we open source our software, we give these technical people the opportunity to help us fix problems before they become problems. There's a ton of home brewers out there and some of them would love to be able to help our big brewery. We're not going to stop home brewers. So we shouldn't even try. But home brewers do need tools. Let them come up with their own recipes.
So, we provide the tools for free. But we can sell the recipe. Or, technically: provide a cheap service for the people who need something they know is secure but don't have the technical know-how and/or time to set it up themselves. Lawyers have a legal requirement to keep their conversations private. Schools have a legal requirement to keep their children safe from stalkers. Even citizens have a right to privacy. We'll make all of the tools available for anyone to audit and validate. The recipe to use those tools is where we make profit.
The recipe is the environment. We'll provide, for a fixed cost, the ingress bandwidth and compute needed. We'll provide secure storage of recorded conversations and an audit history of who's accessed it. We'll provide the experienced technical support to directly either fix problems or point at misconfigured devices outside of our control (and why it's the source of a problem); we'll be able to understand the debug logs that the software provides. Of course, any other technical person could too. But that's already the case so we're not really losing anything here. Indeed, we're gaining here. We're gaining the trust of law firms and governments; the trust that they're getting the value that they want for the services they need and that they can go directly to us if they need troubleshooting.
I'm not arguing against centralization. Centralization is good for us and for our customers. It's an anchor point for experience to grow from. I'm saying that open source software can help us avoid further technical problems from our lack of security experience. And who knows? Maybe some of those home brewers are interested in a paying job at our brewery -- if only they could prove they knew a little bit about beer, if it was free. We could definitely use the experience.
It would be very dangerous for the primary value-add to be in hosting services: any hyperscale cloud provider could offer the service and undercut Zoom (based on superior unit economics and market reach).
You scoff, but isn’t that exactly what we’re all doing to Zoom right now?
And they can be changed without you knowing it. Do you have a fingerprint to verify that what you get is what they share?
Jokes aside, with Zoom's track record, it's not worth using anymore regardless of what features they implement. Not having E2E encryption is nowhere near as much of a red flag to me as lying about it is.
Not to me. I would just assume they don't have E2E encryption and wouldn't base my calls around the idea of needing that. The claim is never worth it without an independent review and then thinking about the attack surface you actually want to shield against.
I mean, in another market, if you have ever investigated VPN providers you would see 100% conflicts of interest with affiliate marketing everywhere and the articles never acknowledge that the business of reselling internet access has inherent trust and unverifiable claims involved. A government can always tap the source with a legal order and there will always be information available to them.
For a video chat service, them merely saying E2E doesn't mean anything without a way to verify it, or host the whole stack myself and this is incompatible with being a company.
I've also been using Discord for voice almost daily for a little over a year and it just works 99% of the time. Unfortunately, it suffers from "gamer" branding that makes it awkward to suggest for work. They should try offering a "business skin" that interops with Discord.
It's just a means of communicating, people. Maybe a few Discord features aren't useful outside of gaming and a few Slack features aren't useful outside of the workplace. I find that to be a stupid reason not to generalize the use of these products. Skinning (and filtering away those specific features) just might be the ticket.
Mind you, I only host it at home on a VM for personal use. Have had sessions with 6 people with one of them a Europe-Australia connection. All fine on default 720p.
If I were looking for 20+ meeting software though I'd consider something else.
I would consider it a case for streaming to faceless attendees. I have never had a meeting with useful input from more than 10 people.
The company my wife works for can't get a reliable Teams conference going with anyone in France.
The company lied about their encryption.
Both of these statements can be true, it's just a question of trade offs.
I know they just announced their own "RingCentral Video", but I'm wary of that for the time being.
Separately: affordable servers likely are accessible to infrastructure providers (whether a VPS, or bare metal at a colo, etc.) so it's tough to say that "my own server" is usable exclusively by me and therefore not adversarial. Plus, maybe people want to use my server and consider me adversarial for whatever reason; they should use their own server instead, but might not have the skills.
It's in here somewhere: https://jitsi.org/security/
I'm no Zoom fan (I'd even use BlueJeans first), but people on HN are always so eager to crucify a company for its past. If it made mistakes, get out the tar and feathers! If it doesn't fix those mistakes, get out more tar and feathers! If it fixes the mistakes, even more tar and feathers!
Zoom isn't learning from mistakes and making improvements that the market demands. It's providing a feature it said it already had.
Zoom knew E2EE was something the market demanded, so it lied about having E2EE. This was a blatant lie to get more people to use its platform. Then Zoom got caught. Now it's actually trying to provide what it said it provided in the first place.
Perhaps. In my case it's less crucifying a company despite intentions to fix and more crucifying a company because I'm tired of hearing the same PR nonsense and not seeing real improvement to the industry as a whole.
What you're seeing is the flip side of the whole "it's easier to ask forgiveness than permission" nonsense.
Unfortunately, it didn't end there. Rather than earning back their reputation, they have continued to burn through it with blunder after blunder.
The company has proven itself ethically corrupt and that's not something that can be made up for with apologies and product improvements. It will take time, demonstrations of humility, and a healthy dose of transparency to restore their reputation with me.
Criticisms of large corporations is a healthy part of the HN community IMO. In fact, if we didn’t criticize Zoom they might still be lying about their E2EE capabilities.
Crucifying Zoom over this while letting virtually every other company in the space (inc. Hangout/Meet and MS Teams/Skype) go free seems quite hypocritical from an HN community that's comprised of many startupers and startup wannabees who spend their professional lives working for entities with similar practices.
By saying "yes, we can scale" when you're not sure if you can, aren't you essentially implying that you have the infrastructure to deliver on that promise? And if you don't actually have that infrastructure yet, built and proven, then you're essentially selling a feature that doesn't exist.
It's shades of grey from lying about E2EE, but seems pretty similar imo
That's very different than making a specific claim that you already have a feature right now, that you in fact don't. That claim cannot possibly be made in good faith, as it's currently outright false, and you can never retroactively apply end to end encryption on conversations that have already happened.
Really? Do you think they would/could lie on the features of a product that they deliver to a client. If they did, do you think they should get away with that?
'We have that capability' claim is totally not the same.
(I think I'm preaching to the choir here; just clarifying for anyone else reading your comment)
The only alternative I ever looked into was Jitsi (because it was the first alternative I started doing research on, and by the time I'd finished researching it there was no doubt that it was more than good enough -- and super easy to build our own cloud instance so that, even though it wasn't E2E, we had total control of the server that managed the encryption), but I don't recall hearing arguments that any of the other major competitors were actually E2E encrypted.
It would be super interesting if there was a way to abstract out encryption on the camera itself, where the video call software gets an encrypted video stream and its only job is to convey that stream to the other side, which decrypts it.
The hard part is sending an encrypted stream that can be programmatically degraded based on available bandwidth, and still be cryptographically secure.
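One way to reconcile those two requirements is scalable coding with per-layer encryption, similar in spirit to the IETF SFrame proposal: each quality layer of a frame is sealed as its own ciphertext unit, so a relay that holds no keys can still shed bandwidth by dropping whole enhancement layers. A toy sketch, with a deliberately toy cipher (a real implementation would use an AEAD like AES-GCM) and all names made up for illustration:

```python
import hashlib
import hmac

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Toy CTR-style keystream built from SHA-256 -- illustration only,
    # not a real cipher.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def seal_layer(key: bytes, frame_no: int, layer_id: int, payload: bytes) -> dict:
    # Each quality layer is an independently sealed unit, so a relay can
    # drop it wholesale without ever holding the decryption key.
    nonce = frame_no.to_bytes(8, "big") + layer_id.to_bytes(4, "big")
    ct = bytes(a ^ b for a, b in zip(payload, _keystream(key, nonce, len(payload))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return {"frame": frame_no, "layer": layer_id, "ct": ct, "tag": tag}

def open_layer(key: bytes, sealed: dict) -> bytes:
    # Verify integrity before decrypting; a modified layer is rejected.
    nonce = sealed["frame"].to_bytes(8, "big") + sealed["layer"].to_bytes(4, "big")
    expect = hmac.new(key, nonce + sealed["ct"], hashlib.sha256).digest()
    if not hmac.compare_digest(expect, sealed["tag"]):
        raise ValueError("tampered layer")
    ks = _keystream(key, nonce, len(sealed["ct"]))
    return bytes(a ^ b for a, b in zip(sealed["ct"], ks))

def relay_select(layers: list, budget: int) -> list:
    # The key-less relay degrades quality by keeping the base layer first
    # and dropping enhancement layers that exceed the bandwidth budget.
    kept, used = [], 0
    for sealed in sorted(layers, key=lambda s: s["layer"]):
        if used + len(sealed["ct"]) <= budget:
            kept.append(sealed)
            used += len(sealed["ct"])
    return kept
```

The relay adapts the stream without ever seeing plaintext, and the receiver decrypts whichever sealed layers survive, with the base layer always surviving first.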
Denying E2EE is a cost as you are punishing people for the crimes of another, this is depriving them of their hard-earned freedoms and liberties, for something someone else has done or may do.
Look at the activism going on today. BLM, dissidents in China, the rise of oppressive far-right governments in Europe like Hungary. I am sure if you dig far enough, you could find many people fighting in obscure causes, high profile causes, in a number of countries, who would fear the fist of an oppressive government.
What if the FBI / NSA decides to surveil BLM, as they already are? What if the CCP strikes down a dissident as they already have on Zoom? What if Orban decides you are a secret agent of George Soros plotting to undermine the government? Is it the case that everyone should roll over because a criminal might use the same means as them?
Your smart tv recording has nothing to do with this, but one does still need to trust that it isn't happening. In the case of the smart tv we can attempt to look for microphones or other components that are able to be used as microphones. Software offers a more difficult path in verification.
Technically, the exact packets of data you send are E2E encrypted... but the copies they make for themselves aren't.
It's shocking to me how often this is glossed over when discussing E2EE services: you still must trust the platform.
The implementation of E2EE must be robust, and there must be somebody actually checking the source code (plus verifiable builds).
Nothing makes software automatically super-crazy-secure. Absolute security doesn't exist.
And if so then what's the term for encryption that a middle man cannot decrypt?
I don't know if there's a term, but short of exchanging public keys in person there will always be a theoretical attack vector because there's always some[one|thing] in between you and your recipient.
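The classic illustration of that attack vector is unauthenticated Diffie-Hellman: the math gives two parties a shared secret, but an active middle man can run the same math with each of them separately. A toy sketch, where the prime, generator, and fingerprint scheme are purely illustrative:

```python
import hashlib
import secrets

# Toy Diffie-Hellman over a small Mersenne prime (2**127 - 1). Far too small
# for real use -- real systems use X25519 or vetted RFC 7919 groups -- but
# the shape of the man-in-the-middle problem is the same.
P = 2**127 - 1
G = 3

def dh_keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def shared_key(priv: int, their_pub: int) -> bytes:
    return hashlib.sha256(pow(their_pub, priv, P).to_bytes(16, "big")).digest()

# Honest exchange: both sides derive the same key.
a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()
assert shared_key(a_priv, b_pub) == shared_key(b_priv, a_pub)

# Active middleman: Mallory swaps in her own public value on both legs, so
# she shares one key with Alice and another with Bob, while each of them
# believes the conversation is end-to-end encrypted.
m_priv, m_pub = dh_keypair()
alice_key = shared_key(a_priv, m_pub)  # Alice thinks m_pub came from Bob
bob_key = shared_key(b_priv, m_pub)    # Bob thinks m_pub came from Alice
assert alice_key == shared_key(m_priv, a_pub)  # Mallory reads Alice's traffic
assert bob_key == shared_key(m_priv, b_pub)    # ...and re-encrypts for Bob

# The defense is authenticating public keys out of band, e.g. comparing
# short fingerprints in person or over a trusted channel.
fingerprint = hashlib.sha256(a_pub.to_bytes(16, "big")).hexdigest()[:16]
```

This is why apps like Signal and WhatsApp expose "safety numbers" for users to compare: the crypto alone can't rule out the middle man; the out-of-band verification does.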
No, it only implies avoiding a central server (and not even for every aspect of the service), you still run through routers, ISPs, NSA etc.
If you are certain that there's no middleman, you don't need encryption.
N.B. Maybe someone defines it another way today, but when the term became popular, with Napster, it really meant simply not having a central server for certain functions, or even more banally not downloading your mp3s from a web site or FTP server. It had some significance also because the legal aspect of it was more uncertain; when people started getting $100k fines, peer-to-peer stopped meaning much. Sometimes it's better to send packets directly to each other, other times through a server, but you almost always encrypt, and almost always ought to encrypt end-to-end.
I never implied a need for encryption associated with peer-to-peer. The parent comment asked about avoiding a middleman.
I have no idea what "end-to-end encryption" means, nor do I seek to know. I do not wish to be part of that debate. The record of how that term is being applied speaks for itself.
I do know of the term "end-to-end" as in https://en.wikipedia.org/wiki/End_to_end_principle One can find this concept in many of the early RFCs.
To me, "peer-to-peer" (with no central server) is in the spirit of end-to-end. This is why for example, people will sometimes say, "The internet was originally peer-to-peer."
Which I think it's pretty much how you defined it too in your (last) comment, so I'm not sure what we're debating.
The important thing was that no one reading these comments should get the impression that the multitude of systems that describe themselves as "peer-to-peer" are for sure using "encryption that a middle man cannot decrypt".
> The parent comment asked about avoiding a middleman
A middle man, in cryptography, is anyone intercepting a message.
> I have no idea what "end-to-end encryption" means, nor do I seek to know
Well, I don't mean to be rude, but then there's not much you can say in a discussion about encryption...
Look, the important thing was to underscore that the https://news.ycombinator.com/item?id=23554823 comment was (apparently) wrong. I don't have any interest in winning a battle, and I appreciate your enthusiasm. You probably don't currently know everything about cryptography or networking, and there's nothing wrong with that; no one is born an expert and no one knows everything there is to know. I have to go to sleep, bye
All I said and cared to stress, to avoid anyone reading this making mistaken assumptions about p2p software (although probably few of this site's users would run the risk), is that such systems do not, as you claimed in https://news.ycombinator.com/item?id=23554823 , automatically imply "encryption that a middle man cannot decrypt".
You admitted you don't even know what end-to-end encryption is, and apparently you don't know much about encryption, so what are you debating?
> The term the parent comment used was "middle man" not man-in-the-middle
It's the same thing (unless the post author meant "a man of middle age")
> As for "E2EE", I have never seen djb even use that term
You mean Daniel J. Bernstein by djb? Do you mean that you are actually knowledgeable about encryption? I don't mean to be insulting, but it didn't seem so (and there would be nothing bad in that); it's hard to believe that someone with basic familiarity with encryption wouldn't know what end-to-end encryption is.
If by "that term" you meant the E2EE acronym, I indeed wouldn't be surprised if Daniel J. Bernstein never used it; it's the first time I've seen it myself (but it obviously doesn't mean anything more than "end-to-end encryption").
I don't know why you took it so personally; maybe I sounded aggressive in saying NO in uppercase. If so, I'm sorry, it was just to make it more visible.
Example comment: "Peer-to-peer is a viable design for videoconferencing for small groups. If one is concerned about a "middle man" then it is worth investigating a peer-to-peer design."
End-to-end encryption would prevent lots of monetization strategies, such as identifying people via facial recognition and voice printing and then using this data (along with transcripts, for example) to "add value".
Now the "we have identified a path forward" bit makes me wonder if they can still pull it off. Maybe it's client-side identification with out-of-band notification.
Google makes an enormous amount of money identifying people.
It's simpler to set up (accounts and password protection are optional), IMO easier to use (e.g. the hand button is on the bottom bar with mute, etc., not in a menu labeled "Participants"), and higher quality according to the New York Times, who deemed it "reliable and easy to use": https://www.nytimes.com/wirecutter/reviews/best-video-confer....
I've introduced it to extended family members who've used Zoom prolifically, with zero complaints.
Can you name a single disadvantage?
Audio capture APIs often suggest it may be possible to use very small frame sizes, which naturally promise much improved latency. Going from 100ms of audio latency to 20ms is great so surely going from 20ms to 5ms is even better right? Well, the hardware underneath that API may not be able to deliver, at least it may not be able to deliver consistently. If your 5ms buffer isn't filled on time, what do you send? A partially filled buffer? Silence? The last 5ms of filled buffer again? All bad answers.
Tool A with 40ms of latency may feel imperceptibly worse than Tool B with 30ms of latency. But Tool C with 10ms of latency but frequent "drain piping" as audio frames are garbled or undelivered is clearly much worse than either.
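To put rough numbers on the trade-off above (assuming 48 kHz, 16-bit mono PCM, typical for conferencing; the exact figures are only illustrative): shrinking the frame multiplies the number of deadlines the hardware must hit, and forces an answer to the underrun question.

```python
# Illustrative audio-frame arithmetic, assuming 48 kHz mono 16-bit PCM.
SAMPLE_RATE = 48_000
BYTES_PER_SAMPLE = 2

def frame_bytes(frame_ms: float) -> int:
    """Bytes of PCM the capture API must deliver per frame."""
    return int(SAMPLE_RATE * frame_ms / 1000) * BYTES_PER_SAMPLE

def frames_per_second(frame_ms: float) -> float:
    """How many fill deadlines per second the hardware must hit."""
    return 1000 / frame_ms

# Smaller frames mean less buffering latency but many more deadlines:
for ms in (100, 20, 5):
    print(f"{ms:>4} ms frame: {frame_bytes(ms):>6} bytes, "
          f"{frames_per_second(ms):>5.0f} deadlines/sec")

def on_underrun(last_frame: bytes, frame_ms: float) -> bytes:
    # One of the "all bad answers" from above, made concrete: repeat the
    # previous frame (crude packet-loss concealment), falling back to
    # silence if there is nothing to repeat. Partial frames and raw
    # silence are audible clicks; real codecs like Opus ship proper PLC.
    return last_frame[:frame_bytes(frame_ms)] or b"\x00" * frame_bytes(frame_ms)
```

At 5 ms frames the system must hit 200 deadlines per second; miss a few in a row and the repetition strategy itself becomes the garbling the comment below describes.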
Participant limits in Jitsi Meet are a bit confusing. There are a lot of variables to consider.
> 1. Room hard limit is 75 users, recommended 35 users.
> 2. The limit with more than 15 users with camera is the user’s PC.
> 3. Working test with a good bare metal server: 115 muted users and 5 users with camera.
> 4. Test in progress for 500 simultaneous users.
We are available at email@example.com
I'm not being facetious. If you compare the name Zoom to Jitsi, people will choose Zoom 9 out of 10 times. You won't get many people to even try Jitsi.
Zoom is quickly becoming a verb similar to what Skype used to be. Let's Skype. Let's Zoom. Everyone understands what that means.
Let's Jitsi... Let's what?
I mean, you can already look at the design if you wish, it was disclosed by Alex Stamos: https://twitter.com/alexstamos/status/1268061790954385408
TBH I'm sort of surprised they gave in to the new wave of criticism, their arguments for not giving E2E to free accounts were pretty decent.
The real reason is they want to be able to hand data over to China / NSA / marketers.
It's honestly the first time I heard this justification - where else did you hear it in the last twenty years?
Also do you have some concrete reasons to believe Zoom hands over data to marketers? That's the first time I personally heard this claim - can you link me some evidence?
As for the same old same old? It hasn't been precisely in that form but criminals have used Facebook, Tor, Email, Discord, YouTube, Usenet, Skype, MySpace, and other technologies / sites to facilitate abuse. This is merely the newest iteration.
Also, you seem to acknowledge that there are legitimate concerns/ reasons to NOT offer E2E encryption for free users. Unlike all the other services you mention - Zoom users that care about illegitimate interference in their communications would actually have a way to get E2E encryption. Unlike FB/Youtube/etc, Zoom doesn't need to offer the "free" service in order to exist - it does this as a marketing ploy to get you accustomed to their services. In that sense, withholding functionality from free accounts seems perfectly reasonable?
Yes, it is a problem, it's been a problem for a very long time. Criminals gravitate to the most convenient platform like anyone else but otherwise don't stop being criminals. It only serves to punish other people.
Perfect instrument to collect more personal data.
The objective behind verifying accounts is to prevent spammers creating lots of spam accounts and using those to spam.
However, spammers rarely care if their spam is encrypted, so putting E2E behind verification won't do anything as far as spammers are concerned - they'll happily keep spamming using the unencrypted accounts.
There's some other reason behind this that isn't about reducing spam.
I know this argument is often quickly dismissed on HN since people see child abuse or 'going dark' as an easy excuse for the government to leverage to get more control (and it has been used for this), but that doesn't mean the problem isn't serious or doesn't exist.
See this: https://www.nytimes.com/interactive/2019/09/28/us/child-sex-...
The resources fighting this are relatively small in comparison to the scale of the problem: https://www.freethink.com/videos/child-exploitation
The people carrying out the abuse are sophisticated.
I have a friend that works at WhatsApp and their entire team is focused on trying to remove groups that exist to share child abuse imagery (via metadata since content is encrypted).
I fall on the side that secure encryption is critical for all of the reasons that technical people normally argue that it's critical and breaking it doesn't work/is a bad idea, but I also understand and empathize with the difficulty encryption by default causes for the organizations fighting this abuse.
That said, I have serious disagreements with Zoom unrelated to this particular e2ee issue (https://zalberico.com/essay/2020/06/13/zoom-in-china.html), I think they don't actually care about protecting the speech of their users or securing content from authoritarian governments. It's still good to avoid them for that reason alone.
Of course, criminals are ordinary people too. They care about convenience and network effects as much as anyone. Which is why I think it’s insane that governments want to jeopardize the trust people have in proprietary, huge E2EE platforms that actually have the means to aid them in investigations. Yes, breaking the crypto may not be an option, but at least collecting useful metadata for use in investigating, and potentially ethical hacking, is an option.
I fear the day when the trust is gone because there is a very real possibility that some day many will be using decentralized E2EE chats, maybe even P2P. It’s not just conjecture of course, Matrix exists today and is already very impressive (in my opinion) in terms of usability.
The internet is opening up the concept of having nearly private communication with pretty much any individual in the world. It isn’t free of implications, but also, as more of our lives move online I feel its absolutely crucial that every day people can feel confident they’re not being monitored. The problem of CSA and other criminal behavior existed before the internet and it will certainly exist after. It’s absolutely past time to re-evaluate laws surrounding child protection, which seem to me to mostly be reactionary at this point (in that many of them are spawned as a result of a specific incident.)
Individual child abusers aren’t part of a monolithic organization with training on how to secure their comms and practice OpSec.
The number of criminals who still create evidence against themselves on unencrypted platforms (SMS, phone, etc) is significant, despite E2EE options already being available. People are even being arrested for rioting after admitting on public TikTok videos to participating.
I think the only way criminals will standardize on E2EE is if every platform and communication mechanism is E2EE by default. Otherwise they will continue to make mistakes or think they can slip under the radar.
FWIW, I believe this is the future if lawmakers don’t prevent it. A look at some E2EE software today:
- Firefox Send
The list will grow.
In my opinion, E2EE today is like TLS 10 years ago. TLS was once a nice-to-have when it came to communication that was not strictly necessary to encrypt. Today, TLS is more sophisticated, stronger, and easier to implement than ever, and damn near a necessity for anything, even toys.
Granted, E2EE is necessarily harder: since it requires application-level implementation of crypto primitives, things definitely get complicated. Still, I believe the state of the art will continue to improve, and tooling with it. Eventually there will probably be de facto libraries and maybe even OS frameworks to deal with E2EE key management, trust, etc.
To be clear, I view this as strictly a good thing and an inevitability. I don’t think transport encryption and encryption-at-rest are good enough anymore for private communication. Of course for public sites like Twitter or Tiktok it’s all you would logically get, but for any group or direct communication I now believe E2EE is slowly becoming the new baseline, and it’s mostly the complexity of it that hampers adoption.
Now that iMessage and WhatsApp are E2EE, though, there are a lot of messages flowing that, exploits notwithstanding, are "truly" private, today, and I think the number will only go up. The only real question in my mind is: who's next?
As far as criminals making slip-ups, this is guaranteed; even the best make mistakes, obviously. But assuming all criminals are foolish and stupid is a mistake; I believe there's a lot of selection bias in there, since we never hear about those who truly never get caught. Time will tell if any of this really matters or if, as usual, it's just another panic with no tangible effects. I vote for the latter, but I still believe the proliferation of E2EE will change the game in ways we can't fully anticipate.
This is not the problem. The argument is hollow.
People need to take child protection laws out of political discourse, as it's now approaching silly.
Put plainly, there will always be crimes you won't be able to catch. You prioritise resources on the most pressing ones and build up resources in the real world to tackle them in other ways. Dystopian lists on the client to control what you're allowed to say or think or report your thoughts back to the government still violates the principle E2EE is built upon.
There is no middle-ground. You either are secure or you are not. The genie is out of the bottle either way.
There are no valid arguments against encryption
> And yes, law enforcement eavesdrops for law enforcement purposes
Lawful eavesdropping is an oxymoron
In this case wouldn't they build their own solutions (potentially based on existing open-source solutions like Asterisk + Linphone or Jitsi Meet) or they might've built them already?
Phone numbers are also very easy to obtain anonymously, so I am not sure SMS verification would help track down abusers when it'll lead to a prepaid SIM or some innocent user's phone that happened to be compromised by malware.
It depends on which country really. In some places in Europe it became almost impossible to do that (sadly).
I agree that these reasons are why it's not a good idea to break or outlaw encryption since bad actors can still use it and good people that need it are blocked, but this doesn't mean that making it the default doesn't enable more abusers to get away with it that might be caught otherwise.
There's a spectrum of sophistication, if it's harder more of them will make more mistakes that make them easier to catch.
Also to clarify, specifically a reasonable trade-off for Zoom (I don't think there should be a general law that requires IDs for video software use or something).
Zoom is not a company I would use at all if you're looking for secure communications (https://zalberico.com/essay/2020/06/13/zoom-in-china.html).
If you care about secure communication you should be using something else.
It's not a reasonable trade-off in countries where you get your legs broken, your skin flayed, and your head cut off: https://www.telegraph.co.uk/news/2019/11/18/russian-mercenar...
A likelier explanation is that they want an easy way to wash their hands of it when pressed.
What you're arguing is a strawman, we agree more than we disagree.
I read it, and I think your argument is hollow. Assuming your goodwill, you are not understanding the matter at all; if not, I can only see ill intent.
I do not appreciate what you say at all. Any argument against encryption must be quashed without exceptions or second thoughts.
Since the start of the 21st century alone, an experience akin to "legs broken, skin flayed alive, and head cut off" has been a grim reality for far more than a million people by now, mostly for, really, nothing. What are we even talking about here?
Attack this argument, not something that doesn't even have a passing relation to the matter.
It seems any argument that you don’t already agree with (basically only your exact position) is classified this way.
The rest of your comment is basically incoherent, and the parts that do make sense are obviously wrong. It’s also a willful misinterpretation of my position.
People were flayed before the 21st century. Acknowledging the issues with encryption is a critical requirement in making an effective defense of it. I am not arguing against encryption.
If this is an issue you actually care about (which it sounds like it is), learning how to build consensus and honestly consider the positions of others would be a valuable skill to develop.
As it stands you’re doing more harm to the pro-encryption position (which is also my position) with how you’re attempting to defend it.
When a company says they want your phone number in order to use their resources, so they can take steps to avoid having their resources used for (certain) crimes, that's well within the bounds of reasonable.
The problem most people have is when the government takes away the use of _super important feature_ from the populace as a whole (even using their own resources), because it _can_ be used for crimes.
Those are two VERY different things.
Lower privacy "because security" is not a reasonable trade-off. It should not be. See also: https://en.wikipedia.org/wiki/Four_Horsemen_of_the_Infocalyp...
Now, I'm not saying there is nothing that can be done to reduce it. I very much hope there can be, especially if counsellors can find warning signs and we can better figure out how to spot the danger signs, both online and off.
Facebook took a good step forward by putting warnings up to minors when someone outside of their social circles has contacted many others, although there are other things which could be done.
Should they be allowed to contact them through onion routing during such situations? Where do you draw the line of when such technologies can be used? Is it better not to open this can of worms and risk a slippery descent? What are the chances of false positives, will it unfairly impact relatives? Will it give a black mark to privacy technologies and civil liberties to be associated with automatic blocks? What if minors want to engage in activism, should this be limited? At what point does pushing and pushing start the lie about your age shenanigans again?
This is about Facebook here but it ties back to arguments about doing this or that for the greater good.
Is a more grounded approach better? Ensure minors are well-educated of the risks and dangers online? Invest in mental health services to avoid minors falling into depressive slumps where they might be susceptible to such criminals? In the rare event they drag anyone back home, whether they think they're of a similar age or not, they bring them before the parents first?
My personal political answer to "how to have end-to-end encryption and prevent its use for child rape" would be to tax the companies which profit from E2EE, and use that money to fund death squads, which livestream dragging child rapists out of their home, anywhere in the world, and beating them to death with truncheons.
I'm joking, of course (or am I?) but I do consider this the general shape of a viable solution. E2EE is essential for a modern life which isn't a hellish surveillance dystopia, and the detection and prosecution of child rape is criminally underfunded.
In which ways do you think it is underfunded?
CPS should be able to spot children in abusive homes and respond to reports of unusual activity. They should be able to spot clearly unstable caretakers.
Counsellors and teachers should be able to spot unusual behaviour from children. Mental health services can help someone escape falling into such a situation in the first place by keeping them from falling into depression which leads them to rely on such a person.
Local police shouldn't dismiss leads so readily. This is the "it is impossible for him or her to do such a thing" mindset which prevails so frequently.
Parents shouldn't trust their relatives so readily and should keep an eye out. 90% of cases happen at home.
If they stopped showing off their crimes online, would the entire system come to a crawl? I'm worried by how much of a reliance there is on divining crimes off the internet.
Child pornography gets held up to the public a lot because it's a crime nobody can defend and walk away from unchanged, no matter what you say. If you publicly contest this move for privacy reasons, you're automatically defending the worst child molester someone's mind can come up with.
The one, admittedly terrible incident, will shock people and they will push exaggerated means to "stop" it. Ones which just so happen to feed tons of information into the NSA machine.
People come up with stories of live-streamed child pornography too but do these children live in some parallel universe where crimes can be committed against them without recourse? What is the police doing? Did they not find suspicious behaviour in a neighbourhood? Did a counsellor not pick up on it?
Yeah sure, child pornography is awful but why is this part of the equation the only one that is ever mentioned? Why is it always about encryption or anonymity?
with added bonus of no central server
There is no need to use my cell phone or any telephone number when you already have a means of communication via email or any other channel.
Nationalistic and ethnic flamewar will get you banned here. So will personal attacks and insinuated slurs.
People have been hounded off this site in the past by comments along these lines. That's shameful, and we want no more of it.
No, we're not defending communism or the communist party. We're trying to defend Hacker News against (a) mob behaviors and (b) self-immolation. Here are some recent comments about this, which include other links to plenty of past explanations.
When will this meme die?
Zoom is NOT a Chinese company. It is incorporated in and headquartered in the US. Like any American company ever, it follows US laws in the US, and local laws in other countries where it operates. End of story.
Yes, the company certainly has stronger internal ties to China, due to the number of Chinese employees, but what has that got to do with anything? At the end of the day, they're a public, profit-driven corporation trying to make lots of money across the entire world.
It's not like they're secretly and nefariously doing the CCP's bidding, which seems to be the veiled suggestion people keep making.
Seriously, every time someone brings up that Zoom is "really" a Chinese company, it comes across as borderline racism or conspiracy-mongering or both. And while I'd usually never comment on someone using a throwaway account, in this case, when you're pushing these kinds of shady "stronger than the more-commonly-discussed" insinuations, I think using a throwaway here is representative of exactly the kind of astroturfing that spreads malicious rumors without evidence.
>The suspension targeted Humanitarian China, an organisation based in the US, after it held a call with roughly 250 people, including a number who dialled in from China.
They said it was wrong to do, reinstated those accounts, and are building the functionality to enforce those Chinese laws without ever impacting users outside China.
That's from their blog. https://blog.zoom.us/wordpress/2020/06/11/improving-our-poli...
They are actually admitting that they're going to prevent people IN CHINA from connecting to a meeting that is presumably hosted IN THE US. That doesn't make it better, it makes it WORSE. You're basically telling the world that China will dictate how you operate WORLDWIDE, not just in China.
There's a pretty big difference IMO.
It would be the same as if they blocked Gmail for US citizens because of discussions related to China.
What they aren't doing is actively blocking users from China getting to www.google.com, and they aren't censoring searches Chinese users do on www.google.com because those resources aren't hosted in China and aren't subject to Chinese law.
Zoom, on the other hand, IS blocking users from China from accessing resources in the US, and while they may have "undone" the ban, they banned US users from their platform for breaking a Chinese law those users aren't subject to: namely, talking about the Tiananmen Square massacre.
Unless you've got a magic bullet to give Wall Street a conscience, that wasn't going to happen.
By the way, my general understanding is that the "beholden to their shareholders as a public company" belief (that they're thus forced to make the most remunerative decisions) is a myth; a public company is free to make ethical choices (if its major shareholders don't oppose them).
Based on a complaint from China regarding non-Chinese citizens with non-China accounts, Zoom cancelled the non-China accounts.
Full stop. Let that sink in.
If you are wondering whom Zoom will accommodate, you have your answer. This is a fact. This is what we are calling "racism"?
Its entire content is an argument that Zoom is evil because Eric Yuan is Chinese-American. It doesn't say a single one of the things you're saying.
We can be upset at power structures and at their effects. Taking that further to make assumptions about groups of people is racist.
You'll get a Liu Xiaobo once in a while, but much more often you'll have people who, while possibly wonderful as individuals, are disgracefully forced, or have been convinced, to do things that they shouldn't do.
He is a naturalized American of Chinese ancestry.
In history, this kind of scapegoating was counterproductive.
Though I think the CEO is less relevant than the critical amount of developers Zoom relies on that are directly living under and subject to CCP malfeasance.
Could you give a few such cases?
That's definitely one area where the Chinese system is vastly behind the international standard. But I had not been conscious of this.
When I was a child, we heard horror stories of people sentenced to death and executed on superficial charges of sexual misconduct. And my parents witnessed a detained thief beaten like a wild animal by policemen; the thief's screams were heard far across the then-poor village.
These measures can even escalate during "Yanda", a nationally-coordinated clampdown on criminal activities. In , it was noted: "China's execution rate increases dramatically during Yanda campaigns."
Such brutal national campaigns are dying down, the most recent one  has much less cruelty. And historically such campaigns enjoyed universal domestic support.
As of today, the Chinese government has a high degree of popular endorsement to use whatever not-too-out-of-line measures to bring back corrupt businessmen or former government officials being prosecuted by the public attorneys. This is thanks to a grain of national pride, i.e., "those corrupt bastards not only embezzled our money, they escaped to a country that is unfriendly and stands to benefit from that fortune".
I will say... it's curious to me that you seem to be well aware of such practices but still challenged the parent to provide examples (the implication being, that they were misinformed).
Why did you ask the parent to provide examples if you knew of some already? What's your motivation?
#2 I was not aware of the details of the harassment of corruption suspects' relatives.
#3 I think it was reasonable to not be able to link these two facts together as self-evident.
#4 I wanted to emphasize the fact that such behavior does have popular support, which means corrective measures must be based on educating people, not assuming that a certain way of thinking is automatically universal.
Just to be clear, I'm not suggesting that China is unique in doing this. They are simply much more openly aggressive and nonchalant about it.
EDIT: I also think it was both reasonable and correct for you to ask for examples. Superpowers do enough sketchy things that we don't need to be muddying the waters with made up claims.
Do you not understand the racism behind assuming someone is a CCP agent based on nothing more than their race?
Edit: Let me also state that I'm not entirely sure what OP's argument was, considering the comment was deleted. I'm merely stating that there seems to be some cooperation with the CCP and Zoom.
What we have here is going from "CEO was born and raised in China" to "No wonder why they have to play party with CCP" and "inevitable ties and implicit subservience to the CCP". I can't see the logical connection in the absence of other information. Don't you see a problem with assuming someone's motives purely from their country of origin?
1. Zoom's application is sending data to Chinese servers separate from the application functionality servers.
2. The CEO is from China; I'm going to assume he has relatives in China.
3. We know the CCP is a completely fucked up government with an absolutely horrible history of civil rights violations, genocide, etc.
I wouldn't put it past the CCP to be pressuring the CEO by threatening relatives who live in China. This wouldn't be unheard of for the CCP.
Add the unnecessary data transfer to Chinese servers and it looks really bad.
It's a fairly reasonable conclusion that the CEO is compromised if all of the above holds true.
More extreme conclusions could just as easily be drawn from that same data: that he is literally a foreign agent for China.
It'd be one thing if there are actually some nefarious ties between Eric and CCP, but all we are going by is he's originally from China and there could be influence by CCP on people from China. It's not bad to point out a connection, it's bad to point out a possible connection based on nothing more than where the guy is from.
It is best to focus on these sorts of links rather than someone merely being from China.
The issue is national origin, not ethnicity. Japanese Americans were thrown into camps for the same reason, but we're not talking about jailing anyone here. We're talking about avoiding a specific product.
Another difference is that Japanese Americans were put into camps regardless of how many generations removed from being Japanese they were. No one is arguing that CCP has control over Chinese Americans whose ancestors immigrated here in the 1800s. It's about people who literally grew up in China and/or still have close family there for CCP to threaten.
EDIT: Also sounds like you're saying that it's ok to avoid doing business with someone based on national origin, which I also find problematic.
Sure, it can be problematic. I've seen articles about how Russian people in the software industry are having a very hard time because of what Putin's regime does. It's not fair for the people who have nothing to do with Putin and no exposure to him.
But what is the alternative? Putin and Kim have assassinated dissidents in Western countries. Do we assume people can't be coerced just because they left the borders of an authoritarian country?
Should the US government also remove its nationality restrictions for security clearances?
I mean, overall you really don't find it an issue to blanketly judge an entire class of people based on what some people within that population do or could do?
It absolutely is not a protected class. There are no protected classes when I am deciding whom I trust with my personal data. I can discriminate for any reason, including national origin.
> I mean, overall you really don't find it an issue to blanketly judge an entire class of people based on what some people within that population do or could do?
I would find that an issue if anyone (including me) were proposing it. We are not. You're attacking a straw man.
Here are the facts, regardless of Yuan's citizenship, race, etc:
1. Yuan grew up in China. He still has Zoom employees and family there.
2. China is controlled by a regime that has no qualms about using physical threats and violence to maintain control.
That's it. That's all I need to decide that I don't trust Zoom, if all of their extreme dishonesty and malware installations weren't enough. They haven't shown good judgment, and even if they did, it would be easy for CCP to put pressure on Yuan (or any other employee living in mainland China).
If Yuan had no family in China, no employees, and enough bravery to speak against CCP, I would not feel this way. I am not judging an "entire class" of people.
By the way, every firm that requires security clearance does judge entire classes of people as security risks. The question I asked, which you didn't answer, is whether you think that's also inappropriate.
You listed two things. The first is where he's from; the second is the politics of the country. You are then basing your judgement (at least in this comment) purely on those factors. The implication here is that you wouldn't trust your data to anyone who was born, grew up, and has family and/or employees in China. I mean, most of the large tech companies have some employees in China. How is this not judging an entire class (or group, if you'd like) of people?
As far as security clearances go, they are at least in theory assessed based on established facts about a particular person. E.g., being born in China doesn't automatically disqualify you, as far as I'm aware. If you know otherwise or can point to examples, I'm open to being corrected.
I mean, if it's been established that Eric has connections to the CCP, then that's a different matter and we can look at that. My objection is with "Eric is a Chinese-American billionaire businessman so we shouldn't trust him".
At the risk of a slippery slope fallacy, institutional xenophobia ain't controlled by an on/off switch. Dehumanization is a gradual process, and establishing an attitude that people associated with an enemy are aligned with that enemy is part of that process. At first those associations might seem reasonable, going for officials and other important figures, and then perhaps their family, and so might the actions against them, like added scrutiny and surveillance of their communications and travels. The problem is that both ends of that are prone to scope creep - the target set broadens ever so slowly (citizens, ex-citizens, descendants of (ex-)citizens, their descendants, and so on, almost always excused with "well we need to be sure that $CURRENT_TARGET is not part of $PREVIOUS_TARGET"), while the actions worsen ever so slowly (surveillance, profiling, travel restrictions, property confiscation, imprisonment, sterilization, execution) as the rhetoric heats up from "we just want to make sure these people aren't the enemy" to "these people are the enemy and shall be treated as such".
Personally, I'd prefer to nip that in the bud rather than watch 1800's-era sinophobia reenact itself at the expense of my Chinese-American friends and colleagues. I also have enough self-awareness to know that if I would be upset by people writing me off as "will probably help oppress minorities and political dissidents if his government tells him to do so" simply because I happen to be a citizen of a country with a track record for oppressing minorities and political dissidents, then I should refrain from doing so to a citizen (let alone ex-citizen) of a different country with those same tendencies, even if those tendencies are, in my opinion, much stronger.
Of 127,000 Japanese Americans living in the continental United States at the time of the Pearl Harbor attack, 112,000 resided on the West Coast. About 80,000 were Nisei (literal translation: "second generation"; American-born Japanese with U.S. citizenship) and Sansei ("third generation"; the children of Nisei). The rest were Issei ("first generation") immigrants born in Japan who were ineligible for U.S. citizenship under U.S. law.
What do you suggest?
Should there be an upper limit on the number of generations before someone is no longer considered to have originated from a nation state? Judging by what happened to ethnically Japanese Americans in WWII, that number seems to have to be > 3?
And remember that what happened in the WWII Japanese internment camps is evidence that "national origin" as an association was plainly wrong. From the same wiki page:
In 1980, under mounting pressure from the Japanese American Citizens League and redress organizations, President Jimmy Carter opened an investigation to determine whether the decision to put Japanese Americans into concentration camps had been justified by the government. He appointed the Commission on Wartime Relocation and Internment of Civilians (CWRIC) to investigate the camps. The Commission's report, titled Personal Justice Denied, found little evidence of Japanese disloyalty at the time and concluded that the incarceration had been the product of racism
Emphasis on the last statement: `the incarceration had been the product of racism`.
It explicitly argues that he's evil because he has a Chinese name and Wikipedia describes him as Chinese-American. There's no equivocating about the software being China-based, or him having close family in China for the CCP to threaten.
And discriminating against people for national origin is considered so bigoted it was explicitly included in the 1964 Civil Rights Act. Your reasoning is sound, but you should seriously reconsider your basic ethical principles.
Not to justify atrocities happening in China or getting into Whataboutism, but just to give an analogy, would it be fair to consider any US expat an accomplice in or a proponent of separating migrant children from their families at the border?
I believe Zoom has earned the privilege of folks being highly skeptical of their actions / motivations.