Zoom to bring end-to-end encryption to all users, including non-paying (zoom.us)
845 points by jmsflknr 16 days ago | 524 comments



I find this story arc with Zoom amusing:

1. Pre-COVID Zoom claims it has E2E encryption for everyone.

2. During COVID Zoom grows in popularity, which prompts journalists to discover that Zoom's claims of E2E encryption are inaccurate.

3. Zoom admits that it never had true E2E encryption, but announces they will develop it and it will only be available for paying customers.

4. Zoom gets another wave of criticism for restricting its new E2E encryption service so it walks back to its original message that all accounts get E2E encryption.

Given their track record, I'd expect this timeline to repeat itself: after they release this E2E encryption feature, security researchers will discover that it's not true E2E encryption again.


I'm actually more alarmed than I was before the announcement, because it indicates that there wasn't sufficient pressure for them not to do this. Watch them put the key in a predictable memory location, then have a subtle vulnerability elsewhere that lets them exfiltrate the client-generated key at any time. Anyone with views that might be dangerous to reveal to state actors should be very, very wary.


Anybody that trusts zoom with anything even slightly sensitive these days is completely nuts.

Yes, we know it's easy to use.


99% of people do not care about privacy. They won't switch programs because a nation state might spy on them.


A majority cares little about privacy: maybe. 99%: where's your proof? Or is this part of the 73.6% of the made-up statistics?

The other thing that is wrong with this sentiment is that "privacy" is not a binary thing: you don't "have privacy" or "not have privacy". You don't "want it" or "not want it".

Privacy and the need for it, is heavily dependent on context. You want more privacy when watching porn than when watching some sports. You want more privacy when you are a minority than when you are part of the ruling class. And so on.

Edit: so a person wanting to escape religion and seeking online help on how to do so needs different privacy than three friends having an online beer. Everyone has moments and time where they need (some) privacy. So probably 99% (a made up statistic) has a need to replace software in some (rare) context, over privacy matters.


I measure values by how much you are willing to give up. How much of a pain in the ass will you endure to get privacy? How much will you sacrifice to lessen your carbon footprint? Will you march in the streets and get tear gassed in order to protest for your social activist movement? Anyone can say they believe in anything. It's your actions that have an impact.


I guess that is, to some extent, the KPI for "wanting privacy".

Though privacy is also something people want "afterwards".

Something you wish you'd taken care of when it's too late. When your identity is stolen and used to get hundreds of speeding tickets in your name. When your son's pictures were lifted off your Facebook to bully him after you had your 15 minutes of fame, and so on.


Most users care about privacy in the sense that they don't want random people to see their communications. Most people don't care about privacy in the sense that they are concerned about the service provider divulging their communications.

Gmail does not use end to end encryption. Yet it's overwhelmingly popular, and perceived to be secure. Google has a huge incentive to keep Gmail secure, and that's enough for most people.

With respect to Zoom, I am fully convinced that if the PRC (or the USA for that matter) wants Zoom to compromise a given account they'll do it. But that's irrelevant to the overwhelming majority of people. Sure, companies like Google and Microsoft should not use Zoom nor should activists or other people that might attract the ire of governments that have leverage over Zoom. But that is a substantial minority of use cases.


Yes they will. You need to be thinking about LGBTQ people in many non-Western countries.


They are the 1%. 99% of people consider "privacy" a good value in abstract but will not lift a finger to protect their own privacy. It's virtue signalling.


I absolutely hate the term virtue signalling. It's always reductive and dismissive. If I am willing to go a LITTLE out of my way to protect my privacy, but not a LOT out of my way, am I "just virtue signalling"? If I continue to use a privacy-less platform (e.g. zoom/instagram/facebook) but just exercise caution with what I say using that medium, is that also "just virtue signalling"?

I agree, evidence shows most people are not willing to go very far out of their way to defend their privacy. But I also think privacy is a genuine virtue, and a desire for it is present and often untapped. Why else would Apple have run privacy-centric ad campaigns? Attempting to tap into the weak but widespread desire for privacy, I think.

Also, side note, lgbtq people make up somewhere in the range of 2-7% of the population, not 1%.


Man you really hit the nail on the head with this. I’ve never been able to articulate why the term bothers me so much and you just absolutely nailed it.

It’s like the derogatory “social justice warrior.” What? It’s bad to give a damn about people and advocate on their behalf? If having empathy means I’m “an SJW” then I’ll gladly wear that moniker.


The social justice and warrior part once hinted at the oxymoron of enforcing solidarity with violence. It was never meant to be criticism of social justice. The dialogue has since degraded beyond nuance. The term is probably 10 years old at least. Today it probably just means a left-leaning person to most people.


Social justice warrior is an oxymoron. It's a way to call someone a hypocrite because those who are called SJW usually exhibit behavior that is exactly the opposite of what they claim to stand against. e.g. fight discrimination by using discrimination in a different context

https://slatestarcodex.com/2014/09/30/i-can-tolerate-anythin...


I measure virtue by how much you are willing to lose for the sake of your values or because you want to help other people. If you aren't willing to give up anything, you're not virtuous no matter what you say. Actions matter.

If you say you care about privacy but do nothing to protect your privacy, you don't really care about privacy in any way that matters. Since you mildly inconvenience yourself for the sake of privacy, I can conclude based on this very limited evidence that you mildly care about privacy, which is a whole lot more than most people care about it.

Apple markets itself as privacy conscious because that makes Apple look like a trustworthy company. That's a rare and beneficial appearance that a company can maintain to get more business.

Not all gay people care about their privacy. Plenty live in states where they feel more free to open up to their friends, family, and neighbors.


Is it, would you say, virtue signaling to say something others like but not actually do anything inconvenient about it oneself?

Yes and more than that. It's virtue signaling to say you have some value when you have no evidence to back up that claim.


100% agreed. It's almost exclusively used as an ad hominem by people who lack that particular virtue to dismiss people who espouse a caring viewpoint.

If you want to argue that your target does less to further that particular cause than you do, fine. If you want to argue the cause is misguided, fine.

But using the term is just lazy.


It’s like “PC” or “fake news”, it was originally meaningful but got co-opted by people who use it in bad faith.


Just curious: Fake news was coined by Trump, so you're agreeing it was a meaningful usage?


It was not coined by Donald Trump[1][2]:

> Author Sarah Churchwell asserts that it was Woodrow Wilson who popularized the phrase 'fake news' in 1915, although the phrase had been used in the US in the previous century.

> The term actually dates from the late 19th century, when it was used by newspapers and magazines to boast about their own journalistic standards and attack those of their rivals. In 1895, for example, Electricity: A Popular Electrical Journal bragged that “we never copy fake news,” while in 1896 a writer at one San Jose, California, paper excoriated the publisher of another: “It is his habit to indulge in fake news. ... [H]e will make up news when he fails to find it.”

[1] https://en.wikipedia.org/wiki/Fake_news#20th_century

[2] https://www.csmonitor.com/The-Culture/In-a-Word/2019/0726/Su...


It was always dismissive. Serious people used already existing terms like propaganda or yellow journalism.


Ironically, it was not coined by Trump.

https://twitter.com/CraigSilverman/status/522179364767924224 and https://www.theverge.com/2014/10/22/7028983/fake-news-sites-... are good examples of its use pre-Trump. By the time Trump started applying it to actual news outlets, it was already in relatively common use.


The irony runs deep in this case. The examples you point out are good ones for the term in common use through the end of 2016. Then this happened:

https://www.cnn.com/2016/12/08/politics/hillary-clinton-fake...

In December 2016, Hillary Clinton decides to use the term in reference to recent events like her losing the election. The media picks up on it and the term starts to lose its meaning. Then Donald Trump, always with a nose for catchphrase politics, picks up the term and runs with it, a turn of events that someone on HN called a huge "own goal" by the media, and I can't say I disagree with that assessment.

For more analysis of this cultural specimen:

https://www.bbc.com/news/blogs-trending-42724320


It's possible that the term existed before Hillary/Trump but your examples aren't it.

In your examples the word "fake" is used as an adjective to the noun "news site". As in "a fake news site".

"fake news" as it is used today is a noun in its own right. As in "that's fake news"

See also "alternative facts".


No, and it's total bullshit that we let him get away with re-claiming it. Originally, it was for stories that were ludicrously false and made up. It's like the stuff you see in tabloids. It was being passed around on social media as real news. Things like Obama being a secret Muslim, and not a natural born US citizen. You know the things that launched Trump's campaign.


Wrong, it originally referred to fabricated political news stories (generally pro-Trump) circulated on social media during the 2016 presidential campaign. Trump later co-opted the term to mean any unfavorable news coverage.

I disagree. Metaphorically speaking, online discussion is full of literal butt-heads, and the term butt-head is valuable to have in the toolbox even if some people call everyone they don't agree with a butt-head because it's easy and they're lazy. I don't really care if we call them butt-heads or ass-hats as long as there's a more or less agreed-upon word for it.

Virtue signaling or whatever you want to call that sort of in-group circle jerk is fundamentally unproductive and self-rewarding because it's just a reiteration of existing group beliefs that basically everyone already knows and has. There's no term you can use to describe that behavior that will not make people uncomfortable when you call it out because at the end of the day you're calling the behavior unproductive and selfish.

For example, someone goes on Reddit and asks "should I put economy tire A or economy tire B on my 20yo, $2k car, also I have $250 to spend". The most popular answers will invariably be "you should have dedicated summer and winter tires" and "you should use jack stands when you change tires", and many duplicates thereof, many of which will suggest that anyone who does not do these things is not someone of good character, deserves to rot in hell, is a burden upon society, etc, etc. Neither of these sideshows is productive discussion. Both of them serve purely to signal to the in-group that the signaler believes something the in-group already believes. Of course everybody wants the greatest tires and nobody wants a car falling on them, but the former is out of scope for budgetary reasons and the latter is not a concern when simply changing tires, never mind that installing tires usually adds no cost over having them put on rims. The people circle jerking it to jack stands and fancy tires are negatively affecting the discussion for anyone who cared about economy tires. Virtue signaling takes legitimate, relevant content and displaces it with low-quality junk. I'm more technical than I am cultured, but I see this kind of behavior across multiple topics, and I'm sure that any serious amateur film critic, book critic, etc. could come up with a similar example from their niche.

While that example is hypothetical, you can see similar ones play out across a multitude of topics, and I think this does legitimate damage to civil discourse. It's like how searching the internet for a service manual for an appliance used to yield results, but now it yields a million sites that don't have what you're looking for but will try to sell you something. I see virtue signaling as having a similar quality-degradation effect, but on legitimate discussion. We can no longer have detailed discussions about complex issues, regardless of subject, because they all get derailed, bogged down and diluted by people showing up to signal their virtue on a higher-level issue. Take for example the recent discussion about police. Many on the left and right have a litany of small points on which they agree. Any public discussion about common ground, like fewer MRAPs in the hands of suburban police departments, gets drowned out by riff raff showing up to tell the world which team they're on. At scale this is damaging to civil discourse.

In conclusion, I think that having a nice, short, two-word, pop-culture term to describe that behavior is useful because it makes it easier for those who may not otherwise be articulate enough to do a good job of calling it out when they see it. Of course some people are gonna abuse it, but I think that's just the nature of it being a negative thing, since all negative things become a name you can call someone or their actions.


That article gives such a bad argument that I feel I have now become more pro using the term than before. All of the arguments have huge exceptions or contradict each other.

"Signaling" being a term used by some niche fields: that kind of borrowing happens all the time. It's how language evolves.

The article says to use "show off" instead of "virtue signaling". Right after that, the next argument assumes the person is disingenuous, which is exactly what you're doing if you say they're showing off. The two arguments can't both be used. They are arguments against different things.


"To an asshole, all virtue looks like virtue signaling." -- https://twitter.com/drvox/status/1273472663621529604


[flagged]


At some point we are just circling a drain of nonsense when we make points like that.


Attempting to remove a word or phrase is exactly the kind of impractical behavior I expect from a virtue signaler. And the annoyingly pedantic reasoning about how it's not technically signaling the way economists use the term is just the cherry on top. Virtue signaling is real, and so is the misuse of the term, like every other, but so what.


This seems like pro-virtue-signalling-as-a-term virtue signalling. We get it, you want all the other people who use the term virtue signalling to like you.

ALWAYS dismissive and reductive. ALWAYS. It's not a good faith argument. If you think someone is all-talk-no-action, accuse them of being performative. If you think someone's cause is dumb, attack that. You can call ANYBODY advocating for a viewpoint a virtue signaller so it is an entirely meaningless attack.


The problem is that the action condemned by the "virtue signaling" label is real, widespread, and destructive: always someone obnoxiously coming out of nowhere with some usually inconsequential moral high ground. I think where we will disagree is how destructive and common it is, and how much it needs to be called out. I personally think it's one of the worst traits of modern social media discourse.

Maybe some people are too quick with the term, there always will be people like that for any label. But just consider all the people just as tired of real conversation being shackled by "virtue signaling" as you are of being called a virtue signaler.

Also, you could write your disgruntled comments about any label. We are certainly way too quick to find racism where there isn't any. Yet there are also real racists out there complaining that the charge of racism is really getting on their nerves. Does that mean we stop charging people with racism when we see it? No, we're trying to call out unacceptable behavior.

If we're getting into pet peeves, mine is "gaslighting". Everything is "gaslighting" now. Accidentally spread a small inaccuracy? Gaslighting. Tell a public lie? You're now "gaslighting the world". Sharing my disagreement with you, like this comment? I'm gaslighting you.


I guess my point is that the term "virtue signalling" is used to undermine someone's belief that something is an actual virtue.

If I say "don't be racist" and you call me a virtue signaller, you are implying that I don't actually care if you're racist, I just want social points for being "good". The problem is you don't know if I care or not. I might care deeply. I might dedicate my life to being anti-racist, but you can just label it virtue signalling and move on? That's an intellectually dishonest maneuver.


Virtue signalling to me is to do something over the top with little practical utility, or which is even counter-productive, to show off one's apparent virtues.

It can be incomparably frustrating as it focuses on non-issues or minor issues and misses the bigger picture. I personally don't see it used outside of those contexts but it shouldn't be used to dismiss actual action like doing your part against climate change.


In what way is there a difference between virtue signaling and performative?


The difference I'm hoping to draw out is that virtue signalling implies the person doesn't actually care, just wants to be seen as virtuous.

Buying CFL lightbulbs is performative climate action.

Buying CFL lightbulbs is virtue signalling climate action.

One of them is about the action - buying bulbs isn't effective, the other is about the person - if you buy bulbs you are fake. Perhaps "performative" fails at making this distinction clear enough, but I'd like a way to imply that someone is all-talk-no-action, without undermining their actual belief in something.


The idea that you can't attack someone's behavior is ridiculous. If you want to consider what I'm doing virtue signaling then go right ahead, I guess.


Yes. You calling other people for virtue signalling is a form of virtue signalling in itself.

Don’t you realize how ridiculous and useless this argument is?


Don't you understand that I am arguing that virtue signaling is real. You saying I am virtue signaling is only agreeing with my point.


Other comments aside, this isn't even using the term "virtue signaling" correctly.

There are plenty of things that people agree with in the abstract, but don't do enough about to make a difference. Probably everyone agrees that the environment should be cleaner. If I agree with that, but "don't lift a finger" to make it better, how am I virtue signaling? It's the opposite.

"Virtue signaling" involves doing some token thing to get the kudos. Smugly saying you only use DDG for web searches. Putting a Tor sticker on your laptop. Etc.


If you go out on the street to give homeless people food but your main motivation for doing it is so you can take a quick selfie for some feel-good reactions on Instagram and Facebook, you're definitely virtue signalling in a particularly irritating way. But those homeless people still got some food as a result, and somewhere else, someone else might see your posts and decide to do the same thing (with or without the selfie, depending on how much of a self-absorbed ass they are). Either way, the net result is that now more people are giving food to the needy in this simple example. A net benefit is produced by self-absorbed behavior.

When virtue signalling can become dangerous however is in situations where something based on a mistaken notion of doing good (when in reality it does more harm) becomes a fad and people who virtue-signal keep promoting it into wider popularity.


I don't think people become more virtuous by saying a few words and doing nothing. If you say you care about the environment but do absolutely nothing about it, you don't care in any way that matters. It's just a social signal without any impact on the world. Buying a Tor sticker doesn't promote privacy. Buying and running a Tor node does.


Most, yes, but I don't think it is only 1%. Virtue signaling is about increasing your standing with a social group through a feigned declaration of values. You can be sloppy with your privacy and still think it to be of utmost importance. Nobody pats you on the back for being privacy aware in the mainstream. On the contrary, it is looked down upon because it is associated with tin foil hats right now.


To be pedantic, LGBTQ people are ~5% of the population in the US, with higher or lower numbers in other countries. Source for the US: https://en.m.wikipedia.org/wiki/LGBT_demographics_of_the_Uni...


Every single day I see more and more 100K liked tweets about this or that creepy person. I don't think people don't care about privacy, they just don't see the through line because it's abstract.


Zoom is being used by at least one court system for remote hearings. Their barebones IT people are completely incapable of independently verifying Zoom's encryption claims.


Would they use Jitsi if it was fully managed for them and free?


Apologies: rereading I realize you asked a question; I thought you were making a recommendation.

I don't know anything about Jitsi. Will research. Thanks.

At the start of the pandemic, my friend at this particular court was tasked with figuring something out. The initial plan was to choose a webcam and software combo for everyone to use. And then they'd be futzing with Windows boxes of unknown provenance, trying to do remote tech support, etc. Who's got time for all that? For instance, they told me one of the new Logitech webcams they tried doesn't have Windows 10 drivers.

I recommended they just buy a cheap iPad for each location/participant. Create an iCloud account for each device. Use FaceTime. Buy mounts or tripods as needed.

I haven't heard back what they finally decided.

FWIW, I since learned they were also having uninvited people join their confidential sessions, just like the naked guy showing up for online classes. Such a mess.


How many people want to deal with an inferior product? I used Jitsi for over a year. There's no deal breaker, but there are so many little or medium issues where it is inferior to Zoom. It's a hard ask (outside specific vocal minorities online).


Governments deal with inferior products all the time. At least in this case, there's a good reason to do so. The security of court systems would surely be higher priority than "my grandma couldn't install Jitsi" if they were pushed to choose.


Outside of court systems and some or a lot of government stuff, what use is there for inferior products vs Zoom? I don’t think China or whoever else is the boogeyman right now is going to care what mid level execs at most companies are talking about.

Not sure why you're implying that security in the private sector is useless. China has been known to hack and infiltrate private companies. Fun fact: this is why Google left China in the first place. You think China doesn't care what employees are discussing at Intel or AMD? You think they're not interested in worming their way into Apple's messaging systems to identify and monitor dissidents?

You think the backdoors that Zoom are likely creating for the Chinese government won't ever be found and used by malicious hackers?

You really think the Chinese-American human rights activists whose Zoom accounts were identified and banned from a private video call with Chinese-based allies and friends and family are thinking "what use is there for inferior products?"

Oh well, we live in free countries. You're free to open your company and possibly your Chinese colleagues/friends to China or whoever else is the boogeyman right now.


It looks like many Americans do now.

https://news.ycombinator.com/item?id=23424245


That’s a drop in the bucket. A spike of a couple thousand downloads is a rounding error to the big platforms.

Downloading of Signal doesn’t signify privacy concerns. I have Signal because others do. Not because I care about signal’s privacy.


It's still a noticeable increase, though. Taking into account that this was triggered by mass protests, it's very likely many of these people are concerned about government snooping.


It’s a noticeable increase from almost nothing though. The increase would be just as dramatic if all the numbers were 10x smaller too. I’m sure at that point most would agree it’s just too small of a number to matter.

99% of people don't understand how the modern world actually works, in which your data is the product.


It's far too black-and-white to hold the belief you wrote. Pessimism is an indulgence.


99% agree


This is sad but also extremely true. At my company, Zoom was suddenly rolled out en masse to everyone, despite the fact that we are a technology company and should know better.


I would very much like not to use it but I am forced by my employer, including to divulge sensitive info like my passport.

Yes, you have read that right. I was forced to make a zoom call and hold my passport open to the camera. No, they wouldn't accept a scan and an email, or even a call through Jitsi. This is a major public institution with a 200+ strong IT department consuming millions of pounds a year. This is the moronic "enterprise" stuff those millions of pounds buy.


The same issue is present if you simply transit through Hong Kong International Airport.

The government of China border guards scan everybody's passport details.

It's unfortunate that a passport number is a form of ID.


I gave Jitsi a spin last week and was surprised at how easy it is to use. If you have a browser you don't even need to install anything, as long as you're not on mobile.


Jitsi is the way to go as far as I know. It passed the grandma test.

https://meet.jit.si/


Doesn't pass my browser test. Empty gray page.


You mean to imply that a business would just lie to customers?

Come on, the market wouldn't permit that to happen! They'd lose all their customers!

/s

Edit: on a less sarcastic note, I'd be less critical of Zoom if their software were open source.


What makes your comments even better is that Zoom's response from the get-go has basically been "Look at all these large companies that are using our service. Would they be using our service if we weren't secure?"

Meanwhile the companies in question universally refuse to acknowledge THEY NEVER ACTUALLY VERIFIED ANY of the claims around encryption. It would be hilarious if it weren't so terrifying. And oh, by the way, all of those companies refuse to admit they messed up so they ALSO haven't switched to another service, so Zoom is literally still selling on "If we weren't secure, these big guys wouldn't be paying for our service". It's insanity.


> Meanwhile the companies in question universally refuse to acknowledge THEY NEVER ACTUALLY VERIFIED ANY of the claims around encryption.

In most companies I've observed, the people deciding what products to buy are not capable of reviewing any of the product's claims. If they happen to have an employee that is capable, and that employee points out a problem, they are usually ignored. Especially if it would make someone in management look bad for spending money on something they shouldn't have, or even worse if it would make them lose their free lunches and golf trips with their vendor buddy.


I've been in the interview loop as of late, and there's been a crazy shift away from Zoom. Almost every video chat I had was using Zoom a couple months back, now everyone is using Google Chat or MS Team Meetings.

Now these are small to medium-ish size companies (20-500 people), so maybe it's not a big deal to Zoom's marketing bottom line. But it's definitely a thing.


Is anyone choosing Teams for Teams alone, and not because of an existing relationship with Microsoft? That's Microsoft's playbook.

People are doing 50-200 person video meetings with Google? Has Meet really improved that much since March/April? That’s not that long ago. Otherwise it seems more like cost is the reason. Not anything relating to quality.


We're using Google Meet, but for the very Microsoftian reason of "it comes with our GSuite"...

While we pay lip service to Zoom's super shitty security stance, we now run a video meeting service where it's trivially easy to click the "turn on captions" button, and see how good a job the world's biggest advertising agency is doing of transcribing all the audio in our most sensitive business WFH calls... :sigh:

(I have my own Jitsi Meet instance running on AWS, but there's me and about four of my tinfoil headwear sporting friends who care enough to bother using it...)


It’s the business model. Scaling up Zoom or Webex is costly because it’s based on meeting hosts.

Teams or Meet works fine (unlike the train wreck of Skype), have improved recently and are already paid for.


The National Australia Bank (one of Australia's "big 4", and our biggest business bank) settled on Zoom for all internal video comms. Its usability made such a material difference to daily ops that physical meetings were halved well before COVID. I believe the review of Zoom constituted a dot-point analysis of their marketing claims.


Nope, this is human nature at its most basic and obvious.

Saving face by not admitting egregious mistakes and even lying about making or not making them even after the evidence is public and irrefutable is just the human ego defending itself.

I'm starting to get past that sort of childishness in my own life, but having lived it for a long time I see it easily in others.


People don't mind lies though. The general public will forget about it after a week if you keep silent or show them lies. Admitting your mistake isn't always a good choice in the corporate environment.


Feels like Theranos except the only difference is that Zoom has working software


I don’t understand. That means it’s fine. Theranos’ big issue was not having anything. Otherwise they lied a ton. Not unlike many, many companies.


From a business model perspective, if Zoom embraced open-source, what would be their moat/value-add, compared to users downloading/forking from GitHub? Not being snarky: I'm genuinely curious what the "good citizen" (but still profitable) OS/FOSS model would look like, whether at equivalent revenue or reduced revenue.


Enterprise support is the usual answer, and it'd probably work pretty well for Zoom given how many enterprises are already willing to pay Zoom for said support.


i don't think it'd work particularly well. people pay for zoom not because of 'enterprise support', but because they want features that they need (eg. meetings not automatically ending after 40 minutes).


The two aren't mutually exclusive though


how successful do you think zoom would be if their only value was support? do you think they'd be incentivized to create as frictionless a product as possible?


It is not like no one ever came up with video chatting before and they are the only ones around, so the moat is moot. The only reasons they exist in any meaningful way start with the "just works" thing and end with the "VHS vs. Betamax" thing.

For the business model, it is the same as other open-source companies: there are companies that want a tech insurance policy.


No other company right now can get the reliability and “just works” like Zoom. If they open source everything, why wouldn't all the competitors, including the biggest companies in the world, start seeing how Zoom is doing things? We already see open source “borrowed” frequently, Amazon's AWS being a regular.


> No other company right now can get the reliability and “just works” like Zoom.

I disagree. Plenty of video chat software has comparable reliability and "just works"-ity.

Google Meet, Skype, Microsoft Teams, Discord...

Hell, Apple has had "just works" and "reliability" in their walled garden since FaceTime was introduced -- if you're willing to look only in their walled garden.


I haven't tried discord, but all of the other tools that you mention are materially worse than Zoom, in my experience.

Zoom is almost as good on a laptop as dedicated Cisco gear.

I have been video conferencing mostly for work for almost a decade, and Zoom is the best solution for that that I've encountered.

I recently changed roles, and the use of Zoom was a small reason to go with the company I did.


This is not a complete answer, but they could still have a proprietary backend, yet give people a lot more trust by having open-source client applications. The idea being that forking would still require someone to build their own backend, making it only interesting for people that are willing to do a lot more in-depth engineering than just running an application.


Seems to work fine for 8x8, who maintain Jitsi.


8x8 is valued at about 1/10 of what zoom is worth, and has seen a declining valuation over the last 1 year.

most of 8x8 revenue comes from sources and products unrelated to jitsi.

so, i don't think you can say that it seems to work fine for them compared to zoom.


Are the numbers different for you? 8x8 appears to have a market cap of $1.55B vs Zoom’s $67B. 40x difference.


you're right. i remembered the numbers wrong. i think it just proves my point even further though.


Yes for sure it does hah

Jitsi is a fraction of what they do and something they didn’t even own that long ago. The company is tiny, 42x less valued than Zoom and doesn’t appear profitable. It isn’t growing in the right direction at the moment. If anything, it is a reason for why Zoom shouldn’t do what Jitsi or 8x8 are doing.


> From a business model perspective, if Zoom embraced open-source, what would be their moat/value-add, compared to users downloading/forking from GitHub? Not being snarky: I'm genuinely curious what the "good citizen" (but still profitable) OS/FOSS model would look like, whether at equivalent revenue or reduced revenue.

The idea that a business needs a moat to be profitable is a problem endemic to business.

The value add would be in hosting services and support contracts. Video chat is needed by a lot of non-technical people. Further, plenty of people don't have sufficient bandwidth to host their own video chat, even with just their own friends or teams. Even technical people often don't understand how to write secure software.


> The idea that a business needs a moat to be profitable is a problem endemic to business.

I'm pretty econ-left philosophically (socdem short-term, mutualist/ancom long-term), so you can't get much argument from me here. :)

But I want to steelman the other perspective: so long as we live in a pre-post-scarcity market economy, having some kind of moat is part of how one gains bargaining leverage in a price negotiation. (Think of "moat" in this context as influencing cost/benefit incentives, rather than an absolute barrier: the customer could build a boat to cross it, or they could pay the toll to cross the bridge, with the latter being usually cheaper.)

One answer is as you describe: hosting services and support contracts, in a market ecosystem of interoperable commodity services. Sign me up! But: such an ecosystem has a free-rider problem when it comes to the non-trivial expense of creating and maintaining the client software (including the risk of front-loading the 0-to-1 effort of building it before you know it will be adopted). In a FOSS model, other players in that ecosystem can obviously contribute to that effort, but those who don't contribute will have a competitive advantage, since commodity markets tend to viciously compete until margins are as near-zero as possible.

There are "moats" / competitive advantages that have nothing at all to do with the software itself: superior support experience, brand reputation, efficient hosting services through economy of scale. So I don't at all claim your model is unworkable, and there are many successful companies who do just that.

I don't disagree that the world would be a better place (and the overall economy perhaps more efficient), if most/all software was FOSS, and business models required less centralized control. (Note that nothing has stopped us from a building a pure FOSS E2EE VC client with a comparable feature-set; we still could.) But say I'm a board member or an investor in Zoom, whether pre- or post-success: how would you pitch me on the business value of open-sourcing the expensive-to-produce client software?


> But say I'm a board member or an investor in Zoom, whether pre- or post-success: how would you pitch me on the business value of open-sourcing the expensive-to-produce client software?

Okay. Consider the following fantastical talk from technical me to you, oh dear fantastical board member:

Let's face a fact: yeah open source software is "free (as in free speech)" and it can also be "free (as in free beer)". Anyone can inspect it. Anyone can "steal" it so-to-speak and set up a competitor. That will always be the case. Just look at how many stolen software products end up in your favorite app store. Games are ripped off right down to their copyrightable artwork, malvertisements added, and reuploaded with a new name. But I think worrying about that is like worrying about the people brewing their own beer. I think that's preventing us from building a brewery.

Let's face another fact: what's expensive isn't software. That's pretty cheap. That's just man-hours. A kid in a garage can build video chat over a weekend or two. What's expensive is experience. Experience is basically an impossible-to-estimate number of man-hours. We'll never be able to pay one person or one team to understand all of the pieces and platforms and make it work for everyone.

Customers want to run Windows, Mac, Linux, iOS, Android... and all of that is hard to keep up with. Customers have a plethora of network and hardware configurations. Customers have crazy different bandwidth and latency profiles. It's really hard for us to make our software work in all of that. But some of our customers are experienced and they're curious and they are looking at our software with a fine toothed comb. We simply can't stop them from doing so. That's how we got into these repeated PR messes after all. So let's embrace that. I think there's a good chance that some of those customers would solve our problems for us if only there was a way they could contribute fixes.

Like I said, experience is expensive. With experience comes ideas. Ideas are gold. That's why we're worried about our competitors after all. We don't really have any solid ideas. Neither do they. Even if we did have a solid idea, they'd create a competing idea or even just outright steal ours. And we'll still be left holding the bag; we'll still have these PR messes for not getting things right in the first place. So let's turn that on its head.

If we open source our software, we give these technical people the opportunity to help us fix problems before they become problems. There's a ton of home brewers out there and some of them would love to be able to help our big brewery. We're not going to stop home brewers. So we shouldn't even try. But home brewers do need tools. Let them come up with their own recipes.

So, we provide the tools for free. But we can sell the recipe. Or, technically: provide a cheap service for the people who need something they know is secure but don't have the technical know-how and/or time to set it up themselves. Lawyers have a legal requirement to keep their conversations private. Schools have a legal requirement to keep their children safe from stalkers. Even citizens have a right to privacy. We'll make all of the tools available for anyone to audit and validate. The recipe to use those tools is where we make profit.

The recipe is the environment. We'll provide, for a fixed cost, the ingress bandwidth and compute needed. We'll provide secure storage of recorded conversations and an audit history of who's accessed it. We'll provide the experienced technical support to directly either fix problems or point at misconfigured devices outside of our control (and why it's the source of a problem); we'll be able to understand the debug logs that the software provides. Of course, any other technical person could too. But that's already the case so we're not really losing anything here. Indeed, we're gaining here. We're gaining the trust of law firms and governments; the trust that they're getting the value that they want for the services they need and that they can go directly to us if they need troubleshooting.

I'm not arguing against centralization. Centralization is good for us and for our customers. It's an anchor point for experience to grow from. I'm saying that open source software can help us avoid further technical problems from our lack of security experience. And who knows? Maybe some of those home brewers are interested in a paying job at our brewery -- if only they could prove they knew a little bit about beer, if it was free. We could definitely use the experience.

/pitch


Well done! That was a good pitch, I'll bookmark it in case I need to use these arguments in the future. ;)

problem endemic to business? that doesn't make a lot of sense.

it would be very dangerous for the primary value add to be in hosting services. any hyperscale cloud provider could offer the service and undercut zoom (based on superior unit economics and market reach)


> Come on, the market wouldn't permit that to happen! They'd lose all their customers!

You scoff, but isn’t that exactly what we’re all doing to Zoom right now?


All? So Zoom has basically no customers now, because their initial false claims triggered a boycott?


Your personal info is still kind of closed-sourced, though. The program is not the ultimate issue.

And the binaries can be changed without you knowing it. Do you have a fingerprint proving that what you get is what they share?


You forgot "5. Zoom gets praised for developing features in response to criticism that already existed in other products that work better."

Jokes aside, with Zoom's track record, it's not worth using anymore regardless of what features they implement. Not having E2E encryption is nowhere near as much of a red flag to me as lying about it is.


> it's not worth using anymore regardless of what features they implement.

Not to me. I would just assume they don't have E2E encryption and wouldn't base my calls around the idea of needing that. The claim is never worth it without an independent review and then thinking about the attack surface you actually want to shield against.

I mean, in another market, if you have ever investigated VPN providers you would see 100% conflicts of interest with affiliate marketing everywhere, and the articles never acknowledge that the business of reselling internet access inherently involves trust and unverifiable claims. A government can always tap the source with a legal order, and there will always be information available to them.

For a video chat service, them merely saying E2E doesn't mean anything without a way to verify it, or host the whole stack myself and this is incompatible with being a company.


Just curious - what other product that works better do you recommend? Webex, Skype, Hangouts/Meet, Teams all pale in comparison when it comes to quality and ease-of-use.


Jitsi works wonderfully.

I've also been using Discord for voice almost daily for a little over a year and it just works 99% of the time. Unfortunately, it suffers from "gamer" branding that makes it awkward to suggest for work. They should try offering a "business skin" that interops with Discord.


It bothers me a bit that such branding/skin influences the situations in which people use good products. I tried to get my friends to switch from Facebook Messenger to Slack, and plenty of them use Slack at work (as do I), and a large amount of pushback was along the lines of "I don't want to feel like I'm working."

It's just a means of communicating, people. Maybe a few Discord features aren't useful outside of gaming and a few Slack features aren't useful outside of the workplace. I find that to be a stupid reason not to generalize the use of these products. Skinning (and filtering away those specific features) just might be the ticket.


I second Jitsi. For me, the value is in hosting your own Jitsi server. Really not that hard to do.

Mind you, I only host it at home on a VM for personal use. Have had sessions with 6 people with one of them a Europe-Australia connection. All fine on default 720p.

If I were looking for 20+ meeting software though I'd consider something else. I would consider it a case for streaming to faceless attendees. I have never had a meeting with useful input from more than 10 people.


Honestly, Meet has had huge updates in the past month or so that make it much better than Zoom. The image quality no longer looks like crap with more than 2 people in the room.


If I had to decide the official comms app for my employer, I'd be wary of Google, due to their poor track record with comms apps.


Meet is a GSuite product, and the only video-conferencing app in GSuite. Google doesn't fuck with GSuite.


I have to remind myself of this when I see features available for free that are not ported to GSuite (parental controls, Google Assistant for home users, reminders, and Inbox in the past; it took, I think, a year to port it).


And the reasons it takes time to port those features is because Google respects GSuite's privacy promises.


What do you mean? How does porting, say, Inbox have anything to do with privacy? It's the same codebase.

There were other examples in the list too. "Parental controls, google assistant for home users, reminders"... a lot of these are difficult to do because GSuite data is extremely siloed and almost designed to be not easily interoperable with other Google products. This is entirely for privacy reasons.

except AppMaker


And all of Gsuite


So one month. Not long.


I use Jitsi Meet. It's easier to use, the audio is generally better (although my only evidence for this is anecdotal: I'm taking piano lessons over video chat right now and Meet was the only one that provided consistently good audio for my teacher), and its web interface "just works", whereas Zoom's hates to be launched and tries to make you install the app constantly (although their launcher is so weird it behaves differently for me depending on the day).

BlueJeans and RingCentral might as well be clones of Zoom. Amazon Chime and Microsoft Teams are fine for me too, but I'm not picky.


After the disastrous experience I had with BlueJeans this morning, I wouldn't include it as a quality or reliability match for Zoom.

The company my wife works for can't get a reliable Teams conference going with anyone in France.


Zoom is a really great product.

The company lied about their encryption.

Both of these statements can be true, it's just a question of trade offs.


Ring Central uses Zoom for the backend.

I know they just announced their own "Ring central video" But im weary of that for the time being.


You say backend, but the frontend also has tons of overlap. The first time I saw RingCentral was after much Zoom experience, and I was thoroughly confused (based on nothing but the frontend) by the similarity until reading up about it. As a one-time invitee rather than a frequent user, RC was absurdly identical to Zoom other than the title bar and icon.


Ring central has used zoom on the backend for a while even before zoom went public. But just recently they added a powered by zoom watermark. And yes the applications do look very similar but RC meetings hasn't changed the UI in the 3+ years i've been using it.


*wary


ah, my bad. I just googled and educated myself on the difference.


Jitsi worked fine for me every time.


More to the point: which of those has true e2e encryption?


I don't think any of them do, but more relevant to security, none of them has auditable source code. Jitsi Meet (the easiest to use out of the services I've tried, namely Zoom and Google Meet) has experimental E2EE. But if you want real security you probably want something more like GNU Jami, which is not grandma-friendly and is a native application only.


The beauty of Jitsi is that you can run the software yourself (especially with the free Google Cloud credit) and then the E2E is hardly relevant because it's still server-to-client encrypted and you own the server...


Good point. Are there any use cases for E2E when the server isn't potentially adversarial to any clients?

Separately: affordable servers likely are accessible to infrastructure providers (whether a VPS, or bare metal at a colo, etc.) so it's tough to say that "my own server" is usable exclusively by me and therefore not adversarial. Plus, maybe people want to use my server and consider me adversarial for whatever reason; they should use their own server instead, but might not have the skills.


My understanding is that Jitsi does actually have E2E at this point, at least in an experimental form.

It's in here somewhere: https://jitsi.org/security/


Can't speak for others but Microsoft Teams & Skype do not have E2E, and neither does Google Hangout/Meet.


Meet seems to be improving rapidly.


For ease of use I recommend Whereby.


Another way of looking at it is that Zoom is learning from its mistakes and making improvements that the market demands.

I'm no Zoom fan (I'd even use BlueJeans first), but people on HN are always so eager to crucify a company for its past. If it made mistakes, get out the tar and feathers! If it doesn't fix those mistakes, get out more tar and feathers! If it fixes the mistakes, even more tar and feathers!


There's a difference between a couple of honest mistakes and a history of shadiness (the hidden macOS server to prevent uninstallation), outright lies (E2EE), and attacking the pillars of democracy (censoring Chinese Americans discussing Tiananmen Square).


And routing interceptable call encryption keys through China even for calls that do not originate or terminate in China... https://citizenlab.ca/2020/04/move-fast-roll-your-own-crypto...


This is overly charitable.

Zoom isn't learning from mistakes and making improvements that the market demands. It's providing a feature it said it already had.

Zoom knew E2EE was something the market demanded, so it lied about having E2EE. This was a blatant lie to get more people to use its platform. Then Zoom got caught. Now it's actually trying to provide what it said it provided in the first place.


> people on HN are always so eager to crucify a company for its past

Perhaps. In my case it's less crucifying a company despite intentions to fix and more crucifying a company because I'm tired of hearing the same PR nonsense and not seeing real improvement to the industry as a whole.

What you're seeing is the flip side of the whole "it's easier to ask forgiveness than permission" nonsense.


I feel like the mistakes you are referring to were actually lies, and there is a sort of pattern emerging with a company that is not merely making innocuous mistakes.


I'd be tempted to agree if their "mistakes" ended with the E2E debacle and Zoom was quick to admit wrongdoing and take responsibility.

Unfortunately, it didn't end there. Rather than earning back their reputation, they have continued to burn through it with blunder after blunder.

The company has proven itself ethically corrupt and that's not something that can be made up for with apologies and product improvements. It will take time, demonstrations of humility, and a healthy dose of transparency to restore their reputation with me.


A generalization like “people on HN are always...” coming from a 3yr old HN user with 20k+ karma points looks like a case of the pot calling the kettle black.

Criticism of large corporations is a healthy part of the HN community IMO. In fact, if we didn't criticize Zoom they might still be lying about their E2EE capabilities.


A majority of the alternatives put up vs Zoom here outside Jitsi come from Verizon, Microsoft, and Google. Seems more like the routine of being super anti something specific. Like Facebook before and now. Among others.


That's a fairly new account.


Lying about a feature isn’t just a “mistake”.


Lying about a feature is exactly what Silicon Valley's "fake it till you make it" culture encourages.

Crucifying Zoom over this while letting virtually every other company in the space (inc. Hangout/Meet and MS Teams/Skype) go free seems quite hypocritical from an HN community that's comprised of many startupers and startup wannabees who spend their professional lives working for entities with similar practices.


That’s not what the expression means. If you are a tiny company and a big customer comes to you and says “can you scale to support us”, you answer yes even if you are not 100% sure you are ready. If however you claim to have feature X and you don’t, that’s just a lie.


I think I agree with you, but this argument seems like a pretty arbitrary line.

When you say "yes we can scale" and you're not sure if you can, aren't you essentially implying that you have the infrastructure to deliver on that promise? And if you don't actually have that infrastructure yet/built/proven, then you're essentially selling a feature that doesn't exist.

It's shades of grey from lying about E2EE, but seems pretty similar imo


Because with presumably feasible level of efforts and resources, you can make a claim that's not yet true, but has a reasonable probability of becoming true by the time you need to deliver your product/service. So you can make that claim in good faith, even if you're not 100% certain it will hold true.

That's very different than making a specific claim that you already have a feature right now, that you in fact don't. That claim cannot possibly be made in good faith, as it's currently outright false, and you can never retroactively apply end to end encryption on conversations that have already happened.


/like a pretty arbitrary line/

Really? Do you think they would/could lie about the features of a product that they deliver to a client? If they did, do you think they should get away with that?

'We have that capability' claim is totally not the same.


And perhaps instead of being lenient with Zoom because "well everyone else lies about features", we should instead be consistent about holding companies accountable for dishonest business practices.

(I think I'm preaching to the choir here; just clarifying for anyone else reading your comment)


Did Meet/Teams/Skype claim E2E encryption?

The only alternative I ever looked into was Jitsi (because it was the first alternative I started doing research on, and by the time I'd finished researching it there was no doubt that it was more than good enough -- and super easy to build our own cloud instance so that, even though it wasn't E2E, we had total control of the server that managed the encryption), but I don't recall hearing arguments that any of the other major competitors were actually E2E encrypted.


There's no reason to think they'll change their ways anytime soon. Every time they're caught lying or doing something shady they use some bullshit explanation and 'fix' the issue only insofar it stops journalists from writing articles, but doesn't fix the underlying problems.


I'm fine with companies making mistakes and working to fix them, but Zoom has been bad not just because of E2E encryption, but because of their questionable ties to the CCP.


This is not right. As a responsible company, they should know better. Regarding China, they have always been sidestepping the main question. They knew from the start that the CCP would be intercepting calls. As an American company, this is a blatant disregard for people's trust, and it's not an American thing to do. If you have friends or family who are routinely subdued in China, you will know what I am saying.


What is wrong with Zoom that you'd use BlueJeans instead?


The key information is in this sentence: "We are also pleased to share that we have identified a path forward that balances the legitimate right of all users to privacy and the safety of users on our platform ... while maintaining the ability to prevent and fight abuse on our platform." So they found a balance between privacy and the ability to prevent abuse. In other words: this E2E encryption will have some backdoor which will be used only for legal reasons (to prevent abuse, etc.). Just like in the 80's when the US government asked Atari whether they could provide encryption for their systems... It was encrypted for everybody, except... the government...


This is one of the downsides of the casual acceptance of corporate invasion of our privacy. Corporations can do whatever they want! Including letting China listen to your doctor appointments.


It's easy to say "it balances" if you're the one who decides how much the stuff on the plates weighs.


E2E encryption is meaningless unless there is a way to prove that it is E2E, e.g. by showing us the source code of the client side and allowing us to compile it ourselves, which Signal does.

It would be super interesting if there was a way to abstract out encryption on the camera itself, where the video call software gets an encrypted video stream and its only job is to convey that stream to the other side, which decrypts it.

The hard part is sending an encrypted stream that can be programmatically degraded based on available bandwidth, and still be cryptographically secure.
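
One plausible shape of a solution (roughly the idea behind per-frame "insertable streams" E2EE in browsers, though I'm not claiming any product actually ships exactly this) is to encode the video in layers and encrypt each layer separately with a key only the endpoints hold; an untrusted relay can then drop layers to match bandwidth without ever seeing plaintext. A rough Python sketch, assuming the `cryptography` package and hypothetical `layers` payloads:

    # Sketch: per-layer encryption so an untrusted relay can degrade quality
    # without the key. Assumes `pip install cryptography`.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    meeting_key = AESGCM.generate_key(bit_length=256)  # held only by the endpoints
    aead = AESGCM(meeting_key)

    def encrypt_layers(layers):
        # Encrypt each encoded layer (base, mid, high) independently.
        packets = []
        for idx, payload in enumerate(layers):
            nonce = os.urandom(12)
            header = idx.to_bytes(1, "big")  # layer index: readable by the relay, authenticated
            packets.append((header, nonce, aead.encrypt(nonce, payload, header)))
        return packets

    def relay(packets, available_bandwidth):
        # The untrusted relay only inspects headers; under load it drops the
        # enhancement layers, but it can't decrypt or alter payloads undetected.
        keep = 1 if available_bandwidth < 500_000 else len(packets)
        return packets[:keep]

    def decrypt_layers(packets):
        return [aead.decrypt(nonce, ciphertext, header)
                for header, nonce, ciphertext in packets]

    layers = [b"base-layer", b"mid-layer", b"high-layer"]  # hypothetical encoded layers
    received = relay(encrypt_layers(layers), available_bandwidth=200_000)
    print(decrypt_layers(received))  # low bandwidth -> only [b'base-layer'] arrives

The relay here is just a packet filter; whether real layered codecs plus this scheme hold up quality-wise is exactly the open question above.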


Sounds like a use-case for efficient homomorphic encryption.


We really need fast, fully homomorphic public-key encryption. That would enable digital signing schemes that encrypt the entire document using a public-key scheme (double-barrelled signing: first encrypt with the private key so it can be decrypted with the public key, then encrypt that form with the public key so it can be decrypted with the private key; the first encryption, and only that one, would need to be homomorphic).


Zoom’s engineering team is based in the PRC. This opens them up to pressure from the dictatorship which has made large scale industrial espionage a public policy goal. Somebody is listening, and likely transcribing, every call at organizations of interest. The source code, public statements, etc are irrelevant; if the PLA wants a Chinese national in China to do something, they will. The penalties for noncompliance are terrifying.


And at the same time they are probably pressured by the NSA. They should just upload every call to YouTube so any intelligence service can access them.


Is there any better way to bring down China than them latching onto quarterly reporting and shareholder pressure?


Alex Stamos had a good thread on some of the costs and benefits of E2EE. There is a cost https://twitter.com/alexstamos/status/1268219067707453441


I don't really like Mr. Stamos because he has been saying everything he can to discredit Facebook since he got ejected after a row with management. This could just be my perception of him but I don't find his arguments to be entirely in good faith.

Denying E2EE is a cost: you are punishing people for the crimes of another, depriving them of their hard-earned freedoms and liberties for something someone else has done or may do.

Look at the activism going on today. BLM, dissidents in China, the rise of oppressive far-right governments in Europe like Hungary. I am sure if you dig far enough, you could find many people fighting in obscure causes, high profile causes, in a number of countries, who would fear the fist of an oppressive government.

What if the FBI / NSA decides to surveil BLM, as they already are? What if the CCP strikes down a dissident as they already have on Zoom? What if Orban decides you are a secret agent of George Soros plotting to undermine the government? Is it the case that everyone should roll over because a criminal might use the same means as them?


There is also a cost in not having your smart TV microphone record all conversations and upload them to the police.


That's a false equivalence. At any point in this, there is little we can do to verify E2EE, but trust a 3rd party. We can trust that they enable it, or as previously proposed we can trust that they are visibly in the meetings and observing. Either way, we have no way to ensure this is true when dealing with 3rd party providers.

Your smart tv recording has nothing to do with this, but one does still need to trust that it isn't happening. In the case of the smart tv we can attempt to look for microphones or other components that are able to be used as microphones. Software offers a more difficult path in verification.


I have no idea why verification, or other technical details, would affect the ethical calculus of having our conversations spied upon.


People who wish to mask their crimes have a greater incentive to use E2EE so will probably gravitate towards platforms that offer it. I would therefore suggest those not committing crimes are disproportionately affected by E2EE not being made the default where possible. Once one service in a particular category offers E2EE, the benefits of the other services in that category not offering it is significantly reduced.


This is only true if you assume that the world is populated with an equal number of criminals and non-criminals.


It wouldn't surprise me, as the app recently tried to get me to trust an untrusted cert.


It's encrypted all the way from one end to the other end; we just also happen to have a copy of the key and can decrypt it in the middle.

Technically, the exact packets of data you send are E2E encrypted... but the copies they make for themselves aren't.


This could be the case for literally any E2EE service that controls key distribution (including WhatsApp, Signal, etc.), especially when there's no way to verify key fingerprints (here Signal differs because it does have a way, and it's open source so you can be more confident that it's not BSing you).

It's shocking to me how often this is glossed over when discussing E2EE services: you still must trust the platform.


E2EE and open source: the two things people assume automatically makes things super-crazy-secure.

The implementation of E2EE must be robust and there must be somebody who is actually checking the source code (plus verifiable builds)


Don't forget the human element: users still have to actually do the verifying (e.g. checking public key fingerprints of recipients) that the source code enables!
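
For anyone wondering what that verification amounts to in practice: both sides derive the same short fingerprint from the identity public keys and compare it over some other channel. A toy sketch (standard library only, hypothetical key values, not Signal's actual safety-number algorithm):

    # Sketch: derive a short comparable fingerprint from two identity public keys,
    # so users can compare it out of band (in person, over the phone, etc.).
    import hashlib

    def fingerprint(my_identity_key: bytes, their_identity_key: bytes) -> str:
        # Sort the keys so both parties compute the identical string.
        material = b"".join(sorted([my_identity_key, their_identity_key]))
        digest = hashlib.sha256(material).digest()
        # Render 20 bytes as ten 5-digit groups for easy reading aloud.
        groups = [str(int.from_bytes(digest[i:i + 2], "big") % 100000).zfill(5)
                  for i in range(0, 20, 2)]
        return " ".join(groups)

    # Hypothetical raw public keys, delivered via the (untrusted) server.
    alice_key = bytes.fromhex("aa" * 32)
    bob_key = bytes.fromhex("bb" * 32)
    print(fingerprint(alice_key, bob_key))  # both users should see identical digits

If the digits match on both screens, the server didn't swap keys on you; if nobody ever compares them, that guarantee quietly evaporates.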


If you go down that road, you can make this argument infinitely. Even if you verify your builds, you cannot know if the software you are using to check the build isn't compromised. Or if you check the software you use to check the build, you have to check the software doing that check and so on.

Nothing makes software automatically super-crazy-secure. Absolute security doesn't exist.


You'd get close by doing all you mentioned, but also compiling and hosting the code and infrastructure yourself. Not often this is feasible.


You'd still be trusting the compiler. However many layers of checks you do, there's always something you need to trust.


It doesn’t automatically make everything secure, but it’s still a prerequisite for a trusted secure thing.


and a safe OS, computer, room...


"E2EE" is kinda antonymous with "key distribution" (unless, maybe, if you mean authentication keys)


Well the crux of E2EE is authenticity, so yes. Key distribution is a fundamental problem of public-key cryptography: you can’t get my public, authentic identity key without trusting someone to not tamper with it along the way. We implicitly trust [Zoom|iMessage|Telegram|...] to do this.


You can check the fingerprint on WhatsApp.


You absolutely can, but this leads directly to vonquant’s point (and antris’ follow-up)[1]: without the source, how far do you trust Facebook to not just pull a UI trick? It’s paranoia all the way down ;)

[1]: https://news.ycombinator.com/item?id=23554909


Is that consistent with the traditional definition of E2E?

And if so then what's the term for encryption that a middle man cannot decrypt?


> What's the term for encryption that a middle man cannot decrypt?

I don't know if there's a term, but short of exchanging public keys in person there will always be a theoretical attack vector because there's always some[one|thing] in between you and your recipient.


It’s definitely not adhering to the definition of E2E encryption. However given Zoom’s history of shadiness it’s a pretty good guess about how it will be implemented.


Regarding question #2, Peer-to-peer.


If that's the answer to "what's the term for encryption that a middle man cannot decrypt", NO: peer-to-peer simply means... well, pretty much it means sending IP packets directly to each other rather than through a central server (yes, not much of a thing, but it meant you could get free music more easily, so the term got a lot of traction)


That's right. It is certainly possible to use peer-to-peer to send unencrypted packets. Peer-to-peer does not imply encryption. It does imply avoiding a "middleman". Thus, to send encrypted packets without using a middleman, peer-to-peer is a viable method.
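
To give a concrete flavor of that (a hedged sketch, not any particular product's protocol): the peers can derive a key between themselves, e.g. from an X25519 exchange over keys verified out of band, and seal every packet before it leaves the machine, so nothing on the path can read it. Rough Python using the `cryptography` package; all names here are illustrative:

    # Sketch: two peers derive a shared key from X25519 keys exchanged/verified
    # out of band, then AEAD-seal every packet they send directly to each other.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    alice_priv = X25519PrivateKey.generate()
    bob_priv = X25519PrivateKey.generate()

    def session_key(my_priv, their_pub):
        # Derive a symmetric key from the Diffie-Hellman shared secret.
        shared = my_priv.exchange(their_pub)
        return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                    info=b"p2p-call-demo").derive(shared)

    alice_key = session_key(alice_priv, bob_priv.public_key())
    bob_key = session_key(bob_priv, alice_priv.public_key())
    assert alice_key == bob_key  # both ends hold the key; no server ever does

    def seal(key, packet):
        nonce = os.urandom(12)
        return nonce + ChaCha20Poly1305(key).encrypt(nonce, packet, None)

    def open_packet(key, blob):
        return ChaCha20Poly1305(key).decrypt(blob[:12], blob[12:], None)

    wire = seal(alice_key, b"hello over a direct UDP path")
    print(open_packet(bob_key, wire))

Whether the packets then travel directly or via a relay is orthogonal; the point is that only the two ends can open them.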


> It does imply avoiding a "middleman"

No, it only implies avoiding a central server (and not even for every aspect of the service); you still run through routers, ISPs, the NSA, etc.

If you are certain that there's no middleman, you don't need encryption.

N.B. Maybe someone defines it another way today, but when the term became popular with Napster it really meant simply not having a central server for certain functions, or even more banally not downloading your mp3s from a web site or FTP server. It also had some significance because the legal aspect of it was more uncertain; once people started getting 100k dollar fines, peer-to-peer stopped meaning much. Sometimes it's better to send packets directly to each other, other times through a server, but you almost always encrypt and almost always ought to encrypt end-to-end.


The central server is the "middleman" as I am using that term. Routers are not middlemen under the meaning I am using. I am referring to peer-to-peer without any supernode forwarding traffic. No central server. There may be a "rendezvous server" involved in allowing two nodes to discover how to connect to one another, however that server does not route traffic.

I never implied a need for encryption associated with peer-to-peer. The parent comment asked about avoiding a middleman.

I have no idea what "end-to-end encryption" means, nor do I seek to know. I do not wish to be part of that debate. The record of how that term is being applied speaks for itself.

I do know of the term "end-to-end" as in https://en.wikipedia.org/wiki/End_to_end_principle One can find this concept in many of the early RFCs.

To me, "peer-to-peer" (with no central server) is in the spirit of end-to-end. This is why for example, people will sometimes say, "The internet was originally peer-to-peer."


I know that the term "peer-to-peer" could be interpreted in many ways, but to the best of my knowledge it is usually interpreted how I wrote above.

Which I think is pretty much how you defined it too in your (last) comment, so I'm not sure what we're debating.

The important thing was that no one reading these comments get the impression that the multitude of systems that describe themselves as "peer-to-peer" are for sure using "encryption that a middle man cannot decrypt".

---

> The parent comment asked about avoiding a middleman

Middle man in cryptography is anyone intercepting a message

---

> I have no idea what "end-to-end encryption" means, nor do I seek to know

Well, I don't mean to be rude, but then there's not much you can say in a discussion about encryption...

---

Look, the important thing was to underscore that the https://news.ycombinator.com/item?id=23554823 comment was (apparently) wrong. I don't have any interest in winning a battle, and I appreciate your enthusiasm. You probably don't currently know everything about cryptography or networking, and there's nothing wrong with that; no one is born an expert and no one knows everything there is to know. I have to go to sleep, bye


Just because the peer-to-peer software known to you may suck does not mean that the concept of peer-to-peer is obsolete. Never mind WireGuard and other known examples of peer-to-peer software that do not suck; consider that there is software you do not know about. The idea that "peer-to-peer" is Napster plus some list of crappy, widely known software fiddling around with DHTs and dreaming about "the next big thing" is nonsense. Peer-to-peer is just a design principle. The term the parent comment used was "middle man", not man-in-the-middle. As for "E2EE", I have never seen djb even use that term. I see many untrustworthy "tech" companies using it, though.


I don't know what's going on here, I never said that any peer-to-peer software sucks or that the concept is obsolete, in fact I much prefer it if a system is distributed/peer-to-peer.

All I said and cared to stress, to avoid someone reading this making mistaken assumptions about p2p software (although probably few of this site's users would run that risk), is that ^^^they don't, as you claimed in https://news.ycombinator.com/item?id=23554823 , automatically imply "encryption that a middle man cannot decrypt"^^^.

You admitted you don't even know what end-to-end encryption is, and apparently don't know much about encryption, what are you debating?

---

> The term the parent comment used was "middle man" not man-in-the-middle

It's the same thing (unless the post author meant "a man of middle age")

---

> As for "E2EE", I have never seen djb even use that term

You mean Daniel J. Bernstein with djb? Do you mean that you are actually knowledgeable about encryption? I don't mean to be insulting but it didn't seem so (and there wouldn't be anything bad in that), it's hard to believe that someone with basic familiarity with encryption wouldn't know what end-to-end encryption is.

If with "that term" you meant the E2EE acronym, I indeed wouldn't be surprised if Daniel J. Bernstein never used it, it's the first time I see it myself (but it obviously doesn't mean anything more than "end-to-end encryption").

---

I don't know why you took it so personally, maybe I sounded aggressive in saying NO in uppercase, if so I'm sorry, it was just to make it more visible


I mean the term as it is being used by Zoom and others, not just the acronym. What I commented is that I am not interested in the term "end-to-end encryption". To me, at this point, it does not mean anything. It is no more meaningful than "cloud computing", "big data" or "AI". I prefer to read source code, not marketing copy. I want to know what something "does", not what it "is". The former is factual; the latter is potentially subjective. IMO, it is irrelevant what you or I know or do not know. No one really cares. Focus on the comment, not on making assumptions about the user who submitted it.

https://web.archive.org/web/20051029045942/http://www.unc.ed...

Example comment: "Peer-to-peer is a viable design for videoconferencing for small groups. If one is concerned about a "middle man" then it is worth investigating a peer-to-peer design."


Bah (yes, looking at the source code instead of the description is a good advice, and to focus on the comments is another one, but I really lost you) (sorry for the late reply)

real encryption?


yikes, if they ended up murdering Keybase and still ship crap, I will never forgive them.


I've always suspected they didn't want E2E encryption themselves, not because of any "work with the authorities" strategy.

End-to-end encryption would prevent lots of monetization strategies, such as identifying people via facial recognition and voice printing and then using this data (along with transcripts, for example) to "add value".

Now the "we have identified a path forward" bit makes me wonder if they can still pull it off. Maybe it's client-side identification with out-of-band notification.

Google makes an enormous amount of money identifying people.


They hired the Keybase team. I feel like if the team was directed to develop countermeasures to the E2E or design the E2E to be vulnerable, someone would have blown the whistle. E2E was Keybase's thing and it would be a huge slap in the face.


They're 6 months away from becoming a case study in squandering momentum.


The goodwill has already been squandered. There’s simply nowhere else to jump to (jitsi lol). As soon as a viable competitor launches, everyone will jump. Same thing happened from Skype to Discord with the gaming community.


Jitsi Meet's not a viable competitor?

It's simpler to set up (accounts and password protection are optional), IMO easier to use (eg. the hand button is on the bottom bar with mute, etc. and not in a menu labeled "Participants") and higher quality according to the New York Times, who deemed it "reliable and easy to use": https://www.nytimes.com/wirecutter/reviews/best-video-confer.... I've introduced it to extended family members who've used Zoom prolifically, with zero complaints.

Can you name a single disadvantage?


Jitsi has audio problems for several people in my peer group. I think from other audio work that it may be that they're using too small frame sizes, although it's possible that something else in their audio pipeline is busted.

Audio capture APIs often suggest it may be possible to use very small frame sizes, which naturally promise much improved latency. Going from 100ms of audio latency to 20ms is great so surely going from 20ms to 5ms is even better right? Well, the hardware underneath that API may not be able to deliver, at least it may not be able to deliver consistently. If your 5ms buffer isn't filled on time, what do you send? A partially filled buffer? Silence? The last 5ms of filled buffer again? All bad answers.

Tool A with 40ms of latency may feel imperceptibly worse than Tool B with 30ms of latency. But Tool C with 10ms of latency but frequent "drain piping" as audio frames are garbled or undelivered is clearly much worse than either.
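
To put rough numbers on it (assumed 48 kHz capture, not Jitsi's actual configuration): a 5 ms frame is only 240 samples and gives you 200 deadlines per second to hit, so a single late callback forces one of those bad answers.

    # Back-of-the-envelope: frame size vs. latency vs. number of deadlines to hit.
    SAMPLE_RATE = 48_000  # Hz; an assumed capture rate, not any product's setting

    for frame_ms in (100, 20, 5):
        samples = SAMPLE_RATE * frame_ms // 1000
        deadlines_per_sec = 1000 // frame_ms
        print(f"{frame_ms:>3} ms frame = {samples:>4} samples, "
              f"{deadlines_per_sec} capture deadlines per second")

    # 100 ms frame = 4800 samples, 10 capture deadlines per second
    #  20 ms frame =  960 samples, 50 capture deadlines per second
    #   5 ms frame =  240 samples, 200 capture deadlines per second
    # Every missed deadline forces a bad choice: ship a half-filled buffer,
    # inject silence, or replay the previous frame, all of them audible.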


We tested it at my org, it doesn't scale past 15 users


Yeah, something like BigBlueButton might be better for big business meetings. (it's built for them and education, and claims to support 150+ participants).

Participant limits in Jitsi Meet are a bit confusing. There's a lot of variables to consider. https://community.jitsi.org/t/jitsi-meet-performance-compari...

> 1. Room hard limit is 75 users, recommended 35 users.
> 2. The limit with more than 15 users with camera is the user's PC.
> 3. Working test with good bare metal servers: 115 muted users and 5 users with camera.
> 4. Test in progress for 500 simultaneous users.


Heh. I like that this thread reveals Zoom's true advantage: it scales like hell.


If you are looking out for a BigBlueButton service provider who can enable the Virtual Classroom for the organization with unlimited number of users and setting the correct concurrent users then connect with 3E Software Solutions.

We are available at hello@3esofttech.com


The name Jitsi is a disadvantage.

I'm not being facetious. If you compare the name Zoom to Jitsi, people will choose Zoom 9 out of 10 times. You won't get many people to even try Jitsi.

Zoom is quickly becoming a verb similar to what Skype used to be. Let's Skype. Let's Zoom. Everyone understands what that means.

Let's Jitsi... Let's what?


Skype was never a real player in the gaming community, it was really just Teamspeak, Ventrilo, and Mumble


I was always a big fan of Mumble and their "latency first, even if you sound like a robot" approach. There are a lot of times in raiding where a 0.5s delay means everyone dies.


My immediate thought was that they'll introduce encryption that happens on the client, and decryption that happens on the other client, but will have a way to know what the key is on the server, too. Honestly any near-ubiquitous communication medium is going to have enormous pressure to be insecure by design, if not from the Chinese government then from the US.


In "enterprise" software, E2E hasn't meant what it means in modern chat programs. In my world, it just meant encrypted over the wire and at rest, nothing more. We aren't there technologically to have many zero knowledge SaaS products. I don't know they aren't liars, but that difference might be a reason.


You missed the part where they acquired Keybase to help them build e2e encryption. https://techcrunch.com/2020/05/07/zoom-acquires-keybase-to-g...


You should also mention step 4b. Zoom admits it censors accounts for China, losing any and all hope of being trustworthy.


I guess this is a literal definition of "fake it till you make it".


Didn't they also announce they wouldn't implement E2EE so they could cooperate with the police better?


It's crazy that zoom seems to have no real competitors who take this stuff seriously.


> Given their track record I’d expect this timeline to repeat itself so after they release this E2E encryption feature, security researchers will discover that it’s not true E2E encryption again.

I mean, you can already look at the design if you wish, it was disclosed by Alex Stamos: https://twitter.com/alexstamos/status/1268061790954385408

TBH I'm sort of surprised they gave in to the new wave of criticism, their arguments for not giving E2E to free accounts were pretty decent.


Their arguments for not giving it to free accounts were awful and the same tripe that has been regurgitated for twenty years.

The real reason is they want to be able to hand data over to China / NSA / marketers.


> The other safety issue is related to hosts creating meetings that are meant to facilitate really horrible abuse. These hosts mostly come in from VPNs, using throwaway email addresses, create self-service orgs and host a handful of meetings before creating a new identity.

It's honestly the first time I heard this justification - where else did you hear it in the last twenty years?

Also do you have some concrete reasons to believe Zoom hands over data to marketers? That's the first time I personally heard this claim - can you link me some evidence?


Twitter has been caught using phone numbers collected for security purposes / tackling fake accounts for marketing purposes in the past. Zoom has a very dubious history, involving China, lies, and a complete lack of security.

As for the same old same old? It hasn't been precisely in that form but criminals have used Facebook, Tor, Email, Discord, YouTube, Usenet, Skype, MySpace, and other technologies / sites to facilitate abuse. This is merely the newest iteration.


"twitter sells phone numbers so Zoom must do the same" - really? Maybe not, because you know, they have actual legitimate revenue that doesn't rely on advertising. So it's not like they have the same incentives...

Also, you seem to acknowledge that there are legitimate concerns/ reasons to NOT offer E2E encryption for free users. Unlike all the other services you mention - Zoom users that care about illegitimate interference in their communications would actually have a way to get E2E encryption. Unlike FB/Youtube/etc, Zoom doesn't need to offer the "free" service in order to exist - it does this as a marketing ploy to get you accustomed to their services. In that sense, withholding functionality from free accounts seems perfectly reasonable?


https://twitter.com/SenBlumenthal/status/1247510907992846337 You mean the features they lied about and may have the FTC chasing them for?

Yes, it is a problem, it's been a problem for a very long time. Criminals gravitate to the most convenient platform like anyone else but otherwise don't stop being criminals. It only serves to punish other people.


It's confusing too because implementing E2E crypto seems far easier than the above events. Any ideas why they might be so resistant to it?


Without identity management, E2E is worthless because you don't know that today's E is the same as yesterday's. Zoom is probably still sucking up to various state security agencies but doing it via MITM instead of just tapping the server.


I think that’s a little lacking in nuance. The team they have putting this together say to me they’re putting their money where their mouth is, at least. That paper has a fucking legit list of contributors.


Aside: Zoom stock has risen 252% during this time ($67 on Dec 18th 2019 to $236 on Jun 18th 2020).


"To make this possible, Free/Basic users seeking access to E2EE will participate in a one-time process that will prompt the user for additional pieces of information, such as verifying a phone number via a text message. Many leading companies perform similar steps on account creation to reduce the mass creation of abusive accounts."

Perfect instrument to collect more personal data.


Their argument doesn't make sense.

The objective behind verifying accounts is to prevent spammers creating lots of spam accounts and using those to spam.

However, spammers rarely care if their spam is encrypted, so putting E2E behind verification won't do anything as far as spammers are concerned - they'll happily keep spamming using the unencrypted accounts.

There's some other reason behind this that isn't about reducing spam.


I think their concern is paedophile rings using large group E2EE for live child abuse with completely anonymous accounts.


Yeah - I'm pretty sure this is the real concern and verifying a phone number is reasonable trade-off.

I know this argument is often quickly dismissed on HN since people see child abuse or 'going dark' as an easy excuse for the government to leverage to get more control (and it has been used for this), but that doesn't mean the problem isn't serious or doesn't exist.

See this: https://www.nytimes.com/interactive/2019/09/28/us/child-sex-...

The resources fighting this are relatively small in comparison the scale of the problem: https://www.freethink.com/videos/child-exploitation

The people carrying out the abuse are sophisticated.

I have a friend that works at WhatsApp and their entire team is focused on trying to remove groups that exist to share child abuse imagery (via metadata since content is encrypted).

I fall on the side that secure encryption is critical for all of the reasons that technical people normally argue that it's critical and breaking it doesn't work/is a bad idea, but I also understand and empathize with the difficulty encryption by default causes for the organizations fighting this abuse.

That said, I have serious disagreements with Zoom unrelated to this particular e2ee issue (https://zalberico.com/essay/2020/06/13/zoom-in-china.html), I think they don't actually care about protecting the speech of their users or securing content from authoritarian governments. It's still good to avoid them for that reason alone.


Indeed, E2EE will enable criminals to go undetected. And this is a real problem. However, it’s an arms race that will end with criminals having proper, strong E2EE anyways. Trying to reverse this is like trying to reverse entropy, the toothpaste does not go back into the tube. It may seem like it is still doable now, but I’d be willing to place bets that feeling will evaporate shortly.

Of course, criminals are ordinary people too. They care about convenience and network effects as much as anyone. Which is why I think it’s insane that governments want to jeopardize the trust people have in proprietary, huge E2EE platforms that actually have the means to aid them in investigations. Yes, breaking the crypto may not be an option, but at least collecting useful metadata for use in investigating, and potentially ethical hacking, is an option.

I fear the day when the trust is gone because there is a very real possibility that some day many will be using decentralized E2EE chats, maybe even P2P. It’s not just conjecture of course, Matrix exists today and is already very impressive (in my opinion) in terms of usability.

The internet is opening up the concept of having nearly private communication with pretty much any individual in the world. It isn’t free of implications, but also, as more of our lives move online I feel its absolutely crucial that every day people can feel confident they’re not being monitored. The problem of CSA and other criminal behavior existed before the internet and it will certainly exist after. It’s absolutely past time to re-evaluate laws surrounding child protection, which seem to me to mostly be reactionary at this point (in that many of them are spawned as a result of a specific incident.)


> Indeed, E2EE will enable criminals to go undetected. And this is a real problem. However, it’s an arms race that will end with criminals having proper, strong E2EE anyways.

Individual child abusers aren’t part of a monolithic organization with training on how to secure their comms and practice OpSec.

The number of criminals who still create evidence against themselves on unencrypted platforms (SMS, phone, etc) is significant, despite E2EE options already being available. People are even being arrested for rioting after admitting on public TikTok videos to participating.

I think the only way criminals will standardize on E2EE is if every platform and communication mechanism is E2EE by default. Otherwise they will continue to make mistakes or think they can slip under the radar.


> I think the only way criminals will standardize on E2EE is if every platform and communication mechanism is E2EE by default. Otherwise they will continue to make mistakes or think they can slip under the radar.

FWIW, I believe this is the future if lawmakers don’t prevent it. A look at some E2EE software today:

- WhatsApp

- Matrix

- Signal

- iMessage

- Firefox Send

- MEGA

- ...

The list will grow.

In my opinion, E2EE today is like TLS 10 years ago. TLS was once a nice-to-have when it came to communication that was not strictly necessary to encrypt. Today, TLS is more sophisticated, stronger, and easier to implement than ever, and damn near a necessity for anything, even toys.

Granted... E2EE is necessarily harder, since it requires application-level implementation of crypto primitives; things definitely get complicated. Still, I believe the state of the art will continue to improve, and the tooling with it. Eventually there will probably be de facto libraries and maybe even OS frameworks to deal with E2EE key management, trust, etc.

To be clear, I view this as strictly a good thing and an inevitability. I don’t think transport encryption and encryption-at-rest are good enough anymore for private communication. Of course for public sites like Twitter or Tiktok it’s all you would logically get, but for any group or direct communication I now believe E2EE is slowly becoming the new baseline, and it’s mostly the complexity of it that hampers adoption.

Now that iMessage and WhatsApp are E2EE though, there are a lot of messages flowing that, exploits notwithstanding, are “truly” private, today, and I think the number will only go up. The only real question in my mind is, who’s next?

As far as criminals making slip-ups, this is guaranteed; even the best make mistakes obviously. But assuming all criminals are foolish and stupid is a mistake; I believe there’s a lot of selection bias in there, since we don’t get to find out those who truly never get caught. Time will tell if any of this really matters, or, if, as usual, it’s just another panic that has no tangible effects. I vote on the latter, but I still do believe proliferation of E2EE will change the game in ways we can’t really anticipate 100%.


> Indeed, E2EE will enable criminals to go undetected. And this is a real problem.

This is not the problem. The argument is hollow.

People need to take child protection laws out of political discourse, as it's now approaching silly.


If you think this strengthens the case against encryption laws, I suggest you rethink. There are plenty of valid arguments against banning strong encryption, and this isn't one. You can't simultaneously argue that E2EE keeps people's conversations private from eavesdropping and then suggest that it doesn't prevent eavesdropping for law enforcement purposes: at face value it does, and image hash databases to prevent the spread of known CSAM exist today; see, for example, Project Arachnid. And yes, law enforcement eavesdrops for law enforcement purposes. That's why wiretap warrants exist. Whether it's a good thing is another argument entirely, but it is indeed the status quo.


Wiretap is a misnomer. Undermines security.

Put plainly, there will always be crimes you won't be able to catch. You prioritise resources on the most pressing ones and build up resources in the real world to tackle them in other ways. Dystopian lists on the client to control what you're allowed to say or think or report your thoughts back to the government still violates the principle E2EE is built upon.

There is no middle-ground. You either are secure or you are not. The genie is out of the bottle either way.


> There’s plenty of valid arguments against banning strong encryption

There are no valid arguments against encryption

> And yes, law enforcement eavesdrops for law enforcement purposes

Lawful eavesdropping is an oxymoron


Do you think there is no situation where it can be lawful for a law enforcement agency to perform a wiretap?


Yes


Well this is probably not the case if you are an American. See US code title 18 section 2516 paragraph 1.


> The people carrying out the abuse are sophisticated.

In this case wouldn't they build their own solutions (potentially based on existing open-source solutions like Asterisk + Linphone or Jitsi Meet) or they might've built them already?

Phone numbers are also very easy to obtain anonymously, so I am not sure SMS verification would help track down abusers when it'll lead to a prepaid SIM or some innocent user's phone that happened to be compromised by malware.


> Phone numbers are also very easy to obtain anonymously, so I am not sure SMS verification would help track down abusers when it'll lead to a prepaid SIM or some innocent user's phone that happened to be compromised by malware.

It depends on which country really. In some places in Europe it became almost impossible to do that (sadly).


Yes, some would - but not all.

I agree that these reasons are why it's not a good idea to break or outlaw encryption since bad actors can still use it and good people that need it are blocked, but this doesn't mean that making it the default doesn't enable more abusers to get away with it that might be caught otherwise.

There's a spectrum of sophistication; if it's harder, more of them will make mistakes that make them easier to catch.


So how do you define that giving away phone numbers is the right trade-off in the "spectrum of sophistication"? It effectively means lack of anonymous communications for everyone, i.e. global surveillance (personally identifiable metadata is in the hands of Zoom).


I didn't say it was 'right', I said it was 'reasonable' - and there aren't easy answers to this.

Also to clarify, specifically a reasonable trade-off for Zoom (I don't think there should be a general law that requires IDs for video software use or something).

Zoom is not a company I would use at all if you're looking for secure communications (https://zalberico.com/essay/2020/06/13/zoom-in-china.html).

If you care about secure communication you should be using something else.


> Yeah - I'm pretty sure this is the real concern and verifying a phone number is reasonable trade-off.

It's not a reasonable trade-off in countries where you get your legs broken, skin flayed alive, and head cut off: https://www.telegraph.co.uk/news/2019/11/18/russian-mercenar...

A likelier explanation is that they want an easy way to wash their hands of it when pressed.


If you read the rest of my comment beyond the first line (particularly my blog link), you'd see that I agree with you when it comes to companies taking an ethical stand against authoritarian governments.

What you're arguing is a strawman, we agree more than we disagree.


> If you read the rest of my comment beyond the first line (particularly my blog link),

I read it, and I think your argument is hollow. Assuming your good will, you are not understanding the matter at all; and if not, I see ill intent.

I do not appreciate what you say, at all. Any argument against encryption must be quashed without exceptions or second thoughts.

It is only since the start of the 21st century that an experience akin to "legs broken, skin flayed alive, and head cut off" has been a grim reality for far more than a million people, mostly for, really, nothing. That is what we are talking about here! And what are you talking about?

Attack that argument, not something that doesn't have even a passing genuine relation to the matter.


As you’ve responded here and elsewhere, calling an argument “hollow” is not a substantive disagreement.

It seems any argument that you don’t already agree with (basically only your exact position) is classified this way.

The rest of your comment is basically incoherent, and the parts that do make sense are obviously wrong. It’s also a willful misinterpretation of my position.

People were flayed before the 21st century. Acknowledging the issues with encryption is a critical requirement in making an effective defense of it. I am not arguing against encryption.

If this is an issue you actually care about (which it sounds like it is), learning how to build consensus and honestly consider the positions of others would be a valuable skill to develop.

As it stands you’re doing more harm to the pro-encryption position (which is also my position) with how you’re attempting to defend it.


It's not Zoom's job to solve every use case for every person in every situation.


People would give more support to government efforts to fight child abuse videos, if the government stopped using child abuse control tech to violently suppress human rights.


> I know this argument is often quickly dismissed on HN since people see child abuse or 'going dark' as an easy excuse for the government to leverage to get more control (and it has been used for this), but that doesn't mean the problem isn't serious or doesn't exist.

When a company says they want your phone number in order to use their resources, so they can take steps to avoid having their resources used for (certain) crimes, that's well within the bounds of reasonable.

The problem most people have is when the government takes away the use of _super important feature_ from the populace as a whole (even using their own resources), because it _can_ be used for crimes.

Those are two VERY different things.


Are we talking about recirculation of existing content or new cases of abuse? How much of it is new? How much of it is duplicates? How much of it involves the platform facilitating crimes to produce it? One article noted something very alarming, that resources are diverted from more serious crimes to chase these ones.


>verifying a phone number is reasonable trade-off

Lower privacy "because security" is not a reasonable trade-off. It should not be. See also: https://en.wikipedia.org/wiki/Four_Horsemen_of_the_Infocalyp...


Please, don't derail the discussion with something as silly as this.


Only 4 comments in and we hit one of the four boogymen of the civil rights apocalypse. How many comments until we get to domestic terrorism or illegal drugs?


> one of the four boogymen of the civil rights apocalypse

The public is willing to trade away privacy in exchange for protection from certain categories of risk. Instead of denying that, one can lean into it by ensuring strict definitions and enforcement options within those categories while preserving full privacy for everyone else. Arguing that pedophile rings and terrorism are a cost of a privacy policy is a good way to sink that policy.


What if the only practical way to 100% stop all crime is to shutdown the internet?

Now, I'm not saying there is nothing that can be done to reduce it. I very much hope there can be, especially if counsellors can find warning signs and we can better figure out how to spot the danger signs, both online and off.

Facebook took a good step forward by putting warnings up to minors when someone outside of their social circles has contacted many others, although there are other things which could be done.

Should they be allowed to contact them through onion routing during such situations? Where do you draw the line on when such technologies can be used? Is it better not to open this can of worms and risk a slippery descent? What are the chances of false positives, and will it unfairly impact relatives? Will it give a black mark to privacy technologies and civil liberties to be associated with automatic blocks? What if minors want to engage in activism, should this be limited? At what point does pushing and pushing restart the lying-about-your-age shenanigans again?

This is about Facebook here but it ties back to arguments about doing this or that for the greater good.

Is a more grounded approach better? Ensure minors are well-educated of the risks and dangers online? Invest in mental health services to avoid minors falling into depressive slumps where they might be susceptible to such criminals? In the rare event they drag anyone back home, whether they think they're of a similar age or not, they bring them before the parents first?


I would make a cogent argument to rebuff your straw man, but it's not worth my time if you don't share a priori assumptions with me about E2EE being uncrackable. It's just math. I don't see why the talk of trade-offs is even relevant to the discussion. People will use secure tools with E2EE or they will suffer the consequences of not doing so. Doing illegal things is already illegal. Banning or watering down E2EE so that it is no longer E2EE is throwing the baby out with the bathwater.


Your mistake is bringing a technical argument to a political question.

My personal political answer to "how to have end-to-end encryption and prevent its use for child rape" would be to tax the companies which profit from E2EE, and use that money to fund death squads, which livestream dragging child rapists out of their home, anywhere in the world, and beating them to death with truncheons.

I'm joking, of course (or am I?) but I do consider this the general shape of a viable solution. E2EE is essential for a modern life which isn't a hellish surveillance dystopia, and the detection and prosecution of child rape is criminally underfunded.


This is creeping a little close to populist rhetoric. The crimes you've described are obviously awful but angry politics will only lead to knee-jerk solutions.

In which ways do you think it is underfunded?


It's clearly underfunded in relation to the difficulty in prosecuting these cases. Banning E2EE is a way of lowering the bar of difficulty in prosecuting these cases. The crime is reprehensible, and worthy of enforcement due to the heinous nature of abuse. Curtailing abuse via violating human right to encrypt is not the way to end abuse. Thus, more funding is likely justified, if it leads to an end to abuse. This social benefit of reduction and elimination of abuse should not come at the expense of human rights and E2EE.


I see, so your main concern here is prosecution. It is indeed true that prosecution is understaffed and underfunded. I feel there are other problems at play too.

CPS should be able to spot children in abusive homes and respond to reports of unusual activity. They should be able to spot clearly unstable caretakers.

Counsellors and teachers should be able to spot unusual behaviour from children. Mental health services can help someone escape falling into such a situation in the first place by keeping them from falling into depression which leads them to rely on such a person.

Local police shouldn't dismiss leads so readily. This is the "it is impossible for him or her to do such a thing" mindset which prevails so frequently.

Parents shouldn't trust their relatives so readily and should keep an eye out. 90% of cases happen at home.

If they stopped showing off their crimes online, would the entire system come to a crawl? I'm worried by how much of a reliance there is on divining crimes off the internet.


> E2EE is essential for a modern life which isn't a hellish surveillance dystopia, and the detection and prosecution of child rape is criminally underfunded.

Yup. This.


Next thing you know, people will be using E2EE to stream *gasp* copyrighted material!


I highly doubt paedophiles are watching child abuse streams on zoom.


If I recall from their previous statement, it's not about spammers, it's about people sharing child abuse photos.


It's really more about being able to trace accounts to people period, as allowed through the Patriot Act: https://en.wikipedia.org/wiki/Patriot_Act#Title_II:_Enhanced...

Child pornography gets held up to the public a lot because it's a crime nobody can defend and walk away the same they were, no matter what you say. If you publicly contest this move for privacy reasons, you're automatically defending the worst child molester someone's mind can come up with.


It isn't just that. People will cherry-pick the worst incident that ever happened on a platform involving children and compare everything else to that and say that if you support privacy you're supporting this everywhere.

The one, admittedly terrible incident, will shock people and they will push exaggerated means to "stop" it. Ones which just so happen to feed tons of information into the NSA machine.

People come up with stories of live-streamed child pornography too but do these children live in some parallel universe where crimes can be committed against them without recourse? What is the police doing? Did they not find suspicious behaviour in a neighbourhood? Did a counsellor not pick up on it?

Yeah sure, child pornography is awful but why is this part of the equation the only one that is ever mentioned? Why is it always about encryption or anonymity?


If I recall from their even more previous statement, it was already E2EE, but that turned out to be a lie...


Is there any E2EE app that doesn't require verification? Whatsapp does. Even Signal requires a phone number.


https://riot.im/ lets you sign up without even an email


Riot/matrix


Tox.chat

with added bonus of no central server


Threema.


Discord recently demanded my cell phone number to be able to type messages. I declined, contacted customer service, who offered no other way to authenticate. I will not be using Discord anymore and will recommend to everyone I know to avoid it as well.

There is no need to use my cell phone or any telephone number when you already have a means of communication via email or any other channel.


If you live in the US, burner SIMs are cheap and anonymous.


[flagged]


You started a wretched flamewar with this post. That was vandalism.

Nationalistic and ethnic flamewar will get you banned here. So will personal attacks and insinuated slurs.

People have been hounded off this site in the past by comments along these lines. That's shameful, and we want no more of it.

No, we're not defending communism or the communist party. We're trying to defend Hacker News against (a) mob behaviors and (b) self-immolation. Here are some recent comments about this, which include other links to plenty of past explanations.

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...


Unfortunately, I have to post this comment again, from just 3 days ago [1]. Also remember that Eric Yuan is an American citizen, not a Chinese citizen. He switched. Original comment:

When will this meme die?

Zoom is NOT a Chinese company. It is incorporated in and headquartered in the US. Like any American company, it follows US laws in the US and local laws in the other countries where it operates. End of story.

Yes, the company certainly has stronger internal cultural ties to China, due to the number of Chinese employees, but what has that got to do with anything? At the end of the day, they're a public, profit-driven corporation trying to make lots of money across the entire world.

It's not like they're secretly and nefariously doing the CCP's bidding, which seems to be the veiled suggestion people keep making.

Seriously, every time someone brings up that Zoom is "really" a Chinese company, it comes across as borderline racism or conspiracy-mongering or both. And while I'd usually never comment on someone using a throwaway account, in this case, when you're pushing these kinds of shady "stronger than the more-commonly-discussed" insinuations, I think using a throwaway here is representative of exactly the kind of astroturfing that spreads malicious rumors without evidence.

[1] https://news.ycombinator.com/item?id=23510886


Really? And which part of "local law" required Zoom to close accounts of US citizens in the US who weren't breaking any US Laws?

https://news.sky.com/story/zoom-disables-accounts-of-chinese...

>The suspension targeted Humanitarian China, an organisation based in the US, after it held a call with roughly 250 people, including a number who dialled in from China.


Zoom claimed they had to remove Chinese participants from the US-hosted meeting but didn't have the functionality to do that so (wrongly) banned the US hosts.

They said it was wrong to do, reinstated those accounts, and are building the functionality to enforce those Chinese laws without ever impacting users outside China.

That's from their blog. https://blog.zoom.us/wordpress/2020/06/11/improving-our-poli...


I'll ignore for a second the fact that they refused to even acknowledge Tiananmen Square in that post, despite the fact that, as was pointed out, they're a US company that isn't beholden to China and they're posting in English on their US-based website.

They are actually admitting that they're going to prevent people IN CHINA from connecting to a meeting that is presumably hosted IN THE US. That doesn't make it better, it makes it WORSE. You're basically telling the world that China will dictate how you operate WORLDWIDE, not just in China.


I'm sure that if the US had as strict control over our/their internet as China does over theirs, non-US sites would be forced to operate differently for peers in the US too.


Same reason why Google censored itself for China in 2006. Did people think Google was a 'Chinese company'? Note that this was before Google set up a presence in China.

http://news.bbc.co.uk/2/hi/technology/4645596.stm


I think you're missing the point. Google censored itself in china based on Chinese requests. Zoom censored itself in the US based on Chinese requests.

There's a pretty big difference IMO.


I don't see how it's the same. In Zoom's case the organization was in the US, the call was organized in the US, and the company operates under US laws. Why would they censor these accounts? Google, on the other hand, had to do it to operate in China; it had to submit to Chinese regulations.

The equivalent would be if Google blocked the Gmail accounts of US citizens because of discussions related to China.


I guess I don't consider that the same thing at all. They're censoring searches coming from China to google.cn (hosted in China). They didn't really have a choice: the servers are in China, the users are in China.

What they aren't doing is actively blocking users from China getting to www.google.com, and they aren't censoring searches Chinese users do on www.google.com because those resources aren't hosted in China and aren't subject to Chinese law.

Zoom on the other hand IS blocking users from China accessing resources in the US, and while they may have "undone" the ban, they banned US users from their platform for breaking a Chinese law that those users aren't subject to, namely talking about the Tiananmen Square massacre.


they didn't have a choice not to use servers in China?


They actually didn't. Their options were filtered servers in China or no services in China at all. We can collectively be upset at their decision but they ARE beholden to their shareholders as a public company. Abandoning the Chinese market entirely is something they could've tried, but their major shareholders made it pretty clear there would be a change of leadership if they tried.

Unless you've got a magic bullet to give Wall Street a conscience, that wasn't going to happen.


Ok, it was the fault of their major shareholders, but then it's ok to make them pay for that decision, I think (hoping that they're still holding those shares).

By the way, in general my understanding is that the "beholden to their shareholders as a public company" (and thus forced to make the most remunerative decisions) belief is a myth, a public company is free to make ethical choices (if its major shareholders don't oppose them).


They're the same "local laws" that fine foreign companies billions of dollars for doing business in Iran.


Let's be 100% clear.

Based on a complaint from China regarding non-Chinese citizens with non-China accounts, Zoom cancelled the non-China accounts.

Full stop. Let that sink in.

If you are wondering who Zoom will accommodate, you have your answer. This is a fact. And this is what we are calling "racism"?


How is it racist to suggest that Zoom has Chinese influences (seemingly not farfetched based on the equity ownership mentioned above and not at all disputed based on the technicality of Zoom being a US company)?


It is not racist to suggest that the Chinese government influences companies. It is racist to suggest that Chinese people are automatically predisposed to certain actions.


Fair enough and I don't think anyone here will find that statement controversial. I think what bothers me in some of the dialogue here is what I perceive as an attempt to play "hide the ball" by pretending that CCP interests don't run counter to the interests of liberal democracies by labeling raised concerns as racist. No one framed anything in racial terms to begin with so I don't see why it needs to be placed in that context.


Did you read the comment he's replying to? https://news.ycombinator.com/item?id=23553503

Its entire content is an argument that Zoom is evil because Eric Yuan is Chinese-American. It doesn't say a single one of the things you're saying.


It's racist to suggest that Chinese people (NOT the similarly-colored Taiwanese) feel some weight from the dictatorship where they live/grew up/have parents in?


That was never suggested. Zoom is developed almost entirely in China, which means it is effectively a Chinese company under full influence of the CCP.


Dude, you need to think first. Handing over data to the CCP? That's genocide.


Defending something as "NOT a Chinese company" or not a Chinese citizen (as if those would allow certain assumptions about the actions of individuals) is still xenophobic framing. People are people and are capable of independent thought and actions.

We can be upset at power structures and at their effects. Taking that further to make assumptions about groups of people is racist.


"Chinese" are not a race, and assuming that groups of people are somewhat more likely to be subject to some power structure, is not racism or xenophobia.

You'll get a Liu Xiaobo once in a while, but much more often you'll have people who, while easily wonderful as people per se, are disgracefully forced, or have been convinced, to do things that they shouldn't do.


Wait, your point is that Zoom is evil because the founder is Chinese?


Yeah what the fuck, that was some weird casual racism you don't expect to see on HN.


[flagged]


Except Eric is not any sort of govt or party representative.

He is a naturalized American of Chinese ancestry.

Historically, this kind of scapegoating has been counterproductive.


The CCP can and will exert pressure on family members to get expats and former citizens to do what they want. They're not the only country to do so, not by a long shot, but they are particularly brazen about it.

Though I think the CEO is less relevant than the critical mass of developers Zoom relies on who live directly under, and are subject to, CCP malfeasance.


> The CCP can and will exert pressure on family members to get expats and former citizens to do what they want.

Could you give a few such cases?



Summary: the Chinese government harasses the relatives of fugitive corruption suspects to force those suspects to return voluntarily.

That's definitely one area where the Chinese system is vastly behind the international standard. But I had not been conscious of this particular practice.

When I was a child, we had horror stories of people sentenced to the death penalty and executed on superficial charges such as sexual misconduct. And my parents witnessed a detained thief beaten like a wild animal by policemen; the thief's screams could be heard far across the then-poor village.

These measures can even escalate during "Yanda", a nationally-coordinated clampdown on criminal activities. In [1], it was noted: "China's execution rate increases dramatically during Yanda campaigns."

Such brutal national campaigns are dying down; the most recent one [2] involved much less cruelty. And historically such campaigns enjoyed near-universal domestic support.

As of today, the Chinese government has a high degree of popular endorsement to use whatever not-too-out-of-line measures are needed to bring back the corrupt businessmen or former government officials being prosecuted by public attorneys. This is thanks to a grain of national pride, i.e., "those corrupt bastards not only embezzled our money, they escaped to a country that is unfriendly to us and will benefit from that fortune".

[1] https://www.jstor.org/stable/23639486?seq=1 [2] https://www.economist.com/china/2019/02/28/china-is-waging-a...


I'll be honest, I'm not sure what your point is. Your response reads a bit like a defence of the practice, but I'm going to give you the benefit of the doubt (since there's an obvious language issue).

I will say... it's curious to me that you seem to be well aware of such practices but still challenged the parent to provide examples (the implication being that they were misinformed).

Why did you ask the parent to provide examples if you knew of some already? What's your motivation?


#1 I was aware of flaws in the legal system. But the flaws I knew of were that the Chinese legal system was ineffective, inefficient, and quite corrupt.

#2 I was not aware of the details of the harassment of corruption suspects' relatives.

#3 I think it was reasonable not to link these two facts together automatically as self-evident.

#4 I wanted to emphasize the fact that such behavior does have popular support. Which means corrective measures must be based on educating people, not on assuming that a certain way of thinking is automatically universal.


I'm not at a desktop at the moment so I had to just Google for some examples. I was thinking of stuff like this:

https://www.businessinsider.com/china-uses-family-members-to...

Just to be clear, I'm not suggesting that China is unique in doing this. They are simply much more openly aggressive and nonchalant about it.

EDIT: I also think it was both reasonable and correct for you to ask for examples. Superpowers do enough sketchy things that we don't need to be muddying the waters with made up claims.


Is the argument still that this man is a CCP agent because he comes from China? That's literally all the evidence that was provided in the parent post.

Do you not understand the racism behind assuming someone is a CCP agent based on nothing more than their race?


People also suspected Japanese Americans to be spies during WW2 and sent them to concentration camps. I hope we do not repeat the same mistakes.


Which German ambassador are you talking about?


No, it's not racism. Almost every independent American company is banned in China. That's racism. Google, FB, etc.


Chinese is also a nationality, and that does have relevance if the person has ties to Beijing and the CCP. If the founder were Russian and had ties to Moscow, would we be complaining about "casual racism"? It's racist to allege racism when a Chinese person is involved, when a similar accusation of racism wouldn't have been made if the situation were Russian.


Because of its inevitable ties and implicit subservience to the CCP.


The only "relevant" information found in the quote in the GP comment is the nationality of the CEO. How does one jump from the CEO's nationality to inevitable ties and implicit subservience to CCP?


Didn't Zoom block a US-based activist at the request of the Chinese government? I'm not arguing that there is complete subservience to the CCP, but this censorship seems like a line was crossed.

https://www.nytimes.com/2020/06/11/technology/zoom-china-tia...

Edit: Let me also state that I'm not entirely sure what OP's argument was, considering the comment was deleted. I'm merely stating that there seems to be some cooperation with the CCP and Zoom.


I'm aware of Zoom's track record, and it could certainly be scrutinized, criticized, or even boycotted.

What we have here is going from "CEO was born and raised in China" to "No wonder why they have to play party with CCP" and "inevitable ties and implicit subservience to the CCP". I can't see the logical connection in the absence of other information. Don't you see a problem with assuming someone's motives purely from their country of origin?


Ignoring the quote for a second, we know a few things about this CEO and Zoom.

1. Zoom's application was sending data to Chinese servers separate from the application-functionality servers.

2. The CEO is from China; I'm going to assume he has relatives in China.

3. We know the CCP is a completely fucked-up government with an absolutely horrible history of civil-rights violations, genocide, etc.

I wouldn't put it past the CCP to be pressuring the CEO by threatening relatives who live in China. This wouldn't be unheard of for the CCP.

Add to that the unnecessary data transfer to Chinese servers, and it looks really bad.

It's a fairly reasonable conclusion to draw that the CEO is compromised, if all of the above holds true.

More extreme conclusions could just as easily be drawn from that same data, e.g. that he is literally a foreign agent for China.


I find myself agreeing with most of your points. I mainly took issue with taking a shortcut from "being born and raised in China" to being a hostile Chinese agent (and we don't even know if the CEO has relatives in China that can be leveraged anyway).


Very true.


[flagged]


I don't see how this isn't just discriminating based on national origin.

It'd be one thing if there were actually some nefarious ties between Eric and the CCP, but all we are going by is that he's originally from China and there could be CCP influence on people from China. It's not bad to point out a connection; it's bad to point out a possible connection based on nothing more than where the guy is from.


There is plenty of evidence suggesting that Zoom collaborates with the CCP without resorting to that inference: there was undue pressure to terminate activists' accounts even though they were not users from China.

It is best to focus on these sorts of links rather than someone merely being from China.


That I'm fine with. If there is evidence that suggests Zoom is doing something unsavory, then we should call them out on it. But suggesting Zoom isn't trustworthy simply because the CEO is Chinese is distasteful.


I'm disgusted that in 2020, Americans continue to use the same racist, unfounded smears against people based on their ethnicity, just as they did when they were throwing Japanese-Americans into internment camps.


Please don't up the flamewar ante like that. It leads noplace good, just to the same nasty loops played louder.

https://news.ycombinator.com/newsguidelines.html


There are plenty of HN users who won't (or wouldn't, in an ideal world) use any US-based software because of NSA interference.

The issue is national origin, not ethnicity. Japanese Americans were thrown into camps for the same reason, but we're not talking about jailing anyone here. We're talking about avoiding a specific product.

Another difference is that Japanese Americans were put into camps regardless of how many generations removed from being Japanese they were. No one is arguing that CCP has control over Chinese Americans whose ancestors immigrated here in the 1800s. It's about people who literally grew up in China and/or still have close family there for CCP to threaten.


There is a difference between US-based software (e.g. servers located in the US) and a CEO who is from the US (or China in this case). If the argument is that Zoom the company routes traffic to China, etc., then fine, I can get behind that. But if the argument is that the CEO is from China, I find that problematic. I mean, it's not like internment of Japanese Americans is ok if it's limited to only first gen.

EDIT: Also sounds like you're saying that it's ok to avoid doing business with someone based on national origin, which I also find problematic.


> Also sounds like you're saying that it's ok to avoid doing business with someone based on national origin, which I also find problematic.

Sure, it can be problematic. I've seen articles about how Russian people in the software industry are having a very hard time because of what Putin's regime does. It's not fair for the people who have nothing to do with Putin and no exposure to him.

But what is the alternative? Putin and Kim have assassinated dissidents in Western countries. Do we assume people can't be coerced just because they left the borders of the authoritarian country?

Should the US government also remove its nationality restrictions for security clearances?


The alternative is to not discriminate based on national origin? Which is a protected class, by the way. Nationality is also very different from national origin. Eric is an American citizen as far as I am aware. You can point to the requirement to be born in the US to run for the presidency, but 1) that's an edge case and 2) where is the line? Is it ok to discriminate against foreign-born Americans but not against native-born ones? What about native-born Americans with relatives in China / Russia / North Korea?

I mean, overall you really don't find it an issue to blanketly judge an entire class of people based on what some people within that population do or could do?


> The alternative is to not discriminate based on national origin?

It absolutely is not a protected class. There are no protected classes when I am deciding whom I trust with my personal data. I can discriminate for any reason, including national origin.

> I mean overall you really don’t find it an issue to blankedly judge an entire class of people based on what some people within that population does or could do?

I would find that an issue if anyone (including me) were proposing it. We are not. You're attacking a straw man.

Here are the facts, regardless of Yuan's citizenship, race, etc:

1. Yuan grew up in China. He still has Zoom employees and family there.

2. China is controlled by a regime that has no qualms about using physical threats and violence to maintain control.

That's it. That's all I need to decide that I don't trust Zoom, if all of their extreme dishonesty and malware installations weren't enough. They haven't shown good judgment, and even if they did, it would be easy for CCP to put pressure on Yuan (or any other employee living in mainland China).

If Yuan had no family in China, no employees, and enough bravery to speak against CCP, I would not feel this way. I am not judging an "entire class" of people.

By the way, every firm that requires security clearance does judge entire classes of people as security risks. The question I asked, which you didn't answer, is whether you think that's also inappropriate.


Yes you're free to make personal choices based on whatever factor you like. You can decide based on vendor's national origin, race, sex whatever. I just don't think it's right on those factors alone.

You listed 2 things. First is where he's from, the second is the politics of the country. You are then basing your judgement (at least in this comment) purely on those factors. The implication here is you wouldn't trust your data to anyone that was born, grew up and has family and / or employees in China. I mean most of the large tech companies have some employees in China. How is this not judging an entire class (or group if you'd like) of people?

As far as security clearance, they are at least in theory assessed based on established facts about a particular person. e.g. being born in China doesn't automatically disqualify you as far as I'm aware. If you know otherwise or can point to examples, I'm open to being corrected.

I mean, if it's been established that Eric has connections to the CCP, then that's a different matter and we can look at that. My objection is to "Eric is a Chinese-American billionaire businessman, so we shouldn't trust him".


> Japanese Americans were thrown into camps for the same reason, but we're not talking about jailing anyone here.

Yet.

At the risk of a slippery slope fallacy, institutional xenophobia ain't controlled by an on/off switch. Dehumanization is a gradual process, and establishing an attitude that people associated with an enemy are aligned with that enemy is part of that process. At first those associations might seem reasonable, going for officials and other important figures, and then perhaps their family, and so might the actions against them, like added scrutiny and surveillance of their communications and travels. The problem is that both ends of that are prone to scope creep - the target set broadens ever so slowly (citizens, ex-citizens, descendants of (ex-)citizens, their descendants, and so on, almost always excused with "well we need to be sure that $CURRENT_TARGET is not part of $PREVIOUS_TARGET"), while the actions worsen ever so slowly (surveillance, profiling, travel restrictions, property confiscation, imprisonment, sterilization, execution) as the rhetoric heats up from "we just want to make sure these people aren't the enemy" to "these people are the enemy and shall be treated as such".

Personally, I'd prefer to nip that in the bud rather than watch 1800's-era sinophobia reenact itself at the expense of my Chinese-American friends and colleagues. I also have enough self-awareness to know that if I would be upset by people writing me off as "will probably help oppress minorities and political dissidents if his government tells him to do so" simply because I happen to be a citizen of a country with a track record for oppressing minorities and political dissidents, then I should refrain from doing so to a citizen (let alone ex-citizen) of a different country with those same tendencies, even if those tendencies are, in my opinion, much stronger.


> The issue is national origin, not ethnicity. Japanese Americans were thrown into camps for the same reason

``` Of 127,000 Japanese Americans living in the continental United States at the time of the Pearl Harbor attack, 112,000 resided on the West Coast.[9] About 80,000 were Nisei (literal translation: "second generation"; American-born Japanese with U.S. citizenship) and Sansei ("third generation"; the children of Nisei). The rest were Issei ("first generation") immigrants born in Japan who were ineligible for U.S. citizenship under U.S. law.[10] ```

From https://en.wikipedia.org/wiki/Internment_of_Japanese_America...

What do you suggest?

Should there be an upper limit on the number of generations before someone is no longer considered to originate from a nation state? Going by what happened to ethnically Japanese Americans in WWII, that number seems to have to be > 3?

And remember that what happened in the WWII Japanese internment camps is evidence that "national origin" as an association was plainly wrong. From the same wiki page:

``` In 1980, under mounting pressure from the Japanese American Citizens League and redress organizations,[30] President Jimmy Carter opened an investigation to determine whether the decision to put Japanese Americans into concentration camps had been justified by the government. He appointed the Commission on Wartime Relocation and Internment of Civilians (CWRIC) to investigate the camps. The Commission's report, titled Personal Justice Denied, found little evidence of Japanese disloyalty at the time and concluded that the incarceration had been the product of racism ```

Emphasis on the last statement: `the incarceration had been the product of racism`.


Did we read the same comment? https://news.ycombinator.com/item?id=23553503

It explicitly argues that he's evil because he has a Chinese name and Wikipedia describes him as Chinese-American. There's no equivocating about the software being China-based, or him having close family in China for the CCP to threaten.

And discriminating against people for national origin is considered so bigoted it was explicitly included in the 1964 Civil Rights Act. Your reasoning is sound, but you should seriously reconsider your basic ethical principles.


I see your point about a person's family still living in China, and hence giving leverage to CCP to put pressure on them. However, the argument that where people grow up gives them inherent alignment with the government of that country, and they can only be "cleansed" through generations is at best bogus, and at worst ammunition for racism.

Not to justify atrocities happening in China or getting into Whataboutism, but just to give an analogy, would it be fair to consider any US expat an accomplice in or a proponent of separating migrant children from their families at the border?


First rule of HN commenting: Assume bad faith.


While I share your distaste for such assumptions, I believe Zoom has earned the privilege of folks being highly skeptical of its actions and motivations.


You also signed up for accounts with banks, and they collect a ton of data on all your payments; got a problem with that? You're going to say "yeah, Zoom is not a bank", but your problem is the data-collection part.


It's perfectly ok to be against data collection in any form even when you are subjected to it on another service. Just because someone is subjected to data collection knowingly, does not mean they condone it.


> You also signed up an account for banks, they collect a ton of data on all your payments, got a problem with that?

Yes.


> Why care about privacy in one area when this completely unrelated area does it worse?

