Zoom terms now allow training AI on user content with no opt out (zoom.us)
1581 points by isodev 8 months ago | 512 comments



Thankfully nothing like this is in Jitsi Meet’s TOS: https://jitsi.org/meet-jit-si-terms-of-service/

It never ceases to amaze me how companies choose the worst software!


Section 4 of the Jitsi Meet ToS grants them similar rights. It's just with mushier language.

> You give 8×8 (and those we work with) a worldwide license to use, host, store, reproduce, modify, create derivative works..., communicate, publish, publicly perform, publicly display, and distribute such content solely for the limited purpose of operating and enabling the Service to work as intended for You and for no other purposes.

IANAL, but it seems like that would include training on your data as long as the model was used as part of their service.

Everyone who operates a video conferencing service will have some sort of clause like this in their ToS. Zoom is being more explicit, which is generally a good thing. If Jitsi wanted to be equally explicit, they could add something clarifying that this does not include training AI models.


> solely for the limited purpose of operating and enabling the Service to work as intended for You and for no other purposes.

To me (a former corporate lawyer) the "for You" qualifier would limit their ability to use content to train an AI for use by anyone other than "You". Is there an argument? Yes. But by that argument, they would also be allowed to "publicly perform" my videoconf calls for some flimsy reasons that don't directly benefit me.


I write these policies for my day job and I agree with this.


> I write these policies for my day job

My regrets :-p


It isn't for you solely/exclusively. If it "improves" the service for everyone, that includes "you".


Yep, I acknowledge that is a possibility, but it would also lead to them having permission to display literally the entirety of my videoconf calls to anyone, for advertising purposes or some other purpose that only incidentally benefits me. That would be a strained reading IMO.


Additionally courts consider the fact that users have little if any say in the terms and thus tend to take the most restrictive but still reasonable view of any uncertainty in the terms.

Basically "if you wanted it you could have asked for it, if you didn't then that is a problem".


Yep, contracts of adhesion, and construing against the drafter: both favor the user here.


To misquote Bill Clinton, it depends on what the meaning of 'you' is.


More like a certainty :)


Something like: If I have a call with you once, theoretically I might have a call with you again in the future. If they use my content to train "your" AI that would improve our theoretical future call, too, and is a "for me" use, I guess?

And I might have a call with any other zoom user, too, potentially, maybe. So really they are doing me a service by using my content all over the place — who knows, it might benefit me at some point!


"You" is a defined term in Jitsi's Terms of Service.

>...any legal entity or business, such entity or business (collectively, “You” or “Your”)


In case this is meant to imply that perhaps my business and your business are both part of the same "You", they are not. They are each a party to a separate contract with Jitsi; we are not all party to one huge contract with each other (which would hypothetically allow Jitsi to do anything with our content for the purpose of helping them serve all of us).


Self-hosting Jitsi is the better option. Or BigBlueButton, and there are more self-hosted open-source Zoom alternatives.


Do you happen to know of others, by any chance? For self-hosted video call solutions, it looks like Jitsi and BigBlueButton (BBB) are the only decent options out there.


There's now also https://github.com/vector-im/element-call.

They recently added SFU support, so it should scale similarly to Jitsi et al.
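For readers unfamiliar with why SFU support matters for scaling: in a full-mesh call every participant uploads a separate stream to every other participant, while with a Selective Forwarding Unit each participant uploads once and the server fans it out. A toy sketch of that per-participant upload count (illustrative only, real clients also use simulcast and bandwidth adaptation):

```python
def upload_streams(n_participants: int, topology: str) -> int:
    """Outgoing video streams per participant.

    mesh: everyone sends directly to everyone else.
    sfu:  one upload; the Selective Forwarding Unit relays it.
    """
    if topology == "mesh":
        return n_participants - 1
    if topology == "sfu":
        return 1
    raise ValueError(f"unknown topology: {topology}")

# Upload cost grows linearly in a mesh but stays flat with an SFU.
for n in (4, 10, 50):
    print(n, upload_streams(n, "mesh"), upload_streams(n, "sfu"))
```

This is why mesh-based tools top out around a handful of participants, while SFU-based ones (Jitsi, Element Call, etc.) can scale to large rooms.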


QOS (Quality-of-Service) rules might starve your traffic of bandwidth. Are you sure you have perfect "Net Neutrality" on your side?

You would be well advised to use services where the traffic travels through https on port 443 on the server (because it's been my experience that it tends to get pretty good QOS favorability). My own little rule of thumb: "you can connect to any port you want, so long as it's port 443 https." ;)


On the other hand, tls/443 is pretty undesirable for media delivery in videoconferencing because a) it's TCP-based and the required ACKs mean a big reduction in throughput and an increase in latency, especially in the presence of packet loss, and b) most video services these days (and open source servers) use WebRTC, which encrypts the data in transit already, so the TLS encryption is a waste of resources.

Though tls/443 is usually still supported, because it's most often allowed by even restrictive firewalls and networks.
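The packet-loss point can be made concrete with the classic Mathis et al. steady-state TCP model, throughput ≈ MSS / (RTT · √p). A back-of-the-envelope Python sketch with illustrative numbers only (real stacks and congestion controllers differ, but the scaling holds):

```python
import math

def tcp_throughput_mbps(mss_bytes: int, rtt_ms: float, loss: float) -> float:
    """Rough upper bound on steady-state TCP throughput (Mathis model):
    throughput <= (MSS / RTT) * (1 / sqrt(p)), where p is the loss rate."""
    rtt_s = rtt_ms / 1000.0
    bytes_per_s = (mss_bytes / rtt_s) / math.sqrt(loss)
    return bytes_per_s * 8 / 1e6

# Same 1460-byte MSS and 50 ms RTT; only the loss rate changes.
print(tcp_throughput_mbps(1460, 50, 0.0001))  # 0.01% loss: ~23 Mbps ceiling
print(tcp_throughput_mbps(1460, 50, 0.01))    # 1% loss: ~2.3 Mbps, 10x worse
```

Going from 0.01% to 1% loss cuts the achievable rate by 10x, which is why UDP-based WebRTC (which tolerates loss instead of retransmitting) is preferred for live media.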


> Do you happen to know of others, by any chance?

There's Galene, <https://galene.org>. It's easy to deploy, uses minimal server resources, and the server is pretty solid. The client interface is still a little awkward, though. (Full disclosure, I'm the main author.)


Wait, what is "the service" here?

As I understand it, it refers to using meet.jit.si, not "another service" someone might provide by downloading the Jitsi software and running it on their own server.

Please correct me if I'm wrong since this would give me cause to reconsider running a Jitsi server.


It's "the Service" with capital S, indicating that it is a term specifically defined in the contract. Here "the Service" is defined as "the meet.jit.si service, including related software applications". If that's not vague enough, article 2 gives 8x8 the right to change, modify, etc. the Service at any time without any notice.

The guys at 8x8 may be well intentioned, but their lawyers have done their best to not give the customer any basis to sue the company in any foreseeable circumstances. That is what company lawyers do, for better or worse.

Regardless, it appears that at the present time Jitsi is not including AI training in its service, and there is no explicit carve-out in its terms for AI training. However, by article 2 they do have the right to store user content, which might become a problem in the future.


Jitsi App Privacy:

https://apps.apple.com/us/app/jitsi-meet/id1165103905

And Zoom:

https://apps.apple.com/us/app/id546505307

Looks like one company likes to gobble data more than the other even if both privacy policies are gobble-open.


For various reasons I have a bunch of different groups where I use different videocall software for regular meetings - Zoom, Jitsi, Teams, Skype, Google Meet and Webex.

Out of all those, Jitsi is the only one where I can't rely on the core functionality - video calls and screensharing for small meetings (5-6 people); I have had multiple cases when we've had to switch to something else because the video/audio quality simply wasn't sufficient, but a different tool worked just fine for the same people/computers/network.

Like, I fully understand the benefits of having a solution that's self-hosted and controlled, so we do keep using self-hosted Jitsi in some cases for all these reasons, but for whatever reason the core functionality performs significantly worse than the competitors. Like, I hate MS Teams due to all kinds of flaws it has, but when I am on a Teams meeting with many others, at least I don't have to worry if they will be able to hear me and see the data I'm showing.


Bigger servers?


Won't help. I've had multiple callers encounter trouble with what I guess is WebRTC traffic, due to browser extensions, "anti" virus software, VPN policies, etc. Zoom etc. works fine. They usually fixed it by switching to a personal phone instead of a work laptop, but in general the situation is not tenable.


Not sure there would be a decent enough return on investment, especially if the other tools they regularly use provide more reliable service at no additional cost.


How does Jitsi handle 500-person+ conference calls these days? This is the killer Zoom feature. It looks like Jitsi can handle up to 500 now: https://jaas.8x8.vc/#/comparison

That's personally not enough for many remote companies. So if we're going to have to have Zoom on our machines anyway (to handle an all-company meeting), why not just use it for the rest?


Are 500-person conference calls actually productive? Surely the number of speakers in any such meeting will be a small percentage of listeners?


It's useful.

It's more of a large-scale broadcast situation. Think of large corporate town halls, town council meetings, etc.


You can just have a conference call with the 5-10 speakers and use broadcasting software to stream it to the audience, why do they need to be in the conference?


Why set up a separate broadcast when listeners can just join the meeting room?


Yes, I know it's more comfortable that way, but if you have to decide between giving all your data from all your meetings to a random US company and a slight annoyance whenever you do conferences with more than 500(!) participants, the choice is pretty simple to me.

Giving all the data to zoom probably means also giving it to most US law enforcement agencies (should they request it), that would be a big no no for me.


Not to mention that until very recently even MS Teams sent you to a different product when you wanted to stream to 500 people. Even if it's now integrated, it's still a different product inside (e.g. you could open a new window when you were in a 500-person "meeting" back when you still could not do so in a regular meeting).


You say "just more comfortable" but if you have two streams and one of them is on a channel you know to be unreliable (Jitsi) it's pretty guaranteed the unreliable stream is going to be down a significant percentage of the time. If you're a company with 500 people this isn't a comfort question, you're wasting probably hundreds of hours of your employees' time.


I think we're not on the same page about Jitsi being unreliable. In fact, it has been more reliable for me than Zoom in the past. Maybe due to the fact that I'm running Linux, I don't know, I haven't tried either on Windows.


For the corporate or training use case, this is not a problem. If you are worried about US law agencies, you shouldn't be using any system that isn't rooted in face to face communication for anything sensitive. (And even that is suspect with as small as bugged devices are today.)


There is a huge difference between requesting data that has already been collected and requesting Zoom/Microsoft/Google to record future data. The latter probably requires some serious intent. And of course, if I would want to be entirely safe from US law enforcement espionage then I would need to not use computers but whose use case is that?


Because then you have the option to use less specialized software (not Zoom).


Live Q&A is a nice feature.


Conference for the speakers + unlisted livestream on YouTube could handle that, using chat for Q&A.


So, then... you're bound by youtube's TOS, you can't prevent people from getting in (usually via login), and Zoom makes it a nice experience instead of a hack.

Oh, and you can also do sub-rooms with Zoom, which has some applications in these types of meetings.


They don't actually suggest using YouTube. The point is just to illustrate that this is a very common and relatively simple concept. There are tons of tools able to accomplish this.


Chat lags for 5-120 seconds depending on livestream settings, writing is much slower than speaking, does not always convey the question as well as sound, and is close to impossible to do on the go.


They allow substantially less than 5. Though typing is indeed slower for most people.


In my experience there will be always some guy ranting for minutes so I learned to really appreciate town halls with a few speakers and taking questions written in the chat.


For the Q&A section that comes at the end, usually.


You don’t need to be in the videocall to ask a question; you can do it via chat.


Zoom has a mode that basically does this for you, which I assume is how they support >500 users.


At some point though why not just collect questions beforehand, record the whole thing and let people watch it on their own time. At that scale there'll be no interactivity during the meeting anyway.


Because that's how you end up with projects that take 3 years to plan instead of 3 months. A live Q&A where all of the experts who can answer questions and everyone interested in the subject who may have questions are in the same room (live or virtual) is a lot more productive compared to what you are suggesting.

If something they said in the main presentation was missing important details that you need to do your work, why do you need to wait days/weeks for them to gather all the questions, find all the answers, and publish a video, when they could just answer it live in a few seconds?!


Having 500+ people on a project is how something takes 3 years to plan.

"At that scale there'll be no interactivity during the meeting anyway."


There is interactivity. Each company has their own way of doing this, but it's typical that they have someone reading the chat to gather questions and that higher ranked employee can directly speak to ask questions.


You'd be surprised how much chat happens as a side channel. Further, collecting questions means that the presentation material would have to be out there first, and that misses the point of the town halls, where financials and other initiatives are often first presented to the larger organization.


Our town halls usually ask for questions beforehand, and that works quite well.


Very... Particularly when the CEO announces half of those present are sacked...


So it only needs to support 250 participants, really.


It may be that only a small subset of people will talk, but it's not necessarily the case that you know which subset beforehand. When the software can handle it, it's much easier to have everyone join a single call than it is to make sure that the right three people and two meeting rooms have access to talk, and guess which one other person out of about 250 might be called on to provide more context on an answer.

And I suspect that for most people -- including me -- Zoom accounts are "effectively unlimited". I wouldn't expect that many people to attend one of my meetings. The Internal Events team have licenses that allow for more attendees; I have a 500 attendee limit and I doubt I've ever gone above 50.


City-wide town halls, where everyone can listen in but pre-registered people can ask questions, are a productive use case for public information. Those buildings can't accommodate 500 people.


For real, there are no 500-person conference calls, just mostly a one-way broadcast with a stream of questions.


> 500-person+

That is called broadcast media; it was actually better thirty years ago than it is now. If you want conversation, then you make a panel and have a single microphone for the rest.


Specifically because of the discussed TOS.


Come on, 500+ calls are a very niche use case. With plenty of alternatives at that


+1 for Jitsi. They are awesome, lightweight, and just work with the least hassle.

Pretty bad that many nontechnical users are not aware of it compared to Google Meet or Teams.


This is just a marketing problem, ain't it?

Unfortunately, one big marketing resource is also owned by said competitor... oops. So where are those antitrust laws again?


A lot of us technical users have never heard of it either lol


It's also much more responsive than teams. They seem to optimize frame rate over resolution and teams seems to do the opposite.

Having used both I find the framerate more important as it's much easier to interpret quick facial expressions. But teams looks glossier which makes it easier to sell I guess.


Have you experienced anything like this other commenter mentioned?

https://news.ycombinator.com/item?id=37022878


Nope it works great for me, we always use it with the ham radio club and it performs admirably.


For faces that might be true. I've had issues with different tools when sharing a full desktop session on a 4k monitor.


Lightweight? They are literally the only video chatting service I use that makes my laptop fans spin up.


I have yet to find a modern video chat that doesn't drain the battery of any laptop, from old Xeons to fairly recent Ryzens and even M1/M2 Macs.

It's a bit puzzling, actually. I don't think Skype and TeamSpeak had the same effect on computers back in the day. Just how much local processing are they doing these days? It's crazy.


It's most likely due to the fact that they are all Electron apps, rather than that they are doing "something".


Hardware decoding is also an issue... as in, not being used. Old webcams used to do h.264 encoding in hardware. Encoding has since moved to the CPU, which may or may not be fine. The next issue is the codec chosen: most hardware has h.264 decoding built in, but it's not being used anymore. Instead they're trying to use VP9, h.265, or AV1, which in many cases requires CPU-based software encoding and decoding, so the fans rev up like turbines.

I feel certain the reason this is happening is because some middle-manager terrorist in a boardroom said "use this codec it won't require as much network data usage! value for the shareholder!" without asking first whether hardware encoding is beneficial even if there's a bit more network traffic with the older codecs.

Really burns me up. I do not want to use software encoding/decoding if I have hardware support.
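If you want to check what hardware acceleration your machine actually offers, ffmpeg (assuming it is installed) can list the methods it was built with, e.g. videotoolbox on macOS, vaapi on Linux, d3d11va on Windows. A small hedged sketch:

```python
import shutil
import subprocess

def available_hwaccels() -> list[str]:
    """List hardware acceleration methods the local ffmpeg supports.

    Returns an empty list if ffmpeg is not on PATH. `ffmpeg -hwaccels`
    prints a header line followed by one method name per line.
    """
    if shutil.which("ffmpeg") is None:
        return []
    out = subprocess.run(
        ["ffmpeg", "-hide_banner", "-hwaccels"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line.strip() for line in out.splitlines()[1:] if line.strip()]

print(available_hwaccels())
```

Whether a given conferencing app actually uses any of these is a separate question; this only tells you what the hardware and drivers could do.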


Bandwidth is the limiting factor in a lot of circumstances, and networks are very challenging to manage. Especially with an increasing number of users on mobile connections, reducing network usage can be the right call.

But performance matters, too, of course. It's tricky to balance them.


I think Google Meet uses VP9, which is really annoying.


> electron apps

Which only adds limited overhead to certain cases. Unless they are encoding/decoding video directly in JS...


Correct, Teams doesn't use VideoToolbox, so it's software encoding. Probably not directly in JavaScript per se; it's probably calling a native library, but it's hot because Teams doesn't use hardware encoding.


Video encoding and decoding is expensive! Especially as cameras improve and users' expectations of quality increase.


Zoom is reasonably light and uses hardware acceleration on anything modern (e.g. my 2015 MBP).


I tried it at the beginning of the pandemic and my siblings' phones all drained during the hour-long call.


I started using matrix internally (with element as a client) which uses jitsi under the hood for video/voice chat. Quality is amazing.


Element Call is going to be pretty great once it is production-ready and has E2EE enabled by default (a branch of it already has it on.)


We've been using self-hosted matrix for the past 3 years with our jitsi instance and I tend to agree with you.

It's reliable and privacy preserving.


Not anymore, actually. The Jitsi integration was just a temporary thing, but 1:1 video chat already works natively.


I use Zoom for work and never got an email explaining that suddenly they can use call recordings to train some AI models and sell this to 3rd parties.


Worst is relative. Zoom has a lower barrier to entry for normal users (who far outnumber us nerdy types) than any other app in its class. Worst for privacy, best for usability, many argue.


"Worst for privacy, best for usability" is the norm. Most B2C stuff is almost predatory. The only exceptions are at the high (cost) end of the market, and Apple to some extent.

If you aren’t paying in either time (DIY) or money, you are probably being exploited.


Apple is also the high cost end of the market.


What I take to be the TOS for Google Meet (it's a little hard to tell!) makes no specific reference to AI, but does mention use of customer data for "developing new technologies and services" more generally. https://policies.google.com/terms#toc-permission


Actually, they only affect their hosted meet.jit.si service, right? Not if you self-host Jitsi on your own server (which you should if you're a medium-large company, for data protection and all that)


Of course. If you run it yourself, you're free to train your neural nets on your users, if that's something you want to do

For restrictions on what you can do with the code, you'll need to check the code's license, not the hosted-service's terms of use


Also, Jitsi can easily be self-hosted, which means no information leaks at all.

I've refused to install zoom since they installed a Mac backdoor and refused to remove it until Apple took a stand and marked them as malware until they removed it. And that was far from their only skullduggery.


Jitsi is at least reasonably self-hostable, minus the effort needed to get user logins working.


Also HN user jeltz below mentioned:

> I have tried most of them: Google Meet, Teams, Slack, Discord, Skype, Jitsi and so far I liked Jitsi the most and Skype the least.


Skype became really, really terrible; it looks like it's been unmaintained for the past 10 years. I'd rate its usability worse than most open-source software. The sound quality is also awful; it feels like I'm calling a landline.


Where do you live? In the US at least, landline (AKA POTS) is still the gold standard for audio quality.


I live in France, landline had a distinct background white noise to it that somehow Skype managed to imitate. Switching to any other software feels like you're upgrading to HD audio.


It’s called “comfort noise,” and was an option in Lync/Skype for Business. A lot of users being switched from desk phones, especially older ones who still primarily used landlines at home, found themselves wondering if their conversation partner was still on the line without it.


Not parent commenter, though facetime audio or telegram audio is my preferred for audio quality.


In the US I don't know a single person that has access to POTS. Discord (with paid nitro) is the gold standard for quality and latency, followed by all the free VoIP apps


I live in the US, and I'm pretty sure everyone I know has a landline, though a good number of them are now digital/fiber/whatever. Some people I know still have multiple landlines, as it's cheaper than paying multiple cell bills if necessary. I know at least one person who used to have call forwarding set up to get calls on their cellphone, but with the current state of marketing calls they probably don't do that anymore.


> everyone I know has a landline

We clearly live in very different bubbles

> digital/fiber/whatever

VoIP

> cheaper than paying multiple cell bills

Nobody pays multiple cell bills unless they wanna use several data-only eSIMs from different carriers to get better speed/coverage. If you just want a lot of phone numbers, you can port your numbers to a VoIP provider and forward them. Way cheaper than a landline


I may be too much of a zoomer but I haven't seen a landline in years, nearly a decade actually.

I'm not sure who still has them


The only people I know who still have a landline are my grandparents who are in their 70s


I'm in the US and landline was dogshit compared to modern discord/whatsapp/whatever.

Maybe it's cause old phone mics sucked but it wasn't great.


Zoom noise canceling is really good; it can filter out my children screaming in the background. Very useful for WFH people.


Not yet.


> how companies choose the worst software!

A local accounting firm with 4 employees just wants their conferencing software to work - Zoom does that better than anyone else.

There is nothing "worst" about that. It never ceases to amaze me that this community is so out of touch with the general populace.


Tangentially related, but a number of telehealth operations with hospitals/therapists/etc... use Zoom -- I suspect because their clients can connect without an app or an account over a browser.

When you join a Zoom session over the browser, you don't sign a TOS. And I assume that actual licensed medical establishments are under their own TOS provisions that are compatible with HIPAA requirements. Training on voice-to-text transcription, etc. would be a pretty huge privacy violation, particularly in the scope of services like therapy: both because there are demonstrable attacks on AIs to get training data out of them, and because presumably that data would then be accessible to employees/contractors who were validating that it was fit for training.

Out of curiosity, has anyone using telehealth checked with their doctor/therapist to see what Zoom's privacy policies are for them?


Looks like they have a separate offering, Zoom for Healthcare that presumably has different terms and conditions.

https://blog.zoom.us/answering-questions-about-zoom-healthca...


What if you discuss sensitive health-related details with someone other than your doctor, for example your attorney?

The privacy issues here are bottomless, and so are the legal issues.


The law doesn't protect it. HIPAA doesn't apply in that setting.

Attorney client privilege is an interesting case.

"Privacy issues" is a meaningless phrase to me when divorced from the law. Do you mean, like, ethically concerning? This term in the contract is neither uncommon nor illegal.


I know that many smaller therapists use Zoom for exactly the reasons you mentioned above - ease of use. They often don't have the technical know-how to assess the technology they're using.

The UK, for example, has hundreds of private mental health practitioners (therapists, psychologists, etc.) that provide their services directly to clients. They almost universally use off-the-shelf technology for video calling, messaging, and reporting.


IANAL, but I did health tech for 10 years and had my fair share of interactions with lawyers asking questions about stuff I built.

HIPAA applies to the provider. Patients have no responsibility to ensure the tech used by their care provider is secure or that their medical records don't wind up on Twitter. HIPAA dictates that care providers ensure that, by placing both civil and sometimes criminal liability on the provider for not going to great lengths here.

In practice, this means lawyers working with the care providers have companies sign legal contracts ensuring the business associate is in compliance with HIPAA, and are following all of the same rules as HIPAA (search: HIPAA BAA).

Additionally, you can be in compliance with HIPAA and still fax someone's medical records.


Healthcare professionals still use fax precisely because of this.

Analog line fax is HIPAA compliant because it is not "stored"

Using a cloud fax provider will immediately put you out of compliance for this reason, unless you have a HIPAA-compliant cloud fax service, which are rare.


I don’t think the question is about Zoom’s safeguards which are audited, and as you say almost certainly stronger than HIPAA requirements, but rather whether they can use the stored PHI for product development where the law appears ambiguous.


Imo the law basically says you can do this with PHI:

- De-identify it, then do whatever you want with it
- Use it to provide some service for the covered entity, but not for anyone else
- Enter a special research contract if you want to use it slightly de-identified for some other specific purpose
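To illustrate the "remove identifiers, then use freely" idea: HIPAA's Safe Harbor method requires removing 18 categories of identifiers. A deliberately toy Python sketch covering just three of them (real de-identification needs far more than a few regexes, and regex-based scrubbing is known to miss things):

```python
import re

# Hypothetical patterns for three of Safe Harbor's 18 identifier
# categories: phone numbers, dates, and titled names. Illustrative only.
PATTERNS = [
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
    (re.compile(r"\b(?:Mr|Mrs|Ms|Dr)\.\s+[A-Z][a-z]+\b"), "[NAME]"),
]

def scrub(text: str) -> str:
    """Replace matched identifiers with placeholder tokens."""
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

print(scrub("Dr. Smith saw the patient on 3/14/2023; callback 555-867-5309."))
# -> "[NAME] saw the patient on [DATE]; callback [PHONE]."
```

Note the catch raised elsewhere in this thread: running this scrubber is itself an access of PHI, and software de-identification leaves re-identification risk, which is why institutions often treat its output as PHI anyway.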


One note: the act of de-identification itself requires accessing PHI when done retroactively. This may be institutional policy or specific to covered entities, but per the privacy office lawyers, such access (apart from a small dataset) requires a permitted use in order to then de-identify and use freely.

As with all things HIPAA, this only becomes a problem when HHS starts looking and I’m sure in practice many people ignore this tidbit (if in fact this is the law and not Stanford policy).


This is correct -- the covered entity can de-identify data, or ask the BA to de-identify data. If the BAA says the BA can, then they can.


Related to this, anyone know if Zoom has a separate offering for education (universities, schools, etc)? I teach at a university, and not only do we use Zoom for lectures etc, but also for office hours, meetings, etc, where potentially sensitive student information may be discussed. I'm probably not searching for the right thing; all I found was this: https://explore.zoom.us/docs/doc/FERPA%20Guide.pdf

(FERPA is to higher ed in the US what HIPAA is to healthcare.)


Both my undergraduate and graduate universities have free Zoom under <university name>.zoom.us, so I assume it's separate.


"Vanity URLs" are just a feature, usually a requirement for SSO. I cannot see how that would cause any different treatment of data related to your use.


IANAL but “Zoom for Healthcare” is a business associate under HIPAA and treated as an extension of the provider with some added restrictions.

Covered entities (including the EMR and hospital itself) can use protected health information for quality improvement without patient consent and deidentified data freely.

Where this gets messy is that deidentification isn’t always perfect even if you think you’re doing it right (especially if via software) and reidentification risk is a real problem.

To my understanding business associates can train on deidentified transcripts all they want as the contracts generally limit use to what a covered entity would be allowed to do (I haven’t seen Zoom’s). I know that most health AI companies from chatbots to image analysis do this. Now if their model leaks data that’s subsequently reidentified this is a big problem.

Most institutions therefore have policies more stringent than HIPAA and treat software deidentified data as PHI. Stanford for example won’t allow disclosure of models trained on deidentified patient data, including on credentialed access sources like physionet, unless each sample was manually verified which isn’t feasible on the scale required for DL.

Edit: Zoom’s BAA: https://explore.zoom.us/docs/en-us/baa.html

“Limitations on Use and Disclosure. Zoom shall not Use and/or Disclose the Protected Health Information except as otherwise limited in this Agreement or by application of 42 C.F.R. Part 2 with respect to Part 2 Patient Identifying Information, for the proper management and administration of Zoom…”

“Management, Administration, and Legal Responsibilities. Except as otherwise limited in this BAA, Zoom may Use and Disclose Protected Health Information for the proper management and administration of Zoom…”

Not sure if “proper management and administration” has a specific legal definition or would include product development.

Edit 2: My non-expert reading of this legal article suggests they can. https://www.morganlewis.com/-/media/files/publication/outsid...

“But how should a business associate interpret these rules when effective management of its business requires data mining? What if data mining of customer data is necessary in order to develop the next iteration of the business associate’s product or service? … These uses of big data are not strictly necessary in order for the business associate to provide the contracted service to a HIPAA-covered entity, but they may very well be critical to management and administration of the business associate’s enterprise and providing value to customers through improved products and services.

In the absence of interpretive guidance from the OCR on the meaning of ‘management and administration’, a business associate must rely almost entirely on the plain meaning of those terms, which are open to interpretation.”


Haha wow this is a great post. I am a lawyer and you may have solved a problem I recently encountered. So you think this is saying that generic language in the Zoom BAA constitutes permission to de-identify?

Are there examples of healthcare ai chatbots trained on de-id data btw? If you're familiar would love to see.

What's your line of work out of curiosity?


> Haha wow this is a great post. I am a lawyer and you may have solved a problem I recently encountered. So you think this is saying that generic language in the Zoom BAA constitutes permission to de-identify?

Not that I’m an expert on the nuance here but I think it gives them permission to use PHI, especially if spun in the correct way, which then gives them permission to deid and do whatever with.

My experience has been that it’s pretty easy to spin something into QI.

> Are there examples of healthcare ai chatbots trained on de-id data btw? If you're familiar would love to see.

https://loyalhealth.com/ is one I’ve recently heard of that trains on de-id’d PHI from customers.

> What's your line of work out of curiosity?

Previously founded a health tech startup and now working primarily as a clinician and researcher (NLP) with some side work advising startups and VCs.


Awesome. Thank you!


Happy to help. Let me know where to send the invoice for my non-legal legal expertise, if your rate is anything like my startup's lawyer you'll find me a bargain! Haha.


Zoom has a specific version for HIPPA regulations.


Forgive me for being pedantic but this is like nails on a chalkboard to me.

HIPAA is the correct abbreviation of the Health Insurance Portability and Accountability Act, which, as an aside, doesn't necessarily preclude someone from training on patient data.

HIPPA is the unnecessarily capitalized spelling of a (quite adorable) crustacean found in the Indo-Pacific and consumed in an Indonesian delicacy known as yutuk.

https://en.wikipedia.org/wiki/Hippa_adactyla


It's not a privacy violation because Service Generated Data is not PII.

All of this is a lot of BS about nothing.


edit: I'm retracting my earlier comment. Earlier I wrote that the headline didn't seem to match what was in the TOS, since OP never mentioned which part they're concerned about.

I'm now assuming the part they don't like is §10.4(ii):

> 10.4 Customer License Grant. You agree to grant and hereby grant Zoom a perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license and all other rights required or necessary to redistribute, publish, import, access, use, store, transmit, review, disclose, preserve, extract, modify, reproduce, share, use, display, copy, distribute, translate, transcribe, create derivative works, and process Customer Content and to perform all acts with respect to the Customer Content: [...] _(ii) for the purpose of product and service development, marketing, analytics, quality assurance, machine learning, artificial intelligence, training, testing, improvement of the Services, Software, or Zoom’s other products, services, and software, or any combination thereof_

Notice that 10.4(ii) says they can use Customer Content "for ... machine learning, artificial intelligence, training", which is certainly allowing training on user content.


But it is saying that your customer content may be used for training AI, in 10.4:

> 10.4 Customer License Grant. You agree to grant and hereby grant Zoom a perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license and all other rights required or necessary to redistribute, publish, import, access, use, store, transmit, review, disclose, preserve, extract, modify, reproduce, share, use, display, copy, distribute, translate, transcribe, create derivative works, and process Customer Content and to perform all acts with respect to the Customer Content: (i) as may be necessary for Zoom to provide the Services to you, including to support the Services; (ii) for the purpose of product and service development, marketing, analytics, quality assurance, machine learning, artificial intelligence, [...]


> You agree to grant and hereby grant

I get that legalese is like human-interpretable pseudocode, but like, is there really no better way to word this? How can you grant without agreeing to grant?

> import, access, use, store, transmit, review, disclose, preserve, extract, modify, reproduce, share, use, display, copy, distribute, translate, transcribe, create derivative works

Wow this cover of Daft Punk - Technologic sucks.

I, for one, do not welcome our dystopian overlords, but am at a loss as to what I can do about it. I try to use Jitsi or anything not-Zoom whenever possible, but it's rarely my pick.


>> You agree to grant and hereby grant

"Hereby grant" means the grant is (supposedly) immediately effective even for future-arising rights — and thus would take precedence (again, supposedly) over an agreement to grant the same rights in the future. [0]

(In the late oughts, this principle resulted in the biotech company Roche Molecular becoming a part-owner of a Stanford patent, because a Stanford researcher signed a "visitor NDA" with Roche that included present-assignment language, whereas the researcher's previous agreement with Stanford included only future-assignment language. The Stanford-Roche lawsuit on that subject went all the way to the U.S. Supreme Court.)

[0] https://toedtclassnotes.site44.com/Notes-on-Contract-Draftin...


Yes, but the parent commenter noticed that and wondered about the other part, the "agree to grant" part. Simply "hereby grant" should suffice.


> Simply "hereby grant" should suffice.

Not necessarily — in some circumstances, the law might not recognize a present-day grant of an interest that doesn't exist now but might come into being in the future. (Cf. the Rule Against Perpetuities. [1])

The "hereby grants and agrees to grant" language is a fallback requirement — belt and suspenders, if you will.

[1] https://en.wikipedia.org/wiki/Rule_against_perpetuities


> How can you grant without agreeing to grant?

I think it's more that they're being explicit about the logical AND in that sentence. You agree to grant, AND grant them the permission.

I think it's a technicality about it being a "user agreement" so they probably have to use the word agree for certain clauses.


To whom at Zoom do we send the eDiscovery (and litigation hold) requests? My goodness.


set yourself up with a couple of vices [coffee, smokes] and have look here, for things you can do:

https://news.ycombinator.com/item?id=37022623 [a number of links regarding how to play with bots and bork training by"malforming" your inputs]


And after that litany of very specific things, "and to perform all acts with respect to the Customer Content." Couldn't the whole paragraph just have been that phrase?


Not a lawyer, but generally when whole paragraphs aren't "that phrase" it's because people read loopholes into "that phrase."


You're right. I retracted the comment and edited to reflect this point.


Wow. I hope the op just didn’t read that far.


hitting "search" in your browser and typing "artificial intelligence" doesn't really require reading the whole thing ;)
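The same check, scripted. The clause text below is excerpted from section 10.4 as quoted elsewhere in this thread; a real check would pull the full, current ToS page rather than a hardcoded excerpt.

```python
# Scan a ToS excerpt for AI/ML-related terms -- the scripted version
# of hitting Ctrl+F in your browser. The excerpt is from 10.4 as
# quoted in this thread, not the full document.
clause = (
    "for the purpose of product and service development, marketing, "
    "analytics, quality assurance, machine learning, artificial "
    "intelligence, training, testing, improvement of the Services"
)

terms = ["machine learning", "artificial intelligence", "training"]
hits = [t for t in terms if t in clause.lower()]
print(hits)  # ['machine learning', 'artificial intelligence', 'training']
```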


Seemed odd that there was so much detail refuting it on the points prior to 10.4.

Maybe it's just a coincidence.

Or maybe it’s two angles perfectly coinciding.


You will have to excuse me if I don’t trust a company that kicks off users at the behest of the PRC!

https://techcrunch.com/2020/06/11/zoom-admits-to-shutting-do...

Quibbles over the definition of phrases like “Customer Content” and “Service Generated Data” are designed to obfuscate meaning and confuse readers into thinking that the headline is wrong. It is not wrong. This company does what it wants to, obviously, given its complicity with a regime that is currently engaging in genocide.

https://www.bbc.com/news/world-asia-china-22278037.amp

Why do you trust them to generate an AI model of your appearance and voice that could be used to destroy your life? I don’t.


I'm not rendering an opinion here about the trustworthiness of Zoom. I'm simply saying that the plain reading of the TOS is the opposite of what the headline on this post claims.


The definitions of phrases like “Customer Content” and “Service Generated Data” are unclear. It is disingenuous to say that the TOS is the “opposite” of what the headline suggests.

You really think that the engineers in China are developing AI models of users without using a lot of user content to feed the model? Doubtful. Hiding behind ill-defined terms has the fingerprints of an Orwellian regime. I think I know which one.


Well it’s a Chinese company. So they are beholden to the CCP.


Zoom? A company publicly traded on the Nasdaq and founded in San Jose, CA?


Surprised me too...

"The company has previously acknowledged that much of its technology development is conducted in China and security concerns from governments abound."

https://techcrunch.com/2020/06/11/zoom-admits-to-shutting-do...


The wording of things in the preferences dialog has always convinced me that it's not primarily developed in the US.


Its CEO has ties to the CCP, and development is all done in China. Just because it is registered and claims to be founded in San Jose doesn’t mean it’s not a Chinese company.


By that logic, Apple is also a Chinese company.


Apple only assembles products in China. Almost none of the iPhone is made in China. They do no development in China. They didn’t start the company in China. They run the App Store and iCloud storage separately in China.

So by that logic, no.


Yeah, I saw some people posting screenshots of 10.2 and was thinking maybe it was just exaggeration for clicks, but 10.4 is horrifying. Customer Content as defined in 10.1:

"10.1 Customer Content. You or your End Users may provide, upload, or originate data, content, files, documents, or other materials (collectively, “Customer Input”) in accessing or using the Services or Software, and Zoom may provide, create, or make available to you, in its sole discretion or as part of the Services, certain derivatives, transcripts, analytics, outputs, visual displays, or data sets resulting from the Customer Input (together with Customer Input, “Customer Content”); provided, however, that no Customer Content provided, created, or made available by Zoom results in any conveyance, assignment, or other transfer of Zoom’s Proprietary Rights contained or embodied in the Services, Software, or other technology used to provide, create, or make available any Customer Content in any way and Zoom retains all Proprietary Rights therein. You further acknowledge that any Customer Content provided, created, or made available to you by Zoom is for your or your End Users’ use solely in connection with use of the Services, and that you are solely responsible for Customer Content."


Yikes, and to think some schools force people to use Zoom...


And I'm supposed to trust them? The company that recently disabled security controls of the OS as a growth-hacking technique?


Since this is a legal-language discussion, it's worth noting that the portion you quoted might not say what you claim it says:

> Service Generated Data; Consent to Use. Customer Content does not include any telemetry data, product usage data, diagnostic data, and similar content or data that Zoom collects or generates in connection with your or your End Users’ use of the Services or Software (“Service Generated Data”).

Notice that Service Generated Data quite explicitly doesn't include Customer Content.

On the contrary, it says Customer Content doesn't include service generated data. So you don't have rights to the telemetry or anything else they collect.

It does not say Service Generated Data doesn't include their own copies of customer content, which could be a part of "data Zoom collects .. in connection with your .. use".


Except it’s a few steps away from customer input and customer content.

Sounds like it can eventually include chats during a call.

Sounds like it can eventually include your meeting recording files in its processing, since a recording is a file. A call recording stored to your Zoom cloud can be a form of service-generated data from calls.

And it sounds like transcripts of live audio could also function as service-generated data (was the audio clear? Could AI convert speech to text?).

“Call statistics” could extend to the actual audio and video waveforms in real time. Gotta keep an eye on quality with AI.

My only question is whether this includes paid users.

If so, I had been meaning to move on from Zoom as a paid customer and this may have done it.

It’s not end-to-end encryption if Zoom can tap into the files on your cloud or computer, or if it lets you believe you are providing the other party with encryption when they aren’t safe. Corporate information is valuable to some.


> But it doesn't say that Customer Content is being used to train AI; it says that Zoom can do whatever it wants with Service Generated Data.

Customer recordings are service-generated data.


> You agree that Zoom compiles and may compile Service Generated Data based on Customer Content and use of the Services and Software.

This clause reads like the distinction is less about the contents and more about Zoom's rights to use any content.


Notice that "marketing" is in there. Zoom claims the right to listen in on all your calls and use that data for marketing purposes.


Good catch, jxf! But what is the boundary line between SGD and Customer Input/Content? Is it blurry or clearly defined? It seems like things like translations or future enhancements might fall into that area (it also seems like training AI on diags isn't as useful), so this might be expanded in the future now that they have that language in place.


It is defined not at all. Sorry if this is bad for your investment decisions, jxf, but this company is not trustworthy.


I don't have any positions in Zoom (although I did have some puts last year that I've since closed out).


You are misreading and misunderstanding this whole paragraph.

The purpose of 10.4 is to allow Zoom to send your call to other services, like say YouTube for live streaming, or any of the dozens of other services that integrate with their APIs. Without 10.4, three quarters or more of Zoom's use cases would no longer work.


Who in their right mind would use Zoom as a service? My employees will never connect to another conference call with a third party that uses Zoom again, ever.


I appreciate your sentiment but sometimes there’s immense pressure to use it because it’s what everyone else is using, and refusing would cause a meeting to be disrupted (or force you not to attend).


But sometimes legal has the trump card in terms of dictating company policy, and having confidential information laundered into the public domain via training on "customer content" seems like a very red line.


I am curious if they have been silently saving voice to text transcription in the background on all calls and if AI will be permitted to ingest all of that data. A great deal could be learned from private one on one calls in the corporate world. The insider knowledge one could gain about corporations and governments would be fascinating.


I feel as if 2023 could become the inflection point where we will finally start investing in our own infrastructure again. Video calls for example are really a commodity service to be set up at this point.


Where I work they have been running in-house video meeting infrastructure for close to 20 years. They abandoned all the equipment and expertise a few years ago in favor of Zoom. For all its faults, it's just so much easier for users. They probably saved 10 or more minutes per meeting of "Can you hear me? Can you see us? Can you see my screen?" BS at the start of each meeting.

I guess it also helps that these days most people are working with phones or laptops that have integrated and well-supported cameras and microphones, vs. back then, when that stuff would have been external peripherals and required installation of the proper drivers.


Odds of any company spending the millions of dollars required to do that poorly, let alone going the extra distance to do it right: about zero.


I don’t know, we might be closer to quality of service parity than we think.

Even without taking into account “costs” of blatant privacy disregard / violation, data theft, potential industrial espionage, etc.

If the tools continue to get better at the current rate; then the SREs you have to hire anyways will probably be able to deliver about equal results (while staying in control of the data).

I’m thinking about those GPU “coops” we heard about emerging, shared between SV startups.

And then think about what Oxide are doing.

Then binding all of those trends together through the promise of Kubernetes and its inherent complexity finally getting realized / becoming “worth it” at some point.

Multi cluster, multi region - multi office attached server rooms across CO’s locations? Everything old could be new again. Wireguard enabled service meshes, Cluster API, etc. We will get there at some point probably sooner than later.

Then you “just install” the fault tolerant Jitsi helm chart across that infra… with all the usual caveats of maintenance taken into account of course. Again hassles will be reduced on all fronts and SREs needed anyways.

I do lots of terraform and k8s in my day job, but at this point I deem any work that isn't directly related to k8s as some kind of semi (at best) vendor-specific dead-weight knowledge. Kind of like how I'd never want to be knowledgeable about browser quirks; I hate how much I know about these proprietary cloud APIs.

I know some people who work on Kubernetes for “real-time” 5G back-ending if you can believe it. Lots of on-prem there on the cellular provider sides etc. We are getting really close already.


You're not going up against "how hard is it to roll your own", you're going up against "how inconvenient is it compared to Zoom". You can spend millions to make something that works, but unless it's as good as Zoom (and that's going to cost you a few million to develop even with off-the-shelf FOSS components, and FAR more if you're hiring experts to write it from scratch), your CEO should, and I stress *absolutely should* (because their responsibility is to shareholders, not to employees), ask "how is this better than Zoom, and why are we not using that instead so we can put that money in our own wallet?"


What is so hard about it? It's a web app and some video manipulation. It would be nice if computers were usable enough that this would take a weekend.


The part where "it's a web app and some video manipulation" requires hiring about a million dollars worth of "at least three developers" (which costs a company their salary plus that entire salary again for insurance, health care coverage, etc) to write and maintain that app for you, plus the at least another million that it'll set you back ensuring that you have all the hardware in all your offices to make that smooth rather than "OH FOR FUCKS SAKE CAN WE PLEASE JUST USE ZOOM WHAT THE FUCK" from every single employee.


Basic video calls? Absolutely, I've done it in a weekend with webRTC. All the other features that enterprise customers require? That's years of work.


It's quite common for corporate/government contracts to have totally different terms that prohibit any kind of AI training (or recording/access at all). This has been the case for years now. Precisely because of the risks you highlight.

In these cases, companies train on content stored/transmitted in the free/individual consumer version only.


That's good to know. Assuming government employees are not meeting with anyone using personal or corporate accounts (contractors, vendors), they should be at less risk of AI blackmailing them or selling secrets to opposing nations. Everyone else will just need to be extra careful what they say in the event that the AI accidentally leaks something.


Their mouths must be watering at this thought, but the legal repercussions are obviously company-destroying.


It wouldn’t be surprising.

Gotta make sure audio is clear on calls.

How?

We randomly run speech to text to make sure words are being said.

Which words? Well, if any are on this list of words, we might have to tell someone.


Isn't that what "Customer Content" is?


It is, though I suspect there may be some expectation that voice-to-text transcription only occurs when one clicks a button to make it so.


I just like how everyone is up in arms over the use of your meetings for AI training specifically, when the ToS clearly says all "Customer Content/Customer Input", AKA your words, text, voice, face, etc., can be used for "product and service development", which could just as easily be a facial recognition database, a corporate espionage service, a direct competitor to whatever company you work for, or literally anything else before it's an AI lol.


10.2 … You agree that Zoom compiles and may compile Service Generated Data based on Customer Content and use of the Services and Software. You consent to Zoom’s access, use, collection, creation, modification, distribution, processing, sharing, maintenance, and storage of Service Generated Data for any purpose, to the extent and in the manner permitted under applicable Law, including for the purpose of product and service development, marketing, analytics, quality assurance, machine learning or artificial intelligence (including for the purposes of training and tuning of algorithms and models), training, testing, improvement of the Services, Software, or Zoom’s other products, services, and software, or any combination thereof, and as otherwise provided in this Agreement


Before that:

> Customer Content does not include any telemetry data, product usage data, diagnostic data, and similar content or data that Zoom collects or generates in connection with your or your End Users’ use of the Services or Software (“Service Generated Data”).

I could be wrong, but my take is that there is not all that much to see here


Zoom is giving itself the right to collect video and audio of you that could be used today to deepfake your voice in a convincing way.

It won't be long before the video deepfakes are convincing too.

This is absolutely awful and terrifying.


> does not include ... product usage data, diagnostic data, and similar content or data that Zoom collects or generates in connection with your ... use of the Services

Did you not read the quote? Or are you telling me this still might include video and audio data? I feel like a medieval illiterate farmer reading Latin...


The ambiguity in this wording is on purpose, so that it will be harder to argue in court (if someone sues them) that any specific thing was forbidden or allowed.

They don't detail what any of the product usage data is, and you might think it includes content, but later on they detail that they'll use user content (which they also don't define) for AI training...


The scope of content used for AI training seems (to me) like it only includes video and audio data.

The list you reproduced above sounds like it's just metadata, like IP addresses, authentication logs, click tracking, etc.


It's section 10.4 that says they can use "Customer Content" for AI training.


It's hard to understand what they mean. I understand it as they're free to generate "Service Generated Data" based on “Customer Content”. So for example, a compressed rendition of a call recording would be "Service Generated Data" and thus they will be free to do whatever they want with it (improve their caption generation models ... or sell it to someone?).


> It's hard to understand what they mean.

Working as designed, surely.


A recording of your call can be service-generated data, since their service generated it by recording the call.


After that, section 10.4:

> 10.4 Customer License Grant. You agree to grant and hereby grant Zoom a perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license and all other rights required or necessary to redistribute, publish, import, access, use, store, transmit, review, disclose, preserve, extract, modify, reproduce, share, use, display, copy, distribute, translate, transcribe, create derivative works, and process Customer Content and to perform all acts with respect to the Customer Content: ... (ii) for the purpose of product and service development, marketing, analytics, quality assurance, machine learning, artificial intelligence, ...

I believe this might be the wording the submission references.


There is also a provision for letting them train AI on Customer Content (10.4: machine learning, artificial intelligence, training) so the distinction probably doesn't matter in this case?


"..for any purpose", how is this ever supposed to fly in the EU or UK? Should be opt-in, not in the small print, and entirely optional.


You're both misquoting and misunderstanding. Misquoting in that you clipped out the "to the extent and in the manner permitted under applicable Law". And misunderstanding since the text was talking "service generated data", not about "customer data". That's basically data generated by their system (e.g. debug logs). It's not the data you entered into the system (contact information), the calls you made, the chats you sent, etc.

Also, the linked document is effectively a license for the intellectual property rights. The data protection side of things would be covered by the privacy policy[0]. This all seems pretty standard?

[0] https://explore.zoom.us/en/privacy/


> And misunderstanding since the text was talking "service generated data", not about "customer data".

Isn't that what section 10.4 covers and ultimately grants liberal rights to Zoom?

> 10.4 Customer License Grant. You agree to grant and hereby grant Zoom a perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license and all other rights required or necessary to redistribute, publish, import, access, use, store, transmit, review, disclose, preserve, extract, modify, reproduce, share, use, display, copy, distribute, translate, transcribe, create derivative works, and process Customer Content and to perform all acts with respect to the Customer Content: (i) as may be necessary for Zoom to provide the Services to you, including to support the Services; (ii) for the purpose of product and service development, marketing, analytics, quality assurance, machine learning, artificial intelligence, ...


Yes, but that's not the section that this subthread was about, and the objection about "this can't be legal in the EU and UK" was based on the text quoted for service generated content which is different.

And again, this is about granting an license on the intellectual property. It doesn't create any kind of end-run around the GDPR, and wouldn't e.g. count as consent for GDPR purposes.


I don't think they carved themselves out this permission for the purpose of training an AI on debug logs. For all we know "Zoom compiles Service Generated Data based on Customer Content" may include them compiling an mp4 of your call. That would seem to fall under the part of the definition that says "data that Zoom collects or generates in connection with your or your End Users’ use of the Services or Software"


You're right, I was too quick to judge. Sorry.


Furthermore, as far as I know, the "to the extent and in the manner permitted under applicable Law" part is just a reminder. Laws always have priority over contracts, and any part of a contract that goes against the law can simply be ignored.


Not if at least one of the parties is a government institution, because administrative actions have a presumption of legality, similar to presumption of innocence applied to other entities.


“…for any purpose, to the extent and in the manner permitted under applicable Law


This is not new. These terms were quietly updated on 1st April 2023. Looks like very few people noticed it until now.

https://web.archive.org/web/20230401045359/https://explore.z...
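One way to catch this kind of quiet update is to diff an archived snapshot against the live page. A sketch using the stdlib difflib, with invented stand-in snippets rather than the actual fetched and text-extracted ToS:

```python
import difflib

# Diff two ToS snapshots (e.g. a Wayback Machine capture vs. the live
# page, fetched and text-extracted separately). The lines below are
# invented stand-ins, not the real clause text.
old = [
    "You grant Zoom a license to use Customer Content",
    "as necessary to provide the Services to you.",
]
new = [
    "You grant Zoom a license to use Customer Content",
    "as necessary to provide the Services to you,",
    "and for machine learning and artificial intelligence.",
]

diff = list(difflib.unified_diff(old, new, "2023-03-31", "2023-04-01", lineterm=""))
print("\n".join(diff))
```

Run on a schedule against the terms page, anything in the `+`/`-` lines flags a quiet edit worth reading.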


It's settled, then. I'll move on to using a different video chat service...

They're a dime a dozen. Good job tanking your reputation and business, Zoom!


I'm sure they'll miss your business, but this change will hardly impact their bottom line. Most users will continue to use it, even if they're aware of and are concerned by this, as the cost and inconvenience of switching is too high.


I am really puzzled how they are able to "quietly" update the terms without notifying their users. Everybody was joking about the emails ("We have updated our terms...") raining in from every company when GDPR et al. got introduced. What changed?


Section 15 of the agreement ("MODIFICATIONS TO THIS AGREEMENT") allows for Zoom to unilaterally change the terms without providing notice other than updating them on the website.


In many countries that is illegal... a ToS does not override local laws.


In such jurisdictions, it would be unenforceable, but not illegal. The agreement is executed in California per section 33.3, where it is perfectly legal.


You really ought to read “No Filter” by Sarah Frier. She talks about exactly this, except with Apple and iTunes in 2001. Apple’s biggest change wasn’t “digitizing music”, it was enabling a system that allows arbitrary changes to terms and conditions for services they offered. Apparently if you presented a digital copy of a TOS and users clicked one button, it was legally binding. Other companies caught on and started doing it, and well that’s how Zoom is able to do this - people don’t bother to read what they’re agreeing to so legally it’s the user’s fault if the software does something they don’t like.


Possibly they've done something illegal here. Let's wait and see (or, if you're in the EU, take action and report it to your data protection authority and NOYB).


I thought those emails were a form of protest, like complying in the most annoying way possible just to make a point.


This is very much like the Black Mirror episode Joan Is Awful.

By using modern services we consent to our data, including our likeness, being used in any way the service can extract value from it. User data is such a gold mine that most services should be paying their users instead. Even giving the service away for "free" doesn't come close to making this a fair exchange.

Not to sound pessimistic, but we are already living in a dystopia, and it will only get much, much worse. Governments are way behind in regulating Big Tech, which in most cases they have no desire in since they're in a symbiotic relationship. It's FUBAR.


Enjoyed the Black Mirror reference, and will hopefully add some enjoyable pop-culture cross-linking with https://en.wikipedia.org/wiki/HumancentiPad.

> WHY WON’T IT READ?!


Balaji talks a lot about the state losing power in the future, but I don't think this is how he was envisioning it.


As far as I can tell he's not only pretty sure he'll be part of the class that holds power like this without accountability to any state, he consistently makes manipulative statements which function to move things in that direction.


Hi there - this is Aparna from Zoom, our Chief Operating Officer. Thank you for your care and concern for our customers - we are grateful for the opportunity to double click on how we treat customer content.

To clarify, Zoom customers decide whether to enable generative AI features (recently launched on a free trial basis) and separately whether to share customer content with Zoom for product improvement purposes.

Also, Zoom participants receive an in-meeting notice or a Chat Compose pop-up when these features are enabled through our UI, and they will definitely know their data may be used for product improvement purposes.


Answer unsatisfactory. With the recent T&C update, Zoom committed (business) suicide. A memorable fail story to avoid. Goodbye forever, Zoom.


Thanks for commenting. The issue is not with using AI features, though - it is with the Terms granting you unrestricted and eternal rights to use our conversations to train your AI and potentially disclose our work to your other customers.


Well said. Zoom seems to think we are talking about the terms of service as they pertain to a particular feature, not about the entire set of rights they are claiming moving forward.


I can’t help but notice the distinction between customers _deciding_ and participants being _informed_. Can participants not also decide? Can the decisions not be mutual and decided per-session?

My child uses zoom for school and our family for healthcare - both of those scenarios make us participants. It sounds like we are beholden to the decisions of your customer, the institutions.

I am extremely concerned and intend to initiate discussions and suggest alternatives promptly this week.


The blog post your company just published confirms this:

Account owners or administrators (“customers”) provide consent. Participants receive notice.

Gross and disappointing.


That's not how consent works in the GDPR legal sense. (But maybe that's not something Zoom USA cares about if an insignificant amount of profit comes from EU.)


Remember when Zoom lied about having strong encryption and shared data without permission? https://arstechnica.com/tech-policy/2021/08/zoom-to-pay-85m-...


Thanks for your response, but as you can see in the comments even HN users are confused about this.

Where can we find the ability to 'switch off' any sort of generative AI features or data harvesting?

I ask because the zoom administrative interface is an absolute nightmare that feels more like a bunch of darkpatterns than usable UX. When I asked your customer support team – on this occasion and others – they clearly don't even read the request, let alone provide a sufficient response. I've been going back-and-forth on a related issue with your CSRs for almost two months; they've neither escalated nor solved my problem.

The bottom line is that as a paying customer, you're incentivizing me and others to move to different services – mainly because you seem more entangled in your own bureaucracy and lack of values than in any outside problem.


When you say “Zoom customers decide … whether to share customer content with Zoom”…

Can you elaborate on whether this is opt-out or opt-in? Does a new user who starts to use Zoom today have this turned on by default?

Usually when companies say things like “customers decide” it can gloss over a lot of detail like hidden settings that default to “on” or other potentially misleading / dark patterns.

Given the obvious interest in the finer details being discussed in this thread, and your legal background, it would be good to hear a bit more of a comprehensive response if you can provide it.


Hi there - this is opt in. A new user starting to use Zoom today does not have this turned on by default.


Thanks for participating in the discussion here, it’s helpful.

Clause 10.4 in your terms seems to grant you rights to do pretty much anything with “Customer Content” (including the AI training specifically being talked about).

So I’m still a bit confused because regardless of any opt in mechanism in your product, these usage terms don’t seem to be predicated on the user having done anything to opt in other than ostensibly agreeing to your terms of service?

In other words, as a Zoom user who has deliberately NOT opted in to anything, I still don’t have a lot of confidence in the rights being granted to you via your standard terms over my content.

The wording of the terms imply that you don’t actually need me to opt in for you to have these rights over my data?


Thanks for your question - we have clarified our position in this blog. We do not use video, audio and chat content to train our AI models without customer consent. Please read more here https://blog.zoom.us/zooms-term-service-ai/


It's great that you are engaging and writing about this, many thanks.

While your blog is interesting, it doesn't change the impact of the Terms of Service as currently written. They seem to give you the freedom to train your current and future AI/ML capabilities using any Customer Content (10.4), and your terms apparently have your users warrant that doing so will not infringe any rights (10.6).

Perhaps your terms of use should reflect your current practices rather than leaving scope for those practices to vary without users realising? Will you be changing them following all this feedback?


Following up on this point, we’ve updated our terms of service (in section 10.4) to further confirm that we will not use audio, video, or chat customer content to train our artificial intelligence models without your consent.

You can see that now clearly stated in our blog: https://blog.zoom.us/zooms-term-service-ai/


This addresses concerns about Zoom Video Communications, Inc. itself using e.g. recordings for purposes of training their own AI models. It does not address the potentially much greater risks arising from the company potentially selling access to the collection of zoom recordings to other companies for purposes of training AI models of such other companies. Here’s a somewhat-in-depth analysis: https://zoomai.info/


Thanks for following up, Michael, it is much appreciated. It does leave me (and judging my adjacent comments, also others) with questions, including:

* That wording seems very specific - is there a reason you did not just say "we will not use Customer Input or Customer Content to train our AI" given you have defined those terms? Are you leaving scope for something else (such as uploaded files or presentation content) to still be used?

* Can you also clarify exactly which (and whose) "consent" is applicable here? In meetings between multiple equal parties there may not be any one party with standing to consent for everyone involved. Your blog post seems to assume there can be, but the ToS don't appear to define "consent".


Can there be some easy-to-find screenshots showing how to disable it for anyone who is not a new user?

Your COO’s wording is that a new user will have to opt in. It seems the majority still might need to know where to opt out.


Hello

Thanks for commenting directly.

As we know, ‘do not’ does not mean ‘will not’ in the future.

Also, can screenshots be added clearly outlining all the settings needed to opt out and remain opted out?

As you might know, Zoom sometimes auto-opts users in to new or updated features.


Do you have a public, published trustworthy AI framework that you use to guide your AI projects? Something like https://www.cognilytica.com/trustworthy-ai-workshop/ ? Would be good to see what decisions and processes you follow to guide your AI efforts, how you work with suppliers and partners, consent and disclosure policies, and how you communicate internally and externally.


Because I use Linux, I can't make any local changes until I log into a meeting. For example, I can only change my display name after joining a meeting.

Ignoring the laughable lack of Linux support for a moment... will I need to log into a meeting so that I can open up my settings to opt out of this? If so, this is an unacceptable situation as I need to watch criminal court hearings and do not want to risk violating state law that bans the recording of criminal hearings.


As a voice actor whose sole income is my voice, ANYONE claiming the right to my voice for training AI and speech modeling is 100% unauthorized and unacceptable under any circumstance.


One of Zoom's major faults, possibly stemming from its rate of change, is how many features are buried in the web configuration but absent from the desktop application.

Seamless integration and access between the two is not where it should be.


I certainly do not want my private chats, meetings, nor any non-public company information shared with anyone else. This seems like a massive privacy breach. Where is the opt-out to disable this?


No settings in the iOS app. They do share things with third parties.

Feel free to read through the pages of recently updated policies. I wonder what data is “retained” and where “overseas” among other concerns they state.


How to enable end to end encryption in Zoom: https://support.zoom.us/hc/en-us/articles/360048660871-End-t...

(Presuming of course that their closed source software really E2E encrypts without a backdoor)


Why not simply use a product that doesn't steal your meeting content?


Because Zoom works well for a lot of people, or you have no choice in the matter.


[flagged]


> Why not simply not comment?

Please don't post attacks like this here.


It was a question, similar to the exact retort posted.

Please don't post self-righteous dismissive comments here.


I am sorry but it is not "self-righteous", it is in the guidelines you agree to when you sign up here.

https://news.ycombinator.com/newsguidelines.html


You may want to review those guidelines, and look up the definition of "self-righteous" while on a literacy streak.

>Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.

My comment was just as genuine as his, but wasn't dismissive, as his was. It also assumed good faith, which is ironic given his comment was a rhetorical dismissal.

I legitimately pointed out that he was on the wrong website; a dismissive "why are y'all even doing X, you should do Y" is a hallmark of Stack Overflow.

Similarly, reddit-esque circle-jerk musk-bad/trump-bad comments get a similar reminder that this isn't reddit.

