[flagged] Zoom private chat text between attendees is actually public (twitter.com/hjhaldanephd)
132 points by manigandham on March 30, 2020 | 32 comments



This headline is SOOO garbage - seriously, what is up with HN and Zoom these days? Just trash headlines.

Edit: In case it is removed, this was the original headline: "Zoom private chat text between attendees is actually public"

From the Zoom docs:

"If you save the chat locally to your computer, it will save any chats that you can see-- those sent directly to you and those sent to everyone in the meeting or webinar.

If you save the chat to the cloud, it will only save chats that were sent to everyone and messages sent while you were cloud recording."

This is TOTALLY different from "all private chat text is public." That claim is 100% false. Private chat text is available to the folks who were participants in the private chat. That would be a much more accurate headline.

Now if the recipient of a chat first downloads it to their computer (Zoom auto-scrubs private messages from cloud saves) and THEN uploads it to the cloud, then whoever has access to it can see it. But this takes a number of steps: download your chat, then publish your chat publicly. The headline is completely wrong.


The headline isn't fully accurate, but if people are confused about – and burned by – this behavior, it is a documentation & design issue in Zoom.

They need clearer warnings that this will happen, or a design which clearly separates the two info-streams of differing sensitivity into separate download steps & filenames.


The headline is inaccurate.

In a lot of businesses this really is not an issue. Many businesses don't want a lot of Bob mockery in private chat. In webinars, people want these questions to go to a supervisor (common question analysis, etc.). Hosts can set appropriate controls (public chat only / host only, etc.).

I hope Zoom continues to cater to people trying to get work done - which they seem to be doing - and not to this hyper-paranoid, totally-misleading-title crowd.

The solution on Hacker News always seems to be to add lots of new steps, warnings, device authentications, encryption keys, blocks on the host doing this or that, etc. I'm already tired of the endless cookie-acceptance popups on the public web.


Ah yes, the "totally misleading title crowd," a well known group of people that everyone can easily identify.

> In a lot of businesses this really is not an issue.

I don't think anyone cares about the business in this situation. Individual privacy and freedoms are more important than shareholder value and micromanager snooping. When you force everyone to engage with their coworkers online, you need to find a way to preserve basic human rights -- like two individuals having a private conversation. Tech that obfuscates that, or exposes the contents of private conversations, should be called out for doing so.

Yes, we want business discussions to be open. But at the same time you have to remember that it's humans that conduct the business (as little as VCs may like that, because they're so hard to control and pacify), and that those humans have human needs and emotions, and will have private conversations wherever conversations can be had.


The use case Zoom is targeting is not encrypted private conversation focused on user privacy.

It is conferencing -> generally public / group meetings.

The permissions are VERY loose by default - anyone can join your personal meeting room if you don't set a password, since the room is just a simple set of digits after a URL. People can join with NO account, no login. In my own experience trying to lock things down more: people don't like it, they WANT this free and open approach.

If you want private conversations, lots of apps out there support that. But many apps targeting business have a corporate / compliance / discovery API option. For example, Slack is very popular, and admins can get corporate export turned on, or the discovery API, which allows the company to export your PRIVATE direct-message conversations. So if you think Zoom letting a user see their OWN messages is bad, Slack (a popular system among programmers) lets admins get everything you ever typed.

The ones I hate the most are the SSL middleware boxes that decrypt everything.


HN commenters didn't require cookie warnings: the EU & California governments did.

Somewhat ironic for someone taking so much care to maintain an anonymous 'thoraway' account to give so little weight to other users' desire for robust privacy - privacy that doesn't collapse via confusing docs/interfaces & transient accidents.


My point is that companies / policies shaped by misleading headlines are often user-unfriendly.

If you think privacy is enhanced by all these cookie warnings (when YOU can control / clear / etc. your cookies), you are probably mistaken.

If Zoom goes to fully E2E-encrypted chat (this means onboarding is FAR more complicated than clicking a totally simple URL with some digits) they will likely lose a lot of their market. If they go to E2E-encrypted chat and block users from downloading their own chat, even more market share is lost.


Nobody but bureaucrats thinks the cookie warnings are any good. I advocate self-help, like using the Brave browser.

But the issue here is not your nonsensical digression into cookie warnings. It is: Zoom's docs & UI are confusing ordinary, mass users into inadvertently revealing private chats. Zoom should fix that.

I believe a simple baseline fix would be: offer 2 downloads, one clearly marked/named as "shared transcript - what all participants saw", another clearly marked/named as "personal transcript - what you saw".
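
To make that concrete, here is a minimal sketch of the split in Python. The message format is hypothetical - the time/sender/recipient fields and the "everyone" marker are my inventions, not Zoom's actual export schema:

    # Minimal sketch of the proposed two-file download split.
    # The message format below is hypothetical, not Zoom's export schema.
    messages = [
        {"time": "10:01", "sender": "alice", "recipient": "everyone", "text": "hi all"},
        {"time": "10:02", "sender": "carl", "recipient": "alice", "text": "nice hair, bob"},
    ]

    def split_transcripts(messages):
        """Split a saved chat into shared (seen by all) and personal (1:1) parts."""
        shared, personal = [], []
        for msg in messages:
            line = "[%s] %s: %s" % (msg["time"], msg["sender"], msg["text"])
            # Messages addressed to everyone go in the shared transcript;
            # everything else is a private 1:1 message.
            (shared if msg["recipient"] == "everyone" else personal).append(line)
        return shared, personal

    shared, personal = split_transcripts(messages)
    with open("shared_transcript.txt", "w") as f:
        f.write("\n".join(shared))    # what all participants saw
    with open("personal_transcript.txt", "w") as f:
        f.write("\n".join(personal))  # what only you saw

Two distinct filenames make it much harder to post the private half by accident.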


That's a great feature request - but the current zoom feature is simply not "private chat text is actually public".


It's not clear to me from this thread whether those messages only become public if one of the parties to the messages then downloads the chat, or even when a third party downloads the chat. One seems more serious than the other.


I don't use Zoom, but it looks like someone verified that this only involves private chats with the host:

> IMPORTANT: I can only find evidence that Zoom by default creates a log of private messages WITH THE HOST. This happens regardless of whether the session is recorded. It isn't a good design decision from a privacy perspective. But it's not the same as recording *all* messages.

https://twitter.com/rcalo/status/1244404664260411392

It's not super clear what a "host" is in Zoom parlance. Is it the meeting creator, or do they mean the host as in the person you were chatting with?


AFAIK, Zoom meetings have a concept of "host", which is typically the person who creates the meeting, but I believe the role can be passed around (host can select another participant and make them a host).

Hosts are privileged to have control over the meeting, e.g. can toggle recording or shut it down (so everyone gets disconnected and sees "was ended by host" or something like that), etc.


One person in the meeting can be (re)assigned the host role, which allows them to end, record, and let people into the meeting. Kind of like a meeting admin.


The consensus appears to be that if Alice is talking "privately" to Charlie about Bob's stupid hair, and Alice is the person designated to save the minutes of the meeting and post them, then she may accidentally include the transcript of her Bob-mockery.


Correct. When a given user downloads chat history, it downloads everything they can see: public chat and their own 1:1 conversations. Zoom has plenty of security and privacy issues, but this is not one of them [1].

[1] https://support.zoom.us/hc/en-us/articles/115004792763-Savin...
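
To spell out the rule the support article describes, here is a rough model in Python - the field names are hypothetical, and the logic just paraphrases the quoted docs:

    # Rough model of the documented save behavior; field names are hypothetical.
    def included_in_local_save(msg, user):
        # A local save keeps everything the user could see in the meeting:
        # messages to everyone, plus 1:1 messages they sent or received.
        return (msg["recipient"] == "everyone"
                or msg["sender"] == user
                or msg["recipient"] == user)

    def included_in_cloud_save(msg):
        # A cloud save keeps only messages that were sent to everyone.
        return msg["recipient"] == "everyone"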


I wouldn't say it's not one, just that it's a very minor one. Mixing of privacy contexts is a common problem on the social side of security. If an alert notifying you that you are saving both private messages and public ones when saving locally is warranted (and I think it definitely is), then not showing one is a (minor, most likely) security UI problem (if you accept that actual people and their common behaviors need to be accounted for as threat vectors).


> then not showing one is a (minor, most likely) security UI problem (if you accept that actual people and their common behaviors need to be accounted for as threat vectors)

Yes, you want to account for people's actual behavior. This isn't going to rise above the level of "minor" if viewed from a security perspective, because it's a self-only attack -- nobody gets any powers they didn't already have, and Alice is hurting herself, not someone else.

(She might inadvertently hurt Carl, if Carl was sending her messages making fun of Bob, but she was allowed to do that anyway.)

A usability or operational perspective might object to the behavior here more strongly.


You're assuming it's only used for interpersonal things. If Manager A requests from Manager B, in a private chat, a password for some system they are discussing - one which other employees in the channel should not have access to - and then inadvertently exposes it after downloading and sharing the channel chat, that is definitely a security problem. Any private channel of communication that is easily combined with a public channel of communication without warning is a security problem.

It's not about hurting people's feelings, it's about information leakage. And to cut off anyone who says passwords shouldn't be shared in a private chat: that's irrelevant. Good infosec practices in one place do not preclude criticism of bad practices elsewhere. Security is about layers of protection, so any layer with problems should be noted. If that layer happens to be a third-party application that mixes private and public channels in some instances, without warning that this is happening, it deserves to be called out.

Another way to look at it is that any minor information leakage can have a major impact if the information leaked is very important.


> And to cut off anyone who says passwords shouldn't be shared in a private chat: that's irrelevant. Good infosec practices in one place do not preclude criticism of bad practices elsewhere.

It's not irrelevant. There are phone apps with no other purpose than to publicize your location. If you should happen to be a fugitive, using such an app would be a bad move. Does that make the privacy leak in the app a security problem? No, how could it? If you don't want your location publicized, the answer isn't to remove the only feature from a location-sharing app so you can run NOPs in peace. It's to stop using the app.

Your misuse of a feature that performs exactly as advertised can't justify calling that feature a security problem. The people responsible for the feature don't know how you're using it. The use pattern is the security problem, and it needs to be addressed by people who (1) know what it is, and/or (2) are responsible for it. Zoom fulfills neither criterion.


> It's not irrelevant. ... the answer isn't to remove the only feature from a location-sharing app so you can run NOPs in peace. It's to stop using the app.

That's clearly not the case here. In the context of this specific discussion - an application marketed to enterprises as secure - the argument that accidentally sharing private information through bad UI is the user's problem for putting private information in that private channel is irrelevant.

> Your misuse of a feature that performs exactly as advertised can't justify calling that feature a security problem.

That depends on how we classify "exactly as advertised". If the product's overall claims of being secure are easily and often circumvented by accident through poor UI, then that may be a security or privacy problem. If the place it's advertised is not somewhere most users will encounter during normal usage, then how it's advertised is of little consequence.

One extreme end of this would be something hidden deep in the EULA or privacy policy that advertised how this works, and the other extreme end would be an alert every time you use it that explains this. I think one is obviously a problem (to the point that it looks purposeful), and the other obviously isn't, but the only difference between them is where the information is placed or how assured you can be that the user has encountered and hopefully understood it. I think this clearly indicates that the problem is not whether certain behavior is advertised, but how aware users are made of how the application functions.


Oh, that's it? Am I missing something here, or is the anti-Zoom news of late blowing a whole lot of things out of proportion?

I was under the mistaken impression that this would include one-on-one chats between the _non-host_ members of the meeting.


"she may accidentally include the transcript of her Bob-mockery"

According to my reading, she will by default include everything she typed and any messages sent to her. That's a horrible default if true.


The host can allow or disallow Bob mockery on business calls. Many businesses just allow public chat. So the host has to allow private chat, then someone chatting with someone else has to download their chat and publish it.

A fair number of use cases benefit from being able to see private chats - in webinar replays it's nice to see the questions folks are asking, but the host of a large webinar usually doesn't want to chat-spam everyone by allowing 300 people to post to everyone. So they can set chat to host-only, download it, then edit it and send it to whoever cares what questions are getting asked.


They are public from the beginning: they do not appear in the live stream, but they are available in the downloaded record.


My comment asks the question: "Does the person downloading the log change the outcome?"


Personal anecdote: Use Slack for DMs.


Slack DMs aren't guaranteed to be private from server admins


matrix/riot.im?


Zoom is a wall-to-wall security disaster. When my company switched to it about a year ago one of my colleagues showed me how to force any Zoom room, anywhere on the planet, in any organization, to join our meeting. I just don't think it was architected for privacy and security, and you can't add those things later, so you should be prepared for a years-long trickle of this kind of news.


Out of curiosity: Could you be more specific on the issue and how it works?



Thank you.



