
I wouldn't say it's not one, just that it's a very minor one. Mixing of privacy contexts is a common problem on the social side of security. If an alert notifying you that you are saving both private messages and public ones when saving locally is warranted (and I think it definitely is), then not doing so is a (minor, most likely) security UI problem (if you accept that actual people and their common behaviors need to be accounted for as threat vectors).
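
As a minimal sketch of the kind of alert I mean (hypothetical names, in Python; nothing here is Zoom's actual save code), the save routine would just check whether the transcript mixes privacy contexts before writing it out:

    # Hypothetical sketch: prompt before saving a chat transcript that
    # mixes public channel messages with private (direct) messages.
    from dataclasses import dataclass
    from typing import Callable, Iterable

    @dataclass
    class Message:
        sender: str
        text: str
        private: bool  # True for direct/private messages

    def save_transcript(messages: Iterable[Message], path: str,
                        confirm: Callable[[str], bool]) -> bool:
        messages = list(messages)
        has_private = any(m.private for m in messages)
        has_public = any(not m.private for m in messages)
        # The warning argued for above: only fire when both privacy
        # contexts would end up in the same local file.
        if has_private and has_public:
            if not confirm("This file will contain both public channel "
                           "messages and your private messages. Save anyway?"):
                return False
        with open(path, "w", encoding="utf-8") as f:
            for m in messages:
                scope = "private" if m.private else "channel"
                f.write(f"[{scope}] {m.sender}: {m.text}\n")
        return True

The point isn't this exact implementation, just that the check is cheap and the warning only appears in the mixed case that actually trips people up.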



> then not doing so is a (minor, most likely) security UI problem (if you accept that actual people and their common behaviors need to be accounted for as threat vectors)

Yes, you want to account for people's actual behavior. This isn't going to rise above the level of "minor" if viewed from a security perspective, because it's a self-only attack -- nobody gets any powers they didn't already have, and Alice is hurting herself, not someone else.

(She might inadvertently hurt Carl, if Carl was sending her messages making fun of Bob, but she was allowed to do that anyway.)

A usability or operational perspective might object to the behavior here more strongly.


You're assuming it's only used for interpersonal things. If Manager A requests a password for some system they are discussing from Manager B in a private chat that the other employees in the channel should not have access to, but then inadvertently exposes it by downloading and sharing the channel chat, that is definitely a security problem. Any private channel of communication that is easily combined with a public channel of communication without warning is a security problem.

It's not about hurting people's feelings, it's about information leakage. And to cut off anyone that says passwords shouldn't be shared in a private chat, that's irrelevant. Good infosec security practices in one place do not preclude criticism of bad practices elsewhere. Security is about layers of protection, so any layer with problems should be noted. If that layer happens to be a third-party application that mixes private and public channels in some instances, and there isn't a warning that this is happening, it deserves to be called out.

Another way to look at it is that any minor information leakage can have a major impact if the information leaked is very important.


> And to cut off anyone that says passwords shouldn't be shared in a private chat, that's irrelevant. Good infosec security practices in one place do not preclude criticism of bad practices elsewhere.

It's not irrelevant. There are phone apps with no other purpose than to publicize your location. If you should happen to be a fugitive, using such an app would be a bad move. Does that make the privacy leak in the app a security problem? No, how could it? If you don't want your location publicized, the answer isn't to remove the only feature from a location-sharing app so you can run NOPs in peace. It's to stop using the app.

Your misuse of a feature that performs exactly as advertised can't justify calling that feature a security problem. The people responsible for the feature don't know how you're using it. The use pattern is the security problem, and it needs to be addressed by people who (1) know what it is, and/or (2) are responsible for it. Zoom fulfills neither criterion.


> It's not irrelevant. ... the answer isn't to remove the only feature from a location-sharing app so you can run NOPs in peace. It's to stop using the app.

That's clearly not the case here. In the context of this specific discussion, about an application marketed to enterprises as secure, the argument that accidentally exposing private information through bad UI is the user's fault for putting private information in a private channel in the first place is irrelevant.

> Your misuse of a feature that performs exactly as advertised can't justify calling that feature a security problem.

That depends on how we classify "exactly as advertised". If the product's overall claims of being secure are easily and often circumvented by accident through poor UI, then that may be a security or privacy problem. If the place it's advertised is not somewhere most users will encounter during normal usage, then how it's advertised is of little consequence.

One extreme end of this would be something hidden deep in the EULA or privacy policy that advertised how this works, and the other extreme end would be an alert every time you use it that explains this. I think one is obviously a problem (to the point that it looks purposeful), and the other obviously isn't, but the only difference between them is where the information is placed or how assured you can be that the user has encountered and hopefully understood it. I think this clearly indicates that the problem is not whether certain behavior is advertised, but how aware users are made of how the application functions.



