Zoom Security Exploit: Cracking private meeting passwords (tomanthony.co.uk)
419 points by TomAnthony 16 days ago | hide | past | favorite | 158 comments



I believe Zoom's continued struggle represents the state of software development in 2020.

1. Are you a software engineer?

2. How many "security" tickets have you been assigned in your career?

3. Has your employer ever paid for security training for you? (and I'm not talking about annoying powerpoint websites that teach you how to identify phishing emails)

4. Has your organization ever run a blue team / red team exercise?

5. Who is in charge of APPLICATION SECURITY at your company? (Not network security, or database security, but actual APPLICATION level vulns)

6. Does your organization scan for outdated dependencies? (Do you uncover CVEs in your software on your own, or do you check how bad things are when the news tells you something big happened and might be in your stack?)

7. Are you running a web application, and have you implemented ANY security headers?

8. Did your business unit mandate that "we support all browsers", so they still have you running on TLS v1.1? (who tf knows, or cares, am I right?)

9. Do you use the software you built? (Is your personal information in the database, along with legitimate usage stats, and possibly sensitive information you'd like to protect, or do you just write the code and deploy into the void?)

10. Do you have access to the production systems or database? (Most likely the answer is NO, so you wouldn't know about brute-force attacks, invalid requests, corrupted data, or other anomalies the developers should have their eyes on).
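Question 7 above is one of the cheapest to act on. As a minimal sketch (header names are real, but the values are illustrative defaults, not a complete policy), baseline security headers can be bolted on with a stdlib-only WSGI middleware:

```python
def security_headers(app):
    """WSGI middleware that adds baseline security headers to every response."""
    extra = [
        ("X-Content-Type-Options", "nosniff"),
        ("X-Frame-Options", "DENY"),
        ("Strict-Transport-Security", "max-age=63072000; includeSubDomains"),
        ("Content-Security-Policy", "default-src 'self'"),
    ]

    def wrapped(environ, start_response):
        def sr(status, headers, exc_info=None):
            present = {name.lower() for name, _ in headers}
            # Append only headers the app didn't already set itself.
            merged = headers + [h for h in extra if h[0].lower() not in present]
            return start_response(status, merged, exc_info)
        return app(environ, sr)

    return wrapped


# Demo: wrap a trivial app and capture the headers it would send.
def demo_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

sent = []
security_headers(demo_app)({}, lambda s, h, e=None: sent.extend(h))
```

Real values (especially CSP) need tuning per application, and a reverse proxy is often the better place to set them, but the point stands: this is an afternoon of work, not a project.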

My diagnosis; the profession of software development is a victim of a hostile takeover from product managers, while pushing engineers out of control of their domain.

My recommendation; use the least amount of software you can get by with, and assume it's compromised.


You touch on another important issue I don't see a lot of discussion about: There is a HUGE demand for application security engineers, or more broadly, security folks with software engineering backgrounds.

I'm in SecEng and was laid off when my company's regional office went under. It's become pretty obvious to me that there's a huge need for security minded engineers, and most applicants companies get for "Application Security Engineer" roles don't have any experience or background as SWEs.

I'm of the opinion that the industry needs to do a few things to help get traction on the problem:

1. Open up career paths that allow SWEs to move from product development to technical security.

2. Have companies better advertise their need for skilled technical security talent.

3. Have InfoSec teams perform more cross-team exercises. I've never met an engineer who wasn't all for participating in a red/blue team exercise. It's a fantastic way to cross-train and raise awareness.


From my experience; there is a huge NEED for app sec engineers, but the DEMAND isn't really there. Even if you find an organization that DEMANDs a security engineer (rare); that professional has an uphill battle to be granted the resources, time, and power to make meaningful change.


I'll second this.

So few organisations have dedicated security people at the time they need it most! Since we're on HN, it is maybe worth looking at startups, and how few early-stage ones build with security in mind from day one. In regulated sectors, or ones where reputation is key, it seems a no-brainer to have a security-minded joint CTO/CSO to ensure the house is in order from the start. I understand the pressure and focus on an MVP, but equally I've seen first-hand the cost of undoing (avoidably) bad engineering and security ignorance that had real financial cost. Once the MVP is validated and there are paying customers, it's really time to get security right, in my view.

I also see little demand for "real" security engineers. Most security roles I see are pretty non-technical, often doing the whole "let's turn security into a generic risk so we can hand it off to a non-expert". I don't really see how you can lead or drive security when you can't pop your way into a typical business webapp yourself, yet most of the security people I know want to learn these skills (but they're too busy firefighting for me to get time to teach them some of the tricks).

I've seen little demand or appetite for bringing together what seems to me to be a real security engineer - experienced technical developer lead, deep and broad security knowledge across the full stack, and the ability to make strategic business decisions (exec skills). Is this a missed opportunity, as the value of it seems clear to me? Or is the demand really just not there for this?


I second everything you said above and would like to add that another quality a security engineer needs is "human skills". By no means am I saying technical knowledge isn't important, but human skills are equally important, since part of the job is to interact with, teach, and persuade people.


As a security professional, I understand why most startups outsource security as much as they can, so long as the cost meets their needs, until they have a product shipped to market and enough users and reputation to start caring about the risk.

Good luck. I’ll stick to Incident Response and SIEM software.


You make a good point. Let me revise my original statement: There is a huge market for security engineers.

I'm interviewing in SF with ~8 years of product engineering experience and ~2 years of AppSec/SecEng experience. I'm looking at 8 companies that are all willing to pay well for folks to do that work - typically in the range of ~$180-220k base salary, from what I've seen so far.

On the topic of meaningful change: you're absolutely correct that it's easy for folks in security to find themselves in places where they identify work that needs to happen without receiving the support or authority to make it happen. For aspiring technical security folks, there are a few things you can screen for to avoid companies that will do this to you:

1. Does the company have a formal CISO (Chief Information Security Officer)? If not, move on. CISOs represent security risks and needs to your executives and board members. Without that, you won't see security work get on anybody's road maps.

2. Does the company have an established security program? If not, do they have a roadmap for making one?

3. What is the size of the technical security team compared to the larger engineering organization? There's no bad ratio here, but the smaller the ratio is, the more critical it is to automate as much as possible.

4. What training programs exist within the larger engineering organization? Do they cover security awareness? Technical security? How well is this program executed? A good training program is critical to reducing new work created for security teams that are typically overloaded to begin with.

There's probably more you can look for here, but I find these questions to be reasonable filters.


I agree with majority of things you mention but wanted to point a couple of things out:

> There is a huge market for security engineers

If you have a look on LinkedIn for "Application Security Engineer" jobs in London, UK, you would find there are not that many; some companies don't even have AppSec Engineers.

> Does the company have an established security program? If not, do they have a roadmap for making one?

For a less mature company, or one just starting out, AppSec could be the force that creates and implements a security roadmap, adopting OWASP SAMM or BSIMM.


This matches what I'm seeing as well--smaller companies looking for AppSec often want to bootstrap a new program.


As someone who is pretty much an app sec engineer, I feel like this rings true.

Furthermore, part of me suspects that the tangible business risk of application security flaws isn't felt until after a breach, when it's far too late to change things. Even then, sometimes the cost of a breach does not justify the expense of building a robust secure software development life-cycle.


That's a great point. Do you believe that the regularity of significant breaches has cheapened the reputational cost of having experienced such a breach? (Which, in turn, makes it less likely that "a robust secure software development life-cycle" will ever be built.)


I think it's worse than cheapening the reputational cost; it has put a concrete ceiling on the financial cost: something like users affected * 2 years of free credit monitoring.


I think you're right, and it's not just security. Nearly every other application vertical (accessibility, performance, localization, etc.) faces the same struggle compared to adding features. If the org doesn't care about it, it doesn't get done. Whoever does do this kind of work needs to think very carefully about how to get their influence to scale.


Absolutely this. Many engineers simply lack security mindedness or secure development training. Similarly, many managers/PMs/etc. have gaps here as well. It's important for them to understand how to ask effective questions and prioritize security work accordingly.

I've run into many stupid security mistakes, and continue to do so :(. Even though security can be very hard, as an industry we'd be way better off if more people understood at least the basics. Those with interest/expertise can then dive in and find the trickier things. Those without can at least understand WHY these things are important.


I think you're right here, and I worry we are overly compartmentalizing security away from engineering. Ideally you should be both. One of the issues I see is that (at least in my experience), security isn't too bad to get up to speed in, if you get the time, but it's pretty time consuming to stay on top of advances.

Cross team working is important, but I think it's also important to try and embed security "people" into the day to day development flow on a regular basis. It ought to help the developers through learning by osmosis.

Definitely think companies need to better signal the demand for deep technical skills in security, though - too many seem to actually want generic risk management experience but no particular technical knowledge. To my mind, anyone in a security role should really be able to break their way into a typical corporate internal webapp in a few minutes, just out of their own residual talent, and be able to show stakeholders the importance of their profession. It's amazing how quickly security gets a seat at the table when they show how they can pop the self-service password reset portal with just a web browser (with appropriate permission, of course) and are asking whose password to reset to show it works...


I think it's even more a matter of being time-consuming to do correctly, even if you know what to do... In a microservice architecture: mTLS everywhere, proper PKI, fine-grained access for all users and services, keeping dependencies minimal, vetting updates, securing the network, firewall rules, keeping security patches up to date... all this takes time, and without strict processes it's very easy to cut corners.

And what about that PoC that slowly transitioned into being a production system?

A large part of the blame lies in the culture of pressure on releasing fast and often.


> There is a HUGE demand for application security engineers, or more broadly, security folks with software engineering backgrounds

I wish. On the contrary, in my career I've met more people who moved from security to development than the other way around.

In many organizations the priority is on churning out stuff, and security comes later.

It is clearly proven by the small percentage of security people vs. developers.

As a consequence, developers are paid and praised more. Security people end up fighting an endless uphill battle and are sometimes seen as "party poopers" by the developers.


As others have mentioned in similar responses, needing security and supporting security don't always go hand in hand. That said, the technical security jobs are out there, pay well, and aren't seeing much in the way of competition.

To me, that's a fantastic situation as a job seeker. It gives me the luxury of choice. Knowing some companies don't value or support their security teams means that it's something I can screen for ahead of time. If I fail to screen appropriately and end up somewhere that I'm not happy with, I can just start looking again.


Where is this demand you speak of? I have only had jobs building features for Incident Response, or being a "DevOps Engineer" who implements secure pipelines, Docker, logging, artifact repo scanning...

The need for AppSec is there; as far as I'm concerned, the demand is nonexistent... even though most red teams could make them cry.


I fully agree with your sentiment, but the specific issues discussed here (having a maximum password length of 10, silently truncating passwords, silently replacing non-ASCII with '?', setting default passwords to 6 numbers, not rate-limiting password attempts) are not things that require a red team / blue team to figure out. They aren't things that require scanning dependencies, auditing source-code, etc.

These should be as basic as not storing cleartext passwords.
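For scale, here's a small sketch of why those choices matter. The `sanitize()` function below is a hypothetical reconstruction of the behaviour described in the article (10-character truncation, non-ASCII replaced with '?'), not Zoom's actual code:

```python
def sanitize(password):
    """Hypothetical reconstruction of the reported behaviour:
    non-ASCII characters become '?', then truncate to 10 chars."""
    ascii_only = "".join(ch if ord(ch) < 128 else "?" for ch in password)
    return ascii_only[:10]

# Two distinct, reasonably strong passphrases collide after sanitizing:
a = sanitize("hunter2-Ω-correct-horse")
b = sanitize("hunter2-Ψ-battery-staple")
assert a == b == "hunter2-?-"

# And a 6-digit numeric default password has only a million possibilities.
# Without rate limiting, even a slow 100 guesses/second exhausts that
# keyspace in under three hours:
keyspace = 10 ** 6
assert keyspace / 100 / 3600 < 3
```

Each of these behaviours alone weakens passwords; combined with no rate limiting, they make the attack in the article practical.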


> having a maximum password length of 10, silently truncating passwords, silently replacing non-ASCII with '?', setting default passwords to 6 numbers, not rate-limiting password attempts

Most new CS students would know better than to do this, or at least would know they have to properly look it up.

It's sometimes hard to believe major stupid things like this are done accidentally. (But I know very well how they happen accidentally: it starts with some bug somewhere around non-US-ASCII or over-long input and similar, then it's constrained "temporarily" and put on a must-fix list, but that list never gets any priority. Things like this are sadly super common. As long as companies aren't legally held responsible for negligence, this won't ever go away.)


I believe it's a company/business-level decision. A 6-to-10-digit numeric password is easy to remember. And not rate-limiting lets older, non-tech-savvy users make as many errors as they want.

But password complexity, rate limiting, and other security measures are there for a reason, and whoever cannot learn from history is doomed to repeat it.


Rate-limiting has nothing to do with the older, non-tech savvy users. You're thinking of maximum failed attempts. Rate-limiting is about preventing bots from spamming an API call.
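To make the distinction concrete, here's a minimal sliding-window rate limiter sketch, stdlib only. (Illustrative: real deployments usually rate-limit at the proxy/gateway with shared state, not in process memory.)

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most max_calls per key within a sliding window of seconds."""

    def __init__(self, max_calls, window_seconds):
        self.max_calls = max_calls
        self.window = window_seconds
        self._hits = defaultdict(deque)  # key -> timestamps of recent calls

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        q = self._hits[key]
        while q and now - q[0] >= self.window:
            q.popleft()  # forget calls that fell out of the window
        if len(q) >= self.max_calls:
            return False  # over the limit: reject without recording
        q.append(now)
        return True
```

A max-failed-attempts policy would instead lock the account after N wrong passwords regardless of speed; this caps request *rate* per client, which is what stops a bot hammering an API endpoint.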


Oh correct, it's as you've said.


I agree, if Zoom isn't capable of handling #2 or #3 on the list, then #4 is irrelevant to them.

You would think that "not storing cleartext passwords" is a universal given today, but the corporate structure controlling software design today actively suppresses anything that isn't a feature. It silos security responsibility to the point where probably nobody was in charge of making sure there was a coherent password policy, that it conformed to any type of standard, or that it was enforced in production across their multiple running systems.

This is every company today.


Agreed. Yes, the state of InfoSec is bad in most companies, but with Zoom it is really abysmal. World class super bad.


Something that might help developers is the use of OWASP ASVS during the requirements-gathering stage, or in general when working on new features.


I'm just imagining each of those individual issues sitting at the bottom of a Jira backlog.

One thing that causes them to be prioritized where I work is that if they come up during a yearly review, you can't ship your app. Overriding that "stop-ship" order requires C-level approval. Because without a specific exploit, it might be hard to convince a PM to move those up in the backlog.


> My diagnosis; the profession of software development is a victim of a hostile takeover from product managers, while pushing engineers out of control of their domain.

Sorry, calling BS on this one. You're calling out an entire group when, like most things, it's a subset of the broader group. Good PMs understand that a complete product is not just new fancy features, but a combination of features, tech-debt work, security, and more. At the end of the day, if our customers aren't secure, we're in trouble.

I've been on both teams. I once watched a PM stand in the midst of my engineering team proclaiming, "I don't give an F about your technical debt, when will my features be done?" And now I lead a product management team. Great companies have Product and Engineering trusting each other and working together. They may disagree from time to time, but they deliver together and work together to balance what gets done.


Yeah I like everything about that comment except for the line about PMs.


Agree.


You're right, and it all stems from impatience and a culture of do-it-later-ism.

Nearly every single project I've worked on directly or indirectly has had problems (involving security or otherwise) that could have been fixed by a more patient management.

The best thing to learn as an engineer at any level: "rushing makes messes."


Like pretty much any other worthwhile endeavor in life, the same rule applies: 1) Good 2) Fast 3) Cheap

Normally, you can pick one. If you are exceptionally lucky, two. No project is ever all three at once; if anyone thinks so for a given project, that is a sure sign of either delusion or an inadequate ability to judge (after all, "good" is very subjective).


11. Do you audit third party libraries for security, malicious intent, and longevity?


I try to do so, though not properly; i.e., I at least skim third-party libraries when viable.

I have more than once stumbled over some "wtf is this" thing in libraries which seem to be very good/well-maintained/etc.

Things included:

- Setting socket options which are both unnecessary and cause bugs (like setting the non-blocking flag on a socket that is used as if in blocking mode, without the library having non-blocking support).

- Not properly clearing secrets while advertising to do so (i.e., writing zeros without using a volatile write or similar; not super well known, but authors of hashing libs can be expected to know better).

- Less obvious memory leaks.

- Major logic flaws in the application logic which should easily have been caught by tests, except that the tests didn't really test anything. (Though ironically not security flaws.)

- Libraries pretending to support X but only correctly supporting the common, limited usage of X, while having code for full X support that is buggy and 100% unusable outside that common special case.

- EDIT: Fundamental design flaws in supposedly state-of-the-art, super fast, super reliable web frameworks, which make them not so fast and not so reliable in many real-world use-cases under load.

- etc.

It's sometimes really sad.
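On the secrets point above: the classic C pitfall is that a plain memset() before free() can be optimized away, which is why explicit_bzero() exists. Translated to Python as an illustrative sketch of the same hygiene issue: immutable str/bytes can't be wiped at all, but a mutable bytearray can at least be overwritten in place:

```python
secret = bytearray(b"hunter2")  # keep secrets in mutable buffers
# ... use the secret ...
for i in range(len(secret)):
    secret[i] = 0  # overwrite in place; no new copy is created
assert bytes(secret) == b"\x00" * 7

# Caveat: this gives no guarantees about other copies (GC'd temporaries,
# swapped pages, interned strings); real secret hygiene needs OS support
# such as mlock or keyrings, which is why "we zero the buffer" claims in
# libraries deserve exactly the scrutiny described above.
```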


I would love to see public code reviews of open source projects to highlight this kind of stuff, but actually having a community-driven effort requires a central vendor to support it cleanly. GitHub/GitLab: I'm looking at you.


The problem is that so much of what we use is not a community effort but the work of a single person in their free time, unpaid. So you might do a big review of all the things you find weird, and then the maintainer will say "eh, I don't have the time or desire to rewrite all of this". And fair enough; why should they accept all this extra unpaid work?

I'm not sure what the solution is, but it probably involves companies getting more active in the development of all the stuff they depend on, especially when it's not some mega-project like Linux or Postgres.


Sure, I understand your point. My fear is untrusted code executing on my machine. It wasn't till I ran a tcpdump that I realised my terminal (KiTTY) decided to 'phone home' to 'check for updates'. I manage everything through apt, and a terminal by nature executes code, so I view it as high risk and don't really like this behaviour.



11.a: Do you even know if the libraries you are shipping are the official versions?


I can set two check marks.

I'm in finance, working on back-end systems used by banks. :-D

>My diagnosis; the profession of software development is a victim of a hostile takeover from product managers, while pushing engineers out of control of their domain.

Yep, that nails it. Software engineers are treated sometimes like children, sometimes even as if they were crackbrained, but almost never as the highly educated experts they are (or should be, if there weren't so many botchers around too, which surely contributes to the problem mentioned).

Engineering decisions can often be overridden by management, no matter what. The result is the usual "catastrophes" repeating over and over, everywhere. But as long as these "catastrophes" (like breaches with exfiltration of user data) "require" mostly only a "pardon blog-post" from some executive, and don't have any really hurting (financial!) consequences, this won't change, I guess. If there are costs, the insurance will usually cover them. And you need to have and pay for the insurance anyway, so why care further? Product management doesn't optimize for security. They optimize costs.

Which as such is not wrong! If the incentives were set correctly by the market provider (the state), the same mechanics could easily lead to improved security. I know many don't like to hear this, but IMHO this industry needs stronger regulation. Without regulation nothing will change, as on a free market the "cheapest" product wins. And it will be (or is, as we can currently see) "cheap" in every dimension. "Worse is better" is a direct result of the fact that the "cheapest stuff" prevails in the long run.

>My recommendation; use the least amount of software you can get by with, and assume it's compromised.

Quite a depressing recommendation and fatalistic viewpoint. But to be honest, after so many years in software, these could have been my words. And as I see it: without true political will, this conclusion won't change on its own.


Who's your employer? ;)


> (Most likely the answer is NO, so you wouldn't know about brute-force attacks, invalid requests, corrupted data, or other anomalies the developers should have their eyes on).

I would be much more worried about security if the developers had access to the production environment than if they didn't.


I once worked for a fortune 500 that blocked all but a few unix sysadmins from executing shell commands on prod servers.

So far, so good.

The issue was that the “sysadmins” knew almost nothing about how anything worked, so the procedure was that the devs would give them commands to type and they would just type them in to a terminal.

Of course, if anything went wrong, or was ambiguous, the dev would need to check on it, and would end up standing next to the “sysadmin”’s desk and just telling them which characters to type. I once had to explain what a pipe character was...

Anyway, the end result was that all of the devs had prod access, they just had a very slow interface to it.


I'm a systems engineer at a decent company. I wish devs had access to prod systems so they'd have to clean up their own mess / take responsibility for their actions...

They already have access to staging and preprod; what they do is basically come to me and say 'it ran fine on my macOS machine'.

There are outliers: a few good people, and some random people who have no idea what they are doing.

IMHO there should be a random quiz/task/test each day you log in: something obvious but not trivial, related to the domain of the system. You'd get 24h access if you pass, and be denied for 3 hours if you fail. Three questions per session...

At least people might learn something out of the requirement...


> Anyway, the end result was that all of the devs had prod access, they just had a very slow interface to it.

They also had a person who could see if the dev was stalking an ex-girlfriend or pilfering bank account info. Sounds perfect.


Normally, yes, but the people employed as unix admins literally didn't understand what most basic shell commands did, to the point that copying prod data to a remote server would have been completely trivial.


HSBC production support used to be pretty much exactly like that during rollouts (no idea if that has changed in the last 5 years).


Me, consulting at a major media company, paraphrasing: "It's, um, interesting that most of the development staff has ssh to essentially every production system, as well as sudo privileges. And logging isn't super great."

PHB: "They need access to all the systems."

Me: "Well, um, what if one of them got disgruntled and just tossed some malicious JS in one of your many, many front end servers? Given the procedures here, it would be unlikely anyone would stumble across it, but they'd get a good amount of hits before the browsers block the entire site and traffic precipitously drops."

PHB: "Nobody would ever actually do that. Besides: we have to give them access. (looks at me as if he's explaining something to a five year old) We're a devops shop..."


With GDPR, it's not really kosher to give devs access to the production database if it contains data about people.


Honest question: who can keep access to production in the eyes of GDPR?


Operations staff gets access to production machines with Operations being explicitly forbidden from producing code that runs on the systems.

There are still vectors for bad actors of course, but the idea is to firewall those who write the code from those who run it.


The production team can. It has made things very hard to debug. Now the production team has to do most of the debugging work; they have to give us anonymised data that triggers the bug (and you can't turn personal data into anonymous data; it would be pseudonymous at best).

It can be pretty hard if your organization was not organized with this in mind in the first place.


Honest answer, everyone who claims to be GDPR or HIPAA compliant is lying and hopes you never find out.


Is this true? If so, GDPR, like SOX 404, will stink from the perspective of actually getting devs to own their own code.

It devolves into a bunch of managers saying no ops person can have any write access to git at all, and no dev person can have any read access to prod, let alone deploy code, thus throwing up a wall to have stuff thrown over.

Separation of duties is the worst, stupidest, clumsiest control ever but all the auditors and management types love it because it doesn't require them to think.

IT controls are far less often used as the only control, as The Phoenix Project alludes to, and yet the default stance of any auditor is "it's all in SOX scope and everything is an IT control, lock it all down", and unless management has a clue and a care, they just do it.

In the process, they contort the CICD pipeline in horrible ways to say that yes, they have obtained the magical way of separation of duties.


Why is that?


From a purely economic point of view: if the end user doesn't care about security, then why bother having security?

Regarding users caring about security, there are three possibilities:

- Users should care about security but they don't because they're dumb/ignorant.

- Users don't care about security because it's not worth the cost/it doesn't affect them, so they're right in not caring (they have 'nothing to hide').

- Some combination of the two.


Because they don't know that they care. They only will when they find their credit has been stolen, or their identity. Having security upfront is doing the right thing so that the industry doesn't become overly regulated.


You forgot the key word: modern.

%s/software/modern &/g

In 2020, you are an "engineer" writing "modern" software.

Don't forget the "updates"!


> the profession of software development is a victim of a hostile takeover from product managers, while pushing engineers out of control of their domain.

This is a good observation, it summarizes very well my feeling on what is wrong with the industry today.

In the 90s in many (most?) companies where the product was tech, engineering was in charge of engineering decisions. This meant that feature requests had to pass a sanity filter which would discard or transform ideas which would compromise the soundness of the security, stability, maintainability and other core architectural considerations.

Today, engineering is reduced to fungible user story jira ticket implementors with no decision making power so the more abstract (to PMs) work such as security will get no attention except as a crisis when it generates bad PR.

(source: working as a security-focused eng/architect since the 90s, on both product engineering and infosec sides.)


> My diagnosis; the profession of software development is a victim of a hostile takeover from product managers, while pushing engineers out of control of their domain.

I do not agree, but I am biased. I'm a former engineer with a security focus that is now a product manager.

I think a more honest take is that security has never been a priority outside of some specialized use cases/industries and that didn't improve as software development moved from something esoteric to something which is business critical in every industry vertical. Even in industry verticals where security is theoretically a priority and a lot of money is spent on security, most of the "security" people don't actually know anything about security and most of the work is box-checking for compliance audits.

You can only do so much, and if we're being real, security will always compete with other development priorities. The ones which drive revenue always win in any for-profit enterprise that creates software as a core competency, and the ones which reduce cost always win in any for-profit enterprise where software creation is not its core function. Tech is either a product/revenue driver or a cost center, and in both cases security adds additional expense, overhead, and time to release timelines, which doesn't pass the PHB smell test.

The other big issue is that security doesn't have strong advocates in most organizations because even the most technical people in most organizations are security illiterate, even in the tech industry vertical. As a SWE at most companies you're a "security genius" if you use a password manager and know how to generate a CSR with OpenSSL or configure Let's Encrypt.

Maybe I'm overly cynical, but I've largely given up on seeing most companies pursue security with the passion and commitment necessary and see policy as the proper way to address these concerns. I applaud things like HITRUST CSF, which is strongly prescriptive and helps drive security in industries where every single company is full of box-checkers who like to buy appliances. I've been fortunate enough to work at companies that take security seriously and appreciate my background and as a product manager I have always considered user security and privacy to be critical and core components of UX in my products. So, I wouldn't blame the PMs, I'd blame the realities of doing business combined with the lack of adequate security literacy across the board in every industry vertical and at every technical role level.


Product management is at the mercy of the business. Until the potential financial pain of having poor security is well understood at the leadership level they will never change direction. Improved security is the least attractive thing to have on a roadmap. Sales, marketing, and the business pretty much roll their collective eyes at the concept because it gives them nothing. Marketing a product as more secure is begging for someone to prove you wrong, and it is so hard to prove what would have happened if you did not make something more secure.


>financial pain of having poor security is well understood at the leadership level they will never change direction.

Is there financial pain? Is Zoom losing money over these security issues? Or is fixing them going to cost more than the losses it would offset?


"Cost externalizing is a socioeconomic term describing how a business maximizes its profits by off-loading indirect costs and forcing negative effects to a third party. An externalized cost is known to economists as a negative externality."

Here's our culprit. There IS financial pain but most of it is externalized onto the customers in unseen ways. What is it worth to someone to compromise a specific Zoom meeting? Add up all of the tangible and intangible losses Zoom customers have suffered from data leakage and there is the real cost in the market.

The price Zoom pays to fix another bug, maybe write a blog post, and have a few accounts closed is a small fraction of this full cost to society.

This is probably the strongest argument for regulation such as GDPR.


11. Do you think exploits are cool? Do you keep up to date on types of exploits, at least at a high level, just because you want to know about them?


I’m a front end engineer. I go to Defcon, i read CVEs and exploit news. It’s a hobby but I feel like it also helps to bring a different perspective into the application world because my peers don’t seem to care much.


Commenting to save to show my employer.



“ My recommendation; use the least amount of software you can get by with”

So, developers shouldn’t shovel a bunch of third-party advertising SDK’s into their apps?

That’s just crazy talk.

/s


> 9th April – Heard from the Zoom team that this was mitigated.

> 16th April – Heard they were working on updated bug bounty program.

> 15th June – Requested update on BB program. No reply.

> 8th July – Asked again if I could submit this for bounty. No reply.

> 29th July – Disclosure.

That's disappointing that Zoom never got back to you regarding the bounty.


There’s an update regarding this at the bottom of the post:

> Update edit: A few people have asked me or remarked about the lack of bounty. To be clear, I never actually submitted this bug via their bounty program (but was invited to do so), as was holding out for their new program (see post), and fell down the cracks a bit. Zoom didn’t decide against awarding a bounty – I never submitted for one, and disclosed here instead.


He was being courteous, waiting for the updated bounty program, and Zoom ghosted him:

> 16th April – Heard they were working on updated bug bounty program.

> 15th June – Requested update on BB program. No reply

> 8th July – Asked again if I could submit this for bounty. No reply. (Point of clarity here – the bug is fixed, and they have new issues to deal with so this isn’t exactly a priority for them. I could have chosen to file the bug for a bounty at the time, but didn’t, and wasn’t promised anything if I waited).

If Zoom were serious about their BB program they would have encouraged him to submit it for a bug bounty.


Gut check here on Zoom customer service. Has anyone heard from them in the last 2 months? I've been on a pro plan and a 10-seat plan, and sent 2 issues about events to their customer service starting June 3rd. I haven't heard back anything besides the regular "we're moving slow because of COVID-19" template updates. They used to be responsive, and even had a "chat" feature, but now that's disabled.


I am going to raise my hands and say I have heard from them. I logged a support request on March 24th (the day the UK went into lockdown, so I was already expecting a delay). I finally got a response back on July 20th. It is a shame they are slow, as they have been really responsive in the past.

Hope that helps set a bit of a benchmark!


Yep they just replied. 2 month backlog on support requests for paid accounts, ouch.

Appreciate it! 2 months, sounds like I may get a response soon then.


Bug bounties seem to be a complete wild west.

I've reached the point of assuming the odds are stacked so heavily that, from a purely financial perspective, it's not worth the investment just to report an issue let alone find it.


You have to work with good companies that have a track record and clear policies. My last employer was pretty generous and timely AFAICT, including sometimes paying for stuff that was useful but clearly out of scope for the program.


Well that's just going to result in the next hacker going straight to public.


>In other testing, I found that Zoom has a maximum password length of 10 characters, and whilst it accepts non-ASCII characters (such as ü, €, á) it converts them all to ? after you save the password

A maximum password length of 10 chars and auto-converting non-ASCII to '?' are both extremely egregious password practices. Why does it not surprise me that Zoom is doing both? I wonder if they also silently truncate passwords > 10 chars?

These are absolute basics. Let alone not rate limiting and the laundry list of other terrible (lack of) security practices.


They do silently truncate account passwords greater than 32 characters, but what's (arguably?) worse is they only do it in some places and not others.

I use 1Password and sometimes when it pastes in it works, sometimes the UI complains the password is longer than 32 characters.

I sent them a screen shot on Twitter [0] figuring their US support people would see it, but they didn't seem to care that much (got some generic response).

We just shouldn't be using them: https://zalberico.com/essay/2020/06/13/zoom-in-china.html

[0]: https://twitter.com/zachalberico/status/1257910514966908933


Thanks for linking that essay. It's a good read. I especially liked the Sarah/Exec conversation. Will definitely keep this one saved for later.


Do Chinese people not use Chinese characters in their passwords?


Entering Chinese characters requires using an input method engine that turns keyboard input into a list of candidate words from which the user picks the correct one. If you used that method to enter a password, shoulder surfing would be trivial. I think it's usually automatically disabled for password input fields.


Additionally, there are other methods like Zhuyin that some people (typically the older generation, who used computers before contextual dropdowns) use. I believe those keys just map 1:1 with American keyboards, so they would type the key codes for Chinese characters and ASCII is entered into the password field, but correct me if I'm wrong.


Zhuyin is just another way to input Chinese phonetically, so it requires the same feedback mechanism to choose the correct character. You're probably thinking of Cangjie, which was designed to have a unique code for each character, so theoretically it doesn't require feedback but modern implementations seem to have it anyway.


It's never allowed; part of the reason is that you need to install a Chinese IME to begin with.


>I wonder it they also silently truncate passwords > 10 chars?

Is it possible to limit passwords to 10 characters and silently truncate them too?


SANS Incident Response Team has entered the chat


Looks like a MySQL "utf8mb4" issue? :)
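Whatever layer is at fault, the '?' substitution is the classic signature of a lossy charset conversion somewhere in the pipeline: text forced into an encoding that can't represent it, with each casualty replaced by '?'. A tiny Python illustration of the behavior (the example password is made up):

```python
# '?' substitution happens when text is squeezed into a charset
# that can't represent it, e.g. ASCII with errors="replace".
pw = "pässwörd€"
mangled = pw.encode("ascii", errors="replace").decode("ascii")
print(mangled)  # → p?ssw?rd?
```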


Rate limiting login attempts is a basic security principle that's both easy to implement and not overly intrusive. This once again confirms that Zoom just doesn't care about having a secure platform at all.
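For a sense of how little work "easy to implement" really is: a fixed-window limiter is a few lines. A stdlib sketch (the limits and key format are illustrative, not Zoom's; production code would back this with a shared store like Redis rather than process-local memory):

```python
import time
from collections import defaultdict

class FixedWindowRateLimiter:
    """Allow at most `limit` attempts per `window` seconds per key
    (e.g. per client IP + meeting ID). A sketch of the idea only."""

    def __init__(self, limit=10, window=60.0):
        self.limit = limit
        self.window = window
        self.counts = defaultdict(lambda: [0.0, 0])  # key -> [window_start, count]

    def allow(self, key):
        now = time.monotonic()
        start, count = self.counts[key]
        if now - start >= self.window:   # window expired: start a new one
            self.counts[key] = [now, 1]
            return True
        if count < self.limit:           # still under the cap
            self.counts[key][1] = count + 1
            return True
        return False                     # throttled

limiter = FixedWindowRateLimiter(limit=10, window=60.0)
results = [limiter.allow("1.2.3.4:meeting-123") for _ in range(15)]
# first 10 attempts allowed, the remaining 5 rejected
```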


Their success seems to demonstrate that they were right to prioritize functionality over security.


No, it does not. It demonstrates that they were successful at getting a lot of people to install their dangerous software


If you measure success by installs then they were successful.


> This once again confirms that Zoom just doesn't care about having a secure platform at all.

I disagree. I think it shows that Zoom (at the time this was created) lacked the skill necessary to create a secure platform. But their prompt reaction and subsequent focus on security has given me hope.


According to Wikipedia, Zoom was founded in 2011, has 2000+ employees, and had revenue of $600M last year. I somehow doubt that, if they cared, it would be a problem for them to hire a security consultant (internal or external) and perform some pentests, and I believe any professional pentester would find stuff like this AND their previous security mishaps (their definition of "end-to-end encryption", Mac app backdoors, etc...)


The history of computers seems to tell us that people don't care about security until they're compelled to.


> 9th April – Heard from the Zoom team that this was mitigated.

> 16th April – Heard they were working on updated bug bounty program.

> 15th June – Requested update on BB program. No reply.

> 8th July – Asked again if I could submit this for bounty. No reply.

> 29th July – Disclosure.

Prompt?

> Maximum password length of 10

Increased focus on security?


not sure if you've actually read the article


Feel free to elaborate?


I think he meant that they somehow mitigated the problem in 10 days, whereas they haven't paid (they ghosted) the author for months...


I think they've been big enough, long enough, to have a guy or two who could look at the functionality or even the codebase and say: 'hold on a minute, how on earth are we doing this?'


It's pretty freaking hard to convince a PM to care about security. For that matter it's pretty hard to convince most engineers, let alone companies, even after a hack. Imagine yourself talking to the general counsel after an elasticsearch db gets hacked about ethical obligations to make customers whole. Then imagine that GC saying literally "ethics? It's not like we're building bridges here".


If a website stores passwords in cleartext instead of hashes, would you have the same response?

This isn't fancy stuff. This doesn't require tens of thousands of dollars in code-audits or pentests to come to light. It's literally the absolute basics of password management. There should be no need to "convince a PM".

Rate limiting, not silently truncating passwords, not setting an extremely low and arbitrary maximum on password length... All of this stuff is as basic as hashing a password.
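To the "as basic as hashing a password" point: a hash digest is fixed-size regardless of input length, so a correct implementation has no storage reason to cap or truncate passwords at all. A hedged stdlib sketch (parameters follow common PBKDF2 guidance; a real deployment might prefer bcrypt/argon2):

```python
import hashlib, hmac, os

def hash_password(password, salt=None):
    # The digest is 32 bytes no matter how long the password is,
    # so length caps and truncation are never needed for storage.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 600_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 600_000)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

# Long and non-ASCII passwords work fine when you hash instead of truncate:
salt, digest = hash_password("correct horse battery staple ü€á")
print(verify_password("correct horse battery staple ü€á", salt, digest))  # → True
```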


I'm saying I've been in exactly that position in many companies. Spent all of my social capital to get password hashes fixed, or a hacked DB audited, or circuit breakers, or rate limits, alerts, admin and monitoring tools, etc. It's really easy to preach here on HN. Saying it's an uphill battle "out there" is a drastic understatement.


I get where you're coming from, I do these types of engagements often. I just wanted to highlight the difference between "Please spend $25,000 on this pentest engagement" and "Don't set a maximum password length of 10" or "Don't set the default password to be 6 digits".

One is an investment and requires convincing a PM or C-Suite. The other two are some of the most basic concepts possible (literally first semester, if not first week of CS) in the design of anything that has to do with a password.


The other two are some of the most basic concepts possible (literally first semester, if not first week of CS) in the design of anything that has to do with a password.

There are still ways this can fail: e.g. tech lead on a team full of good but uninformed bootcamp devs with an absentee manager and a domineering PM, run as a democracy when only a minority have (formal or self-taught) CS education. If the PM doesn't like your recommendation they'll get one of the bootcampers to do a crappy job without telling you.


That would help a bit, but you could probably still get the hash of the password and infinitely try. I honestly think them capping password at 10 characters is more egregious.


It's unconscionable they still hadn't implemented any sort of rate limiting.

It should have been there from day one. For the protection of their customers, and their own infrastructure. After the string of "zoombombings", it should have been a top priority and received ongoing attention from their CEO until implemented.

When I began using the platform, I assumed the randomly generated meeting numbers were buttressed by adequate account and connection attempt monitoring on their back end to make them "secure enough". After finding reason to suspect otherwise 5 months ago, I contacted Zoom about it twice and never received a response (from what I can tell support is overwhelmed and tickets even for serious issues like security breaches and billing errors can take months to hit human eyes).

The password-in-the-link approach felt to me like security theatre. Yes, it adds value, but really doesn't amount to anything more than a bit of additional URL obfuscation (particularly given the length and character limitations), unless you're distributing passwords separately - which can be onerous for attendees.

Hats off to this researcher for forcing the issue and finally incentivizing the company to work on cleaning up their act. But it makes me worry about where else in their platform they took shortcuts. They've really nailed the "frictionless" part (and I commend them for that) but I'm convinced you can achieve a friendly user experience while still maintaining a basic level of security.


Aren’t secret unguessable URLs used all the time to secure content? See gdocs, Dropbox shared files, etc.

Of course the password shouldn’t be 6 digits...but as long as the URL space is unsearchably massive, say 256 bits, and there’s some basic rate limiting, embedding a “password” or random token in a URL seems an acceptable way to frictionlessly share private content?
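For a sense of scale, generating such a token with Python's stdlib is essentially one line (the domain below is a placeholder, not a real service):

```python
import secrets

# 32 random bytes = 256 bits of entropy, URL-safe base64 encoded (~43 chars).
token = secrets.token_urlsafe(32)
share_url = f"https://example.com/share/{token}"
print(share_url)
```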


Exactly. The Zoom links don't tend to have a large search space by comparison, and AFAIK all the services you mentioned use connection throttling and other mechanisms to detect and stop abuse before it goes rampant.

Appending a passcode is little different than if they were to just use longer meeting numbers in the first place (but with sometimes-worse entropy e.g. when the user changes it to "123456"). So they bought a few more bits (evidently still not enough to beat the bad guys) at the price of extra user hassle.

If they were more aggressive on the server side, they could probably get away just fine with the smaller, more convenient links and wouldn't need to push users so hard to turn on "frictiony" features like waiting rooms.

Private links work great as long as the team providing them understands the tradeoffs and appropriately mitigates risks. Zoom isn't the first service to be brute-forced [1], and there are more subtle ways for links to leak [2] (e.g. I think someone's tax returns once wound up on Google after the secret Dropbox URL was passed in a referrer header).

[1] https://www.theregister.com/2011/05/08/file_hosting_sites_un...

[2] https://softwareengineering.stackexchange.com/a/325821/79139
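Back-of-the-envelope numbers for the search spaces being compared (a sketch; Zoom meeting IDs are roughly 9–11 digits):

```python
import math

def bits(space_size):
    # Entropy in bits = log2 of the number of equally likely values.
    return math.log2(space_size)

print(f"6-digit passcode:     {bits(10**6):5.1f} bits")   # ~19.9
print(f"11-digit meeting ID:  {bits(10**11):5.1f} bits")  # ~36.5, an upper bound
print(f"256-bit URL token:    {bits(2**256):5.1f} bits")
```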


Mainly just trying to say that not all secret URL approaches are security theatre. Though Zoom's approach definitely is!

Totally agree that in the long term secret URLs can end up being a risk. They're so easily leaked and URLs themselves often aren't treated securely. I'm sure many GSuite/Dropbox corp admins forbid them. Though for many personal situations IMHO a secret URL is a perfect fit.

Hah, that Dropbox referrer issue you mention brings back memories! I recall the annoying challenges involved in securing that in a way that still let users view raw files/previews in browser without just forcing the content to be downloaded.(And this was in a world before the Referrer-Policy header)


Hey, you didn't work there by any chance did you? I really loved the old "Public" folder and agree those direct-to-raw-file links were super convenient (when used appropriately of course!)


long ago :-)

You can still get links directly to raw files...mostly! Just use the very under-advertised "?raw=1" param on a shared link. For example: https://www.dropbox.com/s/9i4696v9kqewoyw/Screenshot%202020-...

(does a redirect, and won't work for HTML. And some content like PDFs are served from locked down temporary URLs so that that referrer is useless of course)


No, URLs are not secret or secure, at best they are hidden.

A proper secret is something that you can only give consciously, URLs are often shared without intent (via screensharing, screenshots, (often unencrypted) emails, server/proxy logs, etc).

Many services like the ones you quote do allow sharing via URLs for ease of use but should also have a option to turn off auth-in-url style authentication for things that are sensitive.

It's okay for things that have a low value but need to be shared easily, but should not be considered secure.


My org was forced to switch from Zoom to Microsoft Teams and it's become quite apparent Microsoft has a long way to go to catch up. There are small things Zoom did that enhanced meetings that you never even knew or thought about as a user until switching to something else. For example, noise filtering. Zoom has active noise filtering which gets rid of small background noise (like typing or computer fans). Microsoft Teams does not have this, and every meeting with more than a couple people has unbearable background noise and everyone has to be on mute if they're not talking.

We're now looking into an enterprise license for Krisp.ai just to remedy this. I am not sure how a trillion dollar company like Microsoft hasn't been able to figure this out yet. Maybe they'll buy a startup like Krisp just to fix it. But hey...at least it's more secure.


and everyone has to be on mute if they're not talking.

In all the calls I've attended this was always the case anyway, and there seemed to be an implicitly understood rule of "keep yourself muted unless you want to say something". Seeing someone's mute indicator turn off was a cue to pause and wait, as it indicated someone wanting to say something.


Yeah, Zoom works very well, and is so much better for video calls than most of the alternatives.

I do have sympathy for their team who were suddenly getting a wild amount more traffic, and scrutiny. They have scaled fast and kept the platform up and stable, which is impressive.


On the flipside (just an anecdote, not making any points), we had a hard time turning this noise filtering consistently off for a colleague of ours who speaks without a voice box.

He kept getting filtered out randomly and we couldn't understand him because of the feature. It was (at the time) turning back on without his knowledge. We eventually got it consistently turned off with one of our zoom admins' help.

We get a lot of background noise from him but his voice is more valuable there.


Discord actually just partnered with Krisp.ai to bring the feature to their app too.


Reading the whole story makes me believe Zoom has really poor security practices across the board. Even basic stuff. Incredible.


It seems like one way to mitigate security vulnerabilities is to write software that looks for statistical anomalies. Attempting 1 million passwords in 28 minutes is such an obvious outlier that it's strange we have to guard against it explicitly.

It would also catch cheats in video games, for example, since those are statistical outliers too.

Is there a name for this kind of program?


Yep, it's called novelty and outlier detection. The sklearn page is pretty informative:

https://scikit-learn.org/stable/modules/outlier_detection.ht...
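sklearn's estimators are the full-featured route; for something as extreme as the burst described in the article, even a crude z-score over per-client request rates would fire. A toy stdlib sketch of that cruder idea (the baseline numbers and threshold are illustrative):

```python
import statistics

def is_outlier(value, history, z_threshold=3.0):
    """Flag `value` if it lies more than `z_threshold` standard
    deviations from the mean of `history` (e.g. join attempts per
    minute per IP). A toy illustration, not a production detector."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > z_threshold

# Normal clients make a handful of join attempts per minute...
baseline = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2]
print(is_outlier(3, baseline))      # → False
print(is_outlier(36000, baseline))  # → True: ~1M guesses in 28 minutes
```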


Even a really crude monitoring metric would trigger alerts after a bunch of recurring calls to an endpoint from a specific place. They simply don't care at all, or are too dumb to come up with basic security checks.


Until it's a default in frameworks like Spring Boot and in ELK stack logging systems, in most places it's not happening.


Got any examples of such monitoring metric setup? Would love to get an idea what it's good to monitor :)

I am thinking of using Grafana with Prometheus. Love to find a nice resource with good ideas of rules/monitoring to have


That's exactly what I do (or would) use, and it does plenty for many types of business and scales up well. The specific metrics depend on the rest of your stack, but these days any API gateway, ingress application, service mesh, or proxy app can provide endpoint metrics for you nearly out of the box.
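As a concrete illustration, a Prometheus alerting rule on a hypothetical ingress metric might look like the sketch below (the metric and label names are made up; substitute whatever your gateway actually exports):

```yaml
groups:
  - name: auth-abuse
    rules:
      - alert: PasswordBruteForceSuspected
        # Fires when one client IP sustains > 10 failed auth
        # attempts/second over 5 minutes.
        expr: sum by (client_ip) (rate(http_requests_total{path="/login", status="401"}[5m])) > 10
        for: 5m
        labels:
          severity: page
        annotations:
          summary: "Possible credential brute force from {{ $labels.client_ip }}"
```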


Valve has an interesting talk about this with respect to VAC [1], which, as you suggest, relies heavily on statistical evidence from multiple replays in order to detect cheats.

In CSGO's case they test it against their existing system, Overwatch (which uses player moderators to detect cheats). With their other big title, Dota, as far as I'm aware it's fully automated.

1 - "GDC 2018: John McDonald (Valve) - Using Deep Learning to Combat Cheating in CSGO" - https://www.youtube.com/watch?v=ObhK8lUfIlc


I really hope they just extend the password to 8 uppercase letters (about 200 billion combinations) or 10 digits.

If they go for a longer alphanumeric password, as it seems they are doing, I am going to dread having to enter it manually whenever joining a meeting, all because a hypothetical attacker might join in. Might as well switch back to WebEx for usability.


you're entering passwords manually to enter zoom meetings? I don't think I've ever done that, usually they are included (hashed) in the URL distributed to participants, as the OP mentioned several examples of.


Can't copy/paste from my desktop to my mobile, or the other way around. Can't copy/paste to the video conference system of a meeting room either.


Some environments don’t allow copy and paste.

For example blackberry work (depending on configuration) on a private phone.


I can't stand the thought of using Zoom after all the seemingly endless issues on security and privacy (and now this new issue with not paying a bug bounty).

For what might probably be a millionth time, what are the best alternatives (preferably free or easily self-hostable or priced low) for occasional calls of the following types:

1. Video calls with some people (say about 10 people max.). The free Jitsi Meet seems good for this.

2. Webinar platform where there are clear distinctions between a presenter and participants, and the presenter chooses what's visible at any point in time (video feed from camera or some file/presentation/screen sharing) and has control over recording the session.

3. Same as #2 but with two presenters on camera (different physical locations) switching back and forth (either as the main view or with the active presenter on the main view and the other in a smaller corner window).


Security isn't in Zoom's culture or DNA. It's not how they think, so they'll keep having issues.


as an fyi, csrf protection is not related to bot protection; the csrf protection failure means an attacker can execute this code in another user’s browser (and get no meaningful result)


My employer just switched to Zoom (yesterday was my first zoom meeting) and I wondered why we were switching to a company with such lame security.


> They seem to have mitigated it by both requiring a user logs in to join meetings in the web client

Well, that's unfortunate. I don't have a zoom account and have no interest in having one, but sometimes need to attend meetings I have no control over where they're held.


Why public education continues to use Zoom is beyond me. Not only do they use Zoom, they spend upwards of $10/mo per student for it. For that price you get the entire G Suite platform.


We tried using Google Meet for work meetings at my university. The experience SUCKED compared to Zoom. There were serious problems like taking 30 seconds to a minute for the screen share widget to popup after clicking the button in Firefox! It was, frankly, unusable for us.

We also tried self-hosting Jitsi, and while that kind of worked, we had some problems getting everyone to be able to connect to it and send/receive audio. It went on the backburner of things to look into more later.

Zoom has a lot of problems, clearly, but it solved THE most important issue: we can actually communicate successfully with it -- as in right now with minimal additional effort. That's why it won.


after clicking the button in Firefox

A not-so-subtle (and IMHO abusive) effort to get you to use Chrome instead? They've already done that with YouTube's new horrible redesign, it wouldn't surprise me if other Google app-sites were the same.

The state of web browsers is a mess but that's a rant for some other time...


What sort of problems did you experience with jitsi in terms of the connectivity you mentioned?


It's been a few months since we tried it -- so not fresh in my memory any more -- but as I recall, the biggest issue was with audio not working as expected. e.g. you could see people talking in the video feed, but not everyone could hear the audio. I don't think it was just a muted microphone on the other side; my recollection is that some people could hear the audio and some couldn't, but it wasn't clear why not. I remember having to call our group leader by cellphone because he wasn't seeing/responding to the issue in chat and was just continuing on... We did eventually get it working for almost everyone (I don't remember how though, unfortunately) -- but there were two or three people who just couldn't seem to get it to work no matter what they tried, and we ended up going back to Zoom for the next meeting.


Time to learn that minimal additional effort != best solution.

When you drink a soda, minimal additional effort is to throw the can away, but if you think about consequences you'll probably make some additional effort and recycle it.


Why public education continues to use Zoom is beyond me

Some schools and states have banned Zoom. I think New York's educational system is one.


The simple fact is that Zoom's connectivity is vastly better on low-performing internet, which is critical for students.

Where Facetime and G Suite (Meet) simply freeze, drop connectivity for 30 seconds at a time, etc... Zoom just chugs along at an extremely low bitrate but where you can still make out the audio and speak and be heard.

Other solutions are awesome when you've got dedicated bandwidth in your office. Zoom is awesome when your local ISP sucks, or the rest of your family is busy using the internet too and there's nothing you can do about it.


Reminded me of the time when it was possible to brute-force a Hotmail password via the Windows Messenger client connections.


checked 91k passwords in 25 minutes.

250 minutes to crack any password?

Meeting will be over before this happens.


OP here.

I tested much higher rates for short bursts, and wasn't ever rate limited, but didn't want to risk blowing anything up. However, with a few AWS instances / lambdas it would have been possible to do it in a few mins.

Secondly, and more importantly, I found a variant (mentioned in the article) that allowed me to do this before meetings started, so you could have the password in advance.


On a single machine with 100 threads. But imagine what the NSA (or any other relevant intelligence agency) could do to listen in on some foreign government's meetings, like the UK one in the example. Or what huge corporations could do for corporate espionage.


Don't forget that many recurring meetings reuse the same password.


If you can choose the password, people will also use passwords they use for other things.


I agree, but the point I'm trying to make is that the time to guess the password isn't too big of an issue for passwords that never change. Ideally, Zoom would generate a password for each instance of a meeting.

Failing that, having some better factor for authentication (known email or number for a given company's Zoom setup) would make it harder to get in simply by guessing a short password.


That's on just one computer. Depending on how many servers (read: how much in AWS credits) you have access to, you could parallelise it nearly infinitely.


You missed the part where he said that you could just throw more servers at the bruteforce and drastically cut the time required.


This attack horizontally scales really well, and you don't have to try all 1 million passwords in the average case
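Rough arithmetic using the article's measured single-machine rate (~91k checks in 25 minutes); the worker count below is an arbitrary illustration:

```python
rate = 91_000 / 25        # ≈ 3,640 guesses per minute from one machine
space = 10**6             # 6-digit passcode space

worst_case = space / rate         # full sweep of the space
expected = (space / 2) / rate     # average case: half the space
ten_workers = expected / 10       # trivial horizontal scaling
print(round(worst_case), round(expected), round(ten_workers, 1))  # → 275 137 13.7
```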


It won't necessarily take that long though. That is an upper limit.


I would imagine using a GPU compute instance would bring this time down significantly.


It's network bound, not compute bound. A GPU wouldn't really help in this case.



