1. Are you a software engineer?
2. How many "security" tickets have you been assigned in your career?
3. Has your employer ever paid for security training for you? (and I'm not talking about annoying powerpoint websites that teach you how to identify phishing emails)
4. Has your organization ever run a blue team / red team exercise?
5. Who is in charge of APPLICATION SECURITY at your company? (Not network security, or database security, but actual APPLICATION level vulns)
6. Does your organization scan for outdated dependencies? (Do you uncover CVEs in your software on your own, or do you check how bad things are when the news tells you something big happened and might be in your stack?)
7. Are you running a web application, and have you implemented ANY security headers?
8. Did your business unit mandate that "we support all browsers", so they still have you running on TLS v1.1? (who tf knows, or cares, am I right?)
9. Do you use the software you built? (Is your personal information in the database, along with legitimate usage stats, and possibly sensitive information you'd like to protect, or do you just write the code and deploy into the void?)
10. Do you have access to the production systems or database? (Most likely the answer is NO, so you wouldn't know about brute-force attacks, invalid requests, corrupted data, or other anomalies the developers should have their eyes on).
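On question 7, a minimal sketch of what "any security headers" might mean in practice — a baseline set merged into every response. The header values below are illustrative defaults, not a one-size-fits-all policy:

```python
# Commonly recommended HTTP security headers, expressed as a plain dict
# that could be applied in any web framework's response hook.
SECURITY_HEADERS = {
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": "DENY",
    "Content-Security-Policy": "default-src 'self'",
    "Referrer-Policy": "no-referrer",
}

def apply_security_headers(headers):
    """Merge the baseline into a response's header dict, without
    clobbering anything the application already set deliberately."""
    merged = dict(SECURITY_HEADERS)
    merged.update(headers)
    return merged

response_headers = apply_security_headers({"Content-Type": "text/html"})
```

Even this much puts an app ahead of the "ANY security headers?" bar the question sets.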
My diagnosis: the profession of software development has suffered a hostile takeover by product managers, who have pushed engineers out of control of their own domain.
My recommendation: use the least amount of software you can get by with, and assume it's compromised.
I'm in SecEng and was laid off when my company's regional office went under. It's become pretty obvious to me that there's a huge need for security minded engineers, and most applicants companies get for "Application Security Engineer" roles don't have any experience or background as SWEs.
I'm of the opinion that the industry needs to do a few things to help get traction on the problem:
1. Open up career paths that allow SWEs to move from product development to technical security
2. Have companies better advertise their need for skilled technical security talent
3. Have InfoSec teams perform more cross team exercises. I've never met an engineer who wasn't all for participating in a red/blue team exercise. It's a fantastic way to cross-train and raise awareness.
So few organisations have dedicated security people at the time they need it most! Since we're on HN, it is maybe worth looking at startups, and how few early stage ones build with security in mind at the time. In regulated sectors or ones where reputation is key, it seems a no-brainer to have a security minded joint CTO/CSO to ensure the house is in order from the start. I understand the pressure and focus on an MVP, but equally I've seen first hand the cost of undoing (avoidably) bad engineering and security ignorance that had real financial cost. After MVP is validated and there are paying customers, it's really time to get security right, in my view.
I also see little demand for "real" security engineers. Most security roles I see are pretty non-technical, often doing the whole "let's turn security into a generic risk so we can hand it off to a non-expert". I don't really see how you can lead or drive security when you can't pop your way into a typical business webapp yourself, yet most of the security people I know want to learn these skills (but they're too busy firefighting for me to get time to teach them some of the tricks).
I've seen little demand or appetite for bringing together what seems to me to be a real security engineer - experienced technical developer lead, deep and broad security knowledge across the full stack, and the ability to make strategic business decisions (exec skills). Is this a missed opportunity, as the value of it seems clear to me? Or is the demand really just not there for this?
Good luck. I’ll stick to Incident Response and SIEM software.
I'm interviewing in SF with ~8 years of product engineering experience and ~2 years of AppSec/SecEng experience. I'm looking at 8 companies that are all willing to pay well for folks to do that work. Typically in range of ~180 - 220k base salary from what I've seen so far.
On the topic of meaningful change: You're absolutely correct in that it's easy for folks in security to find themselves in places where they identify work that needs to happen without receiving support or authority to make it happen. For aspiring technical security folks, there's a few things you can screen for to avoid companies that will do this to you:
1. Does the company have a formal CISO (Chief Information Security Officer)? If not, move on. CISOs represent security risks and needs to your executives and board members. Without that, you won't see security work get on anybody's road maps.
2. Does the company have an established security program? If not, do they have a roadmap for making one?
3. What is the size of the technical security team compared to the larger engineering organization? There's no bad ratio here, but the smaller the ratio is, the more critical it is to automate as much as possible.
4. What training programs exist within the larger engineering organization? Do they cover security awareness? Technical security? How well is this program executed? A good training program is critical to reducing new work created for security teams that are typically overloaded to begin with.
There's probably more you can look for here, but I find these questions to be reasonable filters.
> There is a huge market for security engineers
If you look on LinkedIn for "Application Security Engineer" jobs in London, UK, you'll find there are not that many; some companies don't even have AppSec engineers.
> Does the company have an established security program? If not, do they have a roadmap for making one?
For a less mature company, or one just starting out, AppSec could be the force that creates and implements a security roadmap, adopting OWASP SAMM or BSIMM.
Furthermore, part of me suspects that the tangible business risk of application security flaws isn't felt until after a breach, when it's far too late to change things. Even then, sometimes the cost of a breach does not justify the expense of building a robust secure software development life-cycle.
I've run into many stupid security mistakes, and continue to do so :(. Even though security can be very hard, as an industry we'd be way better off if more understood at least the basics. Those with interest/expertise can then dive in and find the more tricky things. Those without, can at least understand WHY these things are important.
Cross team working is important, but I think it's also important to try and embed security "people" into the day to day development flow on a regular basis. It ought to help the developers through learning by osmosis.
Definitely think companies need to better signal the demand for deep technical skills in security though - too many seem to actually want generic risk-management experience but no particular technical knowledge. To my mind, anyone in a security role should really be able to break their way into a typical corporate internal webapp in a few minutes, out of their own residual talent, and be able to show stakeholders the importance of their profession. It's amazing how quickly security gets a seat at the table when they show how they can pop the self-service password-reset portal with just a web browser (with appropriate permission of course) and are asking whose password to reset to prove it works...
And what about that PoC that slowly transitioned to being a production system?
A large part of the blame lies in the culture of pressure on releasing fast and often.
I wish. On the contrary, in my career I met more people that moved from security to development than the other way around.
In many organizations the priority is on churning out stuff and security comes later.
This is clearly shown by the small percentage of security people vs. developers.
As a consequence, developers are paid and praised more. Security people end up fighting an endless uphill battle and are sometimes seen as "party poopers" by the developers.
To me, that's a fantastic situation as a job seeker. It gives me the luxury of choice. Knowing some companies don't value or support their security teams means that it's something I can screen for ahead of time. If I fail to screen appropriately and end up somewhere that I'm not happy with, I can just start looking again.
The need for AppSec is there; as far as I'm concerned, the demand is non-existent... even though most red teams could make them cry.
These should be as basic as not storing cleartext passwords.
Most new CS students would know better than to do this, or at least would know they have to properly look it up.
It's sometimes hard to believe major stupid things like this are done accidentally. (But I know very well how they happen accidentally: it starts with some bug somewhere with non-US-ASCII or too-long input or similar, then it's constrained "temporarily" and put on a must-fix list, but that list never gets any priority. Things like this are sadly super common. As long as companies aren't held legally responsible for negligence, this won't ever go away.)
But password complexity, rate limiting, and other security measures are there for a reason, and those who cannot learn from history are doomed to repeat it.
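The "not storing cleartext passwords" basic is a few lines with Python's standard library — salted scrypt on the way in, constant-time comparison on the way out. A real system would use a maintained library (bcrypt/argon2), but even this sketch beats cleartext by miles:

```python
import hashlib
import hmac
import os

def hash_password(password: str):
    """Return (salt, digest); only these get stored, never the password."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode("utf-8"), salt=salt,
                            n=2**14, r=8, p=1, maxmem=2**26)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the digest and compare in constant time."""
    candidate = hashlib.scrypt(password.encode("utf-8"), salt=salt,
                               n=2**14, r=8, p=1, maxmem=2**26)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
```

The cost parameters (`n`, `r`, `p`) here are common illustrative values, not a tuned recommendation.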
You would think that "not storing cleartext passwords" is a universal given today, but the corporate structure controlling software design today actively suppresses anything that isn't a feature. It silos security responsibility to the point where probably nobody was in charge of making sure there was a coherent password policy, that it conformed to any standard, or that it was enforced in production across their multiple running systems.
This is every company today.
One thing that causes them to be prioritized where I work is that if they come up during a yearly review, you can't ship your app. Overriding that "stop-ship" order requires C-level approval. Because without a specific exploit, it might be hard to convince a PM to move those up in the backlog.
Sorry, calling BS on this one. You’re calling out an entire group when, like most things, it’s a subset of the broader group. Good PMs understand that a complete product is not just new fancy features but a combination of Features, tech debt work, security and more. At the end of the day if our customers aren’t secure, we’re in trouble.
I’ve been on both teams. Once watching a PM stand in the midst of my engineering team proclaiming “I don’t give an F about your technical debt, when will my features be done?” And now I lead a product management team. Great companies have Product and Engineering trusting each other and working together. They may disagree from time to time, but they deliver together and work together to balance what gets done.
Nearly every single project I've worked on directly or indirectly has had problems (involving security or otherwise) that could have been fixed by a more patient management.
The best thing to learn as an engineer at any level: "rushing makes messes."
Normally, you can pick one. If you are exceptionally lucky, two. No project is ever all three at once; if anyone thinks so for a given project, that is a sure sign of either delusion or an inadequate ability to judge (after all, "good" is very subjective).
I have more than once stumbled upon some "wtf is this" thing in libraries which seem to be very good/well maintained/etc.
- Setting socket options which are both unnecessary and cause bugs (like setting the non-blocking flag on a socket that is then used as if in blocking mode, without the library having any non-blocking support).
- Not properly clearing secrets while advertising that they do. (I.e. writing zeros without using a volatile write or similar; not super well known, but authors of hashing libs can be expected to know better.)
- Less obvious memory leaks.
- Major logic flaws in the application logic which should easily have been caught by tests, except that the tests didn't really test anything. (Though ironically not security flaws.)
- Libraries pretending to support X but only correctly supporting the common, limited special case of X, while having code for full X support that is buggy and 100% unusable outside that common special case.
- EDIT: Fundamental design flaws in a supposedly state-of-the-art, super fast, super reliable web framework which make it not so fast and not so reliable in many real-world use-cases under load.
It's sometimes really sad.
I'm not sure what the solution is, but it probably involves companies getting more active in the development of all the stuff they depend on, especially when it's not some mega project like Linux or Postgres.
I'm in finance, working on back-end systems used by banks. :-D
>My diagnosis; the profession of software development is a victim of a hostile takeover from product managers, while pushing engineers out of control of their domain.
Yep, that nails it. Software engineers are treated sometimes like children, sometimes even like they're crackbrained, but almost never as the highly educated experts they are (or should be, if there weren't so many botchers around too, which surely contributes to the problem mentioned).
Engineering decisions can often be overridden by management no matter what. The result is the usual "catastrophes" repeating over and over everywhere. But as long as these "catastrophes" (like breaches with exfiltration of user data) mostly "require" only a "pardon" blog post from some executive and don't have any really painful (financial!) consequences, this won't change, I guess. If there are costs, insurance will usually cover them. And you need to have and pay for the insurance anyway, so why care further? Product management doesn't optimize for security. They optimize costs.
Which as such is not wrong! If the incentives were set correctly by the market's provider (the state), the same mechanics could easily lead to improved security. I know many don't like to hear this, but IMHO this industry needs stronger regulation. Without regulation nothing will change, as on a free market the "cheapest" product wins. And it will be (or is, as we can currently see) "cheap" in every dimension. "Worse is better" is a direct result of the fact that the "cheapest" stuff prevails in the long run.
>My recommendation; use the least amount of software you can get by with, and assume it's compromised.
Quite a depressing recommendation and a fatalistic viewpoint. But to be honest, after so many years in software, these could have been my words. And as I see it: without real political will, this conclusion won't change on its own.
I would be much more worried about security if the developers had access to the production environment than if they didn't.
So far, so good.
The issue was that the “sysadmins” knew almost nothing about how anything worked, so the procedure was that the devs would give them commands to type and they would just type them in to a terminal.
Of course, if anything went wrong, or was ambiguous, the dev would need to check on it, and would end up standing next to the “sysadmin”’s desk and just telling them which characters to type. I once had to explain what a pipe character was...
Anyway, the end result was that all of the devs had prod access, they just had a very slow interface to it.
They already have access to staging and preprod; what they do is basically come to me and say "it ran fine on my macOS machine".
There are outliers, some few good people and some random people having no idea what they are doing.
IMHO there should be a random quiz/task/test each day you log in. Something obvious but not trivial, and related to the domain of the system. You'd get 24h of access if you pass, and be denied for 3 hours if you fail. 3 questions in each session...
At least people might learn something out of the requirement...
They also had a person who could see if the dev was stalking an ex-girlfriend or pilfering bank account info. Sounds perfect.
PHB: "They need access to all the systems."
Me: "Well, um, what if one of them got disgruntled and just tossed some malicious JS in one of your many, many front end servers? Given the procedures here, it would be unlikely anyone would stumble across it, but they'd get a good amount of hits before the browsers block the entire site and traffic precipitously drops."
PHB: "Nobody would ever actually do that. Besides: we have to give them access. (looks at me as if he's explaining something to a five year old) We're a devops shop..."
There are still vectors for bad actors of course, but the idea is to firewall those who write the code from those who run it.
It can be pretty hard if your organization was not organized with this in mind in the first place.
It devolves into a bunch of managers saying no ops person can have any write access to git at all and no dev person can have any read access to prod, let alone deploy code, thus throwing up a wall to have stuff thrown over.
Separation of duties is the worst, stupidest, clumsiest control ever but all the auditors and management types love it because it doesn't require them to think.
As The Phoenix Project alludes, IT controls need not be the only control in play, yet the default state for any auditor is "it's all in SOX scope and everything is an IT control, lock it all down", and unless management has a clue and cares, they just do it.
In the process, they contort the CI/CD pipeline in horrible ways to say that yes, they have attained the magical state of separation of duties.
Regarding users caring about security, there's three possibilities:
- Users should care about security but they don't because they're dumb/ignorant.
- Users don't care about security because it's not worth the cost/it doesn't affect them, so they're right in not caring (they have 'nothing to hide').
- Some combination of the two.
In 2020, you are an "engineer" writing "modern" software.
Don't forget the "updates"!
This is a good observation, it summarizes very well my feeling on what is wrong with the industry today.
In the 90s in many (most?) companies where the product was tech, engineering was in charge of engineering decisions. This meant that feature requests had to pass a sanity filter which would discard or transform ideas which would compromise the soundness of the security, stability, maintainability and other core architectural considerations.
Today, engineering is reduced to fungible user-story Jira-ticket implementors with no decision-making power, so the more abstract (to PMs) work such as security gets no attention except as a crisis when it generates bad PR.
(source: working as a security-focused eng/architect since the 90s, on both product engineering and infosec sides.)
I do not agree, but I am biased. I'm a former engineer with a security focus that is now a product manager.
I think a more honest take is that security has never been a priority outside of some specialized use cases/industries and that didn't improve as software development moved from something esoteric to something which is business critical in every industry vertical. Even in industry verticals where security is theoretically a priority and a lot of money is spent on security, most of the "security" people don't actually know anything about security and most of the work is box-checking for compliance audits.
You can only do so much, and if we're being real, security will always compete with other development priorities. The priorities that drive revenue always win in any for-profit enterprise that creates software as a core competency, and the ones that reduce cost always win in any for-profit enterprise where software creation is not the core function. Tech is either a product/revenue driver or a cost center, and in both cases security adds expense, overhead, and time to release timelines, which doesn't pass the PHB smell test.
The other big issue is that security doesn't have strong advocates in most organizations because even the most technical people in most organizations are security illiterate, even in the tech industry vertical. As a SWE at most companies you're a "security genius" if you use a password manager and know how to generate a CSR with OpenSSL or configure Let's Encrypt.
Maybe I'm overly cynical, but I've largely given up on seeing most companies pursue security with the passion and commitment necessary and see policy as the proper way to address these concerns. I applaud things like HITRUST CSF, which is strongly prescriptive and helps drive security in industries where every single company is full of box-checkers who like to buy appliances. I've been fortunate enough to work at companies that take security seriously and appreciate my background and as a product manager I have always considered user security and privacy to be critical and core components of UX in my products. So, I wouldn't blame the PMs, I'd blame the realities of doing business combined with the lack of adequate security literacy across the board in every industry vertical and at every technical role level.
Is there financial pain? Is Zoom losing money over these security issues? Or is paying to fix them going to cost more than the losses they would have offset?
Here's our culprit. There IS financial pain but most of it is externalized onto the customers in unseen ways. What is it worth to someone to compromise a specific Zoom meeting? Add up all of the tangible and intangible losses Zoom customers have suffered from data leakage and there is the real cost in the market.
The price Zoom pays to fix another bug, maybe write a blog post, and have a few accounts closed is a small fraction of this full cost to society.
This is probably the strongest argument for regulation such as GDPR.
So, developers shouldn’t shovel a bunch of third-party advertising SDK’s into their apps?
That’s just crazy talk.
> 16th April – Heard they were working on updated bug bounty program.
> 15th June – Requested update on BB program. No reply.
> 8th July – Asked again if I could submit this for bounty. No reply.
> 29th July – Disclosure.
That's disappointing that Zoom never got back to you regarding the bounty.
> Update edit: A few people have asked me or remarked about the lack of bounty. To be clear, I never actually submitted this bug via their bounty program (but was invited to do so), as was holding out for their new program (see post), and fell down the cracks a bit. Zoom didn’t decide against awarding a bounty – I never submitted for one, and disclosed here instead.
> 15th June – Requested update on BB program. No reply
> 8th July – Asked again if I could submit this for bounty. No reply. (Point of clarity here – the bug is fixed, and they have new issues to deal with so this isn’t exactly a priority for them. I could have chosen to file the bug for a bounty at the time, but didn’t, and wasn’t promised anything if I waited).
If Zoom were serious about their BB program they would have encouraged him to submit it for a bug bounty.
Hope that helps set a bit of a benchmark!
I've reached the point of assuming the odds are stacked so heavily that, from a purely financial perspective, it's not worth the investment just to report an issue let alone find it.
Maximum password length of 10 chars and auto-converting non-ASCII to '?' are both extremely egregious password practices. Why does it not surprise me that Zoom is doing both? I wonder if they also silently truncate passwords longer than 10 chars?
These are absolute basics. Let alone not rate limiting and the laundry list of other terrible (lack of) security practices.
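Avoiding both of those mistakes takes almost no code. A sketch of password intake that never truncates and never mangles non-ASCII (the 1024-byte cap is an illustrative DoS bound, not a usability limit):

```python
MAX_PASSWORD_BYTES = 1024  # illustrative upper bound, for DoS protection only

def normalize_password(password: str) -> bytes:
    """Prepare a password for hashing: keep it whole, keep it UTF-8."""
    data = password.encode("utf-8")   # non-ASCII stays intact, no '?' mangling
    if len(data) > MAX_PASSWORD_BYTES:
        raise ValueError("password too long")  # reject loudly; never truncate
    return data

prepared = normalize_password("pässwörd123")
```

Since the hash (not the password) is what gets stored, there is no storage reason to cap length at 10 or 32 characters in the first place.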
I use 1Password and sometimes when it pastes in it works, sometimes the UI complains the password is longer than 32 characters.
I sent them a screenshot on Twitter, figuring their US support people would see it, but they didn't seem to care that much (got some generic response).
We just shouldn't be using them: https://zalberico.com/essay/2020/06/13/zoom-in-china.html
Is it possible to limit passwords to 10 characters and silently truncate them too?
I disagree. I think it shows that Zoom (at the time this was created) lacked the skill necessary to create a secure platform. But their prompt reaction and subsequent focus on security has given me hope.
> Maximum password length of 10
Increased focus on security?
This isn't fancy stuff. This doesn't require tens of thousands of dollars in code-audits or pentests to come to light. It's literally the absolute basics of password management. There should be no need to "convince a PM".
Rate limiting, not silently truncating passwords, not setting an extremely low and arbitrary maximum on password length... All of this stuff is as basic as hashing a password.
One is an investment and requires convincing a PM or C-Suite. The other two are some of the most basic concepts possible (literally first semester, if not first week of CS) in the design of anything that has to do with a password.
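Rate limiting really is that basic: a fixed-window counter per key is a few lines. This is a sketch — production would use a shared store like Redis and likely add backoff, and all names here are illustrative:

```python
import time

class FixedWindowLimiter:
    """Allow at most max_attempts per key within each time window."""

    def __init__(self, max_attempts, window_seconds):
        self.max_attempts = max_attempts
        self.window = window_seconds
        self._state = {}  # key -> (window_start, attempt_count)

    def allow(self, key):
        now = time.monotonic()
        start, count = self._state.get(key, (now, 0))
        if now - start >= self.window:
            start, count = now, 0     # window expired: reset the counter
        if count >= self.max_attempts:
            return False              # over the limit for this window
        self._state[key] = (start, count + 1)
        return True

# e.g. cap password guesses per meeting ID at 5 per minute
limiter = FixedWindowLimiter(max_attempts=5, window_seconds=60)
results = [limiter.allow("meeting-123") for _ in range(7)]
```

With a limit like this, a 1-million-possibility password space stops being enumerable in minutes.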
There are still ways this can fail: e.g. tech lead on a team full of good but uninformed bootcamp devs with an absentee manager and a domineering PM, run as a democracy when only a minority have (formal or self-taught) CS education. If the PM doesn't like your recommendation they'll get one of the bootcampers to do a crappy job without telling you.
It should have been there from day one. For the protection of their customers, and their own infrastructure. After the string of "zoombombings", it should have been a top priority and received ongoing attention from their CEO until implemented.
When I began using the platform, I assumed the randomly generated meeting numbers were buttressed by adequate account and connection attempt monitoring on their back end to make them "secure enough". After finding reason to suspect otherwise 5 months ago, I contacted Zoom about it twice and never received a response (from what I can tell support is overwhelmed and tickets even for serious issues like security breaches and billing errors can take months to hit human eyes).
The password-in-the-link approach felt to me like security theatre. Yes, it adds value, but really doesn't amount to anything more than a bit of additional URL obfuscation (particularly given the length and character limitations), unless you're distributing passwords separately - which can be onerous for attendees.
Hats off to this researcher for forcing the issue and finally incentivizing the company to work on cleaning up their act. But it makes me worry about where else in their platform they took shortcuts. They've really nailed the "frictionless" part (and I commend them for that) but I'm convinced you can achieve a friendly user experience while still maintaining a basic level of security.
Of course the password shouldn’t be 6 digits...but as long as the URL space is unsearchably massive, say 256 bits, and there’s some basic rate limiting, embedding a “password” or random token in a URL seems an acceptable way to frictionlessly share private content?
Appending a passcode is little different than if they were to just use longer meeting numbers in the first place (but with sometimes-worse entropy e.g. when the user changes it to "123456"). So they bought a few more bits (evidently still not enough to beat the bad guys) at the price of extra user hassle.
If they were more aggressive on the server side, they could probably get away just fine with the smaller, more convenient links and wouldn't need to push users so hard to turn on "frictiony" features like waiting rooms.
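Back-of-the-envelope numbers for the entropy trade-off above (the guess rate is an assumption for illustration, not a measured figure):

```python
import math

# Entropy of the two schemes being compared:
six_digit_bits = math.log2(10 ** 6)   # 6-digit numeric code: ~19.93 bits
token_bits = 256                      # long random URL token: 256 bits

# At an assumed, unthrottled 1,000 guesses per second, the entire
# 6-digit space is exhausted very quickly:
seconds_to_exhaust = (10 ** 6) / 1000
minutes_to_exhaust = seconds_to_exhaust / 60   # under 17 minutes
```

A 256-bit token, by contrast, is unsearchable at any realistic rate — which is why longer random meeting IDs plus server-side limits can beat a short appended passcode.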
Private links work great as long as the team providing them understands the tradeoffs and appropriately mitigates risks. Zoom isn't the first service to be brute-forced, and there are more subtle ways for links to leak (e.g. I think someone's tax returns once wound up on Google after the secret Dropbox URL was passed in a referrer header).
Totally agree that in the long term secret URLs can end up being a risk. They're so easily leaked and URLs themselves often aren't treated securely. I'm sure many GSuite/Dropbox corp admins forbid them. Though for many personal situations IMHO a secret URL is a perfect fit.
Hah, that Dropbox referrer issue you mention brings back memories! I recall the annoying challenges involved in securing that in a way that still let users view raw files/previews in browser without just forcing the content to be downloaded. (And this was in a world before the Referrer-Policy header.)
You can still get links directly to raw files...mostly! Just use the very under-advertised "?raw=1" param on a shared link. For example: https://www.dropbox.com/s/9i4696v9kqewoyw/Screenshot%202020-...
(Does a redirect, and won't work for HTML. And some content like PDFs is served from locked-down temporary URLs, so the referrer is useless, of course.)
A proper secret is something that you can only give consciously, URLs are often shared without intent (via screensharing, screenshots, (often unencrypted) emails, server/proxy logs, etc).
Many services like the ones you quote do allow sharing via URLs for ease of use, but they should also have an option to turn off auth-in-URL style authentication for things that are sensitive.
It's okay for things that have a low value but need to be shared easily, but should not be considered secure.
We're now looking into an enterprise license for Krisp.ai just to remedy this. I am not sure how a trillion dollar company like Microsoft hasn't been able to figure this out yet. Maybe they'll buy a startup like Krisp just to fix it. But hey...at least it's more secure.
In all the calls I've attended this was always the case anyway, and there seemed to be an implicitly understood rule of "keep yourself muted unless you want to say something". Seeing someone's mute indicator turn off was a cue to pause and wait, as it indicated someone wanting to say something.
I do have sympathy for their team who were suddenly getting a wild amount more traffic, and scrutiny. They have scaled fast and kept the platform up and stable, which is impressive.
He kept getting filtered out randomly and we couldn't understand him because of the feature. It was (at the time) turning back on without his knowledge. We eventually got it consistently turned off with one of our zoom admins' help.
We get a lot of background noise from him but his voice is more valuable there.
It would also catch cheats in video games, for example, since those are statistical outliers too.
Is there a name for this kind of program?
I am thinking of using Grafana with Prometheus. I'd love to find a nice resource with good ideas for rules/monitoring to set up.
In CS:GO's case they test it against their existing system, Overwatch (which uses player moderators to detect cheats). With their other big title, Dota, as far as I'm aware it's fully automated.
1 - "GDC 2018: John McDonald (Valve) - Using Deep Learning to Combat Cheating in CSGO" - https://www.youtube.com/watch?v=ObhK8lUfIlc
If they go for a longer alphanumeric password, as it seems they are doing, I am gonna dread having to enter it manually whenever joining a meeting, all because a hypothetical attacker might join in. Might as well switch back to WebEx for usability.
For example, BlackBerry Work (depending on configuration) on a private phone.
For what might probably be a millionth time, what are the best alternatives (preferably free or easily self-hostable or priced low) for occasional calls of the following types:
1. Video calls with some people (say about 10 people max.). The free Jitsi Meet seems good for this.
2. Webinar platform where there are clear distinctions between a presenter and participants, and the presenter chooses what's visible at any point in time (video feed from camera or some file/presentation/screen sharing) and has control over recording the session.
3. Same as #2 but with two presenters on camera (different physical locations) switching back and forth (either as the main view or with the active presenter on the main view and the other in a smaller corner window).
Well, that's unfortunate. I don't have a zoom account and have no interest in having one, but sometimes need to attend meetings I have no control over where they're held.
We also tried self-hosting Jitsi, and while that kind of worked, we had some problems getting everyone to be able to connect to it and send/receive audio. It went on the backburner of things to look into more later.
Zoom has a lot of problems, clearly, but it solved THE most important issue: we can actually communicate successfully with it -- as in right now with minimal additional effort. That's why it won.
A not-so-subtle (and IMHO abusive) effort to get you to use Chrome instead? They've already done that with YouTube's new horrible redesign, it wouldn't surprise me if other Google app-sites were the same.
The state of web browsers is a mess but that's a rant for some other time...
When you drink a soda, minimal additional effort is to throw the can away, but if you think about consequences you'll probably make some additional effort and recycle it.
Some schools and states have banned Zoom. I think New York's educational system is one.
Where Facetime and G Suite (Meet) simply freeze, drop connectivity for 30 seconds at a time, etc... Zoom just chugs along at an extremely low bitrate but where you can still make out the audio and speak and be heard.
Other solutions are awesome when you've got dedicated bandwidth in your office. Zoom is awesome when your local ISP sucks, or the rest of your family is busy using the internet too and there's nothing you can do about it.
250 minutes to crack any password?
The meeting will be over before that happens.
I tested much higher rates for short bursts, and wasn't ever rate limited, but didn't want to risk blowing anything up. However, with a few AWS instances / lambdas it would have been possible to do it in a few mins.
Secondly, and more importantly, I found a variant (mentioned in the article) that allowed me to do this before meetings started, so you could have the password in advance.
Failing that, having some better factor for authentication (known email or number for a given company's Zoom setup) would make it harder to get in simply by guessing a short password.