Microsoft says 8.5M Windows devices were affected by CrowdStrike outage (techcrunch.com)
86 points by frays 47 days ago | 121 comments



That sounds low... Really low. E.g. NYC has ~350k employees and I know they got hit hard. Not all of them have Windows machines, but let's say 100k do. I know they basically all have Falcon installed. That's 100k in just one org, not even counting their Windows servers. How many Fortune 500s are mainly Windows?

Edit: I did some back-of-napkin math. ~30 million people work for a Fortune 500 company. Let's say 2/3rds of those have a Windows desktop provided by their employer, so ~20M. I think I read CrowdStrike has about ~25% market share, so that's 5 mil just in the Fortune 500. No way it's just 8.5M.
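Roughly, in code (the 2/3 ratio and 25% share are my guesses, not data):

    # Back-of-napkin estimate; every input here is a guess, not a measurement.
    f500_employees = 30_000_000       # rough total Fortune 500 headcount
    windows_desktop_ratio = 2 / 3     # assume ~2/3 get an employer Windows desktop
    crowdstrike_share = 0.25          # assumed CrowdStrike market share

    affected = f500_employees * windows_desktop_ratio * crowdstrike_share
    print(f"{affected / 1e6:.1f}M")   # ~5.0M in the Fortune 500 alone, before servers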


CrowdStrike's pricing is really unapproachable for smaller businesses and they aren't interested in that market. Your math for the F500 may well cover more than half their business.


I thought annual pricing started around $60 per laptop for small businesses. That’s pretty approachable, IMO.


No it's not. Small businesses are used to paying $5-$15 for remote control, reporting, 3rd-party patching, automation, etc. Unless the business has huge margins, you're getting laughed out of the room at $60.


$5-$15 per ~year~ you say?


I've seen three-hundred-user businesses told they won't even quote them a price.


They aborted the update rollout, so not every box that could have been affected was.


Well, what's more important is which devices rather than how many, because many, many consumer home PCs are running Windows, and not all machines are equal. A Windows server running a train station != grandma's PC. So when they say "it's only 1% of the Windows fleet," they're leaving out "it's all of the production machines that are running society."


Windows AND CrowdStrike. I work for a pretty large corporation and the only effects for us were a few third-party applications. We use M365 extensively and didn't see any issues there at all either.


>How many Fortune 500s are mainly Windows?

Front end or back end?

Because the backend hasn't been Windows in most places for a very long while.


You would be surprised. Yes, Linux dominates, but there are many millions, easily billions, of Windows backend servers.


You think there's "easily" the same order of magnitude Windows Server backend machines in the world as there are humans in the world?


I think they're way off with billions.

I do think there are a mountain of AD servers out there though. Not sure I care to quantify exactly how many makes a mountain, but I'd think 2 commas for sure. More than a million? Less than 100 million? That seems like the right ballpark.


And Windows with Hyper-V can also be found in the backend of many shops, with JTL et al. being mostly Windows-focused.


Do they run CrowdStrike though?


22% market share of the corporate anti-virus market, I read.

It's not only big corps, but hospitals, governments, mom-and-pops.

Crowdstrike was baaaaaaad.


My observation was that not 100% of devices with CrowdStrike had an issue and of those that did, about 70% recovered on their own after a few reboots.


How is it possible for some to recover on their own?


Apparently the "virus" definition data (whatever it is called) can be auto-updated while the computer is booting. Not sure if the intention was for it to update before or after CrowdStrike is activated, but there is a race condition where, on some machines and after a lot of attempts, the update executes before CrowdStrike activates.
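If that's the mechanism, the recover-by-rebooting behaviour would look roughly like this toy simulation (the internals and the 30% win rate are pure guesses on my part):

    import random

    # Toy model: each boot, either the updater pulls the fixed channel file before
    # the sensor loads the bad one (machine recovers), or the sensor wins the race
    # and the box bluescreens again.
    def boots_until_recovery(p_update_wins=0.3, max_boots=50):
        for boot in range(1, max_boots + 1):
            if random.random() < p_update_wins:
                return boot        # updater won the race; machine stays up
        return None                # never recovered; needs the manual safe-mode fix

    results = [boots_until_recovery() for _ in range(10_000)]
    recovered = [r for r in results if r is not None]
    print(f"recovered: {len(recovered) / len(results):.0%}, "
          f"median boots: {sorted(recovered)[len(recovered) // 2]}")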


Microsoft is just jealous; it took the focus off their large Azure outage, also on Friday: "Major Microsoft 365 outage caused by Azure configuration change" - https://www.bleepingcomputer.com/news/microsoft/major-micros...

To compensate and keep the focus on them, as masters of all outages... they will take at least until Tuesday (according to their own info...) to fix the current ongoing issue with Teams scheduling: https://portal.office.com/servicestatus


What if I told you the cause was a failure of a dependency running CrowdStrike?


Then I would say they should read their own blog post: "...a reminder of how important it is for all of us across the tech ecosystem to prioritize operating with safe deployment and disaster recovery using the mechanisms that exist..."

https://blogs.microsoft.com/blog/2024/07/20/helping-our-cust...


From your linked piece:

"CrowdStrike has helped us develop a scalable solution that will help Microsoft’s Azure infrastructure accelerate a fix"

What if I told you that such Mag7 speak is not to be trusted, at all, even?


Much of the media has reported the CrowdStrike mess as a Microsoft problem (because it only affected Windows hosts). Even so, this is not a good look for Windows. In no way is Microsoft happy about the CrowdStrike situation.


Can't have an Office outage when you can't boot into Windows.


Another article blaming the upstream vendor and not bothering to put any onus on the horrible security practices of companies allowing auto updating of executable code in production on critical systems.

This is an unacceptable practice. I understand non-tech media not getting it, but this lack of awareness from tech news is sad.


You want your antivirus to be autoupdating.


Not if you are running critical systems and the antivirus is not 100% guaranteed to be safe and running in an isolated environment.

Hospitals should not lose their ability to provide care to sick people just because of a misconfiguration of an antivirus. That is as bad as airplanes crashing because of a lack of redundancy and risk management.


Not on production critical systems where there are human lives at stake. Last Friday is a pretty good example of what comes together with ungoverned ‘autoupdate’.


So let's imagine that it has to be updated manually. A new threat appears, and since manual updates take a while, bad actors can act on it in the meantime, causing a similar or even worse disruption, since it could have a far more severe impact because of the malicious intent.

Would that be better?


"Immediate across the fleet" and "Entirely manual process" are not the only two options. HN rules say we must assume good faith, but there are obviously options in between, and all of them stop the issue that happened on Friday.


What option would you pick if Crowdstrike found a vulnerability that could affect everyone involved?


Your argument is that the 0.01% of cases should dictate the other 99.99%'s actions?

I would pick automated testing and staggered fleet deploys. There's no reason in any enterprise this should take more than 1-2 hours, which is a perfectly acceptable window of risk.
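Something like this is all it takes; the ring sizes and soak times below are illustrative, not anyone's actual policy:

    from datetime import timedelta

    # Illustrative staggered rollout for a 100k-endpoint fleet: canary first, then
    # widening rings, each gated on a health check, all inside a ~2 hour window.
    rings = [
        ("canary",   0.001, timedelta(minutes=15)),   # ~100 internal test boxes
        ("early",    0.01,  timedelta(minutes=15)),
        ("broad",    0.25,  timedelta(minutes=30)),
        ("everyone", 1.0,   timedelta(minutes=60)),
    ]

    fleet_size, elapsed = 100_000, timedelta()
    for name, fraction, soak in rings:
        elapsed += soak
        print(f"{name:>8}: {int(fleet_size * fraction):>6} hosts, "
              f"halt here if the crash rate spikes, cumulative time {elapsed}")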


I'm not fully sure what you mean by 0.01% of cases? Where did you get those percentages?

Businesses are under a constant barrage of cyber attacks, with goals to steal the data, encrypt it, and then blackmail the victim or sell the data. Ransomware payouts exceeded $1 billion last year. And that doesn't include all the damage done beyond the payouts.

Edit: Supposedly the global cost of cybercrime is expected to reach $20 trillion+ by 2027.


How often do you think RCE vulnerabilities are dropping on enterprise machines that already have layered security controls (firewalls, password policy, software install policy, etc)?

I understand cybercrime is real; however, I highly doubt the number of real-time RCE exploits leaked into the wild and executed within 2 hours is > 0.01% of the updates pushed by CrowdStrike.


This would require a deep dive into analyzing the importance of that specific update versus all the other updates they do, at which frequencies, and for which reasons. Two leading causes of ransomware are social engineering and unpatched software, which something like CrowdStrike should be able to secure against.

If there's a new pattern of social engineering/phishing attack, it might be a question of hours to respond and identify those specific patterns. And every minute of delay means more companies and machines will be compromised if there's a mass phishing campaign going on.


If you need to have automatic updates, then you need to apply a risk analysis of what would happen if that system fails.

A typical solution would be to have two machines: one with automatic updates, and a second one without automatic updates that jumps in if the first one breaks down.
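A minimal sketch of that setup, with hypothetical hostnames and a bare TCP health check standing in for whatever monitoring you actually use:

    import socket

    # PRIMARY takes the vendor's auto-updates; STANDBY stays on a frozen known-good
    # image. If the primary stops answering its health port, route to the standby.
    PRIMARY = "primary.example.internal"
    STANDBY = "standby.example.internal"
    HEALTH_PORT = 8080

    def is_alive(host, port=HEALTH_PORT, timeout=2.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    active = PRIMARY if is_alive(PRIMARY) else STANDBY
    print(f"routing to {active}")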


>A typical solution would be to have two machines: one with automatic updates, and a second one without automatic updates that jumps in if the first one breaks down.

Great, now the other one is still vulnerable and hackers can still steal information from it.


The proper solution is a hardened machine build for critical systems: no internet access, USB disabled, attachments blocked in email, etc.

However, that isn't popular, and most orgs would prefer a day of downtime from this type of outage over the hassle and cost of doing it right.


By now I'd expect people, including you, to have a more sophisticated perspective on third-party software.

Yesterday was a catastrophe and you are still stuck with such a naive and simplistic view: you want your antivirus to be auto-updating.


Realistically what is the alternative if you are running servers that could seriously be the target of an attack?

I will give you that I highly doubt a large number of these machines are anywhere near that critical, but some will carry that much risk.

What do you do, just not update to handle new risks? A lot of systems going down is really bad, don't get me wrong. But is it worse than being breached, depending on the data (and other services) those systems may have access to?

To me this is a flaw in CrowdStrike, but also in Windows, that this could happen in the first place, and a serious flaw on CrowdStrike's side that this somehow got out.

And yes, I do acknowledge that much of this is security theatre, but I also would not be surprised if it does sometimes work.


To be clear, you blame CrowdStrike, Windows (??), but not the companies who picked this software, configured it, and wrote their own internal risk policies around a kernel-level piece of software?


Most of the blame here falls on CrowdStrike: both from a software standpoint, in that it can cause a BSOD so easily and can't handle something like this happening, and for whatever process failure let that file get out.

Some, minor, blame falls on Windows due to its ability to BSOD as easily as it does.

As far as the companies go, it is a tricky situation. Many of the companies have CrowdStrike enabled and automatic updates turned on to check some audit box. They have to keep the updates going out regularly.

We are well past the point in tech where a company is solely responsible for its systems; external dependencies are the norm, whether through the shared security model with cloud services like AWS or a reliance on external APIs and servers. You have to trust that the vendor you are working with for whatever critically important system is going to do their job. You could look back and say that maybe you chose the wrong vendor for a specific piece of software, but this could have happened with other vendors.

Something I am not entirely sure of is whether, for those audit, compliance, etc. requirements, an alternative update method is allowed. That would differ by compliance regime, but to the best of my knowledge, for security software, most want you to have automatic updates.

If all of these servers had gone down because of a major AWS outage, would you really be saying the companies are to blame?


> Many of the companies have CrowdStrike enabled and automatic updates turned on to check some audit box. They have to keep the updates going out regularly.

While many companies probably do that, it's usually not required if you can argue for an alternative approach and show how it fits your risk appetite better (e.g. progressive updates on a routine schedule).


> You have to trust that the vendor you are working with for whatever critically important system is going to do their job

This is an absurd take, especially after an outage that took down 911 response centers and hospitals, and has millions of passengers still stranded.

You trust no vendor and assume everything fails all the time.


At some point you have to. You will never control 100% of the system between your servers and whoever or whatever will be interacting with them, or between your servers and whatever other services you have to work with.

There might be smaller parts of your system where you could say this, but only if your system is 100% airgapped, all of the wiring, servers, etc. were put down by you, and you are working on a LAN.

Not many systems fall within that definition. As soon as you use the internet for communication, you are reliant on your ISP working. Maybe you can have a redundant connection, but then you have to assume both of those will do their job and that they don't share a dependency that could bring them both down.

So no, it's not absurd unless you never touch the internet. You have to make decisions about what your system relies on and what it can handle.

I fully understand what this brought down, but again, there are plenty of other instances where you assume an outside company is going to do its job.

Looking back and saying "well, maybe this was a bad idea because it's an external dependency" isn't helpful when we can point to any number of other external dependencies that may not have brought down as many systems but can just as easily bring down critical ones.


I still don't see your point. I am responsible for my systems, not other vendors.

- You need more than one ISP

- You need diverse Operating Systems and Databases

- You deploy in phases with canary releases

- You don't deploy on Fridays....

How difficult can it be?


Let's be clear, this wasn't a new version of CrowdStrike. Admins can control version updates and have a policy of n-1. This was a channel update (similar to antivirus definitions). AFAIK you cannot control channel updates.

This is entirely on CrowdStrike, or perhaps ClownStrike is more appropriate.


> - You need more than one ISP.

I addressed this in my previous response. It is still an external trust, even if you have redundancy.

> - You need diverse Operating Systems and Databases.

I have never seen a company run the same server-side software deployed across multiple different operating systems.

> - You deploy in phases with canary releases.

As I mentioned in a previous post, there are going to be critical enough systems that may be under a serious threat of breach that any wait is not worth the risk.

Also, as I have already mentioned, in many cases automatic updates are turned on for compliance reasons that may not allow what we think is common sense for the vast majority of software.

> - You don't deploy on Fridays....

I agree, but to the best of my knowledge this was essentially a security definition update, not a code update. That is the kind of thing you push out as soon as you have it, otherwise your systems could be vulnerable over the weekend.


> there are going to be critical enough systems that may be under a serious threat of breach that any wait is not worth the risk.

Disagree strongly. You are analyzing risk the wrong way. That is what I call "security by being on the latest patch".

Zero-days occur every day, and many are ongoing right now. Your antivirus vendor or OS vendor needs hours to days to weeks to detect them, understand the attack, come up with a defense, test (hopefully...) the defense patch, and deploy it in phases (hopefully). So you are always many hours to days behind the latest threats before getting such protection.

The core idea here is "critical system".

If the system is critical, its security and robustness need to rely on its security architecture, not on "being on the latest patch". You will always be catching up to new threats.


How is "being on the latest patch" (security definitions) not part of the security architecture? Nowhere am I implying that it is the only part of security.

Also, you are still ignoring that many of these companies do not have a choice due to compliance requirements.

That being said, great, maybe we can avoid this issue. But next time it will instead be: "Well, you run security software X, and when you were breached they had a protection out for this. Why were you not up to date?"

The fact remains that what happened yesterday was an extraordinary situation that I highly doubt anyone seriously considered a real risk, since most people would safely assume that a vendor pushing security updates does basic testing.

Also, you are focusing on security when there are other dependencies that could bring down your system. That is my point here. We are focusing so much on how this one thing should have been done differently, and on how the companies are somehow to blame, when this could have been any number of other things that would not have had as global an impact but could still bring down major systems.


You are completely ignoring the fact that some countries, some airlines, some 911 centers, and many hospitals were not taken down. The reason? The diversity and phased deployments I am arguing for.

> Also, you are still ignoring that many of these companies do not have a choice due to compliance requirements.

They have a choice. They could run their systems properly. You are arguing from compliance... when this incident is a clear demonstration that being compliant has nothing to do with being secure and robust.


Welcome to the new generation of "cybersecurity" experts who just regurgitate buzzwords like "compliance" and "guardrails" in addition to filling out risk matrix spreadsheets.

It's all PaaS/SaaS now; old-school, properly engineered isolated solutions require too much expensive staffing.

I'm waiting for a vendor like Zscaler to be hacked - what could go wrong with having thousands of companies do MITM SSL interception via a single vendor?

That's a nice juicy target for hackers if I ever saw one...


How can antivirus software protect against new threats if it can't auto-update as soon as a new threat appears?


It's a trade-off. That said, we're in an age where companies do 100+ pushes per day. Automate a build, run a test, then deploy rolling updates across the fleet.

The options aren't "everyone auto-updates" or "no updates for weeks"; there's a balance point. It's very clear what choice most critical companies made this week, though.
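The gate between "canary ring got it" and "fleet gets it" can be as dumb as this (thresholds invented for illustration):

    # Promote the update past the canary ring only if the canaries still check in
    # and their crash rate hasn't spiked.
    def promote(canary_total, canaries_reporting, crashes_before, crashes_after):
        reporting_ok = canaries_reporting / canary_total >= 0.99   # they still boot
        crash_ok = crashes_after <= crashes_before * 1.5 + 1       # no crash spike
        return reporting_ok and crash_ok

    print(promote(100, 100, crashes_before=1, crashes_after=1))    # True: roll on
    print(promote(100, 3,   crashes_before=1, crashes_after=90))   # False: halt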


Crazy idea: for critical systems, don't give them blanket internet access, USB, email attachments...


Most places I've seen already do this in addition to running CrowdStrike.


Doesn't match my experience - it's either open slather or properly restricted VDI/Citrix environments.


Maybe, if my antivirus has basic filtering of input values. But in a critical-systems scenario, I want to validate in my testing stage first, or at least run a split rollout so that my entire fleet doesn't shit the bed.


lol, haven’t used AV in a long time. It’s security theater and it’s trivial for a malware dev to get around these programs.

Only stops script kiddies, at best.


>companies allowing auto updating of executable code in production on critical systems.

crowdstrike said the update was a "configuration file".

https://www.crowdstrike.com/blog/technical-details-on-todays...


You're right, but I don't think it changes anything. Third party software bringing anything new in needs to go through a test phase. That could be automated and a simple 30 minute VM boot, regular tasks run, and a longer 24 hour one running in the background whilst you start rolling updates.

Either way, it's unacceptable for critical services to be beholden to the validation of a third party upstream. The companies in question are responsible for that negligent handing off of ownership.
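To be concrete, even a dumb automated gate along these lines would have caught Friday; the labctl commands here are made up, every org's lab tooling differs:

    import subprocess, sys

    # Hypothetical pre-rollout gate: apply the vendor update to a throwaway VM,
    # reboot it, and confirm it comes back and runs a few routine tasks before
    # the update goes anywhere near the real fleet.
    STEPS = [
        ["labctl", "vm", "revert", "edr-canary", "--snapshot", "clean"],
        ["labctl", "vm", "apply-update", "edr-canary", "--channel", "latest"],
        ["labctl", "vm", "reboot", "edr-canary", "--wait", "--timeout", "1800"],
        ["labctl", "vm", "run", "edr-canary", "--", "smoke_tasks.ps1"],
    ]

    for step in STEPS:
        try:
            ok = subprocess.run(step).returncode == 0
        except FileNotFoundError:       # tooling is hypothetical in this sketch
            ok = False
        if not ok:
            sys.exit(f"gate failed at: {' '.join(step)} -- do not roll out")
    print("smoke test passed; start the staged rollout")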


>Third party software bringing anything new in needs to go through a test phase. That could be automated and a simple 30 minute VM boot, regular tasks run, and a longer 24 hour one running in the background whilst you start rolling updates.

I don't think anyone is going to disagree that's engineering best practice and should theoretically be done, but how is microsoft going to enforce this? Do you want to force developers wanting to publish software for windows to undergo annual audits (soc-2 style) to confirm that all the engineering best practices are indeed being followed? Not even Apple is that strict.


Stop signing CrowdStroke's shitty kernel driver? Because that's what made the thing load.

That is also pretty much the Apple way nowadays.


See my previous comment:

"how is microsoft going to enforce this? Do you want to force developers wanting to publish software for windows to undergo annual audits (soc-2 style) to confirm that all the engineering best practices are indeed being followed? Not even Apple is that strict."

Or are you saying that they should ban EDR vendors from installing drivers at all? How are you going to implement the invasive monitoring needed for EDR to work?


I don't think you understand what a kernel driver is. Apple won't sign yours full stop and Microsoft already has a certification program in place.


>Microsoft already has a certification program in place.

And you think CrowdStrike's driver isn't signed? Given all drivers have to be signed for Windows to load them, I highly doubt that's the case. Moreover, I doubt WHQL's testing covers logic bugs. Graphics drivers crash all the time, for instance, and they're definitely WHQL certified. You could inspect the code even harder, but that just goes back to my previous question.


I don't understand why people keep bringing Microsoft into this.

This testing is the responsibility of the company whose computer fleet it is. They have many upstream software vendors - often 10s or 100s - and should be doing this testing every time. You should never rely on the vendor to test (evidently). I'd go so far as to assume all vendor updates are hostile and build your test model against that.

Automated testing of new software in companies with 10k+ desktops (which covers most affected companies here) should be as common as password policies or email attachment policies.

If the vendor implements things in a way that doesn't allow this style of testing, they don't meet security requirements and another should be found.


> the horrible security practices of companies allowing auto updating of executable code in production on critical systems.

That's the irony of the situation. The criticality of the systems (arguably) necessitates real-time updates, otherwise they'd be vulnerable to threat actors.


> The criticality of the systems (arguably) necessitates real-time updates,

This is an oxymoron.


It's contentious. I'm not sure how it's an oxymoron.


What saved my company from this is the policy I've recommended at the last three companies where I've implemented this: N-1.

The first time I ever rolled out Falcon, the sales engineer said, "If you want to be on the latest when it releases, choose this policy. Generally customers like to be one release (N-1) behind. This is the safest option in my experience. We rarely have issues, but this is the way to prevent issues if we do ship something bad."

I've been telling other admins this is the safest option moving forward. I don't see a need for my org to run bleeding-edge releases of newer products. This also applies to OS updates unless it's a zero-day. For major OS releases I wait for the first .1 update. Currently doing this with Ubuntu Desktop 24.04 LTS, as it shipped with features missing from 22.04 and broken autosetup functionality. August brings the first update to 24.04 LTS, and we'll test and determine whether the bugs have been squashed.

I can’t think of any way to always be on the latest upgrade of anything critical. All of these companies were on the bleeding edge release of CrowdStrike and it brought a lot down globally.


N-1 didn't save you, nor did N-2:

https://news.ycombinator.com/item?id=41015038

> "b) Since n, n-1 and n-2 versions of the sensor all died equally spectacularly, that bug has been around for at least three versions of csagent.sys."

There's so much misinformation around this CrowdStrike issue. The change deployed was in what is referred to as a "channel file", which isn't part of the software update mechanism (what you call N-X) but part of the frequent intra-day signature/channel updates it gets (which we have no control over).

CrowdStrike are calling it an unfortunate "logic error", but they and few others are talking about how a binary payload could get released to the public without, seemingly, any pre-release testing of the payload. If the content that was made available to the public had run on a test endpoint, they would have discovered this "logic error" before taking down a huge number of the world's systems simultaneously.


An (N-1) release policy is recommended for any product/software that cannot be independently monitored (i.e., proprietary shit).


I wonder how they came to this number? And how reliable is it? It came very quickly and is a relatively small number. Very convenient for damage management.


Windows has telemetry out the wazoo doesn't it?


But Windows has to boot before it can send telemetry, right?


A device that was sending telemetry data with CrowdStrike installed, and is no longer sending telemetry data after the event time frame, can be considered affected.

Another possible source is CrowdStrike itself, which definitely has the data.
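A guess at the shape of that query (the table layout and names are invented; the timestamp is roughly when the bad channel file went out):

    from datetime import datetime, timezone

    # A Windows device that had CrowdStrike and reported telemetry before the bad
    # channel file went out, but has been silent since, gets counted as affected.
    BAD_PUSH = datetime(2024, 7, 19, 4, 9, tzinfo=timezone.utc)

    def estimate_affected(telemetry_rows):
        # telemetry_rows: iterable of (device_id, has_crowdstrike, seen_at),
        # possibly many check-ins per device
        before, after = set(), set()
        for device_id, has_crowdstrike, seen_at in telemetry_rows:
            if not has_crowdstrike:
                continue
            (after if seen_at > BAD_PUSH else before).add(device_id)
        return before - after          # reported before, silent after

    sample = [
        ("pc-1", True,  datetime(2024, 7, 19, 3, 0, tzinfo=timezone.utc)),
        ("pc-2", True,  datetime(2024, 7, 19, 9, 0, tzinfo=timezone.utc)),
        ("pc-3", False, datetime(2024, 7, 19, 3, 0, tzinfo=timezone.utc)),
    ]
    print(estimate_affected(sample))   # {'pc-1'}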


lack of signal is also a signal


Does IME contact just Intel or also M$?


They asked ChatGPT.


This outage (fuck up) impacted critical workflows. Lawyers should be foaming at the mouth to get a class action lawsuit going if criminal penalties are not applicable.

Hospitals - physicians/doctors/nurses lost access to critical equipment. Patients may have suffered degraded care as well. Reports of this outage impacting active surgeries. Patients forced to reschedule appointments around ClownStrike

Airlines - many flights grounded. Delays, delays, delays. Wasted fuel, time. Loss of revenue due to rescheduled flights, refunding customers. Local airports flooded with grounded flights, increased personnel to deal with it. FAA stressed.

Banks - many people lost access to money. Frustration for people trying to get access to pay bills, or get paid themselves.


Lawsuits against whom? It certainly won't be Micro-Soft or this Crowd Strike or whoever, as their ToS say the software is provided "as-is" without any implied warranty.


Monocultures die fast and without survivors.


I vote we stop putting George Kurtz in charge of things.


He tried to take down the world before...He was given a second chance...I say he will try a third time...

"This is the 2nd time CrowdStrike CEO George Kurtz has been at the center of a global tech failure" - https://www.businessinsider.com/crowdstrike-ceo-george-kurtz...


George Kurtz will just get a golden parachute and be replaced with another clueless MBA.

We need jail time for these executives, otherwise nothing changes.


He should become Adam Neumann's CTO.


His methods are unsound


It seems like he doesn’t learn from postmortems or mistakes. Can’t blame him though, the incentives are aligned so he doesn’t have to


That is why he was only paid $47 million last year. He is an underperformer...


I don't see any method at all, sir.


yes!


So my org (a random medium-sized healthcare system), with ~100k seats, was more than 1% of the devices? I don't buy it.


Off by an order of magnitude for sure.

I've heard of 250k employee companies where people got a snow day off this.


The ClownStrike Paid Holiday


Maybe. I would think most non-business machines don't need the Falcon sensor, and the more critical systems are the ones actually using it. So their "low" number is actually high if you only look at businesses or critical systems.


That is 8.5M out of ~1400M Windows devices in all.

That is about 0.6% of all the MS machines.

As a Linux user, I don't understand the big deal, the effects of this.


The issue wasn't necessarily the number of machines, but which machines were affected. The fact that major airlines were forced to ground planes is an indication this was bad. Several financial companies couldn't trade. Hospitals couldn't function. I was personally unable to pump gas.

As a Linux user, I don't see how you don't see this was a huge deal.


[flagged]


Just for the sake of discussion, as a long time Linux user (but never a Crowdstrike user) -- is there anything about this outage that could only have happened on Windows? If the update had been broken in another way, could it have crashed all Ubuntu machines, or all Macs?

Not excusing what happened in any way, I'm just curious.


MacOS doesn’t allow third party kernel modules any more, so CrowdStrike on Mac can’t cause the same problem. On Linux, the latest versions of CrowdStrike use eBPF while older versions were kernel modules. The older versions definitely could have had the same issue. In theory, use of eBPF should prevent the same problem in the newer Linux version.



Bad software updates can come from anywhere, and it remains bad no matter what the intention is and no matter what operating system it runs on.

When people install bad kernel-level software, they open their systems up to [more] kernel-level flaws and crashes.


Linux doesn't have a BSOD .. is this the equivalent of a kernel panic?


systemd, a software suite providing system components for Linux operating systems, implements a blue screen of death similar to those of Microsoft Windows using a systemd unit called systemd-bsod, introduced in August 2023 and fully added on December 6, 2023 with version 255 of systemd. It does not replace the kernel panic featured within Linux; rather, it is only used in the event of a boot failure.

https://en.wikipedia.org/wiki/Blue_screen_of_death#Linux


It does now: https://www.phoronix.com/news/Linux-DRM-Panic-BSoD-Picture

But yeah, kernel panics or the equivalent happen to ~all OSs.


I did read somewhere yesterday that macOS does not allow third party software to run at kernel level in the same way, thus couldn’t face the same issue.


[flagged]


No.

Software freedom 0 is that you should be free to run the program as you wish, for any purpose.

If that means loading your system up with third-party crapware, that's on you, not them. That doesn't mean Microsoft shouldn't work on making it easy to recover a system, if they can, but preventing you from running software would make the cure worse than the disease.


You're quoting Software freedom 0 when referring to Microsoft? Wow, that's bold!


Bold would be accusing them of not allowing you to install shit quality kernel drivers in ring0, and then accusing them of negligence when you hose your machine by doing it.


If you install some pointless garbage as root on a free OS, it can make your system unbootable. I don't think an OS vendor has any blame here.

The mistake here is people think they need third party security solutions that are actually worse than nothing.


If you burn out the engine of your car with third-party tuning chips, would auto manufacturers cover that under warranty?


To keep up your analogy, it can be said that Microsoft delivers a 2024 sports car with a 1980s naturally-aspirated diesel engine.

If Microsoft had done their job instead of inventing new UI frameworks basically for each recent Windows release, there would be no need for an antivirus engine at all.


Is there really a need for antivirus software anyway? What does it actually prevent?


I think it's actually one of MS's moats that you can do this on Windows.

This would have been much harder on macOS, and practically impossible on iOS; considering that few businesses take those OSes seriously for this, it's more likely a requirement than a flaw.


Absolutely not. If they did, we would immediately have a walled garden. We're moving too far in that direction already; we don't need to give Microsoft an excuse to jump there immediately. If you install stupid shit on your computer, that's on you, not Windows, not macOS, not Linux.


Should MS hold liability for selling a system that needs additional 3rd-party components to be considered secure?


CrowdStrike is also used on Linux and macOS devices. So by your definition, does that mean they aren't considered secure either?

CrowdStrike is certainly not required for a device to be considered secure; it is a tool that uses AI and ML to detect and prevent malware, 0-day exploits and other cyber threats in real time across endpoints.


> uses AI and ML

No, it does not. Such a thing does not exist.

Most antivirus programs rely primarily on signature-based detection and heuristic analysis rather than advanced machine learning algorithms. While some modern antivirus solutions incorporate basic machine learning for anomaly detection, they do not typically employ the sophisticated, self-learning AI systems that are characteristic of true artificial intelligence applications. And I do understand CrowdStrike is more than just an antivirus.

Please don't repeat CNBC stock-market-pumping memes. No, they are not an "AI stock".


Crowdstrike doesn’t make anything secure. It’s the same smoke and mirrors bullshit they were selling in the early 00’s via Norton or McAfee. It slows down machines in the best case and periodically renders them inoperable.


>Crowdstrike doesn’t make anything secure.

It's not security pixie dust, and might introduce more security vulnerabilities itself, but it's probably pretty good at preventing Alice from accounting from running a random macro-enabled excel file with a malicious payload.



