I'm not a lawyer, and I'm definitely not a French lawyer, but I don't think the OVH comparison is valid.
In the OVH case, their backup system (as a whole) failed. Many customers were left with 0 data, and per the article "the court ruled the OVH backup service was not operated to a reasonable standard and failed at its purpose".
Meanwhile CrowdStrike "just" crashed their customers' kernels, for a duration of about an hour (during which they were 100% safe from cyber attacks!). Any remaining delays getting systems back online were (in my view) due to customers not having good enough disaster recovery plans. There are certainly grounds to argue that CrowdStrike's software was "not to a reasonable standard", but the first-order impacts (a software crash) are of a very different magnitude to permanently losing all data in a literal ball of fire (as in the OVH case).
Software crashes all the time. For better or for worse, we treat software bugs as an inevitability in most industries (there are exceptions, of course). While software bugs are the "fault" of the software vendor, the job of mitigating the impacts thereof lies with the people deploying it. The only thing that makes the CrowdStrike case newsworthy, compared to all the other software crashes that happen on a daily basis, is that CrowdStrike's many customers had inserted their software into many critical pathways.
CrowdStrike sells a playing card, and customers collectively built a house with them.
(P.S. Don't treat this as a defense of CrowdStrike. I think their software sucks and was developed sloppily. I think they should face consequences for their sloppiness, I just don't think they will, under current legal frameworks. At best, maybe people will vote with their wallets, going forwards.)
Most computers affected by the fault needed physical remediation via a safe-mode boot: stuck in a reboot loop, they never stayed up long enough to download a fix. The understanding is that in most cases the fix had to be applied by an IT technician dispatched to physically access the computer.
A week (168 hours) later, there are still many, many computers out there that remain bricked by this fault, because it is so heinously difficult to fix.
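For context, the fix itself was tiny; the hard part was getting the machine into a state where you could apply it at all. Once an admin could reach safe mode or a recovery environment, the remediation (per CrowdStrike's published guidance, as I understand it) was just deleting the faulty channel file. A rough sketch of what that amounts to, written here purely as an illustration and not as any kind of official remediation tool:

    # Rough illustration of the manual remediation step: delete the faulty
    # channel file(s) from the sensor's driver directory. Assumes the machine
    # can boot far enough (safe mode / recovery) to run anything at all,
    # which was the hard part.
    import glob
    import os

    DRIVER_DIR = r"C:\Windows\System32\drivers\CrowdStrike"

    def remove_bad_channel_files(driver_dir=DRIVER_DIR):
        removed = []
        for path in glob.glob(os.path.join(driver_dir, "C-00000291*.sys")):
            os.remove(path)
            removed.append(path)
        return removed

    if __name__ == "__main__":
        for path in remove_bad_channel_files():
            print("removed", path)

On a BitLocker-encrypted machine you also need the recovery key before you can even get that far, which is where a lot of the pain came from.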
For what it's worth: I got the BSOD, and once I got the email from IT with the instructions, it took me about 20 minutes to apply the fix. Almost all of the company's employees who were affected were able to easily apply a self-help fix.
I can imagine this was not the case if you had to physically access remote servers, or didn't have access to BitLocker recovery keys.
How is it someone other than CrowdStrike's fault that the systems failed again at every reboot until someone with physical access and know-how deleted the crashing driver manually from recovery mode? What should a company operating, say, an MRI machine protected by CrowdStrike have done to recover access in a reasonable amount of time?
CrowdStrike's software should not be installed on an MRI machine, per CrowdStrike's own guidance:
"Neither the offerings nor crowdstrike tools are for use in the operation of [...] direct or indirect life-support systems [...] or any application or installation where failure could result in death, severe physical injury, or property damage."
If the PC controlling an MRI crashes, nothing will happen to the instrument itself. The data might be lost and you can't continue using the MRI until the PC is fixed, but nothing more. This would not violate those guidelines.
It didn't just crash: it crashed 100% of the computers running it at that time, and in a way that required physical intervention to fix. So I think you can consider this quite different from regular crashes, because recovery is much more difficult and because it affected a lot of computers simultaneously.
On top of that, some companies had failures of their own in their recovery procedures. But even with good procedures this can be a significant outage, because it is not trivially reverted and it typically hits many configurations that are redundant against many other kinds of failure.
That would mean that you always need a fully redundant copy of everything based on entirely different OSes and software with no common component. That is obviously not realistic.
My understanding is that customers believed they had control, since CrowdStrike gave them configuration options to delay or stagger updates. Apparently many of them were surprised that CrowdStrike had the ability to bypass all of these configuration options and force the update through anyway. I think that is where CrowdStrike's liability skyrockets.
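To make the gap concrete: the staging controls customers configured applied to sensor version updates, while the rapid-response content (channel file) updates went out to every host regardless. Here's a purely illustrative sketch of that distinction; the names and structure are hypothetical, not CrowdStrike's actual API or config format:

    # Hypothetical illustration of the gap between what customers configured
    # and what actually shipped. "sensor_version" updates respected the staging
    # policy; "channel_file" (rapid response content) updates did not.
    from dataclasses import dataclass

    @dataclass
    class UpdatePolicy:
        ring: str         # e.g. "canary", "early", "broad" -- hypothetical names
        version_lag: int  # run N-1, N-2, ... behind the latest sensor version

    def update_applies_immediately(policy, update_kind):
        if update_kind == "sensor_version":
            # Customer-controlled: staged by ring / version lag.
            return policy.version_lag == 0 and policy.ring == "canary"
        if update_kind == "channel_file":
            # Vendor-pushed content: delivered to all hosts, bypassing the policy.
            return True
        return False

If you thought the first branch covered everything CrowdStrike shipped to your fleet, the second branch is the surprise.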
Building high-assurance systems is expensive. Anyone not doing so must accept the associated risks (which is fine, not everything needs to be high-assurance).
What if the uptime of 50% of your computers does matter, because you need 100% of your computers to run at 100% capacity? If a shop has two lathes and one crashes, leaving the shop at 50% capacity, is it not losing money because of CrowdStrike's incompetence?