You can release patches 6 years after your device is EoL, but there will forever be more security issues and people using your ancient product (think how long it takes some versions of Windows to truly reach fewer than 100k active machines. Hell, I wonder if Windows 3.1 has really reached that number or not. The long tail is going to be loooong). Not to mention you've created a precedent that the device is still getting patches and can still be used, only making the lifecycle issue worse.
You can release a version which severely limits the capability of the product or effectively disables it, but this is just a guaranteed way of getting bad press, and even more customers will be mad at you for killing a device early.
You can turn the device over to the community (if you can manage to get it through legal and third-party agreements), but that isn't actually going to solve anything, as it's not a product for extremely tech-savvy users; at best it buys some deflection in the news coverage in exchange for the effort of doing this (if you can do it at all).
You can claim the lifecycle ended years ago and be technically correct, but you still get the bad press and user feedback anyway.
In retrospect, in this particular case it would have been really great for them to fix this particular issue, but that's an extremely hard thing to accurately judge unless you're from the future. I mean, what percentage of devices that are many years past EoL have you heard of taking a publicity hit this bad? I'm sure there are many individual cases to point to, but it's nothing compared to the absolute deluge of devices that have been quietly discontinued over the years without issue.
As more security-related features get added, Android is also losing some of its differentiating abilities - e.g. wifi scanning isn't possible on iOS, but I think it's also been getting more restricted on Android in the last couple of releases. That and the must-use-our-webkit browser restriction on iOS are the only things that have really annoyed me on iOS.
I looked at the Pixel 4, released October 2019, discontinued October 2020, no guaranteed updates after October 2022...
Require all hardware devices to have ongoing security support for twenty years. A company can opt out of that support after three years, provided that they open source everything down to the firmware, and provide documentation and any necessary jailbreaking tools.
Also an amendment to bankruptcy law that requires open-sourcing proprietary digital tech as part of filing.
This is so disingenuous (from the first report of this flaw):
“Western Digital takes the security of our customers’ data seriously, and we provide security updates for our products to address issues from both external reports and regular security audits.”
"Winning" isn't "gee if we could just say we'll patch every discontinued product forever and imagine it had no downsides wouldn't that be great" - that's known as "dreaming".
Having the vendor support patch releases indefinitely would not be a huge burden for Western Digital.
In this case, it was obvious errors in their PHP application.
Make a NAS drive which exposes everything to the gateway without bothering with authentication, and then let Apple/Google/Amazon work out how to defend access to the resource.
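For a sense of the class of flaw being discussed, here's a hypothetical sketch (not WD's actual PHP code; the function names and token handling are mine) of a request handler whose authentication check is effectively disabled, leaving a destructive endpoint reachable by anyone who can route a request to the device:

```python
# Hypothetical sketch of the flaw class: a destructive admin endpoint
# guarded by an authentication check that the flawed version simply
# skipped (imagine the check commented out in the shipped code).

def is_authenticated(session_token: str, valid_tokens: set[str]) -> bool:
    """Accept only a known, non-empty session token."""
    return bool(session_token) and session_token in valid_tokens

def handle_factory_restore(session_token: str, valid_tokens: set[str]) -> str:
    # The fix is this guard; without it, any unauthenticated caller
    # could trigger the wipe.
    if not is_authenticated(session_token, valid_tokens):
        return "403 Forbidden"
    return "200 OK: wiping disk"
```

The point isn't the specific endpoint - it's that a single missing guard on a network-reachable handler turns "knows the device's address" into "controls the device."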
Random OEMs are unable to pull off IoT safely.
There is: don't release devices with security flaws in the first place. The fact is, they released a fatally flawed device. That the flaw was discovered later doesn't change that fact.
I think the way we talk about security patches and updates obscures the fact that they're correcting fundamentally flawed software. In other circumstances, this would result in product recalls. Those financial incentives are a big reason why product recalls for serious faults are fairly rare in other types of products.
This is why 1) security patches will pretty much always be necessary and 2) relying on perfect software alone is insufficient. Other precautions also need to be taken to prevent or mitigate attacks. In this case, it appears the My Book attack requires the IP address of the device. That indicates that the people impacted may have had these devices networked in a way that exposed them, or the relevant ports, to the world, which is very bad security.
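That kind of exposure is easy to check for. A minimal sketch (the helper name is mine, not from the article) of testing whether a TCP port on a host is reachable at all - the precondition for an attacker who knows the device's IP:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Running this against your NAS's admin port from outside your network (or checking your router's port-forwarding and UPnP settings) tells you whether you're in the exposed population.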
This is a false proposition.
All actual engineering professions I know of have processes, checks and balances to avert disasters and premature failures. No one expects all of the shingles to be perfectly straight or have the same color, occasionally a roof may have a bit of a leak, yet I think even a single 4y old roof developing a massive leak would be a big deal. Imagine the consequences if all 4-11y old red roofs from a large construction company collapsed or developed massive leaks overnight.
Neither bridges nor roofs keep standing because they are built perfectly, nor are all bugs security issues. Yet a single fatal flaw can bring a bridge down and a single off-by-one can be a root exploit.
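To make the off-by-one point concrete, here's a toy illustration (contrived layout, simulated in Python; real exploits involve C memory layouts): an inclusive bounds check lets a write land one byte past the intended buffer, into adjacent state the program trusts.

```python
# Toy off-by-one: a 16-byte "buffer" followed in memory by a byte the
# program treats as a privilege flag. Using <= instead of < in the
# bounds check lets a caller clobber that adjacent flag byte.
BUF_SIZE = 16
memory = bytearray(BUF_SIZE + 1)   # buffer plus adjacent "is_admin" flag

def write_flawed(offset: int, value: int) -> None:
    if offset <= BUF_SIZE:         # off-by-one: should be "<"
        memory[offset] = value

def write_fixed(offset: int, value: int) -> None:
    if offset < BUF_SIZE:          # correct exclusive bound
        memory[offset] = value

write_flawed(BUF_SIZE, 1)          # silently sets the "flag" byte to 1
```

One wrong comparison operator, and a plain data write becomes a privilege escalation - which is exactly why "most bugs aren't security issues" is cold comfort.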
We shouldn't expect perfect software, yet we also shouldn't need security updates (at least not often and on everything). WD SW was fatally flawed, shouldn't have been released, WD should be responsible. SW "engineers" should be ashamed to be associated with such practices. I know I am.
Regardless, we're talking about security, so let's go to the actual real-world equivalent. Physical security. Is perfection possible there? Nope. Anything even remotely capable of surviving a sustained attack is going to cost an amount of money reserved for corporations and nations, not everyday consumers.
If you think the current quality of physical engineering is something we should hold up as a comparison to the security quality expected of software, let's compare notes in a few hundred years when programming reaches the same level of maturity.
Maybe I should have used electrical installations as an example?
> real-world equivalent. Physical security
Physical security isn't a good analogue, because you don't have a line of hackers trying to get into your cupboard - and magically also into millions of other cupboards, closets, and storage rooms - with near-zero marginal cost and an incomparably easier way to avoid getting caught.
Unlike physical security, perfect SW security is pretty much attainable (shocking, isn't it), with zero marginal cost (no cost to duplicate). Physical access, social manipulation, etc. are physical security.
> footbridge in London
One bridge in 100, caught before it failed? Compared to all SW with network access and weekly updates.
We know how to engineer things so they are reasonably safe, we actually do it.
We also know many ways to make software much, much safer with only moderate investment, and some ways to make provably correct SW, and we do not care.
Hardware needs to be liberated from unsupported software, and users should be made aware of vulnerabilities and support status. Making software vendors liable for future exploits of unknown vulnerabilities opens a can of worms that would have non-negligible consequences for everyone who writes software, and not all of those would be beneficial to security.
Many products have flaws, but if the worst that can happen is that the customer feels ripped off, they don't get recalled. There might be remedies under an express or implied warranty, but tech products typically disclaim all of that in their terms of service.
Just because the attackers chose not to destroy the device in this instance, and merely erased it, doesn't mean that handing control of all software on the device to randos on the internet (by not patching a root RCE vulnerability) is not a physical safety issue.
Having total remote control of an IoT device doesn't mean any of these things. Thermal protection is typically hardware-driven, on die; on cheap devices, fan control is implemented in hardware because it's cheaper and easier than a software solution. Voltage regulators are hardware devices that aren't adjustable, and even the adjustable ones have a working range that is set via a resistor. Switching power supplies aren't software-driven.