
There's really no winning with this.

You can release patches 6 years after your device is EoL, but there will forever be more security issues and people still using your ancient product (think how long it takes some versions of Windows to truly drop below 100k active machines; hell, I wonder if Windows 3.1 has even reached that number yet. The long tail is going to be loooong). Not to mention you've set a precedent that the device is still getting patches and can still be used, which only makes the lifecycle problem worse.

You can release a version which severely limits the capability of the product, or effectively disables it, but that's a guaranteed way to get bad press, and even more customers will be mad at you for killing a device early.

You can turn the device over to the community (if you can manage to get it through legal and third-party agreements), but that isn't actually going to solve anything, since it's not a product for extremely tech-savvy users; at best it buys you some deflection in the news coverage in exchange for the effort of doing it (if you can do it at all).

You can declare the lifecycle over and, years later, be technically correct, but you still get the bad press and angry user feedback anyway.


In retrospect, in this particular case, it would have been really great for them to fix this issue, but that's an extremely hard call to judge accurately unless you're from the future. I mean, what percentage of devices that are many years past EoL have you heard of taking this bad a publicity hit? I'm sure there are individual cases to point to, but it's nothing against the absolute deluge of devices that have been quietly discontinued over the years without issue.

EoL is 3 years after purchase (if bought in 2015)? Imagine a car with no more spare parts after 3 years. Personally, anything below 10 years sounds ridiculous to me.

It's the same issue with modern high-end smartphones. At best you get 2 years of patches now, which is ridiculous because they really expect people to be able to afford >700€ phones that often.

I think that's more a specific-vendor / Android thing? I just bought a 2016 iPhone SE second hand. It's running the latest iOS 14 right now, and it's officially supported on iOS 15 too.

I think Apple is an outlier in terms of support, which I think is silly, because 5 years of support really should be taken for granted. There are plenty of working phones and computers that are 10 or 15 years old and still healthy.

This is why I'm partway through switching from Android to Apple as I replace devices. I could go with Android devices more likely to have third-party ROM support, but if I'm going to be concerned about security and patching why should I trust a pseudonymous person on a website forum to run my device?

As more security-related features get added, Android is also losing some of its differentiating abilities - e.g. WiFi scanning isn't possible on iOS, but I think it's also been getting more restricted on Android in the last couple of releases. That and the must-use-WebKit browser restriction are the only things that have really annoyed me on iOS.

The 2015 iPhone 6S is still getting patches; in fact, it'll be compatible with iOS 15 this coming fall. It'll have had seven years of ongoing support by this time next year.

Do you have a citation? I read 5-7 years.

Pixel XL released October 2016 (from [1]), no guaranteed updates after October 2018 [2].

I looked at the Pixel 4, released October 2019, discontinued October 2020, no guaranteed updates after October 2022...

[1] https://www.gsmarena.com/google_pixel_xl-8345.php [2] https://support.google.com/nexus/answer/4457705?hl=en#zippy=...

That's not "at best", that's probably more like "at worst". But thanks for the info.

Unfortunately, the 3 guaranteed years of updates for Pixels is pretty much the best in the Android world. There are very few exceptions, like Fairphone, that reliably provide long-term updates.

In Android-land this is currently about the best, actually. There's a chance of it improving over time as updates get decoupled from hardware support, but the situation is markedly worse than on iOS.

It's difficult to imagine many of the smaller IoT companies providing support for junk made back in 2010, but I do have a daydream law/policy compromise:

Require all hardware devices to have ongoing security support for twenty years. A company can opt out of that support after three years, provided that they open source everything down to the firmware, and provide documentation and any necessary jailbreaking tools.

Also an amendment to bankruptcy law that requires open-sourcing proprietary digital tech as part of filing.

Huh. This is a huge security flaw and they decided to not patch it. Winning is patching this.

This is so disingenuous (from the first report of this flaw):

“Western Digital takes the security of our customers’ data seriously, and we provide security updates for our products to address issues from both external reports and regular security audits.”

> Huh. This is a huge security flaw and they decided to not patch it. Winning is patching this.

"Winning" isn't "gee if we could just say we'll patch every discontinued product forever and imagine it had no downsides wouldn't that be great" - that's known as "dreaming".

If a company releases an internet connected appliance, they need to support the software for at least the half-life of the underlying hardware.

Having a vendor support patch releases indefinitely would not be a huge burden for Western Digital.

In this case, the cause was obvious errors in their PHP application.
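The reported flaw was in this class of bug: an endpoint whose authentication guard had been disabled. A minimal sketch of that pattern (illustrative Python, not WD's actual PHP; the handler names and request shape are made up):

```python
# Sketch of the bug class: the auth guard on a destructive endpoint is
# commented out, so any caller who can reach the device is trusted.

def handle_factory_restore(request: dict) -> str:
    # The fatal flaw: the check below was disabled.
    # if not request.get("authenticated"):
    #     return "403 Forbidden"
    return "200 OK: wiping device"

def handle_factory_restore_patched(request: dict) -> str:
    # The fix is a one-line guard before the destructive action.
    if not request.get("authenticated"):
        return "403 Forbidden"
    return "200 OK: wiping device"

anonymous = {"authenticated": False}
print(handle_factory_restore(anonymous))          # accepted: the bug
print(handle_factory_restore_patched(anonymous))  # rejected: the fix
```

The point is that no exotic exploit technique is needed here; the request handler simply never asks who is calling.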

Which is why I think if we are going to do IoT, it needs to all be behind some gateway designed by a company that will maintain security for decades.

Make a NAS drive which exposes everything to the gateway without bothering with authentication, and then let Apple/Google/Amazon work out how to defend access to the resource.

Random OEMs are unable to pull off IoT safely.

The IoT device will still be connected to a LAN/WiFi network to access that gateway. Local attacks will still be possible (imagine a flaw in the TCP/IP stack, the local firewall, or anything in that direction). And if that big company gets hacked (never say never), attackers would get access to a huge number of devices ready to pwn. That's before even considering what this would mean from a privacy/mass-surveillance point of view (by those companies, or by agencies infiltrating them).

I can kind of see a device that has no ports open on the local network, and just uses an always-on ssh tunnel to HQ to receive commands. That way it's not exposed to anything on the local network, and it's the company resources that would have to be hacked first.
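That outbound-only design can be sketched in a few lines. This is a hypothetical illustration (HQ is simulated by a listener on localhost, and the one-line "protocol" is made up): the device initiates the connection, so it listens on no LAN ports at all.

```python
import socket
import threading

def hq(server_sock: socket.socket) -> None:
    # HQ side: accept the device's inbound tunnel and push a command down it.
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(b"status?")
        print(conn.recv(64).decode())  # prints: ok: disks healthy

server = socket.socket()
server.bind(("127.0.0.1", 0))  # ephemeral port stands in for HQ's endpoint
server.listen(1)
t = threading.Thread(target=hq, args=(server,))
t.start()

# Device side: a single outbound connection, nothing listening locally.
with socket.create_connection(server.getsockname()) as device:
    cmd = device.recv(64)
    if cmd == b"status?":
        device.sendall(b"ok: disks healthy")
t.join()
server.close()
```

Because all traffic originates from the device, a scan of the LAN finds no open ports on it; the remaining attack surface is HQ itself and the client code parsing HQ's commands.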

> There's really no winning with this.

There is: don't release devices with security flaws in the first place. The fact is, they released a fatally flawed device. That the flaw was discovered later doesn't change that fact.

I think the way we talk about security patches and updates obscures the fact that they're correcting fundamentally flawed software. In other circumstances, this would result in product recalls. Those financial incentives are a big reason why product recalls for serious faults are fairly rare in other types of products.

Software should be more secure. However, I don't think it's realistic to expect bug-free, perfectly secure software. There is no field of engineering where 100% perfect tolerances are possible, and once you get past 99%, the resource requirements for the next fraction of a percent quickly go non-linear.

This is why 1) security patches will pretty much always be necessary and 2) relying on perfect software alone is insufficient. Other precautions also need to be taken to prevent or mitigate attacks. In this case, it appears the My Book attack requires the IP address of the device. That suggests the people impacted had these networked in a way that exposed them, or the relevant ports, to the world, which is very bad security.
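Checking for that kind of exposure is cheap. A minimal reachability probe (an illustrative sketch, not the exploit): if a device's admin port answers probes from outside your LAN, it is exposed in exactly the way that makes this class of attack practical.

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    # True if a TCP handshake to host:port completes within the timeout.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a throwaway local listener standing in for the device.
probe_target = socket.socket()
probe_target.bind(("127.0.0.1", 0))
probe_target.listen(1)
host, port = probe_target.getsockname()
reachable = port_open(host, port)
print(reachable)  # True: the listener answers
probe_target.close()
```

Running a check like this from a host outside your network (against your public IP) is a quick way to confirm nothing on the NAS is reachable from the internet.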

> There is no field of engineering where 100% perfect tolerances are possible

This is a false proposition.

All actual engineering professions I know of have processes and checks and balances to avert disasters and premature failures. No one expects all the shingles to be perfectly straight or the same color, and occasionally a roof may have a bit of a leak, yet even a single 4-year-old roof developing a massive leak would be a big deal. Imagine the consequences if all 4-to-11-year-old red roofs from a large construction company collapsed or developed massive leaks overnight.

Neither bridges nor roofs keep standing because they are built perfectly, nor are all bugs security issues. Yet a single fatal flaw can bring a bridge down and a single off-by-one can be a root exploit.

We shouldn't expect perfect software, yet we also shouldn't need security updates (at least not often, and not on everything). WD's SW was fatally flawed and shouldn't have been released, and WD should be held responsible. SW "engineers" should be ashamed to be associated with such practices. I know I am.

Roof/building/bridge construction are crafts that have been around for thousands of years, and they still get things extremely wrong sometimes: look up the Millennium Bridge in London as an example; it needed the equivalent of a security patch. On the subject of roofs, it's also often recommended that you do annual roof inspections and maintenance, looking for potential issues.

Regardless, we're talking about security, so let's go to the actual real-world equivalent. Physical security. Is perfection possible there? Nope. Anything even remotely capable of surviving a sustained attack is going to cost an amount of money reserved for corporations and nations, not everyday consumers.

If you think the current quality of physical engineering is something we should hold up as a comparison to the security quality expected of software, let's compare notes in a few hundred years when programming reaches the same level of maturity.

> thousands of years

Maybe I should have used electrical installations as an example?

> real-world equivalent. Physical security

Physical security isn't a good analogue, because you don't have a line of hackers trying to get into your cupboard, and magically also into millions of other cupboards, closets and storage rooms, at near-zero marginal cost and with an incomparably easy way of avoiding getting caught.

Unlike physical security, perfect SW security is pretty much attainable (shocking, isn't it), with 0 marginal cost (no cost to duplicate). Physical access, social manipulation etc is physical security.

> footbridge in London

One bridge in a hundred, caught before it failed? Compare that to all the SW with network access and weekly updates.

We know how to engineer things so they are reasonably safe, we actually do it.

We also know many ways to make software much, much safer with only moderate investment, and some ways to make provably correct SW, and we do not care.

Exposing IoT devices to the world is (going to be) common with IPv6; I'm not sure how much SLAAC privacy extensions and "temporary" (hours-to-days?) addresses help.

That's essentially impossible for this class of device, though. While this case is a bug in the on-device API, there are countless others involving previously unknown vulnerabilities in widely-trusted software components. Heartbleed comes to mind.

Hardware needs to be liberated from unsupported software, and users should be made aware of vulnerabilities and support status. Making software vendors liable for future exploits of unknown vulnerabilities opens a can of worms that would have non-negligible consequences for everyone who writes software, and not all of those would be beneficial to security.

Product recalls are for defects that can cause physical injury. If the MyBook had an electrical defect that could cause a shock hazard or start a fire, it would have been recalled.

Many products have flaws, but if the worst that can happen is that the customer feels ripped off, they don't get recalled. There might be remedies under an express or implied warranty, but tech products typically disclaim all of that in their terms of service.

Total remote control of the device allows quite a bit... disabling thermal protection, disabling fans, changing voltages on regulators to be out of allowed range, etc.

Just because the attackers chose to erase the device rather than destroy it in this instance doesn't mean that handing control of all software on the device to randos on the internet, by not patching a root RCE vulnerability, isn't a physical safety issue.

> Total remote control of the device allows quite a bit... disabling thermal protection, disabling fans, changing voltages on regulators to be out of allowed range, etc.

Having total remote control of an IoT device doesn't imply any of these things. Thermal protection is typically hardware-driven, on die; on cheap devices fan control is implemented in hardware because it's cheaper and easier than software solutions. Voltage regulators are hardware devices that aren't adjustable, and even adjustable ones have a working range that is set via a resistor. Switching power supplies aren't software-driven.

All of that is software-controllable on many devices, especially on SoC-based NASes. The working range of adjustable regulators is almost always wider than what the connected device tolerates (the resistor may just set the default voltage, for example), with no way to set hard limits on pretty much all HW I've seen so far. You can also cause problems just by abusing transients, even on regulators that can only be turned on/off. And thermal protection on SoCs is usually based on a SW regulation loop (grep for cooling-device through the DTSes in the Linux tree; all those SoCs have regulation in SW).
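For reference, this is roughly what that SW-driven thermal regulation looks like in a device tree. This is a generic sketch modeled on mainline ARM SoC DTSes, with illustrative labels and values; the point is that the trip points and the cooling policy live in data interpreted by the kernel, i.e. in software:

```dts
thermal-zones {
    cpu-thermal {
        polling-delay-passive = <250>;  /* ms: the OS polls and regulates */
        polling-delay = <1000>;
        thermal-sensors = <&tsens 0>;

        trips {
            cpu_alert: cpu-alert {
                temperature = <85000>;  /* millicelsius */
                hysteresis = <2000>;
                type = "passive";
            };
        };

        cooling-maps {
            map0 {
                trip = <&cpu_alert>;
                /* throttling is performed by the kernel's thermal
                   framework, not by a hardware loop */
                cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
            };
        };
    };
};
```

Code running as root on such a device can interfere with this loop, which is what makes a root RCE more than a data-loss problem.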
