
In general, taking control away from users sets up all kinds of bad incentives. For example, automatic updates with no way to downgrade save vendors from having to compete with their own older versions. This means regressions in functionality or design can be pushed out with little recourse for users other than complaining online. This is compounded by ecosystem lock-in and lack of data portability. The software industry as a whole is heading towards treating users more and more paternalistically.



Conversely, before automatic updates web developers were stuck supporting Internet Explorer for the best part of twenty years. Many of the people using it had neither reason nor knowledge to update it, and it became the reason my parents' computers got riddled with malware.

There's a sensible middle ground here. Take the paternalistic approach that (generally) protects people like my mum. Add settings that allow people like you and me to turn off updates or roll backwards. Push the people controlling the updates (like the Chrome store) to better protect their users.


Users need to be motivated to upgrade. If their current software works sufficiently on the sites they care about, then they have no need to upgrade. If the sites themselves are enabling this behavior, by bending over backwards to work with old browsers, then they are part of the “problem”.

I don’t like automatic updates and generally keep them disabled. Software upgrades tend to reduce functionality and instead force unnecessary UX redesigns on users, so I’d rather avoid them. I wish developers had the [EDIT: incentive] to release security patches independently from functionality changes, but few do that anymore, sadly.


It's been an age since I've worked in an agency, but back in the IE era, at least once a month a dev would ask to use a 'modern feature'. Something to support a new piece of design from the design team, or save hours or days of dev, or remove the need for hacky 'fixes' that could be done cleanly with modern browser support.

So off to analytics they would go. "X thousand users are using IE8. We're converting at X%. Removing support for IE8 just means these people will shop elsewhere and we'll lose X thousand pounds a month. You need to support IE8."

Believe me, I wish it was as simple as saying developers are "part of the problem," because it would be an easy fix. But try selling that (without a huuuuge struggle!) to the person who holds the purse strings.

Sadly the new features usually only came on new sites. It's much easier to push it through when you're not cutting off an existing income stream.


>I wish developers had the competence to release security patches independently from functionality changes, but few do that anymore, sadly.

You do realize it's not competence developers are lacking, it's resources, which are finite, don't you?


Despite automatic updates, web developers are still stuck with Safari, IE, old Android browsers and old Edge. Automation doesn't help with bugs and functionality if there are just no updates to be installed that fix bugs and bring new functionality.


>Conversely, before automatic updates web developers were stuck supporting Internet Explorer for the best part of twenty years. Many of the people using it had neither reason nor knowledge to update it, and it became the reason my parents' computers got riddled with malware.

The failure is not that of Internet Explorer, but rather the OS in which it runs, which has a faulty security model. No operating system should trust executables with everything by default.


It wasn’t faulty at the time since people were more concerned about protecting computers from users than protecting users from applications.

We all seem to forget that computing has changed drastically in the last decade.


I would say that "protecting users from applications" (or at least, from external attackers) has been commonplace for maybe even two decades now, ever since the major malware 'plagues' of the early 2000s (pre-SP2 Windows XP) like Blaster or Sasser.

That said, in that era it was often assumed (more so than now) that software the user installed himself was trusted.


Internet Explorer was only replaced by automatic updates after its usage fell enough that sites stopped supporting it.


The major problem with Internet Explorer was that it was impossible to update without updating Windows, which cost money, so most people and organizations didn't do it.


I don't mind automatic updates per se as long as they're thoroughly checked and vetted. I'm not convinced Android and the Chrome Web Store do ANY checking / vetting. I have more trust in Apple's stores.

Vetting could be better with a lot of companies as well; remember not so long ago when Windows Defender decided a critical system file was malware and broke a ton of systems?

Verification. Vetting. Gradual release. Automatically disable extensions if they changed ownership, or if there's suspicious activity on the account of the owner (e.g. new login in another country).

And they need to take a MUCH harder stance on malware. Right now they're not even acknowledging there's a problem, let alone acting on it.


For any extension that makes any money, the solution is a deposit scheme.

"Google will withhold $1 per user of your ad revenue forever. If your extension is found to contain malware, you forfeit all the $1's. Decisions on malware'y ness shall be made by XYZ malware researchers."

Allow a developer to get back their $1 when a user uninstalls the extension, or when the developer stops making the extension. Also give the developer a certificate at any time showing how many $1's you hold of theirs (they could use that to get a loan from someone willing to trust them not to distribute malware).
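
For concreteness, here's a rough sketch of what the escrow bookkeeping could look like. The class, method names, and the $1 figure are purely hypothetical; nothing like this exists in the real Chrome Web Store:

    # Hypothetical sketch of the proposed per-user deposit scheme.
    # Not a real Google/Chrome Web Store API.
    DEPOSIT_PER_USER = 1.00  # dollars withheld per active user

    class ExtensionEscrow:
        def __init__(self):
            self.held = 0.0        # total dollars withheld from the developer
            self.active_users = 0

        def on_install(self):
            # Withhold $1 of the developer's ad revenue for the new user.
            self.active_users += 1
            self.held += DEPOSIT_PER_USER

        def on_uninstall(self):
            # Return $1 to the developer when a user leaves.
            if self.active_users > 0:
                self.active_users -= 1
                self.held -= DEPOSIT_PER_USER
                return DEPOSIT_PER_USER
            return 0.0

        def on_malware_verdict(self):
            # Independent researchers flag the extension: all deposits forfeited.
            forfeited = self.held
            self.held = 0.0
            return forfeited

        def certificate(self):
            # The statement a developer could show a lender.
            return f"Escrow holds ${self.held:.2f} across {self.active_users} users"
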


Not really a solution, just the minimum price a buyer would need to pay.


True. But even the most profitable malware won't want to forfeit hundreds of millions of dollars for a popular chrome extension.


On the other hand users are generally pretty poor at managing software themselves and as long as it works they'll happily and probably ignorantly run something that is already insecure and needs an update.


> users are generally pretty poor at managing software

This is an assertion which begs many questions.

Who are these users? What do you mean by "generally"? What do you mean by "poor"? What do you mean with "managing software"? Which software specifically? Why is "managing software" hard? What are specific case where this might be true? Is this statement falsifiable?

For instance, how do age, social background, education level, language, culture,... factor into the experience of "managing software"? Surely the problem can't be the software itself in its entirety?

See, statements like these tend to break down once you start digging into the murky nuances and specificities of reality.

Moreover, accepting them at face value tends to reinforce a belief which isn't based on fact: that the users of digital technology can't manage their devices, and therefore shouldn't be confronted with managing their devices.

... which is then translated and implemented in interfaces and systems that simply lack the functionality that gives users fine grained control over what is or isn't installed.

Over a longer term, this promotes a form of "lazy thinking" in which users simply don't question what happens under the hood of their devices. Sure, people are aware of the many issues concerning privacy, personal data, security and so on. But ask them how they could make a meaningful change, and the answers will be limited to what's possible within the limitations of what the device offers.

A great example of this would be people using a post-it to cover the camera in the laptop bezel.

People don't know what happens inside their machine, they don't trust what happens on their machine, and there's no meaningful possibility to look under the hood and come to a proper understanding... so they revert to the next sensible thing they have: taping a post-it over the lens.

The post-it doesn't solve the underlying issue - a lack of understanding which was cultivated - but it does solve a particular symptom: the inability to control what that camera does.


It really doesn't beg those questions - we have 25+ years of data backing it up. People across the board are bad about running updates. I'm guessing you missed the mid-late 90s when things like buffer overflows started to be exploited and firewalls became necessities because even the folks whose job it was to run updates of vulnerable systems with public IPs on the Internet... weren't. Then came the early 2000s and all the worms running amok because people still weren't running their updates. Then the collective web development industry screamed in pain because things like Windows XP and IE6 just would not die.

The collective Internet has been through this before and (mostly) learned its lesson. People don't run updates when it's not shoved down their throat. And it's not a small segment of people. And it hasn't changed. Look at how many hacks still happen because of servers and apps that aren't patched for known vulnerabilities. Or the prevalence of cryptojacking which is still largely based on known vulnerabilities that already have patches available - indicating it's successful enough that people keep doing it.

Most users don't question what happens under the hood of their devices because they don't care. They have other things to care about that actually mean something to them besides the nuances of the day to day maintenance of their devices. There does not exist an effective way of making people care about things like this, let alone educating the masses on how to appropriately choose which commit hash of their favorite browser extension they should really be on. How many security newsletters do you really expect the average person to be subscribed to in order to make informed decisions about these things?

Hell my "Update" notification on Chrome is red this morning and I'm at least in the top 10% of security-conscious folks in the world (it's really not a high bar).

I'm not saying automatic updates are without their problems - I'm in a thread on HN about that exact thing. But trying to claim it's somehow about sociodemographic issues and the answer is solving that and going back to selectively running updates is just ignoring the lessons of the past.


I, and everyone else I know, do not install updates to our software in a timely manner unless we actively need a feature.

Users are "I, and everyone else I know".

Generally is "unless we need a feature".

Poor is "do not install updates to our software".

Managing software is "install updates".

Software is any software we use that provides updates, which is all of it.

Managing software is hard because doing it manually would require checking the website of every piece of software you've ever downloaded at regular intervals, where regular could be as frequently as minutes for security-critical tools.

If I ever downgrade my software and lock it to a specific version, I am now managing it manually, and all of the above applies.

I honestly don't think there are unquestioned assumptions here, because the task of keeping security-critical software up to date manually is nearly impossible for any user.


I honestly am not at all sure what you mean by much of that.

Demographics don't change the fact that if you don't automatically update software, many users simply won't. That's bad.


... in the usual pedantry of HN your use of "poor" was interpreted to mean socio-economic, rather than... "just bad at something"...


I don’t see how one could parse “On the other hand users are generally pretty poor at managing software themselves” and assign that interpretation to “poor”.


I agree, but the user who responded to me seemed to talk about demographics as if I had meant "poor" as in not having much money.

The internet is global, sometimes I think things get lost in translation.


That's a reductionist reading of my comment.

I'm challenging your initial assertion that "people are poor at managing software". That's not enough of an explanation to support the second part of your claim:

> and as long as it works they'll happily and probably ignorantly run something that is already insecure and needs an update.

Are they poor at managing software because they are ignorantly running insecure software? Or are they ignorantly running insecure software because they are poor at managing software?

The replies so far take the entire context out of the picture and reframe the issue as "Users use their devices the 'wrong way', and this can only be solved through technological advances."

I'm here questioning and challenging those assertions.


Oh I see. That's weird, but thanks for letting me know.


That would cover users who are poor at managing software. Being able to turn them off would require someone to be good at managing software. Why remove control from those users?


I don't want to be saying that we should remove control, but I actually do think it's reasonable to. Even on a single-user device, security issues are not isolated. An infected machine will likely be used for things like spam and DDoS attacks.

If you make something available for people to toggle that improves their experience, people are going to take advantage of that even if they don't really grasp or decide to ignore the consequences. In the case of updates the improved experience is not being nagged or forced to restart an application or the whole OS. And unfortunately the only way to really gatekeep that control to people who know what they're doing is giving it enterprise pricing.


I want to think that folks who would choose that option would be responsible, but the amount I hear from other developers who defer updates on Windows 10 to the maximum (1 year...) and are still upset when they have to reboot makes me think that even experienced users present a risk.



