
> While it's true that most software is lower stakes than bridge design, I can't help but thinking basing huge and ever-increasing parts of our economic and social life on software that is _complete crap_ is going to bite us hard eventually.

Sadly, I think you're right.

I only laid out part of my view: most coding isn't comparable to bridge design and shouldn't be approached in that manner, and attempts to seriously slow down software development are probably hopeless now that we're so deeply immersed in it.

But I share your fears, because most software falls in the wide gulf between bridge design and desk-organizer design. It may not do anything terribly high-stakes, but neither does it serve a single user per instance with a constrained failure mode. Even simple projects are often public-facing and running alongside more significant things. Many are open to user input, rely on external dependencies, and gather relatively sensitive info like "who read what".

The standard failure case for software really has no comparison in mass-produced goods: one user exploiting a flaw in the product to harm another user. It's stuff like Magecart, which unpredictably harms users in settings where no one they knowingly dealt with was malicious or negligent. Worse, software tends to have major exposure to adjacent software and hardware; not only do attacks like XSS turn low-importance software into a threat vector, but major companies (e.g. Lenovo, Sony, Sennheiser) keep shipping hardware whose bundled software compromises everything else running on the machine. Honestly, software seems to fail more like monetary and political systems, where malice converts small mistakes into large, indirect harms.

(And all of that is just how failures happen now. Crap like the npm left-pad incident suggests that we could see serious global outages over trivial errors, and any major vulnerability like Heartbleed or Spectre could become a disaster if the wrong people find it first.)

I'm still pretty dismissive of professional standards, but it's not for lack of concern. There are definitely appealing aspects, especially when I see things like companies using and defending plaintext password storage. I'd like to live in a world where people are at least told why that's bad practice before they do it, and maybe a world where there's some kind of authority to step in against the idiots who steadfastly defend it after being taught.
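
(For what it's worth, the gap between plaintext and an acceptable baseline is tiny. Here's a minimal sketch of salted, slow password hashing using only Python's standard library; the function names and iteration count are mine and purely illustrative, not a vetted recommendation.)

    import hashlib, hmac, os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        salt = os.urandom(16)  # unique random salt per user
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt, digest    # store these, never the password itself

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return hmac.compare_digest(candidate, digest)  # constant-time comparison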

I just fundamentally don't think they'll work for most of what ails us, and I expect bad faith to become a problem almost immediately. The examples I see cited aren't just fields with high-stakes work, but ones with defined owners producing linear throughput; a few engineers design one bridge, a doctor treats one patient at a time. Indirect failures happen, bolts shear and drugs have side effects, but even there the chain of custody and area of responsibility can be clearly defined. Software seems to function more like fields such as banking, politics, or even intelligence, where effects are nonlinear, often extraterritorial, and ownership of output isn't clear. Professional standards bodies in those fields, even ostensibly powerful ones, seem to consistently come up short or arrive too late. And that's before the infighting starts; already it seems like most calls for standards groups slide near-instantly from "banning malpractice" to "banning work I consider immoral".

Perhaps I'm not so much sanguine as fatalistic. There is too much software being produced, but I'm not sure how to fight that without crippling existing dependencies. There are a few legal changes I'd very much like to see, centered on making companies financially responsible for harms from bad practice, so that those harms at least threaten their profits. But overall, this feels more like a dilemma than a problem; we've already baked in so many major risks that it's not clear we have a way back.



