
The problem is software complexity. You can't really own anything you don't understand, and you can't be free in a world you don't understand. And we don't understand software.

Imagine a naive young king who doesn't know how to do anything for himself. He will easily be controlled by those around him who do understand the world.

Of course, nobody has ever understood everything about the world. But different people understood different aspects in depth, and literacy and education let people share that expertise.

Software is so complex that nobody really understands it. Nobody understands how a car works, nobody understands how a phone works, because they are both controlled by hopelessly complex software. Governments can't regulate emissions because the software changes the emissions controls based on arbitrary unknown factors. Security problems are just a small subset of the surprising ways complex software behaves -- surprising even to the software engineers who build it.
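To make the emissions example concrete, here is a minimal hypothetical sketch in Python (the inputs and thresholds are invented for illustration, with no resemblance to any real ECU code) of why black-box testing fails: a single conditional is enough to behave one way on the test stand and another way on the road.

    # Hypothetical sketch of a "defeat device". A real ECU is far more
    # complex; the point is only that one conditional is enough to make
    # black-box emissions testing unreliable.
    def choose_emissions_map(wheel_speed_kmh, steering_angle_deg):
        # A dynamometer test looks like: wheels turning, steering dead still.
        on_test_stand = wheel_speed_kmh > 0 and abs(steering_angle_deg) < 1.0
        if on_test_stand:
            return "clean_map"        # full exhaust treatment, passes the test
        return "performance_map"      # reduced treatment on the road

    print(choose_emissions_map(50, 0.2))   # clean_map
    print(choose_emissions_map(50, 15.0))  # performance_map

A regulator running the test cycle only ever sees the clean branch.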

And the key is that this is all new complexity that didn't exist before. Physics and biology were developed to understand pre-existing complexity, so we always made forward progress. But we are moving backwards now because we are introducing complexity faster than we are understanding it -- much faster.

Therefore this is actually worse than the naive king mentioned above. In that example there is merely an asymmetry of information, which can be resolved through education. Software is so complex that education can't hope to keep up.




> Software is so complex that nobody really understands it.

Unless this statement is trying to imply something deeper than what it states, I would disagree.

I work with embedded systems and connected devices. To me it's not a problem of complexity, but a problem of not having strong standards of practice on both the device side and the facility side.

Have sensitive data on your network? Separate it from the rest of the network, or don't put devices on the network that don't meet your security needs.
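As a minimal sketch of verifying that separation, run something like this from a host on the untrusted segment (the address and port below are made up for illustration):

    # Minimal sketch: from the untrusted (e.g. IoT) segment, confirm a
    # sensitive host is NOT reachable. Address and port are made up.
    import socket

    SENSITIVE_HOST = "10.0.10.5"   # hypothetical file server on the trusted VLAN
    SENSITIVE_PORT = 445           # SMB, a commonly sensitive service

    def is_reachable(host, port, timeout=2.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if is_reachable(SENSITIVE_HOST, SENSITIVE_PORT):
        print("Segmentation FAILED: sensitive host reachable from this segment")
    else:
        print("OK: sensitive host is not reachable from this segment")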


"don't put devices on the network that don't meet your security needs"

Does it not disturb you that the mere presence of a device on your network could compromise everything else? Why is that?

I argue it's because of complexity. We don't really know how these devices behave outside of very controlled circumstances.

And the vulnerability could be anywhere in the stack. I remember this bug from a while back:

https://lcamtuf.blogspot.com/2014/10/psa-dont-run-strings-on...

The GNU strings utility was vulnerable to untrusted input! Who would have imagined that could ever be the case?
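If I remember right, the workaround suggested at the time was "strings -a", which skips the libbfd file-format parsing and just scans raw bytes. That scanning part is trivial -- here's a rough Python equivalent, to show how little of the job actually needed a vulnerable object-file parser:

    # Rough equivalent of `strings -a`: scan raw bytes for runs of
    # printable ASCII, with no attempt to parse the file format.
    # (It was the format-parsing step, in libbfd, that was exploitable.)
    import re
    import sys

    PRINTABLE_RUN = re.compile(rb"[\x20-\x7e]{4,}")  # 4 = GNU strings' default

    def safe_strings(path):
        with open(path, "rb") as f:
            data = f.read()
        for match in PRINTABLE_RUN.finditer(data):
            print(match.group().decode("ascii"))

    if __name__ == "__main__":
        safe_strings(sys.argv[1])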

Honestly, that bug just made me give up. Software cannot be reasoned about any longer. I used to believe that solid components strung carefully together could add up to something understandable. But no, we are beyond that.

I'm not talking about crazy James Bond hackers that somehow infected your compiler or something. I mean that, by accident, a basic utility does something crazy.

This is not math or science or engineering any more. It's wizardry, witchcraft, and alchemy.

(I'm exaggerating a bit, but it really is discouraging to me.)


You might find Normal Accidents by Charles Perrow interesting. He describes accidents caused by systems (e.g. nuclear power plants) becoming so complex that they are incomprehensible to humans.

"I used to believe that solid components strung carefully together could add up to something understandable. But no, we are beyond that."

I think that's true, but it turns out that we don't have any solid components. None.


If you put two nuclear power plants on the same grid, it is pretty hard to imagine how a meltdown of one plant would trigger meltdowns elsewhere on the grid (because the grid carries electrical energy and is incapable of carrying high-speed neutrons). But with software, you don't have to imagine such failures, they happen all the time (because the internet can carry any data, including more software). So I still maintain that software systems are more complex. And if they aren't more complex today, they will be soon, because the complexity is growing without any obvious bound.

But for the sake of argument, let's say they are of comparable complexity. If you show a layperson a nuclear power plant and ask "who do you think should run this: you, or a team of nuclear engineers?", they would probably answer "a team of nuclear engineers, please". Show the same person a television, and they will feel like they should be able to operate it. But it's actually an internet-enabled TV running sophisticated software on the same home network as your internet-enabled security camera system -- a very unsettling situation. In other words, software now makes everything -- toasters, TVs, phones, cars -- into incomprehensible systems.


I don't know how to make clothes (I guess very few of us do) but I do own them and I can somewhat fix them. Imagine if we had to go back to the manufacturer to sew on a button. We can even mod clothes and copy designs without being sued.

A couple of links about that:

https://www.techdirt.com/articles/20100526/0039459578.shtml

https://www.ted.com/talks/johanna_blakley_lessons_from_fashi...


I disagree. Not understanding how my car works is not a problem for ownership, so long as I can decide where to delegate the maintenance and everything else that requires that understanding. When I can choose my mechanic, I own it. When I have to take it to a dealership, because of all the DRM'd electronics inside, I no longer do.


"I disagree...When I can choose my mechanic, I own it."

I think we are in agreement here. You don't personally need to know everything about everything you own. But someone needs to know enough about it that you can use their expertise or learn from them if you care to.

The myth is that it's just a question of who knows what -- that the right laws (or the right consumer pressure) will open up the right information, and freedom will follow. That's a factor, sure, but a minor one.

The real thing holding us back is that nobody understands these complex software stacks; we just keep building them bigger and bigger and understanding them less and less.


How can you prevent your car from reporting everything you do to the manufacturer? Look at this article as an example: http://www.businessinsider.com/ford-exec-gps-2014-1


AI black boxes like neural nets are a prime example of all this.
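Even a toy network makes the point. A minimal sketch in plain numpy (the seed and the XOR task are chosen just for illustration): it should learn the right answers, but the learned weights are opaque numbers that explain nothing about any individual decision.

    # Toy black box: a tiny network learns XOR, but nothing in the
    # learned weights "explains" why any particular answer comes out.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # 2 inputs -> 4 hidden
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # 4 hidden -> 1 output
    sigmoid = lambda z: 1 / (1 + np.exp(-z))

    for _ in range(20000):                    # plain batch gradient descent
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)   # backprop through the output
        d_h = (d_out @ W2.T) * h * (1 - h)    # ...and the hidden layer
        W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
        W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

    print(out.round(3).ravel())  # should be close to [0, 1, 1, 0]
    print(W1)                    # the "explanation": a grid of opaque floats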


> And we don't understand software.

People are perfectly capable of understanding very large programs, even from reverse engineering alone.

The issue is that things change so quickly that generating such knowledge is economically useless.

I have this discussion with my CTO all the time:

"I'd like you to go do <X>".

"No, please give that to <junior person>."

"But he's junior. I want someone senior to do it."

"The half-life on the knowledge to do <X> is weeks to months. Senior people will be no better at <X> than a junior person, and our junior people are quite capable precisely because we trust them with things that other places would consider a "senior" task. The first couple of weeks are going to be spent on Google, Slack, IRC, mailing lists, forums, and anything else figuring out all the ways that thing fails and how to debug it. After a week or two, they won't be junior at it, and we'll make them give a talk about it if we deploy it."

Sometimes I still have to step in, and that's fine. But I avoid learning software things with short half-lives as much as possible, nowadays.

Of course, that makes me an out-of-touch, crusty, old fart to my juniors.

But, they are smart enough to acknowledge that my code always seems to be so much more reliable than theirs. And that's fine, too.


"The issue is that things change so quickly that generating such knowledge is economically useless."

That's a good point, and certainly part of the problem, but it does not detract from my point.

The bottom line is that we don't understand the software in the world around us and that it would be hopeless to try. Whether that's because of static complexity or dynamic complexity seems like a separate discussion.

Changing software at the rate humans do is kind of like changing the laws of physics every year. It's great to know that we understand some past snapshot of the software stack we built, but that's not relevant to living in today's world.


> Nobody understands how a car works, nobody understands how a phone works, because they are both controlled by hopelessly complex software.

At least for cars and other home appliances, you can still buy older models which don't have software at all, and are thus relatively understandable and repairable. (Such machines often sell for surprisingly high prices even when well-used, for this and a few other reasons.) But then you give up convenience, efficiency, safety, and other things that could be considered progress.

Of course, a lot of people will think you're a redneck or similar if you pursue that way of life, but it's certainly one way to stay away from the "complexity explosion".


> The problem is software complexity. You can't really own anything you don't understand, and you can't be free in a world you don't understand. And we don't understand software.

What this means is that for all practical purposes AI is already here. AI, or rather the fear of AI, is about software making decisions that impact us outside of our knowledge or control.

Prior to software, this was done by bureaucracy. We can see software (on the slope to AI) as disempowering in the same way.


+1. The AI apocalypse looks more like a Kafka-esque bureaucracy than like I, Robot with machines shooting at us.


This is one of the issues with sci-fi movies for the less metaphorically minded. Or, well, anyone. I find books make the ideas much more portable, which is half the fun of sci-fi in the first place. Things in real life are never exactly like they are in the pictures.


Is it truly Kafka-esque when it provides us with adorable kitten gifs? At worst Brave New World-esque, at best actually good. Kitty!


and much of that complexity, at least on personal computers, gets added in the name of "user friendliness"...


also in the name of saving money/time (by using large libraries/frameworks and such)


And it does! That's what's maddening about it. The complexity always seems to justify itself, but the end results are probably not where we want to be.


> But we are moving backwards now because we are introducing complexity faster than we are understanding it -- much faster.

That's a brilliant insight -- thank you.




