Imagine a naive young king who doesn't understand how to do anything for himself. He will easily be controlled by those around him who do understand the world.
Of course, nobody has ever understood everything about the world. But different people understood different aspects in depth, and education and literacy let people share that expertise.
Software is so complex that nobody really understands it. Nobody understands how a car works, nobody understands how a phone works, because they are both controlled by hopelessly complex software. Governments can't regulate emissions because the software changes the emissions controls based on arbitrary unknown factors. Security problems are just a small subset of the surprising ways complex software behaves -- surprising even to the software engineers who build it.
And the key is that this is all new complexity that didn't exist before. Physics and biology were developed to understand pre-existing complexity, so we always made forward progress. But we are moving backwards now because we are introducing complexity faster than we are understanding it -- much faster.
Therefore this is actually worse than the naive king mentioned above. In that example, it's just an asymmetry of information, and that can be resolved through education. Software is so complex that education can't hope to keep up.
Unless this statement is trying to imply something deeper than what it states, I would disagree.
I work with embedded systems and connected devices. To me it's not a problem of complexity, but a problem of not having a strong standard of practice on both the device side and the facility side.
Have sensitive data on your network? Separate it from the rest of the network, or don't put devices on the network that don't meet your security needs.
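To make that concrete, here's a minimal sketch of what I mean by separating them. Assuming a Linux router running nftables, with hypothetical interface names (`lan0` for the trusted network, `iot0` for the untrusted device segment, `wan0` for the uplink), a default-drop forward policy keeps the devices away from everything except the internet:

```nft
# /etc/nftables.conf -- hypothetical interface names; adjust to your setup
table inet filter {
    chain forward {
        type filter hook forward priority 0; policy drop;

        # Allow return traffic for connections already established
        ct state established,related accept

        # Untrusted devices may reach the internet, but not the LAN
        iifname "iot0" oifname "wan0" accept

        # The trusted LAN may initiate connections anywhere, including to iot0
        iifname "lan0" accept
    }
}
```

The point isn't the specific tool; any router that can do per-segment default-drop filtering gives you the same property: a compromised device can't even see the sensitive part of the network.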
Does it not disturb you that the mere presence of a device on your network could compromise everything else? Why is that?
I argue it's because of complexity. We don't really know how these devices behave outside of very controlled circumstances.
And the vulnerability could be anywhere in the stack. I remember this bug from a while back:
The gnu strings utility was vulnerable to untrusted input! Who would possibly imagine that would ever be the case!
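(If I recall, this was CVE-2014-8485: by default, strings handed the file to libbfd to parse as an object file, and a crafted header could exploit that parser. The workaround was `strings -a`, which skips the container parsing and just scans raw bytes. That raw scan is trivially simple, which is the whole point; a sketch of it in Python:)

```python
import re

def raw_strings(data: bytes, min_len: int = 4):
    """Extract printable ASCII runs from raw bytes, like `strings -a`.

    This never interprets the input as an object-file container, so a
    malicious header can't influence the scan -- the property that made
    `-a` the safe invocation on untrusted input.
    """
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]
```

A few lines that are obviously safe, versus a full object-file parser running by default on whatever you point it at. That gap is exactly the surprising behavior I mean.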
Honestly, that bug just made me give up. Software cannot be reasoned about any longer. I used to believe that solid components strung carefully together could add up to something understandable. But no, we are beyond that.
I'm not talking about crazy James Bond hackers that somehow infected your compiler or something. I mean that, by accident, a basic utility does something crazy.
This is not math or science or engineering any more. It's wizardry, witchcraft, and alchemy.
(I'm exaggerating a bit, but it really is discouraging to me.)
"I used to believe that solid components strung carefully together could add up to something understandable. But no, we are beyond that."
I think that's true, but it turns out that we don't have any solid components. None.
But for the sake of argument, let's say they are of comparable complexity. If you show a layperson a nuclear power plant, and say "who do you think should run this: you, or a team of nuclear engineers?" they would probably answer "a team of nuclear engineers, please". Show the same person a television, and they will feel like they should be able to operate it. But it's actually an internet-enabled TV running sophisticated software that is on the same home network as your internet-enabled security camera system, and it's a very unsettling situation. In other words, now software makes everything -- toasters, TVs, phones, cars -- into incomprehensible systems.
A couple of links about that
I think we are in agreement here. You don't personally need to know everything about everything you own. But someone needs to know enough about it that you can use their expertise or learn from them if you care to.
The myth is that it's a matter of access -- that the right laws (or the right consumer pressure) will open up the right information, and freedom will supposedly follow. That's a minor factor, sure, but I believe it's mostly a myth.
The real thing holding us back is that nobody understands these complex software stacks, we are just building them bigger and bigger and understanding them less and less.
People are perfectly capable of understanding very large programs even from reverse engineering.
The issue is that things change so quickly that generating such knowledge is economically useless.
I have this discussion with my CTO all the time:
"I'd like you to go do <X>".
"No, please give that to <junior person>."
"But he's junior. I want someone senior to do it."
"The half-life on the knowledge to do <X> is weeks to months. Senior people will be no better at <X> than a junior person, and our junior people are quite capable precisely because we trust them with things that other places would consider a 'senior' task. The first couple of weeks are going to be spent on Google, Slack, IRC, mailing lists, forums, and anything else, figuring out all the ways that thing fails and how to debug it. After a week or two, they won't be junior at it, and we'll make them give a talk about it if we deploy it."
Sometimes I still have to step in, and that's fine. But I avoid learning software things with short half-lives as much as possible, nowadays.
Of course, that makes me an out-of-touch, crusty, old fart to my juniors.
But, they are smart enough to acknowledge that my code always seems to be so much more reliable than theirs. And that's fine, too.
That's a good point, and certainly part of the problem, but it does not detract from my point.
The bottom line is that we don't understand the software in the world around us and that it would be hopeless to try. Whether that's because of static complexity or dynamic complexity seems like a separate discussion.
Changing software on the scale that humans do is kind of like changing the laws of physics every year. Great to know that we understand some past snapshot of the software stack that we built, but not relevant to living in today's world.
At least for cars and other home appliances, you can still buy older models which have no software at all and are thus relatively understandable and repairable. (Such machines often sell for surprisingly high prices even when well-used, partly for this reason.) But then you give up convenience, efficiency, safety, and other things that could be considered progress.
Of course, a lot of people will think you're a redneck or similar if you pursue that way of life, but it's certainly one way to stay away from the "complexity explosion".
What this means is that for all practical purposes AI is already here. AI, or rather the fear of AI, is about software making decisions that impact us outside of our knowledge or control.
Prior to software this was done by bureaucracy. We can see software (on the slope to AI) as disempowering in the same way.
That's a brilliant insight - thank you