
It's impossible to build a safe airliner, but we can get pretty damn close. Airline engineers know one cannot create a component or system that cannot fail. So the question then becomes, assume a system fails. Now how does the airplane survive?

With software systems, instead of demanding a perfect defense against the root password being compromised, think "if the root password is compromised, how do we prevent that from bringing it all down?"

In other words, think in terms of redundancy and isolation between systems.
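
(An illustrative sketch of that isolation idea, not taken from the comment itself: one concrete form of isolation is a service that uses root only long enough to grab a privileged resource and then drops to an unprivileged account, so compromising the process does not hand over the whole machine. The "svc-web" account name is hypothetical.)

    // Sketch only: privilege drop as one form of isolation.
    #include <pwd.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        // ... bind the privileged port here, while still root ...

        const passwd* pw = getpwnam("svc-web");          // hypothetical service user
        if (!pw) { std::perror("getpwnam"); return 1; }

        // Drop the group first, then the user, and check both calls.
        if (setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0) {
            std::perror("privilege drop failed");
            return 1;
        }

        // From here on, an attacker who takes over this process does not get root,
        // and other systems stay behind their own, separate credentials.
        std::printf("running as uid %d\n", (int)getuid());
        return 0;
    }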

And the largest piece of hubris and madness in critical systems is allowing over-the-internet updates.




But there is a big difference between airline safety and software safety. An airliner survives against the environment (PvE); a software system has to survive against hackers (PvP). If you shoot a rocket at an airliner, the airliner will fail; in that case we blame the person who shot the rocket.


> But there is a big difference between airline safety and software safety

I've worked professionally in both industries; they are not fundamentally different. Software practices can learn a lot from aviation practice, but they seem determined to spend decades rediscovering the methods the bitter, expensive way.

For example, software is still stuck in the dark ages where the idea is better training / better programmers / more punishment will prevent these sorts of failures.


> For example, software is still stuck in the dark ages where the idea is better training / better programmers / more punishment will prevent these sorts of failures.

What is your source on this? This goes against what anyone at any company where I have worked ever believed.

No-fault root cause analysis, process improvements, and inherently safer practices, languages, and libraries are what every place aimed for. I don’t even know what you might mean by punishment?


There are many, many programmers (you can see their comments right here) who fit, many of them probably despite their age, into what you could call the brogrammer/cowboy coder/lone star/rockstar developer type, and who will try to shame developers for making mistakes or present certain types of failures as inevitable: "you just need better developers".

You can frequently see them come out in Rust threads; they're generally against it, coming from C/C++. It seems a common attitude amongst low-level devs in my experience (there's a thing about "hardware" sounding "hard", which I guess makes them feel more "hardcore").

It's obviously not universal, but it's super easy to find if you search for some programming language discussions.


Okay, that criticism is legit, but they also have a point, and more importantly, they have docs, tooling, and cross-compiling experience. When you're implementing the basic C machine, or kernel, or drivers, on which every other toolchain ever conceived at some level relies for new architectures and hardware, you are operating in the most constrained setting of just about any programmer today. It is different, and you have to think differently because you're trying to make sure you're getting the foundation right.

When the docs exist and are accurate, they can somewhat hide behind "get better programmers"; when they aren't, some can be even more so, because there is nothing worse than trying to drive poorly documented hardware. It either works or it doesn't.

Signed: the QA guy amongst a bunch of dev types, who regularly points out how they do a great job of implementing the wrong thing, and helps shape process to make that harder.

The fact they come out in Rust threads has more to do with Rust's evangelist types running afoul of the long-standing love of "things that work". Somewhat in the cowboy camp's defense, none of the "no guardrails" types ever turns down a good static analyzer or test suite once you figure out how to get it smoothly integrated into their process. That's where I think Rust gets its outreach wrong.

Don't try to sell development on a brand new lang to learn and replace what they are using. Use the lessons you learn from making that lang, and improve the tooling they are familiar with. We don't have an infinite capacity to learn a new lang and library ecosystem every 6 months just to keep doing what we do. Once you get savvy enough with C and where the spec holes are, you've gotten to a point where you have insight into how things actually work, many levels more accurately than with just about any other programming toolchain, and you also enjoy one of the only languages completely divested of licensing lock-in on the planet.

There is also the point that you can't really argue against C's effectiveness. It's always the first code to be made functional on any new silicon. I'm interested to see if Rust supplants it, but I'm wary of any language that's heavily reliant on LLVM, as I'm getting more savvy about how licensing risk tends to play out in the long run.

You can't beat the immortality and ubiquity of the GPL. It is as close to an irrevocable toolbox from the public domain as you'll ever get.


> What is your source on this?

See "Trust the programmer" https://beza1e1.tuxen.de/articles/spirit_of_c.html

Also, a general belief among C++ programmers that better training is the answer to programming bugs. This belief is slowly fading, but it's got a long way to go. Scott Meyers' books on Effective C++ represent a lot of effort to educate programmers out of making mistakes. For example, from the table of contents: "Prefer consts, enums, and inlines to #defines". If C++ was an airplane, #define would simply be removed.
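
(To make that guideline concrete, here is a small C++ illustration of what "Prefer consts, enums, and inlines to #defines" buys you; the names are just for the example.)

    // Sketch only. The macro has no type, no scope, and is invisible to the
    // debugger; the const and the inline function have all three, and the inline
    // evaluates its argument exactly once.
    #include <iostream>

    #define ASPECT_RATIO 1.653                    // the style Meyers argues against

    const double kAspectRatio = 1.653;            // typed, scoped, in the symbol table

    inline int square(int x) { return x * x; }    // replaces #define SQUARE(x) ((x)*(x))

    int main() {
        int i = 2;
        // With the macro version, SQUARE(++i) would increment i twice.
        std::cout << kAspectRatio * square(++i) << '\n';
        return 0;
    }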

> I don’t even know what you might mean by punishment?

There are several calls for punishment in the comments on the article.


I think the work of the people operating a system is just as important as that of the programmer. You can build a very solid plane or piece of software and then have it fail due to being operated in the wrong fashion.

The question is whether both sides are doing their best, within reason, to mitigate issues. The programmer doing everything right while the admins forget to patch for years won't change a thing. The opposite is also true: patching or configuring correctly won't do a thing if the system is full of "built-in" holes.

It's not a stretch to think of a setup where specific conditions that define this "within reason" are established for software developers and administrators. It's what an audit should normally uncover: weaknesses in the process, points for improvement, etc. Only this time it would be in the form of general and specific guidelines that get progressively stronger as time passes. It's not a sure thing but it raises the bar enough for most ransomware attacks to become cost prohibitive for the attacker.


>If C++ was an airplane, #define would simply be removed.

So would that make D the airplane version of C++?


In my paper "The Origins of the D Programming Language" I enumerate many direct influences aircraft design has had on D.

https://dl.acm.org/doi/abs/10.1145/3386323#:~:text=The%20D%2....

BTW, I practice dual path in my personal life. If I'm doing something risky, I have a backup. For example, when I work under my car, I put the car on two sets of jackstands, even though I use stands that are rated for trucks. I'd never rely on a single rope/piton if rock climbing. I cringe when I see climbers doing that. I carry an extra coat in the car in winter, and water when driving in the desert.


Thanks for sharing. I knew some of D's history, but there was stuff in there I hadn't read before.

I like much of the way D is designed. It doesn't try to be flashy, gimmicky or different for the sake of being different. It gives you a set of practical tools and doesn't try to be too opinionated about the way they should be used. It mostly makes it hard to shoot yourself in the foot. But if you really want to you can. You gotta really try though.


That's defense in depth as applied to system design. Think of it like cleaning out a cat box, and only having bags with holes. You only need a couple bags whose holes don't line up, and you're good to go.

The simpler they are, the easier they are to learn. The easier they are to learn, and less "opinionated", the less resistance they tend to build up against adoption.

D is interesting because it seems, from my experience, that D, like Ada, has been a hypeless language. Though I haven't checked on licensing encumbrances that might be behind that.


In about 2008 I started working for SAIC, on a contract to NASA's "Enterprise Applications Competency Center". While I was waiting for my computer and all the accounts and permissions to get set up, I was sent to do a code review for a minor application written in Flash/Flex/ActionScript + Java as was popular at the time, written by one guy. Everything looked pretty decent to me, except that he'd done all of the authentication/authorization in the Flash frontend. I pointed out that anyone who could connect to the app and fake the protocol could do anything the app could do, at a minimum. He said yeah, he'd have to do something about that. It went into production the next week. He's now part of the architecture/"engineering" group.
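
(For contrast, a hedged sketch of how that check has to live on the server, whatever the frontend claims. Session, Request, and can() are invented names for illustration, not the API of the app described above.)

    // Sketch only: the Flash/Flex UI can hide buttons, but authorization must be
    // decided server-side, where an attacker who speaks the wire protocol cannot
    // edit the logic.
    #include <iostream>
    #include <stdexcept>
    #include <string>

    struct Session { std::string user; bool is_admin; };   // invented types
    struct Request { std::string action; };

    bool can(const Session& s, const std::string& action) {
        if (action == "delete_record") return s.is_admin;  // server-side policy
        return action == "view_record";
    }

    void handle(const Session& s, const Request& r) {
        if (!can(s, r.action))                              // checked on every request
            throw std::runtime_error("forbidden: " + r.action);
        std::cout << s.user << " performed " << r.action << '\n';
    }

    int main() {
        Session alice{"alice", /*is_admin=*/false};
        try { handle(alice, Request{"delete_record"}); }
        catch (const std::exception& e) { std::cout << e.what() << '\n'; }
        return 0;
    }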

All of the things you mention are great, but they don't really address the problem. You need developers who know what the issues are and are willing to do the work to fix them even though they don't add anything to the feature list. In my experience, I don't have much reason to believe that today's developers are any better about that than yesterday's. There is a lot of security cargo-culting going on, which probably does improve the situation, but there's also a lot of "bootcamp" developers without the background to know that there are issues.


The first thing I have a group of developers do in a new context is learn the existing business process, without automating anything or writing a line of code. They can dissect, name, and research any code they want from the people they are building for, but no writing until they get the business context.

You'll never build a better tool than the one that eases your own pain. Make the user's pain your own, and beautiful things happen.


That would seem to require the software industry to take responsibility.

The software industry is to responsibility roughly as surgeons are to checklists.


Not only that, but we spend billions of dollars on defense to protect those airlines from bad actors. I mean when a person blows up a bomb in an airplane, our response isn't "build bomb-proof airplanes".


You're correct.

Historically the choices were made to spend billions (and trillions) of dollars to invade countries harboring terrorists and use the situation to project power against other adversaries, advantageously control the price of oil, work trade deals, etc.

I predict the same path will be taken with cybercrime. The U.S. defense apparatus won't be giving subsidies to non-tech companies to boost security. Rather, they'll be waging war and using overlapping objectives and narratives to further other goals.


Cyberwarfare will be used to further terrible agendas (and already is) - that must be fought politically, but I am plenty jaded enough to see where that is likely to go. Unfortunately not participating in Cyberwarfare is not an option.


I disagree. Russia seems to be a large source of these crimes, and they are a bit too big to invade (if they didn't have nuclear weapons it might be possible, but only a fool would invade given that they do).

We might see some special forces go into action under cover. However, it would be assassinations done in such a way that Russia either won't know who did them or is willing to look the other way (the latter implies something diplomatic).


If sheltering hackers means war, countries might think twice about letting them operate from within their borders.

But there are other options: assassination, for instance, like Israel does with nuclear scientists.


It turns out that airplanes are fairly resistant to bombs aboard. Several attempts with smaller bombs have failed, despite causing significant damage. The cockpit door has been hardened, too.

Airliners are now pretty resistant to engine explosions, once thought to be impossible to do.

Keep in mind that a bunker will never fly.

Nobody is suggesting not going after criminals who attack software.


The cockpit door being hardened introduces its own set of problems as well, and the engine failure containment is a good point.

Though the prevailing logic on a bunker taking flight is that the engines required would be too large to be economical, which you probably factor in, Walter, but which the uninitiated in the aerospace industry tend to simplify away.


We design military aircraft to fly into warzones. They are very much PvP. We design them with various countermeasures to deal with rockets and ejector seats if those fail. Yeah, planes will be shot down and pilots may die despite these precautions, but skimping on the ejector seat because "hey they might die anyways" is totally unacceptable.


This is a great way to frame the issue.


Yes, building a safe airplane is doable. But this is not a good comparison.

Securing a company is like saying that you have to change all of the wiring in a country without impacting the power supply. ALL of it: the house wiring, the cables transporting power, everything. At once.

Security in a company is not a single system, it is a messy interaction of unknown dependencies nobody understands. And this mess runs a business.

Of course, there are plenty of things one can do, but even simple tasks such as "let's reset all 100,000 accounts to make sure the passwords are long/complex/whatever" are asking for an apocalypse.

How difficult it is becomes visible when you work in information security and have to balance "we MUST NOT be hacked" against "we MUST NOT impact the business".


> Yes, building a safe airplane is doable.

It didn't start out that way. It took a long time to figure out how.

> But this is not a good comparison.

I can't agree with that. I don't see any rationale for either airplanes or software systems being special.

> Security in a company is not a single system,

An airplane isn't, either. For example, part of airplane safety is the air traffic control system. Part is the weather forecasting system. And on and on.


> Yes, building a safe airplane is doable. It didn't start out that way.

And now only FAA/EASA etc. certified companies and individuals can build a commercial aircraft.

And they can only build the aircraft they are certified to, using the same certified components, and the same certified tools. They cannot change any aspect of the construction without another round with the authorities.

Let me know when the CIOs of listed companies are up for that kind of lifestyle for their email and word processors.


> Let me know when the CIOs of listed companies are up for that kind of lifestyle for their email and word processors.

I think you're absolutely right that this kind of rigidity is not part of our tech culture, but maybe it should be if that tech is running power grids, [oil] pipelines, and other critical infrastructure.

In summary - maybe we should spend more money so that we get systems which are reliable and resistant to this kind of attack. (_I_ think that's probably a good investment for power/transit/core network/safety systems)


"this kind of rigidity is not part of our tech culture"

Yes and no. "No" because there are best practices and bits of middleware that, although they may still get improvements over time, nevertheless receive fewer and fewer changes (and have a logarithmic-looking dynamic of development). They mature. Strongly advising things that have passed the test of time and broad-use scrutiny just makes sense, regardless of whether that may look "rigid". (Not that many implement their own doubly linked lists nowadays.) Then "yes" because our "tech culture" pool is big enough to also accommodate fashion, hype, and a whole lot of other psychosocial cans of worms...


"And they can only build the aircraft they are certified to, using the same certified components, and the same certified tools."

And that level of rigor is appropriate for the stakes that selling mass produced commercial aircraft implies. The discussion context was critical systems. But then you threw "word processors" in there. Why?


Because word processor documents have often been the vectors for attacks. And once an attack is inside your systems, there is nothing preventing the attack attached to a document from infecting and encrypting your machine or infecting your PLC and destroying your industrial equipment.


I'd say that the attack surface here is the industrial equipment's link to general computing equipment (which is expected to be less secure). The solution just can't be to secure the whole world of software that may somehow end up on general-use computers. The point is, my remark is still valid, as a discussion on critical systems got mixed up with clearly non-critical ones.


Ok, so I shouldn't have confused the issue by mentioning Stuxnet.

The point is, failing to secure those general use computers has bad consequences.


I don't think you confused the issue there at all; you forced a clarification of boundaries. The safety-critical PLC industrial controller network should be isolated from the Net. However, even with the pipeline hack, the shutdown of the PLC network was due to a compromise of billing systems, which are non-safety-critical to the immediate user population (administration) but mission-critical to the architecture of Western, market-mediated economic activity. You can't secure those systems perfectly, though we can definitely do better. The correct response in this case, however, is effective deterrence of those looking to engage in cyber offensives. Like it or not, when you can sit back outside the reach of effective enforcement measures, cause mayhem and havoc, and make a buck doing it, financial incentives pretty much ensure it will happen.

I just hope we don't take it too far. Many young and talented people in the CS and IT space cut their teeth testing the limits of legitimate access without pushing into the full on destructive regime these attackers have.

I'd hate to see things cracked down on so hard that we lose a good signal for talent because we decide that the integrity of cyber systems must be defended at all costs. However, there needs to be a much more pronounced reaction to the types of blatantly malicious activity that have been escalating for the past decade or so.


Certification/regulation is something orthogonal to the design methods used.


I disagree; mandated certification ensures that the budget required for certain design (and testing) methods is available.

And it's precisely those methods that keep the planes in the sky.

It's not orthogonal, it's a necessary prerequisite.


Dual path systems came first. Regulation came much later, it wasn't a prerequisite. Regulation didn't design airplanes, it standardized existing practice.


Imagine airplanes were built however the builders wanted: crashing from time to time, failing to start, and having people work on the wings to fix things in flight.

If this was something done for fun and without impact on people then nobody would care.

Suddenly, one Monday morning, someone says "whoa, this cannot be, you have to fix this". But it is not fixable; you have to build a new plane from scratch, or completely review the existing ones. Planes would be grounded.

Now take a software company: typically its old plane is flying by more or less a miracle (when it flies). You cannot fix it; you have to rebuild it. Either you ground the company and force them to build something new, or you will always have legacy.

The legacy is not fixable - it simply is not. You need money to redo everything and if you do not have the proper pressure then it will not happen.

Then, building a new company/software can be done the right way. This is not even difficult; I would even say that having these constraints will help overall quality. But this is new software, not a "fix" of the old one.


Um, airplanes are constantly undergoing revision and improvements and bug fixes. Only very serious ones result in grounding. Eventually, they become too expensive to upgrade and Boeing/Airbus designs a ground up replacement.

Just like software.


> And the largest piece of hubris and madness in critical systems is allowing over-the-internet updates.

What would you suggest in its place?

You'd need to replace the internet with something - postal mail, Fedex, courier deliveries, etc, or just have things that never get upgraded. Every one of those options has significant limitations, and in many countries, I'd trust SSL over postal mail every single day.

I think if you alter the wording to be "more-secure internet deliveries" then you'll have me agreeing with you, but unless I've missed something, your comment seems poorly aimed (which is odd, as your previous example of the root password is spot-on).


Note, the American postal service got its reputation for reliability among the citizenry (a reputation that has been soiled by hostile management and politics in recent years) in part because the U.S. government was willing to back it with men with guns. The Marines were tasked with seeing that the mail was delivered, or dying in the process. This was incredibly effective, to the point that even today no one considers attacking the post a realistic option, despite the withdrawal of armed forces from active involvement.

The Internet has a two-fold issue.

A) Its fundamental ideation was an interconnected network of trusted nodes, with a self-healing capability to facilitate C&C continuity in case of nuclear attack. All protocols carry that starting assumption underneath them.

There is an entirely unexplored depth of "authorization/security first" computer networking practice out there waiting to be enumerated, instead of trying to bolt-on security mechanisms to what is already built without an ideation of distrust built in from the get-go.

It's just so wildly impractical to implement, and so undesirable to at least the Western philosophical commitment to free-by-default expression, that it's not a natural thing to wrap one's head around.

B) What are you gonna do to me? I'm behind 7000 proxies in different jurisdictions that work fundamentally differently from yours and are unlikely to cooperate with your projection of power!

In short, it's a people problem, not a technical one. To the degree it is a technical one, the middle-boxes hold everything back <shakes fist>.


For critical systems, I suggest using a usb drive.

Do you really want a missile guidance system update-able over the internet? How about the auto drive system on your car? What about the code that keeps track of accounts in your bank? Don't forget the code that keeps the pipeline running!
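
(One possible shape of that, sketched under assumptions of my own: verify the image from the USB mount against a digest obtained out of band before applying it. The path and digest below are invented, and a real system would check a cryptographic signature with a vendor public key, not merely compare a hash. Uses OpenSSL's libcrypto; build with g++ verify.cpp -lcrypto.)

    // Sketch only: offline update verification for media delivered by hand.
    #include <openssl/sha.h>
    #include <fstream>
    #include <iomanip>
    #include <iostream>
    #include <iterator>
    #include <sstream>
    #include <string>
    #include <vector>

    int main() {
        const std::string image    = "/media/usb/firmware-1.2.bin";         // hypothetical
        const std::string expected = "9f86d081884c7d659a2feaa0c55ad015"
                                     "a3bf4f1b2b0b822cd15d6c15b0f00a08";    // hypothetical

        std::ifstream in(image, std::ios::binary);
        if (!in) { std::cerr << "cannot open update image\n"; return 1; }
        std::vector<unsigned char> data((std::istreambuf_iterator<char>(in)),
                                        std::istreambuf_iterator<char>());

        unsigned char md[SHA256_DIGEST_LENGTH];
        SHA256(data.data(), data.size(), md);        // one-shot SHA-256 from libcrypto

        std::ostringstream hex;
        for (unsigned char b : md)
            hex << std::hex << std::setw(2) << std::setfill('0') << int(b);

        if (hex.str() != expected) { std::cerr << "digest mismatch, refusing to apply\n"; return 1; }
        std::cout << "digest ok, safe to apply\n";
        return 0;
    }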


How do you get the USB drive to the user?


I don't think this is a valid comparison. If you are trying to compare software on a plane to an application, then nobody really attempts to hack into airplane software because it is generally well isolated from outside networks. If you are comparing the physical build of systems in a plane to software, then hacking of software is equivalent to a bird or drone running into an engine, or a laser attack, or a hijack attempt... which we know do not happen often, but if a lot of money were to be made by doing so, I'm sure the frequency would increase.


At the risk of putting words in their mouth, they are comparing the method of airplane safety: redundancy (assume X will fail and the plane needs to survive it), system solutions over individual fault (redesigning a warning indicator so pilots cannot miss it rather than blaming the individual pilots who do miss it), and a regulatory body of investigators that enforces standards and investigates failures with the aim of learning from them and improving practices. You are thinking about the specific resulting design choices rather than the system that led to them.


Yes.



