
I've found my experience designing gearboxes for Boeing has applicability to software design. For example, the fundamental idea with airplane design is not to design components that cannot fail, as that is impossible. The idea is to design the system to be tolerant of failure. Every part in the system is not "how can we make this part never fail" but "assume it failed. How does the airplane survive?"

This is a fundamental shift in viewpoint.

It seems pretty well established that making secure software is impossible. Time to pivot to designing software systems that are tolerant of inevitable security breaches.

One example of this is compartmentalization. A single breach must not have access to all the sensitive data. Another is backups must be air gapped (or put on physically read-only media) so ransomware cannot compromise them.

Compartmentalization is used in battleship design, aircraft design, spy networks, etc. Time to use it in software systems design.

P.S. Just to be clear, one still strives to design airplane parts so they won't fail, it's just that one does not rely on them never failing.

P.P.S. It's really really hard to sink a battleship as they are so compartmentalized. See the sinking of the Bismarck and the Yamato.
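
P.P.P.S. To make the compartmentalization idea concrete in software terms, here's a toy sketch (Python, using the third-party 'cryptography' package; the compartment names are made up and the scheme is illustrative only, not a design): give each data compartment its own key, ideally held by a different service, so stealing one key exposes only that compartment.

    # Toy compartmentalization sketch. Requires: pip install cryptography
    from cryptography.fernet import Fernet

    # One key per compartment, ideally held by different services/HSMs.
    compartments = {
        "payroll":   Fernet(Fernet.generate_key()),
        "customers": Fernet(Fernet.generate_key()),
        "source":    Fernet(Fernet.generate_key()),
    }

    payroll_blob = compartments["payroll"].encrypt(b"salary data")

    # A breach that steals only the "customers" key cannot read payroll data.
    try:
        compartments["customers"].decrypt(payroll_blob)
    except Exception as e:
        print("decryption failed outside the compartment:", type(e).__name__)

The point isn't the crypto; it's that no single stolen credential or key unlocks everything.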



Some questions a security professional should ask:

1. What happens when the root password is guessed by a malicious person?

2. What happens when a trusted employee is really an enemy agent?

3. What happens when we download and install a malicious update from a trusted vendor?

4. What happens when the server room burns down?

5. What happens when a malicious USB stick is plugged into our secure network?

6. What happens when our CEO's laptop gets stolen?

And so on. I deliberately wrote "when", not "if".


These hypothetical scenarios are not anchored with the language that most businesses will understand: cost.

Without providing the context of how expensive or cheap it will be to adhere to each of these best practices, it will be hard to convince those with decision-making authority to do the right thing, unless they are in a highly regulated environment to begin with.

An aircraft, on the other hand, is already very expensive to make, precisely because it carries something that is impossible to replace: human life.


> These hypothetical scenarios are not anchored with the language that most businesses will understand: cost.

I've argued this to well-placed people in two Fortune 250 companies. At the first place, the conversation led to a cleanup of the IT policies, so that they were more consistent and reasonable, but they still weren't grounded in reality. The reaction at the second place has caused me to realize that IT security -- like everything else at a company -- is just another political game of thrones. Aggressive people are trying to work their way up the ladder, and increase the size of their internal kingdoms. They use scary language to leverage fear in upper management to drive ever-larger budgets and ever-more inconvenience in the name of "security" which winds up being little more than security theater. If there's a Fortune-sized company out there which actually makes IT security decisions based on practical consideration of risk vs. cost, I'd love to know. My current belief is that, by the time you get this big, there's too much disconnection between the IT department(s) and everyone else to align the incentives.


Microsoft seems to do pretty well, at least in the sense their security is effective and still allows people to get work done. Lots of effort put into policy automation so it's easier to comply with policy than not.


The problem with IT security orgs is that they get to estimate the costs of not following their own recommendations.

Which trend towards infinity, because... well, they want the thing they're advocating for to happen.

This tends to attract toxic, political people who thrive in this sort of environment, and push out more reasonable, technical people who don't want to deal with that.

At some point, you get a critical mass of toxic IT security managers covering each other... and then everyone starts ignoring and/or hating security.

The real loss is that most IT security orgs I've seen are extremely deficient in developers. The kind of person you actually need to automate things, so that they're quicker, so that people use them, so that the organization is more secure. :(


I don't really see it that way. From my perspective it isn't toxicity as the driver, but a strong sense of user preservation. Many people I know in security value the customers more than they value the company itself. They're the antithesis of political - their inability to understand higher order company-level or product-level issues often hamstrings them because their goals aren't aligned with the company goals. Though I have a limited view - I'm sure there are many companies that are operating as you've said.

The teams I really strongly respect, for example at Square/Microsoft, manage to align user interest and company interest very well - that's a very hard thing to do and requires top to bottom investment. Microsoft's approach has been to monetize security, which makes that easier.

Both organizations also have resisted the mistake that you and I both agree most orgs make - they've invested heavily into hiring engineers to solve security problems, and you generally have to pass a coding interview even for security roles.

edit: Reflecting more, I suspect that the described behavior of hyper-political teams that have a self-preserving structure is probably something prevalent, but that I haven't had the misfortune of being involved in. I would imagine companies like that are generally toxic across orgs, and not just in IT, but probably any area where investment is forced and it isn't revenue-generating.


Clarification, in my context I'm typically talking about the IT Security and employee relationship. But the dynamics you outline still hold, although my experience has been the reverse (they value what they perceive as "the company" over employees).

> I would imagine companies like that are generally toxic across orgs, and not just in IT, but probably any area where investment is forced and it isn't revenue-generating

I'd agree this is probably a key contributing factor.

It's been my fortune / misfortune to work for a number of companies that qualify (healthcare, financial, insurance).

But it has also been true (in my much more limited experience) with healthier, IT-as-profit-center companies.

I'd hazard a better framing (than profit/cost center) might be "incentivized to improve" vs "penalized for mistakes."

Unless it's the former, it's in no one's personal interest to go above and beyond or suggest change. So you get glacial, incremental processes, and lose people who are impatient with working that way.

As you note, I think monetizing security is a key step to establishing a healthier balance.


The hardest thing to come to grips with in security is that you are dealing with large(ish) groups of people.


An aircraft carrier also has the benefit of being operated under a vastly different framework than a regular IT system: military vs. civilian. This means any inconvenience of using the system only matters if it leads to clear operational risks. But I've seen plenty of companies implementing solid (real) security measures only to see employees looking to bypass them themselves due to the inconvenience they caused, thus leaving doors open for attack.

The defense in depth / layered defense principle is probably well known in most companies, but a strong defense is usually more expensive (in every single metric) than a strong offense, and it only has to fail once. A budget to cover this is easier to justify in a military setup for the reason you mentioned: lives. A trained soldier is expensive to replace, and so is a warship. This justifies a huge budget. In the corporate world the balance is "can we expect to lose more money (image, etc.) from the projected number of hacks than we save by not implementing specific measures?".

And lastly, companies like to say how they employed "software engineers" but the reality is that they employed mostly "coders". Coders are no more software engineers than bricklayers are civil engineers. A lot of the infrastructure and software architecture is left to people who don't have the broader picture because it's assumed that the particular system is inconsequential. Turns out almost no link in a chain is inconsequential when you're faced with a very determined, very well funded attacker.


> But I've seen plenty of companies implementing solid (real) security measures only to see employees looking to bypass them themselves due to the inconvenience they caused, thus leaving doors open for attack.

The New York Times had an article recently discussing how measures to prevent Covid have to take human nature into account in order to be effective. In other words, people only have so much capacity for adhering to strict measures, so it's important to stress the really important factors (avoid prolonged indoor exposure with large groups of people) and to allow for some relatively low risk behaviors (outdoor exercise with appropriate social distancing). It also cited HIV and teen pregnancy as similar public health problems (saying "don't have sex ever" vs "limit sex to these less risky behaviors").

So, coming back to security, I think any effective security policy will have to allow employees to get their work done efficiently without undue strain. Avoid "security theatre" while still mitigating the most severe threats through training, compartmentalization, etc. Otherwise, you risk people going around the security measures so they can actually get their work done.


> military vs. civilian. This means any inconvenience of using the system only matters if it leads to clear operational risks.

The military are not immune to this either. Case in point: the 2006 Nimrod crash in Afghanistan [0], which killed all 14 of its crew. This plane was the flying equivalent of an unsecured server whose root password was 'password'.

[0] https://en.wikipedia.org/wiki/2006_Royal_Air_Force_Nimrod_cr...


Nobody is "immune" to mistakes but the framework under which the military operates raises the bar for any kind of attack or mistake simply by forcing the personnel to more consistently adhere to stricter rules regardless of the sector (in front of a keyboard or a trigger).

And coming back to the particular Nimrod crash, the findings of the inquiry suggest a chain of issues slightly more complex than your analogy suggests. Setting a root password of "password" goes way beyond poor maintenance, it's designing the plane with a crack in the wing.


> the findings of the inquiry suggest a chain of issues slightly more complex than your analogy suggests

I agree that my analogy is an oversimplification. But I think my point stands - the plane had a massive set of issues that were tolerated because up to that point the operators had got away with it by sheer luck - there hadn't been an operational disaster, just a mass of technical and maintenance issues over the years.

In this case, the flight crew as military personnel were acting at a high standard but were fatally let down by the wider organisation that allowed them to fly. The military-owned safety case was fundamentally flawed and evidence that suggested otherwise had been actively suppressed, as revealed by the Haddon-Cave enquiry which found fault with a range of military and contractor organisations and individuals.


Last I was paying attention, all the legacy aircraft carriers hadn't been upgraded from Windows XP, and had no upgrade path. That's only one of the platform's many vulnerabilities. Their purpose seems to have more to do with capital extraction than warfighting.


I was mostly thinking of the physical design of the ships (and their operation) as brought up by OP and GP. But even when it comes to IT, I can understand having a hard time fully validating a new OS on those ships. And still they take much better measures for securing those carriers overall than your average supermarket with XP cash registers.

The point, as I said below, is that the framework under which the military operate does a far better job at forcing the people to more consistently adhere to stricter rules. Which means under similar circumstances there's a good chance the military can keep a system safer than most companies.


> These hypothetical scenarios are not anchored with the language that most businesses will understand: cost.

If a CEO or other high level management doesn't have a sense of how to answer those questions, they have no business running a company in the 21st century.

There are plenty of stories of ransomware literally holding a corporation's entire business hostage. Software security is an existential threat for pretty much every business these days.


That's where risk assessments should come into play. Although not every question asked may be answered, each can still be considered if the risk is commensurate with the consequences for safety, finances, or whatever else your organization deems important.


Fantastic answer.


The name for this is "threat model".

https://en.wikipedia.org/wiki/Threat_model


Just to be blunt, here are wrong answers:

1. make the root password unguessable and change it often

2. background check employees

3. audit trusted vendor's security procedures

4. install sprinklers (!)

5. jam all USB ports with glue

6. train CEO on laptop security protocol


> 4. install sprinklers (!)

Yes, water-based sprinklers are not the best choice inside the server room proper. But that doesn't mean you can't have other automated fire suppression systems. In the days of yore, Halon systems were mostly used for this particular application. Halon has some issues, though, and has mostly been banned/replaced with cleaner alternatives. These days you see things like FM-200 used for fire suppression in these kinds of applications.

All of that said, speaking as a former firefighter, I'd take a water based sprinkler over nothing, even in that setting. Why? Because at the end of the day, human life is more valuable than a data-center full of computers and data, and if stopping a fire early saves even one life, but ruins a room full of computers, that's an acceptable tradeoff. And of course sprinklers are absolutely invaluable outside of the server-room. I don't have all the numbers memorized now, but to summarize in a succinct form, "sprinklers are wildly effective at preventing incipient stage fires from growing large, and at saving lives and property."

And as @WalterBright continues to say below - you are storing backups off-site, right? And you have a DR data-center on another continent that can be spun up in a matter of minutes by toggling a load-balancer or DNS setting somewhere, right?


I'm totally sold on fire sprinklers. They're installed in my house. Yeah, they'll wreck the room they go off in, but they'll save the rest of the house. I also read that, in the US, nobody has ever died from fire in a building with sprinklers that were not disabled.

Even counting 9-11, as the plane crash cut the water pipes.

But still, relying on sprinklers to save your IT business is a bad idea. Offsite backups are needed.


> But still, relying on sprinklers to save your IT business is a bad idea. Offsite backups are needed.

Yep, totally agreed. Resiliency comes from having offsite backups, cold/warm standby sites, testing your backups and ability to switch to the DR site, etc. The sprinklers are mainly about saving human lives.


> install sprinklers (!)

Sure, install sprinklers in a data center so they can spray water on servers and other electrical devices, in a situation where there might or might not be broken or otherwise un-isolated wires due to the fire...

What you can use instead is to flood the room with CO2 to suffocate the fire.

The problem with that is that it's also deadly to humans.

Still, e.g. for rooms with long-term data storage, it's not uncommon to have measures like that.

Besides that, there are dozens of other reasons why a data center might go down (temporarily).

So the better answer is to make sure your system is distributed over multiple data centers, i.e. eliminate any single point of failure.


Installing sprinklers in the datacenter is probably a bad idea. But around it they may pay off, since the fire may originate outside the datacenter. Inside the datacenter it'd be great to have some physical separation so not all is lost in case of a fire, and an extinguisher or whatnot can be used on the burning component without affecting the whole system. Also, installing heat sensors and monitoring those may prevent a disaster. Installing partitions or building separate server rooms could also help.


Yes, from a fire safety standpoint, more partitions in the building are generally a good thing, especially when they consist of actual rated fire-resistant firewalls (physical firewalls not the "iptables" kind). It's amazing just how effective a firewall can be. I've been to structure fires before where you can basically say in the aftermath "everything on the 'fire' side of the firewall is rubble now, and everything on the other side is completely (or almost completely) intact". Of course you can still get some smoke and water damage, and even a good firewall can be breached if the fire burns long enough. But generally speaking, they help a lot.

And even a closed door in a residential building can slow things down enough to make the difference between life and death. This is why the fire service encourages people to sleep with their bedroom doors closed and to have a smoke detector near said bedroom door.

The downside to a heavily partitioned building, especially a very large building, is that it may become a maze that is hard to exit, and lots of doors between different areas slightly increase the risk that somebody will have a brain-fart and lock/chain doors that should never be locked, or otherwise do something that effectively traps people inside. Note to everybody: never chain/lock/barricade a door that must be used to exit a building in an emergency, in such a way that it can't be opened from the inside.


Keep a fire axe under the bed to create a new doorway as necessary :-)


Plus in a pinch, critical data can be recovered from wet hard drives and SSDs (say if it didn't make it to offsite backup yet). Melted hard drives and SSDs, no.


what are some right answers?


I'm not a security professional, but I'll spend a couple minutes and make a stab at it:

1. don't store everything on the machine(s) accessible via that root password

2. don't allow any employee unfettered access to everything

3. don't allow one piece of software to have access to everything

4. do not store backups in the server room, or even in the same building

5. buy computers that do not have USB support in any form

6. full disk encryption on laptop. Minimize sensitive data on laptop. Use smallest capacity disk drives on laptop. Don't allow any laptops to have access to secure network. Issue hardwired desktops to employees. No wifi to secure network.


However, with each new measure the resources needed to implement and operate it go up, to the point where it is no longer feasible - money, people, time...


It's always a tradeoff between how much to spend on security vs how much a breach is going to cost you. But if you are going to spend it on security, spend it on things that are more likely to work, rather than on impossible things.


From a risk perspective you're advocating putting all of your eggs in one basket - perfect fault tolerance and prevention - which is a system nobody has built in the history of computing.

Neither detection nor prevention can be perfect - nothing human-designed is - rather, detection should be used to hedge yourself in situations where prevention fails.


It seems to me that they are doing the exact opposite of what you claim they are doing.

None of the mitigations that were described were aimed at preventing a breach or disaster. Instead all were designed to mitigate the damage that happens when a breach occurs.


I'm not sure what you're referring to, but my comment is to the person advocating fault-tolerant systems, segmentation and the other things as panaceas to the situation.

These are not new concepts in the security industry. In fact very much of the opposite; manifestations of them like zero trust have been one of the main buzzwords for the last ten years or so in the cyber industry.

It's one thing to sketch something on paper, and another to make it actually work when you try to apply it to the chaotic mass corporate IT usually is. I work in this industry and talk to a lot of corporations from all over the globe about their security postures. I don't see a huge number of companies who have the resources to pull off a well-segmented environment. The truth is it's expensive, quite complex to pull off, potentially disrupts business, and it's still just an internal security cost - when you present it to your boss they will ask "why are we spending all that money on firewalls then?"

Anyhow the point was that if you build your security posture on the assumption that you can successfully lock everything down and you don't need to do any monitoring internally, you're setting yourself up for that massive disruption when the stars align for the attacker and they get a free roam environment in your internals. And it happens very often, as we can see.


I did not suggest not doing monitoring internally, I suggested not relying on internal monitors. Because the SolarWinds hack had gone undetected on the security company's own servers and ran rampant for (weeks?) before it was detected.

As reported on "60 Minutes" yesterday.

The security company did not compartmentalize their own system. They relied on monitoring, which failed.

Furthermore, it was said on the segment that firmware on the hardware can be infected, so reinstalling the system software won't get rid of it. The obvious solution to that is to put the firmware in ROMs, so it cannot be electrically reprogrammed. But nobody does that.


Very nicely put. Can't agree more.


I'm always curious about solutions to 2, so what does unfettered mean in this instance? Audited? Someone always needs root or Domain Admin or whatever to get the company out of a mess, so do we just heavily audit those accounts and hope they aren't an enemy agent from Pepsi trying to get our secret formula?


People don't need root access all the time. You can construct a system whereby elevated access is granted as needed with the approval of another person or persons. Logged and time- or task-limited of course.

Inconvenient? Yes. But sometimes appropriate.
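
As a toy sketch of what "granted as needed, approved by another person, logged, and time-limited" can look like (Python, purely illustrative - names like ElevationGrant and request_elevation are invented here, not any real product's API):

    import time
    from dataclasses import dataclass

    @dataclass
    class ElevationGrant:
        requester: str
        approver: str
        scope: str            # e.g. "prod-db:admin"
        expires_at: float     # unix timestamp

        def is_valid(self):
            return time.time() < self.expires_at

    AUDIT_LOG = []            # in a real system this goes to append-only storage

    def request_elevation(requester, approver, scope, ttl_seconds=3600):
        # Two-person rule: the approver must be a different human.
        if requester == approver:
            raise PermissionError("self-approval is not allowed")
        grant = ElevationGrant(requester, approver, scope,
                               time.time() + ttl_seconds)
        AUDIT_LOG.append(("granted", requester, approver, scope, grant.expires_at))
        return grant

    def perform_privileged_action(grant, scope, action):
        # Every use is checked against scope and expiry, and logged.
        if scope != grant.scope or not grant.is_valid():
            AUDIT_LOG.append(("denied", grant.requester, scope, action))
            raise PermissionError("grant missing, expired, or out of scope")
        AUDIT_LOG.append(("performed", grant.requester, scope, action))
        # ... the actual privileged operation would happen here ...

    # Usage: alice gets one hour of prod-db admin, approved by bob.
    g = request_elevation("alice", "bob", "prod-db:admin", ttl_seconds=3600)
    perform_privileged_action(g, "prod-db:admin", "rotate credentials")

The important properties are the second approver, the expiry, and the audit log, not the particular data structures.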


No, the questions the security professional should ask are these:

1. Which resources are held by the company?

2. Which actors are interested in these resources (both benign and malicious)?

3. What could threaten the confidentiality, integrity and availability of these resources?

4. How likely are these threats?

5. What would the impact be if these threats would occur?

6. Given the likelihood and impact of each of these threats, form a method to handle these threats.

7. Execute on the plan.

Furthermore, the results of each of these steps should be documented and periodically reviewed.

That is what a security professional should ask.

An even more professional security professional should model the network of threats. For example, one disgruntled employee might have a relatively small negative impact on many different resources, but cause a large impact as a whole.
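
To make steps 4-6 concrete, here's a toy risk register (Python, purely illustrative - the threats, likelihoods and impacts are invented placeholders): risk is scored as likelihood times impact, the list is prioritized, and each entry then gets a documented handling method and periodic review.

    # Toy risk register: risk = likelihood x impact, handle the biggest first.
    # Threats, likelihoods and impacts below are invented placeholders.
    threats = [
        {"name": "ransomware via phishing",  "likelihood": 0.30, "impact": 8},
        {"name": "disgruntled insider",      "likelihood": 0.05, "impact": 9},
        {"name": "server room fire",         "likelihood": 0.01, "impact": 10},
        {"name": "stolen executive laptop",  "likelihood": 0.10, "impact": 6},
    ]

    for t in threats:
        t["risk"] = t["likelihood"] * t["impact"]

    # Step 6: prioritize; the handling method (mitigate, transfer, accept,
    # avoid) is then chosen and documented per threat, and reviewed periodically.
    for t in sorted(threats, key=lambda t: t["risk"], reverse=True):
        print(f'{t["name"]:28s} risk={t["risk"]:.2f}')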


How to defend against these:

1) detection

2) detection

3) detection

4) sprinklers

5) detection

6) detection

unfortunately, most orgs outsource their internal detection or have no capability at all.


Detection is inadequate, because detecting the open barn door after the horse left is not helpful.

> Sprinklers

That is "we can prevent the server room from being destroyed" thinking, rather than "how do we survive the server room being destroyed" I'm proposing.


It may be helpful if you have more than 1 horse.

And you are wrong in labeling it "prevention". It is a feedback mechanism. Fast feedback is essential in almost ANY scenario. Most of the things can be fixed if detected early on.

The first thing is knowing. If you don't know something, you can't act on it. To take your battleship analogy, if I remove all torpedo sensors from the ship, how long will it last, given it is compartmentalized and whatnot?


> Most

That's still focusing on making components that will not fail, rather than a system that can tolerate failure.


FWIW, I agree with the point you're making, vis-a-vis system resiliency. Ideally the system should be able to tolerate having a data center burn down. My comments above about the value of sprinklers are really addressing a separate issue, which is the immediate life hazard risk to the people working in the data center in the event of a fire. With my "firefighter hat" on, so to speak, I don't really much care about the servers and the data in the short-term if there's a fire - I just care that everybody goes home safely.


Why do we have pain receptors then? Nature sux as a designer?


For the same reason you don't have a backup heart


You actually do have organ backups.


If you have a second heart, please report to area 51 immediately


That would be Krogan, not me. There are animals with multiple hearts. People can be made to have 2 hearts too, but let me not go into that.

But we have a bunch of non-heart redundancy.


Yeah, if you ignore the various single points of failure in our circulatory, digestive and nervous systems there are plenty of redundancies.

There is no intelligence behind our design. Your body takes its current form because no mutation which would change it has yet led to a statistically greater chance of you passing on your genes. That nature designed you to have pain receptors is no more an argument for their utility than it is for the peacock's feathers.


You're suggesting detection is only possible well after the fact, when it is also possible to do it before an adversary has been able to achieve anything significant on the target. But credit where credit is due: detection is not the answer, but rather early detection is.

The sprinklers comment was made in jest.


I bought a book some years back about how to defeat burglar alarm systems, as I wanted to make my home more resistant to burglars. The book described a sophisticated system that would detect burglar entry and then automatically phone the cops.

The defeat was to chop the phone line where it entered the house, because the telephone company puts their box on the exterior of the house. (The book was printed before cell phones.) The sophisticated $$$$ detection system, defeated by the simple swing of an axe.

Relying on detection requires a perfect detection system, which is impossible.


If we're looking at how to do detection right in IT systems, the physical-world example you give doesn't really apply all that well. First, the environment is different, and second, threat detection in IT infra is a multi-stage process involving probabilities at each stage. You can't cut the wire and disable them all from one spot (or you can if you slice the cable, but then you cut your own access too - and a bunch of other collateral...).


We deal with imperfect information all the time when building complex software systems.

Let's talk dead node detection, for example, and consensus. "Perfect detection" is impossible due to possible network partition, yet we can build reliable distributed systems on top of that imperfect reality, even in the face of malicious nodes (BFT). We do this via redundancies; multi-step 'actualization' of system changes (e.g. transactions); and the ability to revert to a known stable state.

Based on that analogy, I think it reasonable to strive for security regimes that permit security failures at the component level. The computational and bandwidth loads may be excessive (ATM), but ultimately there will be systems observing systems, and a kind of 'security consensus or quorum' theory informing how to build such systems.

There is a thin layer of semantics that overlays the generalized and irreducible state transition due to input: one instance is preventing inconsistent action on data; the other is preventing insecure action on systems.
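
As a tiny illustration of the "don't trust a single detector" idea (a sketch only, assuming independent observers that each report suspicion; it is not any particular consensus protocol):

    # Minimal sketch: flag a host as compromised/dead only when a strict
    # majority of independent observers agree, so one failed or subverted
    # detector can't decide the outcome by itself. Purely illustrative.
    from collections import Counter

    def quorum_verdict(reports):
        """reports maps observer name -> 'I suspect this host' (True/False)."""
        counts = Counter(reports.values())
        return counts[True] > len(reports) // 2   # strict majority

    reports = {
        "ids-sensor-1":    True,
        "netflow-monitor": False,
        "endpoint-agent":  True,
    }
    print(quorum_verdict(reports))   # True: 2 of 3 observers agree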


You can expect a single detection system to fail, so you need to make it redundant. For example, an impossibly loud siren.

Like you said before, any solution is a mix of 2 paradigms.


Another method of defeating a sophisticated burglar alarm system is taking an axe to the power cable to the building.

How many people have a battery hooked up to the siren?


Pretty much every residential system and every commercial system has a battery backup.


but isn't focusing on detection the fundamental change in perspective required? I.E.: You're no longer assuming there is a set of steps you can do to ensure it will never happen to you because it _will_ happen to you.


Yes, but it is the opposite of what the parent comment advocated (prevention of any incidents).


The company I've founded is very much about providing better detection capabilities, but I'd say this is an oversimplification.

First of all, detection is methodologically bankrupt. We have almost no one out there saying how detection should be done with consensus - only in the last 5 years have we even started to improve here.

In my opinion, detection of attackers, which is what the industry focuses on today, is a huge waste of time and resources - it's the last step in the process that I would recommend.

I would personally say that detection should be staged as:

1. System inventory (can you attribute an IP or hostname to a device identity, a user, etc)

2. Policy enforcement (can you detect when policies change, or are violated?)

3. Unexpected behaviors - go to the people building systems - ask them what's expected, what isn't, and build rules for that, or even better, have them build and maintain the rules under your guidance.

4. Attacker behaviors - finally, spend some time building rules for attacker behaviors.

Most organizations skip straight to 4, and then you have a team of defenders who have no idea how the network they're supposed to detect is supposed to actually work. This is throwing away the greatest advantage defenders have - that they know where the attack will take place, and they know all of the stakeholders for those environments.

Here's the chief of NSA's Tailored Access Operations saying this at USENIX four years ago.

https://youtu.be/bDJb8WOJYdA?t=83

"If you really want to protect your network you really have to know your network"

None of this is as simple as "detection" - it means working with the policy teams, with your infrastructure teams, with your product teams, to better understand your environment.
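
A rough sketch of what that ordering looks like as a pipeline (illustrative Python; the inventory, policy, and rule names are invented, not any product's schema) - inventory attribution first, then policy, then owner-defined expectations, and attacker behaviors last:

    # Staged detection sketch: each event is first attributed against the
    # inventory, then checked against policy, then against what the system
    # owners said is expected, and only last against attacker-behavior rules.
    INVENTORY = {"10.0.0.5": {"host": "build-01", "owner": "ci-team"}}
    POLICY = {"allowed_outbound_ports": {443, 22}}
    EXPECTED = {"build-01": {"processes": {"bazel", "git", "python"}}}

    def triage(event):
        findings = []
        # 1. System inventory: can we even attribute this IP to a device/owner?
        asset = INVENTORY.get(event["src_ip"])
        if asset is None:
            return ["unknown asset - fix inventory before anything else"]
        # 2. Policy enforcement: is this a policy change or violation?
        if event["dst_port"] not in POLICY["allowed_outbound_ports"]:
            findings.append("policy violation: outbound port not allowed")
        # 3. Unexpected behavior, per the people who built the system.
        if event["process"] not in EXPECTED[asset["host"]]["processes"]:
            findings.append("unexpected process for this host")
        # 4. Attacker behaviors come last, once 1-3 are in place.
        if event["process"] in {"mimikatz", "psexec"}:
            findings.append("known attacker tooling")
        return findings

    print(triage({"src_ip": "10.0.0.5", "dst_port": 8443, "process": "nc"}))

The value isn't in any one rule; it's that stages 1-3 come from people who know the environment, which is exactly the defender's home-field advantage.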


Substitute all the parts in these hypotheticals with humans and the answer is constant tracking and vigilance.


I am not a security professional now, but I kind of used to be (at least one aspect of it). I'll take a run at giving answers. The caveat with these answers is that they assume security > usability > cost, and that the budget is high enough to afford the implementations. They also assume the organization is extremely paranoid and security-conscious, both good things in this area. None of this information is Classified or FOUO. All of it is pulled from publicly available best practices, or my own thoughts.

1. Computer stations use two types of fingerprinting at all times, facial recognition and typing biometrics. Also, login to the system requires password or pin entered after a card is inserted, followed by a fingerprint authentication, followed by the password of the day. Critical software/data must be accessed at an air-gapped machine inside a Faraday cage.

2. Employees only access based on what they need for their job that week, access controls are fine grained, employee access is logged, and that log goes via data diode to an otherwise air-gapped computer inside a Faraday cage.

3. Users can't download and install. Only trusted professionals can, and then only after the software is vigorously tested and approved.

4. Hot site goes fully active, personnel are immediately transitioned there, and a root cause analysis is performed to figure out just who screwed the pooch to allow the server room to burn. Repairs are made as quickly as possible by vetted personnel, then checked for security by different employees, and then checked by a third team.

5. Physically disassemble the computer to the point where you can unsolder the USB ports (some come with on-board support for USB ports but none in the case; enterprising bad actors could open the case and install their own USB port). Also, have anti-tamper cases with anti-tampering turned on after that. Have OS protections blocking any media that isn't whitelisted.

6. Step 1: A phone-home-and-wipe procedure on the drive activates if the computer boots up and an authorized user does not log in within X minutes. The disk has full disk encryption and is re-encrypted with a new password each month. Step 2: Fire the CEO unless they were mugged, or it was a K&R family situation.

The above steps are extremely expensive though, and only very large organizations will be able to afford them. For startups, I have no idea... some of them are implementable, but most aren't. Also, you have to accept that 5-10 minutes of every hour is taken up by security measures.

None of the above prevents a computer that a bad actor has physical access to being compromised, but it makes it hard enough that generally it's only going to be state-level actors that will take the trouble, and you'll likely know it's happened so you can take steps.


> Fire the CEO unless they were mugged

People don't learn from mistakes if that's the response. They'll also hide their mistakes, making things worse. You want a CEO to report the loss ASAP, not cover it up. Having a "no fault" culture encourages people to quickly report mistakes.


Just trying to understand how this would work:

>1. Computer stations use two types of fingerprinting at all times, facial recognition and typing biometrics. Also, login to the system requires password or pin entered after a card is inserted, followed by a fingerprint authentication, followed by the password of the day. Critical software/data must be accessed at an air-gapped machine inside a Faraday cage.

Is the air-gapped machine seeded with all future passwords of the day, as well as trained for all potential operators' biometrics (facial, typing, fingerprint), before being put into service?


The air-gapped machine might get passwords provided for the days in the coming week, or might get changed every day. Usually the first.


I've also worked on similar systems but only ever seen them fail to replace the system that's under someone's desk, and it's almost always because the guy coming up with the solutions is far removed from building them, using them, or from comparing them with the previous solution.

Nice way to filter $$$$$$$ through consultancies though.


Speaking as a security professional, this viewpoint already exists in software in various concepts like "defense in depth" and "zero-trust networking".

> It seems pretty well established that making secure software is impossible. Time to pivot to designing software systems that are tolerant of inevitable security breaches.

This is the nature of "zero-trust networking". More generally, we talk about "security boundaries" as the equivalent of what you'd consider "compartmentalization". For example, the virtual machine is a security boundary (infect the OS, but the VM should keep it contained so it doesn't affect the host hardware), the network firewall is a security boundary (treat everything outside as potentially hostile unless authenticated), the authentication system is one, and so on, because each one imposes substantial security barriers that you depend on to some degree, but you also model the "what if it fails?" scenario.

> One example of this is compartmentalization. A single breach must not have access to all the sensitive data. Another is backups must be air gapped (or put on physically read-only media) so ransomware cannot compromise them.

All of these things are already commonly known practice, and both data compartmentalization and airgaps have been in practice for decades.

I'm not saying that it's perfect everywhere, because it's a shitshow in general, but rather that the field has been aware of and has been implementing these practices for a long time when people have the incentives to do it.

I think there are two major differences. The first is that in cyber security the failure mode is adversarial, not coincidental, so tolerances work differently. It's less about the stress the system can take, more about how long it'll take the adversary to figure it out and how useful the exploit is. The second difference is that there isn't remotely as much market/government infrastructure around making sure that you don't screw up the security. That's both good and bad: good because it means lower costs to entry. Bad because it means plenty of people will play fast and loose.


Seems that a lot of these attacks (except for this one) are just simple social engineering: an employee is phished to get into the company VPN, and from there, it's maybe a couple more simple exploits on systems that were never meant to be exposed and then it's over. You can compartmentalize employees but it's harder to do than compartmentalizing software I think.


I think that's part of what security people mean by "zero-trust security".

Instead of building a giant moat and assuming that everyone who got past the moat is trusted, assume that everyone is untrusted by default, and build a capability system that's expressive enough that you can give everyone just enough capabilities that they can do their job without going through a bunch of pointless checks.

In practice that model is unpopular because corporations tend to screw up the latter part.
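
A toy way to see the contrast (Python, illustrative only, names made up): instead of "on the VPN means trusted", every request has to present an identity whose explicit capabilities are checked per action.

    # Toy contrast between moat-based and capability-based checks. Illustrative only.
    CAPABILITIES = {
        # each identity gets only what its job needs
        "alice":  {"payroll:read"},
        "bob":    {"payroll:read", "payroll:approve"},
        "ci-bot": {"repo:read", "artifacts:write"},
    }

    def moat_check(request):
        # The moat model: anything inside the network perimeter is trusted.
        return request["network"] == "corp-vpn"

    def zero_trust_check(request):
        # Zero trust: network location is irrelevant; the identity must hold
        # the specific capability for the specific action, every time.
        return request["action"] in CAPABILITIES.get(request["identity"], set())

    phished = {"identity": "alice", "network": "corp-vpn", "action": "payroll:approve"}
    print(moat_check(phished))        # True  - the moat waves it through
    print(zero_trust_check(phished))  # False - alice can't approve payroll

The hard organizational part is keeping that capability table fine-grained and current, which is the part companies tend to screw up.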


Yeah, honestly, I hate the truth that this compartment-based system is the best method for security. I never have the access to do my job, and it's a constant frustration to get access. Also, it makes for really uninteresting problems to solve. Instead of using something interesting to secure our systems, like cryptography, the most effective method is just phishing tests, employee training, and web form fuzzing. Cryptographic innovation is part of the solution, but at a certain business level, it's just about training.


This 1000%

WE are the weakest link! The most technically secure system would be nearly unusable by humans without enormous inconvenience. So long as software systems are built by humans, for humans, they will always be vulnerable at the interface.


Yeah I mean this is called Zero Trust Architecture/defense in depth, and it's a popular thing.

The budget and architecture changes are not popular, though.

I partially blame the security community for this as well (of which I'm a part). The common tendency to think exclusively in risk controls and absolutist statements on secure vs. insecure means the incremental improvements needed to hit eventual ZTA are difficult to even start. In short, security teams suck at intra-company sales sometimes.

That said, the rumor/article I'm pretty sure I read detailed how the SolarWinds CEO was an ex-CFO type, and shredded their "cost centers" the last few. "Cost centers" mean security teams as a rule. Their security team was tiny, just like every SaaS vendor skating by only through passing audits with important vendors and getting breached (which you can do w/o a security team).

Fwiw, for anyone at places fighting cloud migrations: cloud migrations get you several easy long jumps toward ZTA almost by default.


> In short, security teams suck at intra-company sales sometimes.

I've been at this a long time. My POV is that this is true, but doesn't matter. The leadership almost never actually cares about security. It's the correct choice for most orgs. https://hbr.org/2015/03/why-data-breaches-dont-hurt-stock-pr...

Even the most impacted companies only take a hit in the medium term (1 yr) at best. https://investorplace.com/2019/03/equifax-stock-investors-ar... Right after that article, the stock took off and is still doing well.

Leadership is sophisticated these days. They aren't ignoring security due to lack of salesmanship from their security team.


You make a good point.

I think there are two levels of security sales, if you will. One is junior//mid level to junior//mid level, and one is CISO<>CIO, and they both handle different pushes that play their own, sort of non-overlapping role in company risk management.

The CISO<>CIO thing falls right into the bucket that you're mentioning... big architectural changes need much more going for them than good salesmanship from security. If there's not a major benefit for the enterprise in non-sec areas by doing changes like ZTA, or a lot of precedent at this point about a SIEM/SOC just being something one has, it's an uphill battle.

That said, a lot of these breaches, like the Equifax admin:admin, isn't so much a "for the stock price, this is a net positive we'll run the risk for" type of decision that security loses. A lot of this is just patiently needling/selling to dev and IT teams about finally rotating a pw to something complex, for instance. That happens successfully.... attackers try another IP door on the internet and perhaps Equifax doesn't happen (using the theory that Equifax wasn't from an APT, which is of course a different situation). Of course insane change management boards exist, but in many cases it's just that easy of a fix.

My theory is even more relevant when you look at SaaS vendors, like SolarWinds, as really driving govt's innovation edge and by extension being the vector into the environments. In those cases, the security team's political capital is even stronger, as smaller companies mean much more cross-org influence is possible. Sometimes, it's just flipping "min pw complexity" as a radio button in AWS's IAM console, and no one is the wiser. When so many attacks source to these very easy wins to fix, that's where the salesmanship is lacking (or can really help) for sec.

To your point about "12 months and everyone forgets," that's not the case for the SaaS vendors working in a crowded space providing generic products. While the pool of possible vendors is shrunk by like how many SaaS vendors bother to pass FEDRAMP, and/or you stack on contractual inertia, there is still a pool. SolarWinds doesn't really provide that exceptionally unique of a product, and they got the entire Fed Govt pwned. That'll be >12 month impact for them. If it's not, I owe you a beer. This is even more pronounced of a motivator across the legion of SaaS vendors who have their first handful of serious clients, but a sec incident w/o a seriously embedded platform blows them up.


Look, you're talking about the exceptions. I'm talking about the rule. We are in violent agreement! A friendly kind of violent. :)

> flipping "min pw complexity"

I couldn't disagree more about this, but that's a particular sore point of mine. Otherwise, yeah, cloud services in general give you abilities like this. I mean, so does on-prem -- AD DS has the same (actually, far far far better) switch, but cloud is where the action is.

> selling to dev and IT teams about finally rotating a pw to something complex [...] just that easy of a fix.

Yes, and that isn't selling failure so much as security incompetence from the top. If you have a solid core, it's easy to throw switches when you see a blemish on the surface. If the core is rotten, that surface defect is not going away, even if you do catch it and even if you do fix it! The security team is only as good as the leadership.


Late, but I actually think more accurately we're talking about doing sec at > 200 headcount companies, vs <200 or more likely <100 head count companies.

I firmly agree with what you're saying, violently agree perhaps! But I think the scope of a smaller SaaS company means the sec team has an amount of technical and people agency that's sort of unheard of at the bulk of companies.

That's true, that's perhaps "exceptions," but a not unimportant number of SaaS vendors are at that headcount/company profile.


That's pretty much the concept of defense-in-depth, and breaches like SolarWinds, or others by this kind of threat actor, are so sophisticated that they do work around every single aspect of this.


> are so sophisticated

Their password was "SolarWinds123".

Everyone who gets pwnd tells a story about sophisticated state actors to make it sound like some unstoppable force has hit their impenetrable defences.


1/ The article talks about the SolarWinds hack, as the campaign that then targeted Microsoft, the DoJ and others. Not the hack "into" SolarWinds.

2/ For SolarWinds itself, the password is one tiny step along the way. I can guarantee you that having that password won't allow you in any way to deploy persistent malware on developers' machines (first you'd have to work around Windows Defender) or to get knowledge of the company's architecture, its internal repos, etc. You'd have to bypass MFA too at some point, which the attackers did.

3/ If having malware that monitors developers' Visual Studio console in realtime to inject a few lines into it, so that it gets secretly compiled into their day-to-day work, without breaking the project, while also providing C2 capabilities; if that is not "sophisticated", I'd be curious to see an example of "sophisticated" malware.


>I can guarantee you that having that password won't allow you in any way to deploy persistent malware on developers' machines (first you'd have to work around Windows Defender)

SolarWinds recommended proactively adding exceptions to your antivirus, such as Windows Defender, for their stuff [0]. They probably followed their own advice for their own systems as well. Just saying...

https://twitter.com/ffforward/status/1338785034375999491


> I can guarantee you that having that password won't allow you in any way to deploy persistent malware on developers' machines

I don't think I'll believe such guarantees about a company whose update dissemination server password for a write-privileged account is CompanyName123.

I mean, you could be correct, but still.


Yeah and often, what makes state actors so threatening is not their untold leetness but the impunity with which they can operate. Common cybercriminals usually know better than to go after certain targets, but state actors can do whatever they please and will not face legal repercussions.


Yeah the true wizardry on this/SUNBURST was not the initial breach/privesc.


I think another great example of this is turbochargers. One of the biggest issues when Chinese companies started making counterfeit turbos is that they physically looked the same/identical. It turns out though, when a Cummins/Holset turbo failed, it was in a controlled manner. When the Chinese knock-offs failed, you were probably replacing an engine block, and potentially could kill someone.

https://www.youtube.com/watch?v=Za0DieZHMKc


This is not really a helpful mnemonic -- security breaches are adversarial. Boeing airliners are not designed to keep flying to the intended destination if the cockpit is breached by hijackers.


Battleships and spy networks are about as adversarial as it gets. Even airliners give some consideration toward not being too easy to hijack or bring down. For instance the hardened cockpit doors added after 9/11.


Airliners now also give consideration to the pilots being bad actors, due to a couple incidents where a pilot decided to crash the airplane.

There's also effort at making the airplane survive malicious cargo.


> "assume it failed. How does the airplane survive?"

The very next step of this that so many people starting out forget is:

When the part has failed, and the redundant systems taken over, how will the original be repaired? If a repair does not happen soonish, then other parts will fail, and the system as a whole will fail.

Too many systems are designed to be redundant, but with no self-test, monitoring and alerting system, redundancy is useless.

In the case of the compartmentalization proposed, this means there is no point in compartmentalizing your data if nobody will do anything when a compartment is compromised.


This is already a thing, check out Zero Trust Networks: https://en.wikipedia.org/wiki/Zero_trust_networks - these systems kind of operate on the premise that the network is compromised to begin with, so what do you do then?


That's an interesting thought but I'm not sure if it can be applied to software. Part of what makes security hard seems to be that it only takes a single point of failure in a chain of dependencies to go bad. If there is a loophole in one server OS, then it's everywhere, deeply ingrained in the software. Even if you split the data, wouldn't the underlying platforms all have that flaw? Ironically, the greatest advantage of software - easy distribution - is also the cause of its greatest weakness.

What I think might be a good idea for big organizations with sensitive data is to download a portion of the web and run their own intranet and isolate the network. Maybe Google can license their search engine and deliver some of the indexed web on prem as a service. If there is nothing coming in and going out, that might be a good foundation for good defense.


> That's an interesting thought but I'm not sure if it can be applied to software. Part of what makes security hard seems that it only takes a single point of failure in a chain of dependencies to go bad.

I think this is the point. Software is about operating at the top level predominantly, packaging together stuff other people wrote using only simplified APIs which abstract away their internal complexity. So you can zoom along at light speed producing "solutions" before the other guy and make the most money the fastest and move on.

Engineering is about starting from physical principles and building a product that fits the understanding of those fundamental principles at work. The engineer generally stays with a set of fundamental domain principles their entire career (e.g. sticks with bridges as opposed to transmission lines) while programmers tend to stay at the top layers while the technologies below the surface change radically.

It doesn't have to be that way for all software, presumably. Critical software and hardware can be designed together, for example, from the bottom up. I wonder if software patents are at fault for allowing those at the top layer to capture almost all the value in computers, and commoditize the technology below.


> Engineering is about starting from physical principles and building a product that fits the understanding of those fundamental principles at work. The engineer generally stays with a set of fundamental domain principles their entire career (e.g. sticks with bridges as opposed to transmission lines) while programmers tend to stay at the top layers while the technologies below the surface change radically.

That's not how engineering works at all. The vast majority of engineering is working at a high level to take systems other people have long since perfected and slapping them together to solve new problems. If you need a solenoid valve, you don't design one, you buy one. And engineers do not stick to narrow domains - I've personally designed a seismic rated tv mount and a machine for testing helicopter flexbeams in the same day. How many bridges would need to be built for people to do nothing but bridges for 40 years?

No, "I did not care enough to read the documentation and understand what I was working with" is not a valid excuse for making something that fails catastrophically. Of course there are going to be little idiosyncrasies that you would never expect if you weren't down in the weeds, but that's true of every field, and the solution is to assume that such issues will inevitably exist and design with appropriate factors of safety and redundancy.


Your field may look vast to you but from where I'm sitting I couldn't even tell you what is different about the principles needed for tv mounts versus helicopter flexbeams. Both sound like statics class to me, presumably decades after you took it, still being used in your work. Fine the bridge guy may be able to build other structures. Though I suspect it will get increasingly difficult for them to get past the job interviews as they stray from their narrow domain. Meanwhile a software person who knows databases can create a database for practically any kind of organization in existence.

I didn't say build things from scratch, I said use fundamental principles. Your solenoid operates directly on your fluids or whatever and you presumably need to be able to understand how. Is it not possible to choose incorrectly without having the right expertise?


So when you don't know the difference between two things you assume there is no difference and not only disregard those better informed than you who claim otherwise, but try to tell them how simple their profession is?

Neither the tv mounts nor the testing machine were remotely close to statics class. For the seismic TV mounts, I had to make an economical laser cut sheet metal assembly which could be assembled in a field and still survive earthquakes. For the flexbeam tester, I had to design a machine that could survive millions of cycles while placed in an oven at high temperature to simulate the effects of 3 years of flight time in less than 1 year. To say it was just statics would be like saying software is just assigning variables.

Conversely, if you are a database person, that is an incredibly narrow skillset. Creating a database isn't a profession, it's not even a job, it's just a task. Yes a database guy can create a database for practically any kind of organization, and the structures guy can analyze a truss for any kind of organization. But the engineer who can only analyze trusses is fired.

Yes, an engineer must understand fundamental principles, but that is true of any profession. How can you set up a database without knowing the fundamentals of how a database works? How could an accountant manage money without knowing the fundamentals of transactions? How could a ditch digger dig a ditch if they don't understand the fundamentals of a shovel? Any job that requires no understanding whatsoever was automated away long ago. That said, most professions, engineers included, can treat many of these things as black boxes during day-to-day life. I don't know how any given solenoid valve is implemented, and I don't generally care beyond it being rated for my use case. Of course this can lead to problems - I once designed a big machine for scanning cars and the power cables wrapped around it made it act like a giant antenna interfering with the data cables to the cameras - that took a while to debug.

The world is a fascinating place, you would do well to learn about it instead of resting on assumptions.


> So when you don't know the difference between two things you assume there is no difference and not only disregard those better informed than you who claim otherwise, but try to tell them how simple their profession is?

Are you really snapping at me rudely over a disagreement about the differences between engineering and programming? To answer your question, no. I'm saying that it's the same technical domain.

> Yes, an engineer must understand fundamental principles, but that is true of any profession. How can you set up a database without knowing the fundamentals of how a database works?

Excellent point. Because it's at the heart of my reasoning too. Identify those fundamentals and you can apply my argument. In my prior post I noted the fundamentals in engineering are specifically physical principles. In the case of databases they are not, instead they are completely at an abstract level.


> To answer your question, no. I'm saying that it's the same technical domain.

And I am saying that these things are radically different, and thus the only way you could classify them as the same technical domain is if you are ignorant of those differences, or define domain so broadly that everything affected by physics is the same domain. You specifically refer to engineers remaining in a narrow domain and specifically gave an example of bridges and transmission lines as distinct domains, and it would be oxymoronic to talk about a narrow broad domain. You not only admitted that you have no idea what you're talking about, you used the fact that you were ignorant as evidence in support of your claim.

Going back to the example, both things employ statics. Statics is the most basic tool of engineering: it is the analysis of forces on something which isn't accelerating. There is no way to completely avoid statics for anything that physically exists. Statics is to engineering as wood is to carpenters. It is as absurd to say that two engineering tasks are in the same domain because they both use statics as it is to say that two carpentry tasks are in the same domain because they both involve wood. The engineers designing bridges and transmission lines both employ statics, as do those designing kidney dialysis filters and rocket engine combustion chambers and acoustic panels and laser cutters, all six of which could very well be done by the same engineer (I've personally been employed to do five of the above so far in my career).
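For anyone outside the field: the whole of statics rests on two equilibrium conditions for a body that isn't accelerating (written in LaTeX only for precision; this is the standard textbook statement, not anything specific to the examples above):

    % Static equilibrium: net force and net moment about any point are zero
    \sum_i \vec{F}_i = \vec{0}, \qquad \sum_i \vec{M}_i = \vec{0}

Those two lines are the shared tool; what makes each of the tasks above hard is everything layered on top of them.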

Your original claim, which I argued against, was:

> The engineer generally stays with a set of fundamental domain principles their entire career (e.g. sticks with bridges as opposed to transmission lines) while programmers tend to stay at the top layers while the technologies below the surface change radically.

Your argument had nothing to do with the difference between principles rooted in physics vs mathematics. You were saying that programmers use high level, general abstractions while engineers make more limited use of abstractions and remain highly specialized. I am saying that engineers work on a wide variety of problems over their careers, involving radically different physical principles, and that they abstract away many of the details of these various narrow domains so that they can work on the top level.


> Your argument had nothing to do with the difference between principles rooted in physics vs mathematics.

True. Where did mathematics come in? Math does not abstract away the fundamentals in the way that I am talking about. Whether you use messy coordinates or elegant tensors to describe a fundamental property, you are still working with the fundamental property.

Also, perhaps the only way to get through to someone like you is to point out that I have a doctorate in electrical engineering and teach in an EECS department, working with both engineers and programmers daily and even helping to design the curricula (FYI, a person who specializes in databases is a real thing; we have no fewer than 4 grad-level classes on them, and there are no CS classes devoted entirely to "variables" analogous to classes on statics). I have spent literally decades pondering the mindset differences between them. You should give up trying to argue me down, because you won't get there by trying to twist my own words against me.

Oh, and in EE we generally don't use statics. Pretty much ever. Though I did teach a dynamics class that included some, once. By the way, that tongue-in-cheek reference to statics was a failed attempt at using self-deprecation to tiptoe around your oversized ego. Nice job weaponizing it.


Engineers work all the time with high-level abstractions and heuristics nobody really understands. There are people working on R&D and building design tools, but then again there are also people working on algorithm theory and writing kernel drivers.

I don't think there is a fundamental difference, just a difference in degree. Software enables far more layers of abstraction and far quicker development cycles than other engineering disciplines.


You mean using experimental data rather than theory? It's still connected to the underlying science, which is the key component. It's like saying the difference between a peninsula and an island is one of degree. Yes, in a sense, but one is tethered to the land. Certainly engineers can veer off into doing software exclusively, and there are vast numbers of people who straddle fields. But if they aren't using the fundamentals they learned in engineering school (to oversimplify somewhat), you didn't need to hire an engineer.

As for people building design tools, you mean engineers? When I say "top level predominantly" I mean things like this. The top level is the highest level for the software at hand. Practically everyone uses software.

As for research, when it comes to engineering they are generally doing either applied science (of one of the hard sciences) or applied math. Certainly if you go to research conferences in CS or the right niches of engineering, you meet lots of people who are basically just applied mathematicians.


I am a mechanical engineer by trade. By "heuristics nobody really understands" I do mean just that. Some of it may be grounded in experimentation, some of it might be grounded in data sets collected by god knows whom, god knows when, and some of it might just be "do it this way because the graybeards/application manuals say so." So on and so forth.

I find that a lot of software people have a highly idealized notion of how the "traditional" engineering disciplines work, generalizing strict safety procedures found in a few niche industries like aerospace to fields where they don't exist. The truth is that working under time or budget pressure, without fully understanding what you're doing, is extremely prevalent outside of software too.


> I'm not sure if it can be applied to software

It certainly can be. But you've got to design it in from the beginning.

But the first step is pretty hard - convince programmers that writing perfect software is impossible and to stop relying on it being perfect. Fire anyone in charge of security who certifies "the system is secure". :-/


This is also taken into account in software engineering. Most software has defense in depth against single points of failure; it can recover from or cope with wrong input.

But you cannot compare them because the context is different: software is less regulated than aviation, so while anyone can write very bad software in 5 minutes and sell it, this is not possible for a plane. And not everyone can pilot a plane, while there is no mandatory licensing and training to use software.

In addition, no safety feature on a plane can prevent the pilot from crashing it willingly. Setting such an easy password and not adding additional protection (due to lack of training) is self-sabotage. So far, everything related to SolarWinds points towards bad usage of the tools.


> no safety feature on a plane can prevent the pilot from crashing it willingly.

This is no longer true. There are procedures where no crewmember is allowed to be alone in the cockpit.


Yes, that is the correct way to approach security. Preventive measures alone are not enough to secure an organization.

In the security world we refer to this concept as "Assume Breach" and, to throw in a buzzword you might hear often these days, "Zero Trust".

Joking aside, Assume Breach, Zero Trust and what I call "Homefield Advantage" are the main strategies to help secure the modern workplace in my opinion.


I served aboard a nuclear submarine as a reactor technician. The term for what you describe in the nuclear industry is "fail safe."


Yes. But I wish to point out that the Fukushima disaster happened because one failure caused a cascading sequence of failures that destroyed the plant.

The design failure was "assume the seawall would not be breached". But it was, which destroyed the backup electric generators, and the rest followed from that.

A better design would have assumed the seawall could be overtopped, and so would have put the generators on a platform. Simple, cheap, and effective.


> Every part in the system is not "how can we make this part never fail" but "assume it failed. How does the airplane survive?"

This is also the principle behind Qubes OS, a security-oriented OS based on security through isolation.


We as security professionals try to do this. You hear a lot about defence in depth and the old saying "it's not if but when". The business pushes back a lot, because they already did this one thing to stop stuff, so why do another? The unfortunate reality is that people do not want to change how they work to support security, and their managers will happily blame security for any and all hiccups in projects.


> It seems pretty well established that making secure software is impossible. Time to pivot to designing software systems that are tolerant of inevitable security breaches.

There are variations on this theme. An understudied and underused approach is extreme compartmentalization, where components are, e.g., isolated in processes with only one simple input and one simple output allowed. Take a codec, for example: if the codec code has a security bug and you manage to control the stream and inject a malicious one, the only thing you can do is change the output, which is not very useful given that you could always change the output to whatever you want if you have arbitrary control over the input stream anyway. At some point you may also want to prevent CPU hogging and limit maximum memory allocation if that primitive is available, but that kind of limit can be systematized in such an architecture.
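To make that concrete, here is a minimal sketch of that kind of process isolation (Python; the ./decode binary, the 256 MB cap, and the timeouts are placeholders of my own, not a hardened sandbox):

    import resource
    import subprocess

    def run_codec(stream: bytes, cpu_seconds: int = 5) -> bytes:
        """Run an untrusted codec in its own process: one input (stdin),
        one output (stdout), with CPU and memory ceilings."""
        def limit():
            # Cap CPU time and address space before exec'ing the codec.
            resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
            resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))

        proc = subprocess.run(
            ["./decode"],              # hypothetical codec binary
            input=stream,
            capture_output=True,
            preexec_fn=limit,          # POSIX only
            timeout=cpu_seconds * 2,   # wall-clock backstop
        )
        if proc.returncode != 0:
            # Worst case for the caller: this one output is lost or wrong.
            raise RuntimeError("codec process failed")
        return proc.stdout

A real deployment would want stronger isolation (seccomp, namespaces, a dedicated user, and so on), but even this shape gives you the "one simple input, one simple output, bounded resources" contract described above.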

So this variation makes the hypothesis that the codec can fail, but that the container is reliable; there is no sound security architecture anymore if the container is not secure (or, let's say, not simple enough to be rated secure with high confidence). So you will always have to put some root(s) of trust in your system. You can still get some benefit from mere mitigations, and sometimes they are more realistic, especially if you are trying to encapsulate legacy software, and we know a lot of software activity is management of legacy software. But that's quite weak compared to a real security architecture, where it is reasonable to strongly trust the trusted components because they are simple enough, and where hard security properties are logically derived from those reasonable hypotheses.

Now you may want to imagine Russian dolls of containers with decreasing restrictions (and then even more complex networks of components), but if you dig in, they are likely to rely on the same tech as your extremely restricted component, and when that's not the case you only have very few tech-distinct layers (say, process, then VM), so breaking out of the extremely isolated ones would likely be the most problematic case anyway. In short: you really can't do sound designs without some trusted components (or, in a network, trusted computers, etc.). Of course you can add assorted mitigations everywhere on top of that, especially if you are sure they won't decrease security.

TLDR: mere mitigations are not a security architecture, and you won't necessarily be able to stack tons of them and be sure to resist if just one fails. Some highly reliable components are still a must if you want confidence in the resulting system. Evidence: see the complex JS -> browser -> OS -> VM escape chains in capture-the-flag competitions or even real-world attacks.


Yea but airplanes still crash all the time


No, they don't. Their safety record is incredibly good, especially considering you're zipping along at 500 mph in an aluminum balloon at 30,000 feet with flaming engines and surrounded by jet fuel. They really are a triumph of engineering.


Yes, but also with 100+ years of engineering, far fewer companies/nations, and trillions (more?) thrown at the discipline. Software engineering is, what, 50 years old, tops?


> Software engineering is, what, 50 years old, tops?

I've been working in this industry for nearly 45 years now. I still see endemic vulnerability to single points of failure, and little recognition of that.

Heck, the SolarWinds hack was first discovered by a security company because it had compromised their own internal systems and gone undetected for some time.


Given that software engineering is part of aerospace engineering, some of that engineering success is shared by software.

But a key aspect of this is that one does not develop software for airplanes the same way, or with the same constraints and goals, as in other areas of software engineering.

If anything, aerospace engineering is a prime example of how software can be made more reliable by tolerating failures instead of relying on it not to fail, to come back to GP's point about failure-tolerant designs.


> aerospace engineering is a prime example of how software can be made more reliable by tolerating failures instead of relying on it not to fail

Exactly. I often have a hard time getting this point across, glad I succeeded.


Unfortunately, sophisticated threat actors are still very hard to defend against in aviation like in software.


In my day at Boeing, nobody considered that the pilot might be a bad actor. Unfortunately, that was a mistake. It turns out pilots can be bad, and now there are procedures for that.


Funny that Boeing are capable of considering that, but not a single sensor failing and how that might impact a system designed to hide the actual aerodynamics of the plane.

Boeing really aren't a good example for anything besides negligence and how to game regulators.


I'm not going to make any excuses for MCAS's reliance on a single sensor.



