
What Should We Do to Prevent Software from Failing? - sarapeyton
https://sloanreview.mit.edu/article/what-should-we-do-to-prevent-software-from-failing/
======
efficax
The analogy between software "engineering" and structural engineering is used
a lot to point out the failures of software. I think this analogy is a
category mistake caused entirely by the misapplication of the word "engineer"
to software developers.

We have already solved large areas of common classes of bugs at the language
level, through strong static typing, memory safety and safe concurrency (Rust
is the vanguard here but there are lots of languages that tackle these
problems in effective ways).
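As a toy illustration (my example, not from the comment above), here is the kind of bug class such a language rules out at compile time: a fallible lookup returns `Option`, so the compiler forces every caller to handle absence instead of dereferencing a null.

```rust
// A lookup that can fail returns Option<u16>, not a nullable value.
// Callers cannot use the result without handling the None case, so the
// whole null-dereference bug class is rejected at compile time.
fn find_port(service: &str) -> Option<u16> {
    match service {
        "http" => Some(80),
        "https" => Some(443),
        _ => None, // absence is explicit in the type, not a hidden null
    }
}

fn main() {
    // `let p: u16 = find_port("gopher");` would not compile;
    // we must decide what to do when the lookup fails.
    let port = find_port("gopher").unwrap_or(0);
    assert_eq!(port, 0);
    assert_eq!(find_port("https"), Some(443));
}
```

The same pattern generalizes to `Result` for error handling: the failure path is part of the signature, not an afterthought.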

Institutional inertia (technical debt) is the only reason why we can't apply
these solutions to _all_ software. I work on a PHP app with code that goes
back 15 years. To convert it to Rust we'd have to invest basically the same as
the operating cost of the company. That'll never happen.

Once "safe" languages are used, there are still bugs of course. But what are
those bugs? Failures of thought, plain and simple. Structural engineers work
within the limits of physical reality and these constraints greatly limit the
range of possible creation, thus also the range of possible failure.

Software has limits too, in memory, storage and processing capacity. But
otherwise it works entirely in the realm of pure thought.

Well defined specifications that can be turned into concrete tests can solve a
lot of these problems, too. (But this is also very expensive).
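As a sketch of what "a spec turned into a concrete test" can look like (a hypothetical spec for a sort routine, not from the article), each clause of the spec becomes an executable assertion:

```rust
// Hypothetical spec: "the output is in non-decreasing order and is a
// permutation of the input". Each clause maps to one check.
fn meets_sort_spec(input: &[i32], output: &[i32]) -> bool {
    let ordered = output.windows(2).all(|w| w[0] <= w[1]);
    let mut a = input.to_vec();
    let mut b = output.to_vec();
    a.sort();
    b.sort();
    ordered && a == b // same multiset of elements, now in order
}

fn main() {
    let input = vec![3, 1, 2];
    let mut output = input.clone();
    output.sort();
    assert!(meets_sort_spec(&input, &output));
    assert!(!meets_sort_spec(&input, &[1, 2])); // a dropped element violates the spec
}
```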

But even if you have specs and tests, there is no magic formula for ensuring
your ideas are always sound. If the idea is wrong you can't even write tests
to ensure it's correct. "Correctly" functioning components can have complex,
difficult to predict side-effects when composed into complex systems.

We do have everything we need to write highly reliable software. But solving
the software failure problem is the same task as solving the erroneous
thinking problem. I don't know that we meat-sacks can "solve" that problem.

~~~
_bxg1
None of the above changes the discussion. It's a very elaborate way of saying
"writing correct software is hard". You know what's also hard to do correctly?
Designing airplanes. Building skyscrapers. It's not that their designs are
made flawless, it's that (in theory) the sheer volume of man-hours spent
poring over every minute detail of them, incorporating lessons learned from
every failure that's ever happened in the field, amounts to a very, very high
success rate.

In software the mentality is that tests, or formal specs, or even thoughtful
design, are luxuries. That it's better to just hire someone who can make the
computer do things than spend twice as much on someone who can write "good"
code and take twice as long to do so (whatever "good" means, because we have
no standard). Of course for "the Uber for X" those standards may be fine. But
for autonomous vehicles, or credit bureaus, or the nascent private space
industry, they are absolutely not.

~~~
jandrewrogers
A key difference between physical engineering and software engineering is that
the economics of over-engineering to improve safety margins are wildly
different. For physical engineering, the costs of substantially decreasing
failure risk (e.g. via increasing strength, different materials, more complex
design, etc) is approximately linear. For software engineering, that cost is
something closer to quadratic or worse.

This is intrinsic to the differing nature of the problems. Verifying the
safety of a bridge design is computationally trivial, verifying the safety of
non-trivial software is NP-hard. Safety is so inexpensive to add in physical
engineering designs that we invest a lot of effort _removing_ incidental
unnecessary safety.

~~~
avmich
Try to solve an aerospace problem with this "simple" approach. No, you don't
get to add strength willy-nilly, or just swap materials on a hunch. You're
working with safety factors which sometimes are less than 1 (come back for
references), and those physical properties of materials are notoriously
stubborn to changes. As Richard Feynman famously put it, "nature cannot be
fooled", and that extends to "water boils at 373.15K at standard conditions,
full stop; no, you're not going to change that".

Yes, you can often do more complex design. Sometimes though it costs you
centuries to choose a good combination of existing physical properties
(architecture), and sometimes it's "just" decades (Gravity Probe B). Wanna
propose a similar approach for software? :)

~~~
8note
safety factor less than 1? you're planning that the thing will break early?

~~~
avmich
Liquid fuel rocket engines sometimes could be planned to work under the
plastic deformation stresses, not elastic stresses. Given that their life-
cycle is measured in seconds, such an approach - which brings some admirable
weight savings - could produce results.

------
pg_bot
Speaking as someone who builds mission critical software, licensing is not the
answer to this problem. Occupational licensing only serves to raise barriers
to entry and crowd out competition without raising quality. To put my opinion
bluntly, Bernie Madoff had a license.

The things you should do to prevent catastrophic failure:

- Reduce your attack surface as much as possible.

- Automate your infrastructure; human input should be nonexistent.

- Write a lot of automated tests.

- Take the time to learn attack methods and actively try to break your own
systems.

- Have an external firm that will try to break your systems.

- Add additional redundancy for hot spots in the codebase.

- Code reviews are mandatory, and you should follow an internal style guide.

- Adopt a maintenance-first culture; it should take less than 48 hours for
you to be on the most recent version of all your dependencies. Bug fixes are
your #1 priority.

- Educate your customers on what your potential failure conditions are, your
recovery point objectives, and your recovery time objectives.

- If you do have a failure, fail loudly in a way that a customer can
understand, report the failure, and hopefully have a backup plan that users
know and can run through.

- Customer support is a responsibility of the engineering department.

- Learn from others and contribute back with your tooling.

You can write mission-critical code in any language (some are definitely
better than others). You can't manage a mission-critical application without
investing in great operations.

Read Google's SRE books for more practical information:
[https://landing.google.com/sre/books/](https://landing.google.com/sre/books/)

~~~
sgift
> Occupational licensing only serves to raise barriers to entry and crowd out
> competition without raising quality. To put my opinion bluntly, Bernie
> Madoff had a license.

Almost every country in the world has established, at one point or another,
some kind of licensing for various professions. I think the bar for "this is
all bullshit" is a bit higher than one bad apple, for something so many people
have found to be a good idea. Do you have some supporting arguments for your
position?

~~~
bgorman
Look at teaching in the United States. All states require teacher licensing at
public schools. However, private schools do not, and they get better outcomes.
Same thing with many unions for trades. I have yet to see any evidence that a
union electrician or plumber is better than a non-union worker; however,
unions do all they can to keep competition off the market.

~~~
pbourke
Would you be willing to be treated by a Doctor without a medical license or be
represented by an attorney who hasn’t taken or passed the bar exam?

~~~
repolfx
Me? In a world where such licenses were not mandatory, i.e. the doctor in
question wasn't actually breaking any laws? Sure.

I'd probably want some alternative evidence of competence of course, like a
long stream of happy, healthy, cured patients who could vouch for him. Or
alternatively some sort of private sector approval scheme, based on trademark
licensing. Sort of like how you can't claim a phone supports Bluetooth unless
it's been licensed and approved.

But in general I don't see any reason to assume the specific licensing regimes
in place today are so great.

------
_bxg1
"Imagine joining an engineering team...You start by meeting Mary, project
leader for a bridge in a major metropolitan area. Mary introduces you to Fred,
after you get through the fifteen security checks installed by Dave because
Dave had his sweater stolen off his desk once and Never Again. Fred only works
with wood, so you ask why he’s involved because this bridge is supposed to
allow rush-hour traffic full of cars full of mortal humans to cross a 200-foot
drop over rapids. Don’t worry, says Mary, Fred’s going to handle the walkways.
What walkways? Well Fred made a good case for walkways and they’re going to
add to the bridge’s appeal. Of course, they’ll have to be built without
railings, because there’s a strict no railings rule enforced by Phil, who’s
not an engineer. Nobody’s sure what Phil does, but it’s definitely full of
synergy and has to do with upper management, whom none of the engineers want
to deal with so they just let Phil do what he wants. Sara, meanwhile, has
found several hemorrhaging-edge paving techniques, and worked them all into
the bridge design, so you’ll have to build around each one as the bridge
progresses, since each one means different underlying support and safety
concerns. Tom and Harry have been working together for years, but have an
ongoing feud over whether to use metric or imperial measurements, and it’s
become a case of “whoever got to that part of the design first.” This has been
such a headache for the people actually screwing things together, they’ve
given up and just forced, hammered, or welded their way through the day with
whatever parts were handy. Also, the bridge was designed as a suspension
bridge, but nobody actually knew how to build a suspension bridge, so they got
halfway through it and then just added extra support columns to keep the thing
standing, but they left the suspension cables because they’re still sort of
holding up parts of the bridge. Nobody knows which parts, but everybody’s
pretty sure they’re important parts. After the introductions are made, you are
invited to come up with some new ideas, but you don’t have any because you’re
a propulsion engineer and don’t know anything about bridges.

Would you drive across this bridge? No. If it somehow got built, everybody
involved would be executed. Yet some version of this dynamic wrote every
single program you have ever used, banking software, websites, and a
ubiquitously used program that was supposed to protect information on the
internet but didn’t."

[https://www.stilldrinking.org/programming-sucks](https://www.stilldrinking.org/programming-sucks)

------
jonahss
Licensing isn't the solution here. Software changes too quickly, while
institutionalized tests, textbooks, and certifications move incredibly slowly.
Treating this industry the same as others will hopelessly slow down the
already glacial pace of government software, while innovation continues
outside the system.

Software is a different realm, as other commenters point out. When faced with
a problem in the software world, what's the solution? More software!

The accrediting bodies that regulators might want to form should focus on free
test suites and tools for gauging security: static code analyzers, automated
penetration testers, something along those lines.

Regulate the quality of the software, not the software developers. It's easier
to objectively test.

~~~
nerpderp82
The math doesn't change. High reliability software has the same quality as
high reliability knowledge, namely it has been proven logically to be correct.

------
jgeada
The problem with software is that, for almost all software, the only
specification for what it is supposed to do is discovered as the software is
written. _Nobody_ knows all the data, the data semantics, its dependencies,
what output is really required and how that is to be computed from the data.
In virtually all cases, and certainly in all software I've ever been involved
with, all of this is discovered on the fly during the development effort. We
even encode this into our processes; after all, one of the main reasons agile
and similar processes exist is the understanding that we need to iterate to
discover what the spec should be.

If a full specification is truly known up front and validated, then building
software that works correctly to that specification is a well understood
problem with well understood processes (sure, much more time consuming and
expensive than regular software engineering, but that is a different category
of issue).

But look at the Boeing 737-Max8 problem: the issue doesn't appear to have been
the implementation of the software, the problem seems to have been that the
specification itself was poorly constructed for business reasons. And now
software gets the blame? Sigh.

------
gtirloni
Civil engineers aren't asked to design a small 2-story building that can be
retrofitted as a 300-story skyscraper when the time comes. Or connect two
buildings 15km apart over the weekend.

~~~
dpc_pw
Design: "A bridge that can deploy itself on any river, in any city, any
climate, safe in any weather, handle any vehicle (even ones designed after the
bridge was deployed), with guaranteed time to cross (99th percentile under
5s), and regularly replace itself with a newer version (while people are using
it). It also has to be nuclear-attack resistant, and prevent 99% of suicides.
All built from readily available, standard components. On a $100k budget, in 3
months."

------
airbreather
Certification already exists: it's called Functional Safety Engineer, issued
by bodies like Exida and TUV.

I hold this certification. It is painfully slow to use performance-based
standards to extract users' risks and needs and then produce software designed
and tested to standards like IEC 61511 and IEC 61508.

They are basically QA standards: they encourage use of the V-model, defined
life-cycle activities, etc., in an attempt to produce critical software of an
estimated, known quality.

When you do the calcs, the equipment is usually the easier part; it is almost
always the humans that end up causing the problems. It is amazing how creative
they can be in doing a task, or in not doing one.

~~~
BeetleB
Kind of sad that out of 83 comments, only 1 mentions "functional safety".

If you want to write SW for, say, cars, ISO safety standards exist, and you
generally get your work and development audited by a 3rd party agency who will
confirm whether you've conformed to the safety standards.

Conforming to those standards is not fun. And obviously, conforming to them
doesn't mean your SW is actually "safe". If most engineers, managers, and
auditors follow the spirit of the standards, it _probably_ is safe.

You really _don 't_ want such rigorous standards for your average SW product.

~~~
airbreather
Yeah ISO 26262 is a derived standard from the parent 61508, as is 61511 for
process sector.

It is all about risk management and probabilistic outcomes: not spending all
your money on a highly visible/emotive risk while ignoring an equivalent risk
that is pervasive but has lower consequences.

A day-to-day example of how poor humans can be at intuitively allocating risk
and avoidance is the immense fear some people have of shark attacks or plane
crashes, when they are statistically at far more risk driving to the beach or
the airport, but think nothing of it. (Cars can be mighty convenient though.)

In my case, risk decisions are usually driven by the company risk matrix for
tolerable risk, the company's way of conveying the enterprise-wide appetite
for risk.

The risk matrix is also known as "the death quota" because it effectively
states the maximum the company is prepared to spend to avoid a fatality, or
multiple fatalities; but you are never popular with the client when you frame
their risk-aversion guidance in that manner, not at all.

Usual guidance is that the company wants no real increase over the risk of
harm employees already experience in general public day-to-day life, around
1x10^-5 chance of death per year in western countries.

I don't think this sort of risk is part of the day-to-day thinking of most SV
software engineers, because while the consequences of your Uber going to the
right street in the wrong suburb are undoubtedly significant to someone at the
time, they're hardly likely to be life-ending.

------
skaomatic
Only when the cost of failure exceeds the cost of quality will we see real
improvements. The company producing the software should bear this burden.

Licensure for software developers will only benefit the professional liability
insurance industry.

~~~
bcheung
We can also reduce the effort to write better code. Certain language and
tooling features make quality easier.

~~~
carlmr
I think that's the key. There was Ada, now there's Rust. And if you can have
GC on your system, there are plenty of other languages offering strong static
typing and other compile-time guarantees that beat any analyzed C++.

~~~
dnautics
Is static typing really what you want? I would think that if you want safety
against failure, you really want redundant systems that live in separate
failure domains and can coordinate in the event of failure.

~~~
carlmr
Why can't you have that with static typing? Btw I think only strong static
typing really helps (C is statically typed but allows way too many implicit
conversions).

There's also the question of what you regard as safety. Fault tolerance (like
in Elixir/Erlang), or a design for fewer faults (e.g. Ada/Rust)?

Fault tolerance is good for high availability, which can be a safety goal
(e.g. what does your airplane do when all the computers are off?). It doesn't
help you, however, if your software didn't crash but does the wrong thing
(like the Boeing).

For the second part, strong static type systems definitely help: they provide
clear documentation of what can and can't be done, and they turn as many
wrong-usage mistakes as possible into compile-time errors. That doesn't rule
everything out, but it gets you closer to the goal of correctness.

Fault tolerance is still needed. Usually you want redundancy (I think most
airplanes use 5 controllers calculating at the same time so that bit flips
from atmospheric radiation can't cause issues). And you want the system to go
into safe states (e.g. detect a fault and reinitialize; if that doesn't work,
do your best to stay operational until you've reached safety).

I think we could probably take over some of the supervision ideas from Elixir,
but combine them with strong static typing guarantees.
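A minimal sketch (mine, not the commenter's) of the core of the redundancy scheme described above: three independent channels compute the same value, and a 2-out-of-3 vote masks a single corrupted channel.

```rust
// 2-out-of-3 majority voting: a single bit-flipped channel is outvoted;
// if all three disagree, we refuse to guess and signal a safe state.
fn vote(a: u32, b: u32, c: u32) -> Option<u32> {
    if a == b || a == c {
        Some(a)
    } else if b == c {
        Some(b)
    } else {
        None // total disagreement: fail safe instead of picking blindly
    }
}

fn main() {
    assert_eq!(vote(7, 7, 7), Some(7));
    assert_eq!(vote(7, 99, 7), Some(7)); // one corrupted channel is masked
    assert_eq!(vote(1, 2, 3), None);     // all disagree: enter safe state
}
```

Real systems vote on independently computed results from separate hardware, but the decision logic is essentially this.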

~~~
dnautics
Static typing doesn't help you if your software doesn't crash but does the
wrong thing (like the Boeing). In that case not even dimensional analysis
helps you, because from my understanding that code was really stupid.

I find that if you're only documenting via types, that's a problem. You should
also document by writing documentation.

------
ozim
Nothing new under the sun... MISRA C and AUTOSAR were introduced around 1998,
so we can say standards for safe software have existed for about two decades.
You still get companies skipping best practices, guidelines, etc.

What is funny is that you get the same stuff in the construction industry.
Companies take shortcuts, architects take shortcuts, and there are buildings
and bridges collapsing.

I just don't like the dichotomy of "proper engineers: mechanical,
construction" versus "kids in the fog: software devs". There is tons of
reliable software, and there are tons of buildings that might collapse
tomorrow because of wind/temperature/vibrations that were never considered by
any builder or architect...

~~~
AnimalMuppet
Even in the mechanical world, chicken coops aren't engineered to the same
standards as skyscrapers. In software, we can build to skyscraper standards
when we need to. But most of our software is done like we're building a
chicken coop. And that's actually appropriate for some of it - your web app
probably isn't the software equivalent of a skyscraper.

~~~
ozim
I don't know how Boeing is doing their software, but the 737 Max is nowhere
near a chicken coop. I kind of believe they use something like MISRA.

Article also is not saying anything about web applications.

What I have seen with my own eyes was, for example, the airport parking garage
in Eindhoven, which is also not a chicken coop:
[https://nltimes.nl/2017/09/25/eindhoven-airport-parking-gara...](https://nltimes.nl/2017/09/25/eindhoven-airport-parking-garage-collapse-caused-construction-error-report)

On the other hand, I have never seen a collapsed chicken coop.

~~~
carlmr
The problem is that you can be nearly 100% MISRA compliant and still have shit
software.

It takes away about 90% of the footguns you have in C and C++, but the
remaining 10% are just never going to be covered by a coding standard.

Furthermore, it's not only about putting the worst language for safety in a
corset. The disagreeing AoA sensors, the lack of a warning signal, the weird
UI to disable MCAS: nothing in MISRA or similar coding standards prevents
these.

It's good old common sense that you sadly can't teach (although you could at
least pay for a design course for your software developers, or have them read
The Design of Everyday Things). Most of the design problems it describes are
the same ones that downed these two planes.

------
bcheung
Licensing is not the solution.

Here are some things that can help:

1) Limit mutations, shared state, and side effects to very small sections of
code. The majority of code should be pure functions and immutable code.

2) Tooling and IDE's should have "code coverage" to show where code is pure
and referentially transparent and where it is not.

3) Learn from mathematics / computer science. Adopt formal methods and proof
based techniques.

4) Keep code simple and understandable. Category theory has many insights on
how to better compose and structure code. Code that is in harmony with certain
laws will be more predictable, simpler, and more reusable. Constraints ==
freedom.

5) Use strong typing. And by typing I mean types that limit what values can be
and even what code can do. Types that are nullable are not sufficient for
safety. Types that only describe the data format and not the allowed
functionality are not sufficiently safe or descriptive. Dependent typing
allows for even greater safety because it can specify acceptable values
("tolerances") not just the data format of the value.

6) Adopt metrics to measure complexity and other factors that affect the
reliability of code. Once we have metrics, we can have better discussions
about different ways to write the same code and why certain ways might be
better under certain circumstances.

7a) Expand your vocabulary of concepts. Words affect the very thoughts you are
capable of thinking about (linguistic relativity). We went from machine code
to assembly to higher level languages. Each requires more concepts to know and
yet it gets simpler and less error prone each time. The more words you have
the easier it is to describe, write, and understand code. Category theory has
many new concepts that are not in common use that improve communication and
understanding. What other fields can we borrow from?

7b) Keep code short and simple to maximize understanding. More code means more
surface area for software defects. 'map' is easier to understand and less
error prone than 2 different arrays, counters, conditionals, increments, and
mutations.

8) Typing and pattern matching provide a framework where the compiler can
determine if you have handled all the cases for all possible inputs (total
function). Failure to do so is a compile time error.
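Points 5 and 8 can be sketched together (a toy example of mine, not bcheung's): an enum restricts values to the legal states, and an exhaustive `match` makes the handler a total function, with any unhandled case rejected at compile time.

```rust
// The enum limits values to the legal states (point 5); the match must
// cover every variant, so the handler is a total function (point 8).
enum ValveState {
    Open,
    Closed,
    Fault,
}

fn actuator_command(state: ValveState) -> &'static str {
    // Deleting any arm below is a compile-time error: the compiler
    // proves every possible input is handled.
    match state {
        ValveState::Open => "hold",
        ValveState::Closed => "open",
        ValveState::Fault => "shutdown",
    }
}

fn main() {
    assert_eq!(actuator_command(ValveState::Fault), "shutdown");
    assert_eq!(actuator_command(ValveState::Closed), "open");
}
```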

~~~
commandlinefan
> Licensing is not the solution. Here are some things that can help:

Maybe license people who can demonstrate that they really understand this
stuff then?

~~~
bcheung
Certification might be less heavy-handed and more flexible. It keeps control
within the industry and out of gov't, where there can be lots of other
inefficiency and conflicting interests.

Laws also tend to be very slow to change (software moves too fast). If
colleges are struggling to keep up with relevant skills how much worse will it
be for laws?

Licensing typically also creates a paywall (student loan debt) and denies
access to otherwise bright and capable people who don't have the resources to
go through a college / licensing system.

------
pnw_hazor
Hold executives and VC personally liable for the failures.

Crunch; poor office environments; PMs, sales, marketing, and so on lying to
customers/stakeholders; bad tools; technical debt; and the like are beyond the
control of software developers and contribute greatly to software failure.

------
ww520
Licensing will stifle innovation and inflate cost. Licensing is the old guild
approach to control the labor pool and control the market. It's really the
wrong approach to the problem.

------
tomxor
Without getting too philosophical about _why_ things fail (yes I'm cheating),
I reckon there are only two practical generic things you can do:

1. Make it very small (how small is subjective and language-dependent of
course).

2. Make the large part fault-tolerant and recoverable, e.g. triple redundancy
(or make the lower-level part fault-tolerant).

Leaving a third option of making perfect software in the face of immense
complexity sounds more like magic to me, I doubt any solution in this area
could be generalised.

------
djakjxnanjak
Don’t make excuses to yourself when you know there is a problem with your
code. Don’t say “well, it will only fail in these edge cases and I know not to
use it that way.” You’ll forget, or someone else will use it the wrong way and
it will break. The easiest time to fix it is when you are already working on
the code and first notice the problem, not later.

------
NateEag
I think software systems whose failure directly jeopardizes human lives should
be engineered, with that standard defined and enforced legally. Systems that
don't directly risk human life should not be so constrained.

Defining that standard and enforcing it is a huge task which could easily go
very wrong.

Businesses would of course be free to apply the "engineered" standard to any
systems they choose to. Outside of mission-critical systems operating at
gigantic scale (FAAMANG-level) I doubt it would often be profitable to do
that.

The hypothetical standard would also be less relevant when building proofs-of-
concept, so long as the POC is not usefully deployable (if it is there's risk
of someone deploying it despite it being unsafe).

In the end, most software does not put human lives directly at risk. Acting
like it does would waste resources and doom many small businesses whose profit
margins couldn't absorb the costs of genuine engineering (it could kill
several companies I have worked for).

Even software systems that do risk human lives don't do so in all subsystems -
as far as I know, painting a mural in your skyscraper's lobby requires no
engineers to be involved. Similar distinctions may be reasonable in
sufficiently-isolated components of software packages that do some life-
protection tasks. Maybe allow a formal verification to show that subsystem X
cannot impact the critical subsystems and therefore does not need the same
level of rigor?

------
zmmmmm
The problem is: most of the worst software disasters have been perpetrated by
exactly the kind of people who would get licensed under such a system, while
much of the most important software in history has been written by people who
would never get licensed (Linux, Python, etc etc). As appealing as it is to
try and solve the software quality challenge through regulation, all the
evidence seems to be that it just wouldn't work and would probably be highly
counter productive.

------
invalidOrTaken
There's only one answer: exhaustive testing and attention to detail, NASA
style. With NASA it's probably worth it. The rest of us, not so much.

Like efficax says, there's an oft-used analogy between software "engineering"
and structural engineering that I think is way overplayed. The better analogy,
IMO, is between software and _business_.

What "CEO" means is basically "authority and responsibility." We have some
guardrails called laws, but outside of that we acknowledge that "CEO'ing" is a
complex, ambiguous task requiring personal judgment, and is not reducible
beyond that.

"How careful should we be?" is a question with some interesting
characteristics:

- it's a matter of opinion. How risk-averse are the shareholders?

- it's a matter of context. On the CSS for the company promo site: eh. On the
embedded code running the flight-control system: very. But what if that stems
from MVP code that originally ran in a simulation on the company promo site?
I mean, what you really need here, is for your board members to write the
code.

I'm speaking tongue-in-cheek a bit, but I'm not sure it's _100%_ wrong, maybe
just 75%. In a very real way, the authority-responsibility-reward lineups are
badly out of whack in most organizations. Which is not to say I know what they
_should_ look like.

(Two words I need to think about more: "Software insurance")

------
naringas
ah, here we go

fast forward 25 years and we'll need a government approved license to open dev
tools in a browser.

I hope I am wrong.

~~~
brootstrap
Yeah, I think this is the "software engineers are not REAL engineers"
argument. There is a fair argument on both sides, and the article raises an
interesting point. Perhaps the root cause is how new all this tech is and how
rapidly it has spread across our world. Folks have been building things for
thousands of years, right?

We've only been building scalable docker microservice agile jira lambda
functions for a few years now. Curious to see where time takes us. Change
takes a while to come along; companies and people have been getting hacked for
many years, but I haven't heard any whispers of change.

------
Iv
I love how in this article "If industry fails to self-regulate, governments
might seize the opportunity." is considered a bad outcome.

But indeed no one forbids anyone from providing certifications or imposing
them. I actually have a diploma that says (rightly) that I know how to write
provable code and good software. Only once have I been asked to do so by my
clients.

I, personally and unoriginally, blame Microsoft for the current situation.
"Runs on Windows" is a common requirement and when it is present, you know
your software won't be held to a higher standard than Microsoft's is.

If we had a certification that, for instance, required us to be 100% certain
that a given process cannot be interrupted for a software reason, we would
need to examine Windows code or blindly trust Microsoft, rendering the first
point of the certification moot.

I know how to write realtime code where I can give hard guarantees on the time
it takes to finish a task. But it requires an explicit and transparent
scheduling system, a known IRQ system and a precise account of the number of
cycles necessary to execute a given procedure.

Maybe it can happen now, but until recently, a certification procedure that
would require an open OS would have been seen as ideological.

------
daenz
So...write a bug, lose your software license? Sounds like fun.

~~~
pron
Most building "bugs" don't result in harsh sanctions. Only catastrophic (or
potentially catastrophic) ones, done with some negligence, do.

------
Silhouette
The first question when anyone proposes licensing as a solution to anything in
this field should always be what the objective criteria will be for granting a
licence. Until there is some authority qualified to answer that question on a
credible technical basis, any legally mandated licensing scheme will just be a
tool for lawyers, "thought leaders", insurance firms and other people who
don't actually make useful software to beat up those who do.

------
cesarb
"Consider the construction industry, which has had formal standards in place
for decades."

The construction industry has existed, in some form, for centuries. It has had
time to develop these formal standards. Software is much younger, and the
"formal standards" it should follow are still in flux.

------
Animats
A sharper line between critical and non-critical software would help. Only
some software needs to work well.

------
Crinus
"All of these professionals have years of schooling and relevant work
experience and have passed rigorous certification exams. [...] To start,
coders who work on critical infrastructure should have a professional
accreditation framework that issues licenses."

"mit.edu"

Uh, huh.

~~~
Jtsummers
This is the MIT Sloan Management Review, not the university itself. The author
never attended the university (based on his LinkedIn profile), rather he
attended the University of Waterloo.

------
revskill
As I see it, without failure there'll be no improvement in software.

It's a fact. Just look at the frontend ecosystem. The next big thing is mostly
the one that "fixes" previous failures.

You can't have both "improvement" and "without failure" though.

------
pfdietz
That question was asked, but the actual question that appears to have been
answered is "What can we do to get all software development to move offshore?"

------
taylodl
I certainly don't disagree but I'd be interested in discussing the nature of
the licensing.

------
Haga
Don't try to handle all the edge cases; instead, in the else branch, give
control to the user.
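A tiny sketch of that advice (hypothetical settings, my example): handle the cases you understand explicitly, and route everything else to the user instead of silently guessing.

```rust
// Known cases are handled explicitly; the catch-all "else" surfaces the
// decision to the user rather than guessing or ignoring it.
fn apply_setting(key: &str, value: &str) -> Result<String, String> {
    match key {
        "retries" => Ok(format!("retries set to {}", value)),
        "timeout" => Ok(format!("timeout set to {}", value)),
        _ => Err(format!("unknown setting '{}': over to you", key)),
    }
}

fn main() {
    assert!(apply_setting("retries", "3").is_ok());
    assert!(apply_setting("frobnicate", "yes").is_err()); // user decides
}
```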

------
xhgdvjky
we can do better than licensing. we can have formal verification. soon

~~~
codr7
Formal verification is the future, and always will be.

The gap between what's doable and writing all software that way is epic, and
it's not at all clear to me that it's the best way forward.

Software is (potentially) about creating what didn't exist before; nailing
everything to the floor based on past experience isn't going to lead anywhere
worth going.

Maybe part of the solution is writing software that's not verifiable or
rejected. Probably, since it has to be something we didn't try yet.

------
aluminumtunes
Wouldn't developing software like airplanes take us back to something very
similar to the waterfall process?

------
thatoneuser
Nothing. Capitalism solves all problems. If not, then we'd be socialist.

