
Programmers who want to change how we code before catastrophe strikes - mattrjacobs
https://www.theatlantic.com/technology/archive/2017/09/saving-the-world-from-code/540393/?single_page=true
======
valuearb
"For Lamport, a major reason today’s software is so full of bugs is that
programmers jump straight into writing code. “Architects draw detailed plans
before a brick is laid or a nail is hammered,” he wrote in an article. “But
few programmers write even a rough sketch of what their programs will do
before they start coding.”

I almost always dive in, but I almost always write my code twice. Essentially
the first round is my rough sketch, the second is written when I fully
understand the problem, which for me only occurs once I've tried to code it.

~~~
Latty
Exactly. This ignores the fact that code is a more useful blueprint for
programs than anything else.

I'm sure if architects had the ability to magically conjure building materials
out of thin air and try things in real life for free, architecture would
involve a lot more trying and a lot less planning.

Of course, I'm sure the article is talking about people just rushing into
production code without thought, and I get that that is a problem. It's just
that when people make comparisons like that, it can imply this horrendous
future where we whiteboard-program for months, which is just a terrible idea.

~~~
SiVal
I think there's a bit of a problem with the architecture metaphor in general.
It implies a limited range of things planned and built. As software eats the
world and does more and more things, a metaphor that more closely reflects the
variety of software is people telling other people to do stuff.

If you tell someone to bring you your socks, you don't need to plan for it. If
they bring the wrong socks, you just fix it. If you tell them to invade the
beaches at Normandy, you might want to work out more of the details in
advance. You can tell someone to remove a splinter or remove a brain tumor,
and _your_ part of the instructions might be roughly equivalent if you are
telling an "API" that has already been adequately told _how_ to do what you
ask.

The problem of unintended consequences of instructions has been with us far
longer than computer software. In any story of a magic genie granting three
wishes, the third wish always ended up as a version of `git reset --hard`. I love
having direct manipulation tools that simulate your proposed solution, giving
you much faster feedback. Midas with VR goggles would have quickly seen
something unintended turn to gold and canceled before committing. That's
extremely helpful.

But this isn't the ultimate solution for how to deal with software complexity.
It's a very helpful tool in some cases. Some software should still just be
coded immediately and fixed as needed (takes less time to do it again than to
create a simulator), some would benefit most from a good, debugged library
(I'd rather the robot already know how to remove the tumor than show me direct
feedback of my fumbling), some from direct manipulation tools, some from
mathematical techniques (remembering that mathematically proven software is
buggy if an errant RAM chip violates the axiom that `0=0`), some from better
testing, some from better runtime monitoring, and so on.

But as with humans' verbal instructions, there will always be leftover
unanticipated consequences due to flaws in the spec, bugs in the code, and
breakage in the implementation.

~~~
dasil003
One way the architecture metaphor breaks down is that very tiny details can
have incredibly large knock-on effects in software. This is different from
normal architecture and building engineering where it's certainly true that
there are many details that need to be carefully considered, but those things
are well understood and managed from project to project. Software on the other
hand could be doing _anything_, in a world of pure logic traversing scales
analogous to the smallest quark ranging to the entire universe. Building
physical things just doesn't deal with those scales; you only have to worry
about material properties and structures within a range of a few orders of
magnitude.

~~~
AndrewDucker
Apparently a "subtle conceptual error" can have massive consequences in
architecture.

[http://people.duke.edu/~hpgavin/cee421/citicorp1.htm](http://people.duke.edu/~hpgavin/cee421/citicorp1.htm)

------
tmnvix
> "Programmers were like chess players trying to play with a blindfold on—so
> much of their mental energy is spent just trying to picture where the pieces
> are that there’s hardly any left over to think about the game itself."

This is an insightful description of the biggest mental challenge I face when
programming.

~~~
nomel
My personal method is, every line of code that I write, I write for someone
else. If it's internal, it's always for another developer (who may not exist)
and, for the upper layers, a user (of the function/API/library) that may peek
behind the scenes trying to debug some problem.

If it's anything facing the "user", it's entirely written for that user, and I
know they're not very good at reading or writing software.

Thinking about the developer helps prevent me from being lost in complexity,
abstraction, or the general specifics of the problem. Thinking of the user
keeps the game in sight.

In all cases, I've had very "clever" solutions that are fun and challenging,
that I completely scrap for something the other developer, and myself, can
understand in 3 months.

~~~
gregmac
> for another developer (who may not exist)

You, 6 months from now (or whenever you stop touching this code base), are
effectively "another developer".

New developers don't seem to _really_ grasp this until the first time they
have to maintain their own old code. I don't think it really hits home until
the first time you experience: "What is this? Who wrote this garbage??" _runs
git blame_ "..oh, crap."

~~~
milkytron
This is why I usually throw in comments with myself as the intended recipient,
for understanding where complexity exists.

I've heard people debate over whether comments should be in well-written code,
including some that argued comments should never be used at all.

Party A: Code should be easily understandable so that comments aren't
necessary.

Party B: Comments should exist where complexity exists to save a developer's
time when determining what is and is not important to them.

Sometimes though, you just can't avoid it. The few times I had to write Perl
are prime examples.

~~~
gregmac
Obviously the worst comments are utterly obvious:

    // Get the user's name
    name = getName();

There are lots of cases where inline comments are very useful though.

As an example, most of my team isn't strong on regular expressions, so when I
use one, I'll usually put a comment explaining what it does in slightly more
detail than I normally might (to the point it would be a bit too rudimentary
to anyone well versed in regex).
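For illustration, a made-up example of the kind of comment I mean (Python; the
pattern and names are invented, not from a real codebase):

    import re

    # Matches a US phone number like "555-867-5309":
    #   ^       start of string
    #   \d{3}   three digits (area code)
    #   -       literal dash
    #   \d{3}   three digits (exchange)
    #   -       literal dash
    #   \d{4}   four digits (line number)
    #   $       end of string
    PHONE_RE = re.compile(r"^\d{3}-\d{3}-\d{4}$")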

I've also been writing lots of automation scripts for stuff running at AWS
lately, which often involves using strange AWS CLI filter syntax, jq
expressions (for JSON parsing) and some other random utilities like sed, cut,
sort, etc. Even for myself, I put fairly detailed comments, since I don't use
those on a regular basis and usage isn't always obvious anyway.

~~~
flukus
Regular expressions are the perfect use case for unit tests: they are simple
input/output pure functions. Not only do you ensure that they work, you also
provide several examples to future devs of what should and should not be
considered a match.
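For example, a minimal sketch in Python (a hypothetical ZIP-code pattern, just
to show the shape such tests take):

    import re
    import unittest

    ZIP_RE = re.compile(r"^\d{5}(-\d{4})?$")  # hypothetical US ZIP code pattern

    class TestZipRe(unittest.TestCase):
        def test_matches(self):
            # Documents for future devs what counts as a match...
            for good in ["12345", "12345-6789"]:
                self.assertTrue(ZIP_RE.match(good))

        def test_rejects(self):
            # ...and what doesn't.
            for bad in ["1234", "123456", "12345-67", "abcde"]:
                self.assertFalse(ZIP_RE.match(bad))

    if __name__ == "__main__":
        unittest.main()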

On the automation scripts I really need to take a leaf out of your book
though, I'm awfully undisciplined at commenting them and it always comes back
to bite me.

~~~
gregmac
100% agree on unit testing regexes, but spending a minute to type a detailed
comment that saves someone the extra time of navigating to and reading
several unit test cases is worth it. If someone is modifying the regex the
tests are essential, but for someone just reading the code, why not save them
the time?

------
HumanDrivenDev
A lot of it comes down to the fact that this isn't engineering, and we aren't
engineers.[0] The field has a very low barrier to entry (much lower than a
bachelors degree). The standards are low, the expectations are low, and
there's a strong anti-intellectualism streak. Don't believe me? Try talking
about "esoteric" stuff like category theory, logic programming, LISP or
writing functional specs. At the workplace most of your fellow professionals
will stare at you blankly, and even on self-selected places like here or
proggit, half the people will rush to dismiss it.

There aren't - as an example - mechanical engineering "bootcamps" because
mechanical engineering is an actual engineering field, with high standards,
real accreditation, and professionalism. We don't have that, we have middle
aged men giving talks in t-shirts and saying stuff like "ninja" and "awesome".

I think we need to face up to and rectify the fact that we aren't doing
engineering before we can advance. And yes, this requires excluding people who
aren't up to standards. We're not the equivalent of chemical engineers - we're
the equivalent of alchemists.

[0] with the caveat that stuff done in Avionics, or Medical Imaging, or
anything else with very high standards and rigorous processes could probably
be called that.

~~~
arjie
We're the last market-driven field. So most companies take the attitude of
"Take your approach. I'll see you in the market". This isn't anti-
intellectualism _or_ realism _or_ pragmatism. This is a different fitness
function for software. Bug-free programs aren't inherently good.

If I'm building X-ray scanning software I'm going to be careful. But if I'm
writing a Slack lunch bot in Coq for anything but the fun of it, I'm making
questionable choices.

You know the old saying: "Anyone can build a bridge that stands. It takes an
engineer to build a bridge that barely stands."

~~~
HumanDrivenDev
I'm not talking about anything as ambitious as making all software be formally
verified.

All I want is for the people in this industry to take it seriously, for us to
have standards as an industry, and some kind of professional certification to
exclude those that don't know the basics. Software is rife with cowboys and it
harms the image of those of us who actually care.

------
carlmr
I agree with the premise, but I'm unsatisfied with the solutions offered. I've
worked on both C code and model-based designs in Simulink and other more
purpose-built tools for safety-critical embedded systems.

First off, I can't agree with the notion that programmers don't care about the
system. They do. A lot. Especially the good ones. You can't do any meaningful
work without caring about the system you're working on. But like in any
profession you have some useless people.

Second, on MBD: in my opinion it leads to much more complex code, especially
because the people writing the code aren't coders anymore, or don't see
themselves as such. You say the people crafting the requirements are now
speaking the same language as the coders? I say we've now siloed the people
working on requirements from the actual coders, which leads to exactly this:
the coders have no clue about the requirements, and the requirements engineers
have no clue what their model-based changes do to the software. MBD is the
worst vehicle I've seen for making spaghetti code mainstream.

MBD promotes spaghetti simply because it's so much easier to tack on more and
more complexity without a serious understanding of what it does to code.

Another thing is that I find MBD often harder to read than code. I can't
really point to hard evidence, but I have a good analogy here in terms of UX.
If code is like Google, MBD is like Bing's 2D tiling. It's much easier for a
human to parse a list of statements than statements that are scattered all over
the place. I think if we did meaningful research here, we'd get the same result
for MBD vs code.

We still shouldn't give up on looking for better ways to code, but I think the
better ways are easier-to-use programming languages that preserve the
general-purpose nature of programming, better interactive environments (like
F# Interactive, the Python REPL, etc.), helpful syntax highlighting and better
code annotation systems. Maybe live rendering of markdown and explanatory
pictures in the IDE would be a helpful first step.

~~~
humanrebar
> Another thing is that I find MBD often harder to read than code. I can't
> really point to hard evidence, but I have a good analogy here in terms of
> UX.

It _is_ harder to read. Both for humans and for programs.

In code, if you see a mysterious call to "flushData", there are standard ways
in each language to discover exactly where the definition of that logic is. In
contrast, what is the standard way to discover what a circle means? A dotted
arrow?

OK. Maybe your IDE has a right-click "go to shape definition" option. But
that's not a language feature. That's a tool feature.

So you don't really have a language spec. You have a tool spec. So the problem
is that the language is hard to read because it's not a concrete language.
It's a nebulous implementation detail of a tool.

------
vtange
So we need to write more code to make fancy Photoshop-like editors for the
average-Joe programmer who can't see the big picture? That just adds more to
the code issue - even Photoshop has bugs, you know.

The fact that "Few programmers write even a rough sketch of what their
programs will do before they start coding" is a soft-skills/experience issue
that isn't specific to software. I too can use Photoshop to draw logos, maps,
and diagrams, but I'm pretty sure someone out there who uses Photoshop
professionally knows a lot more tricks within Photoshop and sees the bigger
picture of what they want to create before starting. I generally free-hand my
drawings, as opposed to the structured outlining most professional artists do.

~~~
haskellandchill
No, we need to use logic and set theory and provide tools for visualizing the
implications of our rules and checking correctness of desired properties.
There's a strong history and lots of good people working on these things but
it's tough to get our message out to working programmers. This article helps
but based on the comments we've got a lot of perception work to do :)

~~~
Bartweiss
There are definitely people making a real effort here, and I appreciate you.
But I can't shake the feeling that the reason your task is so hard is that
_everyone_ before you has been selling snake oil. "Visual programming",
"human-readable software languages" and so on are all just ways of saying
"crippled tools".

It's not an accident that the examples in the article were WYSIWYG editors,
Photoshop, Squarespace, and Mario. All of those things flatten neatly into two
dimensions, and their terminal form is visual. The visualized code _is_ the
product.

Meanwhile similar initiatives for software in general are almost always
nonsense. There was a pretty compelling TED talk* about visual tools for
cybersecurity, shared by lots of people I know. Only one problem: none of the
fancy tools showcased work, or will work. They're operating backwards from a
known answer. Like most programming tools, they collapse right where they're
needed most, stopping at the same edge cases that cause these problems.

More broadly, it seems like the existing tools for known-safe software run
exactly the opposite direction from Bret Victor's vision. Correctness proofs,
exhaustive test suites, reversible debuggers, and so on are far from non-code,
but they're what we use where reliability matters most.

I can appreciate that "code everything like the space shuttle" is hopeless,
and I would love to see breakthroughs on easier correctness checking. But
right now, it does feel like an attempt to push tools that won't work when
they're needed.

*TEDx, but selected by TED for special featuring: [https://www.ted.com/talks/chris_domas_the_1s_and_0s_behind_c...](https://www.ted.com/talks/chris_domas_the_1s_and_0s_behind_cyber_warfare)

~~~
haskellandchill
Yeah I'm not a fan of visual programming, but visualizing effects of programs
to help with reasoning, I'm a fan of that. Proofs are my approach, but as
humans we don't deal well with proofs as text, and we have things like sequent
calculus and semantic tableaux that are amazing aids to reasoning.

~~~
catnaroek
Humans can read and write proofs just fine, if they are taught how to.

~~~
wolfgke
The problem is that the proofs that one finds in typical math papers are very
different from the proofs for computer programs.

Proofs in math papers are "mostly right" about rather complicated facts. This
means that what is shown is highly non-trivial, but when an error is found, it
usually does not matter, since it is usually easy to fix the hole in the proof.
The reason seems to be (but this is my personal opinion) that the typical
things mathematicians love to write proofs about have a high level of
redundancy for this kind of error.

Proofs for computer programs, by contrast, prove statements that are rather
trivial and obvious, but very subtle in the edge cases. Often a non-formalized
proof is "trivially correct" when a human skims it, but is wrong for very,
very subtle reasons. Thus there is typically no way around formalizing it -
which, with today's tools, is very tedious and boring.

~~~
catnaroek
> The reason seems to be (but this is my personal opinion) that the typical
> things mathematicians love to write proofs about have a high level of
> redundancy for this kind of error.

Rather than “redundancy”, it's a matter of having nice (algebraic, geometric,
whatever) structure. Of course, a pure mathematician has more freedom than a
programmer to decide what kinds of structures he wants to work with.

> statements that are rather trivial and obvious, but very subtle in the edge
> cases.

This is a contradiction in terms. If it seems “trivial and obvious”, but is
actually “subtle in the edge cases”, then you are underestimating its
complexity.

> Often a non-formalized proof is "trivially correct" when a human skims
> it, but is wrong for very, very subtle reasons.

Then you need to roll up your sleeves and actually prove things, not just skim
through purported proofs.

------
drawkbox
In many cases programmers write code badly due to time constraints, or because
really disconnected teams are led by business, not engineering.

Rarely is budget allotted for quality code; as a coder you have to fight for
it, to your own detriment internally on timelines/shipping.

Engineer led companies, or companies that value engineering as a main decision
maker in the company usually fare better with issues like this and are already
smart about design, architecture, standards, security, reliability,
interoperability, user experience and more.

~~~
maerF0x0
Even given good amounts of time, engineers often skip tools that could catch
some bugs. Examples: TLA+, fuzzers, full code coverage, etc.
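E.g., a tiny sketch of property-based testing (one flavor of fuzzing) in
Python with the `hypothesis` library; `slug()` is a made-up function under
test, and the property checked is my own example:

    from hypothesis import given, strategies as st

    def slug(s: str) -> str:
        # Hypothetical function under test.
        return "-".join(s.lower().split())

    @given(st.text())                    # hypothesis generates adversarial strings
    def test_slug_is_idempotent(s):
        assert slug(slug(s)) == slug(s)  # slugifying twice == slugifying once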

Sadly all things are economic, so we apply the tools where the cost of a bug
is high. In the case of 911, it's very high but underfunded. In the case of
some consumer app, it's (presumably) very low...

------
brfox
The article didn't mention it, but I think that Excel is a great example of
where "the masses" have learned to do programming. It is visual and the
effects of changes are immediate.

~~~
nitwit005
My uncle worked at a very large corporation that couldn't update to a later
version of Excel because they relied on some spreadsheet to do their taxes,
and some minor difference between versions broke the spreadsheet. No one could
reverse engineer what the spreadsheet did. With a complex enough sheet, every
cell becomes a separate function with no documentation.

That's unfortunately a pattern you see with these efforts to make things
simpler and more visual. They're great tools, until it gets too complicated,
and then you find you'd be better off just writing code.

~~~
brfox
I totally agree. I just wonder if there is some way to take that open, visual
style of programming which so many people do with Excel and make it more robust.
Well, actually, lots of people are working on making a better spreadsheet or
putting the table data into a database and making interactive plots, so maybe
that's not the angle here. Maybe we can figure out how to make our IDEs more
Excel-like in some ways. Not for us regular programmers, but for people who'd
like to do serious programming in their subject/domain of expertise but don't
want to just do it in Excel with VB.

~~~
flukus
For the things Excel is used for, I find a few Unix tools and piping between
them can achieve a similar effect yet still be maintainable, debuggable and
repeatable. In lieu of convincing the masses to work this way, I wonder if
visual tools could take the best parts? Instead of updating cells on sheets
(global variables), have a UI that allows users to break the work down into a
series of steps with an input and an output, much like a makefile. Then it
would be possible to step through the script, inspect the input and output for
each step and view the final result.
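A rough sketch of that idea in Python (toy steps, invented for illustration):

    # A pipeline as named steps, each a pure input -> output function,
    # so every intermediate result can be inspected like a makefile target.
    pipeline = [
        ("parse",  lambda text: [line.split(",") for line in text.splitlines()]),
        ("filter", lambda rows: [r for r in rows if r[1] == "2017"]),
        ("count",  lambda rows: len(rows)),
    ]

    data = "alpha,2017\nbeta,2016\ngamma,2017"
    for name, step in pipeline:
        data = step(data)
        print(name, "->", data)   # step through, inspect each step's output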

------
DrScump

      “Software engineers don’t understand the problem they’re trying to solve, and don’t care to.”
    

In environments where management judges by (and _is_ judged by) other metrics,
this result is inevitable.

~~~
jasonmaydie
It's like blaming the assembly line worker for a badly designed car.

Software engineers don't solve problems, they implement solutions someone else
thinks will solve a problem.

~~~
Jtsummers
The analogy would be more appropriate if you said programmer and not software
engineer. Anyone calling themselves a software engineer should be providing
input into the design of the thing they're producing. Their role is very
explicitly managing requirements and codifying them in, well, code. Either
directly doing the programming or managing those who do.

~~~
Epenthesis
Are there companies that actually distinguish between these roles? For all the
ones I've ever worked at, "software engineer" and "programmer" are synonyms.

(I'm aware that there's been some effort to extend the legal significance of
the word "engineer" to the software industry, but I'm curious if that
difference is actually in practice anywhere today)

~~~
Clubber
I dunno, it's just a title game. I would guess it's because programmers in the
90s simultaneously had massive hubris and insecurity issues and wanted to be
just as respected in the community as actual engineers, but who knows.

Engineer evokes a sense of design, but so does programmer. When I started,
there was no job where you just coded straight pseudocode specs. Business
described the problem the way business does, and you had to convert that into
a computer solution. Within that solution, different developers would own
various parts of it.

I would say the entire process of having coders who design systems and coders
that just write code is inherently broken. Businesses want to think of
development as prod-ops, but it's more like book publishing. I'm of the mind
that if a coder designs a system in enough detail to write pseudocode, he
should just write the application.

The better approach is have an architect/lead model subsystems and a general
skeleton of the application, then have different teams implement the
subsystems. Clearly defined interfaces on all inter-subsystem communications.

------
tasuki
I'm surprised understanding the domain hasn't been mentioned. Whether I am
developing for someone else or for myself, it turns out
misunderstanding/misrepresenting the domain is the most common source of
trouble.

If your understanding of the domain isn't thorough, is TLA+ going to be much
help?

~~~
hwayne
It helps in two ways:

1) You have to specify your system, right? With TLA+, you can't just wave your
hands and say "okay this part does something, I guess." You have to force
yourself to understand what, exactly, you want your system to do and what you
want out of it.

2) Most systems have edge cases, side effects, and race conditions. Are you
sure your design is robust against them? You might think you have good
arguments for that, but wouldn't it be better to rigorously _check_?

Tests and types and stuff help you find bugs in your implementation. TLA+
helps you find bugs in your blueprints.
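To make that concrete, here is a toy illustration in Python (not TLA+, and
much cruder) of what a model checker does: exhaustively explore every
interleaving of a design and check an invariant. Two processes each do a
non-atomic read/increment/write:

    # Enumerate all interleavings that keep each process's ops in order,
    # and flag any schedule violating the invariant "both increments land".
    from itertools import permutations

    def run(schedule):
        x = 0
        local = {}
        for pid, op in schedule:
            if op == "read":
                local[pid] = x          # read shared x into a local
            else:                       # "write"
                x = local[pid] + 1      # write back local + 1
        return x

    ops = [(1, "read"), (1, "write"), (2, "read"), (2, "write")]
    schedules = {s for s in permutations(ops)
                 if s.index((1, "read")) < s.index((1, "write"))
                 and s.index((2, "read")) < s.index((2, "write"))}
    for s in schedules:
        if run(s) != 2:
            print("lost update under:", s)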

~~~
tasuki
Good points!

Writing unit tests before code can help avoid mistakes in the interface
design.

Perhaps similarly, writing formal specification could expose the holes in your
domain understanding.

------
Karrot_Kream
Our company uses memcached to store short-lived pieces of user-specific data
for one of our (web) products. The way this was architected, any time a user
visited a page, a request would be made to memcached for that user to check
whether memcached had data, fetch the data, and then clear the cache.

This feature had always been a bit buggy, but then we decided to start adding
more things to this user-specific data store. And it blew up. Data we wanted
to store for the user wasn't being stored at all, and data was being fetched
many requests after it should have been fetched. Sometimes stale user data was
being fetched days after it should have been cleared, and the data was not
being cleared.

After being pulled into the war room for this, I started looking into the code
and architecture of the issue and was appalled. Code for the feature had been
designed piecemeal, tests were nonexistent, and of course only certain paths
through the code had ever been exercised. The whole thing was a concurrency
nightmare and would not hold up to concurrent reads/writes.
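To give a flavor of the kind of race involved (a hypothetical reconstruction
in Python, not our actual code; `cache` stands for any memcached client with
get/delete): check, fetch, and clear were separate cache calls, so two
concurrent requests could interleave between any of them:

    def pop_user_data(cache, user_id):
        key = "user:%s" % user_id
        if cache.get(key) is not None:   # 1. check
            data = cache.get(key)        # 2. fetch (value may have changed, or vanished)
            cache.delete(key)            # 3. clear (may delete another request's newer write)
            return data
        return None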

Rather than sit there and make sense of it all (I started sketching it out on
paper because it was so convoluted), I ended up writing the spec out in TLA+.
Just forcing myself to write the spec out made me consult the implementation
dozens of times, to verify the exact behavior in a way that TLA+ could model
it. And then after I did that, it was obvious that the code was broken.

So in my TLA+ model I shored up what I thought were the problems, ran the
model checker, and was expecting a fix. Nope. The model checker found another
bug. I tried a different strategy, and it found another bug. I iterated with
the model checker dozens of times, and slowly changed the model entirely,
making it simpler and clearer to understand. Finally the model checker
couldn't find any bugs. I converted the model to real code and, after some
code review, I shipped the fix. It worked.

I wish more people would spec their code with something like TLA+, but as I've
seen in my own teams, the mantra is "ship first, think later".

------
bsaul
Great article, but it leaves me with one question: what's the difference
between good code-generating systems (the ones mentioned, that you'd use to
build critical systems) and the horrors we saw back in the day with
WYSIWYG HTML editors such as Dreamweaver (to the point where nobody today
would dare to write an HTML website with anything other than a text editor)?

The second problem seems a lot easier to me, and yet it is where WYSIWYG
failed spectacularly.

~~~
Animats
What went wrong with Visual Basic?

(HTML/CSS/Javascript was botched so badly that Dreamweaver-type editors no
longer work. This is embarrassing.)

~~~
david927
_What went wrong with Visual Basic?_

The language was simple enough for beginners but not sophisticated enough to
scale for large projects. All too often you would have a project that started
in VB as a proof of concept which then extended to become the actual system,
and then as that system grew it started to collapse under its own weight. The
conversion to the CLR fixed a lot of that, but now it isn't really for
beginners anymore.

_HTML/CSS/JavaScript was botched so badly_

That combination is atrocious out-of-the-box. There was no "botching" it, but
rather (again) scaling to larger problems showed that the foundation was
always made of sand.

~~~
justherefortart
Visual Basic was fantastic for kicking out a UI. If you needed anything
larger, all you had to do was build DLLs and reference them. So easy front end
and easy back end. The best of both worlds.

~~~
rurban
Visual Basic was also fantastic for kicking out multithreaded server code. The
problem was always libraries and the lack of language reflection.

------
chroem-
The dangerous unreliability of software is an accountability problem, not a
technology problem. Engineering disciplines are regulated so that engineers
are held personally and even criminally responsible if they endanger property
or lives. Software, being the newcomer, has no such regulation. Instead we
have people trying to claim that software deserves a free pass from due
diligence because "software is hard," when in reality other engineering
disciplines deal with unexpected failure modes all the time. The only
difference is that these other engineers are motivated to be significantly
more thorough about their jobs.

~~~
Bartweiss
> _The only difference is that these other engineers are motivated to be
> significantly more thorough about their jobs._

Hang on. Sure, all engineering has hurdles. But how many engineering projects
face regular enemy action? The last time somebody actively tried to compromise
a system I was building was two weeks ago. How many engineers live like that?

We hold civil engineers responsible when bridges collapse unprompted. We also
hold them responsible if their bridges fail from some minor, anticipated harm
like vandalism or a car crash. We don't hold them responsible for building
bridges _that can be blown up._

If we want to hold software developers accountable for downtime, unprompted
data leaks, and so on, fine. We should do it proportionate to the damage
caused - there's no point in pretending that a shoddy smartphone game is as
bad as a shoddy bridge - but fine.

But let's not pretend that software has the same difficulties as every other
type of engineering. Most engineering doesn't face an epidemic of people
actively trying to disable safety precautions to hurt people, and a lot of it
reliably fails as soon as it _does_ face enemy action. Punishing every
engineer who ever gets outwitted (say, by a leaked NSA-developed
vulnerability) is absurd.

~~~
coldtea
> _Hang on. Sure, all engineering has hurdles. But how many engineering
> projects face regular enemy action? The last time somebody actively tried to
> compromise a system I was building was two weeks ago. How many engineers live
> like that? We hold civil engineers responsible when bridges collapse
> unprompted. We also hold them responsible if their bridges fail from some
> minor, anticipated harm like vandalism or a car crash. We don't hold them
> responsible for building bridges that can be blown up._

Even more so, how many engineering problems have fuzzy requirements that on
top of that change constantly and arbitrarily?

If someone thinks building bridges is the same as writing some big software,
they really have no idea about working in the trenches.

~~~
hwayne
How many software projects have to deal with gravity? Weather patterns? How
many software projects need teams of construction workers to actually build?

Different kinds of engineering have different requirements. Just because the
challenges in software are different from those in building bridges doesn't
mean we get a free pass on correctness.

~~~
coldtea
> _How many software projects have to deal with gravity?_

Gravity comes with a pretty little equation, and as far as problems go it's
about as predictable as they come.

Tolerances of various materials are also well known in advance, and their
behavior under different designs and with different levels of stress assigned
can be trivially modeled with CAD packages (and even manually).

Same for weather. It comes down to a few behaviors (rain, wind, earthquakes of
various degrees, sunlight) that people can model, and have been modeling for
ages. We have buildings 25+ centuries old that still stand.

> _How many software projects need teams of construction workers to actually
> build?_

Not sure what you even mean here.

> _Different kinds of engineering have different requirements._

That's obvious. The question we ask here is different: sure they are
different, but are those requirements equally well defined and equally
difficult across software and other engineering fields (like construction)?

And to that I say no. Construction has pretty solid, rarely changing
requirements (and almost never changing ONCE construction has begun), and
works with specific materials with a limited set of interactions. With
software, modeling the entire universe is the limit as far as complexity goes.

------
bitwize
Model-based design has been the wet dream of software managers looking to
eliminate costly, finicky programmers for what? Decades now? I remember when I
was a college student being told that eventually, the work products of
software architects would be UML diagrams that could be turned into code by a
simple turn of the crank on an automated generating tool. I didn't buy it then
and I sure don't buy it now. The reason why is because once you specify models
with sufficient granularity to be automatically turned into code, the
graphical symbols in your modelling language become isomorphic to keywords in
some programming language, and your programmer-replacing code generator
becomes a compiler.

As for Bret Victor... Programming is hard because you are trying to reason
about not just _the_ future of a dynamic process, but _all possible_ futures
of that dynamic process. That's _way_ too much information to be represented
in a visual manner for all but the simplest of systems, and it's why visual
programming tools have been met with spectacular failure outside of
constrained niches (e.g., LabVIEW, Max and its relatives).

------
sharno
After reading the whole article, it seemed impressive, although I didn't like
the way it stretched simple ideas into a very, very long piece.

Anyway, TLA+ and formal methods seem promising and I definitely need to check
them out. I totally agree with him that we don't give planning the weight it
deserves, especially for applications that need robust security and safety,
and especially since the appearance of the Agile methodology and its kin. (No
system is perfect anyway.) But we definitely need more of that verification,
and we need it not in the esoteric way they mentioned but in an easy form that
every programmer can use without so much complexity. Maybe tooling around
something like TLA+ could make it easier to understand. People here, though,
say that it's not that hard, just that it isn't beneficial in every single
situation, only when you have complex algorithms.

I was convinced, though, by another idea that's easier to apply, at least for
now: using type systems (type theory) through a programming language with a
sound type system (Facebook is making nice progress there; for the front end,
FB made ReasonML, a language derived from OCaml that generates JavaScript).
Getting a good type system like that really erases a whole class of bugs. I'm
learning OCaml and Reason these days. New languages such as Rust are doing
great too. I think writing code is improving these days, and it'll definitely
get better in the future as we understand more about how we actually do it.
The field is just around 70 years old and it's still in its infancy, I think.

------
KirinDave
The irony is that "this is all too hard, write less code, use more axiomatic
principles and compiler assistance" is the functional programmer's call, and
it keeps getting shot down as 'too complex.'

~~~
Periodic
Where I get the most resistance to functional programming is more in the
abstractions: both that they are too complex and that people aren't already
familiar with them. High-level/complex/abstract patterns are hard to
understand the first time and FP seems to make those patterns easier to
express.

I'm surprised how much we take for granted an understanding of object-oriented
coding in the industry. Any graduate with a four-year degree in CS can be
expected to write passable object-oriented code, but many have absolutely no
exposure to functional idioms. Of course, those same people may have very
little exposure to more complex OOP design patterns. I've seen people's eyes
gloss over the same way when saying, "It's just a monad" as when saying "It's
just an interpreter pattern".

~~~
KirinDave
> I've seen people's eyes gloss over the same way when saying, "It's just a
> monad" as when saying "It's just an interpreter pattern".

Same. All we can do is try and change education patterns. Neither are that
complex.

------
hwayne
> He began collaborating with Gerard Berry, a computer scientist at INRIA, the
> French computing-research center, on a tool called Esterel

INRIA also maintains Coq, TLAPS, and much of TLA+.

INRIA is a scary scary place.

~~~
simlevesque
> INRIA is a scary scary place.

Why ?

~~~
schoen
I assume scary as in "scar(il)y smart", in the sense that it's (almost)
disturbing to see their sophistication and competence.

~~~
simlevesque
Thank you.

------
osteele
This article reports uncritically the plaintiff’s claim that Toyota runaway
acceleration issues were a software issue. The Wikipedia article presents a
more balanced report; the black box data is particularly interesting.

[https://en.wikipedia.org/wiki/2009%E2%80%9311_Toyota_vehicle...](https://en.wikipedia.org/wiki/2009%E2%80%9311_Toyota_vehicle_recalls)

------
sjm
This seems over the top to me. Everything has issues, and everything is
increasingly complex, not just programming. Take any industry and over a long
enough time-span I'm sure things have also increased in complexity in drastic
ways. Sure, programming is relatively new, and we are ramping up fast, but we
already have vastly complex systems, tools, and infrastructure relying on
programming, which have incredibly impressive uptime and efficiency.

Yes, people will die from errors in the code of autonomous cars, but at the
same time can anyone argue that autonomous cars won't be significantly safer
than us?

The argument here seems to be that it's dangerous if the programmers relying
on StackOverflow to copy/paste solutions work on real problems without
"stepping up". I think that's like someone suggesting a bricklayer would
suddenly be in charge of designing blueprints for a 150 storey skyscraper.
They are different jobs performed by people doing very different things. They
may write code in the same language, but they're not in the same profession.

------
dsfyu404ed
I work in security. People not giving enough shits about the "situation in the
field" with respect to how their code will be used, how long it will need to
be used and what will need to be done to keep it functional over that time is
job security for me.

Some points the author makes I agree with, others I don't. Nothing will change
until incentives change.

------
combatentropy
If TLA+ really does let you prove your program is bugless, then maybe it is
something to look into for cars, airplanes, medicine, etc. For the rest of us,
who still wrestle with complexity but probably wouldn't accidentally kill
someone, here are some simpler aids:

1. Don't put a computer there. My microwave doesn't need to be digital. It
needs two dials, time and power. My mother's washing machine can be controlled
by smartphone. How useful is that, since you can't load it by phone?
Computerized light switches are another example of something that sounds
nifty, but the complexity outweighs the benefit by a thousand to one. Plus,
you need the exercise. Get up and just hit the light switch. I question
whether a car needs any computer at all, especially, ironically, if the car is
electric. My ignorance will allow that maybe a computer can mix air and gas
better than a purely mechanical fuel-injection system. But since an electric
motor is simpler, this advantage disappears.

Don't get me wrong. I like computers. But I think computation should be
gathered instead of spread all over the place. I like my smartphone and my
laptop (pure computers). And I would prefer a car from the 1960s (pure
mechanics). I don't like the hybrids --- today's complicated, hackable,
beeping nannymobiles.

2. Give more time for refactoring. It's an embarrassingly unflashy point, but
I think refactoring is important. Don't rush software out. Let programmers
refine it. Maybe even force them to, beyond their natural tendencies, like the
schoolteacher telling a pupil to revise once again.

It is unflashy, but it makes a big difference. I've revised programs down to a
tenth their size (no, not by using one-letter variables and that sort of
thing, but by finding more efficient ideas), made them run a hundred times
faster on the same hardware, all while adding features and improving safety.

(See the advice about "going deep" instead of "high" or "wide" by Hacker News
member _bane_ :
[https://news.ycombinator.com/item?id=8902739](https://news.ycombinator.com/item?id=8902739))

3. Data-driven programming. I'll just quote some really smart people:

"Show me your flowcharts and conceal your tables, and I shall continue to be
mystified. Show me your tables, and I won’t usually need your flowcharts;
they’ll be obvious. --- Fred Brooks, _The Mythical Man-Month_

"If you've chosen the right data structures and organized things well, the
algorithms will almost always be self-evident. Data structures, not
algorithms, are central to programming." \--- Rob Pike
([https://www.lysator.liu.se/c/pikestyle.html](https://www.lysator.liu.se/c/pikestyle.html))

"I will, in fact, claim that the difference between a bad programmer and a
good one is whether he considers his code or his data structures more
important. Bad programmers worry about the code. Good programmers worry about
data structures and their relationships." --- Linus Torvalds
([https://lwn.net/Articles/193245/](https://lwn.net/Articles/193245/))

"Even the simplest procedural logic is hard for humans to verify, but quite
complex data structures are fairly easy to model and reason about. To see
this, compare the expressiveness and explanatory power of a diagram of (say) a
fifty-node pointer tree with a flowchart of a fifty-line program. Or, compare
an array initializer expressing a conversion table with an equivalent switch
statement. The difference in transparency and clarity is dramatic. . . .

"Data is more tractable than program logic. It follows that where you see a
choice between complexity in data structures and complexity in code, choose
the former. More: in evolving a design, you should actively seek ways to shift
complexity from code to data. --- Eric Raymond, _The Art of Unix Programming_
([http://www.faqs.org/docs/artu/ch01s06.html#id2878263](http://www.faqs.org/docs/artu/ch01s06.html#id2878263)).
See also ch. 9
([http://www.faqs.org/docs/artu/generationchapter.html](http://www.faqs.org/docs/artu/generationchapter.html))
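A tiny made-up Python illustration of Raymond's array-initializer point: the
same conversion knowledge, once as a table and once buried in control flow:

    # As data: the whole mapping is visible at a glance and trivial to extend.
    ROMAN = {1: "I", 5: "V", 10: "X", 50: "L", 100: "C"}

    def to_roman_digit(n):
        return ROMAN[n]

    # As code: the identical knowledge, smeared across branches.
    def to_roman_digit_v2(n):
        if n == 1:
            return "I"
        elif n == 5:
            return "V"
        elif n == 10:
            return "X"
        elif n == 50:
            return "L"
        elif n == 100:
            return "C"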

~~~
FrancoisBosun
Smartphone-controllable light switches and blinds are useful for disabled
people.

~~~
User23
Modern smartphones are an incredible accessibility device for both partially
and fully blind persons. The freedom they offer is virtually unimaginable to
people who haven't seen the difference they make in people's lives.

------
gabriel34
What this article describes is only possible to some extent. In my
experience code generators are great until you run into a case for which you
have to manipulate the generator to account for some edge case or some
unforeseen problem, then they tend to become more complex than simply writing
the code yourself.

Most of the problems described are already tackled in modern software
development. TDD, Correct-by-Construction and input validations are some tools
to ensure resilience. The car acceleration issue could also have been avoided
with a default to safe approach. Spaghetti code should have been refactored.
It is not programming itself that is the issue, it is the auto maker that is
at fault here.

In ETL there are some tools to visually manipulate the data flow from one end
to another. Some ETL software even allows you to visualize the effect of the
changes on the fly, much like the Mario game. Solutions developed like this
are easily understandable and maintainable by others even with minimal
documentation (but require understanding of the business problem). But, much
like normal programming, once you want to do something that exceeds the
capabilities of the ETL software you are using, or when performance is an
issue, you have to understand how the underlying software works under the
hood. You can become really good at solving one set of problems with an ETL
tool once you have mastered it, but this is limited to one domain of
problems. Likewise, specialized software allowing easy visualization and
manipulation is usually very domain specific.

You can tackle complexity in programming by hiding it behind libraries and
databases that do the heavy lifting while the programmer integrates the pieces
and accounts for the particularities of the problem he is solving. I could
envision representing library functions as black boxes and connecting arrows
to integrate them, having input validation and strong typing or automatic
typecasting. Still, when you get too far from the machine, you miss the edge
cases, the things you can't imagine when you visualize whatever you are
creating in your head, the problems that only arise when you externalize and
codify the knowledge. I think this is at the core of the issue; in order to
instruct the computer to do something, you have to externalize tacit
knowledge. In doing so you come across problems you just can't see from too
high up.

------
walshemj
The example, however (the 911 outage), is a poor one: a traditional telco
could not make that mistake with POTS. They talk of major failures (i.e.
losing a switch) as happening once in a generation or two.

~~~
Animats
In the history of the Bell System, no electromechanical exchange was ever
totally down for more than half an hour for any reason other than a natural
disaster.

~~~
Animats
Someday I should write up how that was done in modern terminology. You can
read the "Number 5 Crossbar" documents, but the terminology is archaic.[1]

Crossbar offices consisted of a dumb switching fabric and lots of
microservices. The switching fabric did the actual connecting, but it was told
what to connect by other hardware. Each microservice was implemented on
special-purpose hardware, and there were always at least two units of each
type. Any unit of a type could do the job, and units of a type were used in
rotation.

Microservice units included "originating registers", which parsed dial digits,
"markers", which took parsed dial digits and routed calls through the switch
fabric, "senders", which transmitted dial digits to other exchanges for inter-
exchange calls, "incoming registers", which received dial digits from other
exchanges, and "translators", which looked up routing info from read-only
memory devices. There were also trouble recorders, billing recorders, trouble
alarms, and other auxiliary services.

Every service unit had a hardware time limit. If a unit took longer than the
allowed worst case time, the unit got a hard reset, and a trouble event was
logged. This prevented system hangs.

Failures of part of the switching fabric could only take down a few lines.
Failures of one microservice unit of a group just slowed the system down.
Retry policy was "try one microservice unit, if it fails, try a second one,
then give up and log an error". If a retry with a different unit didn't work,
further retries were unlikely to help.
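In modern terms, that retry policy might be sketched like this (my own
rendering in Python, not from the Bell System documents; `units` is any list
of callables standing in for identical hardware units):

    import itertools, logging

    class ServiceGroup:
        def __init__(self, units):
            self.rotation = itertools.cycle(units)  # use units of a type in rotation

        def call(self, request):
            for _ in range(2):                      # try one unit, then a second
                unit = next(self.rotation)
                try:
                    return unit(request)
                except Exception:
                    logging.exception("unit failed; trying the next one")
            raise RuntimeError("two units failed; giving up")  # log an error, stop retrying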

All microservice units were stateless. At the end of each transaction, they
went back to their ground state. So they couldn't get into a bad state. All
state was in the switching fabric.

All microservice units were replaceable and hot-pluggable, so maintenance
didn't require downtime.

This architecture was very robust in practice. It's worth knowing about today.

[1] [http://etler.com/docs/Crossbar/](http://etler.com/docs/Crossbar/)

~~~
tmccrmck
Please, please, please do this!

------
kazinator
A programmer should be able to dive straight into code and get it right the
first time, up to a certain level of complexity of problems.

I'm not going to say that the bigger the problem you can solve just by
throwing code at it, the better the programmer you are. But it speaks well for
you; you have an advantage in a certain dimension.

If I can visualize the solution, see all the cases and debug it in my head,
why not go straight to code? We should encourage that; the more you practice
taking a problem straight to code, the better you get at it. People just have
to be reasonable about it; don't try to just code something whose complexity
is two orders of magnitude out of that ballpark.

Currently I'm grappling with the design of compiling functions with nested
lexical scopes. I haven't written any code on this for a couple of weeks, but
I have some box and arrow diagrams representing memory and pointers. I've
mulled it over in my head and have hit all the requirements. I have a way to
stack-allocate all of the local variables by default, and hoist them into the
heap when closures are made. I have kept in mind exception handling, special
variables and such. Amid all this thinking, I barked up a few wrong trees and
backed down, thereby avoiding making such mistakes more expensively in coding.

------
vssr
Even though this article resonates with me, I think it portrays everything
much too glamorously. I wish the subjects described were the only source of
problems. I suspect that in reality, most mistakes have quite 'simple' causes.
Some observations:

- By putting an abstraction layer in between (people visually creating
applications), the problem is pushed to the layer below and new problems will
be introduced.

- Supporting intricate, bespoke functionality in a visual environment will be
incredibly hard and error-prone.

- If you have trouble thinking about how your software will run, you should
run it on a computer instead of in your brain. I.e. continuous builds,
debuggers and sandboxes.

- Pushing for deadlines and using prototypes as production software is part
of this.

- Those millions of lines of code usually include a number of Linux kernels.

- Before getting to all the fancy stuff and visions about the future, why not
first:

--- Get all software unit tested / test driven.

--- Get all software functionally tested / behaviour driven.

--- Use domain-driven techniques to close the gap between 'reality' and code.

--- Create truly comprehensive tests and testing environments for areas that
matter.

~~~
imtringued
> Those millions of lines of code usually include a number of Linux kernels.

Yes. They didn't write the majority of the code themselves.

Here is a repository that shows how much open-source software is inside a BMW
i3: [https://github.com/edent/BMW-OpenSource](https://github.com/edent/BMW-OpenSource)

------
narvind
Great article!

My $0.04:

1. Programmers often see a small part of the jigsaw puzzle they are
assembling. Most spaghetti code is the result of too many cooks over time.

2. The architects should ensure that testers know and care more about the
problem than the programmers who are coding the solution.

3. There is a clear need for improving the stone-age tools programmers use.

4. There is a need to create simulated environments for all kinds of software
so they can be battle-tested.

------
zubairq
For me, it was nice to see Chris Granger (now working on Eve at witheve.com)
and the Light Table guys get recognition for Light Table influencing Apple. To
quote: "The default language for making new iPhone and Mac apps, called Swift,
was developed by Apple from the ground up to support an environment, called
Playgrounds, that was directly inspired by Light Table."

------
igravious
So. Minus all the doom and gloom. Better safety harnesses, better developer
abstractions, more interactive/responsive programming environments. Whatever
Bret Victor's selling, I'm not buying. He's the type of self-promoter who
doesn't acknowledge all the actual hard work that has been going on for
decades in all of these areas.

~~~
shalabhc
Can you point to what work specifically has improved the situation in terms of
`developer abstractions` and `interactive/responsive programming` in the last
few decades?

~~~
igravious
Yup, I can.

Safety harnesses -> work in type theory and proof theory, witness the success
of Rust and Haskell and Coq and so on.

Developer abstractions -> work on design patterns and anti-patterns among
other things.

interactive/responsive programming -> IDEs, refactoring, time-travel
debugging, visual programming spring to mind.

To write off all these advances is suspect in my opinion. Also, why so glass
half empty? Why not glass half full? Why not say, "wow – given how many lines
of code are out there, isn't it amazing how much has not gone tits-up?"
(pardon the expression!) That wouldn't fit the Bret Victor narrative though.

One last thing – the claim that we are oh so crap at manipulating symbolic
machinery. That too is suspect; one could argue we actually seem hard-wired
through evolution for linguistic, logical, and symbolic thought.

~~~
shalabhc
> interactive/responsive programming -> IDEs, refactoring, time-travel
> debugging, visual programming spring to mind.

> one could argue we actually seem hard-wired through evolution for
> linguistic, logical, and symbolic thought.

I think you're still talking about programming environments layered on top of
the text centric representation, while Victor seems to be talking about
something a little different. Sure we can do some symbolic and linguistic
manipulation, but consider that rigorous blueprints for physical products such
as cars and buildings are 'diagrams', not text. So purely language based
description and manipulation doesn't apply everywhere. Now, what if many
programs, or parts of programs may be better represented not as text with
symbols, but in some other form that we haven't discovered yet?

------
jasonrhaas
Like some others have said, I think that the specific tools they mentioned for
"model based programming" are not the be-all, end-all solution. However, I
agree with the very high-level premise of this article: that programming will
continue to evolve to higher levels, and eventually "writing code" may be
abstracted away altogether.

We have already been moving towards this approach for quite some time, even
though the author doesn't mention this. If you are going to build a web app or
API today, do you just start writing code willy-nilly? No, of course not. You
look for frameworks and proven/established ways of doing this. We're
practical; we don't want to re-invent the wheel (unless it's a side project).

I've recently built an event-based API for a client, and much of the work is
not "writing code" but figuring out the right tools for the job, based on the
requirements of how I believe the tool should work. A lot of it is plugging in
frameworks and tools that already exist (and are established), which mitigates
(but doesn't solve completely) the question of using complex/untested code.
For example, most people take advantage of TCP or HTTP for a web app. Do
we test that low-level code? Of course not... it's already the most tested code
in the world.

I think there is a danger in writing a lot of "custom code", as I like to call
it, because then you really are introducing brand-new functionality that
potentially has never been tested before. You can and should test your code,
but often that will fall short, because you (or your team) cannot possibly
test every imaginable scenario the code will face.

My general belief is this: the programming of the future won't involve much
programming at all; it will be more like "system architecture", where you pick
and choose the tools to meet your requirements. In the world of open source
and AWS -- all the building blocks are there, but it's your job to figure out
how to put them together.

------
3131s
" _a software developer who worked as a lead at Microsoft on Visual Studio, an
IDE that costs $1,199 a year and is used by nearly a third of all professional
programmers._ "

It's tangential to the article, but is that really true?

------
BerislavLopac
"Since the 1980s, the way programmers work and the tools they use have changed
remarkably little."

On one level this is very wrong -- since that time we have introduced and
improved numerous amazing tools like the Internet and its family of protocols,
distributed version control systems, CI systems and Stack Overflow, to name
just a few. On another level it is close to the truth, as we still do most of
our coding in various text editors -- but not for lack of trying to find
something better. If we found something that would provide the same balance
between simplicity and power, we would be more than happy to switch.

------
YeGoblynQueenne
>> The whole problem had been reduced to playing with different parameters, as
if adjusting levels on a stereo receiver, until you got Mario to thread the
needle. With the right interface, it was almost as if you weren’t working with
code at all; you were manipulating the game’s behavior directly.

This sounds remarkably similar to the ambition of the Logo language (from
1967):

[https://en.wikipedia.org/wiki/Logo_(programming_language)](https://en.wikipedia.org/wiki/Logo_\(programming_language\))

------
kwhitefoot
WYSIWYG has been mostly a disaster. We now have several generations of
people who 'paint' documents instead of writing them and styling them. Every
paragraph has its own explicit style. It's next to impossible to render such
documents to a different published format, because nothing is tagged to
explain why it looks the way it looks. I was at least as productive writing
in WordStar 35 years ago as I am now in Word 365 or whatever the current name
is.
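
To make the distinction concrete, here is a rough sketch in TypeScript
(hypothetical structures, purely illustrative, not from the article) of the
difference between a 'painted' paragraph and a tagged one:

    // "Painted": records only how the text looks. A converter to another
    // format has no idea this run is a heading, so it can copy the look
    // but not the intent.
    const painted = { text: "Results", font: "Arial", size: 16, bold: true };

    // Tagged: records why the text looks that way. Any renderer can map
    // "heading2" onto that format's own second-level heading.
    const tagged = { text: "Results", style: "heading2" };

The first form is roughly what a hand-styled document ends up storing; the
second is what makes re-rendering to a different published format tractable.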

------
firethief
One of the best arguments for Lisp I've heard this decade

------
chis
Does anyone work in the field of formal verification? I’m a senior in school
and always enjoyed the actual “computer science” more than coding itself. I’m
gonna end up at a Google/Microsoft-type place programming web apps unless I
find something else to do in the next couple of months.

I guess my question is just: is it a worthwhile career, and is the field
growing?

~~~
jahewson
Microsoft Research has a group which explores tooling for software
engineering:

[https://www.microsoft.com/en-us/research/group/research-
in-s...](https://www.microsoft.com/en-us/research/group/research-in-software-
engineering-rise/?from=http%3A%2F%2Fresearch.microsoft.com%2Frise)

------
mongmong
"The software was doing what it was supposed to do. Unfortunately it was told
to do the wrong thing" I feel the value of Domain Driven Design more and more
every day and I think it is one method to make the business intent be
reflected clearly in the code base, using domain language and modelling the
domain to be self documenting.
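
As a rough sketch of what that can look like (TypeScript, with a
hypothetical insurance domain; none of these names come from the article):

    // The code speaks the business's language, so the intent is visible
    // right where the behaviour is implemented.
    type PolicyStatus = "Active" | "Lapsed" | "Cancelled";

    class Policy {
      constructor(
        readonly policyNumber: string,
        private status: PolicyStatus,
        private premiumPaidThrough: Date,
      ) {}

      // A domain rule stated in domain terms: an active policy lapses
      // once its premium is no longer paid up.
      lapseIfUnpaid(asOf: Date): void {
        if (this.status === "Active" && asOf > this.premiumPaidThrough) {
          this.status = "Lapsed";
        }
      }
    }

A domain expert who can't read code can still audit the rule by its name and
its comment, which is the self-documenting property in miniature.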

------
shusson
One of the biggest benefits of model driven development is having a common
language to talk about design with other people. State becomes clear and any
complexity is inherently linked to how you designed your model. I really hope
it becomes more popular and more companies start supporting it.

------
Aeolun
I don't know. I think the problem with software is that as we become able to
do certain things with it, people realize they can and then want to do more.

Ad infinitum, this adds up to hopelessly complex software.

------
rhinoceraptor
I’m surprised no one’s mentioned it yet, but unintended acceleration is most
likely a myth. In any functioning car, the brakes can easily overpower the
engine and stop the car.

------
skywhopper
This article is silly in a lot of ways. The real problem of software
engineering is not "how do you prove this code follows algorithm X exactly?"
It's "how do you know what algorithm X needs to be in sufficient detail to
implement it?" It's "when you realize you were wrong about algorithm X, how do
you change your existing, working code to implement the new algorithm X-prime,
without interruption?" It's "what do we do when we realize algorithm X-prime
was completely the wrong thing, and now we need to transition to algorithm Y,
and also fix all the data that algorithm X-prime messed up?"

No matter what awesome tool you come up with to translate requirements into
CPU bytecode, you still have to translate human requirements into that
requirements language, whether that language is assembly, C, Java, Rust,
Haskell, or TLA+.

When you have the money and time and patience to fully examine the problem
space and work out exactly what the requirements are, that's great. In the far
more common scenario, you have impatient investors, or you're inventing
something entirely new, or your software will deal with the real world which
is not as predictable as an imaginary algorithm. Government regulations, flaky
hardware, network latency, human error, financial limits, competition,
malicious actors, gamma rays, other people's software, power outages,
terrorists, dependencies, operating system upgrades, corrupt data, hurricanes,
all are conspiring against your software. Can you plan for all of that?

~~~
brians
The article's not silly. Leveson and Hamilton's works cover that explicitly.
Start with Engineering a Safer World's Intent Specifications, move on to
Safeware, and look at the attention that goes into writing the right TLA+
specs.

Lists like "Government regulations, flaky hardware…" look a lot like the upper
levels of the hierarchical control systems contemplated there. It didn't make
it into a pop science piece, but it's absolutely core to the work at issue.

You can get a copy of the book at [http://sunnyday.mit.edu/safer-
world/index.html](http://sunnyday.mit.edu/safer-world/index.html) .

~~~
SilasX
It sounds like you're both in agreement, which would support the criticism
of the article: it hides how the work addresses the very points the parent
was raising, and gives the impression that the work only solves the very
academic version of the problem rather than the real-world software snags.

------
p0nce
This gives me an idea for an interesting tax: every (money-making) codebase
should cost money commensurate with its number of LOC.

~~~
insertnickname
It'll just encourage companies to rewrite all their software as one line of
Perl.

~~~
hwayne
My J skills will finally be worth something!

~~~
eggy
No kidding! I love J, and I find it very easy to troubleshoot, since it is all
within a few lines or a page. Forth is like that too.

------
kmicklas
Disappointing to see such a long article and no mention of type theory, or any
other work from the "correct by construction" school of formal methods. It's
all normie model checking, TLA+ etc.

~~~
AnimalMuppet
Type theory is much less than "correct by construction" formal methods. Type
theory is great at preventing a whole class of bugs (an operation on a value
for which that operation doesn't make sense). But it's inadequate for the
larger problem, of whether the operation is the correct one.

Formal methods can ensure that the code matches a formally written spec, for
all the aspects of the code that are covered by the formal method. Note well:
That's not _all_ aspects of the code. And you have the little problem of
(correctly) creating the formal spec. This moves the problem up a layer, but
the problem doesn't go away.
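
A minimal sketch of that distinction, in TypeScript (a hypothetical
function, purely illustrative):

    function applyDiscount(price: number, percent: number): number {
      // return price - "ten"; // rejected by the type checker: an
      //                       // operation on a nonsensical value
      return price + (price * percent) / 100; // compiles fine, yet this
      // "discount" *raises* the price. Whether + or - is the correct
      // operation is exactly what types alone can't tell you; that is
      // the job of a spec (or at least a test).
    }

    const sale = applyDiscount(100, 20); // 120, not the intended 80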

Even given a formal spec, though, it's my impression that formal methods are
_s l o w_. Does anyone have data on this?

~~~
catnaroek
> And you have the little problem of (correctly) creating the formal spec.

How on Earth did we end up putting people who can't write down precisely what
they want in charge of programming machines that do exactly what you tell them
to?

~~~
perl4ever
I was just told today that gathering requirements is not my job; it's the
PM's. But they are swamped with administrative minutiae, and all too often
the only person who knew how something worked has quit anyway. So, in sum:
"no spec, no spec, you're the spec!"

