Programmers who want to change how we code before catastrophe strikes (theatlantic.com)
383 points by mattrjacobs 11 months ago | 269 comments



"For Lamport, a major reason today’s software is so full of bugs is that programmers jump straight into writing code. “Architects draw detailed plans before a brick is laid or a nail is hammered,” he wrote in an article. “But few programmers write even a rough sketch of what their programs will do before they start coding.”

I almost always dive in, but I almost always write my code twice. Essentially the first round is my rough sketch, the second is written when I fully understand the problem, which for me only occurs once I've tried to code it.


Exactly. This is ignoring the fact that code is a more useful blueprint than anything else for programs.

I'm sure if architects had the ability to magically conjure building materials out of thin air and try things in real life for free, architecture would involve a lot more trying and a lot less planning.

Of course, I'm sure the article is talking about people just rushing into production code without thought, and I get that that's a problem. It's just that when people make comparisons like that, it can imply this horrendous future where we whiteboard-program for months, which is just a terrible idea.


I think there's a bit of a problem with the architecture metaphor in general. It implies a limited range of things planned and built. As software eats the world and does more and more things, a metaphor that more closely reflects the variety of software is people telling other people to do stuff.

If you tell someone to bring you your socks, you don't need to plan for it. If they bring the wrong socks, you just fix it. If you tell them to invade the beaches at Normandy, you might want to work out more of the details in advance. You can tell someone to remove a splinter or remove a brain tumor, and your part of the instructions might be roughly equivalent if you are telling an "API" that has already been adequately told how to do what you ask.

The problem of unintended consequences of instructions has been with us far longer than computer software. In any story of a magic genie granting three wishes, the third wish always ended up being a version of `git reset --hard`. I love having direct manipulation tools that simulate your proposed solution, giving you much faster feedback. Midas with VR goggles would have quickly seen something unintended turn to gold and canceled before committing. That's extremely helpful.

But this isn't the ultimate solution for how to deal with software complexity. It's a very helpful tool in some cases. Some software should still just be coded immediately and fixed as needed (takes less time to do it again than to create a simulator), some would benefit most from a good, debugged library (I'd rather the robot already know how to remove the tumor than show me direct feedback of my fumbling), some from direct manipulation tools, some from mathematical techniques (remembering that mathematically proven software is buggy if an errant RAM chip violates the axiom that `0=0`), some from better testing, some from better runtime monitoring, and so on.

But as with humans' verbal instructions, there will always be leftover unanticipated consequences due to flaws in the spec, bugs in the code, and breakage in the implementation.


One way the architecture metaphor breaks down is that very tiny details can have incredibly large knock-on effects in software. This is different from normal architecture and building engineering, where it's certainly true that there are many details that need to be carefully considered, but those things are well understood and managed from project to project. Software, on the other hand, could be doing anything, in a world of pure logic traversing scales analogous to everything from the smallest quark to the entire universe. Building physical things just doesn't deal with those scales; you only have to worry about material properties and structures within a range of a few orders of magnitude.


Apparently a "subtle conceptual error" can have massive consequences in architecture.

http://people.duke.edu/~hpgavin/cee421/citicorp1.htm


Coding isn't always the best blueprint. One example: what if you're writing a service that talks to two other internal services and a third-party API? Your code might capture what your specific service does, but it doesn't capture the overall design and intent of the complete system. That's one of the places where formal methods like TLA+ excel.


You might be interested in Servant, a haskell library that uses types to capture these high level interactions between components and then validate and even generate clients, documentation, servers, and simulators.


I definitely would! One thing a lot of people miss in these discussions is that this isn't an either-or, and we should be using multiple different correctness techniques to increase our confidence.


> This is ignoring the fact that code is a more useful blueprint than anything else for programs.

Personally, I find good ol' bubble-and-arrow diagrams, lists of data members, and hand-written C structs with arrows between them to be extremely valuable prototyping tools.


It's true that source code is a blueprint (we don't have builders; for software, unlike civil engineering, the build process is done by automated tools, not people.)

There's certainly an argument that coding is often done without clear understanding of the larger-scale structure that the module is going to fit into, the exact domain functionality, etc., and that much development has gone too far in avoiding analysis and requirements, overreacting to the broken and frontloaded abstract design process that often used to be done prior to writing any code.

But missing blueprints are the wrong analogy.


> code is a more useful blueprint than anything else for programs

This is so clearly false that it can actually be mathematically disproven. To prove that there are better software blueprints than code, I will prove that there exist algorithms/systems for which sufficient formal blueprints exist, yet which no code can capture. To do that, I need to define the terms more precisely. A "sufficient blueprint" is one that makes it possible to prove properties of interest about your software. "Code" is a formal representation of an algorithm that always allows efficient mechanical execution, meaning that mechanically executing the code stays within a polynomial bound of the complexity of the algorithm the code expresses.

Now for the counterexample: take the specification of the Quicksort algorithm [1], shortened here as: 1. Pick a pivot. 2. Partition the elements around the pivot. 3. Recursively apply 1 and 2 to the resulting partitions.

I claim that this specification is sufficient. For example, from a direct formal statement of it, you can formally prove that it actually sorts and that it runs in worst-case quadratic time. Yet I claim that no code can express the above specification; in fact, no code can express any of the three steps. For example, step 1 says "pick a pivot", but it doesn't say which. This is not an omission. It doesn't say which because it doesn't matter -- any choice will do. And yet code must necessarily say which pivot to pick, and once it does, it no longer captures the specification and is no longer a blueprint for all implementations of that specification. Similarly for the other two steps: code must specify which of the many possible and perfectly fine partitions are created, and in what order the recursion takes place. QED

Languages like TLA+ can formally, i.e., mathematically, capture precisely the specification above and express anything code can. There is no magic here: Efficient execution of code (provably!) cannot be done in the presence of some quantifiers, yet quantifiers increase the expressiveness and "generality" of the language.

[1]: https://en.wikipedia.org/wiki/Quicksort#Algorithm
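
For contrast, here is a minimal Python sketch of one particular implementation (the pivot choice and partition order are my own illustrative picks, not part of the spec); the comments mark each place where executable code is forced to commit to a decision the specification above deliberately leaves open.

    def quicksort(xs):
        # Base case: lists of length 0 or 1 are already sorted.
        if len(xs) <= 1:
            return xs
        # Spec step 1 says "pick a pivot" -- any choice will do.
        # Code has to name one; this sketch arbitrarily takes the first element.
        pivot = xs[0]
        # Spec step 2 says "partition around the pivot"; code must fix
        # exactly which partitions are built.
        smaller = [x for x in xs[1:] if x < pivot]
        larger = [x for x in xs[1:] if x >= pivot]
        # Spec step 3 says "recurse"; code also fixes a left-before-right order.
        return quicksort(smaller) + [pivot] + quicksort(larger)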


Oh dear. Thank goodness civil engineers do not build by trial and error. Many programmers do use the trial and error approach but their programs are riddled with bugs. There is no way that code is the best or most useful "blueprint" for software.

Your analogy is quite good, though, just not in the way you intended. If civil engineers did what you suggest then they would build magnificent structures which work in their office environment. Of course when that window that's never opened is opened by a cleaner everything comes tumbling down due to a gust of wind.


> Thank goodness civil engineers do not build by trial and error.

Oh but they do. Hundreds - if not thousands - of years of trial and error and catastrophes and deaths.

But the failures get written up, taught, remembered, and applied to future projects.

(And even then not perfectly - witness Mexico City and buildings on a drained lake in an earthquake zone; or Houston's wild building expansion in a hurricane-prone floodplain.)

Software, in contrast, has barely 60 years under its belt and a culture of hiding failures. Give it another 500 years and it'll start to look more like the architecting and building industries we have today.


I feel like this is part of why Brooks says build the first one to throw away, because you're going to.

People seem quite unable to imagine life with a program that doesn't exist. Building a more precise specification would be valuable if people were really able to evaluate the specification as if it were a program, but they don't seem to be able to do that. So we wind up building the whole thing to see if it's right, and it never is, but then we build the next one, and it's much closer.


> Brooks says build the first one to throw away

A great theory, but all too often, companies have a hard time scheduling time for a rewrite, and the hacked-together rough-prototype code gets pushed into production in perpetuity.


I'm more and more wondering if the real reason isn't that business people have issues with technical terminology. In this case: A prototype is something you throw away before you build the real thing. Yet somehow many non-technical people think its meaning lies more along the lines of "beta".

I've seen other misunderstandings like people describing security-focused static analysis as "pentest". Each time I'm just left scratching my head...


Solution: Be pedantic in your usage of words, and correct the people around you whenever they use a word wrongly.


Let's put it this way: if that's an effective way for you to deal with this, I envy you for your work environment.


And that's because companies have learned that customers usually aren't patient enough to wait for something rewritten. Most customers want something now, even if it's not perfect, rather than wait for something better. So, to survive in a marketplace with these kinds of customers, companies have to shorten time to market...


Well, even that explanation would imply that the company should give you time to at least fix the worst hacks after they've released the first version (which will probably have problems anyway), but that also doesn't seem to happen...


I code for a couple of the other groups in my division as a favor. I still run our entire QA group out of a folder called "mock database" for this exact reason.


They are able to do that. But, unlike the physical world, it's really expensive compared to just building a virtual one first. And then really tempting to just use that prototype, or patch it up.

If it was super cheap and easy and almost free to build things in the physical world we'd probably do the same thing and just skip the architecture and wing it.


I feel like maybe they simplified Lamport's sentiment here, because Lamport's on record as pointing out that often times the minimal specification of code is the code itself, and that's a very big difference from the architectural world.


Me too.

The very few design specs I've seen during my 15+ yrs as a programmer are typically high-level enough to be the mechanical equivalent of "cogs needed here, and springs!". What kind of cogs, e.g. diameter, teeth count, diameter of the center hole (should the hole even be in the center?) and what kind of springs are never mentioned.

If you ask the customer about functionality their response is "we want it to work and we wanted it yesterday!".

So what you have is an intuitive feel of what the customer wants and so you write some code to see how it could come together. Then you show it to the customer to see if this is what they wanted and BAM!!

The sales rep at your company just sold it to five other customers while you were at coffee break waiting for the response of the original customer. It's barely a prototype, not even an alpha. It's five lines of code and a mock-up UI. Bug reports start piling in. The sales rep keeps pestering you every five minutes on Skype/email/phone/whatnot because the customers refuse to pay before the bugs are fixed, and the sales rep is not getting his bonus before the money is in.

The filed bugs are actually just badly concealed feature requests, and the software you had just started to form a nice mental model of starts to grow the most hideous excrescences. Especially as you have to hire a bunch of new developers just to keep up with the piling-up bug reports. There's no time to train them or even teach them what the software is supposed to do.

  "What do I do? Where do I start?"
  "Look at the bug reports and fix them!"
  "Which should I start with?"
  "The ones labeled 'blocker'"
But they are all labeled 'blocker' because the sales rep has write access to Jira. You tried to prevent that but then the sales rep escalated all the way to the CEO:

  "I need write access so I can fill in info from the customers!"
Sounds perfectly safe and sane. It's not. But you don't have time to think about that because the sales rep keeps Skyping you links to Jira asking "Is it done? Is it done?". You uninstall Skype and your email client and shut off your phone just to get some work done. After an hour the sales rep comes by with the CEO: "What are you doing? We need this fixed now!" Then the yelling starts. You quit your job. Get another. It's the same deal.

The software is FOOBAR but just usable enough that the customers put it everywhere. Airplanes start to crash. Cooling systems at power plants start to fail. Nuclear systems for Mutual Assured Destruction start to detect incoming missiles.

THE END


Exactly.

Software is not eating the world, the world is eating software, and the world is starving and just wants to shove it down without even tasting it.

Sometimes the world gets food poisoning for eating raw software.

Software developers are just the cooks trying to tackle this impossible task.


> Sometimes the world gets food poisoning for eating raw software.

This is my new tagline. Thank you.


This is brilliantly funny, you made my day. In all seriousness though, go find a new job.


You had me at 'and springs!'


Sounds like you should have gone to the CEO and got there before the sales guy did....


> Sounds like you should have gone to the CEO and got there before the sales guy did....

He did the right thing: he quit the job.

No, seriously: If you work at a company where the CEO gives the salesperson that kind of permission (with the described consequences), you really better quit, since it is by definition a dysfunctional company.


"Everything should be built top-down, except the first time" --Alan Perlis


This was the last straw. After many years of seeing Alan Perlis's Oscar-Wilde-style epigrams about programming show up everywhere, I finally looked him up.

It turns out he published all those epigrams together in a single document in 1982: http://pu.inf.uni-tuebingen.de/users/klaeren/epigrams.html


An architect's product is not a building. It is a plan for the building. So the architect is doing what the software engineer is doing, he's iteratively drawing his blueprint.


One problem I see with that quote is that it presumes all programmers choose to just dive in at their own discretion. It ignores the fact that programmers are often simply told to just dive in and deliver quickly. It's simply a case of mixed attitudes toward project management delivering mixed results.


I almost always write the major steps as comments (beginning with 'TODO:') first and then do the implementation once I'm satisfied that I know what a good solution looks like. Over the years I have had the misfortune to discover way too many functions that were just 50% implemented or simply did something that was not at all what they were supposed to do.

I don't doubt that conscientious developers can arrive at good code however they prefer. But there are a lot of people out there who will start with a quick and dirty implementation that they then debug into a more or less functional state. I see a push towards making people stop and take a moment to think about what they are going to do. Test-driven development is another tool to achieve that end.
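
As a hedged illustration of that comments-first habit (the function and the steps below are made up, not from any real codebase), the first pass might look like nothing more than a plan:

    def import_customer_csv(csv_path):
        # TODO: check that the file exists and is readable
        # TODO: parse rows, skipping the header line
        # TODO: normalize email addresses and phone numbers
        # TODO: upsert each record into the customers table
        # TODO: return a summary of imported vs. skipped rows
        raise NotImplementedError("plan only; implementation comes once the steps look right")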


That's more like the carpenter drawing marks in the wood to tell him where to cut.

I feel like complexity is a problem when there is more software than one person could write. At this point, you probably need to do some Business Analyst type stuff.

For instance, we are aware of race conditions in code, and maybe we can avoid them by writing purely functional programs in Rust or whatever people do these days.

But what about the race condition of several servers restarting repeatedly and coming back up in a different order? Or a customer registering online when he already sent a form in by post?

The oldschool way of doing this was big UML drawings. This was a compromise because both programmers and business subject matter experts hate drawing them equally.

People don't tend to do this anymore because it feels very '90s and smells of XML and Java. But what replaced it?


Note that Rust doesn't prevent race conditions. Race conditions are application specific. Rust prevents data races, which have a very specific technical definition: an unsynchronized read and write to the same location in memory.
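
A rough Python sketch of that distinction (illustrative only, and obviously not Rust): every individual read and write below is guarded by a lock, so there is no data race in the technical sense, yet the check-then-act sequence is still an application-level race condition.

    import threading

    balance = 100
    lock = threading.Lock()

    def withdraw(amount):
        global balance
        with lock:                 # every access is synchronized: no data race
            enough = balance >= amount
        if enough:
            # Another thread can withdraw between the check and the update,
            # so the application invariant (no overdraft) can still be broken.
            with lock:
                balance -= amount

    threads = [threading.Thread(target=withdraw, args=(80,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(balance)  # can legitimately print -60: a race condition without a data race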


Programming is making the detailed plans; the building phase, the one analogous to laying bricks or hammering nails, is done by the compiler. Of course this is not an argument against making less detailed, more overall sketches - they are still useful - it is only that that particular argument for them is incorrect.

By the way the article makes another similar mistake: "Instead of writing normal programming code, you created a model of the system’s behavior" and then it talks about how programmers like to code and oppose this new approach - but this model creation is still coding - the code is visual and a little bit more high level. But building this model is still programming.


> But building this model is still programming.

I agree with you, but I don't think this is a mistake in the article. I read "normal programming code" as meaning the low-level procedures that are typically written these days.


More and more I'm writing high level pseudo-code, and planning ahead, before diving in. When I'm done, I basically have a to-do list and a good chance of not painting myself into a corner, which would otherwise happen way too often.


I tend to write code four times following the same principle: the first two versions are similar, then the client explains his real needs and I have to start again.


So basically all our problems will go away if we return to UML and requirements documentation that's hundreds of pages long and edited for ages before anyone writes any code.


No. Have a look at this video. It's worth it. He's just saying that you should think before you code and that most specifications should be just a few sentences stating what you intend to do. Code as a blueprint is not the same.

https://youtu.be/-4Yp3j_jk8Q


> So basically all our problems will go away if we return to UML

UML belongs to the class of semiformal models. In other words: It leads people into believing that a lot more things are precisely specified than actually are.


That sort of development practice has a much closer relation to engineering than current practice does: it's just that its extremes lie at the other end of the scale.


The difference is that engineering has to get it right the first time. There is no second try. Most software isn't like that. It gets continually updated and maintained over several years after the initial release.


That's how most Medical Device software is written :)


Fair enough but it's hardly a panacea. I think it likely does work better with something like that where the requirements are truly well understood, though. Most of this "agile" stuff is working around people not being able to tell you what the hell they want.


I was an intern at a company that writes medical software and software for medical devices. I tell you: the usual standards are mostly bureaucracy and rather worthless in the sense that they don't really change anything about development practice.

I also had a nice private talk with a person who worked in software development for avionics at some very well-known company. Before I talked to him, I thought that at least there the development practices would be better. Afterwards I seriously wondered why there aren't serious accidents just about every day, and how the planes even get off the ground. :-(

TLDR: Dream on... :-(


> "Programmers were like chess players trying to play with a blindfold on—so much of their mental energy is spent just trying to picture where the pieces are that there’s hardly any left over to think about the game itself."

This is an insightful description of the biggest mental challenge I face when programming.


My personal method is, every line of code that I write, I write for someone else. If it's internal, it's always for another developer (who may not exist) and, for the upper layers, a user (of the function/API/library) that may peek behind the scenes trying to debug some problem.

If it's anything facing the "user", it's entirely written for that user, and I know they're not very good at reading or writing software.

Thinking about the developer helps prevent me from getting lost in complexity, abstraction, or the general specifics of the problem. Thinking of the user keeps the game in sight.

In all cases, I've had very "clever" solutions that are fun and challenging, that I completely scrap for something the other developer, and myself, can understand in 3 months.


> for another developer (who may not exist)

You, 6 months from now (or whenever you stop touching this code base), are effectively "another developer".

New developers don't seem to really grasp this until the first time they have to maintain their own old code. I don't think it really hits home until the first time you experience: "What is this? Who wrote this garbage??" *runs git blame* "...oh, crap."


This is why I usually throw in comments with myself as the recipient for understanding where complexity exists.

I've heard people debate over whether comments should be in well written code, including some that argued comments should never be used at all.

Party A: Code should be easily understandable so that comments aren't necessary.

Party B: Comments should exist where complexity exists to save a developer's time when determining what is and is not important to them.

Sometimes though, you just can't avoid it. The few times I had to write Perl are prime examples.


Obviously the worst comments are utterly obvious:

    // Get the user's name
    name = get_name();
There are lots of cases where inline comments are very useful though.

As an example, most of my team isn't strong on regular expressions, so when I use one, I'll usually put a comment explaining what it does in slightly more detail than I normally might (to the point it would be a bit too rudimentary to anyone well versed in regex).

I've also been writing lots of automation scripts for stuff running at AWS lately, which often involves using strange AWS CLI filter syntax, jq expressions (for JSON parsing) and some other random utilities like sed, cut, sort, etc. Even for myself, I put fairly detailed comments, but I don't use those on a regular basis and usage isn't always obvious anyway.
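
For instance, a sketch of the kind of comment I mean (the pattern and the ID format it claims to match are just an illustration, not a reference):

    import re

    # Matches EC2-style instance IDs such as "i-0abc123def4567890":
    # a literal "i-" followed by 8 or 17 lowercase hex characters.
    INSTANCE_ID = re.compile(r"^i-(?:[0-9a-f]{8}|[0-9a-f]{17})$")

    assert INSTANCE_ID.match("i-0abc123def4567890")
    assert not INSTANCE_ID.match("ami-12345678")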


Actually I would consider that a decent comment, since get_name() doesn't actually say what name it is getting. It could have been some other name in the system.

This would depend on the context and the surrounding code though.


Regular expressions are the perfect use case for unit tests, they are simple input/output pure functions. Not only do you ensure that they work, you provide several examples to future devs of what should and should not be considered a match.

On the automation scripts I really need to take a leaf out of your book though, I'm awfully undisciplined at commenting them and it always comes back to bite me.
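
A minimal sketch of that idea, assuming a made-up ZIP-code pattern as the thing under test; the test cases double as documentation of what should and should not match:

    import re
    import unittest

    # Hypothetical pattern under test: a US ZIP code, optionally ZIP+4.
    ZIP_CODE = re.compile(r"^\d{5}(?:-\d{4})?$")

    class ZipCodeRegexTest(unittest.TestCase):
        def test_matches_valid_zip_codes(self):
            for zip_code in ["12345", "12345-6789"]:
                self.assertTrue(ZIP_CODE.match(zip_code))

        def test_rejects_invalid_zip_codes(self):
            for zip_code in ["1234", "123456", "12345-678", "abcde"]:
                self.assertFalse(ZIP_CODE.match(zip_code))

    if __name__ == "__main__":
        unittest.main()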


100% agree on unit testing regexes, but spending a minute to type a detailed comment that saves someone the extra time of navigating to and reading several unit test cases is worth it. If someone is modifying the regex the tests are essential, but for someone just reading the code, why not save them the time?


Generally the best comments are to describe "why" (sometimes "how") more than "what".

Of course, good writers know their audience. If that code is somehow strange in its organizational context, a few "what" comments and links to references, documentation, and/or tutorials can be great.


It's analogous to a translation note: any time you need to add a comment is a time where you've failed as a programmer. Sometimes it's necessary, but it should be a last resort.

I can well believe that writing Perl would require comments, but to my mind that's a case of "don't do that then".


This is very much my style.

I've found that my approach almost always boils down to "how can I make this obvious". This doesn't mean you can't have complexity, but it should be obvious how to use something, and hard (ideally impossible) to use wrong.

A great many projects fail this, or make assumptions about what obvious is.

A bit of a digression, but I have found that while I have started shifting towards more strictly typed tooling to facilitate this (the typing can often act as a guide which is easier to understand than prose due to consistency), the projects that tend to do this best are ones in the loosely typed languages.

My assumption is that in a language like Python, people are forced to think more about what is obvious, because it's so much easier for users to do the wrong thing. Whereas when you have (for example) a Java system, people just assume that because the types are there, it's self-explanatory.


> My personal method is, every line of code that I write, I write for someone else. If it's internal, it's always for another developer (who may not exist) and, for the upper layers, a user (of the function/API/library) that may peek behind the scenes trying to debug some problem.

> If it's anything facing the "user", it's entirely written for that user, and I know they're not very good at reading or writing software.

In my opinion, that's a very reliable way to end up with a deep hatred for the person in question.


I feel that it is dramatically less like playing blindfold chess and more like playing normal chess.

You can always refer to the code or the state in the debugger at a given time, but being able to think through steps ahead in your head is very helpful. Likewise in normal chess you can always refer back to the board to reset your mental state, but it helps to be able to think 5 moves in advance.

Playing blindfold chess isn't much harder at the beginning of the game; you can always reset to the initial state and replay the moves if you want to, so basic play isn't too hard. But as the game goes on it becomes hard to remember "Did I move the A rook or the H rook?". If you could refer to the board every 10 moves, blindfold would not be very hard at all.

When programming, all the information is there, being able to run it in your head is just faster than asking the computer to do it.


+1. Same experience.


A lot of it comes down to the fact that this isn't engineering, and we aren't engineers.[0] The field has a very low barrier to entry (much lower than a bachelor's degree). The standards are low, the expectations are low, and there's a strong anti-intellectual streak. Don't believe me? Try talking about "esoteric" stuff like category theory, logic programming, LISP or writing functional specs. At the workplace most of your fellow professionals will stare at you blankly, and even in self-selected places like here or proggit, half the people will rush to dismiss it.

There aren't - as an example - mechanical engineering "bootcamps" because mechanical engineering is an actual engineering field, with high standards, real accreditation, and professionalism. We don't have that, we have middle aged men giving talks in t-shirts and saying stuff like "ninja" and "awesome".

I think we need to face up and rectify the fact that we aren't engineering before we can advance. And yes this requires excluding people who aren't up to standards. We're not the equivalent of chemical engineers - we're the equivalent of alchemists.

[0] with the Caveat that stuff done in Avionics, or Medical Imaging, or anything else with very high standards and rigorous processes could probably be called that.


We're the last market-driven field. So most companies take the attitude of "Take your approach. I'll see you in the market". This isn't anti-intellectualism or realism or pragmatism. This is a different fitness function for software. Bug-free programs aren't inherently good.

If I'm building X-ray scanning software I'm going to be careful. But if I'm writing a Slack lunch bot in Coq for anything but the fun of it, I'm making questionable choices.

You know the old saying: "Anyone can build a bridge that stands. It takes an engineer to build a bridge that barely stands."


I'm not talking about anything as ambitious as making all software be formally verified.

All I want is for the people in this industry to take it seriously, for us to have standards as an industry, and some kind of professional certification to exclude those that don't know the basics. Software is rife with cowboys and it harms the image of those of us who actually care.


> category theory, logic programming, LISP or writing functional specs

But does it fix my problem? I'm sure category theory is interesting for its own sake, but it won't help my boss add this extra attribute to our product, so it won't help me get paid.

Given that I've already added fifteen form fields this week, my mind is too overburdened to care much about category theory, which will have exactly zero relevance to my next week of adding form fields.


If your role in a company is so removed from the business or stakeholders that you're being assigned tasks like "add 15 fields", then IMO you're already in trouble.

That's part of a larger issue though, where we actually let non-technical people dictate technical specs to us.


I remain open-minded to the idea that category theory could help me make better computer programs, but I have yet to see anything that suggests to me that it really would.

FWIW, I understand how monads work in Haskell & co, and I definitely see their value. But I don't consider myself to know any category theory at all.


SQL and category theory have a fair degree of overlap so there is that.


Are you suggesting that reading a book on Category Theory might make me better at writing SQL queries?


Mechanical engineers oversee a software subculture of their own, since many of their creations are physically manifested through CAD/CAM that spits out g-code to run on mother machines, aka CNC machine tools. I personally would like to see the existing macro primitives available on CNC controls formally structured and abstracted up to enable "normal programming" to occur, eliminating the fragile, spaghetti-code mess that seems to be entrenched. As a g-code programmer I am unable to edit constructively anymore, but this need not be the case.


I agree with the premise, but I'm unsatisfied with the solutions offered. I've worked on both C code and model based designs in Simulink and other more purpose built tools for safety critical embedded systems.

First off I can't agree with the notion that programmers don't care about the system. They do. A lot. Especially the good ones. You can't do any meaningful work without caring about the system you're working on. But like in any profession you have some useless people.

Second, on MBD. In my opinion it leads to much more complex code, especially because the people writing the code aren't coders anymore, or don't see themselves as such. You say the people crafting the requirements are now speaking the same language as the coders? I say we've now siloed the people working on requirements from the actual coders, which leads to exactly that: the coders have no clue about the requirements, and the requirements engineers have no clue what their model-based changes do to the software. MBD is the worst cause I've seen of spaghetti code going mainstream.

MBD promotes spaghetti simply because it's so much easier to tack on more and more complexity without a serious understanding of what it does to code.

Another thing is that I find MBD often harder to read than code. I can't really point to hard evidence, but I have a good analogy here in terms of UX. If code is like Google, MBD is like Bing's 2D tiling. It's much easier for a human to parse a list of statements, than statements that are all over the place. I think if we did meaningful research here, we'd get the same result for MBD vs code.

We still shouldn't give up on looking for better ways to code, but I think the better ways are easier-to-use programming languages that preserve the general-purpose nature of programming, better interactive environments (like F# Interactive, the Python REPL, etc.), helpful syntax highlighting, and better code annotation systems. Maybe live rendering of markdown and explanatory pictures in the IDE would be a helpful first step.


> Another thing is that I find MBD often harder to read than code. I can't really point to hard evidence, but I have a good analogy here in terms of UX.

It is harder to read. Both for humans and for programs.

In code, if you see a mysterious call to "flushData", there are standard ways in each language to discover exactly where the definition of that logic is. In contrast, what is the standard way to discover what a circle means? A dotted arrow?

OK. Maybe your IDE has a right-click "go to shape definition" option. But that's not a language feature. That's a tool feature.

So you don't really have a language spec. You have a tool spec. So the problem is that the language is hard to read because it's not a concrete language. It's a nebulous implementation detail of a tool.


What is your approach to delivering a programmatic solution to a given problem? Without knowing that, I can't quite tell if you understood the article at all.


So we need to write more code to make fancy Photoshop-like editors for the average-Joe programmer who can't see the big picture? That just adds more to the code issue - even Photoshop has bugs you know.

The fact that "Few programmers write even a rough sketch of what their programs will do before they start coding" is a soft-skills/experience issue that isn't specific to software. I too can use Photoshop to draw logos, maps, and diagrams, but I'm pretty sure someone out there who uses Photoshop professionally knows a lot more tricks within Photoshop and sees the bigger picture of what they want to create before starting. I generally free-hand my drawings, as opposed to the structured outlining most professional artists do.


No, we need to use logic and set theory and provide tools for visualizing the implications of our rules and checking correctness of desired properties. There's a strong history and lots of good people working on these things but it's tough to get our message out to working programmers. This article helps but based on the comments we've got a lot of perception work to do :)


There are definitely people making a real effort here, and I appreciate you. But I can't shake the feeling that the reason your task is so hard is that everyone before you has been selling snake oil. "Visual programming", "human-readable software languages" and so on are all just ways of saying "crippled tools".

It's not an accident that the examples in the article were WYSIWYG editors, Photoshop, Squarespace, and Mario. All of those things flatten neatly into two dimensions, and their terminal form is visual. The visualized code is the product.

Meanwhile similar initiatives for software in general are almost always nonsense. There was a pretty compelling TED talk* about visual tools for cybersecurity, shared by lots of people I know. Only one problem: none of the fancy tools showcased work, or will work. They're operating backwards from a known answer. Like most programming tools, they collapse right where they're needed most, stopping at the same edge cases that cause these problems.

More broadly, it seems like the existing tools for known-safe software run exactly the opposite direction from Bret Victor's vision. Correctness proofs, exhaustive test suites, reversible debuggers, and so on are far from non-code, but they're what we use where reliability matters most.

I can appreciate that "code everything like the space shuttle" is hopeless, and I would love to see breakthroughs on easier correctness checking. But right now, it does feel like an attempt to push tools that won't work when they're needed.

*TEDx, but selected by TED for special featuring: https://www.ted.com/talks/chris_domas_the_1s_and_0s_behind_c...


"Visual programming" means "crippled tools" because tooling is at least an order of magnitude harder with a visual language than with text.

As a small example, let's talk about ignoring cosmetic details in a program.

How do you do that with text? You strip whitespace, comments perhaps, and you have a pretty good approximation short of building a syntax graph.

How do you do that with flow charts? How do you "strip" purple diamonds versus green boxes? Is the shape cosmetic? The length of line? What about dashed lines, are those semantically meaningful? What about the layout? Do leftmost lines take precedence or can you rearrange the order of edges? Or are the edges labelled somehow to specify precedence?

What does this all mean? Well, it's possible to come up with really compelling demoware. So it's easy to write an article or pitch a concept. But writing a commonly understood visual programming language is hard because we don't really have visual languages, full stop. The closest thing we have are mostly-universal signage, but that's just about tagging matter and locations, not about communicating complex thoughts about procedures.
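
To make the text side concrete, here is a rough Python sketch (using the standard tokenize module) of the kind of "ignore the cosmetics" normalization that is trivial for text and has no obvious equivalent for boxes, colors, and line lengths:

    import io
    import tokenize

    # Token types we choose to treat as cosmetic when comparing two snippets.
    COSMETIC = {tokenize.COMMENT, tokenize.NL, tokenize.NEWLINE, tokenize.ENDMARKER}

    def normalize(source):
        """Drop comments and collapse spacing so only the token stream is compared."""
        tokens = tokenize.generate_tokens(io.StringIO(source).readline)
        return " ".join(tok.string for tok in tokens if tok.type not in COSMETIC)

    a = "x = 1  # set x\ny = x+2\n"
    b = "x=1\ny = x + 2\n"
    assert normalize(a) == normalize(b)  # same program, different cosmetics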


Yeah, I'm not a fan of visual programming, but visualizing the effects of programs to help with reasoning, I'm a fan of that. Proofs are my approach, but as humans we don't deal well with proofs as text, and we have things like sequent calculus and semantic tableaux that are amazing aids to reasoning.


Humans can read and write proofs just fine, if they are taught how to.


The problem is that the proofs that one finds in typical math papers are very different from the proofs for computer programs.

Proofs in math papers are "mostly right" about rather complicated facts. This means that what is shown is highly non-trivial, but when an error is found, it usually does not matter, since it is usually easy to fix the hole in the proof. The reason seems to be (but this is my personal opinion) that the typical things mathematicians love to write proofs about have a high level of redundancy with respect to this kind of error.

Proofs for computer programs, on the other hand, prove statements that are rather trivial and obvious, but very subtle in the edge cases. Often a non-formalized proof is "trivially correct" when a human skims it, but often is wrong for very, very subtle reasons. Thus there is typically no way around formalizing it - which with today's tools is very tedious and boring.


> The reason seems to be (but this is my personal opinion) that the typical things mathematicians love to write proofs about have a high level of redundancy for this kind of error.

Rather than “redundancy”, it's a matter of having nice (algebraic, geometric, whatever) structure. Of course, a pure mathematician has more freedom than a programmer to decide what kinds of structures he wants to work with.

> statements that are rather trivial and obvious, but very subtle in the edge cases.

This is a contradiction in terms. If it seems “trivial and obvious”, but is actually “subtle in the edge cases”, then you are underestimating its complexity.

> Often a non-formalized proof is "trivially correct" when a human skims it, but often is wrong for very, very subtle reasons.

Then you need to roll up your sleeves and actually prove things, not just skim through purported proofs.


Do you also consider static types, for loops and exceptions to be "snake oil"?


No, and I don't think they have anything to do with Bret Victor's vision outlined in the article.

If by "change how we code" you mean use Rust for safety guarantees? Then yeah, sure. But that's not how you get to "not putting code into a text editor".


It goes beyond perception; I think there's something missing, and I think someone is close to finding it.

Databases are already employing logic and set theory to great effect (whether practitioners realize it or not). The fact that you can write a few dozen lines of SQL in a few minutes, and run it over millions of records, and feel confident in the results is astounding.

But there are other types of programming where it has not been so successful. I'm trying to learn some TLA+, and I'd like to see if it can help model resource management/lifetimes.

I think rust is very promising because it brings a lot of PL ideas into a practical setting. One of the best ideas they had was to make it not depend on a runtime, so you could use it without reinventing the world.


Yet! The 911 outage described in the article was the result of crossing a threshold that was - logically - dead simple both in how it worked and how it was to be understood. The issue was that normal operation of the system pushed through that threshold. The problem here wasn't that the software was changed in a way that made it perform unexpectedly.

I don't have a silver bullet, or at least not a pithy one. But I don't see how set theory & etc could have led to any different outcome here.


In my book, operating software is a different skill set from writing software.

In the 911 outage, my first thought was: Alright, we have a server accepting calls, and dispatching calls. If there is too much of a difference between incoming and dispatched calls, there is a problem. Maybe it's a capacity problem and we don't have enough 911 operators, maybe it's a software problem, maybe we're dealing with a DoS. I don't need to know or understand anything about the dispatch software, and I'm pretty sure I can monitor, measure and alert these metrics independently from the software.

We can twist, turn, push and shuffle Dev, Ops and DevOps around, but good operators largely try to mitigate failure and risk, including their own limited imagination how much infrastructure and software could fail at once. That's a very different skill than trying to make a single component in a network bug-free.
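
As a sketch of what I mean (the metric names and threshold here are invented, not from any real 911 system), the operational check can be almost embarrassingly simple and still independent of the dispatch software:

    def check_call_backlog(incoming_calls, dispatched_calls, max_backlog=50):
        """Alert when the gap between incoming and dispatched calls grows too large,
        regardless of the cause: capacity, a software bug, or a DoS."""
        backlog = incoming_calls - dispatched_calls
        if backlog > max_backlog:
            return "ALERT: %d calls waiting (threshold %d)" % (backlog, max_backlog)
        return "OK"

    # Example: 120 calls received in the window, only 40 dispatched.
    print(check_call_backlog(120, 40))  # -> ALERT: 80 calls waiting (threshold 50)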


How long have you worked as a professional programmer? Because you sound like a just-out-of-school idealist with an idea of the "right way to do it". That ideal tends to get blunted after a decade or two of experience in the real world.

In particular: Set theory? I don't want to program using set theory for the same reason that I don't want to program using octal - it's the wrong level of abstraction for what we're actually doing, no matter whether it's formally equivalent or not.

Tools for checking the correctness of desired properties? I can agree with you there; better tools for doing so would be useful. So far, the problem tends to be that the tools are less generally useful than the tools' authors and proponents think they are.


I don't know what kind of code you've been writing, but I've been working in compilers and static analysis for decades, and set theory is exactly the right tool for the job. I don't imagine it's the only thing you would ever need for any kind of programming, but I bet there aren't many kinds of programming to which it is entirely irrelevant.


The quality of the vast majority of my code is evaluated in 'soft' ways. Anywhere from "does this webpage's layout work well on all browsers" to "does this AI feel like a challenging opponent" to "did you deliver software results fast enough" to "is the code neat enough and well documented". The part that can be meaningfully tested through any kind of formal system or automation is minuscule, and it's very rarely where the problems lie.


What you're talking about is what I call the "specification-implementation gap". In some domains, that gap is vanishingly small; webpage layout is a good example. Sometimes the specification is as simple as "when the user clicks this button, display this message". There's nothing to be gained by formally verifying that particular property of the specification.

But sometimes the gap is wider. An only slightly more complex example would be "the shopping cart contains all items added to it in the current session and not subsequently removed". Sounds trivial, right? Well, as it happens, just a couple of days ago I bought two items from a site whose cart was broken -- it would show only the most recently added item. (I had to make two transactions. Fortunately no shipping fees were involved.) Okay, that's a rare sort of bug, but it just shows how even very simple invariants can get broken on occasion.

Sometimes the gap is even wider. "This app must have no XSS vulnerabilities." There's no one place in the program where you can see that that property is correctly implemented.

You see where I'm going with this: the wider the specification-implementation gap, the more useful formal methods can be.
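
To make the cart example concrete, here is a small hand-rolled property check in Python (a real project might reach for a library like Hypothesis instead); it just hammers a toy cart with random add/remove sequences and asserts the invariant stated above:

    import random

    class Cart:
        def __init__(self):
            self.items = []

        def add(self, item):
            self.items.append(item)

        def remove(self, item):
            self.items.remove(item)

    def check_cart_invariant(trials=1000):
        """Invariant: the cart holds exactly the items added and not subsequently removed."""
        for _ in range(trials):
            cart, expected = Cart(), []
            for _ in range(random.randint(1, 20)):
                if expected and random.random() < 0.3:
                    item = random.choice(expected)
                    cart.remove(item)
                    expected.remove(item)
                else:
                    item = random.randint(1, 5)
                    cart.add(item)
                    expected.append(item)
            assert sorted(cart.items) == sorted(expected)

    check_cart_invariant()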


For static analysis, OK. For a compiler, OK. But I want the static analyzer and the compiler to do that so that I don't have to - I don't want to have to think in set theory to program.

And in saying that, I see that I was probably unfair to haskellandchill's point. The claim wasn't that I needed set theory; the claim was that the tools needed it (at least, so it appears to me on re-reading).


Not every program property of interest can be (conveniently) verified using automated tools - especially if your verification tool is a type checker! Sometimes you need to roll up your sleeves and prove things yourself. (Which, by the way, isn't always a bad thing.)

That being said, hopefully you won't have to use set theory, where any structure that actually matters to you has to be encoded in terms of sets containing sets containing sets, until you reach turtles.


8 years of web dev under my belt, but yes I do have some academic pedigree and have been investing my time heavily in the history and techniques of formal methods. I'm hoping to contribute to bridging the gap. Don't get too hung up on set theory, I was just referencing the article.


It is my impression (as an outsider) that web programming might be the hardest environment for formal methods. Yes, I could see formal methods proving that only valid HTML was ever emitted. But how do you prove that your layout aligns on Firefox 55.0.3?

So it seems to me that, if you can make progress on formal methods in your environment, you're not picking low-hanging fruit...


> I could see formal methods proving that only valid HTML was ever emitted.

I think we could also prove the absence of XSS vulnerabilities. That would be very valuable.

> But how do you prove that your layout aligns on Firefox 55.0.3?

This is harder, admittedly. Mozilla is probably not going to provide the formal model that haskellandchill imagines, unless that model can be automatically extracted from the actual code, which may be possible someday.


If Firefox provided a formal model of how it lays things out, I could prove I use it correctly in my code. It's about trust and providing models for other programmers to use in their proofs along with our code. Or code and proofs can be even more intertwined, since they are very related.


Interesting point.

It's hard to prove theorems without the [correct] axioms.

I suppose what our field needs to do is some "triage" categorizing aspects of systems that MUST be "defined and proven" vs those that can be "intuited" - e.g. - back-end database updates need to be correct, but it's acceptable to have the user reload a web page if something crashes occasionally???

But The Man wants lots of 'wares made by cheap full stack developers, er, dev-ops :-(


> So we need to write more code to make fancy Photoshop-like editors for the average-Joe programmer who can't see the big picture?

Rather, the idea is to remove layers of unnecessary mental burden and complexity, similar to how a well-designed programming language might reduce the cognitive load of dealing with the syntax itself (vs. the problem you are trying to solve).


Programmers write code badly due to time constraints in many cases, or because of really disconnected teams being led by business rather than engineering.

Rarely is budget allotted for quality code; as a coder you have to fight for it internally, to your own detriment on timelines/shipping.

Engineer led companies, or companies that value engineering as a main decision maker in the company usually fare better with issues like this and are already smart about design, architecture, standards, security, reliability, interoperability, user experience and more.


Even given good amounts of time, engineers often skip tools that could catch some bugs. Examples: TLA+, fuzzers, full code coverage, etc.

Sadly all things are economic, so we apply the tools where the cost of a bug is high. In the case of 911, it's very high but underfunded. In the case of some consumer app, it's (presumably) very low...


The article didn't mention it, but I think that Excel is a great example of where "the masses" have learned to do programming. It is visual and the effects of changes are immediate.


My uncle worked at a very large corporation that couldn't update to a later version of Excel because they relied on some spreadsheet to do their taxes, and some minor difference between versions broke the spreadsheet. No one could reverse engineer what the spreadsheet did. With a complex enough sheet, every cell becomes a separate function with no documentation.

That's unfortunately a pattern you see with these efforts to make things simpler and more visual. They're great tools, until it gets too complicated, and then you find you'd be better off just writing code.


Yes, exactly. What fools people is that programming in the small is indeed trivial, and so they think that all programming is trivial. What they don't understand is that as programs grow, their scale and complexity become barriers in themselves; and then managing that complexity becomes the critical factor for success. The reason they would be better off just writing code is that, well, for one thing they would be working in a language with better-defined semantics that wouldn't change out from under them, but beyond that they would be able to use some of the other tools that software engineers have found useful over the years: starting with simple things like well-chosen variable and function names and comments, all the way to version control, none of which is possible with Excel (AFAIK; I'm not an Excel user; but even if they're possible they don't seem to be standard practice).


I totally agree. I just wonder if there is some way to take that open space, visual programming which so many people do with Excel and make it more robust. Well, actually, lots of people are working on making a better spreadsheet or putting the table data into a database and making interactive plots, so maybe that's not the angle here. Maybe we can figure out how to make our IDEs more Excel-like in some ways. Not for us regular programmers, but for people who'd like to do serious programming in their subject/domain of expertise but don't want to just do it in Excel with VB.


For the things Excel is used for, I find a few Unix tools and piping between them can achieve a similar effect yet still be maintainable, debuggable and repeatable. In lieu of convincing the masses to work this way, I wonder if visual tools could take the best parts? Instead of updating cells on sheets (global variables), have a UI that allows users to break it down into a series of steps with an input and an output, much like a makefile. Then it would be possible to step through the script, inspect the input and output for each step and view the final result.
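
Something like the following Python sketch of that idea (the step names and data are invented): each step is a plain function from an input to an output, so a tool could show the intermediate result after every step the way a spreadsheet shows a cell, while the whole thing stays repeatable.

    def parse_rows(text):
        return [line.split(",") for line in text.strip().splitlines()]

    def keep_paid(rows):
        return [row for row in rows if row[2] == "paid"]

    def total_amount(rows):
        return sum(float(row[1]) for row in rows)

    PIPELINE = [parse_rows, keep_paid, total_amount]

    def run(data, steps=PIPELINE):
        for step in steps:
            data = step(data)
            print("after %s: %r" % (step.__name__, data))  # inspectable intermediate result
        return data

    csv_text = "alice,10.0,paid\nbob,5.5,unpaid\ncarol,7.5,paid"
    assert run(csv_text) == 17.5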



And yet its results aren't any more reliable. They seem less so to me. https://www.bloomberg.com/news/articles/2013-04-18/faq-reinh...


The effects of a change may be immediate, but not necessarily immediately obvious. Subtle bugs are a problem in any programmable system.


Yes, and there are plenty of examples of spreadsheet-bugs having significant real-world impact. E.g.,

1. https://www.bloomberg.com/news/articles/2013-04-18/faq-reinh...

2. https://www.cio.com/article/2438188/enterprise-software/eigh...


  “Software engineers don’t understand the problem they’re trying to solve, and don’t care to.”
In environments where management judges by (and is judged by) other metrics, this result is inevitable.


More specifically, I think those engineers are trained not to care. They have tried caring, experienced unpleasantness from their "superiors", and learned to stop.

But this is reversible. I've done coaching in large-company environments and it's not at all hard to wake developers up, to get them to care again. The hard part is changing the environment so that caring is rewarded, not punished.


> Software engineers don’t understand the problem they’re trying to solve, and don’t care to.

I think this is unfair. Engineers usually have great (at least qualitative though often quantitative) insight into potential, cost, and complexity. The problem tends to be in getting that feedback back into "the room where it happens", the place where budgets are set, approvals are given, disputes are resolved, and performance is evaluated. In that room, there is usually a heavier presence by people who understand organizations, political concerns, messaging, and revenues. So when the cost and risk experts (the devs) are not in the room, of course we end up with overly complex, poorly understood products that chug along for a while and then become so massive that either:

a. the project collapses under its own weight

b. the project creates its own "gravitational pull" and starts pulling in resources to support itself; sometimes this is locked-in customers who cannot migrate away, sometimes this is locked-in enterprises who can't imagine how (and sometimes why) to kill it off


In some cases, however, ignorance on the part of the engineers (especially those with any say in the architecture of the application) of the users' needs can seriously damage the user experience for the life of the design.

I face numerous examples in the sports-ticketing world all the time. For example, if you are going to a football game, what is the most important consideration to a fan? I say the most common priorities are:

1) nearness to midfield ("What yardline are my seats nearest?")

2) sun exposure ("Will I be baking in the sun all day? Do I need sunscreen? Will I be looking into the sun when trying to watch play on the field for a portion of the game?")

3) home vs. visitor side

4) elevation ("Am I low enough to see the players? Am I high enough to not be obscured by bench and media personnel?")

If you're designing the site/app and you've never been to a (an American) football game, you won't know these criteria are important.


Sure. I was just saying the cluelessness goes the other way, too, so putting it all on devs is at least one-sided.

Maybe a product genius can come up with better ways to sell shady stadium seats. But the technical contributors would be able to tell you that the shade-o-matic feature is layers of bad hacks that cost the company $X million a year in dev costs and astronomical security and maintenance risk since it is still running (partly) on old hand-configured Windows XP boxes using a hand-rolled UDP server library backed by an Access database. And that's for a key differentiator for the company.

Profit and success don't happen until both sides of the story are accounted for.


This is the environment of large software companies.


It's like blaming the assembly-line worker for a badly designed car.

Software engineers don't solve problems, they implement solutions someone else thinks will solve a problem.


The analogy would be more appropriate if you said programmer and not software engineer. Anyone calling themselves a software engineer should be providing input into the design of the thing they're producing. Their role is very explicitly managing requirements and codifying them in, well, code. Either directly doing the programming or managing those who do.


Except software engineers are often placed in subordinate roles because "WE WANT ONLY THE BEST". If you aren't given the authority to change the design, caring about it is an unnecessary source of anxiety and stress.


Many times, I feel this.

Really though, caring about what the problem is and trying to solve it means: 1) you work slower, 2) you ask a lot of questions, 3) you push back on things. Basically you're spending more mental effort trying to encapsulate what the problem is than the stakeholder.

Those things will often make people either hate you or think you're bad at your job.


I agree with this.

Imagine studying web design, accessibility and usability for many years. You feel that you have something to contribute to the design of any app.

Then the higher-ups just hand you a spec and assign you a task to fix this bug or implement this feature, without your input.

That's pretty damn demoralizing. It just pushes you towards caring more about streamlining your own workflow than about the actual usability of the app.

Maybe that's the reason there is so much interest in what workflow to use, what IDE/text editor to use, which languages to program in, etc.


Are there companies that actually distinguish between these roles? For all the ones I've ever worked at, "software engineer" and "programmer" are synonyms.

(I'm aware that there's been some effort to extend the legal significance of the word "engineer" to the software industry, but I'm curious if that difference is actually in practice anywhere today)


I dunno, it's just a title game. I would guess it's because programmers in the 90s simultaneously had massive hubris and insecurity issues and wanted to be just as respected in the community as actual engineers, but who knows.

Engineer evokes a sense of design, but so does programmer. When I started, there was no job where you just coded straight pseudocode specs. Business described the problem the way business does, and you had to convert that into a computer solution. Within that solution, different developers would own various parts of it.

I would say the entire process of having coders who design systems and coders who just write code is inherently broken. Businesses want to think of development as prod-ops, but it's more like book publishing. I'm of the mind that if a coder designs a system in enough detail to write pseudocode, they should just write the application.

The better approach is to have an architect/lead model subsystems and a general skeleton of the application, then have different teams implement the subsystems. Clearly defined interfaces on all inter-subsystem communications.


There was definitely a time in the 90's when there was interest in having specialized software engineers/architects produce comprehensive designs for software that code monkeys would be tasked with fulfilling. I wouldn't be surprised if there were places today with strong management bias that thinks of "designing" as high value-add and "coding" as low value-add, and the two tasks as falling under the purview of different individuals.


Depends where you live, the distinction is clear where the title of "engineer" is protected.


Was thinking about this the other day: in big companies SWE individual contributors are the new line workers. Large enterprises end up trying to manage them with business-metric goals, without much visibility into the process issues that hamper overall quality or production efficiency. Has anybody thought to apply the Toyota Production System in engineering management?

https://en.wikipedia.org/wiki/Toyota_Production_System


Yes, a whole lot of people have thought of that.


DevOps takes this as a primary input.


Also, a software engineer, being a technical person, can probably understand the problem quite well - but that doesn't help when the management doesn't care about solving the problem, but about sorta-solving it in order to maximize profits.


I'm surprised understanding the domain hasn't been mentioned. Whether I am developing for someone else or for myself, it turns out misunderstanding/misrepresenting the domain is the most common source of trouble.

If your understanding of the domain isn't thorough, is TLA+ going to be much help?


It helps in two ways:

1) You have to specify your system, right? Without TLA+, you can just wave your hands and say "okay, this part does something, I guess." With TLA+, you have to force yourself to understand what, exactly, you want your system to do and what you want out of it.

2) Most systems have edge cases, side effects, and race conditions. Are you sure your design is robust against them? You might think you have good arguments for that, but wouldn't it be better to rigorously _check_?

Tests and types and stuff help you find bugs in your implementation. TLA+ helps you find bugs in your blueprints.
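To make "rigorously check" concrete: a model checker exhaustively explores every interleaving of your design and tests an invariant against each one. A toy sketch of that idea in plain Python (not TLA+ syntax; the two-worker counter is made up), where the "design" is two workers doing a non-atomic read/increment/write:

    # Brute-force exploration of all interleavings of two workers, each doing
    # an atomic read followed by an atomic write of (read value + 1).
    # Invariant: two increments should always leave the counter at 2.
    from itertools import permutations

    def explore():
        violations = []
        steps = ['A_read', 'A_write', 'B_read', 'B_write']
        for order in permutations(steps):
            # keep each worker's own steps in program order
            if order.index('A_read') > order.index('A_write'):
                continue
            if order.index('B_read') > order.index('B_write'):
                continue
            counter, local = 0, {}
            for step in order:
                worker, op = step.split('_')
                if op == 'read':
                    local[worker] = counter
                else:
                    counter = local[worker] + 1
            if counter != 2:
                violations.append((order, counter))
        return violations

    for order, result in explore():
        print('lost update:', order, '->', result)

TLC, the TLA+ model checker, does this same kind of exhaustive exploration, just over a much richer state space and with a real specification language on top.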


Good points!

Writing unit tests before code can help avoid mistakes in the interface design.

Perhaps similarly, writing a formal specification could expose the holes in your domain understanding.


> “The problem is that software engineers don’t understand the problem they’re trying to solve, and don’t care to,” says Leveson, the MIT software-safety expert. The reason is that they’re too wrapped up in getting their code to work. “Software engineers like to provide all kinds of tools and stuff for coding errors,” she says, referring to IDEs. “The serious problems that have happened with software have to do with requirements, not coding errors.”

Did you mean something different from that?


Well spotted - missed that!

Though I think the "and don’t care to" part is a little harsh. I do care, though mostly fail. Sometimes I find it difficult to get useful information from the domain experts, and it doesn't help that domain experts often keep subtly changing the meaning of the concepts we've been working with, until nothing is left of the original and the whole system is a bit of a mess.


Yes, DDD (domain-driven design) exists to solve this very problem. It has been around since 2004 and I'm surprised to see it is still relatively unknown within the HN community.



> If your understanding of the domain isn't thorough, is TLA+ going to be much help?

I suspect it could, actually, because it lets you formalize and work with the implications of the understanding you do have without getting bogged down in the details of actual coding. Seems like this could give you the opportunity to debug your mental model much earlier in the process. (I confess I haven't actually tried TLA+, but I plan to.)


Our company uses memcached to store short-lived pieces of user-specific data for one of our (web) products. The way this was architected, any time a user visited a page, a request would be made to memcached for that user to check whether memcached had data, fetch the data, and then clear the cache.

This feature had always been a bit buggy, but then we decided to start adding more things to this user-specific data store. And it blew up. Data we wanted to store for the user wasn't being stored at all, and data was being fetched many requests after it should have been. Sometimes stale user data was being fetched days after it should have been cleared, and sometimes data was not being cleared at all.

After being pulled into the war room for this, I started looking into the code and architecture of the issue and was appalled. Code for the feature had been designed piecemeal, tests were almost nonexistent, and of course what tests there were covered only certain cases. The whole thing was a concurrent nightmare and would not hold up to concurrent reads/writes.
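To give a flavor of the failure mode (a hypothetical reconstruction, not our actual code): check-then-fetch-then-clear is three separate operations, so two requests can interleave between them and silently lose or resurrect data.

    # Hypothetical sketch of the racy pattern: get, use, then delete as
    # separate, non-atomic steps against a shared cache.
    cache = {}

    def write_user_data(user, data):
        cache[user] = data

    def fetch_step(user):          # step 1: check and fetch
        return cache.get(user)

    def clear_step(user):          # step 2: clear, some time later
        cache.pop(user, None)

    # One interleaving that loses data:
    write_user_data("alice", "first")
    seen_by_request_a = fetch_step("alice")   # request A fetches "first"
    write_user_data("alice", "second")        # new data arrives in between
    clear_step("alice")                       # request A's clear deletes "second"
    seen_by_request_b = fetch_step("alice")   # request B finds nothing

    print(seen_by_request_a)  # "first"
    print(seen_by_request_b)  # None -- "second" was silently lost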

Rather than sit there and make sense of it all (I started sketching it out on paper because it was so convoluted), I ended up writing the spec out in TLA+. Just forcing myself to write the spec made me consult the implementation dozens of times, to verify the exact behavior in a way that TLA+ could model. And then, after I did that, it was obvious that the code was broken.

So in my TLA+ model I shored up what I thought were the problems, ran the model checker, and was expecting a fix. Nope. The model checker found another bug. I tried a different strategy, and it found another bug. I iterated with the model checker dozens of times, and slowly changed the model entirely, making it simpler and clearer to understand. Finally the model checker couldn't find any bugs. I converted the model to real code and, after some code review, I shipped the fix. It worked.

I wish more people would spec their code with something like TLA+, but as I've seen in my own teams, the mantra is ship first think later.


Great article, but it leaves me with one question: what's the difference between good code-generating systems (the ones mentioned, that you'd use to build critical systems) and the horrors we saw back in the day with WYSIWYG HTML editors such as Dreamweaver (to the point where nobody today would dare to write an HTML website with anything other than a text editor)?

The second problem seems a lot easier to me, and yet it is where WYSIWYG failed spectacularly.


Dreamweaver was actually famous at the time for creating clean and readable HTML. It's FrontPage you're thinking of.


What went wrong with Visual Basic?

(HTML/CSS/Javascript was botched so badly that Dreamweaver-type editors no longer work. This is embarrassing.)


> What went wrong with Visual Basic?

The language was simple enough for beginners but not sophisticated enough to scale for large projects. All too often you would have a project that started in VB as a proof of concept which then extended to become the actual system, and then as that system grew it started to collapse under its own weight. The conversion to the CLR fixed a lot of that, but now it isn't really for beginners anymore.

> HTML/CSS/Javascript was botched so badly

That combination is atrocious out-of-the-box. There was no "botching" it, but rather (again) scaling to larger problems showed that the foundation was always made of sand.


Visual Basic was fantastic for kicking out a UI. If you needed anything larger, all you had to do was build DLLs and reference them. So easy front end and easy back end. The best of both worlds.


Visual Basic was also fantastic to kick out multithreading server code. The problem was always libraries and lack of language reflection.


> good code generating systems

To discuss a different perspective - why have text as an intermediate representation at all?

To elaborate - if you look at something like Smalltalk, you modify the 'running program' directly.


LabVIEW is the future


Any decade now...


The dangerous unreliability of software is an accountability problem, not a technology problem. Engineering disciplines are regulated so that engineers are held personally and even criminally responsible if they endanger property or lives. Software, being the newcomer, has no such regulation. Instead we have people trying to claim that software deserves a free pass from due diligence because "software is hard," when in reality other engineering disciplines deal with unexpected failure modes all the time. The only difference is that these other engineers are motivated to be significantly more thorough about their jobs.


> The only difference is that these other engineers are motivated to be significantly more thorough about their jobs.

Hang on. Sure, all engineering has hurdles. But how many engineering projects face regular enemy action? The last time somebody actively tried to compromise a system I was building was two weeks ago. How many engineers live like that?

We hold civil engineers responsible when bridges collapse unprompted. We also hold them responsible if their bridges fail from some minor, anticipated harm like vandalism or a car crash. We don't hold them responsible for building bridges that can be blown up.

If we want to hold software developers accountable for downtime, unprompted data leaks, and so on, fine. We should do it proportionate to the damage caused - there's no point in pretending that a shoddy smartphone game is as bad as a shoddy bridge - but fine.

But let's not pretend that software has the same difficulties as every other type of engineering. Most engineering doesn't face an epidemic of people actively trying to disable safety precautions to hurt people, and a lot of it reliably fails as soon as it does face enemy action. Punishing every engineer who ever gets outwitted (say, by a leaked NSA-developed vulnerability) is absurd.


> Hang on. Sure, all engineering has hurdles. But how many engineering projects face regular enemy action? The last time somebody actively tried to compromise a system I was building was two weeks ago. How many engineers live like that? We hold civil engineers responsible when bridges collapse unprompted. We also hold them responsible if their bridges fail from some minor, anticipated harm like vandalism or a car crash. We don't hold them responsible for building bridges that can be blown up.

Even more so, how many engineering problems have fuzzy requirements that on top of that change constantly and arbitrarily?

If someone thinks building bridges is the same as writing some big software, they really have no idea about working in the trenches.


How many software projects have to deal with gravity? Weather patterns? How many software projects need teams of construction workers to actually build?

Different kinds of engineering have different requirements. Just because the challenges in software are different from those in building bridges doesn't mean we get a free pass on correctness.


>How many software projects have to deal with gravity?

Gravity comes with a pretty little equation, and as far as problems go, it is as predictable as they come.

Tolerances of various materials are also well known in advance, and their behavior under different designs and with different levels of stress assigned can be trivially modeled with CAD packages (and even manually).

Same for weather. It comes down to a few behaviors (rain, wind, earthquakes of various degrees, sunlight) that people can model, and have been modeling for ages. We have 25+ centuries old buildings that still stand.

>How many software projects need teams of construction workers to actually build?

Not sure what you even mean here.

>Different kinds of engineering have different requirements.

That's obvious. The question we ask here is different: sure they are different, but are those requirements equally well defined and equally difficult across software and other engineering fields (like construction)?

And to that I say no. Construction has pretty solid, rarely changing requirements (and they almost never change ONCE construction has begun), and works with specific materials with a limited set of interactions. With software, modeling the entire universe is the limit as far as complexity goes.


>How many software projects have to deal with gravity?

A better question is: How many civil engineers need to deal with the fact that gravity keeps changing?

The nice thing about most engineering is that the laws of physics are assumed to be constant.

A program's universe is the hardware it runs on. When that hardware changes, the equivalent of the laws of physics changes.

Amount of RAM, clock speed, number of cores, cache size, etc are all things that you can't assume as constant. Real engineers need not worry too much about deadlocks the way we have to for software. If you have physically moving objects, you can easily design to have things synchronized (gravity doesn't change, friction won't change much, etc).

I have plenty of old software that won't work properly on today's PCs because it can't run on fast computers.


> But let's not pretend that software has the same difficulties as every other type of engineering. Most engineering doesn't face an epidemic of people actively trying to disable safety precautions to hurt people, and a lot of it reliably fails as soon as it does face enemy action. Punishing every engineer who ever gets outwitted (say, by a leaked NSA-developed vulnerability) is absurd.

The issue isn't exactly how software fails, but that software today doesn't properly protect confidentiality and integrity of data, ensure services and data are available without providing oracles, authenticate and authorize correctly, log properly, etc. This is a well-known methodology that is consistently ignored in favor of shipping products immediately. That's the issue.

Unexpected things happen, accidentally or maliciously, but care should be taken - the proper kind of non-negligent care called due care. In general, software is not fit for the purpose it was created for, appearing to work even though it really doesn't.


>> how many engineering projects face regular enemy action? The last time somebody actively tried to compromise a system I was building was two weeks ago. How many engineers live like that?

Every bit of military hardware, or child's toy, for starters.

More generally, you cite anticipated harm from vandalism or car crashes as being part of the general environment.

It is far past time to recognize that the hacking environment in which we live IS THE NORMAL ENVIRONMENT, and these hacks ARE everyday anticipated harm.

Software does not live in a virtual vacuum of space. It is on the internet, and will likely be scanned within seconds if unprotected. Even tall buildings are now designed, after 9/11, to better withstand jetliners crashing into them. To follow your metaphor, most of the hacks are completely unsophisticated, the equivalent of a bridge falling because some kid came along with a can of spray paint, or a screwdriver, and started pulling screws.

Let's also not pretend that other engineering disciplines don't also have to face real-world unpredictable and hostile inputs. Even children's toys have to be designed so as not to be hazardous under persistent, random, and clever unexpected attempts at use, misuse, and abuse by children.

The same thing for consumer tools, which have many design features to frustrate the unending supply of clever idiots out there trying to win Darwin awards and sue you if they fail (or their families if they succeed).

I've done both software and hardware, and your expressed attitude that software is somehow inherently more difficult or threatened is utter nonsense. The only basis for this is impatient project management where the key priority is to ship yesterday because the management are idiots and think software is trivial.

These approaches to start slinging code, write it fast and change it often are useful only for prototyping. In writing real software, my first two steps are 1) understand your problem in depth, and 2) work very hard for a long time to avoid writing code (i.e., better architecture to simplify, separate dependencies, etc.++). After substantial effort is put into those, then write only the code that is necessary. I take a similar approach to manufacturing, understand it, design a process with minimal key elements, then build those.

Software is not a 'special snowflake'.


> It is far past time to recognize that the hacking environment in which we live IS THE NORMAL ENVIRONMENT, and these hacks ARE everyday anticipated harm.

> The only basis for this is impatient project management where the key priority is to to ship yesterday because the management are idiots and think software is trivial.

I agree with the former, but it is negated by the latter; the economic incentives certainly negate it.

> These approaches to start slinging code, write it fast and change it often are useful only for prototyping. In writing real software, my first two steps are 1) understand your problem in depth, and 2) work very hard for a long time to avoid writing code (i.e., better architecture to simplify, separate dependencies, etc.++)

Ideally this would be the case, but this approach is a luxury many cannot afford. This is especially true with the "Agile", first-to-ship mentality seen in most startups.


I agree that the First-To-Ship mentality you mention and other shortsightedness negate the reality of the current environment.

But if the industry doesn't start changing itself, it'll have change imposed upon it.

The Equifax breach alone, especially if it is weaponized by a state actor, has the potential to completely undermine our banking system. All because of weak software and inadequate maintenance.

This is not like we have sophisticated attackers going after solidly designed & engineered structures. It's like we have teenagers and squirrels wandering around the back and freely climbing in windows where the specification didn't even include installing glass. And then commenters like GP saying that it is too great a threat to design around teenagers and squirrels.


Beyond that, the organization they're in sets the priorities; how is it going to fix anything if the engineers are on the hook for Equifax leaks but they still have the same (presumably not very security-conscious) managers?


You raise an interesting question here. Civil engineering these days is done on computers, mostly with incredibly convoluted FEM software and other simulation tools. All software vendors refuse to take any responsibility for bugs in their software and even block you from using the software if you don't agree on the terms.


The problem with software as opposed to most artifacts that humans produce in the world is: "chaos". Software is insanely sensitive to initial conditions, and a single logical error anywhere along the line leads to... well, chaos[1]. A single errant system may also have insane reach (as opposed to e.g. a bridge), so the consequences just amplify.

Unfortunately, we've started to rely on the reliability of software in unprecedented ways. (I'd guesstimate that it's probably mostly short-sighted economics that's driving it.)

[1] This is inherent to Turing Machines[2], so if we keep insisting on Turing Complete languages in every realm of computation we'll forever suffer this problem. Please don't think that I'm making an argument against general computing, I'm not. I'm making an argument for restricted DSLs for things that don't need more.

[2] We're not even talking the 'ordinary' chaos of dynamical systems here. Those can at least usually be approximated numerically for the near future unless they're actually really close to a point of divergence. Not so with programs.
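For a concrete feel of the restricted-DSL point above: if the "program" is plain data and the interpreter contains no loops or recursion driven by that data, evaluation always terminates and the whole rule set can be checked exhaustively. A toy sketch (the rule format and field names are invented for illustration):

    # A tiny, non-Turing-complete validation DSL: rules are data, the
    # interpreter makes a single bounded pass, so it cannot loop forever.
    import re

    RULES = [
        {"field": "age",   "op": "between", "args": (0, 130)},
        {"field": "email", "op": "matches", "args": (r"^[^@\s]+@[^@\s]+$",)},
        {"field": "plan",  "op": "one_of",  "args": (["free", "pro"],)},
    ]

    OPS = {
        "between": lambda value, lo, hi: lo <= value <= hi,
        "matches": lambda value, pattern: re.match(pattern, value) is not None,
        "one_of":  lambda value, allowed: value in allowed,
    }

    def validate(record, rules=RULES):
        """Return the fields that violate a rule; one pass, no recursion."""
        errors = []
        for rule in rules:
            value = record.get(rule["field"])
            if value is None or not OPS[rule["op"]](value, *rule["args"]):
                errors.append(rule["field"])
        return errors

    print(validate({"age": 200, "email": "a@b.com", "plan": "pro"}))  # ['age']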


> How many engineers live like that?

Well for one thing the engineer planning the electrical isn't the engineer planning the structure. The engineer who scoped the HVAC isn't the guy who wrote the company's website either.

This is a long term problem that we need to begin recognizing and addressing today. I wouldn't expect the programmer who is building the frontend to understand the network security. I wouldn't expect the database guy to know the server administrative role.

Sure there's overlap but to have a team of people whose focus is that specific role is partially what engineers do. As a software engineer myself it's illegal for me to go onto my client's sites and start engineering their electrical system. I pass that effort to my coworkers who are electrical engineers in just the same way they ask me to design them their databases and their network.


> Hang on. Sure, all engineering has hurdles. But how many engineering projects face regular enemy action? The last time somebody actively tried to compromise a system I was building was two weeks ago. How many engineers live like that?

I don't think that's a good analogy. Engineers might claim we have it easy because we don't have to face earthquakes or electrical surges or birds flying into our turbines.


Most of the article, and the comment you're replying to, wasn't about hackers, though. It gave the example of people dying because the Therac machine or a Toyota car responded wrongly to non-malicious, normal user input, like a "bridge collapsing unprompted" in normal use.


Yeah, I agree about the article. I wouldn't offer this criticism to the main essay - fundamental bugs in critical systems are absolutely an engineering failure, at least in the sense that we should look for engineering practices to solve them.

But with the post I responded to? I see this idea of "it's because engineers don't care, so give them liability" a lot, and frankly it's stupid. It neglects both cost-benefit differences and any interest in the question of hacking, and then goes on to pretend that you can fix bad systemic pressures by beating up the guys at the bottom. The difficulty of writing good code is not a function of how heavily you punish bad code.

I flatly don't believe that the Therac bug would have been caught by personal liability for the bug; either the code would have been cancelled altogether to avoid liability or the bug would have gone on unabated. Realistically, criminal liability for lethal bugs mostly sounds like a way to drive talented devs away from high-stakes applications and leave those apps to be handled by people desperate enough to run the risk, or by international shops operating outside the reach of prosecution. I don't love the thought of a shop unwilling to use git because it creates personal responsibility. "Oh, we just pass code around on thumbdrives, I can't remember who wrote that critical line."

I'd shrug and let the whole thing go, but the rumblings of "regulate software" keep getting stronger from a lot of directions. I don't really want to wake up in a decade and find out that degree requirements and expensive-but-useless licensing have become mandatory, and it feels like pretending bugs come from lazy devs is another snowflake in that avalanche.


>The only difference is that these other engineers are motivated to be significantly more thorough about their jobs.

Well, it's a whole ecosystem problem. Most of the places I've worked, security was just supposed to be something you learned and credentialized on your own, and no time or resources were deliberately allocated to it.

In fact, you're likely seen as an underperformer if you slow down to get things right or to refactor bad code. And when software does break, `git blame` tends to accuse the developer that was in the worst position to fix a ship that had begun sinking long ago.

I'm not sure how engineer culpability works in other fields, but surely it always coincides with bad process and management, at which point the whole company is responsible.


In professional engineering the accountability comes with a stamp, and a professional ethics requirement to report up the chain if a critical problem has been found that could impact public safety. This includes whistleblowing if management ignores the problem.

There are clear guidelines as to what types of products and services require a P.Eng.'s stamp. There are also many professional engineers out there whose stamp has never touched ink. For example, the engineers drafting and churning out spreadsheets in the engineering sweatshops aren't accountable for the safety of the product; you'll have one or more engineers providing their stamp for that group's work.

If we were to implement a similar model on software development, most developers would not have personal liability for work they do e.g. for a company or open source project. What is needed is better classification of software products/activities that require a software engineer's stamp[1], and a corresponding professional association.

So no, developers should not be accountable. However, certain safety-critical products - where safety can be defined in various ways expanding the current relatively small set of NASA, aviation and health-care software - need engineers overseeing design, development, and QC to be accountable.

And yes, this means that the self-taught developer, no matter how talented, will not be able to release certain types of software without some sort of oversight.

[1]As a side-note - not everyone can call themselves an 'engineer' in other fields without the proper credentials and professional membership. I see developers calling themselves software engineers all the time though they don't have the chops. This would, of course, need to change.

Updated to change asterisk to brackets


Regulation on programmers is not the solution.

Regulation on companies might be. I'm usually really disappointed when a cheap reseller takes a government purchase of a 911 or 112 (here in the EU) system and then hires a subcontractor, which hires a subcontractor, which ...

That's how crappy software gets into production.


Without radical, NASA-style changes to typical development, where each line spends much more time being audited than being written, it is completely unreasonable to expect that kind of liability from developers.


Ditch the dumb deadlines and the tight budgets and it would become a hell of a lot more feasible.

I WISH I had the time and budget to put due diligence into my work.


You have to think about what you're building too. Feature release would come much slower than it does today. Reasonable? If you're writing software that is absolutely safety-critical, that's likely. If you're writing iTunes, well, I have my doubts.


It's not like reliable, secure software doesn't exist. It's just limited to spaces where people are willing to use years-old tech, forgo new features, and pay fortunes for auditing. Air travel, industrial machinery, medicine, and so on.

Almost like the cost-benefit actually doesn't support making the development of trivial software ten times slower and more expensive...

If there's a problem here, it's business externalities (Equifax won't pay adequately for screwing people), not engineering impossibility (the rest of us know how to patch Apache).


I mostly agree, but this article cites the national 911 software failing (arguably something that should be treated as a critical system), and some medical devices whose software killed people.


Yeah, fair enough - I've mostly been responding to too-general comments, the article is more specific.

The 911 thing seems like an obvious issue that absolutely should have been caught by a rigorous development system; it was treated like a digital phone system instead of a life-saving service. The Therac bug sort of feels like a different story, because that bug led to IEC 62304 - it's older than the practices which would now prevent it. They're both critical systems that were written without due care, though.


Broken software does not typically kill people or make you lose money (time or some other resources, maybe).

In contrast, bridges that fall apart tend to kill people in a nasty way. They are also quite hard to reboot.


An excellent framing I saw recently was "I can tell the people who write these articles don't actually want secure code, because they aren't paying me to write it."

There's plenty of carefully-written, rigorously-tested code out there. Want to write software for air travel, drug manufacturing, or space flight? You'll have no choice but to write safe, first-rate code.

Want a smartphone weather app written to those standards? Well, for $50 and a feature set five years behind the state of the art, you can have it.


Isn't this a tooling and knowledge issue? If I were as productive with TLA+ or model-based design as I am with React Native/Ionic/Swift etc., then we could have a smartphone weather app written to those standards for the same price.


While there's plenty of room for improvement, I legitimately believe that TLA+ is now in a place where it's convenient enough to use in conjunction with Ionic/Swift.


Broken software has the capability to regularly kill people or cause people to lose money. It happens too often.


I'm pretty sure fewer people would be rushing into the software engineering profession if they knew it came with the liabilities you described. That would drive up the price of engineers... and I'm pretty sure the industry in general doesn't want that.

Maybe that's why the free-pass mentality exists.


Or maybe the "free pass mentality" exists because development doesn't happen in a vacuum and errors don't necessarily find their way into programs because the programmers are stupid or careless.


Can't it be both a technology problem and an accountability one? The tech problem might take ages to get solved without accountability forcing it. If we were accountable, we'd be forced to make better tools for writing reliable software more easily. That said, there are simpler low-hanging-fruit "tools" like code review, safe git practices, etc. that still aren't followed everywhere. Ignoring that low-hanging fruit does point to accountability/prioritization being a cause of low reliability.


Motivation seems far from the only difference.

If we divide making into "builds with atoms" and "builds with bits", both sorts of things are generally unregulated. It's when you get into "builds life-critical things with atoms" that you see significant regulation. If you build a house or an office building, your engineering is regulated. If you build a doghouse or an office filing cabinet, it isn't.

I agree that societal accountability for engineers working on life-critical system, software and hardware both, is vital. For most atom-making and bit-making, though, I think stringent professional licensing is overkill. Instead I'd rather we started with accountability for the people who have much more control over the outcomes than individual software engineers: executives and managers.


> If you build a house or an office building, your engineering is regulated. If you build a doghouse or an office filing cabinet, it isn't.

A filing cabinet must be fit for the purpose it is used for, or there will be liability incurred. OSHA would frown on keeping hazardous chemicals in your filing cabinet. Software should also be fit for the purpose it is marketed and used for.

There are many kinds of paper, but paper being used improperly, by storing a list of credit card numbers on it, is quite similar to how many pieces of software are written.


You can definitely do that. You just have to pay them 20x more money and give them 50x more time and give them absolutely all the tools they need. Best possible hardware. Best possible software. Fastest possible internet connections. etc.

Show me an employer willing to do that and I will immediately sign up.

Most companies want to do the exact opposite. Spend the least amount possible, because, you know, the budget is tight and the business side wants all these features ready yesterday, nay, last week.


>The dangerous unreliability of software is an accountability problem, not a technology problem. Engineering disciplines are regulated so that engineers are held personally and even criminally responsible if they endanger property or lives. Software, being the newcomer, has no such regulation.

Regulation like that tends to follow after a series of disasters causing massive loss of life. Software hasn't yet managed to achieve that.


> The only difference is that these other engineers are motivated to be significantly more thorough about their jobs.

What evidence supports this as the only difference? I would argue these other engineers have significantly smaller jobs to do, so it's easier to be thorough.

Edit: Why are you so keen to fix a problem you don't understand? Are you a programmer?


You can't drop all of this responsibility on developers. You need people checking behind them.

But if we want to pay developers like surgeons and have malpractice insurance, all the better, I guess.


Model-based design has been the wet dream of software managers looking to eliminate costly, finicky programmers for what? Decades now? I remember when I was a college student being told that eventually, the work products of software architects would be UML diagrams that could be turned into code by a simple turn of the crank on an automated generating tool. I didn't buy it then and I sure don't buy it now. The reason is that once you specify models with sufficient granularity to be automatically turned into code, the graphical symbols in your modelling language become isomorphic to keywords in some programming language, and your programmer-replacing code generator becomes a compiler.

As for Bret Victor... Programming is hard because you are trying to reason about not just the future of a dynamic process, but all possible futures of that dynamic process. That's way too much information to be represented in a visual manner for all but the simplest of systems, and it's why visual programming tools have been met with spectacular failure outside of constrained niches (e.g., LabVIEW, Max and its relatives).


After reading the whole article, I found it impressive, although I didn't like the way it stretches fairly simple ideas into a very, very long article.

Anyway, TLA+ and formal methods seem promising and I definitely need to check them out. I totally agree with him that we don't give enough weight to planning, especially for applications that need robust security and safety, and especially since the rise of Agile methodologies and the like (no system is perfect anyway). But we definitely need more of that verification, and not in the esoteric way they mentioned but in an easy way that every programmer can use without so much complexity. Maybe tooling around something like TLA+ could make it easier to understand. People here say it's not that hard, though, just not beneficial in every single situation, only when you have complex algorithms.

I'm more convinced, though, by another idea that's easier to apply, at least for now: using a programming language with a sound type system. Facebook is making nice progress there; on the front end they made ReasonML, a language derived from OCaml that compiles to JavaScript. A good type system like that erases a whole class of bugs. These days I'm learning OCaml and Reason. New languages such as Rust are doing great too. I think writing code is improving, and it will definitely get better as we understand more about how we actually do it. The field is only around 70 years old and still in its infancy, I think.


The irony is that "this is all too hard, write less code, use more axiomatic principles and compiler assistance" is the functional programmer's call, and it keeps getting shot down as 'too complex.'


Where I get the most resistance to functional programming is more in the abstractions: both that they are too complex and that people aren't already familiar with them. High-level/complex/abstract patterns are hard to understand the first time and FP seems to make those patterns easier to express.

I'm surprised how much we take for granted an understanding of object-oriented coding in the industry. Any graduate with a four-year degree in CS can be expected to write passable object-oriented code, but many have absolutely no exposure to functional idioms. Of course, those same people may have very little exposure to more complex OOP design patterns. I've seen people's eyes gloss over the same way when saying, "It's just a monad" as when saying "It's just an interpreter pattern".


> I've seen people's eyes gloss over the same way when saying, "It's just a monad" as when saying "It's just an interpreter pattern".

Same. All we can do is try and change education patterns. Neither are that complex.


There is no silver bullet.


> He began collaborating with Gerard Berry, a computer scientist at INRIA, the French computing-research center, on a tool called Esterel

INRIA also maintains Coq, TLAPS, and much of the TLA+ tooling.

INRIA is a scary scary place.


> INRIA is a scary scary place.

Why ?


I assume scary as in "scar(il)y smart", in the sense that it's (almost) disturbing to see their sophistication and competence.


Thank you.


This article reports uncritically the plaintiff’s claim that Toyota runaway acceleration issues were a software issue. The Wikipedia article presents a more balanced report; the black box data is particularly interesting.

https://en.wikipedia.org/wiki/2009%E2%80%9311_Toyota_vehicle...


This seems over the top to me. Everything has issues, and everything is increasingly complex, not just programming. Take any industry and over a long enough time-span I'm sure things have also increased in complexity in drastic ways. Sure, programming is relatively new, and we are ramping up fast, but we already have vastly complex systems, tools, and infrastructure relying on programming which has incredibly impressive uptime and efficiency.

Yes, people will die from errors in the code of autonomous cars, but at the same time can anyone argue that autonomous cars won't be significantly safer than us?

The argument here seems to be that it's dangerous if the programmers relying on StackOverflow to copy/paste solutions work on real problems without "stepping up". I think that's like someone suggesting a bricklayer would suddenly be in charge of designing blueprints for a 150 storey skyscraper. They are different jobs performed by people doing very different things. They may write code in the same language, but they're not in the same profession.


I work in security. People not giving enough shits about the "situation in the field" with respect to how their code will be used, how long it will need to be used and what will need to be done to keep it functional over that time is job security for me.

Some points the author makes I agree with, others I don't. Nothing will change until incentives change.


If TLA+ really does let you prove your program is bugless, then maybe it is something to look into for cars, airplanes, medicine, etc. For the rest of us, who still wrestle with complexity but probably wouldn't accidentally kill someone, here are some simpler aids:

1. Don't put a computer there. My microwave doesn't need to be digital. It needs two dials, time and power. My mother's washing machine can be controlled by smartphone. How useful is that, since you can't load it by phone? Computerized light switches are another example of something that sounds nifty, but the complexity outweighs the benefit by a thousand to one. Plus, you need the exercise. Get up and just hit the light switch. I question whether a car needs any computer at all, especially if the car is electric, ironically. My ignorance will allow that maybe a computer can mix air and gas better than a purely mechanical fuel-injection system. But since an electric motor is simpler, this advantage disappears.

Don't get me wrong. I like computers. But I think computation should be gathered instead of spread all over the place. I like my smartphone and my laptop (pure computers). And I would prefer a car from the 1960s (pure mechanics). I don't like the hybrids --- today's complicated, hackable, beeping nannymobiles.

2. Give more time for refactoring. It's an embarrassingly unflashy point, but I think refactoring is important. Don't rush software out. Let programmers refine it. Maybe even force them to, beyond their natural tendencies, like the schoolteacher telling a pupil to revise once again.

It is unflashy, but it makes a big difference. I've revised programs down to a tenth their size (no, not by using one-letter variables and that sort of thing, but by finding more efficient ideas), made them run a hundred times faster on the same hardware, all while adding features and improving safety.

(See the advice about "going deep" instead of "high" or "wide" by Hacker News member bane: https://news.ycombinator.com/item?id=8902739)

3. Data-driven programming. I'll just quote some really smart people:

"Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won’t usually need your flowcharts; they’ll be obvious. --- Fred Brooks, The Mythical Man-Month

"If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming." --- Rob Pike (https://www.lysator.liu.se/c/pikestyle.html)

"I will, in fact, claim that the difference between a bad programmer and a good one is whether he considers his code or his data structures more important. Bad programmers worry about the code. Good programmers worry about data structures and their relationships." --- Linus Torvalds (https://lwn.net/Articles/193245/)

"Even the simplest procedural logic is hard for humans to verify, but quite complex data structures are fairly easy to model and reason about. To see this, compare the expressiveness and explanatory power of a diagram of (say) a fifty-node pointer tree with a flowchart of a fifty-line program. Or, compare an array initializer expressing a conversion table with an equivalent switch statement. The difference in transparency and clarity is dramatic. . . .

"Data is more tractable than program logic. It follows that where you see a choice between complexity in data structures and complexity in code, choose the former. More: in evolving a design, you should actively seek ways to shift complexity from code to data. --- Eric Raymond, The Art of Unix Programming (http://www.faqs.org/docs/artu/ch01s06.html#id2878263). See also ch. 9 (http://www.faqs.org/docs/artu/generationchapter.html)


> maybe it is something to look into for cars, airplanes, medicine, etc. For the rest of us, who still wrestle with complexity but probably would't accidentally kill someone, here are some simpler aids:

TLA+ isn't perfect, but it's actually not that hard to use! I'm a webdev and it comes in handy all the time.

> Data-driven programming. I'll just quote some really smart people:

You might want to check out Alloy. It's a formal spec language that specializes in verifying data structures.


Can you expand on your process for using TLA+ in web development?


Sure! I've written a quick demo here[1] and a longer-form piece here[2]:

[1] https://www.hillelwayne.com/post/modeling-deployments/

[2] https://medium.com/espark-engineering-blog/formal-methods-in...


Is TLA+ pseudocode? It doesn't actually do anything, right? Your example with updating servers, it will never actually update the servers, right? After you try it out in TLA+, you still have to write the real code in some other language, like Bash.

If so, then isn't there still a risk of bugs in your Bash program, from typos, leaving out something from the TLA+ plan, or otherwise miscopying it?

If so, TLA+ does less than I thought. But I can see how it might be useful to work out a complex algorithm and scan it for holes.


Yeah, TLA+ just verifies the spec, not the actual code. You still need to write tests and use code review and the like.
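One lightweight way to keep the two in sync (not a TLA+ feature, just a common practice) is to restate an invariant from the spec as an assertion or property test in the implementation; a made-up example for a rolling-update rule:

    # Hypothetical invariant carried over from a deployment spec:
    # never take the last old-version server down unless every updated
    # server has passed its health check.
    def safe_to_retire_old_server(servers):
        updated = [s for s in servers if s["version"] == "v2"]
        old = [s for s in servers if s["version"] == "v1"]
        if len(old) == 1 and not all(s["healthy"] for s in updated):
            return False
        return True

    assert safe_to_retire_old_server(
        [{"version": "v1", "healthy": True}, {"version": "v2", "healthy": True}])
    assert not safe_to_retire_old_server(
        [{"version": "v1", "healthy": True}, {"version": "v2", "healthy": False}])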


Okay, that makes sense. Thanks!



Thanks!


Smartphone controllable light switches and blinds are useful for disabled people.


Modern smartphones are an incredible accessibility device for both partially and fully blind persons. The freedom they offer is virtually unimaginable to people who haven't seen the difference they make in people's lives.


Voice control would be easier (assuming they are not deaf-blind).


> here are some simpler aids

All of your suggestions are good advice, and you're right that TLA+ isn't right for everything, but it is used in non-safety-critical systems, and, more importantly -- it is pretty damn simple. I've been using TLA+ for a couple of years now. It takes a programmer about two weeks -- just with available online material and no outside help -- to become proficient and effective in TLA+; faster than learning a new programming language.


What this article describes is only possible to some extent. In my experience code generators are great until you run into a case where you have to manipulate the generator to account for some edge case or unforeseen problem, at which point they tend to become more complex than simply writing the code yourself.

Most of the problems described are already tackled in modern software development. TDD, correct-by-construction techniques and input validation are some tools to ensure resilience. The car acceleration issue could also have been avoided with a default-to-safe approach. Spaghetti code should have been refactored. It is not programming itself that is the issue; it is the automaker that is at fault here.

In ETL there are some tools to visually manipulate the data flow from one end to another. Some ETL software even allows you to visualize the effect of the changes on the fly, much like the Mario game. Solutions developed like this are easily understandable and maintainable by others even with minimal documentation (but require understanding of the business problem). But, much like normal programming, once you want to do something that exceeds the capabilities of the ETL software you are using, or when performance is an issue, you have to understand how the underlying software works under the hood. You can become really good at solving one set of problems with an ETL tool once you have mastered it, but this is limited to one domain of problems. Likewise, specialized software allowing easy visualization and manipulation is usually very domain-specific.

You can tackle complexity in programming by hiding it behind libraries and databases that do the heavy lifting while the programmer integrates the pieces and accounts for the particularities of the problem being solved. I could envision representing library functions as black boxes and connecting arrows to integrate them, having input validation and strong typing or automatic typecasting. Still, when you get too far from the machine, you miss the edge cases, the things you can't imagine when you visualize whatever you are creating in your head, the problems that only arise when you externalize and codify the knowledge. I think this is at the core of the issue; in order to instruct the computer to do something, you have to externalize tacit knowledge. In doing so you come across problems you just can't see from too high up.


The example, however (the 911 outage), is a poor one: a traditional telco could not make that mistake with POTS. They talk of major failures (i.e. losing a switch) as a once-in-a-generation-or-two event.


In the history of the Bell System, no electromechanical exchange was ever totally down for more than half an hour for any reason other than a natural disaster.


Someday I should write up how that was done in modern terminology. You can read the "Number 5 Crossbar" documents, but the terminology is archaic.[1]

Crossbar offices consisted of a dumb switching fabric and lots of microservices. The switching fabric did the actual connecting, but it was told what to connect by other hardware. Each microservice was implemented on special-purpose hardware, and there were always at least two units of each type. Any unit of a type could do the job, and units of a type were used in rotation.

Microservice units included "originating registers", which parsed dial digits, "markers", which took parsed dial digits and routed calls through the switch fabric, "senders", which transmitted dial digits to other exchanges for inter-exchange calls, "incoming registers", which received dial digits from other exchanges, and "translators", which looked up routing info from read-only memory devices. There were also trouble recorders, billing recorders, trouble alarms, and other auxiliary services.

Every service unit had a hardware time limit. If a unit took longer than the allowed worst case time, the unit got a hard reset, and a trouble event was logged. This prevented system hangs.

Failures of part of the switching fabric could only take down a few lines. Failures of one microservice unit of a group just slowed the system down. Retry policy was "try one microservice unit, if it fails, try a second one, then give up and log an error". If a retry with a different unit didn't work, further retries were unlikely to help.

All microservice units were stateless. At the end of each transaction, they went back to their ground state. So they couldn't get into a bad state. All state was in the switching fabric.

All microservice units were replaceable and hot-pluggable, so maintenance didn't require downtime.

This architecture was very robust in practice. It's worth knowing about today.

[1] http://etler.com/docs/Crossbar/
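For what it's worth, the hard per-unit time limit and the try-one-then-a-second-then-log retry policy translate almost directly into modern service code; a rough sketch (all names invented, and note that unlike the hardware reset timer, Python can only abandon a hung worker rather than truly kill it):

    import itertools
    import logging
    import concurrent.futures

    log = logging.getLogger("exchange")

    def call_with_limit(unit, request, limit_seconds=0.5):
        """Run one unit under a hard time limit, like the hardware reset timer."""
        pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
        future = pool.submit(unit, request)
        try:
            return future.result(timeout=limit_seconds)
        finally:
            pool.shutdown(wait=False)   # abandon a hung unit instead of waiting

    def route(units, request, limit_seconds=0.5):
        """Try one unit from the rotation, then a second, then give up and log."""
        for attempt, unit in enumerate(itertools.islice(units, 2), start=1):
            try:
                return call_with_limit(unit, request, limit_seconds)
            except Exception:
                log.warning("unit failed on attempt %d for %r", attempt, request)
        raise RuntimeError("both units failed; trouble event logged")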


Please, please, please do this!


The reason cited seems much more a case of moving call-management into a high-level software system that didn't have the same sort of rigor that telecom systems traditionally have. Maybe it did and someone missed this? It's hard to know. In short it feels a lot more like a management failure than a failure of software.


Probably a lowest-cost bidder, and not helped by the US's devolved government. If this had happened in, say, the UK, the papers would have a field day and the government would force the telcos to fix it stat!


A programmer should be able to dive straight into code and get it right the first time, up to a certain level of complexity of problems.

I'm not going to say that the bigger the problem you can solve just by throwing code at it, the better the programmer you are. But it speaks well for you; you have an advantage in a certain dimension.

If I can visualize the solution, see all the cases and debug it in my head, why not go to code? We should encourage that; the more you practice taking a problem straight to code, the better you get at it. Just people have to be reasonable about it; don't try to just code something whose complexity is two orders of magnitude out of the ballpark for that.

Currently I'm grappling with the design of compiling functions with nested lexical scopes. I haven't written any code on this for a couple of weeks, but I have some box and arrow diagrams representing memory and pointers. I've mulled it over in my head and have hit all the requirements. I have a way to stack-allocate all of the local variables by default, and hoist them into the heap when closures are made. I have kept in mind exception handling, special variables and such. Amid all this thinking, I barked up a few wrong trees and backed down, thereby avoiding making such mistakes more expensively in coding.
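The stack-by-default, hoist-on-capture idea is easy to sketch in miniature (a toy illustration in Python, not the actual compiler design being described above): locals that are never captured stay in the frame, while a captured variable gets boxed in a heap cell shared by the frame and the closure.

    class Cell:
        """A heap-allocated box for a local that must outlive its stack frame."""
        def __init__(self, value):
            self.value = value

    def make_counter():
        scratch = 41 + 1          # never captured: stays a plain (frame) local
        count = Cell(0)           # captured below, so "hoisted" into a heap cell

        def increment():
            count.value += 1      # the closure reads/writes through the cell
            return count.value

        def peek():
            return count.value    # another closure sharing the same cell

        return increment, peek

    inc, peek = make_counter()
    inc(); inc()
    print(peek())   # 2 -- the captured variable lives on after make_counter returns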


Even though this article resonates with me, I think it portrays everything much too glamorously. I wish the subjects described were the only source of problems. I suspect that in reality, most mistakes have quite 'simple' causes. Some observations:

- By putting an abstraction layer in between (people visually creating applications), the problem is pushed to the layer below and new problems are introduced.

- Supporting intricate, bespoke functionality in a visual environment will be incredibly hard and error-prone.

- If you have trouble thinking about how your software will run, you should run it on a computer instead of your brain, i.e. continuous builds, debuggers and sandboxes.

- Pushing for deadlines and using prototypes as production software is part of this.

- Those millions of lines of code usually include a number of Linux kernels.

- Before getting to all the fancy stuff and visions about the future, why not first:

--- Get all software unit tested / test driven.

--- Get all software functional tested / behaviour driven.

--- Use domain driven techniques to close the gap between 'reality' and code.

--- Create truly comprehensive tests and testing environments for areas that matter.


>- Those millions of lines of code usually include a number of Linux kernels.

Yes. They didn't write the majority of the code themselves.

Here is a repository that shows how much open-source software is inside a BMW i3: https://github.com/edent/BMW-OpenSource


Great article!

My $0.04:

1. Programmers often see a small part of the jigsaw puzzle they are assembling. Most spaghetti code is the result of too many cooks over time.

2. The architects should ensure that testers know and care more about the problem than the programmers who are coding the solution.

3. There is a clear need to improve the stone-age tools programmers use.

4. There is a need to create simulated environments for all kinds of software so they can be battle-tested.


For me, it was nice to see Chris Granger (now working on Eve at witheve.com) and the Light Table guys get recognition for Light Table influencing Apple. To quote: "The default language for making new iPhone and Mac apps, called Swift, was developed by Apple from the ground up to support an environment, called Playgrounds, that was directly inspired by Light Table."


So. Minus all the doom and gloom. Better safety harnesses, better developer abstractions, more interactive/responsive programming environments. Whatever Bret Victor's selling, I'm not buying. He's the type of self-promoter who doesn't acknowledge all the actual hard work that has been going on for decades in all of these areas.


Can you point to what work specifically has improved the situation in terms of `developer abstractions` and `interactive/responsive programming` in the last few decades?


Yup, I can.

Safety harnesses -> work in type theory and proof theory, witness the success of Rust and Haskell and Coq and so on.

Developer abstractions -> work on design patterns and anti-patterns among other things.

interactive/responsive programming -> IDEs, refactoring, time-travel debugging, visual programming spring to mind.

To write off all these advances is suspect, in my opinion. Also, why so glass-half-empty? Why not glass-half-full? Why not say, "wow – given how many lines of code are out there, isn't it amazing how much has not gone tits-up?" (pardon the expression!) That wouldn't fit the Bret Victor narrative, though.

One last thing – the claim that we are oh so crap at manipulating symbolic machinery. That too is suspect; one could argue we actually seem hard-wired through evolution for linguistic, logical, and symbolic thought.


> interactive/responsive programming -> IDEs, refactoring, time-travel debugging, visual programming spring to mind.

> one could argue we actually seem hard-wired through evolution for linguistic, logical, and symbolic thought.

I think you're still talking about programming environments layered on top of the text-centric representation, while Victor seems to be talking about something a little different. Sure, we can do some symbolic and linguistic manipulation, but consider that rigorous blueprints for physical products such as cars and buildings are 'diagrams', not text. So purely language-based description and manipulation doesn't apply everywhere. Now, what if many programs, or parts of programs, could be better represented not as text with symbols, but in some other form that we haven't discovered yet?


> To write off all these advances is suspect in my opinion.

> That wouldn't fit the Bret Victor narrative though.

Consider the possibility that there are real gaps people see and are trying to fill, rather than just trying to 'fit' a particular narrative.


If someone actually wants to work on such things, as opposed to trying to solve the problem directly with huge amounts of code, where should they apply?


Bret Victor is like an "all-mouth-and-no-trousers" version of Alan Kay. And many of his talks recapitulate work that Kay and Ingalls did in Smalltalk and Squeak in their various forms. His sexy presentations very closely resemble the "Active Essays" that got promulgated in the Squeak project in the 90s.


Dude's got a research lab where they're building things. What counts as trousers?


Like some others have said, I think that the specific tools they mentioned for "model based programming" are not the be-all, end-all solution. However, I agree with the very high-level premise of this article: that programming will continue to evolve to higher levels, and eventually "writing code" may be abstracted away altogether.

We have already been moving towards this approach for quite some time, even though the author doesn't mention it. If you are going to build a web app or API today, do you just start writing code willy-nilly? No, of course not. You look for frameworks and proven/established ways of doing it. We're practical; we don't want to re-invent the wheel (unless it's a side project).

I've recently built an event-based API for a client, and much of the work is not "writing code" but figuring out the right tools for the job, based on the requirements for how I believe the tool should work. A lot of it is plugging in frameworks and tools that already exist (and are established), which mitigates (but doesn't completely solve) the question of using complex/untested code. For example, most people take advantage of TCP or HTTP for a web app. Do we test that low-level code? Of course not... it's already the most tested code in the world.

I think there is a danger in writing a lot of "custom code", as I like to call it, because then you really are introducing brand-new functionality that has potentially never been tested before. You can and should test your code, but often that will fall short, because you (or your team) cannot possibly test every imaginable scenario the code will face.

My general belief is this: the programming of the future won't involve much programming at all; it will be more like "system architecture", where you pick and choose the tools to meet your requirements. In the world of open source and AWS, all the building blocks are there, but it's your job to figure out how to put them together.


"Since the 1980s, the way programmers work and the tools they use have changed remarkably little."

On one level this is very wrong -- since that time we have introduced and improved numerous amazing tools: the Internet and its family of protocols, distributed version control systems, CI systems, and Stack Overflow, to name just a few. On another level it is close to the truth, as we still do most of our coding in various text editors -- but not for lack of trying to find something better. If we found something that provided the same balance of simplicity and power, we would be more than happy to switch.


"a software developer who worked as a lead at Microsoft on Visual Studio, an IDE that costs $1,199 a year and is used by nearly a third of all professional programmers."

It's tangential to the article, but is that really true?


>> The whole problem had been reduced to playing with different parameters, as if adjusting levels on a stereo receiver, until you got Mario to thread the needle. With the right interface, it was almost as if you weren’t working with code at all; you were manipulating the game’s behavior directly.

This sounds remarkably similar to the ambition of the Logo language (from 1967):

https://en.wikipedia.org/wiki/Logo_(programming_language)


WYSIWYG has been mostly a disaster. We now have several generations of people who 'paint' documents instead of writing them and styling them. Every paragraph has its own explicit style. It's next to impossible to render such documents to a different published format because nothing is tagged to explain why it looks the way it looks. I was at least as productive writing in Wordstar 35 years ago as I am now in Word 365 or whatever the current name is.


One of the best arguments for Lisp I've heard this decade


Does anyone work in the field of formal verification? I’m a senior in school and always enjoyed the actual “computer science” more than coding itself. I’m gonna end up at a google/Microsoft type place programming webapps unless I find something else to do in the next couple months.

I guess my question is just: is it a worthwhile career, and is the field growing?


Microsoft Research has a group which explores tooling for software engineering:

https://www.microsoft.com/en-us/research/group/research-in-s...


"The software was doing what it was supposed to do. Unfortunately it was told to do the wrong thing" I feel the value of Domain Driven Design more and more every day and I think it is one method to make the business intent be reflected clearly in the code base, using domain language and modelling the domain to be self documenting.


One of the biggest benefits of model driven development is having a common language to talk about design with other people. State becomes clear and any complexity is inherently linked to how you designed your model. I really hope it becomes more popular and more companies start supporting it.


I don't know. I think the problem with software is that as we become able to do certain things with it, people realize they can do that and then want to do more.

Ad infinitum this adds up to hopelessly complex software.


I’m surprised no one’s mentioned it yet, but unintended acceleration is most likely a myth. In any functioning car, the brakes can easily overpower the engine and stop the car.


This article is silly in a lot of ways. The real problem of software engineering is not "how do you prove this code follows algorithm X exactly?" It's "how do you know what algorithm X needs to be in sufficient detail to implement it?" It's "when you realize you were wrong about algorithm X, how do you change your existing, working code to implement the new algorithm X-prime, without interruption?" It's "what do we do when we realize algorithm X-prime was completely the wrong thing, and now we need to transition to algorithm Y, and also fix all the data that algorithm X-prime messed up?"

No matter what awesome tool you come up with to translate requirements into CPU bytecode, you still have to translate human requirements into that requirements language, whether that language is assembly, C, Java, Rust, Haskell, or TLA+.

When you have the money and time and patience to fully examine the problem space and work out exactly what the requirements are, that's great. In the far more common scenario, you have impatient investors, or you're inventing something entirely new, or your software will deal with the real world which is not as predictable as an imaginary algorithm. Government regulations, flaky hardware, network latency, human error, financial limits, competition, malicious actors, gamma rays, other people's software, power outages, terrorists, dependencies, operating system upgrades, corrupt data, hurricanes, all are conspiring against your software. Can you plan for all of that?


The article's not silly. Leveson and Hamilton's works cover that explicitly. Start with Engineering a Safer World's Intent Specifications, move on to Safeware, and look at the attention that goes into writing the right TLA+ specs.

Lists like "Government regulations, flaky hardware…" look a lot like the upper levels of the hierarchical control systems contemplated there. It didn't make it into a pop science piece, but it's absolutely core to the work at issue.

You can get a copy of the book at http://sunnyday.mit.edu/safer-world/index.html .


It sounds like you're both in agreement, which would support the criticism of the article: it hides how these very points are addressed, and gives the impression that the work only solves the very academic version of the problem rather than the real-world software snags.


Everything you said after "this article is silly" kinda seems like just restating the exact point of the article.

The central idea is that the massive cost of writing code that adheres to a specification takes away resources from the task of understanding the problem space well enough to create a correct specification. That is, incidental complexity overwhelms essential complexity in modern software engineering. The various avenues of study mentioned in the article are attempts at addressing that imbalance.

Which is pretty much what you said.


You cannot explore a problem space without writing code for it. But once you have your spaghetti code, how do you rewrite it to align with your new understanding? And what happens if your understanding improves further while you're doing the rewrite?


> Government regulations, flaky hardware, network latency, human error, financial limits, competition, malicious actors, gamma rays, other people's software, power outages, terrorists, dependencies, operating system upgrades, corrupt data, hurricanes, all are conspiring against your software. Can you plan for all of that?

If your answer to all of those is, "No, I can't, so I won't bother, we'll just update the software later," then please excuse yourself from designing anything of consequence and stick to making websites that gobble dollars from advertising, because planning for those things ahead of time, quantifying them, and being ready is _absolutely_ the job of engineers, and you're derelict in your duty if you don't.


skywhopper gets it exactly right!

I think the software development community needs to do much more to explain exactly this to the other communities that interact with us, the managers, program/process managers, users, etc.

We have to ensure people understand that the hardest part of software engineering isn't really coding the software itself, but discovering what the actual problem(s) to be solved are and what the constraint space of the solution(s) is. And I call it discovery because I assert that no one, up to the point the software engineers got involved, ever thought to detail exactly all those things.


It's not a one-size-fits-all thing. The majority of startups won't need these methods; think of the software in the article's examples: control systems, critical infrastructure, etc. It's fine to have different approaches for different classes of software. Raising awareness of these tools is critical because most software engineers haven't put much thought into them.


I think you might enjoy this paper, I did: http://www1.cs.columbia.edu/~angelos/Misc/p271-de_millo.pdf


I think this paper is quite wrong, and has already been shown to be wrong by all the people who have successfully applied formal methods to their projects.

> We believe that, in the end, it is a social process that determines whether mathematicians feel confident about a theorem — and we believe that, because no comparable social process can take place among program verifiers, program verification is bound to fail.

This overlooks the fact that the verifiers aren't black boxes; people trust the TLA+ tools and Coq and Isabelle and PVS because the principles by which they work are well understood, and on top of that, these tools are all open source. (If someone developed a closed-source verifier, they could still leverage this trust by making their verifier output proofs in a format that could be checked by one of these.) So while the individual proofs generated in a software verification are probably not going to be checked socially -- they're mostly just not interesting enough -- there is still a social process operating at the meta-level that gives us confidence in them.
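
A toy example of the kind of artifact that this meta-level trust rests on, written in Lean 4 (not from the paper; the theorem name is invented): the claim and its proof travel together, and any copy of the open-source checker can re-verify it without trusting whoever, or whatever, produced it.

    -- Claim plus proof term; the Lean 4 kernel re-checks it independently
    -- of the (possibly closed-source) tool that generated it.
    theorem sum_comm (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b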
