
This is a major problem with code: You don't know which quirks are load-bearing. You may remember, or be able to guess, or be able to puzzle it out from first principles, or not care, but all of those things are slow and error-prone.

This is a problem from both the negative (not breaking things) and positive (knowing how to add things) perspectives. The positive perspective was written about by Peter Naur in one of my favorite software engineering papers, "Programming as Theory Building." In it, he describes how the original authors of a codebase have a mental model of how it can be extended in simple ways to meet predictable future changes, which he calls their "theory" of the program. Subsequent programmers inheriting the codebase can fail to grasp that theory and end up making extensive, invasive modifications to accomplish tasks that the original authors would have accomplished much more simply.

I highly recommend finding Naur's paper (easily done via Google) and reading it to understand why divining the "theory" of a codebase is a fundamentally difficult intellectual problem which cannot be addressed merely by good design, and not with 100% reliability by good documentation, either.




On the topic of the "theory" (mental model) of a program, I recommend John Ousterhout's book "A Philosophy of Software Design". You get such gems as:

"... the greatest limitation in writing software is our ability to understand the systems we are creating."

"Complexity manifests itself in three general ways... change amplification, cognitive load, and unknown unknowns."

"Complexity is caused by two things: dependencies and obscurity."

"Obscurity occurs when important information is not obvious."

"The goal of modular design is to minimize the dependencies between modules."


This is important to understand when moving through the early stages of a project.

Many projects go through a clear prototype stage (where a lot of disjoint things are written, like a set of utilities to print out information on a file based on the format spec, makefiles with hardcoded content, etc.), then a system starts coalescing, and finally it's released.

The problem I've encountered is when the prototype is too good. It's an 80% solution, it seems to do everything that's wanted, but the people who wrote it (contractors/consultants/too expensive older devs) aren't the ones who are tasked with finishing the last 20%. The original developers may have understood how to create that last portion with what they'd written, or they may have intended to throw it away [0].

The new developers don't know what's present (and so recreate a lot of existing capabilities) and don't understand how to extend it properly (so there's a lot of copy/paste even when the original devs laid out a nice extensible system with generics and/or interfaces or whatever the language provides), and the whole thing turns into a mess. This communication between developers is critical, but usually absent.

[0] "This is more of a proof of concept, it does everything you want for converting two file formats between each other, but doesn't scale yet because it's all 1-to-1 mappings, we are working on the intermediate representation now that we have a firmer grasp of what's needed."

"Oh, that's fine, you guys can go work on the next project we've got a crack team that can wrap this up."

"...Ok, thanks for the money."

The crack team never builds that intermediate representation and instead creates 1-to-n mappings between each format. The explosion in code size becomes unmaintainable, most of the mappings are the result of copy/paste, and bugs proliferate because, when one is fixed in one section, nobody realizes how many other places that same bug resides.
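To make the explosion concrete, here's a minimal sketch of the two approaches (Python, with hypothetical CSV/JSON formats and a deliberately dumb record type): direct converters need one function per ordered pair of formats, O(n^2) of them, while an intermediate representation needs one parser and one emitter per format, O(n).

    # Sketch only: format names and record shape are illustrative.
    import json

    Record = dict[str, str]  # the intermediate representation

    def parse_csv(text: str) -> list[Record]:
        # CSV -> IR. Assumes a header row; no quoting, for brevity.
        header, *rows = [line.split(",") for line in text.strip().splitlines()]
        return [dict(zip(header, row)) for row in rows]

    def emit_json(records: list[Record]) -> str:
        # IR -> JSON.
        return json.dumps(records)

    def emit_csv(records: list[Record]) -> str:
        # IR -> CSV. Assumes all records share the first record's keys.
        header = list(records[0])
        rows = [",".join(r[k] for k in header) for r in records]
        return "\n".join([",".join(header)] + rows)

    def convert(text: str, parse, emit) -> str:
        # Every pairwise conversion is parse + emit through the IR,
        # so a parsing bug lives in exactly one place.
        return emit(parse(text))

    print(convert("name,id\nada,1\ngrace,2", parse_csv, emit_json))
    # [{"name": "ada", "id": "1"}, {"name": "grace", "id": "2"}]

With n formats, the copy/paste approach needs n*(n-1) of these functions, and every bug fix has to be replayed across most of them.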

EDIT: For the record, [0] started off short enough to be a footnote then grew to be too long for it, and I forgot to edit it properly when I came back from getting a glass of water.


I think I have to disagree with Naur on this, in that people using the Scientific Method don't ship their theories, but we do.

As a scientist who has just succeeded in testing a hypothesis, I now need to go back and document a simplified series of steps that should lead any independent party to the same phenomenon. Once we are on the same page, they can confirm or refute my theory based on their own perspectives on the problem space.

During that process I may discover that I based half of my experiment on another hypothesis that I never tested, or that was plain wrong. Now I've discovered my 'load-bearing' assumptions. I may discover something even more interesting there, or I may slink away, having never told anybody about my mistake.

Essentially, scientists still 'build one to throw away'. We haven't in ages. And my read on Brooks's insistence that we build one to throw away is that it was aspirational, not descriptive. Notably, he apparently recants in the 20th anniversary edition (which is itself 25 years old now):

> "This I now perceived to be wrong, not because it is too radical, but because it is too simplistic. The biggest mistake in the 'Build one to throw away' concept is that it implicitly assumes the classical sequential or waterfall model of software construction."

So we are very much at odds with the scientific method. And we have the benefit of hindsight. We have seen the horrors that can occur when you take the word "theory" out of context and try to apply it to non-scientific theories. We should learn from the mistakes of others and summarily reject any plan that would have us repeat them.

In other words: next metaphor, please, and with all due haste.


I think I have to disagree with you on this one: I've used the scientific method (though not in an explicit, checkbox-y way) plenty of times to ship and debug code.

In particular, since (as I've said on this forum many times) I work primarily on the maintenance end of software, I don't know what the creators or previous developers were thinking, especially with more recent projects (documentation quality has really gone downhill; people call autogenerated UML diagrams "design docs," but without commentary they only reflect the state of the system, not its design). I have to try different changes based on my understanding of the system and see the consequences. That is, I form a hypothesis about what will happen if I do X, I do it, I collect the results, and I've either confirmed my hypothesis, refuted it, or left it in an indeterminate state. Then I form another and repeat.

Over time I build up a model (theory) of how the system behaves and should be updated/extended. Since I can't keep tens of thousands of lines of code in my head, let alone hundreds of thousands or millions, I only ever have a model (theory), because I never have the totality of it in my mind. Good code, with good use of modules, makes it easier to keep large chunks in mind, but I still have to have a model of how those modules work and work together.

Hell, this is half (or more) of testing for older software systems. You put in some input and see if you get the output you expected. If you don't, you evaluate why (is my model wrong or is the system wrong) and repeat.
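As a hedged sketch of that loop (the module and its behavior here are hypothetical stand-ins), writing each probe down as a test makes the confirm/refute step mechanical instead of eyeball-driven:

    # Each test encodes one hypothesis about a legacy system.
    # legacy_pricing and its discount rule are hypothetical stand-ins.
    import unittest
    from legacy_pricing import quote  # the code under investigation

    class PricingHypotheses(unittest.TestCase):

        def test_bulk_discount_starts_at_ten_units(self):
            # Hypothesis: orders of 10+ units get a 5% discount.
            self.assertEqual(quote(units=10, unit_price=100.0), 950.0)

        def test_no_discount_below_threshold(self):
            # Control: 9 units should pay full price. A failure here
            # means my model of the threshold is wrong, not the system.
            self.assertEqual(quote(units=9, unit_price=100.0), 900.0)

    if __name__ == "__main__":
        unittest.main()

A pass confirms the model; a failure refutes it, and the model gets updated before the next probe.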


I don't mean 'use' as in a T7 Torx wrench. I mean 'use' as in air.

I have shipped bug fixes using organized hypothesis checking as well. Especially sanity checks (make sure the instruments are working). But it is not the software developer's default behavior, and I'm sure you've lamented it just as I have. You and I are tourists, and many around us aren't even that. So when we speak of whether 'we' apply formal rigor to our work? Is it still rigor when there is no discipline? I don't think rigor is something you do on a random Thursday. It's something you do all the time.

So no, 'we' do not use the scientific method. We dabble.

And so when someone like Naur tries to summarize software with a line about theory building, he's not speaking for everybody. If he were honest, he might not even be speaking accurately about himself.

ETA: But he's talking about the long arc, not a single bug fix: we circle in on what the actual problem is and feel it out with code. But since we stop at "if it ain't broke, don't fix it", we never actually crystallize the thing we built. We never test the hypothesis we suppose we have created. We have spot-checked this organic thing that never gets pinned down and might actually be DOA. We hope the evidence that we are wrong is just 'glitches' or problems with the user's machine, until someone comes to us with a counter-proof that shows unequivocally that we were wrong.

Which leads to problems like those mentioned in this comment tree.


As far as "what happens when I do this?" goes, it neglects the null hypothesis. If you don't pay attention to falsifiability, can you claim to be doing science?


I use the scientific process constantly while shipping code (most of my time is spent writing fixes for large production systems that are being actively used, where a regression could cost millions of dollars). In particular, I explicitly state my hypotheses, and use positive and negative control experiments when evaluating my fix.
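For anyone who hasn't used controls outside a lab, a sketch of what that can look like around a fix (module names and the bug are hypothetical stand-ins): the positive control proves the experiment can detect the bug at all; the negative control checks that the fix didn't disturb neighboring behavior.

    # Hypothetical: parse_date misread two-digit years before the fix.
    import unittest
    from datetime import date
    from dateutils_patched import parse_date  # build with the fix
    from dateutils_broken import parse_date as old_parse_date  # build without

    class FixControls(unittest.TestCase):

        def test_positive_control(self):
            # The old build must fail this check; if it passes, the
            # experiment can't detect the bug, and a "passing" fix
            # means nothing.
            self.assertNotEqual(old_parse_date("01/02/99"), date(1999, 2, 1))

        def test_the_fix(self):
            self.assertEqual(parse_date("01/02/99"), date(1999, 2, 1))

        def test_negative_control(self):
            # Four-digit years were never broken; they must behave the
            # same before and after the fix, or the fix caused a regression.
            self.assertEqual(parse_date("01/02/1999"),
                             old_parse_date("01/02/1999"))

    if __name__ == "__main__":
        unittest.main()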

I often "build one to throw away", but half the time what I build is good enough that it goes into production and lasts for a while.


I've had that experience describing the plot of a novel to friends of mine. The novel is pretty complex and covers ideas in crime, internet anonymity, memetics/virality, and then some technical things with vehicles and atmospheric science. Some friends I've talked it through with come up with ideas to add stuff, and it's like "that makes no sense for this novel, but it's not a dumb idea given how little of the theory of the novel I've communicated to you." Other friends seem to immediately grasp the theory and make suggestions that actually fit the overall concept very well, usually recommending small changes rather than wholesale plot rearchitectures. It's like an architect coming in and saying "we should move this door two inches so that it doesn't bang this wall" vs. "we should build this whole house as one story into the side of a cliff."


>...or not care, but all of those things are slow and error-prone

Nah. Not caring is pretty quick and simple. It has served me well!

Seriously, though, I do agree. One mistake I've seen a lot is assuming that an extensive code base, developed by competent engineers but very complex, needs simplifying or rewriting in a simpler way.

Often that complexity is there for a reason: it covers platform-, customer-, or situation-specific edge cases discovered through hard-won experience and feedback from production use.

Twice I've worked at companies where a massive project to replace the core product with a clean-sheet implementation killed the business. That doesn't mean clean-sheet implementations are always bad, not at all, but they can look like a nice, clean, beautiful opportunity while actually being blood-curdlingly risky.


> One mistake I've seen a lot is assuming that an extensive code base, developed by competent engineers but very complex, needs simplifying or rewriting in a simpler way.

Omg, this, yes. I've made this mistake countless times. I've done my share of rewrites and refactorings that ended poorly. Work long enough on a big project, and junior engineers will do it to your code too. Being on that end of it is a very frustrating experience.

But let's balance that against the assumption at the other end of the spectrum: that an extensive code base, developed by competent engineers, doesn't need simplifying or rewriting in a simpler way. As it turns out, this, too, is a flawed assumption.

The more code I've seen, the more I've found that most codebases that have survived any length of time are a mix of both: complexity that's load-bearing and complexity that isn't. It's hard to figure out which is which until you get some deep experience with the code yourself. If there's been turnover in the team and the code has been under heavy churn, it's probably a mix of everything!


Thanks for sharing that, I'll read it!

I do agree that comments, documentation, and other artifacts aren't sufficient to solve this problem. The closest I've come is with formal specification, where the intent of the program can be communicated very clearly. Multiple approaches are needed. One of those approaches is continuity: keeping people around who are familiar with the code base and can pass this knowledge on to others.
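To give a flavor of what communicating intent can look like in code (a toy sketch, not a real formal spec; real ones live in languages like TLA+ or Alloy, and everything named here is hypothetical): stating preconditions and invariants once, executably, tells the next maintainer which properties are load-bearing.

    # Toy sketch: intent as executable contracts, not a true formal spec.
    class BoundedQueue:
        # Intent: FIFO order; never exceeds capacity.

        def __init__(self, capacity: int):
            assert capacity > 0, "precondition: capacity must be positive"
            self.capacity = capacity
            self.items: list[int] = []

        def push(self, x: int) -> None:
            assert len(self.items) < self.capacity, "precondition: not full"
            self.items.append(x)
            self._check_invariant()

        def pop(self) -> int:
            assert self.items, "precondition: not empty"
            x = self.items.pop(0)
            self._check_invariant()
            return x

        def _check_invariant(self) -> None:
            # The property a maintainer can rely on: stated once, checked always.
            assert 0 <= len(self.items) <= self.capacity

A real specification language would assert these properties over all reachable states, rather than just the ones the program happens to visit.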




