
> Sometimes it certainly is high, but why inherently? Surely it depends on the particular problem you are interested in solving?

Even the simplest "real world" programs do things like: communication with a database, string manipulation, calling library functions which can fail.

> As someone who actually does formally prove things about algorithms significantly larger than that from time to time, I believe you’re exaggerating here.

I wrote a master's thesis that touched on the subject, and that claim is what I put in the introduction as an illustration of the field's difficulty (I believe it was not plucked from thin air ;). I don't know the field now, but ca. 10 years ago program proofs were mainly refinements from high-level axioms down to executable code, so there was no separate "proof" step applied to a finished program; even so, the added complexity of proving the refinement steps was huge.

> Even the simplest "real world" programs do things like: communication with a database, string manipulation, calling library functions which can fail.

Sure, but these things are exactly what I’m talking about with accidental complexity. You’re talking here about databases and strings and library functions, not about whatever real world problem you’re ultimately trying to solve. Complexity that comes from the tools you use or how you use them is exactly what I’m arguing we should ideally minimise when programming, but we often don’t for various reasons.
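To make the distinction concrete, here is a hedged sketch (hypothetical names, in Python): the essential logic of the problem is one line, while the string representation and the fallible parsing around it are accidental complexity imposed by the tools, not by the problem.

```python
def total_owed(amounts):
    # Essential complexity: the problem itself is just summing the amounts.
    return sum(amounts)

def total_owed_from_csv(line):
    # Accidental complexity: parsing and failure handling forced on us
    # by the comma-separated string representation, not by the problem.
    amounts = []
    for field in line.split(","):
        try:
            amounts.append(float(field.strip()))
        except ValueError:
            raise ValueError("malformed amount: " + repr(field))
    return total_owed(amounts)
```

The second function is where real-world code spends most of its lines, and it is exactly the part we can hope to minimise by choosing better representations or tools.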

I was agreeing with the essential / accidental complexity dichotomy until I read this comment. I think we need a third layer, or at least we need to differentiate between accidental complexity that arises because we need to use real-life tools like databases and strings, and accidental complexity that arises because, as J. B. Rainsberger puts it, "we're not so good at our jobs". [1]

[1] https://vimeo.com/79106557

That was a frustrating video to watch. There are a few obvious flaws, like the “proof” that doesn’t actually prove the original claim at all, and the way you literally can’t reach the equivalent of

    add x y = x + y
using the proposed Agile programming model. However, the giant elephant in the room seemed to be that all of those tests the presenter was talking about, and any aspects of the overall design of the production code that exist only to support those tests, are themselves what he terms accidental complication. By his own argument, it seems we should never write tests!
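The point about the test-driven progression can be sketched as follows (a hypothetical sequence of steps, in Python rather than the Haskell-style line above): if you strictly write the simplest code that passes the current tests, each new test can be satisfied by another special case, so no finite list of tests ever forces the general definition.

```python
# Step 1: a single test asserts add(1, 2) == 3.
# The simplest passing code is a constant:
def add_v1(x, y):
    return 3

# Step 2: a second test asserts add(2, 2) == 4.
# The simplest passing code is still a lookup of special cases:
def add_v2(x, y):
    return {(1, 2): 3, (2, 2): 4}[(x, y)]

# The general definition appears only when the programmer chooses to
# generalise -- a leap the tests themselves never force:
def add(x, y):
    return x + y
```

Reaching `add` from `add_v2` requires a judgment call outside the test-driven loop, which is the gap the comment above is pointing at.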

This is why I prefer to talk about minimising accidental complexity rather than eliminating it. Essential complexity is logic you can’t avoid. It’s fundamental to the problem you’re solving, and any correct and complete solution must take it into account. Accidental complexity is logic that in principle you can avoid. However, sometimes you don’t want to: diagnostics and test suites bring practical benefits other than directly solving the original problem, and we will usually accept some extra complexity in return for the value they add. Similarly, using a tried and tested tool, albeit one designed to solve a more general problem, may be more attractive than implementing a completely custom solution to our specific problem from scratch, even though again it might introduce extra complexity in some respects.

I don’t really mind whether we break down accidental complexity into finer divisions. There are lots of sources of accidental complexity. There are lots of potential benefits we might receive in return for accepting some degree of accidental complexity. My original point was just that some complexity is avoidable and often it is possible to simplify real world code by eliminating some of that accidental complexity, even if we choose to accept it to a degree for other reasons.
