Why does software development take so long? (sesse.net)
296 points by cheiVia0 on Oct 26, 2016 | 268 comments



People STILL confuse the construction of software with the construction of buildings. We can estimate fairly accurately how long it will take to build a building once we have reasonable plans for it. I can pretty accurately say that it will take about 4 minutes to build the software once I have the plans to build it. The compiler pretty much automates the whole job.

Writing software is NOT construction. Much of it isn't even design. Most of it is gathering detailed requirements and writing them down in unambiguous form (code).

Asking how long it's going to take to write software is like asking a building contractor how long it will take to design every single detail of a city block, including gathering all the requirements.

Also, the requirements for software are much more detailed than for a building. 100000 lines of code represents 100000 decisions. I bet not many buildings involve 100000 decisions. And 100000 lines is tiny for a software project.


The reason designing a building is faster is that there are fewer decisions. The reason there are fewer decisions is because buildings are way better understood than software.

A lot of designing the structure of a building is just the implementation of a few core concepts that have been perfected for thousands of years, like doors and windows and beams and arches.

There is sufficient human knowledge about buildings that people expect every single building to stand up, work properly, and be safe the very first time it is built. There aren't many self-taught building engineers who just picked it up in their spare time during high school.

And a lot of the design is simply choosing fixed options. Architecture and construction firms don't design and engineer the ceiling light fixtures or the faucets. They select a supplier + model, and install them with standard hardware in standard ways. In many cases there is even government code that tells them exactly how it must be done.

Compare to software. Every single decision is up for grabs on every new project. Programmers have no standard certifications, and in fact are often actively hostile to standardization and formal training. The state of the art in making sure things work is just brute-force, constant testing. Of course projects can run into problems.

Software, as an engineering discipline, still has a long way to go toward being a mature, well-understood human endeavor. Which makes sense, since we've been doing it about 1/100th as long as we've been making buildings.


> Programmers have no standard certifications, and in fact are often actively hostile to standardization and formal training.

I hold the unpopular opinion that we need those things. The current state of things is that we're not much more than kids hacking away on computers with minimal supervision.

I'd love to see most (if not all) software written with the rigor required to write safety-critical systems.


The problem, to add to the buildings metaphor, is that we don't have a few thousand years of experience building software under our collective belt yet. We quite literally don't know how to build good software. Or rather: we are still figuring out how to build good software, and we don't know yet how far along we are in this process. Therefore it looks like a questionable idea to set the things that we currently believe to be true in stone.


We're also never building the same software (or at least very rarely). The industry is such that we're always trying to invent something new, something that hasn't been done before.

If you build the same (or near same) piece of software 100 times, you can know almost exactly how long it'll take and you can do it quickly, same as if you were building 100 buildings. But we don't do that, because you build software once then just copy it 100 times.

You only build software if you're making something new that hasn't been made before.


> You only build software if you're making something new that hasn't been made before.

Or it has been made before but is proprietary. This is why we should choose copyleft licenses over permissive ones.

This project I am working on is a bit boring, outsourced (or exploited) here in my country. It's about ETL, transferring data from a prod db to an analytics db. I am of the opinion that all this is a solved problem and that I am probably repeating mistakes. However, due to the nature of capitalism, exploitation ....


I've been there. I even got a fair way into discussing starting a 'data migration' startup. Customers only care about getting their data migrated and wouldn't mind you holding the rights to the software tools you write to do it so you could get better and better as you build up your toolkit.


I'd love that too, but I'd also like clients to be willing to pay for that sort of rigour. Which doesn't appear to be the case for most clients.


That opinion is not at all unpopular here; I read it constantly. It doesn't make a lot of sense to me though.


You'd have to have it mandated to everyone. Otherwise the few conscientious teams would take 10 times as long as the risky ones. And since riskiness often doesn't bear deadly fruit for months or years, the careful teams would never stand a chance in the free and ignorant market.


>The state of the art in making sure things work is just brute-force, constant testing.

I'm pretty sure the state of the art is using good type systems and libraries that use that type system. It takes some of the brute force out of testing. Unfortunately, many places don't even use the full power of the type system their chosen language has. Others choose PHP or C++, piling on technical debt, because they sell software the way HP sells printers: the initial payment is low, but you'll have to pay for support and bugfixes forever.
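To make that concrete, here's a minimal Haskell sketch (the types and names are invented for illustration): wrapping plain Ints in newtypes lets the compiler reject a whole class of mix-ups that would otherwise have to be caught by brute-force testing.

    newtype UserId  = UserId Int
    newtype OrderId = OrderId Int

    -- Accepts only an OrderId; passing a UserId is a compile-time error.
    lookupOrder :: OrderId -> Maybe String
    lookupOrder (OrderId n)
      | n == 42   = Just "widget"
      | otherwise = Nothing

    main :: IO ()
    main = print (lookupOrder (OrderId 42))
    -- print (lookupOrder (UserId 7)) -- rejected at compile time, not in production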

The only thing that can make software cheaper is the customers demanding better tools, languages and processes.


C++ has a type system. I suppose then that your implication is that it's not a 'good' type system. What would you pick instead?


Haskell?


Lisp?


> The reason there are fewer decisions is because buildings are way better understood than software.

I think buildings are better understood in large part because they are not nearly as malleable. Because software is highly malleable by nature, the complexity and scope of decisions can grow much faster than in any other engineering discipline.


I'm convinced malleability of the software isn't the issue; the issue is that the thing that the software is modeling is malleable and generally unknown to the level of detail necessary.

Nobody in a business knows all the business rules, no manager is aware of all the data that their underlings create, manipulate or consume, no individual or single location at FAA really knows all the rules on how traffic control actually works, etc etc etc.

But when we create software to automate any of those things, we then need to have a full understanding of all of them. And then we generally also discover that the rules discovered are inconsistent, violate some other rules or laws, etc. And once those are fixed, then and only then do people realize that what they had isn't quite what they wanted. And not only does what they want change; the whole underlying system was evolving at the same time, so it sometimes feels like you're starting again instead of tweaking the implementation (and this may also be why software methodology research focuses obsessively on approaches that make things easy to change at a later date).


When someone builds a multistory building, they will build the first floor to support the second floor. And in general it is exceedingly obvious why that is the case.

The problem with software is the abstraction. The first floor and the second floor aren't connected in any physical fashion. It's easy to rip out the first floor after the second is built, and only later realize you cannot support the necessary load.


Buildings have a far smaller state space, and that space is highly decomposable. There are only so many doors in the building that can be open or closed, lights that can be on or off, HVAC zones that can be on or off, or elevators that can be going up or down or stopped at a floor, etc. And few of those states interact.

Software systems have astronomical numbers of states, and while most of the discipline of software engineering is about how to minimize unintended interactions between them while still producing the intended interactions, we still wind up with lots of the unintended kind.
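A back-of-the-envelope sketch of that gap, with invented numbers: a decomposable system can be checked one component at a time, while interacting state multiplies.

    -- Decomposable: 200 doors + 500 lights + 20 HVAC zones, each
    -- checkable on its own, is roughly 720 cases to reason about.
    independentChecks :: Integer
    independentChecks = 200 + 500 + 20

    -- Interacting: a mere 100 booleans whose combinations all matter
    -- yield 2^100 joint states.
    jointStates :: Integer
    jointStates = 2 ^ (100 :: Integer)

    main :: IO ()
    main = mapM_ print [independentChecks, jointStates]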


There's a lot more to the state of a building than its human interfaces. When the ambient temperature changes, the state of the building changes. When the wind blows, the state of the building changes. When it rains, the state of the building changes. As materials age, the state of the building changes.

We're just so much more familiar with these forces and states that we can reliably model and design for them, and then (as with your comment) not worry about them anymore.

We take it for granted that our buildings won't fall down in a storm. But the knowledge of how to do that had to be developed and standardized at some point.


For buildings not collapsing, the main thing that matters is that the structure's strength is larger than the applied forces. Edge cases can be solved by adding more material, or, more intelligently, by building in enough redundancy that you retain enough strength even if a small number of components fail.

This, of course, does not apply to software functionality - you can't fix bugs by "more CPU power". However, if you look in the places where you can apply this methodology - like cloud services - you find that they are indeed very reliable.
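The redundancy arithmetic is worth spelling out (illustrative numbers, assuming independent failures): stacking replicas drives the odds of total failure down geometrically, which is why "add more material" works for availability even though it can't fix a logic bug.

    -- Probability the service is up, given each replica is down a
    -- fraction p of the time and failures are independent.
    availability :: Double -> Int -> Double
    availability p n = 1 - p ^ n

    main :: IO ()
    main = print (availability 0.01 3)  -- 0.999999 for three 99%-uptime replicas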


It does not happen that often that one hears: "I don't like the looks of the building when it rains. Replace the facade, pretty please!"


Also, moving the oven from one end of the kitchen to the other doesn't require replacing the support beams in the garage.


If the materials of software are for-loops or text files, I think one can say that "we" are familiar with them.

A simple program, or a simple piece of a large one, is something one can be very familiar with.

Indeed, the particular parts are more predictable than the particular parts of buildings, whose behavior changes over time, which literally rather than metaphorically wear out, which have to fulfill a number of functions simultaneously, etc.

So I think it is ultimately a matter of the state-space of the ingredients rather than a lack of familiarity with the materials.


I am not sure malleability is really the core issue... the core issue is that there are many different approaches to building a feature, with different trade-offs and costs that are not readily apparent at the outset. Not to mention that poor design up front causes a plethora of issues down the line if you need to scale.


Just pointing out that software will likely require adaptation and scaling in the future is a huge point of differentiation.


This reminds me of a metaphor: "If you want to know the maximum load of a bridge, you don't drive progressively larger trucks over it until it collapses then rebuild the bridge."

I think you're right that thousands of years of experience play a part. But, overall, that metaphor has had me thinking a bit about how much less predictive building software can be compared to engineering and wondering why that's the case.

Assuming building software and traditional engineering are about as complex, and assuming that engineering is easier to predict (construction deadlines slip too), I'm curious to see if we can overcome fundamental issues like the halting problem to become as predictive.


> If you want to know the maximum load of a bridge, you don't drive progressively larger trucks over it until it collapses then rebuild the bridge

For most of human history, we basically did this. Bridges have only become very reliable in the past 100 years or so. Before that, bridges collapsed very regularly and people were very wary about going over a newly built bridge.


But now they use simulations which drive virtual vehicles over the bridge.


I think the reason designing a building is faster is that people's standards are lower.

Almost no one wants a building designed uniquely for their lifestyle. They don't even realize you could ask for such a thing. They just pick and choose from what they've already seen.

If that were true of software, it would be just as simple. But people keep asking for things no one has ever done before, exactly, and that leads to unpredictability. We keep seeing unique new software, so we are more likely to ask for the same.

The same is true for buildings when the architect is trying to do something new. Buildings could be just as interesting as software, but most people don't think to ask.

I think in the long term, buildings will be exactly as custom and complicated as software, and designing them will be just as difficult to estimate.


A lot of people hire architects precisely because they want a building designed for their lifestyle. No one goes to an architect and says "I'd like four walls, some rooms - I don't care what they do - and a roof. Can you do that?"

And the architect never says "Maybe. I wish I could be more specific, but it's just hard, you know?"

There's very little genuinely new in software. Even outside of the CRUD treadmill and corporate Java land, there isn't much of a leap between a Visual Basic application and an iPhone app. There are implementation and platform details, and lots of them. But the core concepts are recognisably similar.

The only real difference is that the tools keep changing - often for no good reason.

In architecture, stone is stone and concrete is concrete. In software, C++11 is not C++17, except for the bits that are, mostly, assuming you can find a toolchain that implements the differences properly.

Angular 1.0 is not Angular 2.0. Metal is not OpenGL, even though sometimes it smells like it. React is not jQuery is not a long list of other things, including Haskell, although you can bet someone somewhere is working on Category Theory as the definitive industry-changing conceptual model for MVC on web pages.

Most of the productivity costs associated with the constant churn are self-inflicted - the result of an industry more motivated by ADHD than by empirical analysis of which language and toolchain features make a real difference to getting shit done, and which are just unthinking tradition, random opinion, and noise.


From what I've been told, it's very rare for an architect to find a client who will let them actually creatively design a space for utility. Clients are primarily interested in appearance, surfaces, size, and to some extent layout. Very few will pay an architect to prototype new concepts, custom design amenities, etc.

Not that that's a good idea for most people... it's better for resale if you make a cookie cutter house. Most codebases will never be sold. Almost every house will.

If you had to resell codebases they'd probably be a lot more standardized.


That isn't the majority of people, though. Most of humanity lives in either apartments, cookie cutter developments, decades old homes they didn't build, or shanties. They don't get to choose, and customization is a luxury.


My brother builds large buildings and after a lot of discussion about our professions the key difference we agreed on was the level of constraint. Software is (appears?) unconstrained. The enormous cost of experiment or change is very much apparent to stakeholders in his projects. In mine there is always a sense that because it is intangible it doesn't have the same cost.

However this appears to be a misconception.


There is a great essay by Jack Reeves making a similar point.

What Is Software Design? http://www.developerdotstar.com/mag/articles/reeves_design.h...

FTA:

There is one consequence of considering code as software design that completely overwhelms all others. It is so important and so obvious that it is a total blind spot for most software organizations. This is the fact that software is cheap to build. It does not qualify as inexpensive; it is so cheap it is almost free. If source code is a software design, then actually building software is done by compilers and linkers.


But then you could argue that the machine code emitted from the compiler is a design and the actual hardware that implements it is "building the software".


Nobody said that it was an exact analogy. :) All it needs to be is more exact than the previous (good, but not all that good) analogy.


But then you could argue that the implementing transistor arrangement is a design and the actual movement of electrons that implements it is "building the software"


What a compiler does is equivalent to what a contractor does: building. The machine code emitted from the compiler is the building.


I would bet that those involved in making buildings would differ.

However, sometimes you get to make more or less the exact same building again, which you shouldn't ever be doing in software. This is how suburbs happen; it's way cheaper to rebuild the same house again and again. In software, especially in an era of open source, you should not be doing that. So every house you're making, is the first house you've made of that type.


In software it's basically free if you want exactly the same building. Bits are cheap. Even physical media to store the bits is cheap.

Sure, the bricks, mortar, framing lumber, concrete, drywall, conduit and such for a building cost a fair amount of money. Then there's the labor to actually put it together. But one of the big costs is the architecture and (literal) groundwork.

The fixed costs of a building are fairly steep, but minuscule compared to redesigning it over and over. In software we generally get away with much cheaper fixed costs of deployment, but redesign is still redesign.

We're typically happy to spend some time redesigning and paying that cost because we're not actually demolishing and rebuilding parts of the project with high deployment costs like renovating an existing building. Throwing away the old copy and deploying a new one is essentially free. So we shift the budget to more redesign. There should be a limit on how often, though, if we ever want to move on to different problems.


Several years ago I wrote a blog post expressing a related point, contrasting software design to construction. Here's a part:

Let me give an example: if you're designing a bridge, you can draw blueprints on paper which shows girders. The girders are described by giving their dimensions (accurate to 1/16th of an inch, say) and the particular alloy the girder is made from. This is sufficient to accurately model how that girder will behave under all kinds of different stress loads which is important for ensuring the bridge will be safe, and also to model how the girders will fit together like a puzzle which is important for allowing the steelworkers to build the bridge correctly, on-time, and on-budget.

The key to all of this is the fact that you don't need to create a real girder in order to test the design and make sure it's correct. A few easily described properties of the girder are sufficient; it doesn't matter where every atom goes, it doesn't matter if the surface isn't perfectly uniform, it doesn't matter if there is some rust, etc. Lots of the details just don't matter at design time, and most of them don't matter at construction time either.

Software just doesn't work this way. Software development languages are extremely detail-sensitive: get one letter wrong, one punctuation character in the wrong place or left out, and the software won't work right. There is no way to accurately model something this sensitive to detail without building it first, and if you have to build it first you lose the biggest benefit of doing design up-front: the ability to test and iterate on your design cheaply before committing to a full build of it.
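A hypothetical one-character illustration of that sensitivity: both functions below compile cleanly, but the second quietly returns the wrong answer for every caller, and nothing short of actually running it would reveal the difference.

    -- Intended: return the larger of two numbers.
    biggest :: Int -> Int -> Int
    biggest x y = if x > y then x else y

    -- One character different (< for >): quietly returns the smaller.
    biggestBuggy :: Int -> Int -> Int
    biggestBuggy x y = if x < y then x else y

    main :: IO ()
    main = print (biggest 3 7, biggestBuggy 3 7)  -- prints (7,3)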


It's reasonable to imagine that installing 1000 windows in a building could take weeks or months.

If you're writing software and follow DRY, it might take a few hours to work out how to perform some repeatable task, but then only a brief moment to actually do it 1000 times.

The act of making software is all about decisions, there are almost by definition very few repeatable tasks. If you do find yourself repeating things over again, you're not taking full advantage of DRY or automation.
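A minimal sketch of that asymmetry (the task is invented for illustration): all the thought goes into defining the repeatable step once; running it 1000 times is a one-liner.

    -- The hard part: deciding what "install a window" means in code.
    installWindow :: Int -> String
    installWindow n = "window " ++ show n ++ " installed"

    -- The trivial part: doing it 1000 times.
    main :: IO ()
    main = mapM_ (putStrLn . installWindow) [1 .. 1000]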

This is why I think a lot of software is so unpredictable in terms of time. How do you predict that which you don't already know how to do?


Maybe not the best analogy, but there are also things like letting 10x the projected number of occupants enter the building to make sure it can still handle the load; if it can't, this can lead to some rather drastic changes requiring rework.

Also things like having a contingency plan if, say, the city sewer gets backed up - how will your tenants take care of "business" then? :)


Well, we still need you to provide a detailed task breakdown and hourly estimate for each task so if you can do that before starting work on your story backlog, that would be terrific.


Underestimated in this comparison is the fact that software development, even on very large projects, tends to be staffed by generalists, while building construction relies on many highly-specialized masters of a trade.

Software has some of that specialization (for instance, even big projects don't try to write new operating systems, invent interoperability protocols, or build graphics libraries). But when it comes to the boundaries of what's considered part of the project, we rely on generalists. In building construction, a general contractor may have an HVAC subcontractor, one for electricity, one for glass, one for landscaping, one for every subsystem. In software, it's not economically feasible to contract in someone specializing in, say, web routes, and another in Rails model development, and another in for-loops, etc.

The other, related point: very often software development time isn't just a function of the requirements, it's a function of the intersection of the technical requirements/platform and the talent pool available for that platform/language/tech.


Another point in support of this is that buildings tend to get away with more minor bugs than software does. Walls usually have a subtle bow to them, toilets don't have to fail gracefully if the sewage network gets backed up, doors get installed the wrong direction, there are gaps behind the cabinets, light switches get wired around odd corners, etc. So long as the building can stand up in an earthquake and looks good enough to sell, that's usually the end of it. Software, on the other hand, has an air of imperative perfection. If it is even slightly wrong or ugly, it must be entirely wrong. I feel that way myself, and I must sometimes throttle my impulse to fix it indefinitely so that I can meet a business need in time.


In construction, the hardware is pretty much standardized. Very little new hardware is coming up. But in software development, hardware is still evolving very fast (processor, memory, bus, fibre, etc.). So software has to evolve along with hardware. Once hardware is standardized, like the CPU instruction set, then software will also be standardized without any further significant development.


Meanwhile, lately it seems like all too often, you'll be pounding nails with a hammer and then, in between swings, it suddenly turns into a paintbrush or a screwdriver or an Allen wrench.

Or you're working along one day, and all of a sudden, every machine screw in every piece of power equipment on the job site just up and disappears into thin air, and all of that equipment shakes itself apart into a maelstrom of shrapnel, because that little hidden component nobody thought about winked out of existence.

Construction workers would not put up with this bullshit.


Yeah. Software projects that are as complex as a house are typically called "the script that guy wrote a decade ago", and tend to work fairly well, even considering that use-cases for software have changed rapidly in the last few decades.


>Most of it is gathering detailed requirements ...

Gathering is the common term used to refer to it, but it makes it sound too easy.

I'd say it's harder than gathering requirements; it's eliciting requirements.


It's requirements negotiation round here.


You can see the Pacific Ocean on one side, and you can see the Atlantic on another.

You know of things called shovels, machetes, labourers, and engineers. And you have a budget.

Of course, it must be possible to dig a trench connecting the two oceans.

When you start, you find your men get ill from the climate. And perhaps you don't quite know how to build a reliable lock, but you figure the engineers will find out somehow. And meanwhile there are constraints coming in from government.

Things take long because it's easy to see the big picture, and getting more detail (i.e. learning) normally means discovering that you need a bit more time to fix some issue that you didn't see standing on the hill.


Except that in software 1000's of groups have already dug 1000's of canals and yet the next group that will build a canal will insist on doing it their way.

If our profession could simply collect lessons learned and pass those on to the next generation, then we'd be moving faster on average and we'd produce better software. We don't need 500 crappy, slightly different ways of doing the same thing; we need maybe 5 battle-tested, hardened, bulletproof solutions that are well engineered. Fragmentation is going to be open source's death at some point if this continues; that's one area where closed source actually has an edge: focus.

In part the problem is that software producers have managed to convince the buyers of their goods and services that they are never ever liable for their product (something no other industry has managed to do), in part it is because it seems superficially easy to re-invent until you reach the same level of complexity (which inevitably happens) as what those 'archaic' solutions were already addressing.

I don't have any answers on how to solve this, unfortunately, but there are software eco-systems that really get this right and others that seem to be totally lost in ever expanding cycles of endless rewrites.


Surely it has nothing to do with the buyers of our goods and services insisting that the problem they need solving is unique, so trivial it should be done yesterday, and so clear that they don't really need detailed written specifications? To say nothing of IP concerns making it difficult to actually reuse solutions.


You hit the nail on the head.

If customers/bosses actually figured out what they want before coming to us, it would be simple to write a perfect solution and/or reuse some existing general product to fulfill their needs. In my experience, however, the requirements keep changing in parallel with the code.

In other domains this tends not to happen. When you're in the middle of constructing a beam bridge, you don't get an architect suddenly deciding it should be a suspension bridge instead.

Now I'm not saying iterative development and discovery is bad - it's not. But, with iterative development, you treat steps as throwaway, and you don't commit to deadlines. Something the bosses / customers don't understand. You can't have it both ways.


Architects wield cost for requirements changes like a razor sharp sword.

"...you treat steps as throwaway, and you don't commit to deadlines." Just that. For cost reasons.

Prototypes are like football practice. Production code is like a football game. There is a difference.


No, unlike e.g. civil engineering you're never doing the same thing twice and you're always doing something you've never done before because if you have, you could just use that code.

You can't copy-paste a bridge but you can re-use software.


You possibly don't know much about civil engineering, so let me start out by writing that every project in civil engineering stands on its own; it is the knowledge and the practices that get recycled from one project to the next, and, on a higher level, from one generation of civil engineers to the next.

So it is extremely unlikely that civil engineering will be done twice for an exactly identical project. But no civil engineer is going to make his own table of bolt tensile or shear strength just to scratch an itch.

Then we come to software. In software the same problems appear over and over again, and we have hundreds of possible solutions for some problems that are all almost (but not quite, of course, due to sloppy design and lack of standardization) interchangeable.

Think web frameworks. They are themselves an attempt to abstract away some elements that are common, but in the meantime there are almost as many webframeworks as there are things that they were trying to abstract away to begin with (and all of them fail on one or more of the details). Or programming languages, another area in which we have re-invented the wheel so many times that the number of programming languages (1000+) will at some point probably exceed the number of human languages (about 6500 in active use, but about 2000 of those have < 1000 speakers).

http://www.infoplease.com/askeds/many-spoken-languages.html

http://www.99-bottles-of-beer.net/


> You possibly don't now much about civil engineering

I don't, so maybe it's not the best example.

But take houses, for example: it's not uncommon to see a dozen or more identical houses being built. After you've done 11, number 12 is not going to be much different.


But the foundations might be slightly different to account for terrain differences. And that will not cause the civil engineers involved to re-define what a bulldozer or backhoe can or should do and how it should work, nor will it change the process they will use to make that foundation.

In fact, the house sitting on top of the foundation will likely be the same project if they really are identical, just like copies of a piece of software. In other words, from an engineering perspective, the job as far as the house is concerned was done with #1, even if you want more than one house.

It's all about the tools, processes, materials, soil knowledge and so on that are being employed in order to solve the problem, not about what is actually built.

In the software equivalent there would be a new build process with associated terminology and tool obsoletion for almost every engineering project. Imagine workers coming to the job one morning to find that the tools they used the day before have now en-masse been declared obsolete and all their technology is slightly different, incompatible and un-tested. Then, a few weeks later at another job site it would be the same story all over. In the meantime, the engineers would be re-inventing engineering from first principles for every 5th job or so.

In other professions that would be called madness.


"Imagine workers coming to the job one morning to find that the tools they used the day before have now en-masse been declared obsolete and all their technology is slightly different, incompatible and un-tested."

This is one reason software is eating the world. It's actually possible to build new tools to increase productivity and start using them in a very short period of time. Sure, most of the tools are crap or marginal improvements at best. But with so many new ones being developed constantly, occasionally one offering real benefits shows up and every developer can benefit almost immediately.

I also think you vastly overstate the amount of "NIH" and vastly understate the massive amount of actual reuse.

How many developers use Apple's UIKit to build apps? Yes, a small number reject some out of the box components, but I'm sure many more developers just use what Apple provides as the building blocks of their application. Very few people write their own networking stack or HTTP library. And as much as newer programming languages get massive hype, the same languages tend to dominate the Tiobe rankings year after year (or whatever popularity metric you prefer).

In other words, focusing on what gets posted to Hacker News probably doesn't reflect the experience of the median developer.


I think that's the wrong perspective.

Software is a tool. What you do with it is an outcome.

A bulldozer is a tool. The house is the outcome.

So, we already invented a bulldozer (some piece of software) and it's trivial to duplicate it and reuse it all over the world. But when we need a new type of machine it's harder because, well, it's new.

In the software engineering world the goal is generally not to produce a facsimile of something like, e.g., a house. It's to create a legitimately new tool. And since there are no real capital investments required (unlike building new tools in the physical world), and folks often value their free time at a weirdly low marginal rate, you end up with tool proliferation in a way not seen in the physical world, which makes it easy to think that we're building houses when in fact we're inventing the bulldozer.


> It's to create a legitimately new tool.

More likely: to create a minor variation on an existing tool. There have been relatively few instances of legitimate new tool creation but many 100's (and probably 1000's) of instances of re-creating slight variations on the same tools.

For instance: build tools, programming languages, libraries (95% duplicate functionality with some other library, and of course an inconsistent interface) and so on.

> And since there are no real capital investments required (unlike building new tools in the physical world)

Software requires enormous amounts of capital, in part due to all this re-invention. We even have a name for it: NIH, and we have a term for what happens to a software project that is a few years old (technical debt).

Our tools and our processes are ill equipped to deal with the challenge and what is supposed to be building houses more often than not ends up with people re-inventing the bulldozer.

Now there are times when that is the right thing to do, but most of the times it isn't.


Slashdot, Reddit, and HN are different tools. On the surface they seem similar, but so do a Phillips-head screwdriver and a Frearson, yet they really are different tools.


> Slashdot, Reddit, and HN are different tools.

They're different end products, but they are not different tools.

Conceivably they could have been built with the same toolchain, but instead they were all built with totally different tools (Perl, Lisp/Python, Arc) respectively.


But how is this not complaining that some buildings are concrete and some are wood and some are gasp a hybrid?

I mean, if those engineers were better at what they do, they'd use just one material, right?

And they have hundred or thousands of kinds of screws! Why do they keep reinventing them? They're just meant to hold things together -- one job!


Because in engineering there really are such requirements. In software we rarely have a good reason to start yet another 'from scratch' new hip thing that is so much better than that old thing (where 'old' is likely less than 5 years old). Technology cycles are now so short that libraries and frameworks don't even bother pretending to have a life-span longer than the stuff that will be built upon them.

And those engineers with their 'hundreds or even thousands kinds of screws', they very much favor standardization, lots of work goes into attempting to create families of compatible tools and consumables and typically a change like that will result in decades of stability in the fastener industry, it is quite rare for something revolutionary to happen, most of the changes are incremental and logical.

If you attempted to come up with a completely new thread or screw head, there would have to be a reason better than 'I don't like the other screws' if you expect to gain any acceptance of your shiny new tech.

I'm trying to imagine a world where every other week the whole of engineering would be up-ended, everybody would have to totally re-train, and we'd discard everything we learned process-wise over the lifetime of the industry.

That would be the rough equivalent of what we do in software. Ok, maybe not two weeks, but we might actually get there, life-cycles are getting ever shorter.


So if you stepped back 5000-10000 years in engineering, you'd notice that civil engineers... basically did exactly that.

You're just being extremely unfair in comparing a field with ~10,000 years of development with one that has like... 100.

Everything you've called out as unreasonable is pretty much what every field, ever, has done when it first became a discipline, and your specific criticisms border on absurd.

Programming languages, for instance, are like materials: of course there are thousands; most of them are meant for research purposes, and the hundred or so in actual use each represent different tradeoffs in base construction strategy. No different than the dozens of kinds of wood and concrete used by civil engineers.

Similarly, engineering of civil projects has some notable massive overruns on budget and complete design failures in even recent history. Their projects are bespoke and often use novel ideas that don't work out in practice.

The main "difference" is you're comparing a high assurance subset of one side to the general of the other, which is naturally quite unfair.

Want to compare the fields in full? I bet I can find a dozen bad handyman repair jobs for every bad JS framework or library.


But we already have engineering as an example of how to do something like this right, there is no reason to go through another 5000 or 10000 years to figure all this out once more.

And the handyman and the engineer have very little in common.


Civil engineering had thousands of years of development when Galloping Gertie happened. Why would their methodology work in other, different engineering fields when it fairly routinely doesn't work in its own? I mean, if you look at things like project failure percentage (and cost!) over time spent developing the field, you probably have software winning.

Software is developing in to an engineering profession at an astonishingly fast pace. It's just currently at the point where it's differentiating between tradesmen and engineers, and without that clear distinction, it's hard to compare apples-to-apples between fields.

> And the handyman and the engineer have very little in common.

Why not? You're lampooning software development for the fact that a lot of not-quite-professionals develop mixed results because they churn a lot of product onto the market. The damage done to house integrity all over the world by questionable repairmen estimating the engineering impact of a change they make is comparable.

My experience with high-assurance software is that it's similarly well constructed and engineered to high-assurance civil engineering, eg, major bridges. Failures happen, but are sporadic rather than regular.


"My experience with high-assurance software is that it's similarly well constructed and engineered to high-assurance civil engineering, eg, major bridges. Failures happen, but are sporadic rather than regular."

Where did you experience high-assurance software, and is there a report on how it was made? There are very few here who even know what that means, although I've been working to change that over the past year or so. I'm always collecting more from that tiny field for any lessons learned that could be shared.


I worked in control firmware and systems middleware/OSes for chemical processing equipment (and related control systems). None of the super fancy, huge plants; more single room sized processing pipelines for R&D uses. That said, the bigger chambers were like 0.2m^3 and operated at 2500PSI @ 100C, so you'd definitely know if one catastrophically failed.

We didn't necessarily develop a lot of process in-house, because our senior engineers had backgrounds with Boeing and/or NASA, which both had extensive ideas about how to design reliable systems.

If I were summarizing, there's only really two points that cover about 90% of what you need to do for high assurance software:

1. Realize almost all bugs are because of politics and economics, not technological or engineering faults per se. That is, we make choices about how we set up our culture and corporate system which incentivize people to create and hide bugs, while also failing to incentivize others to help fix those bugs. The first step in combating bugs must be to change the fundamental incentives which create them. In particular, a focus on the success or failure of a team as a whole. Development is a communal activity, and the entire team either succeeds or fails as a unit. Someone else committed a bug (and it got merged all the way to deployment!) that brought down the system? It's because you didn't provide the necessary support, teamwork, and engineering help your peer needed to succeed. What can you do to help them succeed next time? After all, everyone is human and makes mistakes. What's important is that people are interlocked in layers, where one person can catch another's mistake (without punishing the person who made it, because that just incentivizes them to hide mistakes!) and help fix it before the code reaches client machines. Successes may be individual, but failure is always the system's fault, never an individual's.

2. Almost all technical bugs, in any field, are because of leaky abstractions and implicit assumptions. From abstract ones like mathematics to physical ones like carpentry. Be explicit. About every possible detail. And if you think you're being too explicit, you probably forgot 80% of the cases. Ever see the average house blueprint? Puts software engineering design to shame, easily. If you want to build something that runs as reliably as a highly tuned, expertly designed engine, you can't start with anything less than as detailed of a specification as they use. Be explicit.

Once you get to the point where you're working in genuine engineering teams rather than as individual engineers on a team and you have explicit, detailed specifications, the technologies to actually convert that reliably in to software that runs stably are pretty straight-forward.

The reason we don't see this all the time is simply that it's expensive: the politics require a lot of redundancy of time spent (eg, code has to be read several times by several people); the explicitness requires a lot more upfront planning and effort in documentation, which requires more time invested per unit of actual coding; etc.

Of course, much like we have building codes for houses and larger structures, I think it's perfectly fair to expect minimal standards from software engineers. (Especially now that, eg, IoT botnets are DoSing things.)


Appreciate your write-up. It all sounds great. I'd say the tech to go from detailed specs to reliable or secure systems isn't necessarily straight-forward except in the easy cases. It can take some serious work by smart people, esp if it's formal verification or spotting unknowns. We can easily get 90% of the way there, though, without the hard stuff just by people giving a shit and doing stuff like you said.

Of course, there's Cleanroom Software Engineering, which does a lot of what you said without as much formal stuff as Praxis's method. A thing you should remember in these conversations is to point out to the other party that what you recommend, and what these methods did, knocks out tons of debugging and maintenance costs. Since both phases cost labor, with maintenance fixes costing multiples of development, there were many cases where Cleanroom projects actually cost less, since they knocked out huge issues early on. You can't bank on that being normal, with it often being 20-30% or so more. It was around the same cost or less in many case studies, though, due to knocking out problems earlier in development.

Btw, I found a nice write-up on Cleanroom without tons of math or formality with case studies on college students if you're interested in references like that:

http://infohost.nmt.edu/~al/cseet-paper.html

I like dropping them on people in regular development in these discussions when they say you can't engineer software, it takes ridiculous skill, or it would cost what NASA's team spends (rolls eyes). Eye opening experience for some.


> I'd say the tech to go from detailed specs to reliable or secure systems isn't necessarily straight-forward except in the easy cases. It can take some serious work by smart people, esp if it's formal verification or spotting unknowns.

I was being a little facetious. Of course the technical work is highly complex -- some of the brightest minds of our time work on DARPA related projects on foundational mathematics related to automated theorem proving. (Looking at you HoTT crew and related projects!) Why is DARPA funding automated theorem proving? Because they want to create high assurance software for government infrastructure to counter the threat of AI-based cyberwarfare, and our current mathematical techniques aren't up to muster.

However, we've made tremendous progress on that problem in the span of around 100 years. By contrast, the problem of how to not incentivize your workers to do a shitty job, causing you problems later on... has been with us, I think it's safe to say, for thousands of years. (And it too, has attracted some of the brightest minds over the years.)

So in a relative way, the technical aspects are "straight-forward" to address, compared to the underlying political problem. And the more technical methods only add some assurance that you've correctly implemented the spec as defined, not that you're doing the right thing. It's certainly good to, eg, check that you're using total functions or not misassigning memory, but it's not magic. So while they catch a lot of dangerous problems, they can also lead to false confidence about the existence of other classes of problems.

> they say you can't engineer software, it takes ridiculous skill, or it would cost what NASA's team spends

It mostly just requires that we consistently act with discipline and professionalism, which are both tiring compared to not doing them, so by and large, people just don't bother. I know I don't when I can get away with not doing it (even if I know that's a bad habit).


Aviation engineers are in the same boat, so no, the lessons are not transferable. E.g.: the F-35, the Space Shuttle and its replacements, etc.

Hell, look at asbestos, reinforced concrete, or the Big Dig, and the construction industry is clearly less competent than generally assumed.


The reasons why you can name those examples is because they are the exceptions. If you tried to do the same for software the list would likely exceed HN's capability to store it.


You could say the same about construction projects that are 30+% over budget. Kitchen renovation IMO actually has more in common with software development than home building. Homes are generally built for generic people, kitchens are renovated to meet specific needs.


How is the number of web frameworks any different from the number of possible building facades? Even if you decide you want a glass facade, there are dozens of companies that can build that for you, and they've all re-invented essentially the same thing.

The same can be said for nearly everything else. Insulation systems, HVAC, lighting, interlocking brick, drainage...


But you would expect your house to stand upright, all the fixtures to work (and work in an expected way), the roof not to leak, your house to be solid, and so on. And if it is not, you'd expect the producer to put a warranty on their product.

Choices are made for aesthetic or engineering reasons, or cost, or some other constraint, rarely because the tech is 'fancy' or 'new' or has been invented by the crew building your house. Bricks will be laid on plumb walls, the roof will be strong enough to support the expected load, and so on, and if any of it fails you will be surprised.

Note that most of this is stuff that has already been engineered elsewhere and is combined in some novel way, and for the most part all of the materials can be used together and are standardized as much as possible, and your average build crew will be able to move on to the next job without having to re-learn their whole knowledge base because what they did last week is now so '2015' that they are essentially useless unless they get with the times.

So yes, that's entirely different.


Not all software developers consider something from 2015 'old' and immediately jump to the 'fancy new' stuff that came out this week. See the discussion on "Happiness is a Boring Stack"[0] from yesterday.

There are crappy new building materials that don't work (which you often don't figure out until years later when you have to replace your roof or windows or siding or doors), and there are bad crews that do crappy jobs.

This is really no different from software, though I'd argue it's easier for the crappy developers to continue working. This is partly because ugly-but-working website code has little impact on your typical business owner, while bricks that are crooked and unevenly spaced -- even though they're perfectly functional -- are highly visible to everyone.

The bricklayer that does a functional-but-ugly job gets a bad reputation immediately. The website developer that does the same thing isn't found out until much later when you have to modify the code.

[0] https://news.ycombinator.com/item?id=12788804


> This is really no different from software, though I'd argue it's easier for the crappy developers to continue working. This is partly because ugly-but-working website code has little impact on your typical business owner, while bricks that are crooked and unevenly spaced -- even though they're perfectly functional -- are highly visible to everyone.

But in the wonderful world of e-commerce, those unevenly spaced bricks are roughly the equivalent of the gap a skilled hacker needs to enter your store or raid your db.

Also, it is probably important to make the distinction between engineers and contractors: engineers design stuff, contractors execute the designs.

In the software world we used to have these people called systems analysts, they would be roughly the equivalent of engineers, whereas the programmers were more comparable to bricklayers.

Then for a while we had 'analyst-programmers', and now the whole analyst bit has disappeared. This is a pity, because I think it was a very valuable role and worthy of being an independent discipline: I believe there were people who were good at one of these aspects of the work but rarely really good at both.


In all fairness, if someone throws a brick through your window you'll have police at the scene in relatively short order, and most buildings don't need to worry about their doors being proofed against C4. On the other hand, if your software project gets hacked or DDoS'd you have no recourse unless the attackers are incredibly sloppy or you're a mixture of wealthy and influential.

I do think software engineering as a profession should take notes from other engineering disciplines - but to compare them apples to apples is a touch unfair when our discipline is much newer, routinely attacked by bad actors (fer teh lulz, no less), and we're regularly updating our toolset to accommodate for changes in a much more rapidly-evolving landscape.


My title is actually "Senior Programmer/Analyst". But, I think your observation is correct that it's not as common to have a title like mine anymore.

It's strictly due to the historical development of our field. Computer systems were limited to large organizations in the early days. VMS, Unix, Windows, and less expensive hardware allowed smaller, less well-funded organizations to jump into the game.

In internal IT departments, software developers have become the 'jack-of-all-trades, master of none' people. We fill analyst, developer, operations, and support roles with little to no training in any area. IMO, part of the reason software systems are so fragile nowadays is this 'generalist' mentality. Mind you, there are plenty of other factors that are, IMO, mostly social.

All that said, there is a tendency to think of software development like assembly-line manufacturing. There was a Dyson vacuum ad that I loved. The supposed owner of the company showed the vacuum in operation and then talked about the 239 (not sure of the exact number) versions that came before it. Once the company got it right on the 240th model, mass-assembling the vacuums was quick. Building a model, letting users work with it, integrating their feedback into a new model -- this embodies software development. And it's expensive along at least one of two dimensions: money or time.

(Edit: 240 to 239)


In software, programmers are the engineers, and the compiler is the contractor. The software equivalent of bricklaying was automated away a long, long time ago.


A compiler is merely a power tool, not the contractor; anything it does you could do by hand, but slower. We've tried (several times) to create the software equivalent of bricklaying, aka 4th-generation languages, but to date they have all failed to attract mainstream attention, mostly because they simply don't work well:

https://www.techopedia.com/definition/24308/fourth-generatio...


> A compiler is merely a power tool, not the contractor; anything it does you could do by hand, but slower.

Only in a sense in which you could do all the things a contractor does yourself, but slower. In both cases, you'd need to first learn what the compiler/contractor is doing. If anything, the compiler is a powertool that automates away the contractor completely.

I dislike comparisons of software to construction and civil engineering anyway. The two seem nothing like software. They have nowhere near enough (literally) moving parts to reflect the way software works. The closest thing to a comparable discipline that comes to my mind would be designing and building jet engines.


Having worked with PE civil engineers out of college, I can confirm they don't do anything new. Civil engineering at this point is pouring concrete. Concrete is an understood material. They mainly deal with permitting, fixing CAD drawings, and arguing with architects (who are also not doing anything new). Once in a great while they do something new, like expanding the Bay Bridge. Then you get to see civil engineers try to solve an actual problem. A decade(?) of work on a bridge that could have been completed quicker in 1929.



You posted a video of people moving something heavy with wheels. I guess in the civil engineering world: amazing!


I think you slightly missed the point there. The video is of a bridge designed to span a highway that was to be expanded from 2x4 to 2x5 lanes + an emergency lane on both sides with minimal interruption to existing road and rail traffic.

The decision was made to construct the bridge off to the side of the road on a separate spot, then to create a special purpose roadway across the road to the location where the bridge would be placed. And finally, to move the bridge from the construction site to the final destination in one piece.

The whole thing was conceived and executed according to plan, including a < 24 hour closure of the road and one week of interruption on the rail line (it was not possible for many reasons to have the track installed prior to moving the bridge).

That is engineering. In the 'move fast and break stuff' world that would be 'move fast and kill people'.

If this were software we'd be looking at a multi-year software project with a few hundred programmers delivered on-time, within the budget and working flawlessly on the day of delivery.

I have yet to see such a project. But as you say, 'Amazing!', the fact that you make it seem like this is 'no big deal' is exactly what is so good about it, you fully expected it to work didn't you?

The trick apparently is to make the complex and impressive stuff look boring. I really wish we could make software look that boring.


NASA does that regularly. They're seemingly the only people who can afford it.

If bridges had scope creep the way software does, every bridge would start out 1 lane each way and be 6 lanes double decker in the middle before all lanes fly in different directions towards different cities, much less the other side of the river.

I hear what you're saying about using tried-and-true methods. But nothing else has scope creep like software does, because basically nothing else is design-only. That's why software is so crazy. It's ALL design. There's no manufacturing at any point. You can manufacture infinite copies for almost free, i.e. compile and copy and install.

If you could go from bridge design in CAD to actual working bridge in meatspace in ~30 seconds and for $0.01 then yeah, there'd be millions of horrible, horrible bridges EVERYWHERE.

The fact that the design of the bridge takes a year or two and a few million compared to the 2-5 (or 10) years and billions to actually manufacture it means that you can design, redesign, and redesign again until you get a design that'll actually work and it barely moves the needle on the total price tag.

But in software, if you redesign and it doubles the amount of time to complete the project, you just doubled the cost at least.

If it were possible to make software better, faster, cheaper by just imposing some discipline, why haven't dozens of companies done so and taken over the world?


> If you could go from bridge design in CAD to actual working bridge in meatspace in ~30 seconds and for $0.01 then yeah, there'd be millions of horrible, horrible bridges EVERYWHERE.

That's a fantastic and well made point. It's akin to the cost of communications dropping over time. When moving words around the world was expensive people tended to stick to the important stuff, but now that the cost has essentially dropped to 0 we are drowning in irrelevant information.

> If it were possible to make software better, faster, cheaper by just imposing some discipline, why haven't dozens of companies done so and taken over the world?

Because it is likely more than just 'some discipline', and because market forces are working against you: after all, nobody even expects software to be reliable, so a competitor going to market with unreliable junk will eat your lunch if you slow down long enough to get it right, assuming you know what to build in the first place.


> assuming you know what to build in the first place

This is the heart of the problem. Nobody knows exactly what to build. Most software development is half initial coding, half bug fixing, and 90% requirements discovery.

Compiler writers actually have it pretty easy once the language is defined, which is a real honest to god spec. Comparing how long it takes (and how much it costs) to write a compiler once the spec is done would be a pretty fair comparison to how long it takes to build a bridge once the design is finalized.

And even bridges can turn into total disasters. Look at the Bay Bridge replacement in San Francisco: https://en.wikipedia.org/wiki/San_Francisco%E2%80%93Oakland_...

Or the Big Dig in Boston: https://en.wikipedia.org/wiki/Big_Dig


> Compiler writers actually have it pretty easy once the language is defined, which is a real honest to god spec.

Compilers are typically developed in parallel with the spec, exactly because you don't know what you want to spec before you try it out.

Optimizers have it even worse - they do not have much of a spec beyond "make it fast, quickly", so development is largely a matter of finding interesting places in existing code to improve.
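To give a flavor of what that looks like (a minimal sketch; the IR and the patterns are invented for illustration, and real optimizers are vastly more involved): a peephole pass whose rules exist only because somebody spotted those shapes in real programs.

    # Each rule was presumably added because some benchmark or user
    # program exhibited the pattern, not because a spec demanded it.
    PEEPHOLES = [
        (("mul", "x", 1), ("mov", "x")),     # x * 1 -> x
        (("add", "x", 0), ("mov", "x")),     # x + 0 -> x
        (("mul", "x", 2), ("shl", "x", 1)),  # x * 2 -> x << 1
    ]

    def peephole(instrs):
        out = []
        for ins in instrs:
            for pat, rew in PEEPHOLES:
                # "x" is a wildcard for the register operand.
                if ins[0] == pat[0] and ins[2:] == pat[2:]:
                    ins = (rew[0], ins[1]) + rew[2:]
                    break
            out.append(ins)
        return out

    print(peephole([("mul", "r1", 2), ("add", "r2", 0)]))
    # [('shl', 'r1', 1), ('mov', 'r2')]

The "spec" here is nothing more than the pattern list itself, grown by staring at existing code.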


NASA is just the only people doing the full accounting of all costs. The rest are like a cook who drops a food item on the floor, says "five second rule" and hopes for the best.


"NASA does that regularly. They're seemingly the only people who can afford it."

Not by far. Actually, the woman who co-invented software engineering in their Apollo program later built a tool for achieving similar reliability, if your specs are right, at around $10,000 a seat. That and Cleanroom were in production use in the 1980's making low-defect software. Many others showed up afterward, with plenty of application to commercial products or significant OSS software. Here's a few.

An early one, Cleanroom, that was often as cheap as normal development due to reduced debugging:

http://infohost.nmt.edu/~al/cseet-paper.html

Margaret Hamilton of Apollo fame later started a company to make the product below, embodying the principles they used for correctness on Apollo. The papers section is also interesting.

http://www.htius.com/Product/Product.htm

Lots of companies applied the B method for things like railway verification. Many successes that cost nowhere near what NASA spent.

http://www.methode-b.com/wp-content/uploads/sites/7/2012/08/...

Altran-Praxis does formal specs, refinement, and provably-correct (wrt specs) code at a 50% premium over normal development for their high-assurance stuff.

http://www.anthonyhall.org/c_by_c_secure_system.pdf

This mentions about three different methods being used in products or experimental projects by defense contractors:

https://www.nsa.gov/resources/everyone/digital-media-center/...

COGENT is doing seL4-style verification at a fraction of its cost, with a filesystem paper showing how practical it is:

https://ts.data61.csiro.au/projects/TS/cogent.pml

Some companies are straight-up using logic programming to execute precise specs of how software should work on startup budgets:

https://dtai.cs.kuleuven.be/CHR/files/Elston_SecuritEase.pdf
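For a toy flavor of the executable-spec style some of these projects use (illustrative only - the real systems above rely on proof and refinement, not runtime assertions): state the property once, then hold the implementation to it.

    # Spec: the output is the input, sorted. One line, checked always.
    def satisfies_sort_spec(inp, out):
        return out == sorted(inp)  # covers ordering and same-elements

    def my_sort(xs):
        out = []
        for x in xs:  # insertion sort: the "how", separate from the "what"
            i = 0
            while i < len(out) and out[i] < x:
                i += 1
            out.insert(i, x)
        assert satisfies_sort_spec(xs, out), "implementation violates spec"
        return out

    print(my_sort([3, 1, 2]))  # [1, 2, 3]

The methods above go much further: they prove the assertion can never fire, instead of waiting for it to.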


Thanks for such a thorough rebuttal! This is super useful.

I can't help but note that a lot of it was spun out of NASA though.


Welcome. Just one came from NASA that I'm aware of. The others are from the NSA, European and US firms, and Australia.


> If this were software we'd be looking at a multi-year software project with a few hundred programmers delivered on-time, within the budget and working flawlessly on the day of delivery.

> I have yet to see such a project

Then you haven't worked on safety critical systems or old school embedded/firmware projects.

The alternative is 30 developers solving the same problem with software that has a bunch of critical bugs that get fixed in the first few months of release and a few hundred more minor flaws that get fixed gradually over time.

When it comes down to it the alternative is almost always preferred by the market and for good reason.


I've seen several of those kinds of projects. I have even seen it done with lightweight process. But (and there's always a but) you won't do it with fresh grads pulling all-nighters. You might do it with a handful of the remaining silverbacks (who are not yet crispy-fried) left to do everything that needs done.

It really does take your entire life to learn this craft. Now try selling that today.


"f this were software we'd be looking at a multi-year software project with a few hundred programmers delivered on-time, within the budget and working flawlessly on the day of delivery. I have yet to see such a project."

I agree with a lot of what you're saying in this thread, except that you keep missing high-assurance engineering in statements like this. Altran-Praxis regularly does what you describe, minus "several hundred developers", since they try to keep the systems simple enough not to need that. Galois can do this. Cleanroom Software Engineering teams did this often on the first try. There's companies doing business requirements in Prolog with standardized components for plumbing. The companies working to DO-178B/C and similar standards are delivering all kinds of software that's checked from specs to code to whether the object code really matches. Hamilton and Kestrel generate some systems straight from logical specs with correct-by-construction techniques, while others in CompSci and industry do that by hand. iMatix had their DSLs and generators to do a significant chunk of this without formal methods. One company even specialized in high-availability conversions and migrations like you describe in the bridge example, though I can't remember the name. Quite a few Ada projects also happened in the defense sector with a bunch of subcontracted components that integrated painlessly due to good specs and the language's features.

There are companies and groups straight-up engineering software that has few to no defects in production or maintenance. They actually have a significant number of customers, too. It's just that 99% of software isn't done that way; it has the problems you mention elsewhere in this thread. Let's be fair and give credit to those actually pulling it off, though. It also lends credibility to our claim that more of that 99% could be done that way as well.

EDIT Added specific links in another comment:

https://news.ycombinator.com/item?id=12801963


> I agree with a lot of what you're saying in this thread except that you keep missing high-assurance engineering in statements like this.

> It's just that 99% of software isn't done that way.

I'm aware of it. It's just that in my practice I do not run into companies that actually do this. The companies I look at typically exist between 6 months and 5 years, and have a team with an average age of 25 to 30, with maybe one or two older people with some in-depth experience.

They will happily tell me that they write junk because they don't have time to do it right. Personally I think they don't have the time to do it wrong but what do I know.

Frustrating.

I've worked on re-working a fairly large project from a giant hairball into something a lot more solid over the course of two years, and spent the larger part of that time arguing about bad practices; the lessons learned (for me at least) about why most software is crap were legion.

If regular engineering were done this way, everybody would be self-taught, would have about 30% of the picture, would not be willing to begin to take responsibility for their product, and the majority of engineering projects would have fatal flaws in them.

We really can and really should do better than this if we are to take seriously the responsibility that has been collectively given to us.


There is no iterating in civil engineering; it is built right the first time. They made a plan to build a replacement railway bridge over a highway and install it with a short interruption of rail and highway use, and they executed that plan. I think that is an impressive feat of engineering as well as management.


There's no iterating on a bridge already built, but the maintenance, inspection, and thinking around making difficult repair-or-replace decisions continue to evolve in feedback with analytical and management software.


"Built right the first time" does not match the stories I hear from my civil engineering friends.

There's a reason that as-built design docs are provided. They're frequently different than the originals.


That doesn't mean they build it twice; it reflects updated information based on materials availability, changing requirements, or conditions being different from initial assumptions. Exactly the kind of problems that software people tend to claim as their own personal and unique kind of challenge. But all engineering efforts are subject to those kinds of challenges.


Or simply incorrect design. Like putting electrical panels so close to the wall that the cables can't physically bend to reach the inlet.

Or incorrect construction, like driving a pile in the wrong place and just adding the extra to the design.

I definitely agree. Things don't always work out as planned in any discipline of engineering. My personal experience was mechanical and electrical manufacturing, and it held true there too.


I work for a CE-facing ERP company, and our clients are PEs mostly working for state DOTs. They are actually advancing the state of the art in cost control, roadway safety, etc. using analytical decision-steering systems. It would be a mistake to overlook the value of what's being done just to maintain the status quo against a pretty dynamic business landscape, much less to make progress. As a software/data nerd, I am suitably impressed by what I see as a vendor-to-insiders.


> analytical decision-steering systems

nice way to say, "the computer is doing my job for me".

which is all well and good, mind. Now if only doctors had that.

Where are the expert systems of the 70s, the ones I read about in AI books?


Or a nice way to say they're making data-informed decisions when previously they lacked the apparatus from collection to analysis to policy.


Is Watson Health[1] anything like the systems you're talking about?

[1] https://www.ibm.com/watson/health/


Sorry for the late reply -- no, Watson is well ahead of what I'm working with. Like, a generation or two ahead.

I think it's helpful to mention that many organizations don't even have the best of what the 1990s could offer; they are still struggling with custom integration of 5+ "system" suites, and the technical baggage of just keeping that working so people get paid and inventory management can happen. They aspire to use fancier algorithms on data they don't yet have with good quality, to provide faster/better/cheaper service to their stakeholders.

The state of the art for the agencies I serve is what I would casually describe to a HN audience as a data pipeline that often ends in an Oracle database, then a handful of web/mobile apps that should be rebuilt with Google Maps. Not rocket surgery, but they're truly in a tight spot in terms of budget to swap out systems already in play with better ones that face the future rather than the past.

Phew, I didn't know I had that much to say on the topic.


> unlike e.g. civil engineering

It isn't like people don't wonder why it takes so long or costs so much to, say, build a road, either. It seems like everybody massively underestimates how long it should take to do something that they don't know how to do and won't be doing. I characterize this mindset as "I need _x_ to be true, therefore _x_ must be true." It's endemic in all of the people who don't actually produce anything.


It's unusual to actually be able to reuse a lot of software. But this is another argument for "the only difference between product A and product B is the configuration."

Then it's "What??? We're paying <x> dollars a seat for a configuration change????." You can't win...


Sure, and evolution would be much more efficient if every mutation were positive and the species adopted that specific one, instead of individuals going off on their own creating meaningless mutations.

Who creates the best/bulletproof solutions? You're arguing that we collectively should know which one is the best, among all our different opinions and habits.

I'll instead argue that we move forward through natural selection and that everyone learns from trying, failing, trying, succeeding and learning what the best solutions are by experience.

I'm not saying it's theoretically optimal, or even practically, but I would suggest it's only natural that we've landed here.

That said, I'm all for trying to improve, if you or someone else ever comes up with a better route.


Evolution is nothing like engineering. Evolution by its very nature makes forward and backwards movements relative to what an outside observer would consider to be an optimum, and selection pressure is what eventually gives rise to a perceived movement in the 'right' direction.

Software is - in principle - an engineering domain, but we tend to be much more liberal in applying engineering norms to software production schedules and reliability. This need not be the case.

If on the other hand you wish to argue that software is produced in a random fashion and selection pressure is what decides what survives then I rest my case, that view definitely does explain a lot of what I'm seeing.


Sure, your points on their differences are correct, I agree with that. But I still think my comparison holds.

Basically the comparison was made to convey Sturgeon's Law: that 90% of everything made is crap.

In that sense it is like evolution: a numbers game. While not explicitly random, distributing risk among groups going in different directions could be an efficient way forward, when accounting for what would be lost to some central governance. You then select the optimal direction by results.


90% of all bridges and buildings, power stations, cranes and heavy machines are not crap. Engineering is a responsible job; it's nothing at all like making clay vases or plates, where throwing away 9 out of 10 of the things you make isn't going to cause anybody to suffer other than yourself.


What if they are crap, but just by a metric that isn't "falls apart immediately"?

I watched my state government build an overpass that wasn't needed and didn't even have an _exit_ for 3 years. It required multiple road closures and for several months made traffic awful and my drive unsafe. I call that "crap".


I'm sure that there are organizations that will mess up if you give them enough rope. But that says nothing about the overpass itself, and 'doesn't fall apart immediately' is not a standard that any civil engineer will want to be associated with, at least not in the developed world, and likely not in the third world either (though there, due to resource constraints and corruption, there may be a difference between what should have been done and what was actually done).


Rather like software, it's possible for the client to decide to build the wrong thing for entirely stupid reasons.


Sure, but that's because of safety necessity, not because of optimal solution. If building bridges, powerstations, cranes, etc etc were essentially free outside of drawing/planning them, and could be deployed outside of "production" infrastructure, I'm sure that industry would be lightyears ahead of where it is.


Software is rapidly becoming just as critical, and at some point may be more critical.

The ability to copy a product is a thing that works to your advantage after you've created it; the fact that each bridge and each building is engineered with slowly changing techniques and standards is what makes engineering a solid profession. I think that we have probably had too much change thrown at us in too short a time in the software-related professions. If our hardware and other engineered products performed at the same reliability level and with the same degree of resource consumption that our software routinely gets away with - not even looking at the risks associated with deploying crappy software - we would have returned the product to the manufacturer.

But somehow we've collectively managed to convince our customers that this is the best we can do and that there is no honor in going slower but safer. This is all fine until software becomes critical to our survival; we're definitely passing some threshold in that respect, and as an industry I don't think we are ready to accept that responsibility without making some major changes in how we go about our daily work.


Closed source often has the edge of 'focus' because it's working on a very specific type of 'canal'. It doesn't need to handle all types of boat.

We used to have very well engineered software - bespoke for each customer. Then businessmen came along and said "Hey - we could save time (money) if we re-used existing things". Eventually businesses didn't have anyone who understood the things they were re-using, which were often designed for a slightly different purpose, and things started going wrong.

There is no correct answer, other than do you want speed or quality (I leave the cost aspect out, as some things cannot be done faster or better irrespective of the money thrown at them).


> Except that in software 1000's of groups have already dug 1000's of canals and yet the next group that will build a canal will insist on doing it their way.

Because the topography you're digging through is always different, and sometimes they want a Starbucks every two miles along the canal.


The buyer of software has active disincentives - some of them at the level of national security - against participating in some collective action to create a library of software for everyone to use.

And at some point, you learn to make the software critically dependent not on lines of code doing their thing, but rather to push all the complexity out to the configuration. So now you have man-hours and company profits depending on maintaining a configuration. That's even more ephemeral than code, so it gets even worse.


Emotionally, I agree with this point. I want to make reliable things:

> we don't need 500 crappy, slightly different ways of doing the same thing, we need maybe 5, battle tested and hardened bullet proof solutions that are well engineered.

As much as I'd like to do that, I think we're some ways off from the sorts of stability that allow for that. Sure, we're all neophiles, so there's a fair bit of people using new tools for the fun of it. But I think a much bigger problem is the extent to which our raw materials keep changing under us.

For example, the smartphone isn't even 10 years old. We're all still replacing them every couple of years because the new ones keep getting better. Processing power, memory, power characteristics, sensors, network, ports: they all keep changing. The number of processors in my house is easily 5x what it was 10 years ago. The range of capabilities is larger, too: my phone is way smarter, but my lightbulbs are quite dumb.

Even if that were stable, our conceptions of computing are still changing. Most of the code we write is in languages that are 20 years old, but we've learned a lot since those languages were created. We talk about "virtual servers", which as a mental model is much like "horseless carriage" or "radio with pictures": it's a sign that our understanding hasn't caught up with reality.

We also have an enormous amount of business churn. A ten-year-old tech company seems old. Venture capitalists invested $58 billion last year, mostly in service of disrupting some existing order. Few will see a point in investing for the long term when they aren't sure if there will be one.

Even given all that, I think we underinvest in the long term, for reasons I'm sure we'd agree on. But I am often forced to admit that the level of craftsmanship I personally want is at odds with the current practical reality of software.

And I'm not alone. Back in 1992, Steve Wozniak was hoping for the end of Moore's Law. Bob Cringely writes: "But while the rest of the computing world waits worriedly for that moment when the lines etched on silicon wafers get so thin that they are equal to the wavelength of the light that traces them—the technical dead end for photolithography—Steve Wozniak looks forward to it. 'I can’t wait,' he said, 'because that’s when software tools can finally start to mature.'" [1]

The rise of mobile and cloud computing ruined that plan, of course. But perhaps just about the time you and I retire, the technological base will be stable enough for people to do some serious, long-term building.

[1] http://www.cringely.com/2013/04/03/accidental-empires-chapte...


Great points.

> 'I can’t wait,' he said, 'because that’s when software tools can finally start to mature.'

It looks like I'm in the same boat as Steve on that one.


This is a bit of a tangent, but in case anyone was wondering why Wozniak's prediction was incorrect, it turns out that lithography could be used for features well below the wavelength of light. In fact, we're still using 193nm light (introduced in the early 2000s) to print 14nm(!) features. Human ingenuity is a powerful thing :)
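For the curious, the usual back-of-the-envelope behind that is the Rayleigh criterion (the process numbers below are rough assumptions, just to show the shape of the calculation):

    # Printable feature size ~ k1 * wavelength / NA
    wavelength_nm = 193.0  # ArF excimer laser
    na = 1.35              # immersion lithography pushes NA above 1
    k1 = 0.25              # practical single-exposure limit (assumed)
    print("single exposure: ~%.0f nm" % (k1 * wavelength_nm / na))  # ~36 nm
    # Multiple patterning (double or quadruple exposures per layer) then
    # pushes the effective pitch well below the single-exposure limit.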


I was indeed wondering that. Thanks!


Different canals to be used by different boats

One can spend a lifetime trying to build the perfect canal that works for all boats and spans the longest distance over the most varied terrain.

But, if one only has one boat to get through, and the distance is short enough, many people will just place a couple sticks of dynamite and be done with it. The most important thing is that the boat gets to port, not the canal.

But, once in a while someone will notice all of the holes and try to build a more permanent canal. They'll save a lot of people a lot of time and may even make some money on it.


Sure, we could collect lessons and pass them on to the next generation. But as software developers, we can do better than that. We can write libraries that the next generation will reuse. That's why software development, which "takes so long" according to the OP, still goes much faster (in terms of economic growth) than your beloved civil engineering. The ease of making new stuff without studying too much existing stuff, which you complain about, is the whole reason why software is awesome and eats the world.


> We can write libraries that the next generation will reuse.

We could, but unfortunately we definitely do not do this as much as we could.

> That's why software development, which "takes so long" according to the OP, still manages to go much faster (in terms of economic growth) than your beloved civil engineering :-)

If you took away the roads, society would collapse instantly. It is getting to the point where if you take away the internet (or even just a substantial fraction of it) for more than a few days, quite possibly the same will happen. That comes with a degree of responsibility that I do not see reflected in the quality of the product of our industry (including quite a few of those libraries).

There are exceptions to this but not many.

Economic growth is great, if we can keep that afloat in the long term. Given the stakes and the degree to which even the largest entities are failing to take responsibility for their products and services that growth might be the precursor to a very harsh lesson somewhere in our near future.

For every crash, data breach, security issue, hack you have to look back and wonder whether the speed of delivery turned out to be worth the risk (and then you have to discount that even further to take into account that what you read about is only a very small fraction of what actually goes wrong...).


> For every crash, data breach, security issue, hack you have to look back and wonder whether the speed of delivery turned out to be worth the risk (and then you have to discount that even further to take into account that what you read about is only a very small fraction of what actually goes wrong...).

Those risks are pure externalities, unfortunately. The answer to the question for the management of pretty much any company is a resounding "yes, it was worth the risk" - they earned truckloads of money over the years before the breach, and by the time shit hits the fan, the management team probably changed twice already.

Nobody cares, because nobody has to - business incentives are actually strongly aligned against caring.


It sounds like you want to hold software companies liable when their products get hacked. Is that really how other industries work? If a terrorist blows up a bridge, should the bridge builder be sued?


> It sounds like you want to hold software companies liable when their products get hacked.

If the hack is directly related to the quality of the product then yes, by all means. And they could (try to) take out some kind of professional insurance against this.

> Is that really how other industries work?

Yes.

https://en.wikipedia.org/wiki/Warranty#Defects_In_Materials_...

http://www.businessdictionary.com/definition/professional-li...

http://www.me.utexas.edu/~srdesign/paper/

> If a terrorist blows up a bridge, should the bridge builder be sued?

No, but if a manufacturer of a vault leaves a backdoor in their vault then they should be liable (and they most likely are).

Now this is where the analogy breaks down somewhat: in the 'real' world the lock manufacturer is not liable for the damage due to a break-in, but the presence of a high-quality lock will reduce your insurance premiums (at least, it does where I live).

So that's how shop owners deal with this: they take out insurance, and the insurance company will take a stab at how big the risk is that you're going to be the subject of unwanted attention and price their premium accordingly. Any measures you take to make sure that you are not going to be burglarized will be taken into account.

Contrast that with a terrorist blowing up your house, the chances are spectacularly small (vanishingly small even) so besides that not being an interesting risk to insure against you likely will not be worried about it either.

On the other hand, if your bridge builder attempts to sell you an explosion-resistant bridge then it had better be that (in practice this will simply depend on the size of the explosion; no bridge manufacturer will say that any explosion is acceptable use of their bridge, and if you sued them the judge would likely side with them unless they made specific claims about such suitability).


> the next group that will build a canal will insist on doing it their way.

If only every business was identical! Then we wouldn't have to deal with the problem of creating custom software for each of them.

In this canal-across-the-country analogy: imagine that every boat is a different size, different shape, different propulsion method, and is made of different materials. And some of them are actually trucks, airplanes, and vacuum tunnels.

> Then just build the biggest canal that satisfies every requirement!

This exists! It's called IBM.


> Except that in software 1000's of groups have already dug 1000's of canals and yet the next group that will build a canal will insist on doing it their way.

This is why a prudent developer/team would attempt to leverage frameworks/plugins/existing API's and libraries vs reinventing the wheel. Leveraging well-vetted and popular resources should cut development time considerably.


> This is why a prudent developer/team would attempt to leverage frameworks/plugins/existing API's and libraries vs reinventing the wheel.

The problem here is that those frameworks/plugins/existing APIs are all moving targets; their life-span is very likely substantially shorter than the life-span of your project.

Well vetted and popular unfortunately does not equal 'will have staying power'.


That hasn't been my experience. Most of the frameworks I use last much longer than my tenure at a particular position. Are you saying you would really write your own django? jquery? react? chef? git? I would argue the reason some startups can now move blazingly fast with relatively few developers is by leveraging open source in a smart way.


> Are you saying you would really write your own django? jquery? react? chef? git?

No, I think it should be fairly obvious that that would not be my choice, rather the opposite.

But why django? Why not rails? Or symfony? Or Yii? Or Spring? And so on. Why git, and not subversion, or tfs, or mercurial? For every one of those choices there are tens of possible alternatives, and none of them will be anything akin to an industry standard. And so we muddle on. Well, at least git slowly seems to be becoming the standard in revision control, but because git was yet another fresh-start project, a lot of the lessons learned from other revision control systems that were not readily apparent to the git author(s) had to be re-learned the hard way, resulting in a lot of delay and frustration (and to this date the git command set strikes me as ill thought out).

Leveraging open source in a smart way is exactly the right thing to do; now if only there were some kind of mental penalty for starting a new project when an alternative that could use some TLC is already available.


This is true for the lifespan of that framework. No software anywhere has the devoted love of its adherents that Borland's Delphi holds. So where is it now?

And I have literally replaced piles of "well-vetted and popular resources" with better-vetted and less popular resources in a matter of weeks - after all, the knife-fighting over features was done, and all that was wrong was that the framework had some massive hole.


We'll get there. Civil engineering has been around a hell of a lot longer than software engineering.


A man who has never dug a canal doesn't really master it until he digs some. Software craftsmanship isn't something you learn by osmosis.


Software craftsmanship isn't something you learn by ignoring the state of the art either.

And as for digging canals, the civil engineering profession is lightyears ahead of the software people because there are lives on the line and they do have a way to pass knowledge on in a way that lessons stay learned. (Most of the time, anyway, forget about the dark ages for a moment please.)


A problem: what actually is the state of the art in our industry? It's a serious question - the more I learn, the more difficult it is to identify.

UNIX? You mean that pile of crap that replaced much better and saner systems, so that many decades later we still have the same stupid problems they solved in the 70s?

x86, which is basically crap?

Enterprise Java, which all the Big Players use - a pile of bloat wasting computing resources, fueled by countless armies of code monkeys typing in dumb code that starts to work if there's enough of it, much like uranium gets dangerous if you put enough of it in one place?

Modern web development, with bullshit hipster fads, even more bloat than Enterprise Java, where the half-life of any library or framework is counted in months, if not weeks? Where people who have no understanding of what they're doing invent dumb solutions that then become de facto standards (see e.g. template languages)?

The Lisp folks who rant about good old times and develop some software of various quality, but nobody even notices so you won't get to apply any of that in your job?

What exactly is the state of the art?


> What exactly is the state of the art?

I think, you somewhat answered your own question, albeit indirectly and ex negativo. Widely ignoring what we already know or could have learnt, and pushing the next vanity-boost feature seems more attractive than learning from our experiences and drawing the consequences. And the people doing it always find rosy language that makes it seem like a heroic feat of progress, and an audience that falls for the BS.


Except that if there were a method that actually, provably worked in producing lower-cost, more reliable software on schedule without requiring your entire staff to be composed of demigods, it would have already been adopted by EVERYONE.

There is no such thing as a practical state of the art in our field for the problem being discussed (which is the software requirements capture/design/engineering side of things, not the software construction side of things). Most of the so-called "state of the art" focuses on problems of very limited utility:

problem a) if I have complete, consistent, and fixed requirements, can you write code that provably implements them? Sure. But very little software is developed under those conditions (avionics, safety-critical stuff, some crypto stuff, etc.)

problem b) code/methodologies to make writing easier/more concise/more predictable once the specification is discovered. Nice, but it is optimizing 10% of the task, whereas the other 90% - requirements discovery, and managing consistency between the requirements discovered across different branches of the problem - is where the bulk of the time is spent.

Those are the "assume spherical cow" of our space. Interesting, but limited.

Or going with the bridge analogy that seems to be popular here, you're asked to build a bridge. Nobody tells you over what, what load it has to carry, under what weather conditions. So you don't know whether they want a rope bridge over a tiny stream or a multilane car/train combined suspension bridge. And as you ask for the requirements it is now a bridge over the Atlantic with a stop in Australia. And when you query the Australia part they confirm and casually mention that they might be planning on a stop in Pluto too and can you fit that in the schedule.


In my current project we have a manager on customer side querying about how we can fit various celestial bodies in the existing solution and discussing paint colours, while my usual contact keeps reminding them we really need to drive that truck to Australia ASAP...

Anyway, you're absolutely right about "state of the art" in requirements capture / design / engineering, and this is the larger context of discussion. But I admit, my original post was an assertion that there's no sensible, practical "state of the art" on the construction side either - it's mostly either fads or working with lowest-common-denominator tools (basic Java). The situation seems to look better in embedded in particular, but that's either because you're really, really limited in tools there, or because I have very little experience there and only think it makes sense...


It likely helps that physics puts a hard constraint on things. Over the last decade or two, the physics of computing (aka the hardware) have been changing just as much as the software.

Maybe there is a reason why we seem to see the most impressive code work on "constrained" devices like the C64.


Yes, I suspect this is the case. Moore's law gave us an extremely easy 'out' of many problems that we refused to look in the eye, and hopefully now that it has more or less run its course we will finally be able to re-focus on learning our craft properly, because there are no free lunches to be had any more.


> the civil engineering profession is lightyears ahead of the software people because there are lives on the line

This is an often repeated and completely false assertion.

Just think about the number of deaths every year due to bad road and highway design - level rail crossings, highway exits and entries from the left (in North America).

The idea that civil engineering is any better than software is just nonsense.

Oh, and don't forget about the civil engineers at Fukushima who put the backup power systems in the same area as the reactors, guaranteeing that they would also be destroyed in any serious disaster.


Civil engineering is a compromise between many factors and those 'bad' roads and highways are likely the best that was possible for a given situation and the budget allotted. The point is not that things are not perfectly safe, the point is that given the constraints and the known state of the art they likely could not have been (much) better.

And civil engineering is just one branch of engineering, there are many and all of them have to work with the same balance between available resources, time and other constraints and they do substantially better than your average software project.

A bridge collapsing is headline news, a computer program crashing is so normal that you would probably not even mention it to another person if it happened to you.


> And civil engineering is just one branch of engineering, there are many and all of them have to work with the same balance between available resources, time and other constraints and they do substantially better than your average software project.

That's the main point that I would like to refute.

Given the regulatory barriers, enormous costs, and long planning timelines for most civil engineering projects, isn't it astonishing that bad designs impacting human lives are still being implemented?

If the process of civil engineering was so well understood, so carefully documented and supervised, and so rigorously taught to new engineers, why are many hundreds of people killed every year?

The uncomfortable answer is that the civil engineering process is very far from well understood - just like software.


> Given the regulatory barriers, enormous costs, and long planning timelines for most civil engineering projects, isn't it astonishing that bad designs impacting human lives are still being implemented.

Well, depending on context, country and corruption: those are the exceptions, not the rule and the engineers typically did their work as well as was possible given the constraints. They know what they know, and more importantly, they know what they do not know and they will engineer in safety factors.

Buildings and bridges collapsing, machinery exploding: those are the exceptions, not the rules.

As a rule, highways function, as a rule, bridges withstand their design loads and excess of those loads and so on.

As a rule: software is buggy, frequently crashes, hogs memory, is slow and has inconsistent user interfaces, updates will randomly break stuff and it is insecure to boot.

If you feel that software is on par with the rest of the engineering disciplines then I probably won't be able to convince you that it isn't.

But whoever designed this bridge and the foundations did a pretty good job of it:

https://www.youtube.com/watch?v=RAv-rYB5qrc

In contrast, if you look cross-eyed at your average piece of software it will misbehave in unpredictable ways.


It looks like the bridge designers defeated the bridge destroyers - in that case at least.

I DO feel that software engineering is on par with other engineering disciplines - if writing software can be described as "engineering".

The important difference with software is that "the code is the design". The code is not the product.

The manufacturing, building and deployment steps in software are actually quite well understood, and have been mastered by most organizations and individuals.


> The manufacturing, building and deployment steps in software are actually quite well understood, and has been mastered by most organizations and individuals.

If we substitute 'some' for 'most' then we're in agreement; unfortunately my experience to date does not give me the confidence required to subscribe to your version, but that could easily be local variation.


This is changing. See self driving cars.


That seems like a really unfair comparison. I'd argue that the software industry has evolved magnitudes more in the last 20 years than what you call civil engineering has in the last 100. Sure, the latter has existed much, much longer, but it's by no means evolving very quickly. Software development has traded safety for rapid evolution in areas where possible. This could be argued for or against, but I'd rather have a software industry that is multiple generations more mature at the point I need to develop "safe" software than a safely evolved industry several generations behind.


> I'd rather have a software industry that is multiple generations more mature at the point I need to develop "safe" software than a safely evolved industry several generations behind.

Yes, that's exactly the problem. So now we have an unsafe industry producing a terrible product that powers a very large chunk of the world's commerce. It's a house built on quicksand, and our hope seems to lie in moving to another house before this one collapses.

The areas in which this is most visible: security, reliability, maintainability, documentation and testing, and finally a complete lack of warranty for suitability from the various manufacturers/service providers.

Software is 'eating the world', but is it strong enough to support the world in the longer term if we continue down the road that we are on? I'm not bullish on that.


Let me start off by saying I completely agree with your points about the downsides of this.

It is however extremely hard to imagine the world that would result if we were to take a different route. What would be the effects of slower improvements? I assume (since I sometimes do) many people look at the software industry as narcissistic and self-aggrandizing when "delivering experiences" and "solving global problems with scalable middleware". But what third-party industries would be held back if this one moved much slower? Could there be losses in medical advancements, money, safety, third-world advancements?

But as you say, I'm also very hesitant about the stability if we can't find some middle ground between the two.


To borrow a line from Larry Page's book: more wood behind fewer arrows. That would already work wonders, so instead of having an endless repeat and rehash of the same concepts in disposable form it would likely be better (and possibly even faster, so no effective slowdown!) to do things a bit better, to try to merge more often rather than to fork and re-start all over.

I think the feeling of fresh, new development (before complexity sets in) is so compelling that it tends to drive us away from actual progress. After all, it is so much easier to launch yet another half-baked language, framework or product than it is to pick up something existing and really improve it, or update it in a way that will make it last much longer. That's relatively thankless and anonymous work compared to slapping your handle on a new framework.

This is part of what makes good software hard: good software isn't sexy (think: erlang versus node.js, as just an example; I've tried very hard to keep this discussion brand- and tech-free but I feel that an example may help to illustrate what I'm getting at).


For reliability, I think the Jepsen approach - an expert stress-tests a system and reports on where it fails - is a good one. I see some "big data" vendors actually pay him now to analyze their software, then fix the bugs he exposes.

Security bug bounties are also an important advance in software engineering process.

I also think there is a growing interest in these issues outside the industry. For example, we are likely to have as our next President someone all too familiar of the risks of poorly secured email systems. Maybe this will be the impetus for a more serious focus on security, at least.
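For anyone who hasn't seen the approach: in miniature it looks something like the sketch below (a hypothetical in-process store; real Jepsen injects network partitions and checks far subtler properties like linearizability). Hammer the system concurrently, then verify a safety property after the dust settles.

    import threading

    class Store:
        """Hypothetical system under test."""
        def __init__(self):
            self.data = {}
            self.lock = threading.Lock()

        def put(self, key, value):
            with self.lock:
                self.data[key] = value
            return True  # acknowledged to the client

    store = Store()
    acked = []  # list.append is atomic under CPython's GIL

    def client(cid):
        for i in range(1000):
            key = "%d-%d" % (cid, i)
            if store.put(key, i):
                acked.append(key)

    threads = [threading.Thread(target=client, args=(c,)) for c in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()

    # Safety property: no acknowledged write may be lost.
    lost = [k for k in acked if k not in store.data]
    print("acknowledged:", len(acked), "lost:", len(lost))  # want lost == 0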


Jepsen is awesome.

Bug bounties are not such a hot idea in general. There is the issue of entitlement on the white-hat side, disputes about severity levels, the potential for blackmail, and the two engineers who are stuck clearing the false positives. Better to put them on the pen-test team.


I might dispute bug bounties, which are nice in theory but can be a nightmare in practice, and have a somewhat limited upside.


> software industry has evolved magnitudes more in the last 20 years than what you call civil engineering has evolved in the last 100

Civil engineering has existed for more than 3,000 years; the romans were building cross-continental roads 2,000 years ago. The Chinese were building cross-continental defense structures 2,500 years ago.


I haven't claimed otherwise...


People have tried to build software the way civil engineers build bridges. It becomes frighteningly complex and involved to make even a stoplight controller, and you end up wanting to throw out functionality from your hardware because it makes things complicated or impossible.

I mean, a lot of the problem is that we generally spend most of our time thirty abstractions away from what is actually happening, and oftentimes those abstractions leak. For goodness' sake, few of us are even protected from cosmic rays flipping bits in our RAM, which happens fairly frequently.


What about software in medical devices, or that touch medical data? How about software in control systems of vehicles, industrial machinery, robots, etc... Perhaps your definition of software is too limited.


If you could somehow graph the amount of work it takes to program something as a line, that line would be very rugged, like a coastline. Taking these "big picture" views and estimating programming work is like trying to estimate the length of a coastline:

https://en.wikipedia.org/wiki/Coastline_paradox
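A toy demonstration of the effect (the "work surface" curve below is made up, of course): measure the same wiggly line with ever-finer rulers, and the measured length keeps growing, much like an estimate that grows every time you look closer.

    import math

    def work_surface(x):
        # A wiggly stand-in for a project's "true" effort curve,
        # with detail hiding at several scales.
        return (math.sin(x) + 0.5 * math.sin(7 * x)
                + 0.1 * math.sin(53 * x) + 0.02 * math.sin(401 * x))

    def measured_length(step):
        n = int(10 / step)
        pts = [(i * step, work_surface(i * step)) for i in range(n + 1)]
        return sum(math.hypot(b[0] - a[0], b[1] - a[1])
                   for a, b in zip(pts, pts[1:]))

    for step in (1.0, 0.1, 0.01, 0.001):
        print("ruler = %6.3f -> length ~ %.2f" % (step, measured_length(step)))
    # The finer the ruler, the longer the coastline: zoom in on any
    # feature and more work appears in the detail.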


And then management, who have never seen and cannot visualise what you are building, decide that this is a damn awesome idea and the canal should now be extended to go straight up North America to Canada.



Interesting analogy. I've been reading The Path Between the Seas[1], about the construction of the Panama Canal, and whoo boy, now I won't be able to think about it except as a failed software startup. The hype. The outrageous valuations and fund-raising rounds. The charismatic founder. The blatant disregard of decades of accumulated knowledge. The pivots. The skirting, blindsiding, and lobbying to get around government regulation. The acqui-hire. The moment when the burn rate overtook the runway, and further desperate rounds of funding were sought, at poisonous terms. Burn-out, layoffs, and attrition. The fire-sale exit. I'm just now getting up to the patent war-chest acquisition and the second-system effect stages of the canal...

[1] http://amzn.to/2eSBq16


And then the Pointy-Haired Boss comes along and shouts at the engineers: "Why do you eggheads always have to make stuff so complicated? Let's just nuke our way through!"

(If you don't get the reference, Ctrl-F for "Panama Canal" on https://en.wikipedia.org/wiki/Operation_Plowshare .)


I would say it's the complete failure to treat the software field like any other major profession. A lack of empirical evidence or evidence-based design. A mixture of self-appointed experts with opinions instead of facts. Sales people who over-sell a product while the engineering department is under-staffed. Over-optimistic and out-of-touch management not knowing how to call developers out on their bullshit.


My most recent project has taken more than 18 months.

I implemented 2 major subsystems, then later realized I could simplify the system by throwing them out.

My product is very technical, so there were probably about four months spent trying to make things work and learning what did and did not work.

I had to learn about 75% of the development tools and languages I was building with.

I don't like the idea of releasing half-baked, half-working software that represents half of my vision.

I had to fit the project in between my money paying job, family, friends, life.

I had lots of great ideas and the fun bit about developing software is when you get to implement those "wow it would be cool if...." functions.

After the software is released, I want to just be fixing bugs, not just be starting on the code.

I want the software to be so far along that it would cause a competitor to think twice about copying.

I think software should be great, not just barely functional, so I added lots of things to polish and make it awesome.

Near the end of the build I realized I could make a major leap forward so I redesigned a key area of the UI.

I built as much of it as I could before release, because after release it's MUCH harder to add new features.

There you go, that's why it took me so long.


I have been working on a product with my cofounder for around 18 months; spent around six months of this doing consulting to keep the lights on. Released a few early videos, gauged interest, but had to keep fending off users who wanted the promised software immediately. Those emails kept us going - even though we didn't validate by asking for money, the fact that people pestered us even after we told it won't be ready for a long while, gives us hope that when we release there will be at least a few people who would find it useful enough to pay money.

I've been asked by a lot of people why we haven't released yet. We're dogfooding it; it is useful to us today, but it is not where I want it to truly be. Since I'm my own customer (this is a technical product, scratching my own itch), I will know when it is ready. It will be ready when I can build a functional front-end application from a design in two days, that would otherwise take me a month.

What is taking us so long? We had to learn the internal API of an ever-changing piece of software, learn a new language, build a Rails server, a Node server, a Chrome extension, a Sketch plugin, and an entire compiler-like thing; we had to figure out new ways of solving the design -> code problem; we had to duct-tape the hell out of so many things; and in the meantime we had to contend with other aspects of life that are not technical.

It is quite annoying to see the way people talk about the Lean MVP approach as being the only way to build software. Some stuff takes time - not everything is a CRUD application that you can throw out into the void, put up some marketing pages for, and have a hockey stick in three months. But the downside to my attitude is that most people who fail typically use the same argument - that we're exceptional and the lean MVP approach doesn't work for us. I actually think the product I'm building is an exceptional case, but hey, isn't hindsight the only true foresight?


I think if you try to solve a complicated domain problem AND turn that solution into a product, you can't also learn too much other stuff at the same time. Too many rabbit holes to go down.

Simply building a simple, polished product with the stack you know best is plenty of work.


Kudos on taking the risk of releasing a bit later and delivering something that meets your higher standards. That goes against accepted business wisdom, but it is only because 'everybody else releases barely functional crap too' that that became the competitive norm.


I get a vision in my head of what the software should be and I'm compelled as though by force to make it match the vision. I don't seem to have much choice.


That's not usually the reason to ship early. Common wisdom is to ship early because it's very possible that you do not have a good understanding of the problem you are trying to solve, or how your users will use your solution. The only way to find a better understanding is to release your product and see how it does. It is usually efficient to find this information out before you spend time polishing the wrong parts of the solution.


Can you tell us what it is?


And this is just for a sole developer who knows what he's doing. When you have a team of people at varying levels of ability, a meeting for every decision, JIRA-wielding project managers with burndown charts to keep happy, standards & practices, legal, translators, UI designers, all regularly taking out exclusive locks on each other's time... it's a wonder anything gets done at all.


It's correct. I've noticed that when I work in a team I work at maybe 10% of my solo efficiency. Just because of this whole teamwork bullshit, and because of legacy code (it often takes me more time to understand how code written by someone else works than it would take to write such code from scratch).


I think a big part of the problem can be attributed to tool makers and the lack of true innovation in that arena. As a developer tool maker (HiveMind: crudzilla.com) I am almost always disappointed when I come across a "new" development tool and find that it is the same tired code editor tricks (key bindings, syntax highlighting, code completion, multi-pane editing) with some "flavor of the day" gimmick (git) thrown in. IDEs have been around for more than 20 yrs and the most popular IDEs have not advanced the state of software development much at all.

Compare IDEs with chip fabrication as an easy to compare example. Improvements and advances in chips can be directly tied to advances in fabrication, the same can't be said for software because there is almost no innovation happening in those products despite appearances.


Even worse, plenty of jobs won't let you use the tools we do have. Right now I have to work on a Java project that only exists inside a Rube Goldberg machine of virtual machines and old versions of Eclipse. Scrolling inside the VM is dog slow and it crashes multiple times a day. Any attempts to improve the development environment are rebuffed as not providing business value, so development will continue to be slow, meaning less leeway will be given for improvements in the future.


Try presenting improvements as adding easy-to-understand business value. Start with something easily quantifiable that you can quickly improve on. Then you can say 'while this change will take about X hours, it will end up saving the company Y hours per developer per month'.

This would be harder to do if your company directly passes on your hours to a customer, so they don't care about making you more efficient. In this case, find something that would increase your billable hours (eg: starting/rebooting the environment isn't billable, so shaving minutes means more billable time).

I did this to get SSDs in all the dev machines where I work, as we spent ~30 minutes a day compiling, SSDs cut that time in half or more, thus netting the company 15 minutes more productivity a day per dev. I presented this to management as an investment that would pay for itself in under a month (any reasonable business should leap at something like that).
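The whole pitch fits in one screen of arithmetic (the numbers below are assumed for illustration, not my actual figures):

    # Back-of-the-envelope payback period for a dev-machine SSD upgrade.
    ssd_cost = 300.0            # dollars per machine (assumed)
    minutes_saved_per_day = 15  # compile time halved from ~30 min
    dev_rate_per_hour = 60.0    # fully loaded dollars/hour (assumed)
    workdays_per_month = 21

    saved_per_month = (minutes_saved_per_day / 60.0
                       * dev_rate_per_hour * workdays_per_month)
    print("saved per month: $%.0f" % saved_per_month)             # $315
    print("payback: %.1f months" % (ssd_cost / saved_per_month))  # ~1.0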


> Try presenting improvements as adding easy-to-understand business value. Start with something easily quantifiable that you can quickly improve on. Then you can say 'while this change will take about X hours, it will end up saving the company Y hours per developer per month'.

This usually means spending your own time creating those improvements and typically results in an ungrateful (or even hostile) business. Companies that can't do process improvement without "heroic" efforts from individuals are too far gone to try and save.


Do the business people realize they are insisting on a hellish dev environment that programmers would only put up with if they had no other options?


No, no they don't. They will plug their ears rather than try to fix the Dead Sea effect:

http://brucefwebster.com/2008/04/11/the-wetware-crisis-the-d...


I'd love to see some advances in IDEs, but the author is saying pretty clearly that the bulk of the time taken was in high-level decision making, not in struggling to get his ideas into the computer.

> "In a sense, programming is all about what your program should do in the first place."


Yup. I’ve been working on a project off and on for a few years now, where if I had known exactly what I was building in the first place, then it would have taken me just a few weeks. But writing it (and rewriting it) was the only way to actually arrive at that knowledge.


That statement in a way makes my point. Good tooling should help the creator offload a lot of the "thinking about your program" and instead help you focus on what you want to do.

Also, the long time taken in high-level decision making is strongly tied to the difficulty of getting even a trivial piece of software working, even if that isn't always apparent. For instance, if the output of the high-level decision making were a spreadsheet or PowerPoint presentation, I am sure it wouldn't take anywhere near as long.

I have a few long rants on this topic that I'll be putting on the Crudzilla blog (blog.crudzilla.com), stay tuned :)


Yes, good tooling can shave minutes or hours off here and there, and maybe pick out a few bugs, but it won't save you from spending months or years writing an ill-conceived product.


C++ compiler error messages are sooooooo much better than 10 years ago, though.


Because software development is actually research disguised as engineering.

(And that's why "engineering" when related to software is filled with mumbo-jumbo and cargo-cultism.)


That's true in some cases, and in those cases time/budget overruns are probably more acceptable than in CRUD app #34.


It's also true for CRUD app #34, because the hard part is never the implementation; it's dealing with shifting requirements, whether due to the nature of businesses, insufficient understanding of the problem being solved, or sabotage from project stakeholders (conflicting agendas, politics, etc.).

When you plan to build a bridge between A and B, everybody can see that the problem being solved is connecting A to B. You then have to match material knowledge and construction experience from other bridges with the specifics of the new location. Also, you define project completion when you can cross the bridge.

When you start out building a "simple form" to solve some business need, nobody agrees on what that need is or what the form should actually solve. Navigating this is research. Also, nobody can tell when the project is indeed finished (i.e. solves the problem).


That does not begin to explain the sometimes orders-of-magnitude difference between estimates and final costs, elapsed time, and required manpower.

I've sat in on meetings with people discussing a new project; it is absolutely incredible how much of the future derailing of a project you can see happen in real time during those preliminary meetings.

You can find a probable clue in this nice video of a product meeting between 'engineering' and 'customers':

https://www.youtube.com/watch?v=BKorP55Aqvg&feature=youtu.be


This isn't always true even for bridges. Near where we live, the Akashi-Kaikyo bridge (longer than Golden Gate bridge) construction started in 1988. It took 10 years to build it, and during construction the Kobe earthquake hit and the bridge construction had to be adjusted because Kobe had moved. (http://www.jb-honshi.co.jp/english/bridgeworld/bridge.html)


> in CRUD app #34

If CRUD apps were as cut-and-dried as you're flippantly implying, then they _would_ take a few hours to complete. There have been many, many attempts at auto-generating CRUD-app frameworks: Visual Basic was one, Salesforce is another. They all (quickly) hit a level of fundamental complexity after which the reality of business requirements hits the expectation that "this is _just_ a CRUD app, it should be _easy_!" and real programming is again required. Only now it's required in a framework (that was supposed to speed things up) that hides important implementation details away from you and wasn't really designed for long-term modification outside of a few text and color changes.


> In a sense, programming is all about what your program should do in the first place. The “how” question is just the “what”, moved down the chain of abstractions until it ends up where a computer can understand it, and at that point, the three words “multichannel audio support” have become those 9,000 lines that describe in perfect detail what's going on.

This closing paragraph is excellent.

While I do wonder if the author's progress along these lines wouldn't have been accelerated if more of this evolution took place before the first line of code, the overall message here is very true and very well put.

When I made this realization for myself, it was a major turning point in the quality, speed, maintainability, and usability of the code I produce, especially when I can successfully define the final code structure to directly reflect this progressive refinement from intent to implementation.


> I do wonder if the author's progress along these lines wouldn't have been accelerated if more of this evolution took place before the first line of code

Doesn't that require much more thorough interface specifications and documentation than we have now (industry-wide)? I don't do that much software development anymore, but I have always been put off by the amount of trial-and-error required because of ill-defined interfaces.


This is true.

I must admit that my more recent, positive experiences are only possible due to my accumulated understanding of my dependency chain and toolkit. I work on multiple things, but my largest, indefinitely-running one has only a single dependency...but it's a truly beastly, poorly documented enterprise system. It took me almost 2 years before I properly understood the system's intended usage, interfaces, intents, bugs, and pitfalls.

The JavaDocs are awesome in their uselessness:

  /**
   * Sets the Wimbulator.
   *
   * @param wimbulator
   *     the Wimbulator to set
   * @param identifier
   *     the identifier to use
   * @param flags
   *     flags
   * @param legacy
   *     used to apply legacy
   */
  public void setWimbulator(Wimbulator wimbulator, String identifier, long flags, boolean legacy);


Hey! That's every JavaDoc I've ever encountered. See the Apple CoreAudio docs, for instance. Auto-generated and never revisited.

Such doc systems are a crime - the Manager asks "Is the documentation complete?", the Engineer says "Well, the auto-docs were created, but...", and the Manager says "Good, let's move on."


On the other side of the spectrum, we have something like the Qt documentation: https://doc.qt.io/qt-5/qwidget.html


Which is a step up from the documentation you get (if you get any at all) in custom development:

    /**
     * The constructor
     */
    public Wimbulator(boolean q)


Wow, those are awesome compared to most of the docs for the obtuse, wildly popular enterprise system I match wits with.

I resort to decompiling.


If you read the comments so far, everyone has their favorite. The answer is probably "all of the above".

That said, my personal favorites:

Most software engineers are bad and refuse to admit it to themselves (which would let them get better). Too much ego. Given a large enough department/team, only a small percentage is actually doing significant work. Yes, even at Google/Facebook/Amazon/whatever.

Second, software engineers are extremely conservative. For all the flak fast-moving ecosystems like JavaScript get, the only reason we see so many iterations is that the end goal is visible from afar, but people reject it. So we just make 100 intermediate steps to get there. E.g.: a subset of functional programming paradigms and type systems. Think TypeScript, which has to stay comfortable for the peanut gallery while trying to support the advanced features many know need to be in a modern language, and the struggle between those two goals.

That ensures we're going slower than we need to, while basically guaranteeing those technologies will be obsolete sooner rather than later (because we need a stepping stone to the "right" solution so as not to alienate developers).


Came here to make your second point. Most developers are bad and think they are good. I'll add that most of the people who provide requirements don't understand software development, and most developers refuse or are unable to understand the domain space, yet insist that the requirements writers just tell them the "what" and leave the "how" to the developers.

Oh, and using a process name such as "agile" as an excuse rather than a constraint.


> Most software engineers are bad and refuse to admit it to themselves (which would let them get better)

Well… not to be confrontational, but in the context of this post, you seem to be implying that if most software engineers (which would, statistically speaking, include me) were “better”, they could put software together faster. I have to say, the implication bothers me, as does my suspicion that it’s a sentiment shared by most non-programmers out there. The problem is, there is a lower limit on how long it can possibly take to complete a programming task - and I think even the MBAs will concede that that lower limit is higher than the time spent typing out the words that make up the program. So, if we accept the premise that “good programmer” = “fast programmer” with no reasonable, objective measure of how fast fast is, the people doing the judging will always end up using the metric “how long I wanted it to take”, which is usually about a day for any task.


I did say the answer to this problem was "all of the above". We're talking about human beings dealing with each other. That means there's an infinite amount of problems. The ones I mentioned are just a few out of many. They're just often overlooked by engineers for obvious reasons.

I hate to use the 10x developer meme...but I will. Personally, I've been around a while...and I worked at a lot of places. A lot of obscure little startups, some non-tech companies, many big and huge names... And 10x doesn't even begin to describe the difference between what some companies consider a "super star" and some consider "bottom of the barrel".

I've seen big-name tech companies take an army and a year to do what some of the engineers I worked with would have done in a few weeks, alone, in the same environment (that's the important part).

Once you've seen that happen enough, you just can't deny it anymore. The difference between engineers is so great, it matters a whole lot. At one company, a PM will come with vague or shitty requirements and the engineers will run away with it and produce garbage over several months... while another dev would just call it out for the bullshit it is, point out what's wrong and pump out a correct solution by the end of the day.

And all that doesn't even consider the time some will spend googling basic shit while others can just infer the right thing by thinking about it for a sec. The end result is a massive difference in $$$.


Once I outsourced my side project, since the pay-per-hour ratio worked out better for me compared to what I earn, combined with the complexity of that project. The "only" thing I had to do was write specs.

The specs took me 3 weeks to write (and writing them found a lot of logical "bugs" in my project), but the level of detail I went into (SQL schema suggestions, framework suggestions, front-end plugin suggestions, a finished pixel-perfect design with different states, URLs for the different pages, etc.) was one of the biggest benefits for the final price. The company that took the project estimated 4 months and delivered it in 3 and a half. They said those were the best specs they had ever seen.

Moral of the story: do full specs before you start coding (I know it's boring).


I did the same for my outsourced side-project - I went full waterfall.

The spec turned out to be 250 pages long (without pics) and took almost a year to fully develop from the conception stage. In addition to this, I hired a designer to develop an additional design spec for the UI.

So far, this worked great. My total time spent on this project since the moment I gave the spec to developers is about one workday in total. The questions from the devs are mostly trivial, and solvable in an hour or less.


- Requirements that change mid-stream, dozens, sometimes hundreds of times

- Team has to do speculative development for an ill-formed or poorly understood problem domain

- Requirements that are improperly gathered

- Requirements that are underspecified

- Poorly understood problem in general

- Fighting bugs in system libraries

- Fighting bugs in third party libraries

- Fighting issues with integration on deployment and/or third party anything

- Complex underlying technology requiring its own kind of discovery (like in the OP)

- Poor/incorrect documentation

- Project members that go off on their own, refuse to communicate, deviate wildly from the project style, break things

- Management that insists on doing things from scratch

- Management that insists on using buggy, broken, etc. 3rd-party or even in-house systems/libraries for political or faux-business reasons

- Constant software developer interruptions

- Long meetings that consist of circular discussions, politics, back-and-forth arguments about personal preferences disguised as "important project stuff"

- Administrative minutiae that never stops adding up

- People leave a project partway through, taking project knowledge with them

- Broken/buggy tools/environments/systems

- People getting pulled off a project to work on another project

- Bad use of project management, source code management, other tools

- Insistence on wheel reinvention for well-understood problems for career advancement, resume building, or boredom-relief purposes

- Low morale / poor focus due to bad management, co-workers, bad tools, bad work environment

- Low morale / poor focus due to late-night work (leading to insufficient sleep), weekend work, little/no time off or breaks, and reliance on crunch/death marches

- Dealing with company/project politics in general

- Lack of focus due to team member resume-polishing/interviewing part way through a bad project

- Occasional project sabotage


Hello, Mr. Brooks. Didn't see you coming in.


Business time seemingly goes faster than engineering time. Engineering requires luxurious calm and ample amounts of time to achieve anything of value. Hence the disconnect: the business asks "why do engineers take so much time?" while the engineers ask themselves "why does management change the plan so much?". A key point of successful engineering is not panicking at this perceived, unbearable, uncontrollable slowness, and still investing in quality: capital instead of debt.


It looks to me like, despite what the author wrote, he WAS indeed in an exploratory phase: `So now we have a sequencer device, how do we get events from it? Can we do it in the main loop? Turns out it probably doesn't integrate too well with Qt, (...)`

`My initial thought was making a grid of spinners,(...) but then I realized that there isn't an easy way to make headlines in Qt's grid. (...) So after some searching, I found out that it would be better to have a tree view(...)`

And many similar things. Exploration and technical spikes - nothing wrong with that, but the author wrote: `It's pretty common to do so if you're in an exploratory phase, but in this case, I had a pretty good idea of what I wanted to do right from the start, and that plan seemed to work.`

and then contradicted himself by writing about various difficulties and explorations... (I don't see anything wrong with that at all; I think an exploratory phase is essential to good design).
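
Incidentally, the kind of main-loop integration the author explored can, at least in principle, be done without a dedicated thread: the ALSA sequencer exposes its poll descriptors, and those can be wired into Qt's event loop with QSocketNotifier. A minimal, untested sketch (assuming `seq` has already been opened with snd_seq_open() and put in non-blocking mode with snd_seq_nonblock()):

  #include <poll.h>
  #include <vector>
  #include <QSocketNotifier>
  #include <alsa/asoundlib.h>

  // Wire the sequencer's poll descriptors into Qt's event loop so MIDI
  // events are handled on the main thread, with no extra thread needed.
  void attachSequencer(snd_seq_t *seq, QObject *parent)
  {
      int nfds = snd_seq_poll_descriptors_count(seq, POLLIN);
      std::vector<struct pollfd> pfds(nfds);
      snd_seq_poll_descriptors(seq, pfds.data(), nfds, POLLIN);

      for (const struct pollfd &pfd : pfds) {
          auto *notifier = new QSocketNotifier(pfd.fd, QSocketNotifier::Read, parent);
          QObject::connect(notifier, &QSocketNotifier::activated, [seq] {
              // Drain pending events; with the handle in non-blocking mode,
              // snd_seq_event_input() returns -EAGAIN once the queue is empty.
              snd_seq_event_t *ev;
              while (snd_seq_event_input(seq, &ev) >= 0) {
                  // handle ev (note on/off, controller change, ...)
              }
          });
      }
  }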


I think the author means exploratory as in writing throw-away prototypes to test an idea. There's obviously always exploratory work during development, unless you're reading everything from an extremely detailed spec.


Software development is 5% inspiration, 20% perspiration, and 75% procrastination.


>> and 75% procrastination.

I laughed at this and it's true. But I would like to champion a bit of procrastination in creative pursuits. I know the way my mind works and the simple truth is that sometimes it pays to wait until my head has worked through what feels like an incessant cycle of building up possible approaches and relentlessly tearing them down.

So let's say I'm given an assignment (or a stand-alone idea occurs to me). I immediately start to visualize possible outcomes and the approaches that I would need to take to get there. I throw out 99% of these subconsciously. Some of these ideas don't even make it into a full-blown thought, but they are there, lurking below the surface. All of this takes some time. Subjectively, it just doesn't feel like the right time to start yet.

This may look like procrastination, but it isn't. I'm not arguing for always waiting until every detail is mapped out in my mind to begin an effort, but sometimes it pays to let your head work through stuff for a bit.

Of course, moderation in everything (including moderation).


Yes!

Sometimes I will have a big problem that I decide not to fix immediately, even if it's more "important" than other things I'm working on. Then, when I come back and rapidly write a simple solution, the question of course is "Why didn't you do this a week ago if the solution was so simple?".

The answer is that the answer wasn't simple at first. Understandings of a problem and solution often live and die many times over in a mental background thread, before they ever reach a keyboard.

Sometimes, when under pressure, I am forced to ignore this fact and start writing the moment I reach the "Yeah, I think that would probably work, let's git 'er done!" phase. In almost every such case, it takes longer overall, as I iterate through incomplete solutions and rewrites.


One of the best parts about my job is that my boss understands that this is how good developers work.


Agreed. I've seen "hard work" create more problems than it solves.


I think Fort Minor had the recipe right, if we're talking about the people behind large enterprise projects:

"This is ten percent luck Twenty percent skill Fifteen percent concentrated power of will Five percent pleasure Fifty percent pain And a hundred percent reason to remember the name"


It's impossible to accurately estimate an unknown. Once you have experience, a rough sketch of the solution will pop into your head right away, but as you go along all these devilish little details will pop up. Also, meetings run by people who can't write code, who just want to talk to people for an hour because it breaks up their dull day, will eat a decent chunk of time. When starting a new project, in the "give me estimates" phase, I do my best, but I know they are essentially made-up numbers. If I were in charge, I would eliminate time estimates: just have high-level goals, keep breaking them down (or up, in bottom-up development), and have lots of automated test cases. I guess that's like continuous development (which I have never done, just read about).


> Also, meetings run by people who can't write code who just want to talk to people for an hour because it breaks up their dull day will eat a decent chunk of time

Is there any other reason for weekly status meetings?


Software development takes a long time because of the users. Every piece of software that has ever been written was created to solve a human problem or resolve a human concern. As we build more impressive systems, our expectations also change: a state-of-the-art website from just 5 years ago is unacceptable today. Think about all the work we now have to do to make things mobile-responsive. Features such as chat and video, which were revolutionary in 2004-2005 (when YouTube and Facebook were founded), are common requirements of projects today.

In addition to the increase in user expectations, we have to understand that when we undertake a project, it is to solve a new problem whose implications cannot be fully understood from the beginning. In most software projects, the problem only becomes clear after spending months building it. Once we have the software as an artifact, we will find new ideas and new extensions that we could never have imagined before.

Additionally, a large part of our software has to interact with other artifacts in the real world. These artifacts are built by other people and continuously change. A large part of maintaining a project is ensuring that it stays compatible with all the programs it depends on.


> Features such as chat and video, which were revolutionary in 2004-2005

They were revolutionary in 1968: https://www.youtube.com/watch?v=yJDv-zdhzMY


Chat and video was revolutionary in 2004-2005 on the web.


The hell they were, I pioneered that in 1995.


Because it's an art. It's like painting a picture. Good art takes time.



Reading through the article, and having no way to comment there, I'll comment here:

> It is true that both painters and programmers make things, just like a pastry chef makes a wedding cake, or a chicken makes an egg. But nothing about what they make, the purposes it serves, or how they go about doing it is in any way similar.

This misses the mark almost entirely, from my understanding. Hackers and painters CREATE something -- it does not exist, and then it does. The wording Paul Graham uses is "makers", but that's not what I feel, so maybe that is the criticism being expressed in this part of the article?

The purpose of artists who paint and hackers /IS/ the same -- to create. To get what is inside of you out. To have the expression which has no form to take form and be in the world as something that can be shared with another soul. The methods are the same -- extract what is inside of you into concrete form, mold it, remove the bits that are not right, add more bits, move the bits around.

Certainly this is not true of computer programmers, which is what the article reduces hackers to - but that is a failure of the article, not of the original comparison. Nor is it true of all painters, since the word is too generic - a failure of Paul Graham's choice of wording for this comparison, since based on the rest of the reading, that is what he is referring to.

Computer programmers program computers, perhaps soullessly.

Painters paint things, perhaps soullessly.

Hackers hack because they need to.

Artists that paint paint because they need to.

As to the rest, I don't know if Paul Graham's intention was to borrow "coolness" from artists, but for me that is not it. It's an attempt to explain why one must hack, and why that is an identity and not an activity or a profession.


One (half-) sentence from that article especially caught my eye:

  ...computer programmers create artifacts that have to stand up to an objective reality.
If we could make this more of a practice than a theory, it might substantially improve the state of things.


This is in fact the correct answer.


Okay, so developing a "multichannel audio support" feature took three months. Is that expensive? What would this type of feature cost to implement without software? Answer: a LOT, or it would be borderline impossible. To a large degree, software like this replaces what used to have to be done in electronics (or, more likely, not done at all). The fact that it took three months is a borderline miracle. Oh, you want to make this process faster? Okay -- time to get the AI community involved, because I don't really see how human beings are going to perform any faster than they already are. My point is that anyone who claims this could be done faster, or that the slowness is the result of "bad engineers" or anything else, is basically delusional IMHO. 'Nuff said.


It doesn't. It takes as long as it takes. It's a constantly evolving moving target. Why does writing a book or making a movie take so long? It's a creative thing. It's hard.


Following on the analogies already presented here:

Coding is not construction (that's the compiler). Coding is the architect/drafter.

If you want a pre-planned house, no problem; that's the same as buying shrink-wrapped software.

If you want to add a swimming pool, then that's a whole bunch of conversations, customization, and checking.

If you want a completely bespoke house, then that is a conversation that lasts for months, and the requirements will always change.

Think about the Grand Designs TV show: that is software, and the architect's role is software development.


Because a lot of stuff gets written several times. It doesn't matter whether it's agile or waterfall, and I don't care who started it - customers or software engineering - or whether the rewrite happened before rollout or while the thing was already in the wild. Software development takes so long because we tear down what we build and redo it, a lot of times over.


What is the best resource that explains to an outsider why developing software is hard and takes a lot of time in general? I'm thinking of a small, accessible book (perhaps even a cartoon) that starts with a bunch of analogies, and then explains why those analogies hold in real life (this last part is important).


This may not be the "best" (not sure if there is a single best), but is pretty well known:

https://en.m.wikipedia.org/wiki/The_Mythical_Man-Month

It also has a 20th-anniversary edition.

I like his chapter/point "No Silver Bullet" in the 2nd edition.

Edit: He (Fred Brooks, the author) was in charge of the OS/360 project (OS for the IBM System 360 mainframe series). A pretty large project of the time - years long.

The cover image is a good analogy - tar pits.


Maybe this? http://www.bloomberg.com/graphics/2015-paul-ford-what-is-cod...

But I don't think it captures, as the current article does, just how much of an expert a programmer must actually be in the quirks of so many libraries.


What about "Big Ball of Mud"?

http://www.laputan.org/mud/

Some of the pictures are brilliant.


Perhaps a bit simpler, but all too applicable in real life.

http://heeris.id.au/2013/this-is-why-you-shouldnt-interrupt-...


Because software is a half-visible moving target, that's why.


I like your answer the best, quite succinct.

Software is / has way too many intangibles, by its nature. If you try to show the customer some tangible progress every three weeks, that's all very good, but it strongly highlights the "intangiblesness" (sorry, made-up word) of it all. Once the customer starts seeing some "real" feature take shape, the feature requests really start pouring in, and any previous understanding of the feature becomes a dim, ephemeral dream in everyone's minds. Constant moving target.


Maybe most programmers are amateurs? If you're really good at data structures/algorithms/etc. and practice them frequently, like a surgeon, maybe the landscape would look very different.

"Anybody can code, even an idiot can learn to code in 24 hours"


The 'readable' button in Firefox was not working for me. Maybe others would also like to paste this into the URL bar:

  javascript:document.body.style.width='600px';document.body.style.margin='0 auto';void(0);


Prefix the URL with "about:reader?url="

I have to do that a lot.
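
For example (the URL here is hypothetical - substitute the page you actually want):

  about:reader?url=https://example.com/some-article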

Web design isn't the solution. Web design is the problem.


Firefox is wonky with its "readable view" button, huh? I visited the page, it wasn't showing a button. Came back here, clicked again, and it did show a button. Clicking it worked well.

I don't know why it's breaking so easily.


Ha! You are right. A revisit made the button appear.


It doesn't really, unless you overgeneralize your solution (just like the rhetorical question in the title) and make wrong toolset choices.

20 years ago we also had fewer choices, hence fewer bug-ridden abstraction layers and moving targets. We mastered one programming environment and one target platform; both were much less complex than what we have today. We could also tack something together quickly, in an awkward way, without putting the code on a world-visible platform as part of our resume. Also, more precautions are necessary today, like security and general code quality (no more tricks that work only on one compiler).


Cause I (we?) spend too much time reading articles like this instead of coding.


If you were delivering very similar projects several times (in which case you should be reusing existing code anyway), estimates would be a lot easier and more accurate. As most software projects are using a combination of tools, people, requirements etc. that have never been combined before, accurate estimates are always going to be difficult.

I routinely read that estimates get better with experience. The only way I see this being true is that you learn to broaden your estimates as more risk is involved, not that you eventually become able to give accurate, narrow estimates.


I'd be tempted to say that a lot depends on being clear about what needs to be achieved. I've personally seen deadlines slip mostly due to a lack of clarity about what needs to be done. An analogy might be to say that what was initially described as an elephant turned out to be a giraffe.

Of course, there are instances where the actual approach, implementation, or choice of technologies weighs on the schedule. But that is much less frequent.


Why does some software development take so little time?

My suggested answer is that when the goal of the software project is selected carefully, in the context of the surrounding ecosystem, you can just connect a few things - all the real magic was in picking the goal.

When instead you pick the goal and then figure out how to achieve it, who would be surprised that there is often more work involved after picking the goal?


I suspect it has something to do with:

a) The programmer not actually using the software they create, so they end up spending time trying to understand the domain.

b) The programmer using the software they create and wanting it to do everything (see also the Second System Effect and Zawinski's Law/Law of Software Envelopment).

c) The programmer expecting components to work as documented.


To me it all boils down to this: hardware is all about making billions of things, all identical. Software is about building innumerable things on top of that hardware, all different.

What would you like it to do, what can be done, how to do it, how to organize a group of people to do it, how to make sure it works and stays working, it's all creative work and it takes time to do it well.


"Why does software development take so long?"

I can see this as a valid question for a subset of problems. For example, the basic database CRUD applications that get built over and over.

The closest solutions I've seen were DabbleDB (acquired by Twitter and shuttered) and Quickbase (laudable, but the pricing model doesn't work for many).


Software development takes a long time because we still write code. That's essentially the crux of it. Humans then plan/optimize efforts on the wrong layers and underestimate the cost of updating code over time - surprisingly so at scale.

The systems that tend to produce the fastest results (that also last) are the ones that give the developer the most control over the environment to get the task done (Lisps), or the ones that generate code for you and let you deal with abstract flows as the unit of control. We don't really use either of those things in industry... but niche segments do, to great success.


My real takeaway from this article is the amazing feat of typing at 800 characters a minute!

According to http://smallbusiness.chron.com/good-typing-speed-per-minute-..., a good typist does around 335 cpm.

So this is well over twice as fast.

Is this a typo, supposed to be 80? That would be very slow.


My guess is that figure is with "bursting" and is not very sustainable. Not to mention, being able to type so quickly is of little use when you need to think about what to say (well... type), and with programming especially, typing speed is irrelevant.

To give a comparison: I can type ~700 cpm steadily, meaning I can maintain it for long periods of time. That is only practical when copying text when OCR fails me, or when doing data entry. With "bursts" (unsustainable but rapid typing) I can type 160-170 wpm with a negligible error rate (<1%), but I can only keep it up for a few minutes. Realistically, I type closer to 110-120 wpm, because I take longer pauses between words while I consider what I am about to say (again... type).

My hunch is the author actually meant 800 cpm, but I do have my doubts about the accuracy and sustainability of the figure. It isn't "impossible", but it is "the top 1% of the top 1%" level. :)

http://i.imgur.com/NW2Kq4p.png


Why does it need another thread if ALSA uses poll()?


Buildings are totally different in that they are all, roughly, the same or small variations on the same design. Consequently it is very easy to reuse massive numbers of components in extremely predictable ways, and to define standards for review that can be broadly applied across the majority of projects.

This is not the case when builders are doing something actually novel, where they don't have many existing examples and the thing is built of entirely custom components. In cases like that it is exactly like software: there is a lot of figuring things out as you go along, schedules are usually laughably off by orders of magnitude, and there are frequently major screw-ups (the Big Dig, Seattle's Big Bertha fiasco, the SF Bay Bridge, ... to name just a few).

Of course, in software we aren't always reinventing everything, as people unfairly complain. We reuse far more than we reinvent - Postgres, Elixir, Phoenix, browsers, OS X, etc. represent thousands of lifetimes of work, and the majority of the total work behind an actual product. The time we do spend, and consequently the variability in schedule, comes from the fact that most software is quite different in its requirements from all previous software, and previous examples (aka competitors) have usually not made their source code available for reuse.

One huge difference between software development and "custom" building development is that good software developers embrace this variability and uncertainty - hence agile. Most other fields of engineering are still caught in an archaic and delusional world of Gantt charts and hard deadlines, with feature lists developed by architects. It never works that way insofar as the work is _custom_ - in any field!

The thing I really hate about this conversation is how eager so many people are to chalk all this up to software developers being half-assed hacks. Well, there are lots of teams at places like HP staffed only by "certified" software engineers, and guess what? They are still terrible at predicting outcomes, and they are actually far, far worse in general at their jobs than small, modern, agile teams who make the best of the chaos by addressing that reality in how they work (again, agile).

This is software development, people. It isn't about bad devs and lack of certification; it is just hard. For comparison, I did integrated circuit design for 7 years right out of school, up through being a lead on multiple larger teams, before switching to software. The rigor you perceive in other areas of engineering is a total fantasy. Bridges and buildings are maybe the exception, but again, this is only possible because it is such a canned art by comparison. If you disagree, please enumerate some specific "building codes" or "certification requirements" that you think would actually make any difference on your average complex software project.


Because nobody really knows what they are doing.


That's roughly the gist of it. There are exceptions but not many.



