Things I've learned about writing software after 12 years (medium.com/landongn)
298 points by landongn on March 13, 2015 | 118 comments



There's really one issue here. Budget.

You want two pairs of eyeballs on everything as you write it? Okay. Pay two guys.

You want complete test coverage? Okay. Can take as long as writing the code itself. Maybe more.

You want full documentation? Okay. Guy can't spend as much time on coding.

You want new APIs to be included? Okay. Guy's got to read docs and try samples. Less time.

What I've seen is you always end up with someone noticing that you could just spend all your time coding, and you would still produce an output that was ok. At least for small projects, or the start of large ones. It's as if there's a true productivity that's much lower than the imagined, and whoever is setting priorities doesn't notice it. He only sees code that's been written. He doesn't see technical debt. He doesn't see when a test catches a budding error. He doesn't see what that second pair of eyes is for. He doesn't know why you're always reading about some new thing.


The issue is underestimation. Everything is underestimated. That bug that was 5 lines of code? You had to spend 3 days looking for it.

Writing docs and unit tests now is cheaper than fixing bugs later.

I guess you're right that it is about budget because we're all trying to build a spaceship when we're only given some duct tape and a shoebox.

I've been explicitly told in one project to avoid quality and just make something cheap and fast. The project has been in that mode for the last year. Funny how cheap and fast actually means more expensive and slower.

On another project, the manager created different columns in our Kanban board suggesting that we must write unit tests and must have code reviewed and QA'd. Okay Mr Manager, now tell me where in your schedule and budget you're going to fit those key activities when you've accepted a fixed-time, fixed-cost project.

Underestimation is rampant in the industry, and I really like this saying for justifying the necessary work: "If you have time to do it over, you have time to do it right."


The funny part of this is that often the "budget" of time/effort is low-balled, and you end up with something that's more expensive to fix. The Hubble telescope from the essay is a great example of this. And we've all seen the project where the decision was to just put in a hacky workaround for a mis-design because the project deadline is in less than a month... and it continues that way for over a year.

In these cases, some sort of unwillingness or inability to pause and be thoughtful, driven by a rush to meet a budget or deadline, cost a lot of money and time. Perhaps, as you suggest, it's the great difficulty the lay-person or the manager has in judging the quality of software (or engineering, I guess), naturally selecting for the "cheapest" quality that seems to work, and very often hitting "too cheap".


The part about delivering a large visible project on time and owning the CEO or sales team's promises I actually get. Sometimes there's just better market/brand value in delivering on time (even if it's crap underneath).

What is truly criminal is not giving the developers at least a couple of refactor iterations to fix all the monkey-patch band-aiding put in to make that happen. It's a mix of management's naivete about how bad the underlying code is and their greed in wanting to cash in on the present victory at the expense of future technical debt (that hopefully someone else will have to pay).

I think at least some of this responsibility is on development. They should negotiate and secure the post-launch repair iterations in exchange for making 11th-hour design compromises.


You describe it as a zero-sum game, but it isn't.

Spending time writing documentation does not mean less time is spent coding, it means less time is spent coding that day, but then that is offset by time gained when someone later has to modify it and can finish sooner, thereby increasing the time available for coding other features.

This article struck a nerve with me because I've been at it 11 years, and feel much the same. The one lesson that has stuck with me most is the quality vs speed paradox: focusing on quality makes you go fast, focusing on speed makes you go slow. The reason is that if you cut corners you are continually wasting time fixing yesterday's cut corners, instead of implementing today's feature. You then feel pressure to rush through the new features to make up for lost time, causing more bugs to stack up, and... In extreme cases the project reaches a virtual standstill, spending all resources fixing past mistakes. Usually that's around the time it gets canceled, no lessons learned, freeing up everyone to do it over on the next project.

So, the issue is not one of budget, but of accounting. You have to account for the future cost of debts taken out on the codebase, and when you do you eventually conclude that the numbers tell you not to take out that debt. Convincing a manager of that however ... well, let me just say that it depends on the manager.


Your company gives you a bonus per quarter. You have one quarter to impress with new features in production. You don't write docs or tests, while team B does. You get 10 features, they get 4. The Q1 bonus is all yours. Your code is full of holes next quarter... but now that project is maintained by some junior dev, and you are crunching 10 features on the well-documented project on team B because they clearly needed help shipping.

You get the bonuses every quarter.


>Spending time writing documentation does not mean less time is spent coding, it means less time is spent coding that day, but then that is offset by time gained when someone later has to modify it and can finish sooner, thereby increasing the time available for coding other features.

Exactly. But when managers don't see it that way, you have the utilization paradox: people are constantly working but constantly behind.

>So, the issue is not one of budget, but of accounting.

Yes and no. The two are intertwined. If accounting and estimation were correct, the budget would be correct on average. Instead, the budget is on average less than what it's supposed to be, because people do not account for everything.


The real reason the software projects I've been exposed to have failed is communication, not necessarily budget.

A project manager can only get you so far, especially if the team dynamic is rotten or even sub-par.

Management is never really the answer to me, even saying that as a coder who splits time as a manager. At the end of the day, if your team doesn't have the right dynamic -- soft skills or otherwise -- I don't think they've got a great chance of success regardless of how much budget or process you can throw at them.


This is very true. I hadn't touched on the cultural aspects of why things get managed poorly.

One thing to note is what happens when the guy calling the shots is not a programmer. It's simply hard to see all the things programmers have to do, other than typing out code. It's also hard to see why some guy wrote a bug if you're never the guy who is responsible for it.

Have a look at sports teams. The coaches tend to be ex-players, even if they played at a very low level.

The other thing about manager types is that things never go wrong for them in a way that is purely on them. When I worked in marketing, I'd never get stuck on a PowerPoint slide for days. Problems were always organizational, and someone else was always involved. Sales guys aren't selling fast enough. Fab guys are stuck with their new process. PR people have the wrong message.

It's very easy as the organizer to just think all the problems are due to other people.


Very true. Except that budget is a limiting factor when it comes to communication as well. A team is going to put much less energy and thought into communicating if they have their hands full.


This also has implications for your career development as a programmer. I've been working at an outsourcing shop for the past two years and have barely grown as a programmer.

If you work at a place where the client isn't willing to pay for good quality software, you will never learn how to make software the right way, simply because there is no time to do it the right way.


Which is why a software project requires someone with technical knowledge doing the management, and not an MBA or "project manager" who cares only about being on budget and on time at the cost of technical quality.


If you know the management is not technically competent, the responsibility also lies on the developer to communicate things on their level. If they ask how long X will take, you don't answer 1 day, and then another day to test it. You answer 2 days. If that's outside their budget, then forget about that feature and let sales find a new feature they can sell that can be developed within the budget.

This also boils down to the ego thing: many junior developers are proud to give short estimates to show off how quick and good they are at their work. But those short estimates are always, always, always just the estimated time to get a quick prototype working, where things are configured as you go in the debugger, not something that is robust and will work together with the rest of the application in a customer-deployed environment. Estimation is really where you can see the difference between an experienced engineer and an inexperienced one.


TL;DR - you want to hire people who can adapt and who respect both sides (business, developers)

I work as devops and my boss always says that good operations/support people are hard to find. One can be a superhero in programming, but really bad at customer service or supporting production. Similarly, some ops people are horrible at coding, but they are great system administrators.

With that, I have had product owners and product managers who are really excellent at managing a team and able to cope with the lack of technical skill by absorbing technical knowledge from the daily standup, eventually being able to work with the team to prioritize technical challenges. For example, one product owner works in big data and couldn't ssh without me showing him, but he could go over the pipelines just well enough to make me feel embarrassed. Of course, if you are on a project long enough you should know how things work in general.

On the other hand, I have had really senior technical people leading teams who eventually got fired for their inability to lead.


In a business, on-time and on-budget are pretty damn important. I'd argue that for most software disciplines (note: I didn't say all), they're probably more important than technical quality.


It's been tried over and over again. Throwing developers at a project has negative returns at some point.

The truth is that individual engineer skill is the only good bet, and only if within a culture that works.


That's true, if and only if you have decent experienced developers. I work with plenty of mediocre developers, who can code, and not much more. They don't seem to see patterns between things. They don't think in sets. They probably don't refactor their code to make it maintainable for others.


This is actually in reply to d--b, who appears to be hellbanned.

d--b, you're hellbanned for some reason. FYI.


I think the main issue the OP is trying to describe is not the tradeoff between cost and quality. I believe this is well understood. However the problem that is not well understood by non-developers is the cost of change. Maintaining software is _extremely_ costly, and people simply do not get that.

While everyone understands that maintaining a physical infrastructure like a highway involves a large amount of energy (create an alternative road to redirect the traffic, fix the road, destroy the alternative road), the same understanding does not extend to software. Because what happens in the code is not obvious, clients and managers have no idea of the effort that would be required to change an existing piece of software without disrupting the current flow of operations.

Let's take a more practical example: the boss asks you to create a table in a database to store client information. And according to the first specs, each client has one address, so you go on and create a table that contains client id / client name / client address. Everything works fine, great! Now you do your demo, and suddenly the boss tells you: hey, I've got 2 addresses, can I have my 2 addresses stored in the system? You then have 3 choices:

1. Tell your boss: no, there's only 1 address in the system.
2. Tell your boss: ok, I can put in 2 addresses, but then it means I have to split the user table and create an indirection. So I have to rewrite the whole address read/write layer.
3. Tell your boss: sure, I can do it. What I'll do is take my first design, add 3 or 4 'additional' addresses to my table (who's got 3 or 4 addresses anyway?) and I'll just have some minimal changes to make to my code.

In most cases, if you're the developer, you'll opt for option 3. You may think in the back of your head that this is disgusting and that you'll change it later, but either of the other two options will make you look pretty bad.

Later on, the boss asks you: hey, you know what would be great? If we could look up users by their addresses. And now you find yourself having to deal with these additional address columns all over the place, and here starts your ball of mud.
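To make the two shapes concrete, here is a minimal sketch of the hack versus the rework, using sqlite purely as a stand-in (table and column names are invented for illustration):

    import sqlite3

    conn = sqlite3.connect(":memory:")

    # Option 3, the quick hack: bolt extra address columns onto the original table.
    # Every piece of code that touches addresses now has to know about all of them.
    conn.execute("""
        CREATE TABLE client_hack (
            id INTEGER PRIMARY KEY,
            name TEXT,
            address TEXT,
            address2 TEXT,
            address3 TEXT
        )""")

    # Option 2, the rework: one row per address, reached through an indirection.
    # More upfront effort, but "look up clients by address" is now one query.
    conn.execute("CREATE TABLE client (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("""
        CREATE TABLE client_address (
            client_id INTEGER REFERENCES client(id),
            address TEXT
        )""")

    rows = conn.execute("""
        SELECT c.name FROM client c
        JOIN client_address a ON a.client_id = c.id
        WHERE a.address LIKE ?""", ("%Main St%",)).fetchall()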

I think you see my point. The problem is not a question of money, it is very much a problem of understanding the underlying structure of a project.


In our current society model we don't really have a "choice". Basically, if the budget is put towards "doing things right", there is very little chance the project takes off; it's going to be seen as "too long-term".

Are we doomed to fail? Maybe.


We get the software we deserve. Apparently, people aren't ready to spend more money on performance, reliability and security. When they buy software, they pay the price of a giant ball of mud (if they aren't expecting it to be free!) and they get a giant ball of mud. Simple. You call it insanity; I call it the market, supply and demand. Of course, the perfectionists are unhappy about that (being one myself, I can certainly empathise), but you have to realize that this kind of perfection in software costs huge amounts of money and requires the most skilled workers in the field.


I've only recently started working for larger more corporate-type firms and it's absolutely amazing how much they pay for giant ball of mud software. Astronomical amounts and then on top of that there are maintenance fees, support fees, upgrade fees. It's absolutely amazing and frightening the money being spent.

So I think the problem isn't money; the real problem is there just isn't enough software! We have a software deficit. With software being a critical part of nearly every business in the world, there is huge demand but massively insufficient supply. In my industry, there are only two big software providers and a handful of smaller ones. And it's a huge industry in both money and sheer size. But they literally cannot buy anything but big balls of mud.

People are paying big money and small money for whatever software they can get. And the natural consequence is that performance, reliability, and security are hit or miss.


Speaking from several years at a company pre- and post-merger, the software they want will always be scarce, because they want something customized just for themselves.

They might not get it, of course -- the money is an (often ineffective) stand-in for institutional/organizational change, AKA "throwing money at the problem."

It's simply easier to get people to spend gobs of money than to change their comfortable short-term habits. (Evidence: wasted gym memberships everywhere.)


The core problem isn't that there is a lack of software creation; there is a lack of software, but the real problem is that the buyer is uneducated and conflates priorities. The most successful software services firms aren't those with great engineers; they are those with great sales guys.

The lack of discernment of the buyer is creating all this software waste. This is one of the reasons that MS Excel and Access run most of the world's financial data. The format is open enough that it basically imbues the user with partial programming ability without having to worry about things like new line feeds and uptime.


if you don't mind sharing, what industry/niche?


Law.


"We get the software we deserve."

Painfully true. There are techniques for writing better software. They work. They take longer and cost more. They are not widely used outside aerospace.


Is there a high level overview somewhere I can read? Don't even know what to google here.


http://spinroot.com/gerard/pdf/P10.pdf (some are more/less appropriate for non-embedded-systems)

In general, the answer typically involves formal specification and formal methods that check the code against these specifications, combined with testing and coding standards that result in analysable code.

More references:

https://www.cs.umd.edu/~mvz/cmsc630/clarke96formal.pdf

http://research.microsoft.com/en-us/um/people/lamport/tla/fo...


You might want to look into coding standards for C and special languages like Ada (like C, but less writeable, more readable with strong types) and Esterel (deterministic multithread scheduling). Seriously, Esterel is probably the coolest thing you'll read about this week.

There are also various specification languages for multithreaded behaviour, which allow you to analyse your program's behaviour using software tools, for example SPIN[0].

0: http://en.wikipedia.org/wiki/SPIN_model_checker


Search for high integrity software.

For example, with MISRA it may be C, but feels like Ada.

http://www.misra.org.uk/

Or Spark similarly for Ada

http://www.spark-2014.org/


If we're all creating giant balls of mud because the demand is for giant balls of mud done quickly, then we never learn how to craft beautiful software.

It's unfortunate, and it's sad to say I'm often part of it at some level (but I fight it dammit!).


Also in medical devices. IIRC FDA has standards for software writing/testing/certification.


FDA requirements mostly target the SDLC: risk analysis, change control, documentation, v&v, &c. Companies are afforded a surprising amount of flexibility in implementation. Basically, you must have a documented process that you follow, but you're left to your own devices in creating the process. Deviations from voluntary industry standards (e.g., TIR 45) are permissible since they're not specifically required. The DoD, aerospace, and automotive industries have in comparison far more detailed and strict regulatory requirements.



Yup. How a DO-178-like integrity level is not mandatory for medical devices is troubling.


I guess the other thing is instability of the requirements. Something like grep or uniq is almost perfect because the use case is stable.


When I was still in training, I had the pleasure of participating in a software project to build a "web shop" for internal use at a large IT company. It was a pleasure insofar as I was only a trainee and did not have to deal with the politics and all that. One day, a week before the first part of the web shop was scheduled to go live, the programmer I was working for came back from a meeting. Usually, he was a pretty laid-back person, very friendly and a pleasure to work with/for. But when he came back from that meeting, he was mad, like, "Hulk SMASH" mad.

I asked him, as delicately as I could, what had happened.

This project we were working on, he told me, had been going on for about half a year, and during that time there had been a meeting every week, where everybody involved in the project had an opportunity to get together and review and discuss the progress of the project.

So there was this guy, who had only attended the very first of these meetings, without saying a single word. And the week before the thing is supposed to go live, he shows up again, with a long list of changes he wants to make to the application.

Needless to say, that programmer, and several other people in the room, told that guy how he had six freaking months to tell the programmers about his requirements, and that he could not possibly expect them to make all these changes a week before the application went live.

So that guy goes to the highest-ranking manager involved with the project, manages to pull some political strings, so that manager goes to this programmer's boss, that boss goes to the programmer and tells him, "I know how much it sucks, but you have to do this. I share your pain, but I, too, am powerless to refuse this request."

When building material things, it seems, houses or ships or airplanes or railroads or whatever, people do realize that you cannot tell, e.g. the construction company that you actually want a house with a circular outline rather than the rectangular one the company has been building for the last, what, three months. Managers at car companies, I guess, do not storm into the engineers' offices a week before production of a new car starts to tell them the vehicle not only needs to be small and fuel efficient, but also needs to be able to work in antarctic climate and run on carbon dioxide instead of gasoline.

With other products - at least, that is the impression I get - people have an intuitive or explicit understanding of the limitations of the things they want to have built, and they also seem to basically understand that you cannot make drastic changes at the last minute. With software, which does not exist in the tangible way that, say, a car or a house does, people seem to have a much harder time understanding those limitations. I am not sure this is the entire problem with software development, but it is a big part.


Thankfully I'm at the point in my career where, if such a thing happens, I can respond calmly and without anger that this will reset the clock on development, and that doing so is a management decision that does not reflect poorly on me or my team. If they want to do that, I can do that, but I won't take the blame for that failure.

I recognize some organizations will not accept that answer and will try to make me take the blame. Such organizations will tend to have a very poor tech department.


The thing with programming and abstract stuff is it's very hard to know what makes the problem much harder. And you definitely won't have intuition for it if you haven't programmed.

For instance, it's easy to prove that there's an infinite number of prime numbers. There's no biggest prime, because if there were (hand-wave proof coming) you could multiply them all together, add one, and the result wouldn't be divisible by any prime on your list, so some prime must be missing from it. (The product plus one isn't necessarily prime itself: 2·3·5·7·11·13 + 1 = 30031 = 59 × 509.) Easy, right?

Now tell me if there's also an infinite number of twin primes, i.e. pairs of prime numbers separated by two.

Here's a good xkcd: http://xkcd.com/1425/


>but you have to realize that this kind of perfection in software costs huge amounts of money

Maybe not as much as people think. There are machine-assisted mechanisms for building mathematically perfect software. See Curry-Howard oriented languages like Coq and Agda. It's certainly harder to write perfectly, provably correct programs, but maybe not as hard as most people think.

There is also a lot of middle ground in the form of relatively powerful (but not quite Agda-powerful) type systems.


Mathematically perfect provided the proposition you're verifying is actually relevant and consistent with the one you're applying in the real world. Given the finicky nature and constraints of the environments that most software runs on, this could prove difficult. Moreover, the proofs must be maintained along with the software.

As for type systems, mission-critical software has generally made good practice with them (Ada, proof checkers, etc.), more so than consumer and enterprise circles. Yet a ton of catastrophic bugs have been the result of errors outside the scope of type checking, and as a counterpoint, our telephony is pretty robust with dynamically typed Erlang switches. It seems that mechanisms for building self-healing and concurrent systems are often put aside, while robustness gets equated solely with strong type systems. From what I recall, Erlang's signature feature of hot code loading actually conflicts with static typing.


I am familiar with some code-proving tools and functional programming. Verifying such code is tedious and will increase time, cost and skill required for the project substantially. People just aren't ready to pay the price for that. Yet.


The thing about typechecking, though, is that you're essentially specifying your program's behavior twice: Once very detailed in the actual code and once at usually some lesser detail in the types. All the compiler does is checking that these two specifications are consistent with each other. Your types can have bugs too, in which case nothing will help you. What frustrates me about detailed type systems is that as detail increases, difficulty of writing the types will increase and type bugs will become more common. Now, assuming the bugs in your code and types are statistically independent that would still save you a lot of bugs, but I suspect they are not.
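A tiny sketch of that "two specifications, one consistency check" point, using ordinary Python type hints as the coarser spec and a checker like mypy as the verifier (the function and its behaviour are invented for illustration):

    from typing import List

    def sort_descending(xs: List[int]) -> List[int]:
        # The annotation is the coarse spec: a list of ints in, a list of ints out.
        # A checker such as mypy only verifies the body is consistent with it.
        return sorted(xs, reverse=True)

    def sort_descending_wrong(xs: List[int]) -> List[int]:
        # Same type, wrong behaviour: this still type-checks, which is the point
        # above about the type pinning down far less detail than the code does.
        return xs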


> The thing about typechecking, though, is that you're essentially specifying your program's behavior twice: Once very detailed in the actual code and once at usually some lesser detail in the types.

How so? The implementation is not the specification; but the type is the specification. Like people like to say, the type is a Theorem, and the implementation is the Proof of the Theorem. (In a very real sense.) And don't you need both?

Hopefully there can be some type inference at a certain scope, to avoid cluttering the code with a lot of 'obvious' type declarations. I forget exactly how dependently typed languages work in that regard right now (there certainly can't be full type inference).

> All the compiler does is checking that these two specifications are consistent with each other.

"All". That's already better than informal mathematical proofs.

> Your types can have bugs too, in which case nothing will help you.

Yes, just like any other kind of alternative specification there is. Short of mind reading, there is no getting around actually describing what we want. But where there is a distinction to be made is in what kind of specification is easy and declarative enough to use as to give the least likelihood of introducing bugs in the spec itself.

> What frustrates me about detailed type systems is that as detail increases, difficulty of writing the types will increase and type bugs will become more common.

I guess if we assume that we have collapsed/unified the type and term(/value) level, we can use any old Good Software Engineering Practice when it comes to keeping the types tidy. Like using type functions to encapsulate some of the detail: maybe have a `sorted(list)` function instead of having to write it out each time we need it.

This is just a guess though; I don't know how the dependent programmers do it. Well most of what I'm writing here is guessing, on some level.

(And judging by the pride that some people show when they proclaim that "half of our code base consists of just tests", well... I certainly think that types can be more succinct than that!)

> Now, assuming the bugs in your code and types are statistically independent that would still save you a lot of bugs, but I suspect they are not.

I guess if we go with the previous assumptions of a unified value/term level, then the statistical chances are the same in that types are just ordinary values, instead of the types belonging to its own language. ;) Then we just have to make sure that the amount of types is substantially smaller than the amount of "regular ol' code".

Types are the future. I hope (maybe).


The problem with advanced type systems is that, while the theory and implementations and the research might be mature, it will take god-knows how many years of research oriented on usability in order to get something approachable and understandable. If people are even researching usability like that in a systematic manner (I have my doubts whether there are PL researchers that are working on that kind of thing, or if the community has just thrown their hands up and said that it isn't doable).


So true. Pretty much these thoughts went through my head every time my last boss lectured me to get the team to produce faster while reiterating his bug-free goal.


Unless you’ve written a buggy program, you don’t realize that you’re addressing our intellect. This is why I think that every engineer on the planet looks at a bug report and feels a twinge of pain as they read whatever detail that was left to serve as a figurative shame sticker on the report card of their creation. It really sucks when you’re just flat out wrong.

Being wrong — rather, being incorrect — is an extremely humbling experience. The catastrophically incorrect, which is when software crashes, money is lost, or the absolute worst, data is stolen, is the kind of thing that makes you question your career choice. It makes you want to curl up into a ball and weep at how completely fucking stupid you were when you’ve found the problem.

Just had to quote this - I urge people who stopped reading or don't want to read the article to read this anyway. I've read a lot on developing, but I think it's the first time I saw someone putting it like this. And oh boy, is he right. At least for me. Every bug report (well, the ones which point to something I obviously fucked up) hurts. What hurts even more is the dreaded reopened-because-I-fucked-up-again. Especially because sometimes that means the whole set of classes surrounding the bug are just textbook examples of all the code smells in the universe. And the only thing that can be done about it is the nearly impossible task of writing good code 100% of the time.


I have a slightly different take. That used to be my initial reaction, and it probably still is, but taking a cue from other engineers better than me: they seemed to internalize that there will be bugs, period. So you might as well get over it and just fix them as they're found rather than get upset over them.

Of course they try to write good code, follow good practices, write tests etc but there's just going to be bugs, period. So don't beat yourself up. Just fix the bug, learn whatever you can from it so hopefully you won't do it again and or if it calls for it adjust your build infrastructure or testing infrastructure so you're more likely to catch them in the future.


"Just fix the bug, learn whatever you can from it so hopefully you won't do it again"

The baggage associated with bugs is so prevalent that we can't talk about them without a real sense of shame. "Hopefully you won't do it again".

Bugs are just part of the cycle. I completely agree with your assessment that people who can realize that are happier and more productive because they can plan for the probable case.


Same take, different personalities? I know and have known for a long time (in fact that was a small epiphany back then:) there will always be bugs no matter how much better I'll be in the future. And I do learn from them and fix them as they come by. But unfortunately that does not, and likely will never, stop me from feeling somewhat bad/stupid about creating one. Maybe I make it sound worse than it is though - it's not like bugs affect me more than is healthy or keep me up at night. After all: in the end the bug is fixed anyway so nothing to worry about anymore :P


I feel no pain when I see a bug report. I just look at the bug and the reason why it occurred, and then attempt to correct that in the future.

Bugs in a tracker don't hurt, bugs not in a tracker hurt.


People often forget that 'being wrong' feels exactly like 'being right.' What is humbling and embarrassing is learning that you were wrong before. When you are making a mistake, it usually doesn't feel like a mistake.

So it really is safe to assume that everything you do is wrong. It is nearly impossible to tell the difference between right and wrong when you take action. And wrong is so much more likely than right.


From Edsger Dijkstra: https://youtu.be/RCCigccBzIU?t=13m54s

> ...just after the first successful moon landing [...] I knew that each Apollo flight required some 40,000 new lines of code. [...] I was duly impressed that they got so many lines of code correct. So when I met Joel, I said, "how did you do that? [...] Getting that software right." [...] He said that in one of the calculations of the orbit of the lunar module the moon had been defined as repelling instead of attracting. They had discovered that error by accident. Imagine, by accident five days before the shot.

Sound familiar to anyone?

With some 16 years of experience myself, the clarity of "the problem of time" has become the foremost factor in the bad software that I knowingly write. I imagine NASA has the same problems that we do.

I've been through about 5 cycles that all occur in the same form:
1) We put the under-performing developers on maintaining "legacy" systems and the seniors begin on the next one.
2) Time is purposely estimated shorter than viable, corners get cut in the software, prototypes are put into production once again, and a new legacy system is born.
3) The good programmers leave, some juniors become seniors and others are hired, and the cycle repeats.

There are many additional mismanagement mistakes that accumulate, but limited time always seems to be at the center of the problem for me. I whole-heartedly believe that really great software can be authored by just two or three high-quality engineers over the span of a few to many years with a clear vision.


I agree, but I think "with a clear vision" is the rub here. That is possibly the real impediment in all these cases.


Many people here have already acknowledged that this text misses some details, but I'll put it even more harshly: even though it's interesting to read, it's in fact just completely wrong and somewhat misleading.

The author starts by mentioning huge, complicated and ambitious projects and pointing out how a seemingly insignificant mistake makes all the difference between "it works!" and "it doesn't work!", which sure will sound familiar to what we experience every day, but on a different scale. With this impression he moves on to talking about modern software in the industry and at home, and how we fail to write correct code and why.

But in reality, nobody (except the programmer himself, maybe, if he fails to see the bigger picture) gives a fuck about correct code. Glitches and errors are ok, if they don't make software unusable and prevent us from achieving some global business goal. It's ok for a landing site to have a CSS bug that makes some button look weird in certain situations, unless it really annoys the customer and we lose a sale - the chances are that nobody even notices, and it may be fixed next week or even never. It's ok to put your business-critical back-office software in production and find out that it contains some really nasty bug, which you'll fix in a hurry in the next 20 minutes. It's ok for Ubisoft to spend millions of dollars on production and then ship a game with bugs, which will be fixed with the next patch release. It's ok to have a stupid, over-complicated networking protocol that might have been 5 times more efficient, but is still usable and allows people to do something they couldn't have done before.

What is not ok is to spend millions of dollars on development and then never ship the product. It is not ok to let your client find a solution which works for him but is cheaper. In a word, whatever actually hurts your business is not ok. And the truth is that a cheap solution that is "somewhat ok" is usually better than the expensive "good solution".


There's a sentiment in the article I see a lot that keeps coming up these days: "NASA sent a spaceship to the moon with something 100x slower than my computer, so why does it take so long to load a page"?

I think it's entirely unjustified. What makes modern software engineering difficult is that it's one of the few engineering disciplines where not only are you making a Boeing 747 fly, and all the parts were made by different people in different locations, but you're also replacing the engine in flight!

Also the specs on the parts are kind of spotty. They weren't agreed upon by a single group of people, and they certainly weren't decided by you. Some of the parts are decades old. There are parts in there that are older than most of us!

Often, the parts decide to change how they work. They don't ask you whether they should. Sometimes, also, the parts decide to disappear. It's up to you to figure that out.

In fact, there aren't really specs on how the parts go together, come to think of it. None of the people who are building this are talking to each other. You could say that most of it is kind of emergent. There are no top-down quality controls.

There are layers upon layers of abstraction, and no one knows all of it. No one even knows most of it. In fact, most people don't care about any of it other than a cubic centimeter of their own few layers.

And it works - by golly it works. And a person with no training can absentmindedly navigate all this by flicking their finger about while walking down the street and drinking their coffee. IMHO, this is a miracle in proportions that are indescribable. The moon landing really was nothing compared to this. So yes, there's some overhead involved in getting that to work. :-)


> The moon landing really was nothing compared to this.

I used to believe that, but I have slowly moved to the other side.

Things are complicated. There is no doubt about that. But, landing on the moon was a fight against nature (gravity, air, distance, etc). Current programming is a fight against stuff someone else dreamed up and no one ever fixed, because "LOL, that's old school" or "You're just doin' it wrong".

Most of the points about parts not being specified correctly, not working as expected, or disappearing happens all the time in other industries. (That's one reason hardware kickstarters fail so often.)


Except it's an apples-to-oranges comparison to start with. The mathematics of space-travel is very simple - it's just calculus. You need to be accurate, but it's easy to quantify and well controlled.

Which is why you can go to the moon with a decent graphing calculator, but you definitely can't have a self-driving car with much less than a modern supercomputer.


> The mathematics of space-travel is very simple - it's just calculus. You need to be accurate, but it's easy to quantify and well controlled.

We're not talking about calculating our way to the moon. We're talking about going to the moon.

The space program had much more than a graphing calculator. They had buildings full of people, systems, and realtime communication with the craft.


> but you're also replacing the engine in flight!

Could you elaborate on that, using a mainstream example, please?

> Also the specs on the parts are kind of spotty. ...

Are you referring to the moving parts of the Internet itself?

> There are layers upon layers of abstraction, and no one knows all of it. No one even knows most of it. In fact, most people don't care about any of it other than a cubic centimeter of their own few layers.

This is an invented problem. Scaling a one-off to work for the masses does not require `layers upon layers of abstraction'. Most of those layers are poor engineering backed by a lot money that is thrown at hardware and code maintenance.

One of the realisations that I had over years of `fixing crises' was that increasing numbers of `start up' programmers produce write-only code. Testing and debugging them is painful and expensive, even in the medium term. Such code bases demonstrate a clear lack of basic engineering discipline. Nonetheless, such programmers are hailed as heroes, because they managed to cobble up a nice-looking mess in a week or two. Then, enormous amounts of effort are poured into making the mess work tolerably. It also consumes phenomenal amounts of time.

`Moon landing' projects cannot countdown and blast-off with a great `UX', and then start figuring out how an inertial navigation system should work, over the next week-end.

So, yes that page that `takes so long to load' is indeed `a miracle in proportions that are indescribable'. Only, in a mostly pathetic way!


I have to disagree with you. The root problem is not that people are undisciplined, but that the problem is undefined. Taking the time and effort to do things in a verifiable and provable way is useless if the solution is the wrong thing. The biggest risk in a startup is to take some investment money and build something which doesn't lead to any kind of user traction or revenue stream. That is such a big risk that it's worth writing shoddy code to chase that. If and when you find a real business idea then you can rewrite the code with the knowledge of what it needs to be genuinely useful.

But if you start with the idea that you're going to write solid code on principle then you're just throwing good money after bad.


> The root problem is not that people are undisciplined, but that the problem is undefined.

> If and when you find a real business idea then you can rewrite the code with the knowledge of what it needs to be genuinely useful.

I have seen several start ups - invested in a few, contracted for several more - of the nature that you describe.

With the benefit of hindsight, what is there so very laudable (thanks Jane Austen) about a culture that does not have a `real business idea', but `rushes to take some investment money and build something'? When you start without a clue of what sustainable value you can provide, most of your problems are invented problems.

Another thing while we are on this topic: it is exceedingly rare that a start up really loses out because another beat it by a few weeks. Yet, shoddy work is encouraged or condoned in the name of the necessity to move at `Internet speed'.

The `real business idea's of most start ups offer little incremental or differentiating value. The ratio of successful start ups to the unsuccessful ones speaks volumes!


> When you start without a clue of what sustainable value you can provide, most of your problems are invented problems.

What an obscene strawman.

The point isn't that you go into it without a clue what you are doing. The point is that you learn so much faster once you actually have something out there. Before that you are living in a fantasy world where many assumptions both large and small will turn out to be false.

Software development practices are not so binary. They are on a continuum from NASA-like pursuit of perfection down to hackathon throwaway code. You need to decide what is appropriate for the problem at hand and the stage of your company. If you insist there is only one acceptable standard of quality then I'll run circles around you in terms of converging on the right solutions to the right problems just by writing one off shell scripts while you are busy setting up your testing framework and assertions to prove that what you are doing is correct.


'Landing on the moon' simply isn't as simple as the slogan makes out. The project took 4% of the USA's GDP to complete...


Yep, I agree it wasn't simple at all, but what percent of world GDP did it take for me, with a MacBook running Chrome, to connect to Facebook and post a message?

Again, building a one-off is easier than a consumer-ready working system deployed across millions of locations that is owned by no central authority and has to run continually. That's why we had SHRDLU (https://www.youtube.com/watch?v=QAJz4YKUwqw and http://hci.stanford.edu/winograd/shrdlu/) and The Mother of All Demos (https://www.youtube.com/watch?v=yJDv-zdhzMY) years ahead of their time. That was my point.


Except that most modern development isn't trying to make a Boeing 747 fly. It is trying to meet a goal that works for most people, with enterprise development even explicitly throwing out "80% rules", etc.

So most developers are just trying to make a 747 taxi to the end of the runway. Or even just sit there without its wings falling off.

We'll get it to fly in the next version.


The main thing that resonated with me is keep your software simple. Don't try to be cute or clever. Don't whip out your favorite OO design pattern when a simple function will do. Don't try to optimize the function you spend 1% of your time in. Follow Unix design principles. Simple is easy to maintain, easy to debug, easy to learn, easy to understand. Impress your peers by writing good code that works well rather than code that only you understand (and even then you will have to relearn when you come back to it in six months).
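A contrived contrast of the same behaviour written both ways, just to make the point concrete (all names are invented):

    # The "clever" version: a strategy class hierarchy for a one-off rule.
    class DiscountStrategy:
        def apply(self, price: float) -> float:
            raise NotImplementedError

    class TenPercentOff(DiscountStrategy):
        def apply(self, price: float) -> float:
            return price * 0.9

    def checkout_clever(price: float, strategy: DiscountStrategy) -> float:
        return strategy.apply(price)

    # The simple version: one function, same behaviour, nothing extra to relearn
    # when you come back to it in six months.
    def checkout_simple(price: float) -> float:
        return price * 0.9

    assert checkout_clever(100.0, TenPercentOff()) == checkout_simple(100.0)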


If I may dare to add:

... writing good code (that others, including yourself 1 year from now, can maintain easily) that works well ...


Yes, yes and yes. In general my philosophy is this: you aren't a genius so KISS. Ohh you are a genius? Well the guy next to you isn't and he has to work with you so KISG (keep it simple genius).


It's a shame the article is so egotistical, hyperbolic, and poorly edited. Also, not crediting obvious sources[1] for your punchlines smells like plagiarism.

Otherwise, I really do like reading articles like this as touchstones for engineering and development.

[1] http://blog.codinghorror.com/coding-for-violent-psychopaths/


Credit where credit is due, that was lifted nearly wholesale from codinghorror. Wonder how you do liner notes with Medium. I'm a terrible writer, so the rest of the items I've stolen or otherwise borrowed aren't meant to stand on the shoulders of giants like Atwood; they're to reinforce a point the best way I knew how.

I lifted all of the comments from stackoverflow, too: http://stackoverflow.com/questions/184618/what-is-the-best-c...

I'm curious where the egotistical portions were, though. I wanted to at least prefix some of the more "thou shalt" sections toward the end with a disclaimer that they're personal opinions on how I operate.

Glad you liked it, though!


Thanks for the link, and for the link inside of it [1]. It expresses the core problem a lot better than the original article!

[1] http://blog.codinghorror.com/the-noble-art-of-maintenance-pr...


Excellent piece of writing. Very funny at times, witty like a satirical novelist. Particularly enjoyed the anecdotes from NASA and the author's conveying of the powerful emotional context of high stakes programming.

I pine for a golden, future age of programming where documentation is as he describes.

> This image brought to you by preventable catastrophe, budget overruns, political wrangling, infighting, incalculable complexity, and against it’s best effort, humanity.

Beautiful caption. The dismal history of NASA in one sentence.


"dismal" - that's an absurd word to apply to an organization that has successfully enabled humanity to explore the solar system over the course of a short 50 years.

There is plenty to Monday-morning quarterback, but "dismal" is not the word I would ever use.


You're reading me wrong. Dismal means sad, and the caption is meant to be wry or ironic.


Budget as an excuse for poor software seems like a bad argument. There are as many people talking about how much more expensive it is to fix bugs late in the game as there are people pressuring teams to deliver shit code on tight timelines. It costs less to write better software. Invest upfront and rejoice that you did later or rush it out the door now and pay to maintain it forever.


The ideas behind this article are good. The author gives some guidelines that have helped them write better software. However, it makes me think about how we can make "making software" better. It seems like the best current way of going about it is to implement these zen-like self-discipline techniques that have helped the OP.

What's so inherently wrong about software? Why is it so damn difficult to write software that doesn't explode if you don't hold your tongue at the right angle?


"Why is it so damn difficult to write software that doesn't explode if you don't hold your tongue at the right angle?"

Hubris mostly.


I don't know, it takes a certain amount of hubris to write any software at all.



The Big Ball of Mud is not unique to software. Ever been in a building that's had a lot of renovations and additions? There are weird half-flights of stairs here and there, convoluted paths from one side to the other, different styles of fixtures, etc. Eventually they get to the point where it's easier and cheaper to just build a new building somewhere else, or tear the old one down and start from scratch. Just like we do with software.


Absolutely true. Shantytowns are a common parallel to big balls of mud, and I think that would hold true for anything that gets constructed.


It was a good rant until he encouraged writing a test after writing the code to be tested :( As a developer also with 12 years experience, I implore you to write your tests first. Otherwise your organisation will almost inevitably end up with brittle tests & it will often cost more to patch up the tests when they break than the cost of the possible increase in genuine bugs. I don't necessarily espouse slavishly adhering to TDD (it works for some devs, not everyone) but experience has taught my heart to sink every time I hear someone say "I've implemented the feature, now I just need to write tests."
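For what it's worth, a minimal sketch of the order being argued for, in pytest style (the feature and names are made up):

    # Step 1: write the failing test first. It pins down the intended behaviour
    # before any implementation exists, so it can't simply mirror the code.
    def test_slugify_lowercases_and_joins_words():
        assert slugify("Hello  World") == "hello-world"

    # Step 2: write just enough code to make the test pass, then refactor.
    def slugify(title: str) -> str:
        return "-".join(title.lower().split())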


Leave it to me to count the zeroes:

> It contained a defect of effectively point zero zero zero zero one inches. that’s 0.000001.


Sean Parent would bring up the fact that given enough scale all systems end up being a network problem. If you look at the internet, you can think of it as one big system. Huge and hideously complex, its pieces are connected but not dependent.

If you can boil your software down to parts that can afford latency (even if they need throughput), you can pass messages between them. That is possibly and probably less efficient, but your macro pieces that pass well defined messages between each other can exist without adding to the complexity of the other pieces.
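A toy sketch of that shape - two pieces that share only a message format over a latency-tolerant channel (everything here is invented for illustration; a real system would use a network or a broker rather than an in-process queue):

    import json
    from queue import Queue
    from threading import Thread

    bus = Queue()  # stand-in for a network channel that tolerates latency

    def producer():
        # This piece only needs to know the message format, not who consumes it.
        bus.put(json.dumps({"event": "order_placed", "order_id": 42}))
        bus.put(None)  # sentinel: nothing more to send

    def consumer():
        while (raw := bus.get()) is not None:
            msg = json.loads(raw)
            print("handling", msg["event"], msg["order_id"])

    t = Thread(target=producer)
    t.start()
    consumer()
    t.join()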


To the point of automation, my credo is to automate all the hairy stuff, because otherwise you spend your focus and frustration budget on it. The rule of thumb is more like: "if you can't make this your daily priority but need to do it repeatedly, take a day and make it a no-brainer."


Great article. Perhaps because I suck at grammar, I didn't even notice the language issues. Interesting that HN is getting bogged down analyzing grammar rather than the actual content. Were the mistakes so bad that they distracted you from the message? FWIW, I was able to comprehend it just fine.

One interesting issue I see is with specific documentation rather than 'self-documenting code': Considering docs don't compile, I have seen (in my 13 years ;) that docs tend to drift apart from the actual code and become more of a PITA than the code itself. At least once I have caught myself down a wrong path because I trusted the docs.

I don't have anything against docs though - I write a bunch myself. I am just very circumspect about trusting them completely.


Yes, they really do detract from the message. They're a jarring distraction, like a skip or buzz in a piece of music. And sometimes, even when you think you've comprehended it fine, you and the author are thinking about different things because you silently filled in a gap with the wrong idea (can't say for sure if there were cases like that in TFA).


"It’s like creating Frankenstein and then realizing that you didn’t bolt the legs on. After you’ve hit it with lightning and given it life. While it’s attempting to walk."


Inexperienced developers would benefit most from reading this, but it's also harder for them to put any of it into practice... in your first job you're usually just scrambling around day to day trying to get a handle on the work, and also to fit into the workplace culture.


Wow. It feels like I wrote that article, I agree with it so much. There are a few things I disagree with:

1. I'm not a TDD purist but I do think a red/green TDD cycle does strongly encourage all of the following things to happen in a more natural, flowing way: "Tell yourself what you’re going to do, and then implement that sentence. When you’re finished, take that sentence and write a test, then just move on." Coding before testing isn't the issue as much as the potential for needing to change code just to be more testable (an efficiency issue).

2. No sale on the idea of documenting every piece of code; I truly feel like doc comments muddy the contents of the file, plus the resulting documentation is less likely to be read than code and more likely to fall out of sync with the contents of the code. Completely separate documentation is hard to keep updated. It's often something to maintain with no tangible benefits unless you are churning developers like crazy. Certainly it might be reasonable to document high-level interactions for a complete view of the system, but documentation at lower levels seems an increasingly bad value proposition to me. In a perfect world, sure... but in real life there are deadlines, and tests + code almost always serve as enough of a developer guide to understand clearly. If not, I'd rather spend time refactoring instead of writing docs. For the few new developers brought on, reviewing the system in a paired-programming manner is a good way to get to know them and ramp them up at the same time. Parts of the system that are difficult for them to comprehend might warrant refactoring attention.

3. "If something is performed more than three times in a 4 hour span" It's difficult to make a wholesale suggestion on things like that. If said task is 1 second clicking the "get mail" button and automation is auto-checking every n minutes, automation can be a net loss in terms of productivity. I think the XKCD[1] chart is probably a better measure ;) Anyhow it's definitely better to not waste loads of time doing things you do often, and the more often you do something the more likely it is that automating the task would pay off; just don't do it with an utter lack of discretion.

Some typos in the post:
- Stray period before "I just call it by what it was: A big ball of mud."
- "Simple isn't sexy." should maybe be "Complex isn't sexy" or "Simple is sexy"
- "internet" (proper noun)
- "explaining just how the interact with" s/the/they
- "english" (proper noun)

Very thought-provoking and fun read (lots of laughs)- I really look forward to future posts!

1. http://xkcd.com/1205/


12 years? Sorry to break it to you, but that's a short time.


12 years is the middle years for a career programmer. Enough to know what patterns aren't serving you, and enough to know that there is still a bunch to learn.


Middle years!? How long are these "middle years" supposed to last? I'm 22 years into my career and there's no way I've hit the mid point yet. It still feels like a joke to have a job title with the word "senior" in it.


How long have you been programming full time though? I find too many people on HN are 25 and count "15 years experience" because they started programming at 10. It's just not the same thing, especially when most people only count professional experience (and full time at that). 15 years of programming experience < 10 years full time experience in an enterprise or startup environment.


I'm counting from when I dropped out of college and started working full-time. Maybe only 21 years ago, it's a little hazy now. I'd been writing code for seven or eight years at that point.


It's pretty impressive, then, and nonetheless, I'm just a lot less impressed when somebody says they have 10 years experience and it equates to 2 years of full time, enterprise or startup experience, and 8 years of "I'm going to school full time and program 3 hours a night." I did program and experiment when I was in my teen years too (late 80s, early 90s) and still cringe when people count that as actual experience. Not that I'm constantly learning and growing at my full time job, but meetings, documentation, paperwork, etc are a large part of the job and count more than not having to bother with it.


Middle years if you go into something like management by the time you're 45, I guess. Though I guess you'd have to have planned out that career switch for over 10 years, in that case...


You don't need to plan it.

In most companies, if you don't actively fight against it (which many times also means being seen by others as not wanting a career at all), you get pushed into management, as most of us do.


But if they know that they are going into management within 10 years, they are at least expecting it. ;)


being wrong is ok! try not to hate it! try not to let the fear of it guide you in any way!


Not sure what all the hate is about. He presented some interesting anecdotes about the quirks of software engineering and then tied it all together with some concrete lessons.

Maybe 5K words without any sort of lists or clickbait titles is just too much for the average reader.


You figured it out. People didn't like the article because they're too stupid to read that many words. Or they were confused by the title.

It's not cause the tone is accusatory, or because it rambles, or compares mission critical life-or-death Apollo software with the unimportant web and native applications we write now. Yep, it's cause people are too dumb to look at that many words without a beer commercial dropped in the middle of it.

Saying "people who disagree with my opinions are too stupid to understand the subtlety of the point" is the rhetorical equivalent of denial in the Kubler-Ross sense of the word.


Or writing at a higher level than a 12- or 13-year-old :-)


Beyond the grammatical errors, misplaced punctuation and misspellings, and despite the relative novelty of the anecdotes, I thought the writing itself was dull and simplistic. Many paragraphs were just a string of blunt statements without any independent clauses.

According to a copy+paste of the text into http://sarahktyler.com/code/sample.php, the article as a whole reaches Flesch-Kincaid grade level 7.


So, it was written such that a 7th grader could read it. How is that a bad thing?

Edit: I pasted a sample of John D MacDonald in there and it showed a level of 6.61. Poor John, no wonder he was never a success. /S


Your comment isn't really addressing mine from the context in which it was given. My comment was in reply to someone implying criticism of the piece stems from its reading level being above that of a 12 or 13 year old, which isn't the case.

I'm simply proposing that the criticism is founded on actual flaws with the piece, rather than it being too complex to be understood by its detractors.


It was what your comment implied to me - that is, someone with a low level of English reading comprehension who needs short, simple sentence structure.


I got halfway through this article and then gave up without any idea of what the author learned. Wish the author would have edited this post better.


Good heavens, all of the incorrect uses of "it's" are painful. I may be a bit oversensitive but it is very jarring and distracting to me.


Why don't you read the last paragraph that summarizes the whole thing?


I stopped at about 5 words in with the oddly placed comma and capitalization.


You stop reading things as soon as you encounter an "oddly placed comma"?


Missing or incorrect error handling does that.


Like gcc, one misplaced comma and the whole thing might as well be rubbish.


-Wall -Wextra -Werror -pedantic


Don't forget to thank the Nazis, who taught you how to build rockets, and the Soviets, who showed you that space travel is possible.



