How to Build Good Software (csc.gov.sg)
1016 points by jingwen on Aug 19, 2019 | 239 comments

This struck a chord with me: "Software Is about Developing Knowledge More than Writing Code"

I've seen more issues caused by management passing tasks around between teams, without paying attention to knowledge and knowledge transfer, than by almost anything else.

What's amazing is that in over 18 years as a software engineer, I've seen this so many times. Teams will function well, then the institution tries to change. Often they will try to open up "innovation" by throwing money at R&D, basically trying to add bodies in order to grow. Then you have tons of teams, and communication becomes very challenging, so they grow some kind of "task management" layer: management that never understands who actually _knows_ something, just tracks how much "theoretical bandwidth" teams have and a wishlist of features to create. And then the crapware really starts flowing. And then I get bored and move on to the next place.

> "Software Is about Developing Knowledge More than Writing Code"

The company I work for uses Scrum. They consider the User Stories + the code to be everything you need. I struggle with this, but my manager says they don't want to get tied up doing documentation "because it goes out of date". Besides, they are being Agile, which "prefers working code over comprehensive documentation".

I am wondering what other companies do to capture this "distilled knowledge". The backend services I rely on are undocumented aside from some paltry Swagger that leaves much to be desired. The front end has no product-level "spec" you could use if you wanted to rebuild the thing from scratch. There isn't even a data dictionary, so everyone calls the same thing by different terms (in code and in conversation).

There are just user stories (thousands) and code.

Does anyone have any suggestions on how to fix this?

"prefers working code over comprehensive documentation" does not mean "don't do documentation".

Documentation is essential. How things work is an important thing to document. Ideally it should be in version control and be generated from the code, because then it's less likely to go out of date. It still has problems (What do you do when the code and the documentation disagree? Which is correct?), but they're not as severe as the problems that arise when there is no documentation at all.

What is less useful is having comprehensive documentation for those things that are yet to exist. Writing a few hundred pages of specification and handing it over to the dev team is waterfall, and it is _this_ that the Agile manifesto signatories were interested in making clear.

I'd fix it with strategic DDD - I'd develop at least a "ubiquitous language" (UL): I'd get others to work with me on establishing clear terminology and making sure it is used consistently, both in user stories and in the code base. That's table stakes.

I'd then event storm the contexts I'm working in and start to develop high level documentation.

Even at this point relationships between systems emerge, and you get to draw circles around things and name them (domains, contexts), and the UL gets a bit better. At this point you can start to think about describing some of your services using the UL and the language of domains and contexts.

By that point, people should start to click that this makes life easier - there is less confusion, and you're now all working together to get a shared understanding of the design; the point of DDD is that the design and the code match.

The first part (all 100+ pages of it) of the Millett and Tune book on DDD will pay you dividends here.

If that doesn't work, look around for somewhere else to work that understands that software development is often a team sport and is committed to making that happen.

    Documentation is essential. How things work is an important thing to 
    document. Ideally it should be in version control and be generated 
    from the code, because then it's less likely to go out of date. 
My solution to this is old and fairly unpopular, but I stand by it: anything in the codebase that's not obvious to a new maintainer should have a brief, explanatory code comment.

Generally, this falls into two categories.

1. Hacks/kludges to get around bugs in hardware, external services, or included libraries. These manifest as incomprehensible, ugly bits of code that are difficult to distinguish from code that is simply "sloppy" or uninformed. More importantly, they represent hard-won knowledge. It often takes many programmer-hours to discover that knowledge, and therefore many dollars. Why throw it away? (Tip: include the version of the dependency in the comment, e.g.:)

    # work around bug in libfoo 2.3, see blahblahblah.com/issues/libfoo/48987 for info
    # should go away once we can upgrade to libfoo 3.x
    reset_buffer if error_code == 42
...so that future programmers (including you) can more easily judge whether the kludge is still needed in the future.

2. Business logic. This too is difficult/impossible to discern from looking at code. Often, one's git commit history is sufficient. But there are any number of scenarios where version control history can become divorced from the code, or require a fair bit of git/hg/svn/whatever spelunking to access. And this of course becomes increasingly onerous as a module grows. If there are 200 lines of code in a given module, it is a significant time investment to go git spelunking for the origins of all 200 lines of code. Some concise internal documentation in the form of code comments can save an order of magnitude or two of effort.

    It still has problems (What do you do when the code and the 
    documentation disagree? Which is correct?), but they're not as 
    severe as the problems that arise when there is no documentation at all.
This is pretty easy to enforce at code review time, prior to merging.

In the first place, only a true maniac would intentionally update

    # no sales tax in Kerplakistan on Mondays
    return nil if country_code==56 and day_of_week==1
...without updating the associated comment. If they do neglect to update it, that's an easy catch at review time.

Count me in as another old timer who agrees. I once had a friend throw the "code should be self-documenting" line at me, and it still upsets me. That only really applies to code so simple it writes itself, with no gotchas hiding anywhere (and which useful project is like that?).

Leaning towards commenting "why", not "what", is another good general rule. "Self-documenting code" with sensible function and variable names and logical flow already covers the "what" fairly well.

While I still would add a comment about the why, your last bit of code probably should be written without magic constants.

    # Some countries have sales tax rules dependent on the day of the week
    return nil if country_code==KERPLAKISTAN and day_of_week==MONDAY

The exact comment here could probably be more specific (e.g. where do you find these rules), but it also most likely shouldn't repeat the code (and the code should make clear what it represents).

If you do the substitution as you suggest and then add a unit test, you have something ;-) Something along the lines of "describe countries with sales tax dependent upon the day of the week => Kerplakistan doesn't have sales tax on Mondays". Now it's self-documented and self-testing.
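A minimal sketch of that idea in Python -- the constants, the flat tax rate, and the function name are all invented for illustration, not taken from a real tax table:

```python
KERPLAKISTAN = 56  # hypothetical country code
MONDAY = 1

def sales_tax(amount, country_code, day_of_week):
    # Some countries suspend sales tax on certain days of the week
    if country_code == KERPLAKISTAN and day_of_week == MONDAY:
        return None
    return round(amount * 0.07, 2)  # placeholder flat rate

# "Kerplakistan doesn't have sales tax on Mondays"
assert sales_tax(100, KERPLAKISTAN, MONDAY) is None
assert sales_tax(100, KERPLAKISTAN, MONDAY + 1) == 7.0
```

The assertions at the bottom are exactly the "self-testing documentation": the business rule is stated once in the constants and once in a test that fails if someone silently removes the special case.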

But I agree with your statement that there should be a pointer to the business rules somewhere. Otherwise it's difficult to have a meeting with the business side and ask, "Has anything here changed?" I think that's the biggest thing people miss out on -- it's not that hard to find the thing in the code if things change. It's super hard to make sure you are on top of all the business requirement changes.

But don't do what one memorably awful project I had to maintain did - to use that example they would have done:

    country_code==FIFTY_FIVE and day_of_week==ONE

But what if the definition of 55 changes? You'll be glad to have your table of constants then.

The project also defined HTTP, COLON, SLASH, WWW and DOT so that you would have:

    string url = HTTP + COLON + SLASH + SLASH + WWW + DOT ...
I swear I'm not making this up....

Reminds me of http://pk.org/rutgers/notes/pikestyle.html

> "There is a famously bad comment style: ... Don't laugh now, wait until you see it in real life."

Sounds like a PHP codebase I'm currently working in. I shit you not, $LI = '<li>' is in the functions file, along with $LI_END.

It was a very enterprisey Java codebase from the bad old days of J2EE - it had somewhere over 30 layers of abstractions between the code in a JSP and a web service call.

[NB 30 isn't an exaggeration - I think the vast team who wrote it were paid by the abstraction or something].

Well, at least a typo would give a compile-time error, for some subset of typos.

But the trade-off in code readability was probably the cause of many other mistakes, so it probably ended up further behind.

But how else will your compiler tell you that you mistyped 55? /s

That's a very good point. Avoiding magic numbers would have removed the need for an explanatory comment in my example.

Comments are often a code smell. In lots of cases, better variable naming, breaking something out into a function, or named constants reduce the need for a code comment.

I disagree. Code ages and people move on. 2 years down the line some new guys are maintaining the code base. Some new guy is testing the system and notices that sales tax values seem to be "strange" for Kerplakistan on certain days of the week so they create a ticket for it. Then that goes through the typical pipeline. Another member of the team gets assigned the issue and looks into it. They come across the line:

  return nil if country_code==KERPLAKISTAN and day_of_week==MONDAY
Hmm.. Well that's strange. I don't have a background in Kerplakistan monetary policy so I don't know why we aren't assessing sales tax on Monday. Perhaps Kerplakistan is a special case. Is that being handled somewhere downstream? Then 1-2 hours later, after shuffling through source and eventually just Googling Kerplakistan sales taxes, you discover what someone found out 2 years ago when they wrote that line. Now you resolve the ticket and move on with your day but you just wasted a couple man-hours on a non-issue that could have been resolved instantly from a code comment.

Comments are as much for the next guy as they are for you.

Without a more concrete example, it's difficult to suggest what the better fix would be.

Code smell doesn't mean you should never do it, just that often there's a better way.

Here's a more real-world example.

I worked on an enterprisey line of business app that assigned sales leads to salespeople.

The algorithm to do this was a multi-step process that was (1) rather complex (2) constantly being tweaked (3) very successful (4) contained a number of weighting factors that were utterly arbitrary even to veterans of this app.

It was full of many `if country_code==KERPLAKISTAN && day_of_week==MONDAY`-style weighting factors. Each represented some hard-won experience. And when I say "hard-won" I mean "expensive" -- generating leads is an expensive business.

We had a strong culture of informative commit messages, but this file had hundreds if not thousands of commits over the years.

It was the kind of code that resisted serious refactoring or a more streamlined design because it was a recipient of frequent change requests.

A few human-readable comments here and there went a loooong way toward taming the insanity and allowing that module to be worked on by developers besides the original author.

Knowing the why for many of these rules made it much easier to work with, and also allowed developers to be educated about the business itself.

I agree. The most obvious place to find an explanation of a piece of code is right beside that code, not hidden away in some git commit message or nested away in Confluence.

>anything in the codebase that's not obvious to a new maintainer should have a brief, explanatory code comment

I'm not at all convinced that this is unpopular, but I think it's a whole lot harder than you're letting on. Unless you have a constant stream of new people coming in and you can convince them to give honest feedback, you don't actually know what's not obvious.

Why not:

    return nil if country_code==KERPLAKISTAN and day_of_week==MONDAY
Then you don't need comments and the sync problem goes away?

Except this doesn't retain the crucial information: why? It looks arbitrary. The thought that "some countries have sales tax rules dependent on the day of the week" may or may not be obvious from the context. At the very least, the comment pins a point in the space of all possible reasons for that piece of code - with it, you know it's related to sales tax and week days, and isn't e.g. a workaround for the bug with NaNs in tax rates that you saw on the issue tracker last week.

This is admittedly a trivial example, but ideally you want developers who understand why we're doing this.

Is this a quick thing somebody hacked in for a special, one-off, tax-free month in Kerplakistan as the country celebrates the birth of a princess?

Is this a permanent thing? Will there eventually be more weirdo tax rules for this country? Will there be others for other countries?

Knowing the "why" would help a developer understand the business, and reason about how best to work with this bit of code... should we just leave this ugly little special case in place? Should we have a more robust, extracted tax code module, etc.?

Commit messages help to accomplish this too, and can offer richer context than inline comments. Each has their place. Sifting through hundreds of commit messages in a frequently-updated module is not a great way to learn about the current state of the module, as the majority of those commit messages may well be utterly stale.

Ultimately the cost of having some concise inline comments is rather low, and the potential payoff is very large.

Remember that the longer term goal (besides the success of the business) of software is to have your developers gain institutional knowledge so that they can make more informed engineering decisions in the future.

Yup. This + some diagrams for models and infrastructure is plenty.

> Documentation is essential. How things work is an important thing to document.

I agree with this 100%. However, to be useful it needs to hit the right level of granularity. For most projects, a short (<10 pages) description of goals, design principles, architecture, and an overview of interfaces is sufficient.

It is best when this exists as a standalone document which is a required reading for any new developer. After this they can look at module descriptions, function docs, code, etc. and understand how to make sense of it and how to add their code without breaking general principles of the project.

> Ideally it should be in version control and be generated from the code, because then it's less likely to go out of date.

With this, I have some beef. In my experience the best documentation is the kind that complements the code. Usually this means a short description by a human that explains what a chunk of code does, along with its assumptions or limitations (e.g., "tested only for points A and B in the troposphere"), and IME the most useful information is not derivable automatically. Auto-generated docs are very useful, but cannot replace clear explanations written by a human. My 2c.

I think there's a lot of ambiguity in the phrase "generated from the code". When I hear it, I think of docs generated from doc comments embedded in the code, which are clean explanations written by a human. They just have the advantage of being right next to the code, so they're a lot more likely to be updated when the code changes than an entirely external document.

"Documentation" that is nothing more than the interface definitions in HTML form is worse than useless. I can get all of that from just reading the code.

I think there is room for this to be two documents if a project is large enough - one which resides inside source control which explains the design and how it works, and one which is external (sometimes managed by corporate level document control) which explains the "so what", including top level requirements and so forth.

These could be just one document if the project is small enough.

> Ideally it should be in version control and be generated from the code, because then it's less likely to go out of date

Interestingly, this has been a big point of discussion in the Dota 2 playerbase. Dota 2 is one of the most complex games ever created and it rapidly changes on the order of days or weeks. At one point, the in-game descriptions of spells were months or years out of date because they were being updated manually. After much hue and cry from the community, the developers finally made the tooltips get generated from the same code that determined the spells' effects. Things are a bit better now.

There is still quite a bit of a way to go, though, in terms of generating documentation for all the other mechanics in the game, which are crucial for gaining competency but are only available through third-party community efforts (often via people reading the game's code to understand subtleties), instead of being available inside the game.

It's surprising that wasn't being done in the first place. I used the Warcraft 3 map editor, and it was simple to include references to attribute values in an object's description. Don't know why the DotA2 team didn't port that feature over when moving to the new engine.

This is a good example of a general rule of thumb I learned: if you need to do something once or twice, do it by hand, but if you do it three or more times, make it a function! Looks like Dota 2 updated their spells a few more than 3 times ;)

I use this rule for introducing abstraction: don't do it unless you have at least 3 different use cases you're abstracting, and the test suite doesn't count.

> Ideally it should be in version control and be generated from the code, because then it's less likely to go out of date

Not always - when you want to document the requirements (in whatever format), having them be separate from the code is often a plus. The code might implement the requirements incorrectly, so being able to recognise that is important.

I find this very similar to writing tests that are separate from your implementation. In fact, Cucumber/BDD tests try to make product requirements executable to validate the software has been written correctly to meet the requirements.
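For instance, a Cucumber feature file states the requirement in plain language and is also executable against step definitions. The scenario below is invented to match the Kerplakistan example upthread:

```gherkin
Feature: Sales tax calculation
  Scenario: Kerplakistan does not charge sales tax on Mondays
    Given a customer in Kerplakistan
    When they place an order on a Monday
    Then no sales tax is applied to the order
```

Because the suite fails when behaviour drifts, this is one of the few forms of requirements documentation that cannot silently go stale.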

I never understood why generated API docs count as "documentation". That is just source: trivial technical info which is easy to find in the source anyway.

I never got documentation about the thought processes, the iterations, the design meeting, the considerations, etc. Which is way, way more important to understanding a system in context than knowing "convertLinear" takes 2 unsigned ints.

> Writing a few hundred pages of specification and handing it over to the dev team is waterfall, and it is _this_ that the Agile manifesto signatories were interested in making clear

That doesn't sound too bad from a dev point of view, better than the opposite - half arsed specifications with no thought given to the important details. Though I can imagine a lot depends on what exactly you are trying to build.

Thanks for the thoughtful response, this is helpful.

> Ideally it should be in version control and be generated from the code ..

May I ask if you have suggestions for tooling to capture the high-level documentation? We use javadoc a little, but it seems best for lower-level reference. Also, for diagrams like sequence diagrams and/or state machines, how do you capture those?


Use Graphviz (the dot tool, for example) for state machines. It is a text format where you list state machine transitions, and it generates a visual representation.

Or better yet: generate your state machines from the same format you use to generate the visual representation.
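As a sketch, a dot file really is just a list of transitions; the states and labels below are made up:

```dot
// Hypothetical order-processing state machine: one line per transition
digraph order_states {
    rankdir=LR;           // lay the diagram out left to right
    New     -> Paid     [label="payment received"];
    Paid    -> Shipped  [label="dispatched"];
    Shipped -> Done     [label="delivered"];
    Paid    -> Refunded [label="refund issued"];
}
```

`dot -Tsvg states.dot -o states.svg` renders it as a diagram, and because it's plain text in version control, the same file can be parsed to generate a transition table in code, so diagram and implementation can't drift apart.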

Don't be afraid of building a product specification, and doing it in Markdown and auto-generating a mini-website out of it.

Just build a product specification for how the product works (which is useful documentation), not how the product will work (which is waterfall).

We're experimenting with this a little, and I'm getting into document-driven development a little: if the product spec is in markdown, why not create a pull request on it as part of your story/project planning that shows the changes that would happen as a consequence of your work. Once the story is done, you can merge the pull request, even. We're not quite there with this yet, but I'm optimistic.

Putting design assets into your repo is also acceptable, and also paying time and attention to commit messages can be really, really helpful. I love this talk, for example: https://brightonruby.com/2018/a-branch-in-time-tekin-suleyma...

How does Millett and Tune's DDD book compare with Eric Evans'? Are they both worth reading?

Many years ago, I worked for a company where we were writing complex distributed telecom software and they had a wiki for documenting the system. I spent a few weeks meticulously documenting everything I did and anything that was touched by it (including defining all of the industry jargon and such). It was a great way to get a quick understanding of any part of the system, but I was the only person keeping it up to date so after a while I ran out of steam and stopped doing it. :(

I've come across the "documentation becomes quickly outdated" argument a lot, but nobody has ever been able to suggest a good alternative. The best I've found is to write design logs for proposed changes (which other team members/stakeholders can then review/comment on before implementation) and decision logs for any decisions that are made. This way, their going out of date is expected and OK, as they become a history of ideas and decisions with their context and outcomes laid out. You don't necessarily have a snapshot of "the system right now", but you have a log of all the ideas and decisions that led up to the current system.

> I've come across the "documentation becomes quickly outdated" argument a lot

Me too, but I still feel that saying "documentation quickly becomes outdated" and refusing to write any is not that different from saying "software quickly becomes full of bugs" and refusing to write unit tests. Yes, if you believe that something is doomed, and therefore you refuse to even try, it becomes a self-fulfilling prophecy.

Yes, documentation quickly becomes outdated, if no one updates it. Duh. If a person creates/modifies a part of code, they should also create/modify the corresponding documentation accordingly. (And the person reviewing the code should also review the docs.) If you don't do it, then yes, obviously, the documentation becomes outdated. Did you expect it to update magically by itself?

If you believe that documentation is useless in principle, go ahead and don't write it. Then you won't have to maintain it. Also, make sure to include memory tests in your interview process. If you believe that documentation is useful, write it and maintain it. But if you have documentation that you never update, you get the worst of both worlds.

Yes, thank you. A mindset which thinks that documentation is wasted due to a need to constantly update, is cousin to the mindset which thinks that software, once written, is a purchased asset which needs no further attention nor maintenance.

I believe it is equally important to determine the level of abstraction for the documentation, such that updating it is an infrequent task (essentially, every line of code change should not trigger a document change). It is easier said than done, but that's the best compromise I have arrived at.

At the very least, the document should capture the high-level (again, a relative term) design, possibly an architecture diagram of the major interacting functional units. The success measure should be a newbie's ability to build a mental model of the system by looking at this document.

That design document would be a start, and most likely not "quickly outdated".

My personal beef with the agile camp is precisely this: when they let go of documentation, they don't do the design doc as well, and all that remains of the system is thousands of incoherent stories and huge amount of code.

If you document via design logs and decision logs, then you don't need to document every line of code, because it's already done in the design log. If something unforeseen arises, you have a discussion, note the outcome in a decision log, and move on. That is, the documentation logs proposed decisions and outcome decisions, which should contain enough context that you can read them in isolation. Then you don't need to worry about documenting as you're coding, or documenting the code.

If you diverge too far from the original design, you should probably have a rationale as to why, that gets reviewed by others: another design log and decision log.

These documents don't need to be long either, just a couple of sentences for each of context, what you propose, impact on other teams or systems, decision made may be enough for smaller things (so a paragraph or two) and for larger changes, you probably need the detail for everyone to really understand what, why and its impact. The alternative is to do these things blind.

> when they let go of documentation, they don't do the design doc as well, and all that remains of the system is thousands of incoherent stories and huge amount of code.

Absolutely agreed.

We create "blueprints" that have system design and goals on new projects and how they fit into the ecosystem. We are supposed to update them after they go into production as things change, but seldom do. Still, going back and reviewing a design document helps.

Yes, I agree that treating most documentation as project history is the practical way to go.

Another thing that helps is to write good commit messages giving the business context for a change. When code is reviewed, the commit messages should be reviewed as well. If they don't agree then that's a problem.
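For example, a commit message carrying business context might look like this (the ticket number and details are invented for illustration):

```text
Skip sales tax for Kerplakistan on Mondays

Kerplakistan suspends sales tax collection on Mondays (see
hypothetical ticket FIN-1234). Returning nil here makes the
invoice renderer omit the tax line entirely.
```

The subject line says what changed; the body says why, so a reviewer can check that the code, the comment, and the stated business reason all agree.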

> Does anyone have any suggestions on how to fix this?

Not to be rude, but yes: switch employers. This is not something you can fix on the employee level, it is a management issue.

Yes. I've never seen unenlightened management somehow become enlightened.

Sometimes it's even more complex than a management issue.

It might be a team culture or company culture issue, and even radical changes in the management are not enough to fix it.

I've been on a couple of projects like that, and in my experience the real reason behind this logic is that the manager and product owner can this way make themselves indispensable: they can't ever get fired without practically killing the project (until it becomes completely unmaintainable and slowly dies off).

It could be worse, the product owner could be killing the project by their presence instead.

And then they leave

This is not necessarily in opposition to what TFA states: that programming is about knowledge engineering. It just happens that knowledge flows through user stories and code. This might work well or not so well for your team, and there's nothing to fix here. Managers should be aware, though, that this means you no longer have a project, but a line organization put up indefinitely, where, when the team is disbanded, no "product" as such remains; nor would it be possible to hand over the "project" to an offshore team.

My experience is that you want to move fast by reusing good software. This means that well-understood components should be well-engineered and documented components, while poorly understood problems might defer documentation to the future.

Agility requires a stable foundation. And a lot of places forget that.

Isn't this the old Unix tools idea, where programs are tools that ideally do one thing well, and with good inter-program communications developers can combine basic tools to build more complex programs?

This was also the original idea of Object Oriented Programming before people thought the real world was a good analogy.

They consider the User Stories + the code to be everything you need

Then they are Doing It Wrong™. Note that there's nothing in the Agile Manifesto OR the Scrum Guide that says "don't write documentation." The closest you get is in the AM where it says "We have come to value ... Working software over comprehensive documentation". But note that immediately after that it says "That is, while there is value in the items on the right, we value the items on the left more." IOW, the Agile Manifesto explicitly endorses the value of documentation!

Remember this the next time somebody tries to tell you that "we don't do documentation because we're Agile." Anybody running that line is Full Of Shit™.

I've worked in a company where, as a PM, I enforced writing a Technical Document for every project we did, along with the PID and the Functional Document (design and interactions). That Technical Document described the essentials, without going deep into every single class written. It was intended for developers joining the team to update an old project. With that, it was a useful document that barely needed updating.

> Does anyone have any suggestions on how to fix this?

Have a product wiki (e.g. MediaWiki).

Have documentation in source code that compiles to HTML code, which can be linked to/from the product wiki (e.g. JavaDoc in Java, Natural Docs for languages that do not directly support compilable documentation). Make building and publishing this documentation a part of the continuous integration.

When you have this, make it a part of code reviews to ask "where is this documented?" for those kinds of things that are easy to remember today, but no one will remember it a few months later. In other words, make it a "code+doc review".

(Don't be dogmatic about whether the information should go into code documentation, unit test documentation, or the wiki. Use common sense. If it relates only to one method, it probably goes in the code; if it relates to a use case, probably in the unit test that verifies that use case; if it is a general topic with an impact on many parts of the program, it probably deserves a separate wiki page.)

> Have documentation in source code that compiles to HTML code, which can be linked to/from the product wiki (e.g. JavaDoc in Java, Natural Docs for languages that do not directly support compilable documentation). Make building and publishing this documentation a part of the continuous integration.

Are you referring to something like Knuth's Literate Programming (en.m.wikipedia.org/wiki/Literate_programming)? As a non-professional who's learning to develop on the side, something that follows more of a natural language approach appeals to me, as sometimes I have a few months between working on my project, and comments on my source code help me not to forget why I do certain things in the code. However, I'm not doing Literate Programming, just python with comments.

No. I think one should organize the code as the code needs to be organized, and the documentation can either follow along (if it describes parts of code contained within the same file) or be placed separately (if it relates to multiple files), where "separately" could still be a package-level JavaDoc, or an external wiki.

I have never tried Literate Programming, so perhaps I am out of my depth here, but I strongly suspect it only works after one has already mastered the usual ways of programming -- that you do not have to structure the code qua code, because you can already do it in your head. But it's hard to imagine what one has never done before.

For example, if you never tried programming the usual way, how do you know when and why to put "Header files to include" in your Literate code? It's only because you can imagine the constructed code, and know where the header files go in the result, that you know where to place them in the Literate version. Otherwise, it would look quite arbitrary.

I don't know about documentation in Python, but JavaDoc (and Natural Docs) work like this: you attach comments to classes and methods, or packages (and files), alongside the code, so you can read and write them while you are looking at the code. But then you run a "documentation compiler" that extracts the comments and builds a separate HTML website out of them, where you can browse and read about what the individual classes and methods do. The idea is to make this a part of continuous integration, so that whenever you update the source code and the related comment, the HTML website also gets updated.

Java supports this out of the box. When you install the Java compiler, you also install the Java documentation compiler. When you read the official documentation to the standard Java classes, those were made using exactly the same tools you are encouraged to use.

I don't know whether Python has something like this. If yes, go ahead and use it. If not, look at Natural Docs -- it is a system to provide this functionality to languages that do not support it out of the box. Just try it: document a part of your existing project, compile the docs, and see whether you can imagine any value in reading that.
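For what it's worth, Python does have this built in: docstrings are attached to modules, classes, and functions, and the stdlib pydoc tool (or third-party tools like Sphinx) extracts them into browsable HTML. A minimal sketch, with a made-up slugify function as the example:

```python
"""Tiny example module, to show docstring extraction."""
import re

def slugify(title):
    """Convert a post title into a URL-friendly slug.

    Lowercases the title and replaces runs of non-alphanumeric
    characters with single hyphens: "Hello, World!" -> "hello-world".
    """
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Running `python -m pydoc -w yourmodule` extracts these docstrings
# into yourmodule.html, much as javadoc does for Java comments; hooking
# that command into CI keeps the published pages in sync with the code.
```

The same text also shows up in a REPL via help(slugify), so the documentation lives right next to the code it describes.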

prefers working code over comprehensive documentation

This is funny because “working code” might just mean that it doesn’t crash. But does it actually do what it’s supposed to do or does it reliably deliver the wrong results? How would you know without documentation?

The software in the Therac-25 didn't crash; it quite reliably killed people with its "working code".

Using that logic, what would be a "working car"? One whose wheels don't fall off?

So I think "working code/application/program" is when it does what it is supposed to do. Including not crashing.

> So I think "working code/application/program" is when it does what it is supposed to do.

And the point of the comment you're replying to is to ask what is "what it is supposed to do". How do you know what the answer to that question is, without documentation or a specification? And if you try to rely on just verbal communication, in a group of people probably larger than about 1, they're going to have different ideas about what the software is supposed to do.

Some of the most challenging problems I've encountered have been looking at code that does something. What it does is clear enough from the code. But why it does it, or should it do that, that is much harder to answer, particularly if the person who wrote it has left the company or it's been >6 months and they just don't remember.

I can't picture a single scenario in which "doesn't crash" is the sole criterion by which code is evaluated as "working".

Traceability is where I'd start. First, ensure your stories are being linked to your code / pull requests / issues. That way you can figure out why something was changed in the future. This is key to determining whether you can change something down the road. Stories directly traceable to code can be powerful for capturing knowledge.

I might also recommend creating user stories for non-feature development like infrastructure and tech debt paydown (if you don't already). That way, all of the value flow is captured in one place and you're not just leading managers to see new features only.

Second, in addition to the user stories I'd advocate for strong background information about the context of the story as well as detailed acceptance criteria if you don't have that in place already.

In Scrum, you don’t write code based off User Stories. The scrum team agrees to a set of Stories for the sprint, and then the scrum team breaks those stories into a set of Tasks which are the actual work that must be done. The User Stories are just something the product owner uses to show stakeholders that the project was successful.

1) In the Hermes Conrad sense, this is technically correct.

2) In my experience, this basically never happens.

Your comment encapsulates a lot of what I have come to call "Scrumbutt." It's Scrum, but. And while I have no idea if it's intended on your part, the sentiment is a fantastic way for a Scrum consultant--only some shade thrown; I've been a "DevOps consultant" before, after all--to come in and pull from deep in their Scrumbutt something to the effect of "you're doing it wrong, Scrum has not failed, you have failed Scrum."

Within epsilon of nobody does Scrum "as prescribed"--because the amount of responsibility that must be undertaken at all levels is virtually impossible to get full buy-in on--and as such the boil on our collective behind that it is persists because criticism is immediately bedeviled by Scotsmen of unknown provenance.

You’re projecting “shade thrown”.

People might be interested in what Scrum is. I know I am. That’s why I pointed out the error. It was a shock to me to learn I wasn’t doing anything close.

Readers can do with the info what they want.

I’m not sure if I can say the same of your comment. You seem to be trying to make me feel bad for commenting? Or accusing me of hawking pointless info for consulting fees? I really can’t tell.

> They consider the User Stories + the code to be everything you need

If you have sufficiently detailed user stories, they can be.

The user story is a promise to have a conversation. I think that is usually well understood. From there I think you can fall into two camps: that conversation should result in a Jira/whatever ticket with all the requisite documentation for an agile team versus that conversation IS the essential information required to properly build the expected valuable working software.

Back to the question - what do you do about poor knowledge transfer in a project? I think a moderate de-emphasis on thinking of the user story text and the additional info like acceptance criteria etc. as self-sufficient documentation and adding more emphasis on that close relationship between developer, user, and maybe a tester, can help fill in big knowledge gaps.

When does a "sufficiently detailed user story" become "documentation"?

When it's only part of the ticket? Some teams call everything a "user story" when it actually starts with the story format and then adds a whole bunch of detailed acceptance criteria, background for the story, etc.

Change managers/leads

Get a new manager

One of my professors condensed that point into something I thought was clever: "Software engineering is the distilling of ambiguity".

I think about that whenever I get frustrated about a vague spec or lack of details. It's the job!

"Software engineering is the distilling of ambiguity"

I hope he meant separating out the ambiguity rather than concentrating it. :)

I can see it working both ways - in many cases, I'd like my ambiguity distilled down to one specific point, while the rest of the project deals with lower specific ambiguity (where ambiguity in my mind is equal to "is this thing possible").

Apparently, Peter Naur (the N in BNF) wrote this up nicely back in 1985 in "Programming as Theory Building": http://pages.cs.wisc.edu/~remzi/Naur.pdf

You should consider making this it's own article.

"...the designer's job is not to pass along "the design" but to pass along "the theories" driving the design. Knowledge of the theory is tacit in owning..."

Well said. Thank you!

I would prefer people read the original paper rather than any second-hand explanation of it. The paper is very readable and understandable, and will probably stay relevant for as long as human beings write code.

When mcnichol said "make this its own article", they probably meant to submit the link to HN as its own submission, not to blog about it.

Yeah, that's what was meant. Totally agreed with the statement above this one, it reads well.

I only have one year of formal CS education but that paper is one of my favorites on the topic. Naur is also the founder of CS at the University of Copenhagen, the place where I studied :)

I was about to suggest the same thing. This paper should be mandatory reading for anyone that is professionally involved in a software development project.

+1. Yes, things really changed for me once I started to ask why we implement things. I tried to understand the manager/customer/stakeholder: what is their domain? What kind of issue do they want to solve? What is the business case we are working on?

I know as a software developer you don't want to do that. More fun refactoring code than dealing with management. More fun writing that piece of SQL than sitting in a meeting. Easier to whine about missing specifications than to understand the big picture.

Once I stepped back from coding and looked at the software from a bird's eye view, I actually had a much easier time programming features than before. More knowledge, less writing code.

Good point: it's far, far better to be proactive, than wait for management to "recognize you".

Being a part of the early decision making processes has been a challenge for me as a remote employee. In larger companies, there are lots of meetings, discussions, and decisions that happen before the engineering staff is brought in. But, by basically being nice, asking questions, and really getting involved, I've been able to "weasel" my way into some of these discussions.

Once you get involved early on, there's so much more clarity around the one liner "requests" that often get farmed out.

Haha yeah the number of times I've gotten "Hey someone estimated a task would take 40 hrs on a project you've never seen with libraries you've never touched. mind knocking that out this week?" is astounding.

The first place I worked as a software dev had an owner that would explain everything about the business and the problems to me in very good detail. He would just stop by my desk whenever he thought of something he thought might be good for me to know. Eventually understanding the business became just as interesting as the coding. These days, I hate getting a task without knowing the business side of things or not being able to discuss it directly with the person that does.

> "Then you have tons of teams, and communication becomes very challenging"

communicationChannels = nrOfTeams * (nrOfTeams - 1) / 2

More people should read The Mythical Man-month
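That formula counts one channel per unordered pair of teams; a quick sketch shows how fast it grows:

```python
def communication_channels(n_teams):
    # One channel per unordered pair of teams: n * (n - 1) / 2
    return n_teams * (n_teams - 1) // 2

# 2 teams share 1 channel, but 10 teams need 45 and 20 need 190:
# coordination overhead grows quadratically as you add teams, which
# is exactly Brooks's point in The Mythical Man-Month.
for n in (2, 5, 10, 20):
    print(n, communication_channels(n))
```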

> This struck a chord with me: "Software Is about Developing Knowledge More than Writing Code"

Managers are very unhappy when I tell them of all the knowledge I've developed.

I think this is also known as the fallacy of the fungible engineer / myth of the interchangeable programmer.

So knowledge about:

- the problem you are trying to solve

- how you could solve it

- how you actually did solve it

- which solutions come with which flaws and merits

Some good tidbits from the government perspective on software development:

Beware of bureaucratic goals masquerading as problem statements. “Drivers feel frustrated when dealing with parking coupons” is a problem. “We need to build an app for drivers as part of our Ministry Family Digitisation Plans” is not. “Users are annoyed at how hard it is to find information on government websites” is a problem. “As part of the Digital Government Blueprint, we need to rebuild our websites to conform to the new design service standards” is not. If our end goal is to make citizens’ lives better, we need to explicitly acknowledge the things that are making their lives worse.

This also very much reads like something from Singapore.

A quasi-Orwellian dystopia Singapore may be, but their government is effective.

Indeed, I couldn't live there, and I don't think it's right that people don't have the choice not to live there, but for those who have chosen to be there and agree with the state, I'm sure it satisfies them.

I like being partially or fully nude in the home, occasionally chewing gum, and having the right to freely criticize or endorse ideology on its own merits; even if it sometimes sucks to see dirty black spots on the sidewalk, or to hear people making weak arguments just to upset each other.

Certainly a weird cosmopolitan fascist (u|dys)topia.

Only for certain strained definitions of "effective".

And, if you are on the wrong side, it is very "effective" at ruining your life.

Most of us would take a bit less "effective" in order to avoid that, thanks.

I don't understand the point of replying like this. Clearly we agree that the Singaporean government is very good at getting things done, and we agree that the things it wants to get done are horrible. Why are you speaking as if our opinions differ? Why manufacture conflict where none exists? Is calling something a quasi-Orwellian dystopia now too subtle an expression of disapproval?

> Clearly we agree that the Singaporean government is very good at getting things done, and we agree that the things it wants to get done are horrible.

Singaporean here. The government's mainly effective for tasks that are on a happy path. If your particular case falls through the cracks, it often takes phone calls, printing, postage, and weeks or months of waiting to get stuff done.

(Personal experience trying to get business stuff done not as a Private Limited company.)

> If your particular case falls through the cracks, it often takes phone calls, printing, postage, and weeks or months of waiting to get stuff done.

That sounds like the happy path for dealing with the Canadian government. Well, except the months part.

sounds like lots of govt. and corporate interaction in the US.

The "but" negates any disapproving effect it may have had, because structurally the latter part acts as a justification for tolerance.

"The tool is squeaky but it gets the job done" - you wouldn't expect the speaker to do anything about the squeaks. Squeaking is tolerable.

"The tool does the job but it's squeaky" - you would expect the speaker to do something about the squeaks. Doing the job isn't good enough.

Your comment is most easily read as not disapproving of authoritarian government when it is effective.

Comments like this are part of the reason why people like Sam Altman stopped posting here. Can you just give the poster the benefit of the doubt that they just admire the efficiency of the Singaporean government, not that they're endorsing authoritarianism as long as it's effective?

Never acknowledge any quality of the Enemy. The Enemy is Bad, therefore it is also weak, stupid, lazy, cowardly… Because the risk of being perceived as praising the Enemy always trumps the consequences of underestimating it.

Not the best example of crowd wisdom.

>Can you just give the poster the benefit of the doubt

Perhaps you should do the same for barrkel? I read your parent as a simple explanation to solveit why their comment may have been misconstrued by bsder -- a question solveit directly asked.

You're probably right. I just get frustrated when people insist on reading value judgments in literally everything. Sometimes the curtains are just blue, you know?

I replied that way because, if we moved this to a tech subject, people here would be horrified at your definition of "effective".

If someone produced an insulin pump that you implanted, worked perfectly for life, but killed 1 person in 1,000 randomly, people would be screaming for the head of the CEO of that company rather than calling it "effective".

But that wouldn't be effective because it's random, not because it kills people. A better analogy is if the pump worked perfectly, but killed anyone who the CEO disliked. That would be effective, yet monstrous.

Effective just means it achieves the intended outcome, it's not a value judgment on the goodness of that intention.

>Only for certain strained definitions of "effective".

Have you compared it to others? Strained is the last word I'd use to qualify how effective it is...

I thought "what a bold title, if someone's figured it out we can just close HN" and upon reading, hey it's not far off.

The following is a wonderful point I have hardly ever heard said directly:

"The main value in software is not the code produced, but the knowledge accumulated by the people who produced it."

It's not that they have the knowledge, but that the knowledge is now encoded in software and available to anyone else -- software shares knowledge without the users having to learn it (for example, which five systems need to have their names entered in order to pay a parking fine).

This is not what I have experienced, at least not in real world software with mediocre documentation. Usually the software encodes only the "how", but the important parts are "why" and what aspects of the original problem led to that design. Good teams learn how to transfer and adapt in the next project. Starting with the software only, lots of that can only be reverse engineered.

That was something I picked up from one of the more experienced software devs when I started - in his code there were far fewer comments saying "This section does x and y", than there were comments saying "We are doing x and y because ..."

weird, I've heard it said frequently for decades in various forms,

"value your knowledge workers"

"your employees are your most valuable asset"

Some companies don't treat employees well, and some employees at good companies feel they are not treated well enough

If the above quotes do not strike a chord with you, you might just be a software engineer who thinks you're more important than non-SEs.

The difference, for me, is that neither of those quotes explain why you should value knowledge workers or why employees are valuable (maybe hiring is expensive, maybe turnover reduces morale, etc), nor do they suggest the mechanism that creates this value.

I'm sure another author has put the same sentiment out there before, but it's not every day I see such a nice phrasing of it.

> neither of those quotes explain why you should value knowledge

I mean, the point of short quotes is to be memorable and get future listeners to hunt for the reason behind them. "The sun will rise tomorrow" may also be meaningless for some people on its own.

Nothing wrong with elaborating on this subject again via a blog post, I was just pointing out to the commenter who's never heard this expressed before that it has a long history, that's all.

Employees are not assets because the company doesn't own them.

True, but they produce value which goes on to become an asset, in this case the knowledge captured and organised into software.

> Reusing software lets you build good things quickly

It also introduces unknown amounts of debt and increases the likelihood that you'll end up with intractable performance/quality/velocity problems that can only be solved by re-writing large portions of your codebase.

This can be a dangerous cultural value when it's not presented with caution, which it isn't here. I think it's best to present it alongside Joel Spolsky's classic advice: "If it’s a core business function — do it yourself, no matter what".


Great article. I liked this quote:


The best advice I can offer:

If it’s a core business function — do it yourself, no matter what.

Pick your core business competencies and goals, and do those in house. If you’re a software company, writing excellent code is how you’re going to succeed. Go ahead and outsource the company cafeteria and the CD-ROM duplication. If you’re a pharmaceutical company, write software for drug research, but don’t write your own accounting package. If you’re a web accounting service, write your own accounting package, but don’t try to create your own magazine ads. If you have customers, never outsource customer service.


This all rings true in my experience. You should write the software that's critical to your core business competency yourself, because the maintenance cost is worth paying if you can achieve better software. But if it's not a core competency and your business isn't directly going to benefit from having best in class vs good enough, then it may be worth outsourcing.

I agree completely. Dependencies solve problems but not for free - bugs, security issues, versioning headaches, performance problems, compatibility gotchas, churn, &c. I live by the following:

(1) If a problem can be exhaustively specified in a formally well-defined way (mathematical logic), it will be wise to adopt a mature implementation - if it exists.

(2) If a problem can't be so specified, all implementations will be incomplete and will contain trade-offs. I have to address these problems myself to ensure that limits and trade-offs suit as well as possible what the business needs. If I can.

So, (1) says I shouldn't parse my own JSON. (2) says I should avoid the vast majority of what shows up in other people's dependency trees.
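A tiny illustration of rule (1), leaning on the stdlib parser rather than rolling one's own:

```python
import json

# JSON is exhaustively specified (RFC 8259), so a mature parser covers
# the edge cases a hand-rolled one tends to miss:
text = '{"name": "Caf\\u00e9", "tags": ["a", "b"], "count": 1e3}'
data = json.loads(text)
assert data["name"] == "Café"    # unicode escapes decoded
assert data["count"] == 1000.0   # exponent notation handled
```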

Yeah I think the OP article is great but doesn't pay enough respect to the costs of dependencies.

I found this to be an incredibly accessible and easy to read guide for software development. It’s a very short read - just a few minutes - but it’s full of practical examples and written in a way that speaks to non-engineers (like bureaucrats). If you are a non-technical person handling software stuff, this article should definitely be high on the reading list.

The author seems like an unknown in the software development world, but they’re one of the managers for Singapore’s fairly successful digital government initiative. So it does feel safe to say they have some experience.

Li Hongyi is the son of Singapore PM Lee Hsien Loong, as well as a deputy director in GovTech Singapore (the Government Technology Agency). (He's also an MIT CS grad, and a past Googler.)

I suppose he wrote this for other people in the Singapore civil service.

Nice post! Agreed on keeping the initial stuff simple as possible.

In Python, I typically follow a pattern of keeping stuff in the __name__ == '__main__' block and running it directly, then splitting it into functions with basic args/kwargs, and finally classes. I divide into functions based on testability, btw. Which is another win, since functional tests are great to assert against and to cover/fuzz with pytest.mark.parametrize [1]

If the content of this post interested you: Code Complete: A Practical Handbook of Software Construction by Steve McConnell would make good further reading.

Aside: If the domain .gov.sg caught your eye: https://en.wikipedia.org/wiki/Civil_Service_College_Singapor...

[1] https://docs.pytest.org/en/latest/parametrize.html
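As a sketch of the parametrize pattern from [1] (the function and cases here are made up):

```python
import pytest

def normalize_phone(raw):
    """Keep only the digits of a phone number string."""
    return "".join(ch for ch in raw if ch.isdigit())

# One test body, many cases: parametrize turns each tuple into its own
# test, which makes covering/fuzzing small pure functions cheap.
@pytest.mark.parametrize("raw,expected", [
    ("(555) 123-4567", "5551234567"),
    ("555.123.4567", "5551234567"),
    ("+1 555 123 4567", "15551234567"),
])
def test_normalize_phone(raw, expected):
    assert normalize_phone(raw) == expected
```

Running `pytest -v` reports each tuple as a separate pass/fail, so a new edge case is one more line in the list rather than a new test function.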

I prefer putting the main code into a "main" function (called from the __name__ == '__main__' block) fairly early, since otherwise the functions you extract might accidentally keep relying on global variables.

Good point

I like to do it early also, to make sure that the new script, if imported by a sibling module, is inert.

An example would be a scripts/ folder and sharing a few functions between scripts w/o duplicating.

In some cases I don't have a choice. Initialization of a flask app/ORM stuff/etc has to be done in the correct order.

I think the general rule of thumb I follow is: avoid keeping code that would "run" at the root level. Keeping it in blocks (for me, normally functions) has the added effect of labeling what it does.

What I don't do: I don't introduce classes until very late. In hindsight, every time I tried to introduce a complicated object model, I feel I tended to overengineer / encounter YAGNI
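A minimal sketch of the progression described above, with illustrative names (an inert module when imported, functions sized for testability, and a main() guard):

```python
import sys

def parse_args(argv):
    # Argument handling lives in a function, not at module level, so
    # importing this file from a sibling script has no side effects.
    return argv[1] if len(argv) > 1 else "world"

def greet(name):
    return f"Hello, {name}!"

def main(argv=None):
    # All "script" behaviour goes through here; extracted helpers like
    # greet() can't quietly keep relying on module-level globals.
    print(greet(parse_args(argv if argv is not None else sys.argv)))

if __name__ == "__main__":
    main()
```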

The article appears to be written by Singaporean prime minister Lee Hsien Loong's son, Li Hongyi.


Now I'm wondering why the children romanized their surname as Li not Lee.

I came across this article: https://mothership.sg/2015/03/lee-hsien-yang-reveals-the-sto...

> I have taught my children never to mention or flaunt their relationship to their grandfather, that they needed to make their own way in the world only on their own merits and industry.

Singapore's older generations speak/spoke Chinese "dialects" like Hakka (Lee Kuan Yew's heritage), but there has been a massive government-led push towards standardizing on Mandarin as the one true Chinese. Hence Lee Hsien Loong's children all have their names officially romanized in Mandarin pinyin (Li), not Hakka (Lee). The underlying character, 李, is still the same.

Also, his brother Li Haoyi wrote Ammonite, a well-known Scala REPL.

Sorry - the son of Singapore's Prime Minister is a Scala Hacker ...

I keep on saying that Software Literacy is a real thing. And that this current generation of leaders are like Charlemagne - he was the first Holy Roman Emperor and the last who was illiterate.

Interesting to see it in practice.

Singapore's Prime Minister prefers C++: https://arstechnica.com/information-technology/2015/05/prime...

> One of them browsed a book and said, 'Here, read this.'" It was a textbook on the Haskell programming language, Lee recounted. "One day that will be my retirement reading."

Even Singapore's PM has to put up with smug Haskell programmers

>I keep on saying that Software Literacy is a real thing. And that this current generation of leaders are like Charlemagne - he was the first Holy Roman Emperor and the last who was illiterate.

And probably he was the best of what followed as well, so this literacy thing didn't go as well, where power figures were concerned...

Regarding "Seek Out Problems and Iterate": it's hard to overstate how important this is. I've invested a lot of time helping my coworkers understand the distinction between tasks and problems, the end goal being to track only problems in the ticketing system. It's not easy and takes constant effort, but it pays off very quickly. I've yet to see a real "problem" ticket stay unresolved for long, whereas "task" tickets tend to stick around until they're either irrelevant or get closed after being kicked between a few people.

A good example of this is:

- Add worker thread for X to offload Y

When the actual problem is more along the lines of:

- Latency spikes on Tuesdays at 3pm in main thread

Which may be caused by a cronjob kicking off and hogging disk IO for a few minutes.

A good rule of thumb I've found is that task tickets tend to have exactly one way of solving them, whereas problem tickets can be solved in many ways.

Can you explain the 3pm on Tuesdays issue? My sister works for LLS and she said their servers get very slow at a precise time every Tuesday. Not saying it's the same bug, but what was the solution in your specific case?

The next sentence suggested that the cause of the problem in this probably hypothetical situation might be “a cronjob kicking off and hogging disk IO for a few minutes”.

So in that case, I guess either run the job with a lower priority and see if that helps, or execute the job more often so it doesn’t have to catch-up all at once one time per week, or rewrite it so that it performs I/O with smaller chunks of data at a time and sleeps for a little while in-between reading or writing chunks of data. Basically, do something so that you no longer have this one huge job consuming all of the IO bandwidth for several minutes every week.

I can't get into too much detail, but there were increased failure rates during a few jobs. In one case, we added ionice. In another it was a matter of adding a missing index to the DB (full table scan instead of looking at records from the last week).

There was one periodic job that we moved from the production server to work off the daily backups instead of the live server.

Database doing some housekeeping or backup; virus scan; perhaps an automated check for Windows updates (Patch Tuesday is the 2nd Tuesday of every month, so probably not that); a completely separate task fighting the DB or other application layer your sis uses. Something else. Anything else.

It's not something anyone can diagnose from what you say; it could be anything, even weirdness such as a hardware fault kicked off by something else (an office cleaner plugging something in?) causing a power spike, RF interference affecting the network, and mass packet drops and retries (OK, unlikely, but not impossible; I've heard of such things).

Building good software requires mainly achieving two things:

1. Making sure what you build is what was really requested (correct), and

2. Making sure what you've built doesn't have a higher running "cost" than the thing it replaced (either manual process or old automated solutions).

Everything else, IME, is ancillary. Performance, choice of platform, frameworks, methodology, maintainability, etc. are sub-objectives and should never be prioritized over the first two. I have worked on many projects where the team focussed mostly on the "how to build" part and inevitably dropped the ball on the "what to build" part. Result: failure.

Sauce: personal experience with several years of different projects (n = 1; episodes = 20+ projects that have gone live and have remained live versus 100+ projects lying by the wayside).

Writing software is not easy.

I agree with you but what I've noticed is for all the large projects I've worked on it was impossible to get an official answer as to whether or not the whole endeavor had a positive ROI. In fact, with a little back of the napkin math and some knowledge of the project's resource allocation it was obvious in most cases there would not ever be a positive ROI.

> 3. Hire the best engineers you can.

This is where most companies fail. Yes, they do want the best developers, but for the budget of an average junior/medior dev.

For some reason most companies/managers I worked for do not understand the financial impact of a not-so-good developer. Or the other way around: they fail to value the best developers and are unable to recognize them.

I've worked for plenty of companies where they let mediocre devs build a big app from scratch (including the architecture), in an Agile self-managed team. These are the codebases that always need to be rewritten entirely because they have become an unmanageable, buggy mess of bad ideas and wrong solutions.

>"3. Hire the best engineers you can."

If every single company wants that, where is the space to grow and learn from mistakes?

Maybe I'm wrong, but I think those "mediocre devs" learned a lot building a big app from scratch, solving bugs, and refactoring.

They can learn them on their own time outside of work.

This is just an awful, awful mindset.

If you want great devs, you're going to have to invest in junior devs, and you're going to have to expect them to be learning at work. This is also why the best use of your senior devs is as mentors to your less experienced ones.

I agree that it is an awful mindset, but it's also true that they should not be given a task that they can barely do. Instead, the aforementioned great devs should mentor them, so they can become similarly good.

That is the mindset of upper management at many companies. But if your upper management wants miracles performed on impossible schedules with too few people, there is no room for a junior dev. In fact most of the junior devs where I work are cheap outsourced contractors, not junior employees, as impossible projects also need to be done with limited budgets. None of this makes sense to me.

That's true if you have no choice.

But if you can hire great devs who already come with the experience and required skills, and you can pay for it, then why not?

If I'm a dev who is willing to put in the time outside work to improve myself, wouldn't that put me at an advantage when applying for a job, compared to people who are not willing to put in the time?

This mindset is what leads to utterly incapable people being hired as seniors. If nobody is willing to hire junior or medior developers, then naturally everyone starts calling themselves senior.

This is such a critical point. So many problems that plague the industry tie back to this. Lack of company investment in juniors leads to greater job hopping, which leads to building things based on bleeding edge fads to pad resumes, which leads to ever increasing title inflation, NIH syndrome, or cargo culting, which leads to the brutal churn that most everyone hates and wastes tons of man hours instead of just maintaining and improving existing software and teams.

How so? If utterly incapable people are being hired as seniors, isn't that a failure in evaluating candidates?

The senior/junior title designation shouldn't have much importance when evaluating a candidate anyway; what matters is what they can actually do or provide.

I've worked for companies where supposed senior devs write a massive amount of code without even the slightest in-depth thought, because they think they know everything.

Then the project turns out to be months late, even though I called the timeline of the project virtually unfeasible, and we have to go back and make several changes that could've been caught early on with a better strategy.

The problem with hiring the "best" engineers is as follows:

1. Nobody can ever tell you what "the best" means. People just throw "10x" around without any explanation.

2. Most people in the world are average. You simply don't have enough of the best people to handle the work load, even if they're 10x average. So much existing software and new problems exist that it's nigh impossible to have the best everywhere.

3. Many of the best people are able to write really good code, but they consider it so easy that they often write code that they think is correct and it gets put in production. Since they're loners, they often don't do the necessary leg work either because of their own arrogance, or because the company hasn't clearly defined its processes and the developer can't even reach this goal despite numerous efforts. So management just believes the code is correct without any verification.

4. Many average developers support the best ones by taking needed work away from them through comparative advantage. Just because X employee is awesome at Y task, doesn't mean he meets the highest utility by doing Y task all the time. Especially when there are conflicting priorities.

5. The best engineers aren't going to be working at a small company in most cases. They also aren't likely to be paid well outside a large company either. The article cites Google, Facebook, and all the large tech companies and their supposedly stringent interview processes as a reason. But these companies have written terrible software (Google+, AMP pages) and become ethically compromised easily. Plus their interview process is so far outside the daily workflow, because it involves answering algorithm questions, that it often makes no sense. Even worse, it teaches people to do katas instead of building actual projects. Project-based interviews make much more sense.

6. Rewriting code bases is one of the worst things you can do and is what caused Netscape's downfall. Companies with supposedly the best engineers (i.e. Netscape) can't even do it well.

So while hiring the best engineers is an awesome goal, it isn't feasible in a lot of cases.

I admit I have some bias as I consider myself pretty average. But I do a lot of crap on the side that "10x devs" don't even hear about because they're working on something more urgent. Does that mean I'm worthless?

Agreed. Sports analogy follows.

It won't help you to have 11 Lionel Messis on your team. Good compatibility among players is much more preferable. It's probably better to have small, robust teams that can work together: people who are average in most required areas and rockstars in certain specific ones.

If you had written any other athlete I would agree.

But in this case I think 11 Messis would win everything there is to win in football for a decade straight.

He's too small and his defense contribution is too low. You'd just chuck 11 prime Yaya Toures at him and watch them dominate the game :-)

To go further, it reminds me of the movie Moneyball.

One of the principles the article highlights is that additional features make a software complex and therefore more likely to fail. This is true, but I'd argue it's not for the reason the article claims.

The claim is:

> Stakeholders who want to increase the priority for a feature have to also consider what features they are willing to deprioritise. Teams can start on the most critical objectives, working their way down the list as time and resources allow.

In other words, the argument is "competing priorities in a large-scale project make it more likely to fail, because stakeholders can't figure out which ones to do first." Actually, in this very paragraph, the author glosses over the real issue: "Teams can start on the most critical objectives, working their way down the list" - treating development as an assembly line input-to-output process.

I argue that it's not time constraints that make complex programs bad, but the mere act of thinking that throwing more developers at the work will make it any better. Treating the application as a "todo list" rather than a clockwork of engineering makes a huge difference in the quality of the work. When developers are given a list of customer-facing features to achieve, more often than not the code winds up a giant ball of if-statements and special cases.

So yes, I do agree that complex software is worse and more prone to failure than simple software - but not for the reason that there's "too much to do" or that prioritizing is hard. Complex software sucks because it's requirement-driven, instead of crafted by loving hands. No one takes the time to understand the rest of the team's architecture or frameworks when just throwing in another special case takes a tenth of the time.

I’ve also seen the failures in requirement driven software. When engineers receive unfiltered customer requests as requirements or tasks they tend to focus simply on getting that functionality into the software. Most times not understanding the job the customer is trying to get done.

There are different personalities of engineers, those who thrive on explicit requirements and can accomplish difficult engineering tasks when they are given clear requirements. But those engineers should only be given those requirements once the job that the customer is trying to get done is clearly understood. Some engineers have the ability to find creative solutions, that customers or product managers can’t see, when they are provided with problems and jobs rather than requirements and tasks.

Managers would be wise to distinguish between the type of engineers they are managing and play to their strengths. Whatever type you have, understanding the job the end user is trying to get done must occur, preferably by an engineer that’s capable of articulating that, if needed, to team members as technical requirements.

Paraphrasing, you said

> There are engineers who can accomplish difficult engineering tasks when they are given clear requirements and engineers have the ability to find creative solutions when they are provided with problems and jobs rather than requirements and tasks.

I feel like I could perform adequately in either environment. The problem is I've previously found myself in environments where I'm expected to come up with creative solutions to a problem, but I have no access to the customer or even a simulated environment where I could try to do something similar to what a customer would do.

In this kind of case, it's impossible to really know how to articulate your requirements, because all you can use is a fantasy model of hypotheticals. But requests for more precise requirements are potentially brushed off as wanting to be spoon-fed what you need to do and having inability or unwillingness to think creatively.

    I argue that it's not time constraints that complex programs bad, 
    but instead the mere act of thinking that throwing more developers 
    at the work will make it any better. 
The bit about throwing more developers is true, but really does not follow from anything else you or the author is talking about.

    Treating the application as a "todo list" rather than a clockwork 
    of engineering makes a huge difference in the quality of the work. 
    When developers are given a list of customer-facing features to achieve, 
    more often than not the code winds up a giant ball of if-statements 
    and special cases.
Admittedly, this is often the case when doing feature-driven development.

But it absolutely does not need to be the case.

If you treat engineers as interchangeable cogs who only need to know about one story at a time, and never tell them about the medium- and long-term goals of the business and the application? Then yes. Then you get an awful code base with tons of if-then crap.

However, it doesn't need to be this way. If you give engineers visibility into (and some level of empowerment with regard to) those longer-term goals, they can build something more robust that will allow them to deliver features and avoid building a rickety craphouse of special cases.

I have experienced both scenarios many times.

> In other words, the argument is "competing priorities in a large-scale project make it more likely to fail, because stakeholders can't figure out which ones to do first."

This is a misinterpretation of the article's claim. The article very explicitly begins by saying that the best recipe for increasing a project's chances of success is to:

> 1. Start as simple as possible;

> 2. Seek out problems and iterate;

The priority part reads to me as a way to determine which features are critical (and hence part of the as-simple-as-possible set) and which ones are not (and hence you should not build them "yet"). The underlying vibe being that these other features should probably never get implemented, because once the critical ones get built and the software is put to use, you will actually find other critical features that solve actual problems found through usage.

That is, only when you find that one of the initially non-critical features has become a hindrance for users actually using your software should you seek to implement it.

I really think this would be a better way to build software, just as much as I think that you will have a very very hard time getting any management on board with it...

I've personally been thinking about this for some time, wondering if in the real world it looks like building as much as possible at the database level and treating your DB as the state machine for your app: aiming to disallow whole classes of errors and communicating the design of the business logic at the SQL functions/triggers/data layer, separate from the API, services, programming-language, and frontend layer(s).

This means that instead of lots of issues caused by business logic being separate from the data, the business logic and data sit together and prevent your system from getting into bad states.

Thinking about this, maybe I just stole this thought from Derek Sivers: https://sivers.org/pg
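For what it's worth, here is a minimal sketch of that idea (the table, statuses, and trigger name are all made up for illustration): a CHECK constraint restricts which states exist, and a trigger restricts which transitions are legal, so bad states simply can't be reached no matter which application layer issues the write.

```python
import sqlite3

# Hypothetical "orders" table whose status column is a tiny state machine:
# 'new' -> 'paid' -> 'shipped'. The CHECK constraint limits the vocabulary;
# the trigger limits the transitions.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (
    id     INTEGER PRIMARY KEY,
    status TEXT NOT NULL CHECK (status IN ('new', 'paid', 'shipped'))
);

CREATE TRIGGER order_status_guard
BEFORE UPDATE OF status ON orders
WHEN NOT ((OLD.status = 'new'  AND NEW.status = 'paid') OR
          (OLD.status = 'paid' AND NEW.status = 'shipped'))
BEGIN
    SELECT RAISE(ABORT, 'illegal status transition');
END;
""")

con.execute("INSERT INTO orders (status) VALUES ('new')")
con.execute("UPDATE orders SET status = 'paid' WHERE id = 1")   # legal step

try:
    # Going backwards is a "bad state" the schema itself refuses to enter.
    con.execute("UPDATE orders SET status = 'new' WHERE id = 1")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
```

Whether this logic belongs in the schema or the application is of course debatable; the sketch only shows that the database can express it.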

Yes, the data model is probably the most important aspect of your application, it defines the relations and constraints. With a good data model, you don't need to write a lot of code to deal with it. Having lots of code that deals with weird situations in the database means your data model needs some serious consideration.

A database, in my opinion, is not a good place to write business logic with functions and triggers, since there is a lack of tooling that would make development and debugging easy. Let the database do what it does well: storing and querying data.

Nice post. But everyone needs to understand something: even if you follow these principles to the letter, you can still produce very bad software. In fact, you can also find many cases where people did the exact opposite of what this guy said and still produced great software. I'm sure many people can name examples of software that just came together out of blind luck.


Because there is no formal definition of what bad or good software is. Nobody knows exactly why software gets bad or good, or what "good" even exactly is... It's like predicting the weather: the interacting variables form a system so complex that it is essentially impossible to predict with 100% accuracy.

What you're reading from this guy is the classic anecdotal post of design opinions that you literally can get from thousands of other websites. I'm seriously tired of reading this stuff year over year rehashing the same BS over and over again, yet still seeing most software inevitably become bloated and harder to work with over time.

What I want to see is a formal theory of software design, and by formal I mean mathematically formal: an axiomatic theory that tells me definitively the consequences of a certain design, and an algorithm that, when applied to a formal model, produces a better model.

We have ways to formally prove a program 100% correct, negating the need for unit tests, but do we have a formal theory of how to modularize code and design things so that they are future-proof and remain flexible and understandable to future programmers? No, we don't. Can we develop such a theory? I think it's possible.

So you're not talking about "formal methods"? https://en.wikipedia.org/wiki/Formal_methods

The Applied Category Theory folks have some very interesting stuff, like Categorical Query Language.



But it sounds to me what you mean is more like if "Pattern Language" was symbolic and rigorous, eh?

Yes, this is exactly what I mean. Though I feel patterns can be formalized within the framework of category theory.

Have you read "Introduction to Cybernetics" by Ashby?

(PDF available here: http://pespmc1.vub.ac.be/ASHBBOOK.html )

Cybernetics might be the "missing link" for what you're talking about.

I didn't dive too deep into this, so I could be wrong, but this looks like control theory with elements of category theory.

I'm looking more for a theory of modules and relationships. Something that can formalize the ways we organize code.

From my POV control theory is rediscovering cybernetics, but yeah.

It sounds like CT is what you're after (to the extent that we have it at all yet...)

We know a great deal about dynamics, kinematics, thermodynamics, and generally the physics that governs car components, yet we are a long way from an algorithm that, applied to a car, will produce a better car. My guess is that doing that for software is as hard, if not harder.

Also, the sentence "an algorithm that when applied to a formal model produces a better model" has a strong smell of halting problem, at least to this nose.

I get where you're coming from. I think your intuition is off.

Intuitively, software can be modeled as a graph of modules, with edges representing connections between modules. An aspect of "good software" can be attributed to some metric described by the graph, say the number of edges: the fewer edges, the less complex. An optimization algorithm would take this graph as input and output a graph that has the same functionality but fewer edges. You could call this a "better design." This is all really fuzzy and hand-wavy, but if you think about it from this angle, I'm pretty sure you'll see that an axiomatic formalization can be done, along with an algorithm that can prune edges from a graph (in other words, improve a design by lowering complexity).

A computer program is a machine that translates the complexity of the real world into an ideal system that is axiomatic and highly, highly simplified. Such a system can be attacked by formal theory unlike real world issues like what constitutes a good car.
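The edge-pruning step imagined above can at least be sketched for the trivial metric it names (edge count). A toy example, with made-up module names, using transitive reduction of a dependency DAG: drop any edge already implied by a longer path, so reachability is unchanged while the edge count goes down.

```python
# Toy model: modules as nodes, dependencies as directed edges (assumed acyclic).
def transitive_reduction(graph):
    def reachable(src, dst, skipped):
        # Depth-first search from src to dst that ignores the edge under test.
        stack, seen = [src], set()
        while stack:
            node = stack.pop()
            if node == dst:
                return True
            if node in seen:
                continue
            seen.add(node)
            stack.extend(n for n in graph[node] if (node, n) != skipped)
        return False

    reduced = {m: set(deps) for m, deps in graph.items()}
    for m, deps in graph.items():
        for d in deps:
            if reachable(m, d, (m, d)):   # edge m -> d is implied by a longer path
                reduced[m].discard(d)
    return reduced

# Hypothetical module names: app depends on db and utils, db depends on utils.
modules = {"app": {"db", "utils"}, "db": {"utils"}, "utils": set()}
reduced = transitive_reduction(modules)
# app -> utils is pruned, since app -> db -> utils already covers it.
```

Of course, fewer edges is only one crude proxy for "better design"; the point is merely that once the model is a graph, such metrics become mechanically optimizable.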

The halting-problem bit is a shower thought with no supporting evidence whatsoever, so your complexity-lowering scenario may well be doable. However, paring complexity is a strictly developer-side measure of goodness (assuming the low-complexity result is still readable, maintainable...). We can agree that reducing bugs is also a very good user-side metric, but that tells only a (little) part of the story.

In my experience, developer-side evaluation has a very low impact (I was about to write: zero) on the perceived and actual goodness of the software itself, which is tied mostly to factors such as user experience, fit to the problem it was designed for, and fit to the organization(s) it is going to live in (user experience again). These properties do not strike me as amenable to algorithmic improvement, any more than "pleasant body lines and world-class interiors" in the original car analogy. But they are a (big) part of good software design, besides being the raison d'être of the darned thing to begin with.

But let's forget cars, as hard as it is. A few months ago HN was running the story about developing software at Oracle. Now, Oracle may by now be a little soft around the edges, but I think most would agree that it has been setting the standard for (R)DBMSs for decades. Success may not in itself be the tell-all measure of software goodness, but the number of businesses that have been willing to stake the survival of their data on Oracle is surely a measure of its perceived goodness (as that other elusive factor, hipness, tends not to be paramount in the DBMS business).

The development-side story, taken at face value, was pure horror (https://news.ycombinator.com/item?id=18442941). Everything in it spoke of bad, outdated, rotting design. The place must be teeming with ideas on how to improve just about everything in that environment. And yet if that came to be, maybe by some nifty edge-pruning algorithm, it would do nothing to improve the goodness-to-the-world measure of the software, not until the internal improvements translated to observables in the user base's experience. That type of improvement will still require vast amounts of non-algorithmic design and, in the meantime, run a very concrete risk of deteriorating the overall user experience (because, hey, snafus will happen).

This (internals are just a small part of the story) is one of the reasons why so many reimplementations I have seen failed ("hey, let's rewrite this piece of shit and make it awesome") and the reason why everyone resists the move from IPv4 to IPv6. I could think of many more examples.

All very good points. Don't write code, solve the problem. For that, first understand the problem. Take time to reduce complexity, or else you won't be able to evolve. Gather knowledge along the way.

This all takes a bird-eye view and a long perspective, very unlike quarter-results-driven development.

This is great. So many quotable quotes. If only we could make it required reading for our clients!

This one struck me, because as soon as I read it I knew it was true yet had never considered it:

> Most people only give feedback once. If you start by launching to a large audience, everyone will give you the same obvious feedback and you’ll have nowhere to go from there.

I've been on both sides of that fence and it rings true.

Anonymous feedback (like, really anonymous) is the answer. People can't give real feedback and be nice at the same time.

I think feedback fatigue is a real thing though. The comment about only leaving feedback once hit home to me. It's rare I bother reviewing something twice even if asked, and especially if my original round of feedback didn't seem to change anything (which I understand is totally reasonable in many cases, but still a little disappointing).

Jesus Christ that’s a well-written article. There’s no fat to it. All signal, no noise.

Write code. Not too much. Mostly test-covered.

I like this Pollan reference

>>> The hard limit to system complexity is not the quantity of engineering effort, but its quality.

This article is full of good ideas, an antidote to the creeping corporate takeover of software projects. Make this required reading for software projects.

The initial proposition of the article, that software is bad because it follows the lifecycle "gather requirements - write software - deliver it" is simply wrong. There are huge projects in specialized domains that are delivered on time and on budget and use this approach.

The problem is lack of knowledge. The successful projects mentioned above did not have a lack of knowledge, and so they were finished successfully.

When there is a lack of knowledge, then it makes sense to use the iterative approach...as knowledge is slowly gathered, the software gets improved. As with all things in life!

Yes, the lack of knowledge is definitely one of the issues.

But starting a "gather requirements - write software - deliver it" lifecycle because you are confident that you have all the knowledge is one of the issues as well.

I like the article, it gets to the point. I would, however, change this: " 3. Hire the best engineers you can." to: "3. Hire and work hard to keep the best managers and engineers you can." As they mention, accumulating knowledge is important. Keeping that knowledge around is therefore also important (and sometimes difficult). The best managers will know what technical debt is, how to handle pressure from higher-ups, and how to keep a team happy, healthy and productive.

Years ago I wrote http://oss4gov.org/manifesto saying that governments needed to not only embrace OSS but that it is the only moral option to take.

Now we have government digital systems leading the charge across most western countries, and we have excellent polemics like this. I am just so happy to see this level of insightfulness at top levels of government.

I am so glad they listened to me :-)

> Overall, good engineers are so much more effective not because they produce a lot more code, but because the decisions they make save you from work you did not know could be avoided.

This is spot on, and very much my experience (of the good engineers I've come across).

Kind of: management had planned extensive and painful testing of a component that turned out to be discarded entirely (not because of functionality reduction, but because it was actually unnecessary).

Keep it simple, and software should be open source. Government software often has similar demands as other countries. Share and reuse.

Reusing good modules and software will make the software work.

KISS engineering still works: keep it simple, stupid. Make it as simple as possible. Simple software and systems are easy to maintain and understand.

Use modules as these can be swapped out.

Use proven boring technology such as SQL and JSON. Boring tech has been tried by others and generally works well.

>Government software often has similar demands as other countries.

What makes you think so?

>The better your engineers, the bigger your system can get before it collapses under its own weight. This is why the most successful tech companies insist on the best talent despite their massive size.

Translation: the successful tech companies have so much poorly documented legacy enterprise spaghetti code and tooling that they need the best talent they can get just to make sense of it and maintain it

Alternate translation: Bad devs are worse than no devs and all your competent devs will spend most of their time dealing with the former's crappy code until they quit. (my code is of course perfect and free of all technical debt)

The article lists the characteristics of a good engineer:

  * has a better grasp of existing software they can reuse
  * (has) a better grasp of engineering tools, automating away most of the routine aspects of their own job
  * design systems that are more robust and easier to understand by others
  * the decisions they make save you from work you did not know could be avoided
I obviously concur with the analysis (not sure about the 10x myth). It also states that:

  * Google, Facebook, Amazon, Netflix, and Microsoft all run a dizzying number of the largest technology systems in the world, yet, they famously have some of the most selective interview processes
This sounds a bit like a paradox to me. Given the current state of "selective interview processes" (algo riddles, whiteboard coding, etc.), none of the above traits can be easily evaluated in a candidate during an interview. On the other hand, these companies do hire stellar engineers: the technological supremacy of FAANG is irrefutable.

Former Googler here.

Google views picking new engineers like picking quality construction metals. In the end, the machine melts you down and hammers you into a pristine cog.

The way I interpreted that last comment was as a counterpoint to the idea that large companies necessarily end up hiring many mediocre employees because the talent pool simply isn't deep enough to stack the deck. Instead of just being happy with what they can get, the big tech companies make it a real challenge to be hired.

It's not a paradox, because the claim that these interview processes don't evaluate candidates is false. It's proven that this particular interview format has a very high correlation with the candidate's future performance.

Excellent article, well grounded in the reality of software development. New developers and managers would benefit from understanding the practical points made here as early as possible.

I do think perhaps there is too much emphasis on reuse and particularly cloud services. Ironically, this is partly for the reasons given elsewhere in the article. If you rely on outsourcing important things, you also naturally outsource the deep understanding of those important things, which can leave you vulnerable to problems you didn't anticipate. Also, any integration is a source of technical debt, so dependencies on external resources can be more fragile than they appear, and if something you rely on changes or even disappears then that is a new kind of potentially very serious problem that you didn't have to deal with before. Obviously I'm not advocating building every last thing in-house in every case, but deciding when to build in-house and when to bring something in can be more difficult than the article here might suggest.

> Software has characteristics that make it hard to build with traditional management techniques

Perhaps some software development techniques would work though...

> The main value in software is not the code produced, but the knowledge accumulated by the people who produced it.

Those people go on to work on other things or for other organizations. So, while that statement might have some truth to it, it's still the case that the code has to be useful, robust, and able to impart knowledge to those who read it (and the documentation).

> Start as Simple as Possible

That's a solid suggestion for many (most?) software projects; but if your goal is to write something comprehensive and flexible, you may need to replace it with:

"Start by simplifying your implementation objectives as much as possible"

and it's even sometimes the case that you want to sort of do the opposite, i.e.

"Start as complex as possible, leading you to immediately avoid the complex specifics in favor of a powerful generalization, which is simpler".

> > Software has characteristics that make it hard to build with traditional management techniques

> Perhaps some software development techniques would work though...

As you go up the management chain, you usually run into some layer where people are traditional managers, who want to run a software project like a traditional project. And behold, you're at this problem. Saying "software development techniques would work" is useless unless you can get those managers to change. And when you get them to change, the problem moves up one layer.

One additional principle is this:

When faced with a standard solution, use a standard component if you can. If you can't use a standard component, build a standard component. Keep your components simple, well-understood, and easy to maintain.

Hiring the best engineers, technically speaking, is a good thing, but it's not enough. In my experience it's better to hire somebody with previous experience in the domain: somebody who has already built something similar or related. Those engineers will ask the right questions, make the customer think about the system in the right way, and not lose time on worthless details. Even if the implementation is not shiny, it will work. It beats shiny but misguided. And if you can find great engineers with great skills, that's even better.

After a brief visit and further reading, I've learned to admire Singapore's first-world transformation. I'm absolutely in awe that a government body can produce such a high-quality article.

> The root cause of bad software has less to do with specific engineering choices, and more to do with how development projects are managed.

...While I do agree that "project management" is important, I think the tools we are using today are really underpowered for dealing with complexity/human error, which is the bigger problem IMO.

> The main value in software is not the code produced, but the knowledge accumulated by the people who produced it

The problem is most CEOs see the binary as the asset, not the knowledge gained. I've tried to explain this concept to multiple startup CEOs who hire outside development firms, for whom it rarely works out.

> Software has characteristics that make it hard to build with traditional management techniques; effective development requires a different, more exploratory and iterative approach.

Or the management techniques considered “traditional” are overlooking a century of iterative development outside of software. See Deming.

The author is very cavalier about open source licenses - they seem to be implying you can just use open source code whenever you want, even for closed-source, proprietary applications. Whether or not that is true depends on the licenses involved.

#1 web rule: Don't require JS to read an article.

This site is an empty page without JS.

Strangely, all the HTML elements are there but the opacity of <body> stays at 0 without JavaScript.

The lead image features two laptops and a desktop. And three hardcopies of code. All with a light-on-dark color scheme. I'd wager they had a bit of fun taking this programmery photo.

"Software Is about Developing Knowledge More than Writing Code"

This is also the real problem with vendor lock-in.

You are more often locked in by the knowledge of your employees than by your tech stack.

What an excellent article. It summarizes things that sometimes take decades to learn on our own. Reading books and good articles like this gives us a shortcut to such wisdom.

`There is no such thing as platonically good engineering` +1

> 3. Hire the best engineers you can.

What is the definition of "best engineers"? Those with extensive experience? Those who follow design patterns and coding standards religiously? Those who solve algorithms on a whiteboard? I would like to see if there is a definition for this.

I would say build the right culture (collaborative, always learning from mistakes and revise decisions and no blame or pointing fingers).

You can get a bunch of great coders/engineers _who follow code standards, break down code into zillions of functions/methods, etc._ but they will fail to work together and conflicts will arise quickly.

This reads like conventional wisdom. No one is going to argue against "Hire the Best Engineers You Can" and "Start as Simple as Possible".

If only.

The industry these days is more about headcount than quality itself. Why hire two good engineers when you can have three mediocre ones for the same price?

On simplicity, common wisdom these days dictates that we should use bloated kitchen-sink backend MVC frameworks that generate dozens of directories after `init`, because supposedly nobody knows how to use routers. Frontend compiler pipelines are orders of magnitude more complex than the reactive frameworks themselves, because IE11. And even deployment now requires a different team or expensive paid services from the get-go. We're definitely not seeking simplicity.

The second point is also something that most developers and managers would balk at: "To build good software, you need to first build bad software, then actively seek out problems to improve on your solution". Very similar to the Fred Brooks "throw one away" advice that no one ever followed.

I'd start from here http://0.30000000000000004.com/
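For anyone who hasn't seen the linked site: it catalogues the floating-point surprise that shows up in nearly every language. A minimal Python illustration of the behaviour it documents:

```python
import math
from decimal import Decimal

# IEEE 754 binary floats cannot represent 0.1 or 0.2 exactly,
# so their sum is not exactly 0.3.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# Comparing with a tolerance is the usual workaround.
print(math.isclose(0.1 + 0.2, 0.3))  # True

# For exact decimal arithmetic (e.g. money), use the decimal module.
print(Decimal("0.1") + Decimal("0.2"))  # 0.3
```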

This is beautiful, I feel like I should memorize the entire thing.

Impressive that this comes from a Civil service college.

Now, how do I convince my management this is a problem?

I think "only hire the best" is wrong.

Except for the 10x myth, a fairly good article.

I disagree with that, if you describe it as stated in the article: "Overall, good engineers are so much more effective not because they produce a lot more code, but because the decisions they make save you from work you did not know could be avoided."

I've seen plenty of poor decisions that cause 10x the work, and end up with something 10x less maintainable.

I'm not disagreeing with the idea that there is variability in developer productivity. However, quantifying the most productive engineers by throwing around a specific random factor such as "10x" is rather idiotic.

You have entire blog posts by Steve McConnell of Code Complete fame devoted to defending the 10x claim, citing 20- to 50-year-old research that shows 5x to 20x differences across certain dimensions, and then falling back to the 10x figure anyway. Not one single sentence where he is self-aware enough to spell out the most likely reason "10x" is so prominent: 10 is the base of the decimal system and as such psychologically attractive to use.

> Both Steve Jobs and Mark Zuckerberg have said that the best engineers are at least 10 times more productive than an average engineer.

I know I'm venturing into ad hominem territory with this, but first of all: Steve Jobs wasn't a programmer. And Mark Zuckerberg, well, does he even qualify as a programmer nowadays? How well can he quantify programmer productivity? His decision to use PHP led Facebook to create HHVM and Hack. Is this the 10x developer way?

Anyways, the question to me is: Is it possible for average software engineers to write good software?

Perhaps "10X engineer" is just an easier thing to say than "5 to 20X engineer as described by this paper." Perfect numerical accuracy is not needed to make the point that there is a lot more variance in the productivity of engineers than there is with most jobs.

If someone suggests you focus on the 20% of customers who make 80% of your revenue, and you run the numbers and find a 75-25 distribution, should you call the person making the suggestion an idiot?

Anyone know how to demonstrate this to management? I’m quite certain that my boss thinks I’m a crappy developer, because I usually take longer than others to produce the same amount of code. But I’ve reduced the amount of code we need to write by three quarters, and that is harder to demonstrate.

Reducing the amount of code shouldn't be the end goal, but a way of increasing quality.

You should seek to demonstrate instead that you're making software that is more malleable, has fewer bugs, is easier for new hires to understand, is easier to add new features to, etc.

Of course it is not an end goal. But a quarter as much code has (in general, everything else being equal) a quarter as many bugs, a quarter as much code for new hires to read and understand, and a quarter as much code to take into account when adding new features. But none of these things are easy to measure or demonstrate. It is just easier to see that Ma8ee took 1.5x weeks to write 10,000 loc while another developer wrote 40,000 loc in x weeks.

Your diligence will surely be rewarded eventually.


Edit: FWIW, I'd work with you, though. Caring enough to try is half the battle.

> Surprisingly, the root cause of bad software has less to do with specific engineering choices, and more to do with how development projects are managed. The worst software projects often proceed in a very particular way:

> The project owners start out wanting to build a specific solution and never explicitly identify the problem they are trying to solve. ...

At this point, it looks like the article will reveal specific techniques for problem identification. Instead, it wraps this nugget in a lasagna of other stuff (hiring good developers, software reuse, the value of iteration), without explicitly keeping the main idea in the spotlight at all times.

Take the first sentences in the section "Reusing Software Lets You Build Good Things Quickly":

> Software is easy to copy. At a mechanical level, lines of code can literally be copied and pasted onto another computer. ...

By the time the author has finished talking about open source and cloud computing, it's easy to have forgotten the promise the article seemed to make: teaching you how to identify the problem to be solved.

The section returns to this idea in the last paragraph, but by then it's too little too late:

> You cannot make technological progress if all your time is spent on rebuilding existing technology. Software engineering is about building automated systems, and one of the first things that gets automated away is routine software engineering work. The point is to understand what the right systems to reuse are, how to customise them to fit your unique requirements, and fixing novel problems discovered along the way.

I would re-write this section by starting with a sentence that clearly states the goal - something like:

"Paradoxically, identifying a software problem will require your team to write software. But the software you write early will be quite different from the software you put into production. Your first software iteration will be a guess, more or less, designed to elicit feedback from your target audience, and will deliberately be built in great haste. Later iterations will solve the real problem you uncover and will emphasize quality. Still, you cannot make technical progress, particularly at the crucial fact-gathering stage, if all your time is spent on rebuilding existing technology. Fortunately, there are two powerful sources of prefabricated software you can draw from: open source and cloud computing."

The remainder of the section would then give specific examples, and skip the weirdly simpleminded introductory talk.

More problematically, though, the article lacks an overview of the process the author will be teaching. Its lack makes the remaining discussion even harder to follow. I'll admit to guessing the author's intent for the section above.

Unfortunately, the entire article is structured so as to prevent the main message ("find the problem first") from getting through. As a result, the reader is left without any specific action to take today. They might feel good after having read the article, but won't be able to turn the author's clear experience with the topic into something that prevents more bad software from entering the world.

nice one

How many more decades are we going to have to spend learning this lesson before we learn it?

There's a saying in most other fields of engineering (civil, chemical, mechanical, etc.): "Regulations are written in blood." A whole lot of bridges collapsed and a whole lot of people died before strong requirements were put in place.

It seems we are on the path to repeat history with software engineering, what with how software and the internet are being developed with such little regard for public safety and long-term consequences.

Unfortunately, it appears that the "free love" phase of software engineering is coming to an end, as society now relies more and more on software and major tech players for life and safety. It's starting to get real for software engineering.

Luckily, other engineering fields have been here before, so this sort of transition shouldn't be anything new.

Relevant Tom Scott video: https://www.youtube.com/watch?v=LZM9YdO_QKk

> It seems we are on the path to repeat history with software engineering, what with how software and the internet is being developed with such little regard for public safety and long term consequences.

> Unfortunately, it appears that the "free love" phase of software engineering is coming to an end, as society now relies more and more on software and major tech players for life and safety. It's starting to get real for software engineering.

Software will always be a spread of reliability requirements, from pacemakers on one side to excel reports on the other. Part of being a responsible user is choosing software with the right balance of economics and reliability for the job.

Thumbs up.

As many as they need to start teaching it in business schools?

Even if business schools start lecturing their students about details of software development, and even if these details actually sink in, there will still be many other kinds of work their students don't learn anything about. The real problem, I think, is graduates of these schools who believe they can manage people in a generic way without understanding the details of the work.

"2. Seek out problems and iterate;"

This is bad advice. It's like saying "go into a bar and start picking fights".

If some part of the software has problems, runs slow or has bugs but nobody is complaining, then there's no problem. Why waste time improving it?

Almost 100% of the time when you solve a problem, you just create new problems of a different kind in turn.

Be lazy. The less code you write the better off you are.

> If some part of the software has problems, runs slow or has bugs but nobody is complaining, then there's no problem.

This depends very much on context. To pick an extreme example, if you're writing the control software for a nuclear weapon and you know you have a bug that might cause it to activate unintentionally if you eat a banana while it's raining outside, I think we can reasonably agree that this is still a problem even if so far you have always chosen an apple for lunch on wet days.

In your example, you'd hope some stakeholder would complain and raise an issue.
