I've experienced so many issues caused by management passing tasks around between teams and never paying attention to knowledge and knowledge transfer.
What's amazing is that in over 18 years as a software engineer, I've seen this so many times. Teams will function well, then the institution tries to change. Often they will try to open up "innovation" by throwing money at R&D, basically trying to add bodies in order to grow. Then you have tons of teams, and communication becomes very challenging, so they grow some kind of "task management" layer: management that never understands who actually _knows_ something, just tracks how much "theoretical bandwidth" teams have and a wishlist of features to create. And then the crapware really starts flowing. And then I get bored and move on to the next place.
The company I work for uses Scrum. They consider the user stories plus the code to be everything you need. I struggle with this, but my manager says they don't want to get tied up doing documentation "because it goes out of date". Besides, they are being Agile, which "prefers working code over comprehensive documentation".
I am wondering what other companies do to capture this "distilled knowledge". The backend services I rely on are undocumented besides some paltry Swagger that leaves much to be desired. The front end has no product-level "spec" you could use to rebuild the thing from scratch. There isn't even a data dictionary, so everyone calls the same thing by different terms (in code and in conversation).
There are just user stories (thousands) and code.
Does anyone have any suggestions on how to fix this?
Documentation is essential. How things work is an important thing to document. Ideally it should be in version control and be generated from the code, because then it's less likely to go out of date. It still has problems (What do you do when the code and the documentation disagree? Which is correct?), but they're not as severe as the problems that arise when there is no documentation at all.
What is less useful is having comprehensive documentation for those things that are yet to exist. Writing a few hundred pages of specification and handing it over to the dev team is waterfall, and it is _this_ that the Agile manifesto signatories were interested in making clear.
I'd fix it with strategic DDD - I'd develop at least a "ubiquitous language" (or UL): I'd get others to work with me on having clear terminology and making sure it is used consistently both in the user stories and in the code base. That's table stakes.
I'd then event storm the contexts I'm working in and start to develop high level documentation.
Even at this point relationships between systems emerge, and you get to draw circles around things and name them (domains, contexts), and the UL gets a bit better. At this point you can start to think about describing some of your services using the UL and the language of domains and contexts.
By that point, people should start to click that this makes life easier - there is less confusion, and you're now all working together to get a shared understanding of the design. And the point of DDD is that the design and the code match.
The first part (all 100+ pages of it) of the Millett and Tune book on DDD will pay you dividends here.
If that doesn't work, look around for somewhere else to work that understands that software development is often a team sport and is committed to making that happen.
> Documentation is essential. How things work is an important thing to document. Ideally it should be in version control and be generated from the code, because then it's less likely to go out of date.
Generally, this falls into two categories.
1. Hacks/kludges to get around bugs in hardware, external services, or included libraries. These manifest as incomprehensible, ugly bits of code that are difficult to distinguish from code that is simply "sloppy" or uninformed. More importantly, they represent hard-won knowledge. It often takes many programmer-hours to discover that knowledge, and therefore many dollars. Why throw it away? (Tip: include the version of the dependency in the comment, e.g.:)
# work around bug in libfoo 2.3, see blahblahblah.com/issues/libfoo/48987 for info
# should go away once we can upgrade to libfoo 3.x
reset_buffer() if error_code == 42
2. Business logic. This too is difficult/impossible to discern from looking at code. Often, one's git commit history is sufficient. But there are any number of scenarios where version control history can become divorced from the code, or require a fair bit of git/hg/svn/whatever spelunking to access. And this of course becomes increasingly onerous as a module grows. If there are 200 lines of code in a given module, it is a significant time investment to go git spelunking for the origins of all 200 lines of code. Some concise internal documentation in the form of code comments can save an order of magnitude or two of effort.
> It still has problems (What do you do when the code and the documentation disagree? Which is correct?), but they're not as severe as the problems that arise when there is no documentation at all.
In the first place, only a true maniac would intentionally update the code without also updating a comment that sits right next to it:
# no sales tax in Kerplakistan on Mondays
return nil if country_code==56 and day_of_week==1
Leaning towards commenting "why" not "what" is another good general rule. "Self-documenting code" with sensible function and variable names and logical flow already covers the "what" fairly well.
# Some countries have sales tax rules dependent on the day of the week
return nil if country_code==KERPLAKISTAN and day_of_week==MONDAY
But I agree with your statement that there should be a pointer to the business rules somewhere. Otherwise it's difficult to have a meeting with the business side and ask, "Has anything here changed?" I think that's the biggest thing people miss -- it's not that hard to find the thing in the code when things change. It's super hard to make sure you are on top of all the business requirement changes.
> "There is a famously bad comment style:
> country_code==FIFTY_FIVE and day_of_week==ONE
> string url = HTTP + COLON + SLASH + SLASH + WWW + DOT ...
> Don't laugh now, wait until you see it in real life."
[NB 30 isn't an exaggeration - I think the vast team who wrote it were paid by the abstraction or something].
But the trade-off in code readability was probably the cause of many other mistakes, so it probably ended up further behind.
return nil if country_code==KERPLAKISTAN and day_of_week==MONDAY
Comments are as much for the next guy as they are for you.
Code smell doesn't mean you should never do it, just that often there's a better way.
I worked on an enterprisey line of business app that assigned sales leads to salespeople.
The algorithm to do this was a multi-step process that was (1) rather complex (2) constantly being tweaked (3) very successful (4) contained a number of weighting factors that were utterly arbitrary even to veterans of this app.
It was full of many `if country_code==KERPLAKISTAN && day_of_week==MONDAY`-style weighting factors. Each represented some hard-won experience. And when I say "hard-won" I mean "expensive" -- generating leads is an expensive business.
We had a strong culture of informative commit messages, but this file had hundreds if not thousands of commits over the years.
It was the kind of code that resisted serious refactoring or a more streamlined design because it was a recipient of frequent change requests.
A few human-readable comments here and there went a loooong way toward taming the insanity and allowing that module to be worked on by developers besides the original author.
Knowing the why for many of these rules made it much easier to work with, and also allowed developers to be educated about the business itself.
I'm not at all convinced that this is unpopular, but I think it's a whole lot harder than you're letting on. Unless you have a constant stream of new people coming in and you can convince them to give honest feedback, you don't actually know what's not obvious.
return nil if country_code==KERPLAKISTAN and day_of_week==MONDAY
Is this a quick thing somebody hacked in for a special, one-off, tax-free month in Kerplakistan as the country celebrates the birth of a princess?
Is this a permanent thing? Will there eventually be more weirdo tax rules for this country? Will there be others for other countries?
Knowing the "why" would help a developer understand the business, and reason about how best to work with this bit of code... should we just leave this ugly little special case in place? Should we have a more robust, extracted tax code module, etc.?
Commit messages help to accomplish this too, and can offer richer context than inline comments. Each has their place. Sifting through hundreds of commit messages in a frequently-updated module is not a great way to learn about the current state of the module, as the majority of those commit messages may well be utterly stale.
Ultimately the cost of having some concise inline comments is rather low, and the potential payoff is very large.
Remember that the longer term goal (besides the success of the business) of software is to have your developers gain institutional knowledge so that they can make more informed engineering decisions in the future.
I agree with this 100%. However, to be useful it needs to hit the right level of crudity. For most projects, a short (<10 pages) description of goals, design principles, architecture and an overview of interfaces is sufficient.
It is best when this exists as a standalone document which is a required reading for any new developer. After this they can look at module descriptions, function docs, code, etc. and understand how to make sense of it and how to add their code without breaking general principles of the project.
> Ideally it should be in version control and be generated from the code, because then it's less likely to go out of date.
With this, I have some beef. In my experience the best documentation is the one that complements the code. Usually this means a short description by a human that explains what this chunk of code does and assumptions or limitations (e.g., "tested only for points A and B in troposphere") and IME most useful information is not derivable automatically. Auto-generated docs are very useful, but cannot replace clean explanations written by a human. My 2c.
"Documentation" that is nothing more than the interface definitions in HTML form is worse than useless. I can get all of that from just reading the code.
These could be just one document if the project is small enough.
Interestingly, this has been a big point of discussion in the Dota 2 playerbase. Dota 2 is one of the most complex games ever created and it rapidly changes on the order of days or weeks. At one point, the in-game descriptions of spells were months or years out of date because they were being updated manually. After much hue and cry from the community, the developers finally made the tooltips get generated from the same code that determined the spells' effects. Things are a bit better now.
There is still quite a ways to go, though, in terms of generating documentation for all the other mechanics in the game, which are crucial for gaining competency but are only available through third-party community efforts (often via people reading the game's code to understand subtleties) instead of being available inside the game.
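The fix described here is essentially the single-source-of-truth idea: keep the behavior in one data structure and derive both the game logic and the tooltip from it, so they cannot drift apart. A minimal Python sketch (the spell and all of its fields are made up for illustration; this is not Valve's actual mechanism):

```python
# Hypothetical spell definition: one data structure drives both the
# game logic and the generated tooltip.
SPELLS = {
    "frost_bolt": {"damage": 120, "slow_pct": 30, "duration_s": 4},
}

def apply_spell(name, target_hp):
    """Game logic reads the same data the tooltip is generated from."""
    spell = SPELLS[name]
    return target_hp - spell["damage"]

def tooltip(name):
    """Tooltip text generated from the spell data, never hand-written."""
    s = SPELLS[name]
    return (f"Deals {s['damage']} damage and slows the target by "
            f"{s['slow_pct']}% for {s['duration_s']} seconds.")

print(tooltip("frost_bolt"))
# Deals 120 damage and slows the target by 30% for 4 seconds.
```

Update the data, and both the in-game effect and the description change together.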
Not always - when you want to document the requirements (in whatever format), having them be separate from the code is often a plus. The code might implement the requirements incorrectly, so being able to recognise that is important.
I find this very similar to writing tests that are separate from your implementation. In fact, Cucumber/BDD tests try to make product requirements executable to validate the software has been written correctly to meet the requirements.
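In the same spirit, a requirement can be made executable even without full Cucumber/Gherkin tooling. A sketch in plain pytest style, reusing the thread's running Kerplakistan example (the sales_tax function, the constants, and the 7% rate are all invented here):

```python
# Requirement expressed as an executable test. Everything below is
# hypothetical: a stand-in for a real business rule and its spec.
KERPLAKISTAN, MONDAY = 56, 1

def sales_tax(amount, country_code, day_of_week):
    # Business rule: no sales tax in Kerplakistan on Mondays.
    if country_code == KERPLAKISTAN and day_of_week == MONDAY:
        return None
    return round(amount * 0.07, 2)

def test_no_sales_tax_in_kerplakistan_on_mondays():
    """Requirement: Kerplakistan charges no sales tax on Mondays."""
    assert sales_tax(100.0, KERPLAKISTAN, MONDAY) is None

def test_sales_tax_applies_on_other_days():
    assert sales_tax(100.0, KERPLAKISTAN, MONDAY + 1) == 7.0

test_no_sales_tax_in_kerplakistan_on_mondays()
test_sales_tax_applies_on_other_days()
```

When the business rule changes, the failing test names point straight at the requirement that moved.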
I never got documentation about the thought processes, the iterations, the design meeting, the considerations, etc. Which is way, way more important to understanding a system in context than knowing "convertLinear" takes 2 unsigned ints.
That doesn't sound too bad from a dev point of view, better than the opposite - half arsed specifications with no thought given to the important details. Though I can imagine a lot depends on what exactly you are trying to build.
> Ideally it should be in version control and be generated from the code ..
May I ask if you have suggestions for tooling to capture the high-level documentation? We use Javadoc a little, but it seems best for lower-level reference. Also, for diagrams like sequence diagrams and/or state machines, how do you capture those?
Or better yet: generate your state machines from the same format you would use to generate the visual representation.
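A minimal sketch of that idea: one declarative transition table drives both the runtime behavior and the Graphviz DOT text for the diagram, so the picture can never drift from the code (the states and events here are invented):

```python
# Single source of truth: transitions as data.
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "start"): "running",
    ("running", "stop"): "idle",
}

def step(state, event):
    """Advance the machine using the same table the diagram is drawn from."""
    return TRANSITIONS.get((state, event), state)

def to_dot():
    """Emit Graphviz DOT text; render with `dot -Tpng` to get the diagram."""
    lines = ["digraph fsm {"]
    for (src, event), dst in TRANSITIONS.items():
        lines.append(f'  {src} -> {dst} [label="{event}"];')
    lines.append("}")
    return "\n".join(lines)

assert step("idle", "start") == "running"
print(to_dot())
```

Libraries exist that do this more thoroughly, but even this much keeps the diagram honest.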
Just build a product specification for how the product works (which is useful documentation), not how the product will work (which is waterfall).
We're experimenting with this, and I'm getting into document-driven development: if the product spec is in markdown, why not create a pull request on it as part of your story/project planning that shows the changes that would happen as a consequence of your work? Once the story is done, you can merge the pull request. We're not quite there with this yet, but I'm optimistic.
Putting design assets into your repo is also acceptable, and also paying time and attention to commit messages can be really, really helpful. I love this talk, for example: https://brightonruby.com/2018/a-branch-in-time-tekin-suleyma...
I've come across the "documentation becomes quickly outdated" argument a lot, but nobody has ever been able to suggest a good alternative. The best I've found is to write design logs for proposed changes (which other team members/stakeholders can then review and comment on before anything gets implemented) and decision logs for any decisions that are made. This way, their going out of date is expected and OK, as they become a history of ideas and decisions with their context and outcomes laid out. You don't necessarily have a snapshot of "the system right now", but you do have a log of all the ideas and decisions that led up to the current system.
Me too, but I still feel that saying "documentation quickly becomes outdated" and refusing to write any, is not that different from saying "software quickly becomes full of bugs" and refusing to write unit tests. Yes, if you believe that something is doomed, and therefore you refuse to even try, it becomes a self-fulfilling prophecy.
Yes, documentation quickly becomes outdated, if no one updates it. Duh. If a person creates/modifies a part of code, they should also create/modify the corresponding documentation accordingly. (And the person reviewing the code should also review the docs.) If you don't do it, then yes, obviously, the documentation becomes outdated. Did you expect it to update magically by itself?
If you believe that documentation is useless in principle, go ahead and don't write it. Then you won't have to maintain it. Also, make sure to include memory tests in your interview process. If you believe that documentation is useful, write it, and maintain it. But if you have documentation that you never update, you get the worst of both worlds.
At the very least, the document should capture the high-level (again, a relative term) design, possibly an architecture diagram of the major interacting functional units. The success measure should be how easily a newbie can build a mental model of the system by reading this document.
That design document would be a start, and most likely not "quickly outdated".
My personal beef with the agile camp is precisely this: when they let go of documentation, they don't do the design doc as well, and all that remains of the system is thousands of incoherent stories and huge amount of code.
If you diverge too far from the original design, you should probably have a rationale as to why, that gets reviewed by others: another design log and decision log.
These documents don't need to be long, either: just a couple of sentences for each of context, proposal, impact on other teams or systems, and the decision made may be enough for smaller things (a paragraph or two in total); for larger changes, you probably need more detail for everyone to really understand the what, the why, and the impact. The alternative is to do these things blind.
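As a concrete sketch, a decision-log entry along those lines can be as small as this (the headings and the example content are invented; they are one possible arrangement, not a standard):

```
Title:    Move invoice exports to a nightly batch job
Date:     <date>          Status: accepted

Context:  The hourly export locks the invoices table during business hours.
Proposal: Run the export once, after midnight, from a read replica.
Impact:   Reporting sees data up to 24h old; billing is unaffected.
Decision: Approved by the platform team; revisit if reporting needs
          intraday data.
```

Even at this size, a stack of these entries reads as a history of why the system is the way it is.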
> when they let go of documentation, they don't do the design doc as well, and all that remains of the system is thousands of incoherent stories and huge amount of code.
Another thing that helps is to write good commit messages giving the business context for a change. When code is reviewed, the commit messages should be reviewed as well. If they don't agree then that's a problem.
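For instance, a commit message that records the business context rather than restating the diff might look like this (every detail below is invented for illustration):

```
Skip sales tax for Kerplakistan orders on Mondays

Finance confirmed that Kerplakistan waives sales tax on Mondays,
effective this quarter. Without this change we over-charge those
customers by the standard rate.

The code change is one guard clause; the reason it exists lives in
this message and the linked ticket.
```

The diff shows the "what"; the message carries the "why" that no amount of reading the code will recover.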
Not to be rude, but yes: switch employers. This is not something you can fix on the employee level, it is a management issue.
It might be a team culture or company culture issue, and even radical changes in the management are not enough to fix it.
Agility requires a stable foundation. And a lot of places forget that.
Then they are Doing It Wrong™. Note that there's nothing in the Agile Manifesto OR the Scrum Guide that says "don't write documentation." The closest you get is in the AM where it says "We have come to value ... Working software over comprehensive documentation". But note that immediately after that it says "That is, while there is value in the items on the right, we value the items on the left more." IOW, the Agile Manifesto explicitly endorses the value of documentation!
Remember this the next time somebody tries to tell you that "we don't do documentation because we're Agile." Anybody running that line is Full Of Shit™.
Have a product wiki (e.g. MediaWiki).
Have documentation in source code that compiles to HTML code, which can be linked to/from the product wiki (e.g. JavaDoc in Java, Natural Docs for languages that do not directly support compilable documentation). Make building and publishing this documentation a part of the continuous integration.
When you have this, make it a part of code reviews to ask "where is this documented?" for those kinds of things that are easy to remember today, but no one will remember it a few months later. In other words, make it a "code+doc review".
(Don't be dogmatic about whether the information should go into code documentation, unit test documentation, or the wiki. Use common sense. If it relates only to one method, it probably belongs in the code; if it relates to a use case, probably in the unit test that verifies that use case; if it is a general topic that has an impact on many parts of the program, it probably deserves a separate wiki page.)
Are you referring to something like Knuth's Literate Programming (en.m.wikipedia.org/wiki/Literate_programming)? As a non-professional who's learning to develop on the side, something that follows more of a natural language approach appeals to me, as sometimes I have a few months between working on my project, and comments on my source code help me not to forget why I do certain things in the code. However, I'm not doing Literate Programming, just python with comments.
I have never tried Literate Programming, so perhaps I am out of my depth here, but I strongly suspect it only works after one has already mastered the usual ways of programming -- that is, when you do not have to structure the code qua code, because you can already do it in your head. But it's hard to imagine what one has never done before.
For example, if you never tried programming the usual way, how do you know when and why to put "Header files to include" in your Literate code? It's only because you can imagine the constructed code that you know where the header files go in the result, and therefore where to place them in the Literate version. Otherwise, it would look quite arbitrary.
I don't know about documentation in Python, but the JavaDoc (and Natural Docs) work like this: You put comments to classes and methods, or packages (and files), along with the code. So you can read them and write them while you are looking at the code. But then you run a "documentation compiler" that extracts the comments and builds a separate HTML website out of them. Here you can browse and read about what the individual classes and methods do. The idea is to make this a part of the continuous integration, so that whenever you update the source code and the related comment, the HTML website also gets updated.
Java supports this out of the box. When you install the Java compiler, you also install the Java documentation compiler. When you read the official documentation to the standard Java classes, those were made using exactly the same tools you are encouraged to use.
I don't know whether Python has something like this. If yes, go ahead and use it. If not, look at Natural Docs -- it is a system to provide this functionality to languages that do not support it out of the box. Just try it: document a part of your existing project, compile the docs, and see whether you can imagine any value in reading that.
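Python does have an analogue of this out of the box: docstrings plus the pydoc module, which can render a module's docstrings as an HTML page (python -m pydoc -w yourmodule). A tiny sketch, borrowing the hypothetical convertLinear example mentioned elsewhere in the thread:

```python
# Docstrings live next to the code; `python -m pydoc -w thismodule`
# extracts them into an HTML page, much like javadoc does for Java.
def convert_linear(slope, intercept):
    """Return a function computing slope * x + intercept.

    Knowing the signature is not enough; the docstring is where the
    why and the assumptions go.
    """
    return lambda x: slope * x + intercept

doubler = convert_linear(2, 0)
print(doubler(21))  # 42
```

Keeping the comment in the same file as the code is what makes the "compile the docs on every CI run" workflow viable.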
This is funny because “working code” might just mean that it doesn’t crash. But does it actually do what it’s supposed to do or does it reliably deliver the wrong results? How would you know without documentation?
The software in the Therac-25 didn't crash; it quite reliably killed people with its "working code".
So I think "working code/application/program" is when it does what it is supposed to do. Including not crashing.
And the point of the comment you're replying to is to ask what is "what it is supposed to do". How do you know what the answer to that question is, without documentation or a specification? And if you try to rely on just verbal communication, in a group of people probably larger than about 1, they're going to have different ideas about what the software is supposed to do.
Some of the most challenging problems I've encountered have been looking at code that does something. What it does is clear enough from the code. But why it does it, or should it do that, that is much harder to answer, particularly if the person who wrote it has left the company or it's been >6 months and they just don't remember.
I might also recommend creating user stories for non-feature development like infrastructure and tech-debt paydown (if you don't already). That way, all of the value flow is captured in one place, and you're not teaching managers to see only new features.
Second, in addition to the user stories I'd advocate for strong background information about the context of the story as well as detailed acceptance criteria if you don't have that in place already.
2) In my experience, this basically never happens.
Your comment encapsulates a lot of what I have come to call "Scrumbutt." It's Scrum, but. And while I have no idea if it's intended on your part, the sentiment is a fantastic way for a Scrum consultant--only some shade thrown; I've been a "DevOps consultant" before, after all--to come in and pull from deep in their Scrumbutt something to the effect of "you're doing it wrong, Scrum has not failed, you have failed Scrum."
Within epsilon of nobody does Scrum "as prescribed"--because the amount of responsibility that must be undertaken at all levels is virtually impossible to get full buy-in on--and as such the boil on our collective behind that it is persists because criticism is immediately bedeviled by Scotsmen of unknown provenance.
People might be interested in what Scrum is. I know I am. That’s why I pointed out the error. It was a shock to me to learn I wasn’t doing anything close.
Readers can do with the info what they want.
I’m not sure if I can say the same of your comment. You seem to be trying to make me feel bad for commenting? Or accusing me of hawking pointless info for consulting fees? I really can’t tell.
If you have sufficiently detailed user stories, they can be.
Back to the question - what do you do about poor knowledge transfer in a project? I think a moderate de-emphasis on treating the user story text and the additional info (acceptance criteria etc.) as self-sufficient documentation, plus more emphasis on the close relationship between developer, user, and maybe a tester, can help fill in big knowledge gaps.
I think about that whenever I get frustrated about a vague spec or lack of details. It's the job!
I hope he meant separating out the ambiguity rather than concentrating it. :)
"...the designers job is not to pass along "the design" but to pass along "the theories" driving the design. Knowledge of the theory is tacit in owning..."
Well said. Thank you!
I know as a software developer you don't want to do that. More fun refactoring code than dealing with management. More fun writing that piece of SQL than sitting in a meeting. Easier to whine about missing specifications than to understand the big picture.
Once I stepped back from coding and looked at the software from a birds eye view, I had actually a much easier time programming features than before. More knowledge, less writing code.
Being a part of the early decision-making processes has been a challenge for me as a remote employee. In larger companies, there are lots of meetings, discussions, and decisions that happen before the engineering staff is brought in. But, by basically being nice, asking questions, and really getting involved, I've been able to "weasel" my way into some of these discussions.
Once you get involved early on, there's so much more clarity around the one liner "requests" that often get farmed out.
communicationChannels = nrOfTeams * (nrOfTeams - 1) / 2
More people should read The Mythical Man-Month.
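Brooks's point, sketched numerically: channels between n teams grow as n(n-1)/2, i.e. quadratically, while headcount only grows linearly, which is why adding bodies makes communication the bottleneck.

```python
# Number of pairwise communication channels between n teams (Brooks).
def communication_channels(nr_of_teams):
    return nr_of_teams * (nr_of_teams - 1) // 2

# 2 teams share 1 channel; 20 teams share 190.
for teams in (2, 5, 10, 20):
    print(f"{teams} teams -> {communication_channels(teams)} channels")
```

Doubling the number of teams roughly quadruples the coordination surface.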
Managers are very unhappy when I tell them of all the knowledge I've developed.
- the problem you are trying to solve
- how you could solve it
- how you actually did solve it
- which solutions come with which flaws and merits
“Beware of bureaucratic goals masquerading as problem statements. “Drivers feel frustrated when dealing with parking coupons” is a problem. “We need to build an app for drivers as part of our Ministry Family Digitisation Plans” is not. “Users are annoyed at how hard it is to find information on government websites” is a problem. “As part of the Digital Government Blueprint, we need to rebuild our websites to conform to the new design service standards” is not. If our end goal is to make citizens’ lives better, we need to explicitly acknowledge the things that are making their lives worse.”
I like being partially or fully nude in the home, occasionally chewing gum, and having the right to freely criticize or endorse ideology on its own merits; even if it sometimes sucks to see dirty black spots on the sidewalk, or to hear people making weak arguments just to upset eachother.
Certainly a weird cosmopolitan fascist (u|dys)topia.
And, if you are on the wrong side, it is very "effective" at ruining your life.
Most of us would take a bit less "effective" in order to avoid that, thanks.
Singaporean here. The government's mainly effective for tasks that are on a happy path. If your particular case falls through the cracks, it often takes phone calls, printing, postage, and weeks or months of waiting to get stuff done.
(Personal experience trying to get business stuff done not as a Private Limited company.)
That sounds like the happy path for dealing with the Canadian government. Well, except the months part.
"The tool is squeaky but it gets the job done" - you wouldn't expect the speaker to do anything about the squeaks. Squeaking is tolerable.
"The tool does the job but it's squeaky" - you would expect the speaker to do something about the squeaks. Doing the job isn't good enough.
Your comment is most easily read as not disapproving of authoritarian government when it is effective.
Not the best example of crowd wisdom.
Perhaps you should do the same for barrkel? I read your parent as a simple explanation to solveit why their comment may have been misconstrued by bsder -- a question solveit directly asked.
If someone produced an insulin pump that you implanted, worked perfectly for life, but killed 1 person in 1,000 randomly, people would be screaming for the head of the CEO of that company rather than calling it "effective".
Effective just means it achieves the intended outcome, it's not a value judgment on the goodness of that intention.
Have you compared it to others? Strained is the last word I'd use to qualify how effective it is...
The following is a wonderful point I have hardly ever heard said directly:
"The main value in software is not the code produced, but the knowledge accumulated by the people who produced it."
"value your knowledge workers"
"your employees are your most valuable asset"
Some companies don't treat employees well, and some employees at good companies feel they are not treated well enough
If the above quotes do not strike a chord with you, you might just be a software engineer who thinks you're more important than non-SEs.
I'm sure another author has put the same sentiment out there before, but it's not every day I see such a nice phrasing of it.
I mean, the point of short quotes is to be memorable and get future listeners to hunt for the reason behind them. "The sun will rise tomorrow" may also be meaningless for some people on its own.
Nothing wrong with elaborating on this subject again via a blog post, I was just pointing out to the commenter who's never heard this expressed before that it has a long history, that's all.
It also introduces unknown amounts of debt and increases the likelihood that you'll end up with intractable performance/quality/velocity problems that can only be solved by re-writing large portions of your codebase.
This can be a dangerous cultural value when it's not presented with caution, which it isn't here. I think it's best to present it alongside Joel Spolsky's classic advice: "If it’s a core business function — do it yourself, no matter what".
The best advice I can offer:
If it’s a core business function — do it yourself, no matter what.
Pick your core business competencies and goals, and do those in house. If you’re a software company, writing excellent code is how you’re going to succeed. Go ahead and outsource the company cafeteria and the CD-ROM duplication. If you’re a pharmaceutical company, write software for drug research, but don’t write your own accounting package. If you’re a web accounting service, write your own accounting package, but don’t try to create your own magazine ads. If you have customers, never outsource customer service.
This all rings true in my experience. You should write the software that's critical to your core business competency yourself, because the maintenance cost is worth paying if you can achieve better software. But if it's not a core competency and your business isn't directly going to benefit from having best in class vs good enough, then it may be worth outsourcing.
(1) If a problem can be exhaustively specified in a formally well-defined way (mathematical logic), it will be wise to adopt a mature implementation - if it exists.
(2) If a problem can't be so specified, all implementations will be incomplete and will contain trade-offs. I have to address these problems myself to ensure that limits and trade-offs suit as well as possible what the business needs. If I can.
So, (1) says I shouldn't parse my own JSON. (2) says I should avoid the vast majority of what shows up in other people's dependency trees.
The author seems like an unknown in the software development world, but they’re one of the managers for Singapore’s fairly successful digital government initiative. So it does feel safe to say they have some experience.
I suppose he wrote this for other people in the Singapore civil service.
In Python, I typically follow a pattern of keeping stuff in an if __name__ == '__main__' block and running the file directly, then splitting it into functions with basic args/kwargs, and finally into classes. I divide into functions based on testability, btw. Which is another win, since functional tests are great to assert against and to cover/fuzz with pytest.mark.parametrize.
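A sketch of the middle stage of that progression: logic in small pure functions, with only a thin main block, so the module stays import-safe and the functions stay easy to parametrize in tests (the function names here are invented):

```python
# Logic lives in small, testable functions; the __main__ block is thin.
def normalize(values):
    """Pure function: easy to assert against and parametrize in tests."""
    total = sum(values)
    return [v / total for v in values]

def main():
    print(normalize([1, 1, 2]))

if __name__ == "__main__":
    main()  # only runs when executed directly, not on import
```

A sibling script can `from thismodule import normalize` without triggering any side effects.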
If the content of this post interested you: Code Complete: A Practical Handbook of Software Construction by Steve McConnell would make good further reading.
Aside: If the domain .gov.sg caught your eye: https://en.wikipedia.org/wiki/Civil_Service_College_Singapor...
I like to do it early also, to make sure that the new script, if imported from by a sibling module, is inert.
An example would be a scripts/ folder and sharing a few functions between scripts w/o duplicating.
In some cases I don't have a choice. Initialization of a flask app/ORM stuff/etc has to be done in the correct order.
I think the general rule of thumb I follow is: avoid keeping code that'd "run" at the root level. Keeping it in blocks (normally, for me, functions) has the added effect of labeling what it does.
What I don't do: I don't introduce classes until very late. In hindsight, every time I tried to introduce a complicated object model, I feel I tended to overengineer / run into YAGNI.
I came across this article: https://mothership.sg/2015/03/lee-hsien-yang-reveals-the-sto...
> I have taught my children never to mention or flaunt their relationship to their grandfather, that they needed to make their own way in the world only on their own merits and industry.
I keep on saying that Software Literacy is a real thing. And that this current generation of leaders are like Charlemagne - he was the first Holy Roman Emperor and the last who was illiterate.
Interesting to see it in practice
Even Singapore's PM has to put up with smug Haskell programmers
And he was probably the best of what followed as well, so this literacy thing didn't go so well where power figures were concerned...
A good example of this is:
- Add worker thread for X to offload Y
When the actual problem is more along the lines of:
- Latency spikes on Tuesdays at 3pm in main thread
Which may be caused by a cronjob kicking off and hogging disk IO for a few minutes.
A good rule of thumb I've found is that task tickets tend to have exactly one way of solving them, whereas problem tickets can be solved in many ways.
So in that case, I guess either run the job with a lower priority and see if that helps, or execute the job more often so it doesn’t have to catch-up all at once one time per week, or rewrite it so that it performs I/O with smaller chunks of data at a time and sleeps for a little while in-between reading or writing chunks of data. Basically, do something so that you no longer have this one huge job consuming all of the IO bandwidth for several minutes every week.
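The chunk-and-sleep variant might look like this sketch (the function name, chunk size, and pause are invented; on Linux, running the whole job under `ionice -c3` is the usual lower-priority alternative):

```python
import os
import tempfile
import time

def throttled_copy(src: str, dst: str, chunk_size: int = 1 << 20, pause: float = 0.05) -> None:
    """Copy src to dst in small chunks, sleeping between reads so the job
    never monopolises disk I/O the way one big sequential pass would."""
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            chunk = fin.read(chunk_size)
            if not chunk:
                break
            fout.write(chunk)
            time.sleep(pause)  # yield the disk to other processes

# Tiny demo with a throwaway file (pause=0 just to keep the demo instant).
src = os.path.join(tempfile.mkdtemp(), "big.bin")
dst = src + ".copy"
with open(src, "wb") as f:
    f.write(b"x" * (3 * 1024 * 1024))
throttled_copy(src, dst, pause=0)
assert open(dst, "rb").read() == open(src, "rb").read()
```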
There was one periodic job that we moved from the production server to work off the daily backups instead of the live server.
It's not something anyone can diagnose from what you say, it could be anything, even weirdness such as a hardware fault kicked off by something else (office cleaner plugging something in?) causing power spike RF interference affecting the network causing mass packet drops and retries (ok, unlikely but it's not impossible, I've heard of such).
1. Making sure what you build is what was really requested (correct), and
2. Making sure what you've built doesn't have a higher running "cost" than the thing it replaced (either manual process or old automated solutions).
Everything else, IME, is ancillary. Performance, choice of platform, frameworks, methodology to build, maintainability etc. are sub-objectives and should never be prioritized over the first two objectives. I have worked on many projects where the team focused mostly on the "how to build" parts and inevitably dropped the ball on the "what to build" parts. Result: failure.
Sauce: personal experience with several years of different projects (n = 1; episodes = 20+ projects that have gone live and have remained live versus 100+ projects lying by the wayside).
Writing software is not easy.
This is where most companies fail. Yes, they do want the best developers, but for the budget of an average junior/medior dev.
For some reason most companies/managers I worked for do not understand the financial impact of a not-so-good developer. Or the other way around; they fail to value the best developers and are unable to recognize them.
I've worked for plenty of companies where they let mediocre devs build a big app from scratch (including the architecture), in an Agile self-managed team. These are the codebases that always need to be rewritten entirely because they have become an unmanageable buggy mess of bad ideas and wrong solutions.
If every single company wants that, where is the space to grow and learn from mistakes?
Maybe I'm wrong but I think those "mediocre devs" learned a lot building a big app from scratch, solving bugs and refactoring.
If you want great devs, you're going to have to invest in junior devs, and you're going to have to expect them to be learning at work. This is also why the best use of your senior devs is as mentors to your less experienced ones.
But if you can hire great devs who already come with the experience and required skill, and you can pay for it, then why not?
If I'm a dev who is willing to put in the time outside work to improve myself, wouldn't that put me at an advantage when applying for a job, compared to people who are not willing to put in the time?
Senior/junior title designations shouldn't have much importance when evaluating a candidate anyway; what matters is what they can actually do or provide.
Then the project turns out to be months late, even though I called the timeline of the project virtually unfeasible, and we have to go back and make several changes that could've been caught early on with a better strategy.
The problem with hiring the "best" engineers is as follows:
1. Nobody can ever tell you what the best means. People just throw 10x around without any explanation.
2. Most people in the world are average. You simply don't have enough of the best people to handle the work load, even if they're 10x average. So much existing software and new problems exist that it's nigh impossible to have the best everywhere.
3. Many of the best people are able to write really good code, but they consider it so easy that they often write code that they think is correct and it gets put in production. Since they're loners, they often don't do the necessary leg work either because of their own arrogance, or because the company hasn't clearly defined its processes and the developer can't even reach this goal despite numerous efforts. So management just believes the code is correct without any verification.
4. Many average developers support the best ones by taking needed work away from them through comparative advantage. Just because X employee is awesome at Y task, doesn't mean he meets the highest utility by doing Y task all the time. Especially when there are conflicting priorities.
5. The best engineers aren't going to be working at a small company in most cases. They also aren't likely to be paid well outside a large company either. The article cites Google, Facebook, and all the large tech companies and their supposed stringent interview process as a reason. But these companies have written terrible software (Google+, AMP pages) and become ethically compromised easily. Plus their interview process is often so outside the daily work flow because it involves answering algorithm questions, that it often makes no sense. Even worse, it teaches people to do katas instead of build actual projects. Project based interviews make much more sense.
6. Rewriting code bases is one of the worst things you can do and is what caused Netscape's downfall. Companies with supposedly the best engineers (i.e. Netscape) can't even do it well.
So while hiring the best engineers is an awesome goal, it isn't feasible in a lot of cases.
I admit I have some bias as I consider myself pretty average. But I do a lot of crap on the side that "10x devs" don't even hear about because they're working on something more urgent. Does that mean I'm worthless?
it won't help you to have 11 'Lionel Messi's on your team.
good compatibility among players is much more preferable. It's probably better to have small robust teams that can work together, ppl who are avg in most required areas and are rockstars in certain specific ones.
But in this case I think 11 Messi's would win everything there is to win in football for a decade straight.
The claim is:
> Stakeholders who want to increase the priority for a feature have to also consider what features they are willing to deprioritise. Teams can start on the most critical objectives, working their way down the list as time and resources allow.
In other words, the argument is "competing priorities in a large-scale project make it more likely to fail, because stakeholders can't figure out which ones to do first." Actually, in this very paragraph, the author glosses over the real issue: "Teams can start on the most critical objectives, working their way down the list" - treating development as an assembly line input-to-output process.
I argue that it's not time constraints that make complex programs bad, but instead the mere act of thinking that throwing more developers at the work will make it any better. Treating the application as a "todo list" rather than a clockwork of engineering makes a huge difference in the quality of the work. When developers are given a list of customer-facing features to achieve, more often than not the code winds up a giant ball of if-statements and special cases.
So yes, I do agree that complex software is worse and more prone to failure than simple software - but not for the reason that there's "too much to do" or that prioritizing is hard. Complex software sucks because it's requirement-driven, instead of crafted by loving hands. No one takes the time to understand the rest of the team's architecture or frameworks when just throwing in another special case takes a tenth of the time.
There are different personalities of engineers, those who thrive on explicit requirements and can accomplish difficult engineering tasks when they are given clear requirements. But those engineers should only be given those requirements once the job that the customer is trying to get done is clearly understood. Some engineers have the ability to find creative solutions, that customers or product managers can’t see, when they are provided with problems and jobs rather than requirements and tasks.
Managers would be wise to distinguish between the type of engineers they are managing and play to their strengths. Whatever type you have, understanding the job the end user is trying to get done must occur, preferably by an engineer that’s capable of articulating that, if needed, to team members as technical requirements.
> There are engineers who can accomplish difficult engineering tasks when they are given clear requirements, and engineers who have the ability to find creative solutions when they are provided with problems and jobs rather than requirements and tasks.
I feel like I could perform adequately in either environment. The problem is I've previously found myself in environments where I'm expected to come up with creative solutions to a problem, but I have no access to the customer or even a simulated environment where I could try to do something similar to what a customer would do.
In this kind of case, it's impossible to really know how to articulate your requirements, because all you can use is a fantasy model of hypotheticals. But requests for more precise requirements are potentially brushed off as wanting to be spoon-fed what you need to do and having inability or unwillingness to think creatively.
> I argue that it's not time constraints that make complex programs bad, but instead the mere act of thinking that throwing more developers at the work will make it any better. Treating the application as a "todo list" rather than a clockwork of engineering makes a huge difference in the quality of the work. When developers are given a list of customer-facing features to achieve, more often than not the code winds up a giant ball of if-statements and special cases.
But it absolutely does not need to be the case.
If you treat engineers as interchangeable cogs who only need to know about one story at a time, and never tell them about the medium- and long-term goals of the business and the application? Then yes. Then you get an awful code base with tons of if-then crap.
However, it doesn't need to be this way. If you give engineers visibility into (and some level of empowerment with regard to) those longer-term goals, they can build something more robust that will allow them to deliver features and avoid building a rickety craphouse of special cases.
I have experienced both scenarios many times.
This is a misinterpretation of the article's claim. The article very explicitly begins by saying that the best recipe to increase a project's chances of success is to:
> 1. Start as simple as possible;
> 2. Seek out problems and iterate;
The priority part reads to me as a way to determine which features are critical (and hence part of the as-simple-as-possible set) and which ones are not (and hence you should not build "yet"). The underlying vibe being that these other features should probably never get implemented, because once the critical ones get built and the software is put to use, you will actually find other critical features that solve actual problems found through usage.
That is, only when you find that the absence of one of the initially non-critical features has become a hindrance for users actually using your software should you seek to implement it.
I really think this would be a better way to build software, just as much as I think that you will have a very very hard time getting any management on board with it...
This means that instead of the many issues that come from business logic being separate from the data, the business logic and data sit together and prevent your system from getting into bad states.
Thinking about this, maybe I just stole this thought from Derek Sivers: https://sivers.org/pg
A database in my opinion is not a good place to write business logic with functions and triggers, since there is lack of tooling that would make development and debugging easy. Let the database do what it does well, which is storing and querying data.
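One middle ground, sketched here with Python's sqlite3 (the `orders` table and `place_order` function are invented for illustration): let declarative integrity rules live in the database, where they travel with the data, and keep procedural business logic in application code where ordinary tooling and debugging apply.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        id       INTEGER PRIMARY KEY,
        quantity INTEGER NOT NULL CHECK (quantity > 0),
        status   TEXT NOT NULL CHECK (status IN ('open', 'shipped', 'cancelled'))
    )
""")

def place_order(conn: sqlite3.Connection, quantity: int) -> None:
    # Procedural business logic stays in the app, where it's easy to test and debug.
    conn.execute("INSERT INTO orders (quantity, status) VALUES (?, 'open')", (quantity,))

place_order(conn, 3)
try:
    place_order(conn, 0)        # violates the CHECK constraint
except sqlite3.IntegrityError:
    pass                        # the bad state never reaches the table
```

The CHECK constraints are declarative, so they don't need the trigger/stored-procedure tooling the comment above objects to, yet bad rows still can't get in.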
Because there is no formal definition for what is bad or good software. Nobody knows exactly why software gets bad or why software gets good or what it even exactly is... It's like predicting the weather. The interacting variables form a movement so complex that it is somewhat impossible to predict with 100% accuracy.
What you're reading from this guy is the classic anecdotal post of design opinions that you literally can get from thousands of other websites. I'm seriously tired of reading this stuff year over year rehashing the same BS over and over again, yet still seeing most software inevitably become bloated and harder to work with over time.
What I want to see is a formal theory of software design, and by formal I mean mathematically formal: an axiomatic theory that tells me definitively the consequences of a certain design, an algorithm that when applied to a formal model produces a better model.
We have ways to formally prove a program 100% correct negating the need for unit tests, but do we have a formal theory on how to modularize code and design things so that they are future proof and remain flexible and understandable to future programmers? No we don't. Can we develop such a theory? I think it's possible.
The Applied Category Theory folks have some very interesting stuff, like Categorical Query Language.
But it sounds to me what you mean is more like if "Pattern Language" was symbolic and rigorous, eh?
(PDF available here: http://pespmc1.vub.ac.be/ASHBBOOK.html )
Cybernetics might be the "missing link" for what you're talking about.
I'm looking more for a theory of modules and relationships. Something that can formalize the ways we organize code.
It sounds like CT is what you're after (to the extent that we have it at all yet...)
Also the sentence 'algorithms that applied to algorithms produce a better model' has a strong smell of halting problem, at least to this nose.
Intuitively, software can be modeled as a graph of modules, with edges representing connections between modules. An aspect of "good software" can be attributed to some metric described by the graph, say the number of edges: the fewer edges, the less complex. An optimization algorithm would take this graph as input and output a graph that has the same functionality but fewer edges. You could call this a "better design." This is all really fuzzy and hand-wavy, but if you think about it from this angle I'm pretty sure you'll see that an axiomatic formalization can be done, along with an algorithm that prunes edges from a graph (in other words, improves a design by lowering complexity).
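The graph idea can at least be made concrete: model modules as an adjacency mapping and score a design by its edge count. The module names below are invented, and edge count is obviously just one crude proxy for complexity:

```python
# Module dependency graph: module -> set of modules it depends on.
design_a = {
    "api": {"auth", "db", "billing"},
    "auth": {"db"},
    "billing": {"db", "auth"},   # billing reaches into auth directly
    "db": set(),
}

# Same functionality, but billing no longer depends on auth directly.
design_b = {
    "api": {"auth", "db", "billing"},
    "auth": {"db"},
    "billing": {"db"},
    "db": set(),
}

def coupling(graph):
    """Edge count of the dependency graph: a crude complexity metric."""
    return sum(len(deps) for deps in graph.values())

assert coupling(design_a) == 6
assert coupling(design_b) == 5  # fewer edges: "better" under this one metric
```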
A computer program is a machine that translates the complexity of the real world into an ideal system that is axiomatic and highly, highly simplified. Such a system can be attacked by formal theory unlike real world issues like what constitutes a good car.
In my experience, developer-side evaluation has a very low impact (I was about to write: zero) on the perceived and actual goodness of the software itself. Which is tied mostly to factors such as user experience, fit to the problem it was designed for and to the organization(s) it is going to live in (user experience again). These properties do not strike me as amenable to algorithmic improvement, no more than "pleasant body lines and world class interiors" in the original car analogy. But they are a (big) part of good software design, besides being the 'raison d'etre' of the darned thing to begin with.
But let's forget cars, as hard as it is. A few months ago HN was running the story about developing software at Oracle. Now, Oracle may be by now a little soft around the edges, but I think that most would agree that it has been setting the standard for (R)DBMS for decades. Success may not in itself be the tell-all measure of software goodness, but the number of businesses that have been willing to stake the survival of their data on Oracle is surely a measure of its perceived goodness (as that other elusive factor, hipness, tends not to be paramount in the DBMS business).
The development-side story, taken at face value, was pure horror (https://news.ycombinator.com/item?id=18442941). Everything in it spoke of bad, outdated, rotting design. The place must be teeming with ideas on how to improve just about everything in that environment. And yet if that came to be, maybe by some nifty edge-pruning algorithm, it would do nothing to improve the goodness-to-the-world measure of the software, not until the internals' improvement translated to observables in the user base's experience. That type of improvement will still require vast amounts of non-algorithmic design and, in the meantime, run a very concrete risk of deteriorating the overall user experience (because hey, snafus will happen).
This (internals are just a small part of the story) is one of the reasons why so many reimplementations I have seen failed ("hey, let's rewrite this piece of shit and make it awesome"), and the reason why everyone resists the move from IPv4 to IPv6. I could think of many more examples.
This all takes a bird-eye view and a long perspective, very unlike quarter-results-driven development.
This one struck me, because as soon as I read it I knew it was true yet had never considered it:
> Most people only give feedback once. If you start by launching to a large audience, everyone will give you the same obvious feedback and you’ll have nowhere to go from there.
I've been on both sides of that fence and it rings true.
This article is full of good ideas, an antidote to the creeping corporate takeover of software projects. Make it required reading.
The problem is lack of knowledge. The successful projects mentioned above did not have a lack of knowledge, and so they were finished successfully.
When there is a lack of knowledge, then it makes sense to use the iterative approach...as knowledge is slowly gathered, the software gets improved. As with all things in life!
But starting a "gather requirements - write software - deliver it" lifecycle because you are confident that you have all the knowledge is one as well.
Now we have government digital systems leading the charge across most western countries, and we have excellent polemics like this. I am just so happy to see this level of insightfulness at top levels of government.
I am so glad they listened to me :-)
This is spot on, and very much my experience (of the good engineers I've come across).
Kind of : management had planned extensive and painful testing of a component that turned out to be discarded entirely (not because of functionality reduction but because it was actually unecessary).
Reusing good modules and software will make the software work.
KISS engineering still works: keep it simple, stupid. Make it as simple as possible. Simple software and systems are easy to maintain and understand.
Use modules as these can be swapped out.
Use proven boring technology such as SQL and JSON. Boring tech has been tried by others and generally works well.
What makes you think so?
Translation: the successful tech companies have so much poorly documented legacy enterprise spaghetti code and tooling that they need the best talent they can get just to make sense of it and maintain it
* has a better grasp of existing software they can reuse
* (has) a better grasp of engineering tools, automating away most of the routine aspects of their own job
* design systems that are more robust and easier to understand by others
* the decisions they make save you from work you did not know could be avoided
* Google, Facebook, Amazon, Netflix, and Microsoft all run a dizzying number of the largest technology systems in the world, yet, they famously have some of the most selective interview processes
Google views picking new engineers like picking quality construction metals. In the end, the machine melts you down and hammers you into a pristine cog.
I do think perhaps there is too much emphasis on reuse and particularly cloud services. Ironically, this is partly for the reasons given elsewhere in the article. If you rely on outsourcing important things, you also naturally outsource the deep understanding of those important things, which can leave you vulnerable to problems you didn't anticipate. Also, any integration is a source of technical debt, so dependencies on external resources can be more fragile than they appear, and if something you rely on changes or even disappears then that is a new kind of potentially very serious problem that you didn't have to deal with before. Obviously I'm not advocating building every last thing in-house in every case, but deciding when to build in-house and when to bring something in can be more difficult than the article here might suggest.
Perhaps some software development techniques would work though...
> The main value in software is not the code produced, but the knowledge accumulated by the people who produced it.
Those people go on to work on other things or for other organizations. So, while that statement might have some truth to it, it's still the case that the code has to be useful, robust, and able to impart knowledge to those who read it (and the documentation).
> Start as Simple as Possible
That's a solid suggestion to many (most?) software projects; but - if your goal is to write something comprehensive and flexible, you may need to replace it with:
"Start by simplifying your implementation objectives as much as possible"
and it's even sometimes the case that you want to sort of do the opposite, i.e.
"Start as complex as possible, leading you to immediately avoid the complex specifics in favor of a powerful generalization, which is simpler".
> Perhaps some software development techniques would work though...
As you go up the management chain, you usually run into some layer where people are traditional managers, who want to run a software project like a traditional project. And behold, you're at this problem. Saying "software development techniques would work" is useless unless you can get those managers to change. And when you get them to change, the problem moves up one layer.
When faced with a standard solution, use a standard component if you can. If you can't use a standard component, build a standard component. Keep your components simple, well-understood, and easy to maintain.
...While I do agree that "project-management" is important, I think the tools we are using today are really underpowered to deal with complexity/human-error - Which is the bigger problem IMO.
The problem is most CEOs see the binary as the asset, not the knowledge gained. I've tried to explain this concept to multiple startup CEOs, who hire outside development firms, for which it rarely works out for them.
Or the management techniques considered “traditional” are overlooking a century of iterative development outside of software. See Deming.
This site is an empty page without JS.
This is also the real problem with vendor lock-in.
You are more often locked in by the knowledge of your employees than by your tech stack.
What is the definition of "best engineers"? Those with extensive experience? those who follow design patterns and coding standards religiously? those who solve algorithms on a whiteboard? I would like to see if there is a definition for this.
I would say build the right culture (collaborative, always learning from mistakes and revise decisions and no blame or pointing fingers).
You can get a bunch of great coders/engineers _who follow code standards, break down code into zillions of functions/methods ... etc_ but they will fail to work together and conflicts will arise quickly.
The industry these days is more about headcount than quality itself. Why hire two good engineers when you can have three mediocre ones for the same price?
On simplicity, common wisdom these days dictates that we should use bloated kitchen-sink backend MVC frameworks that generate dozens of directories after `init`, because supposedly nobody knows how to use routers. Frontend compiler pipelines are orders of magnitude more complex than the reactive frameworks themselves, because IE11. And even deployment now requires a different team or expensive paid services from the get-go. We're definitely not seeking simplicity.
The second point is also something that most developers and managers would balk at: "To build good software, you need to first build bad software, then actively seek out problems to improve on your solution". Very similar to the Fred Brooks "throw one away" advice that no one ever followed.
I've seen plenty of poor decisions that cause 10x the work, and end up with something 10x less maintainable.
You have entire blog posts by Steve McConnell of Code Complete fame devoted to defending the 10x claim by citing 20 to 50 year old research that shows 5x to 20x differences across certain dimensions and then him falling back to the 10x thing. Not one single sentence where he is being self aware enough to spell out the most likely reason for "10x" being so prominent: 10 is the base of the decimal system and as such psychologically attractive to use.
> Both Steve Jobs and Mark Zuckerberg have said that the best engineers are at least 10 times more productive than an average engineer.
I know I'm venturing into ad hominem territory with this, but first of all: Steve Jobs wasn't a programmer. Mark Zuckerberg, well does he even qualify as a programmer nowadays? How well can he quantify programmer productivity? His decision to use PHP led Facebook to create HHVM and Hack. Is this the 10x developer way?
Anyways, the question to me is: Is it possible for average software engineers to write good software?
If someone suggests you focus on the 20% of customers who make 80% of your revenue, and you run the numbers and find a 75-25 distribution, should you call the person making the suggestion an idiot?
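Running the numbers is a few lines (the revenue figures here are invented to land on exactly the 75-25 split mentioned above):

```python
def top_share(revenues, fraction=0.20):
    """Fraction of total revenue produced by the top `fraction` of customers."""
    ordered = sorted(revenues, reverse=True)
    k = max(1, round(len(ordered) * fraction))
    return sum(ordered[:k]) / sum(ordered)

# Ten customers; the top two dominate, but not at a strict 80/20 split.
revenues = [400, 350, 80, 60, 40, 25, 20, 15, 5, 5]
assert top_share(revenues) == 0.75  # top 20% of customers -> 75% of revenue
```

A 75-25 result still supports the underlying advice: the distribution is heavily skewed even if the round numbers don't match.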
You should seek to demonstrate instead that you're making software that is more malleable, has fewer bugs, is easier for new hires to understand, makes it easy to add new features, etc.
Edit: fwiw id work with ya though. Caring enough to try is half the battle.
> The project owners start out wanting to build a specific solution and never explicitly identify the problem they are trying to solve. ...
At this point, it looks like the article will reveal specific techniques for problem identification. Instead, it wraps this nugget in a lasagna of other stuff (hiring good developers, software reuse, the value of iteration), without explicitly keeping the main idea in the spotlight at all times.
Take the first sentences in the section "Reusing Software Lets You Build Good Things Quickly":
> Software is easy to copy. At a mechanical level, lines of code can literally be copied and pasted onto another computer. ...
By the time the author has finished talking about open source and cloud computing, it's easy to have forgotten the promise the article seemed to make: teaching you how to identify the problem to be solved.
The section returns to this idea in the last paragraph, but by then it's too little too late:
> You cannot make technological progress if all your time is spent on rebuilding existing technology. Software engineering is about building automated systems, and one of the first things that gets automated away is routine software engineering work. The point is to understand what the right systems to reuse are, how to customise them to fit your unique requirements, and fixing novel problems discovered along the way.
I would re-write this section by starting with a sentence that clearly states the goal - something like:
"Paradoxically, identifying a software problem will require your team to write software. But the software you write early will be quite different than the software you put into production. Your first software iteration will be a guess, more or less, designed to elicit feedback from your target audience and will be deliberately built in great haste. Later iterations will solve the real problem you uncover and will emphasize quality. Still, you cannot make technical progress, particularly at the crucial fact-gathering stage, if all your time is spent on rebuilding existing technology. Fortunately, there are two powerful sources of prefabricated software you can draw from: open source and cloud computing."
The remainder of the section would then give specific examples, and skip the weirdly simpleminded introductory talk.
More problematically, though, the article lacks an overview of the process the author will be teaching. Its lack makes the remaining discussion even harder to follow. I'll admit to guessing the author's intent for the section above.
Unfortunately, the entire article is structured so as to prevent the main message ("find the problem first") from getting through. As a result, the reader is left without any specific action to take today. S/he might feel good after having read the article, but won't be able to turn the author's clear experience with the topic into something that prevents more bad software from entering the world.
It seems we are on the path to repeat history with software engineering, what with how software and the internet are being developed with such little regard for public safety and long-term consequences.
Unfortunately, it appears that the "free love" phase of software engineering is coming to an end, as society now relies more and more on software and major tech players for life and safety. It's starting to get real for software engineering.
Luckily, other engineering fields have been here before, so this sort of transition shouldn't be anything new.
Relevant Tom Scott video: https://www.youtube.com/watch?v=LZM9YdO_QKk
> Unfortunately, it appears that the "free love" phase of software engineering is coming to an end, as society now relies more and more on software and major tech players for life and safety. It's starting to get real for software engineering.
Software will always be a spread of reliability requirements, from pacemakers on one side to excel reports on the other. Part of being a responsible user is choosing software with the right balance of economics and reliability for the job.
This is bad advice. It's like saying "go into a bar and start picking up fights".
If some part of the software has problems, runs slow or has bugs but nobody is complaining, then there's no problem. Why waste time improving it?
Almost 100% of the time when you solve a problem you just create new problems of different kind in turn.
Be lazy. The less code you write the better off you are.
This depends very much on context. To pick an extreme example, if you're writing the control software for a nuclear weapon and you know you have a bug that might cause it to activate unintentionally if you eat a banana while it's raining outside, I think we can reasonably agree that this is still a problem even if so far you have always chosen an apple for lunch on wet days.