Just a few other things I call "work prevention" that make good developers appear to be slow:
- unclear specs
- context switching
- peer review (both giving & receiving)
- code review
- status meetings
- waiting for others to show up at meetings
- others not prepared for meetings (no agenda, etc.)
- unnecessary but enforced standards (stupid, pointless)
- programs locked by other devs (get in line)
- pre-existing technical debt (must be fixed before amended)
- framework can't handle it
- works but won't scale
- can't recreate bug
- platform issue (server, DBMS, etc.)
- environmental issues (test server out of date)
- waiting for server resources (testing)
- waiting for regression tester
- waiting for user acceptance testing
- specs changing during user acceptance testing
- uncontrolled deployment "freezes" (month-end, release schedule)
- noisy office
- meetings when emails would have sufficed
- phone calls when emails would have sufficed
- text messages when emails would have sufficed
- continually changing priorities
- delays in offshore communications
- lack of training (technology)
- lack of training (business subject matter)
- too much training (HR bullshit)
- not enough donuts
I can't stand hearing that there's no room in the budget to pay a flat $500 to license a 3rd-party grid control (for example), only to have a $90k/yr salaried developer spend 20 or more hours recreating barely a tenth of its features. It's even worse when the license is free (as in speech) but the library still can't be used, because the code hasn't been blessed by the security department, or because management simply has a prejudicial bias against the terms of the LGPL.
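The arithmetic is worth spelling out. Here's a rough sketch using the figures above; the 1.3x overhead multiplier and 2000 working hours per year are my own assumptions:

```python
# Rough cost comparison: licensing a $500 grid control vs. rebuilding it.
# Salary ($90k) and hours (20) come from the example above; the overhead
# multiplier (benefits, equipment, office space) is an assumed figure.

HOURS_PER_YEAR = 2000          # ~50 weeks * 40 hours
salary = 90_000
overhead_multiplier = 1.3      # assumed fully-loaded cost factor

hourly_cost = salary * overhead_multiplier / HOURS_PER_YEAR
rebuild_cost = 20 * hourly_cost   # 20 hours recreating a tenth of the features
license_cost = 500

print(f"Developer hourly cost: ${hourly_cost:.2f}")
print(f"Cost to rebuild ~10% of the control: ${rebuild_cost:.2f}")
print(f"Flat license cost: ${license_cost}")
```

Even under conservative assumptions, rebuilding a fraction of the features costs more than double the license fee.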
Usually, the argument goes like this:
- We cannot use X until it has been cleared by legal and/or security.
- Legal and/or security do not have the resources available to clear X for our use within a time span that would allow X to be useful.
- As a result, we won't even ask them to try.
- Therefore, screw you, nerd.
- We don't have room in the budget to license X.
- The budget won't change until after the end of the project.
- We can't justify altering the budget to accommodate licensing X unless it would be useful for an ongoing project.
- You won't be allowed to comment on upcoming projects until after the budget is fixed.
- Therefore, screw you, nerd.
Or in the worst case:
- Here is a detailed proposal to use invented-elsewhere thing X in our product Y, with gobs of research and references, and oodles of possible pros and cons.
- We don't understand what you are talking about.
- So we're going to fixate on some tiny, mostly-irrelevant detail, and then blather about it until you go away.
- Therefore, screw you, nerd.
At my previous job I requested an upgrade from a 3-year-old laptop. Initially they approved it, but before ordering one they reversed course and denied my request. The justification was that they'd spent too much outfitting new hires and it was no longer in the budget, despite my request coming long before the interview process for those hires even started. To make it worse, the new hires were entry level, and I had the second-longest tenure at the company (~90 people).
Now I work somewhere that happily provides every technical employee with the latest maxed out hardware. The difference in amount of work accomplished is staggering.
Well, I guess that just depends on how productive you expect me to be over the next quarter.
In reply to the top post, I would add the following task delayer:
- supplier unreliability
We ordered a server from a major manufacturing company, which had a faulty PCI bus (well, hardware defects happen). They took 6 months and 5 different motherboards to ship us the exact same configuration we initially paid for!
Like... you'll pay a developer $100 an hour, but then you ask them to spend an hour in a meeting to justify the purchase of a $50 widget that will save them an hour a day for the next year.
There must be some twisted logic at work, but I don't see it.
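The twisted logic is easy to check with back-of-the-envelope numbers. A minimal sketch using the figures above ($100/hour developer, $50 widget, one hour saved per working day); the 250 working days per year is my assumption:

```python
# Back-of-the-envelope ROI for the $50 widget example above.
# 250 working days/year is an assumed figure.

hourly_rate = 100
widget_cost = 50
meeting_hours = 1              # the hour spent justifying the purchase
working_days = 250
hours_saved = 1 * working_days # one hour saved per working day, for a year

total_cost = widget_cost + meeting_hours * hourly_rate  # widget + meeting
total_savings = hours_saved * hourly_rate
payback_hours = total_cost / hourly_rate

print(f"All-in cost of the widget: ${total_cost}")
print(f"Value of time saved in a year: ${total_savings}")
print(f"Payback after roughly {payback_hours:.1f} saved hours")
```

The meeting to debate the purchase costs twice as much as the widget itself, and the whole expense pays for itself within the first two days.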
I am sure I would be way more effective without my managers. Prioritization would be somewhat different, but many of the day to day problems that hinder me being properly productive would have gone by now.
The question then becomes: is it a secretary/manager/CEO complaining about things 'not being fair', or is it simply that no one was ever asked and IT ended up making a decision on their own?
Sometimes decisions left unspoken end up causing a kind of solidification by accident.
I find the biggest cause is that IT ends up being siloed off (or being an MSP/corporate branch), so they aren't in touch with the workplace's culture and assume it's one of petty ego.
Another factor can be that many of these decisions are viewed through the lens of the company's stock price, which causes a sort of myopia regarding short-term costs vs. long-term gain.
Both of these end up being nasty feedback cycles and can be impossible to break once it becomes status quo.
They might need to categorize all expenses as capex vs opex, etc.
It's a shame this isn't more obvious. I was recently given an old laptop at work as well. So old, that it took 7+ seconds from the time I clicked "Inspect Element" in Chrome for the window to open. It was weeks before a purchase for RAM was made.
- waiting for regression tester
- substandard documentation
...including old compilers and otherwise a lack of support for the compilers, debuggers, editors, analyzers, profilers, etc. that will enable a developer to focus on business problems instead of technology stack problems.
Anthropology of corporate cultures is fascinating: http://www.daedtech.com/how-developers-stop-learning-rise-of...
Can you elaborate on peer review and code review? I have worked with and without these steps and can quantify the hours required and the quality shift from having them.
Do you think these two can be disconnected from the development workflow with acceptable consequences? Or is it that there might be other standards or development measures which are not being met, such that there IS a need for review?
I agree; there's a handful of these issues I'd planned on talking about in future posts, but your list adds a bunch more I hadn't thought of.
Lack of training is a tricky one, isn't it? If you give the wrong kind of training, or too much, or too little, it's easy to miss the mark.
It is all too easy for a developer to blame difficulties on poor specs.
But if you're doing agile properly, you shouldn't consider a user story a 'spec' - a user story should be a placeholder for a conversation.
It is unrealistic to expect a non-technical stakeholder to deeply and accurately describe anything but the most trivial feature. It needs a developer mindset to poke the idea, see what holds water, see where it falls apart, foresee the consequences.
You can't do that in a spec. It needs to be a conversation.
Agile favours 'Individuals and interactions over processes and tools', and this is too often forgotten when someone is pimping their tool, or blaming their process.
Have a conversation, understand what's needed, build good software.
Yes, this is a very good point. Unfortunately it's also a point that many non-software developers miss. Writing software is hard. Very hard. We, as an industry, should be pointing that out more.
> But if you're doing agile properly, you shouldn't consider a user story a 'spec' - a user story should be a placeholder for a conversation.
I'll submit that there's no way to know whether you are doing agile properly or improperly, unless you move to another company.
> It is unrealistic to expect a non-technical stakeholder to deeply and accurately describe anything but the most trivial feature. It needs a developer mindset to poke the idea, see what holds water, see where it falls apart, foresee the consequences.
This smacks of elitism and it is what makes non-developers scoff at us. Developers are not gods, we're not even the smartest people.
We are simply the people who have to crystallize the requirements into a computer that only understands logic. Non-tech stakeholders can completely describe business requirements and can also be made to understand how their requirements fit into the logic of a computer. Non-technical people are not stupid; they just have different interests.
> You can't do that in a spec. It needs to be a conversation.
The spec is the output of the conversation.
> Have a conversation, understand what's needed, build good software.
And this is the most important thing said.
The trick with adhering to your statement is to define the parties involved in each step. Minimize those parties and then implement each step in order as fast as possible without sacrificing quality.
Remove the word "Agile" and the concept of software development, and what you've just described is how work has been completed for thousands of years. I fail to see how Agile comes into play.
> This smacks of elitism and it is what makes non-developers scoff at us. Developers are not gods, we're not even the smartest people.
> The spec is the output of the conversation
It’s important to have developer buy-in. Their passion for a feature can be a huge driver for velocity.
This is why I also appreciate PM tools that incorporate discussion threads: that back-and-forth communication really helps clarify what you're building. I also like using the 5 whys after the initial spec has been written: "Why are we building this feature?"
> It is unrealistic to expect a non-technical stakeholder to deeply and accurately describe anything but the most trivial feature.
Few would consider going out and hiring their own construction crew to build their new house, managing materials deliveries, blockers, scheduling, head-count, etc. That's what a General Contractor is for.
But in software, both (some) business people and (some) developers think they can get away with attempting to go down a well-travelled path (by others) and do it all themselves.
Without that experience and expertise it's more likely than not IME that you'll spend more than you have to, and since you aren't a scheduling wizard and are trying to keep all this stuff in your head, you'll have numerous "stop work"s, conflicts, change-orders for half-baked designs, etc.
If you're building anything beyond the trivial, and you'd like to have a reasonable ball-park for what it's going to cost and how long it's going to take, make sure you're talking to the GC and not the plumber. And run far away from any business that tells you they cut out overhead by removing the GC.
I saw a slide not too long ago depicting a development process. It went something like: Skateboard -> Bicycle -> Scooter -> Car -> Truck.
If I'm the one writing the cheques, I'd be pretty pissed that I paid for four vehicles I didn't need before I got the one I did. Especially when it could have all been easily avoided by asking the right questions.
Agile can favor items on the left over items on the right. That doesn't mean you don't do the items on the right. It doesn't mean you start without a contract. It doesn't mean you don't plan. It doesn't mean you don't document requirements.
There's nothing Agile about having a chat between A and B, sitting at a keyboard, and spending money. That's just cowboy coding. And when it inevitably goes south, best of luck getting the stockholder to fall on their sword for you.
> There's nothing Agile about having a chat between A and B, sitting at a keyboard, and spending money. That's just cowboy coding
If all goes well, maybe you have a happy client at the end of that. If they're OK with costs and time being potentially an order-of-magnitude off.
Most people outside of the software world aren't.
Most people outside of the software world use specs. I feel like if there were a better, more reliable way to build something, anything, it would be common.
But whether it's a car, a house, a boat, a bridge, a road, a piece of furniture, or a motion picture, that's generally not the way it works. Not if you want repeat business, anyway. I feel like resounding success without specs is the exception.
At least it's always been that way in my career. The less planning effort, the less success. The correlation is so strong IME that it's basically like saying fire is hot.
As someone who's dealt with independent contractors for house work, it really sucks for a client not to know the what, for how much, and when. I've never met a client who objected to nailing those down (within reason) before I started sending over invoices for developer hours.
In twenty years I have never known a written specification help achieve that, and it has often hindered it.
I guess your experience differs, but I focus on delivering software my clients love, and so far that has never failed to work.
Lots of developers are pretty bad at this, too; it's more of a systems-analysis skill than a coding skill.
You need a conversation directed by someone with a clear thought process that can encompass what is important in both the business and technical domains, ideally involving both business and technical experts, to get to a spec. But you absolutely can specify what needs to be done in a spec, and if you don't document the outcome of the conversation in one, it's almost always going to be trouble down the road in any significant system. In the best case, it's going to result in the technical staff answering lots of questions from business users (often when neither the original technical nor business folks are in the same role) about what the system is expected to do, rather than actually working on system improvements.
If you replace specs with conversations rather than using conversations to get to specs, then every time someone would otherwise consult a spec to answer a question later, they end up consulting a programmer instead.
Yep. It almost sounded like this article was saying "you're not waterfalling hard enough". Processes are not a substitute for human interaction, they are supposed to enable human interaction.
But it also reflects the real world.
Building good software is hard.
Don't get me wrong, I like the story format for detailing requirements; in most cases it definitely helps (although I know many developers who dislike the story format), but it is not without its issues. The weak link is always the product owner writing the wrong stories within a task.
In my opinion the biggest contributor to tech debt is NOT lack of stories in tasks or poorly written ones leading to vague requirements (although it doesn't help), it is unrealistic timelines and failing to factor in testing. When people are pushed to achieve a long list of things in a short amount of time, they take shortcuts to get them done which appeases the product owners but leaves some landmines in the codebase waiting to be stepped on.
The biggest issue with Agile, in my opinion, is the lack of time devoted to testing. It works on a cycle of "you estimated we can get this much done within this much time." I have never been in a planning meeting that even mentioned testing as part of the normal development cycle; it is always an afterthought when the other big-ticket items are done and time is running out. Time for writing tests and testing should be baked into the Agile process from the start (maybe it is and my experience has been unfortunate); this would significantly reduce tech debt.
I think the issue with testing and technical debt, in my experience, has been that the places I have worked like to call themselves Agile: they have the daily standups that always descend into debates and meetings, they have the planning-poker events, and we do sprints, but they don't fully adhere to the rules of Agile. When you slip extra tasks that were never estimated into the current sprint, that is not Agile. When you change deadlines and priorities midway through a sprint, that is not Agile. I think this problem is more prevalent than people realise. There are misconceptions as to what proper Agile is.
1) when estimating, dev tests are part of the strategy (and, ideally, stories are written with enough detail to make testing strategies clear). Sometimes we review the ticket with QA to ensure that we both understand what's being asked for and what needs to be taken into account. Most tests at this point would include unit tests and functional tests.
2) Once a task is done, the story is reviewed with someone from QA to ensure it works. They suggest a couple of things to try that may require us to make improvements. The goal is to catch 80% of the issues at this point with 20% of the effort, and the pair-testing does a great job of flushing out issues. Here the focus is on functional tests and exploratory tests.
3) The QA team runs their own sprints testing the dev team's previous sprint's work. This is mostly performance and integration tests, but sometimes includes functional testing.
I thought the process was good because we were able to measure our software quality and address it quickly. That said, it feels somewhat cumbersome. This isn't an easy problem to tackle.
Officially, all projects are written using TDD, and all estimates for all tasks should include time to write Acceptance and Unit following the suggestions in "Growing object oriented software guided by tests", along with any integration tests with external software that may be required.
Even unofficially, I've never seen anyone consider a story "complete" until there's sufficient automated tests for it, and there's been at least some manual testing.
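To make "sufficient automated tests" concrete, here is a minimal, hypothetical sketch of what a story's tests might look like before anyone calls it complete. The `apply_discount` function and its business rule are invented purely for illustration:

```python
# Hypothetical story: "apply a 10% discount to orders over $100".
# The function and its tests are illustrative only, not from the thread.

def apply_discount(total: float) -> float:
    """Return the order total after any applicable discount."""
    if total > 100:
        return round(total * 0.90, 2)
    return total

# Unit tests covering the behaviour promised by the story, including
# the boundary case the story's wording ("over $100") implies.
def test_discount_applied_above_threshold():
    assert apply_discount(200.0) == 180.0

def test_no_discount_at_threshold():
    # exactly $100 is not "over $100", so no discount applies
    assert apply_discount(100.0) == 100.0

if __name__ == "__main__":
    test_discount_applied_above_threshold()
    test_no_discount_at_threshold()
    print("all tests passed")
```

The point is less the tests themselves than that writing them forces the boundary-case conversation ("is exactly $100 discounted?") before the story is marked done.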
Another thing we strongly encourage (and do) as part of Agile is regular retrospective meetings on our software development process. If our testing strategy (or lack thereof) were causing fear during refactoring, prod bugs, or difficulty during maintenance, this would be brought up during such a meeting and addressed.
I'm not saying you're not "doing Agile", but your experience very strongly does not match mine.
A good first step to take would be to get your team together and work to define a Definition of Done, which should include both unit and integration/E2E testing (manual and/or automated) being complete - any stories not meeting these criteria cannot be considered as "done" and you can't count the points for them in the sprint.
Of course, initially this will probably cause lots of failed sprints and will decrease your velocity, but you have to see this as a positive: it raises the visibility of the problems. Once you can identify the specific issues you're having and take the steps needed to resolve them (whether that's a lack of QA resource, poor testing culture among developers, or whatever), you'll know that when you say something is "done", it actually is done, rather than hiding a load of extra testing work that isn't complete.
On the requirements front, again defining a Definition of Ready can help - these are the criteria stories need to meet before you will estimate or start working on them. This should include things like requirements being clear, designs/UX being complete (if appropriate) and the story being testable and of a suitably small size for you to estimate with a reasonable degree of confidence.
Once you start pushing back on estimating stories because they don't meet these requirements and educating your product owners what they can do to improve the situation (for example, breaking stories down into smaller chunks, or defining requirements more clearly), you'll hopefully find the situation improves.
Of course, none of this is a substitute for conversations and working with the product team to help them understand what does and doesn't work for you, but I've personally helped make big differences to a team's quality and productivity by taking small steps such as these, in addition to educating the team and business on what "agile" is (without all the BS that some people will try and sell you!) and why this is important to us as developers and therefore to the wider organisation.
If you don't have the process, then you don't have an application to write.
"how it works now, how it should work".
This took at most a month and made life for everybody a lot better.
The article suggests that developers trying to match business concerns with technical implementation take up a disproportionate amount of time, because each feature was not specced out and described properly in a technical manner.
Accumulating technical debt would be a delay purely on the engineering side. The engineering team is not always to blame, though.
PM: Client wants feature X and they need it now!
Developer: Great, I can get started today, but first I'll need clarification on X, Y and Z.
PM: I'll e-mail the client...
Then hours turn into days, days into weeks. By the time they get back, I've completely forgotten what it was we were going to do on the project or why I needed to know X, Y and Z.
That's pretty lucky.
In my experience the best solution is for them simply to be available to the developers to answer questions that arise in a timely manner.
I love the Mad Libs-style form in sprintly, btw.
Here's the link to the audio: https://soundcloud.com/business-of-coding/business-of-coding...
After implementing the requirements, it is found that they did not reflect the actual desired system. Then there are technical issues that were unforeseen. Rather than accepting these technical issues as additional costs, they are blamed on the developer: "if they knew what they were doing, they would have just done it already". Incorrectly or poorly communicated requirements are also blamed on the developer.
To me the root cause is generally a lack of resources in terms of time and money. Contributing to that is the general inability for people to grasp the complexity of software development.
Personally, I have never had a client or budget that I could say was really reasonable throughout. I believe this unicorn may exist somewhere. Certainly I haven't been finding my clients in the most auspicious places.
How can it be done if you don't know if it works? It's troubling that it takes a long time to go from 'done' to 'ready to be deployed', when they sound like the same thing.
I mean, really it's just terminology, but it suggests that there's a lot of tasks that are not being considered as 'real work' that are actually quite time-consuming in practice. It can be useful to look over what work was actually done historically so that perception of a task matches reality.
1) Ward Cunningham explains "technical debt":
2) comic strip - "How Projects Really Work":
Agile isn't a methodology; it's a meta-methodology (a set of principles for selecting and dynamically adapting the methodology in use). If waterfall is the process that produces good results for the set of people you have working on the problems you are addressing, then waterfall is Agile.
What you say about sprints is accurate for Scrum, but while an Agile shop might use Scrum (and especially start with it as a baseline), Scrum is not the same thing as Agile.
There's a good case to be made that not committing to requirements at some point, so that work can proceed toward a fixed target, is suboptimal in the general case. But arguing over whether it's Agile or not, and even more so describing Agile as definitively requiring up-front commitment at some specific level, is confusing metamethodology with methodology and reducing Agile to a fixed methodology, of which it is very much the antithesis.
> - Changing requirements
> - Context switching
That's Agile Programming!
They are called "Change Requests". Put on your big boy pants and replace "Stories" with "Changes" and it makes much more sense.
I agree with some of what's being said here, but not the direction it's heading. Business-driven engineering is a dead end and it can't be patched by hiring consultants to tell people how to play planning poker. If you want to build great technology in-house, you can only do it as an engineer-driven firm.
The real lesson from all this is that you should never let people work on more than one thing at once. Make sure they know what it is.
Neat idea. Fails horribly in practice. Why? The problem with "one thing only" management is that the matter of who chooses each person's "one thing" gets horribly political, and fast. "One thing only" works only if the workers get to choose their one thing, which brings us to open allocation, which leaves me questioning whether we need all of this process that the engineers wouldn't impose on themselves...
Also, "one thing only" tends to create dead time and throughput issues, (a) due to communication issues and blockages, and (b) because people who'd rather have a different "one thing" will often do other stuff anyway. It also tends to lead to a high burnout rate when people can't voluntarily switch between at least two tasks.
>....business-driven software engineering. It does not work.
My interpretation of "business-driven software engineering" (and please correct me if I'm wrong) is one where the business determines the requirements of the software. But if the business doesn't drive those requirements, who will? You've repeatedly shown how terrible the typical dev is at business, and most devs would seemingly rather cozy up to their IDE than navigate the mess that is determining requirements. So who's gonna do it? Besides, which dev doesn't hate a fuzzy spec?
Additionally, or perhaps more fundamentally, a business hires devs to the ends of furthering their, well, business objectives. What, exactly, is so wrong with business setting the agenda of what gets built?
I ask because of a lesson I learned early (and painfully) in my career as a sysadmin: technology exists for the business, not the other way around. Unless, of course, you're a tech firm, where tech itself is your business. And even then, a business has to make economic sense, and as a programmer, I think I'm allowed to say we typically make poor economic choices. MacLeod "Clueless", if you will.
I don't think that it's good when the business (usually, this means the executives) do it unilaterally. Obviously, it's not good for engineers to build with no concern whatsoever as to whether they're building something useful. It needs to be a collaboration focused around letting each side do what it's good at.
I'd like to believe that enough of us have sufficient business sense that we don't need the Agile-style waterfall (no, that's not a contradiction) in which requirements flow from business into "product" into technology. Not all of us make sound economic choices, but I think that a large part of that comes from the fact that most companies promote people with any business sense "out of IT".
As one who's trying to look forward for us as technologists, I guess I'd say that we need to take some responsibility for learning business and politics. The head-in-sand strategy is bad for us individually, but also bad for us as a group.