This is how you can get two years into a project and have managers and clients that think that things are going well when the actual code is an increasingly unmaintainable rats' nest. Good devs confronted with this kind of mess will eventually burn out on sticking their necks out defending necessary but opaque refactoring tasks and move on to greener pastures.
There's some development work needed to be able to capture these things, but it gives Tech the visibility and power to inject some maintenance work into normal sprint development.
I'd love to hear what other people are doing.
This is where I suspect manager-types and developers have a vigorous divergence in values.
Professionals routinely encounter situations where something is wrong and needs to be "actioned" but its wrongness isn't effectively measured by any metric (other than the opinions of the experienced people looking at it).
There is a certain species of manager who has taken Taylorism a bit too far and says that anything not reflected in the KPIs is not real and not getting acted upon. I hope this isn't what you're expressing here, but oh man is that mindset frustrating.
Being shitty is not reason enough, but it's almost always possible to reason about, quantify, and weigh the costs against the benefits.
We have one-on-one catch-ups with staff, we have all-hands meetings, we visit clients, read books and articles, attend industry functions, issue press releases, meet with potential investors, etc.
There's value in all those things, but very rarely do we need to account for the fact that we spend time on them.
Senior engineering staff need that same freedom - it should be totally appropriate for them to make that call that "I'm accountable for the technical quality of this code, and I've decided that this needs to be done".
That requires some sense of business acumen - there needs to be an appropriate balance of technical investment to business investment - but that's part of the role of being a senior dev / tech lead / architect / whatever-your-organisation-calls-them.
I am not saying that you always need a detailed balance sheet, just that it's good to reason about the benefits.
It's especially hard to quantify risks - that's where you really need the expertise.
What I wanted to point out is that there seem to be quite a few developers who actually do not have a sense of business acumen, and are willing to spend a lot of money on low-priority tasks, just for their own personal pleasure.
There is no mature "actuarial science" of software development. The market can provide concrete pricing for new feature development; engineers can't tell you how many dollars of tech debt you're in or give three significant digits on the probability of a major outage tomorrow.
That doesn't make money/risk/time costs which are difficult to measure any less real and it doesn't make them smaller than the ones which are easy KPIs.
Nor can you necessarily look at the movement of measurable KPIs and say "man, that rewrite was a waste of money." Who's to say things wouldn't have been worse without it?
I don't think so. As a pretty good analogy, can you quantify the benefit of replacing knob-and-tube wiring in an old house? Now consider that the house across the street keeps its old wiring for the next twenty years without anything bad happening.
We talk about organisations applying Scrum, but what we're really referring to are groups of people, and depending on those people, their mindset, their history and their current role requirements - they will use the Scrum model in their own idiosyncratic way.
Take the point about technical debt and running so fast with an Agile development workflow that you never get to refactor code or properly document etc.
Even if you just took that in reference to one specific company, the value of, cost of, and importance of these things could be different at different times.
When a startup is running to get MVP out to get customer feedback, it's usually much more efficient to set the basic expectation that the first version of your product will get thrown away once you've learned all the important aspects of what you need to deliver. With that in mind - startup development is a completely different beast than when your product is established and you have paying customers with expectations of up-to-date documentation etc.
Some managers do not have good people skills and manage by just holding developers to account based on the expectations set on their Scrum planning day. Discussions about whether refactoring should be done yet may not even be something they want to know or talk about, and they rely on engineers to factor that into their sprint estimates.
Depending on whether the company is being driven by sales/demo opportunities, or feature roll-out timescales etc. then the call about whether to do something 'quick' or 'right' may drop either way.
For developers it's important to understand the dynamics that are driving the business and what their role is (sometimes you have to 'JFDI'). If you have a good engineering team with a strong leader then these things shouldn't be that visible to anyone else.
It's also important to be able to meet the expectations you set. If you keep delivering late then you'll also struggle to get people to let you do more than the minimum required at the time.
For the business as a whole it's all about the big picture and understanding the decisions you make and what their impact (short and medium term) is.
If you want quick releases and don't have the resources for that to allow for good documentation and/or scalable code, then you need to understand that technical debt is building up and at some point it will need to be paid.
Exactly. In fact, the linked text is a prime example: it professes to address the "standard Scrum as described in the official guide", and then goes on to condemn the notion of story points. Now, I happen to agree with those story-points criticisms, but guess what: that official guide says nothing about story points at all! :o
Utterly dysfunctional of course, but if it's do-or-die, I'd prefer to 'do'.
If we had to lean on a QA department to get this stuff done, I'm not convinced we would have been able to actually deliver that project.
"Oh, since we're adding new email features, we'll need to clean up some of the old email code. That will add additional complexity to this task, so we will estimate it higher."
It works, allows us to still track the impact on velocity, keeps everyone informed, and makes technical debt clear and trackable.
I'll add that some subsystems are a mess but stable. It's difficult to make refactorings with other changes when there aren't other changes to make. I see this in particular with old but stable subsystems that will need to be ported to new environments someday, but it's never a good day to lay the groundwork for that inevitable change.
I think it's correct to defer that work until you need it. You may end up never porting that subsystem, in which case refactoring the currently-stable implementation is wasted effort. As and when porting becomes an actual business requirement it can be prioritized appropriately.
I understand this approach, and it often works. But when the porting is delayed until the last possible minute, it's more likely that hacks get put in because the requirement turned into a hard deadline. Instead of defining a sensible OS abstraction layer, the developers might find-and-replace "Windows XP" with "Windows 7".
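For what it's worth, the "sensible abstraction layer" can be tiny. A hypothetical sketch (the class and paths are made up for illustration) of the difference between the find-and-replace hack and a proper seam:

```python
import sys

# The hack: platform knowledge scattered across every caller,
# e.g. hard-coded "C:/Documents and Settings/app" strings that
# get find-and-replaced at each port.

# The abstraction: one seam where platform knowledge lives, so
# the next port touches one module instead of the whole codebase.
class Platform:
    """Hypothetical seam isolating OS-specific details."""

    def config_dir(self) -> str:
        # Callers depend on this method, never on sys.platform.
        if sys.platform.startswith("win"):
            return "C:/ProgramData/app"
        return "/etc/app"

print(Platform().config_dir())
```

The point isn't the specific paths; it's that porting cost becomes proportional to the size of the seam, not the size of the application.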
Which may well be the right choice for the business at that point.
More generally, it's not like doing it now makes it faster than doing it later: you have to put the same amount of total work in either way. In fact you have to put more work in if you do it now, because you will have to ensure that any subsequent changes don't break the porting. If you have other tasks that are a higher priority than porting, you should do them first, almost by definition. Of course if the port is your highest priority then that is what you should be working on (again almost by definition). Porting should be left as late as possible, but no later.
I think it may be important to raise a few points for consideration here:
- what may be a good choice for the business short-term can also be a very bad choice for the business long-term
- many - personally, I believe most - businesses don't care about the products they make or the services they provide; they care about the money they can make via those products/services. Ergo, quality doesn't matter beyond the point at which the customer has already paid what they were expected to pay
I don't bring this up as criticism, but only to point out that there are two completely different worldviews competing here. One, shared by many developers, is that the product is what matters. The other, shared by the "business types", is that the profit generated by the product is what matters.
I also feel that part of one becoming a professional developer is a shift from thinking about quality of work to thinking about its money-making potential. Which I personally consider a poison to the mind, and it makes me hate working in companies. But those are just my personal feelings.
The 'business people' are not qualified by themselves to say that the cost-benefit analysis of the refactoring is worth the effort. Because while they may have a good idea of the customer benefits, they generally have little insight into what the future costs of technical debt are. You see this obviously demonstrated by 'business people' who are shocked that there is work to do on five-year-old code that has been working fine for years. Any reasonably experienced developer, and many junior developers besides, can tell you that code rots if it's not maintained properly.
IME that's very rarely the highest-value thing you can be doing for the business. It's not worth paying for flexibility that you're never going to use, and it's not like it's going to take longer to refactor later than it would to do it now.
> The 'business people' are not qualified by themselves to say that the cost-benefit analysis of the refactoring is worth the effort.
Yes they are. Cost-benefit analysis is their job. It's your job to give them an accurate picture of the costs and the benefits.
> Because while they may have a good idea of the customer benefits, they generally have little insight into what the future costs of technical debt are. You see this obviously demonstrated by 'business people' who are shocked that there is work to do on five-year-old code that has been working fine for years. Any reasonably experienced developer, and many junior developers besides, can tell you that code rots if it's not maintained properly.
Yes and no. If you don't need to make changes to a given system then it's fine for it to "rot". Presumably there is a business reason they want to make changes, in which case bringing the code up to a point where you can make those changes is part of the cost of that business-level deliverable.
Except at that point, there's likely a hard (usually arbitrary) deadline, meaning you don't have the time. So you end up hacking stuff up again, and the tech debt doesn't get addressed. Your best people end up getting frustrated that they're not being listened to, and that they constantly have to explain this stuff to the business people, and eventually they leave.
It is so when you have to build on top of something you will have to refactor later. I think the miscommunication here is happening because the person you are replying to is assuming that case, and you are assuming the case where the code to be refactored is isolated from the rest of the system.
No, but "faster" isn't necessarily the point. Doing it correctly leads to it being "faster" because you don't have to constantly go back and fix bugs because you used ugly hacks to get stuff done in the short timeframe you had because the company decided to wait until the last minute.
If the organization is not sane, that's another thing of course (and often the case, sadly).
The fault is almost all on the management side because, ipso facto, they're the ones making the decisions that drive this outcome.
In the real world, there are outcomes that are much worse than this. But this does have the effect of neutralizing the value of scrum: Management gets what implementers decide they get, and scrum is just the way middle management is told the story that they pass up the chain.
It's the only way that seemed sane without twisting user stories into weird refactoring or architecture stories, and without keeping the team from writing garbage code while waiting for these "tech stories" to clean it up.
The PO should try to balance this with feature implementation, and the team + SM should make it clear when it's needed. This seems to work fine in an environment based on collaboration between PO, SM and dev team, but will probably fail in an adversarial one.
If I had asked beforehand to invest time for this, the response would be "let's postpone this (forever)".
Let's just remember that Scrum is just a tool that tries to replicate some practices of the Toyota Production System (TPS). An important principle of the TPS is continuous improvement, which Scrum has through retrospectives. Another principle is quality built in.
Now, how to avoid having a rat's nest after 2 years?
First of all, you have to remove velocity as a goal for your team. It's easy to game, and it de-incentivizes finding a _predictable_ velocity, which is the goal of that tool.
The goal of velocity is to know the "cost" of building a healthy system.
i.e.: Your team ships an arbitrary 10 points per sprint by doing what they think is right. The same team could ship an arbitrary 20 points per sprint. 10 is the _true_ cost of building a healthy system. There is honestly not a lot more for a good team to do at that point. Management can't say anything, because you bring them predictability, and that's mainly what they want. They might think you're slow, but hey, all of this is relative.
Suddenly, what does that mean when your team only ships 5 arbitrary points? That they had to push extra. That there was an unexpected problem. That things were on fire and you had to stop working on features entirely.
Basically, there is something to retro on: find the root cause, anticipate it in the next sprints, and use that to get back to the healthy level of points.
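To make the "velocity as a diagnostic, not a target" idea concrete, here's a minimal sketch (the history, numbers and threshold are all invented for illustration) that flags sprints whose completed points fall well outside the team's historical baseline:

```python
from statistics import mean, stdev

# Hypothetical history of completed points per sprint.
history = [10, 11, 9, 10, 12, 10]

baseline = mean(history)
spread = stdev(history)

def needs_retro(points: int, threshold: float = 2.0) -> bool:
    """Flag a sprint whose velocity deviates more than `threshold`
    standard deviations from the baseline -- a signal to dig for a
    root cause in the retrospective, not to assign blame."""
    return abs(points - baseline) > threshold * spread

print(needs_retro(5))   # far below baseline: worth a retro
print(needs_retro(10))  # normal variation: nothing to see
```

The exact threshold doesn't matter much; the point is that the number triggers a conversation about causes, never a judgment about effort.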
I used this at several places. It demands trust, but it always pays off.
With some teams I worked with, we set up a technical prioritization planning meeting every week, where we basically decided what was going to be prioritized for that week. Each team picked 2-3 things to do on top of the features.
You don't tell anyone. You do it.
Unfortunately this has become a cliché whenever someone criticizes scrum, Agile in general, etc.: it's never the fault of the idea/process, it's the fault of the manager/team for not understanding it properly or not doing it right.
Even if these philosophies and processes are so amazingly great when properly understood and implemented, the fact that hardly anyone seems to be able to properly understand and implement them would be a fatal flaw.
If you are implementing Scrum, I don't see how you do it without the Scrum Guide (which defines Scrum), which is both quite short and does, in fact, go through the underlying principles.
Well, almost every time I see Scrum criticized, the described process is not Scrum and merely cribs ideas from it without understanding the purpose or how things inter-relate.
So it's a fair criticism.
In my experience, it means some developers failed to game the system that particular time.
But you can be sure there were many other problematic moments in previous sprints, where the developer could hack together an ugly mess of code to avoid the shame of failing in front of everyone in a meeting and dragging the team's points down.
Because the unwritten thing about points is that they are used to shame people. In public. Sometimes not explicitly, but the feeling is there. It's always there.
That's why, if you have to work with and/or manage a scrum team, making velocity not the focus of the sprint is step 1 for a sane process. We count points, but it's not a contract. We try to reach the points, but it's not a contract.
In my parent comment:
_First of all, you have to remove velocity as a goal for your team. It's easy to game, and it de-incentivizes finding a _predictable_ velocity, which is the goal of that tool._
This sort of thing is only useful for managers who don't understand what is going on. Since they are unable to look at code and understand it, 'points' give them a semi-opaque substitute.
If management trusts the team, it is not necessary.
I know I do.
If you are a leader, and not just a manager, you will help your programmers improve their skill, so over time you spend less and less time managing them. Your programmers will appreciate it because with greater self-management comes greater happiness and job satisfaction.
The extra layer of indirection helps to account for uncertainties in the task, imprecision in the estimate, and chaos (in the scientific sense) in how long individual tasks take relative to aggregated historical metrics.
The drawback to units of effort is that estimates eventually bubble up to non-technical people who don't discern the difference between effort and time, and for most projects there are going to be some time-based constraints on the schedule.
You can still take a probabilistic approach to tasks that might interrupt you.
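One way to do that (a sketch; the capacity figures and interruption rate are invented for illustration) is to fold a historical interruption rate into a Monte Carlo simulation of the sprint and quote a completion probability rather than a date:

```python
import random

def completion_probability(effort_points: int, capacity_points: int,
                           interrupt_rate: float = 0.2,
                           trials: int = 10_000) -> float:
    """Monte Carlo estimate of finishing `effort_points` of planned
    work in a sprint, where each point of nominal capacity has
    `interrupt_rate` odds of being eaten by unplanned work
    (support tickets, outages, meetings)."""
    random.seed(42)  # reproducible for the example
    done = 0
    for _ in range(trials):
        usable = sum(1 for _ in range(capacity_points)
                     if random.random() > interrupt_rate)
        if usable >= effort_points:
            done += 1
    return done / trials

# A 20-point plan against 25 points of nominal capacity:
print(f"{completion_probability(20, 25):.0%}")
```

Telling a stakeholder "about a 60% chance by Friday" is less satisfying than a date, but it's honest about the chaos the parent comment describes.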
Mostly joking but plenty of places do work like that.
No jokes. It's Brad's fault.
I swear I remember reading a piece on how having a team member leave can be good for cohesion and efficiency, because everyone is free to vent about old, bad decisions without targeting anyone still at the company. I think it was part of a story about a team clearing their technical debt by convincing a reluctant manager that they needed to undo the last guy's mistakes.
You often need to be more subtle with them (x) than blatantly calling out their bullshit every time...
(x): I mean trying to influence them so they progress and your life gets better, not shitting on them
Why can't we make some sub-component this sprint then the UI bit the next?
I tried various ways of reframing it such that the developer of the UI be the "user" but it didn't wash.
What if you finally hook up UI to the sub-component and the customer/stakeholder decides they don't like any of it? You could have known about it earlier.
I think it's important to understand that the sprint "membrane" is not designed to help developers. It's designed as a compromise between developers and their managers.
Developers want to work uninterrupted and perfect their work before releasing it, and managers want working software quickly and like to interrupt developers and change courses frequently. Sprints are an attempt to find a middle ground, but it's not always ideal.
No, it states the reasonable conclusion that all valuable features can be broken down into such (because otherwise the feature adds no value to the product).
There are also Scrum tasks around Research, Tech Debt, etc. that are perfectly fine to create and work on, but they have an effect on your overall time to build new features, and that's ok. Because it needs to be done, so they'll get prioritized accordingly.
Systems like scrum try to wrangle into a manageable bolus a process that -- if we're to make good software -- necessarily includes creativity, inspiration, and the traveling of paths yet unseen. It's like writing a novel and having two-week deliverables like "complete the arc of the Alice character", versus "write approximately 100 pages". It's not valueless to write 100 pages, despite not finishing the Alice section, and perhaps specifically because we discover that Alice's emerging story turns out to intersect perfectly with what we want to do with the Bob arc later on.
So there's writing and engineering and lines of code versus plotting and architecture and inspiration. We need all of it, right? Does everything that's not a "valuable feature" have to be shunted into the cul de sac of a research spike, doomed to be frowned upon by the management for whom the system otherwise provides the sheen of predictable velocity? Does the critical work of "dreaming" of what to do at both macro and micro levels become a casualty of the banal necessity of marching equal-sized boluses through the development tract?
I'm making a florid point, but to me this feels like the essential tension.
I've yet to find any that can't, and I've yet to hear of any either. Usually that's a problem with the people trying to break things down not being used to modeling things differently - not a problem with the process. It's pretty common.
I'd love to hear some examples tbh.
> There's nothing magical about being able to partition a deliverable into a sprint's time frame that, by such distinction alone, makes it valuable versus valueless.
Of course not! The point is not that the time-boxing makes it valuable - I hope I didn't imply that - it's that all features that have value should be specific enough to be broken down. If something is too vague, it's not a valuable feature, because at that point it's just pie-in-the-sky spitballing. That's what requirements and grooming are for - to identify what needs to be broken down, expanded on, or specced out better by the Product Owner.
> It's like writing a novel and having two-week deliverables like "complete the arc of the Alice character", versus "write approximately 100 pages".
That would be an awful Scrum process. Also, the Product Owner is the Sprint Team in a novel, but let's pretend they are two separate people for this: here's how that should work in a Scrum system...
- Story: "complete the arc of the Alice character"
- Feedback: Too vague. What IS the arc? What perspective should it be from? Lots of questions, needs to be broken down.
Next, after the product owner has worked out the beats we're going to hit in a three-act structure...
- Story: "complete Alice's arc for Act One, with exposition introducing her and the other characters, ending on the inciting incident for the main plot"
- Feedback: Better, but this is really 3 things - the introduction of Alice, the intro of the other characters, and the inciting incident. You should split those up.
Next, the PO has split this up into those three Stories and presents the first one...
- Story: "We need to introduce Alice so the audience can start to get to know her."
- Feedback: Great, this seems low complexity and simple. We have requirements for her backstory? Ok, great, let's work with that. Seems like a Complexity 2 Story.
Then, when Sprint Planning...
- Scrum Master (not PO): "Ok so we were planning on getting this Alice introduction done this sprint, we still ok with that?"
- Everyone: Yup!
- Scrum Master: "Ok, let's break this down into tasks of things we want to cover then..."
And then that part of the story gets written. Obviously it's an odd metaphor, but that's kind of how many (not all, at all) professional authors break down a lot of their writing process anyhow. Some are more freeform of course, but many plan a lot too.
The point is - that just because you have a planning step doesn't remove the creative process, it helps you plan better for work. That's all.
Scrum doesn't magically change how you code or what you code, it's just a planning and change management tool that emphasizes incremental steps.
And my point is, if we're focusing only on short-term client-recognizable value, we may as well just make shiny prototypes. Who cares about the pesky internals anyway.
They provided quite a bit of short and long term value by being better at giving clients an understanding of what they were actually getting. They cared about the internals and making sure their clients understood the logic the internals would use.
If I was skilled in some animation tool like they were I would do the same.
Erm. But it's you who is the customer in this scenario, not him? Or am I missing something?
To which I have 2 responses:
1. not all features need a UI to be useful
2. this also demonstrates the infantilising nature of scrum where no developers can be trusted to think deeply, talk to stakeholders and otherwise do the right thing in a fully-rounded way but must just follow the exact instructions expressed
The core idea I was trying to get across is that until a feature is working and in front of a customer (or stakeholder), it's essentially in limbo because you don't know if you've built what they wanted. Maybe there was a miscommunication, maybe they find the feature confusing, maybe they've changed their mind. The goal is to get feedback as soon as reasonably possible.
eg: A stakeholder (or customer) requests feature X, and everyone agrees it's a good idea and we should work on it right away. The dev team could spend 2-4 weeks writing excellent behind-the-scenes code that's not hooked up to anything, or you could spend 2-4 weeks on holiday. Either way, you've given the stakeholder the same thing: no new feature.
If you're confident that you know exactly what it is you want to build then you don't need Agile, scrum or sprints. Scrum isn't supposed to be waterfall with arbitrary reviews every 2 weeks.
> As a customer, you've given me no value.
> You, being a customer, have given me no value.
This is what I don't understand.
> [From my hypothetical perspective] as a customer, you've given me no value.
Or more concisely:
> [Speaking] as a customer...
Are you surprised? The developer isn't the user of the UI, product wise, so no wonder it didn't get very far.
The trick in this case is to break the story down. So the original story isn't do-able in one sprint? Ok, so what is actually the MVP of that Story? What's the first block that builds the overall feature? Take a 5 point Story and make it three 2 point Stories or something. That's what the refinement step is for in Scrum.
Unless you're doing abnormally short sprints there should be some part of that Story that can be abstracted into a smaller Story that fits into a sprint.
Because that's how you get bad UI. The user-facing design needs to drive the API interface, not the other way around.
I don't see how this need would nullify the ability to modularize code.
In a country where logic reigns, it depends.
The moment you start getting dogmatic about your process is the moment your decisions start being driven by something other than your actual needs at hand. And that's the moment when you start to produce a bad product.
Build Quality In
Optimize the Whole
I've never practiced it and my only exposure has been reading some of the book "Lean Architecture". But it certainly seems like an improvement over Scrum which seems blind to some of the most critical things in developing complex innovative systems. In fact I'm convinced that Scrum was designed to work with basic information systems where a feature consists of essentially adding a new data entry form or report for a database (e.g. a lot of web stuff)
> ...one can claim that this is not the job of Scrum, which is a software management methodology, and not a software engineering methodology, that it's only concerned with organizing the teams' time and workload
A manufacturing process can only drive quality if both the primitives and end system have gone through some sort of engineering process. Web developers love this stuff because they have components, frameworks and clients where the engineering details are taken care of in MANY scenarios.
Using a contrived scenario involving LEGO -- you cannot expect kids and parents to design new LEGO parts. The components are designed and engineered to work together in a completely different context. Carrying this further, if you want kids to assemble a consistent, specific artifact (Say a Star Wars X-Wing fighter), someone at LEGO needs to design that and produce documentation. Designing on the fly with sprints ("ok, kids, today figure out how to make a wing") isn't going to be a productive exercise.
Most SCRUM projects that I have personally seen fail are projects where "agile" is a codeword for "we didn't think this through, so we'll figure it out as we go". Then the "sprints" become a real joke as the team repeatedly runs into a wall.
> if a user story requires a refactor or architectural change then that's just fine
As though a useful, usable product design (architecture) could be divined through piecemeal revelation.
The debt dial should be from 0% (spend all the time refactoring) to 100% (spend zero time refactoring, get it out at all costs).
Management should have complete control over the dial. Developers should have control over what kind of refactoring they do (ideally the retro should have a question "what debt/tooling issues caused you the most pain this sprint? and a parallel track for debt/tooling stories").
I'm not sure I ever found myself in the position of writing bad code just for the sake of speed - I surely wrote tons of bad code because I didn't know how to do it better, or because I didn't have the requirements clear from the start, or because of bad design and planning. But to get a feature out quicker, no. If anything, it seems to me that writing bad code requires more time than writing clean and elegant code.
In the end, "technical debt" becomes a way to shift the blame from your own (the team's) inadequacy at planning, designing and developing, towards supposed time constraints that always lie outside of the team's responsibility.
Time constraints can certainly be relevant too. It's not always an artificial shift of blame. For instance, time constraints could be why the code passed review to begin with.
And some programmers certainly do write worse code when they have to do it quickly. Typically, the code itself doesn't look all that bad in a vacuum, but it presents problems months down the line when a new feature needs to be added or an existing one changed in a non-trivial way. There was little forethought in its design.
Or, let's just look at the ways you said you might write bad code:
> I surely wrote tons of bad code because I didn't know how to do it better, or because I didn't have the requirements clear from the start, or because of bad design and planning
In other words, these things can cause you to write bad code:
1. You just didn't know better.
2. Unclear requirements at the start.
3. Bad design and planning.
In (1), you might realize after writing some code that you didn't quite know what you were doing and you should refactor it, but you're now under pressure from management to just get it out. Oops, no time to refactor. Now code that you know is bad is going into production, and it'll bite you in six months.
For (2), the reason the requirements weren't clear is because not enough time was spent by management/product owners/designers on clarifying said requirements.
For (3), it's the same thing -- bad planning is often the result of time constraints (notably, time constraints which may not be visible to you as a rank and file programmer).
I think I have yet to see any software to which this doesn't apply sooner or later. While the whole point of "technical debt" is that it is something you're supposed to knowingly acquire because of time constraints. You're basically saying "we knew it was wrong, but they forced us to do it that way". While to me, most of the times, the truth is that you really didn't know. Yes, you coded in a hurry, but there is always a time constraint of some kind so that's no excuse. Somebody else in the same time would have done a better job.
As for my points: if I realize I didn't know what I was doing, I always refactor the code. Committing code that you know to be conceptually wrong is just sloppy. And if I am under pressure from management, it is because I spent time developing without understanding what I was doing; a better developer would have gotten it right on the first try, and there would be no technical debt.
If the managers/ product owners didn't produce clear requirements, it's not a technical debt, it's a sloppy job on their part.
If the planning was wrong, that's a sloppy job on the part of whoever had to do it. Responsibility should be assigned and action should be taken. Saying "ah yes, you know, we were under pressure, so we (kind of naturally) accumulated this technical debt" is just a way to save everybody's face.
While this reasoning is probably not technically wrong, I'm not sure if it's relevant to the real world.
You can always make an argument of the form "there exists a developer who could have gotten this feature right on the first try, with very little time spent." This simply does not matter when you do not happen to have that developer on your team right now. The nature of the work is that you will work on a variety of things, and you won't necessarily be the best in the world at every individual one. You're inevitably going to encounter work that's challenging enough that you don't get it perfectly right on your first try.
> If the managers/ product owners didn't produce clear requirements, it's not a technical debt, it's a sloppy job on their part.
It's a sloppy job on their part, induced by time pressure, which produces technical debt. I feel like you're just playing with definitions here to avoid admitting that technical debt can come from poorly managed time pressure.
> if I realize I didn't know what I was doing, I always refactor the code.
All that tells me is that you've never been under a lot of time pressure. That's not a bad thing. It most likely means the management at your companies have been competent. But it doesn't mean that technical debt does not exist or cannot be induced from time pressure in other companies.
As for your other points: maybe one of my team members produces consistently more technical debt than the others. Is it still technical debt?
Managers or designers can be the source of time pressure for those down the chain, because bad planning or decisions, made in the absence of time pressure, can force others to work under pressure. Again, "technical debt" masks the real problem.
I might not have worked in extremely high pressure environments - but I've surely worked in teams where we were leaving the office at ten or eleven pm every night for months on end just because demented design decisions had been made by the much respected solution architect.
Why wouldn't it be?
Creating lots of extra technical debt, in fact, is a defining feature of poorer developers.
>Creating lots of extra technical debt, in fact, is a defining feature of poorer developers.
I think we agree then. It's just that "technical debt" makes it sound, to me at least, inevitable and impersonal, while it is possible (at least for substantial amounts of it) to ascribe it to specific people and to avoid it by hiring better people.
This is possibly the most wrongheaded comment you've made.
1) There is no such thing as "no technical debt". It asymptotically trends to zero but never, ever gets there. If you think that you or anybody else is the kind of developer who magically creates debt-free code all the time, then you're deluded.
2) The "right first try" argument is wrong. You shouldn't even try to get it right first try - that's the whole point of red/green/refactor. You're supposed to get it working and then clean it up because prematurely 'cleaning up code' is an inefficient way to work.
It's not called red/green/refactor-if-you're-too-shit-to-get-it-right-first-try.
As for your second point, what should I do? Try to get it wrong? To get it working how? We're not talking of premature optimization here, we're talking about understanding the requirements, understanding the tools, understanding the big picture, understanding your time constraints, and pulling out the best job you can.
1. I use a particular technique/abstraction/whatever to solve the problem, it solves the problem well.
2. Over time we solve other problems elsewhere. As we go, the bigger picture becomes clearer and we pick more suitable techniques/abstractions/whatevers as we go.
3. Eventually we have to solve a problem that interacts with the original problem, and the older techniques/abstractions/whatevers don't work cleanly with the newer ones. So we have to make a choice:
a) Hack something together that solves the current problem without us having to touch much of the older code.
b) Rewrite the old code to match the newer technique/abstractions/whatevers.
c) Put down tools and thoroughly evaluate whether there's an even better technique/abstraction/whatever that solves the old problems and the new ones.
We all know that (c) would give us the best code, but it's also likely to mean we never get anything done, because every new problem means reevaluating everything. (b) happens more often, but in reality we usually end up doing (a) due to various pressures.
At no point has bad code been written, but there's technical debt nonetheless.
Over time we become better at adopting patterns and architectures that allow for clearly defined boundaries and reduce the cost of mistakes. You still get technical debt (because just about anything you want to change can be considered technical debt), but it doesn't tend to cripple your ability to get things done.
I've worked on five year old technical debt. It meant that bugs were far more common and fixes/new features took 10-15x as much effort as they would have otherwise.
It wasn't that it didn't 'call' for a refactor - it's that the team didn't respond to the problems by refactoring. They tried the following instead: heavy manual regression testing before each release (which happened once in two years), waterfalling, longer and longer feature/code freezes, and keeping multiple branches around for different customers.
The managerial response was to hire additional mediocre developers, which made the problem worse, but it wasn't as if hiring better developers would have made development immediately quicker and less risky. Paying that debt down to a reasonable level was impossible with mediocre developers and would have taken ~36 months with good ones (while also working on bugs/features).
>As for your second point, what should I do? Try to get it wrong? To get it working how?
You should do red->green->refactor.
After writing a failing test, your only priority should be to make the test pass. Not elegant. Just passing. Once it's passing, then make it elegant.
The reasons for this are twofold:
1) You're solving fewer problems at the same time. Something you want to avoid as much as possible as a developer is to have to juggle 40 different competing problems at the same time.
2) Refactoring-driven architectural decisions are ~95% of the time better decisions than those made during up-front design.
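The cycle being described can be sketched in miniature. This is a toy example with a made-up `slugify` function, not anything from a real codebase:

```python
# Red: write the test first; it fails (NameError) because slugify
# doesn't exist yet.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("Hello   World") == "hello-world"

# Green (first pass): the quickest thing that worked was
#   return text.lower().replace(" ", "-")
# which passed the first assert but not the second.
# Refactor: clean it up while keeping the tests green -
# split/join also collapses repeated whitespace.
def slugify(text):
    return "-".join(text.lower().split())

test_slugify()  # both asserts now pass
```

The point is that elegance arrives in the third step, after the behaviour is pinned down, not before.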
>We're not talking of premature optimization here
It's a closely related problem but it's not identical.
To my mind the "debt" metaphor captures important intuitions: it accumulates interest, slowly at first but rapidly if you have too much of it, it can look like it isn't a problem until it is, taking on more is often an easy way out of your current situation in the short term.
It's absolutely not that. Technical debt is a natural by-product of working even with the best coders. There's nobody out there who doesn't create it.
Better coders just produce it more slowly and clean it up more often.
>I'm not sure I ever found myself in the position of writing bad code just for the sake of speed
I could literally spend all of my time making code nicer and none at all developing features/fixing bugs. It's always a trade off between speed and quality.
Ramping up technical debt isn't always about speed, either. It's sometimes about risk - it's often less risky in the short term to copy and paste a block of code than it is to change a block of code and risk breaking something else.
The point of the dial is just to make the trade off between quality and speed that individual developers are making every day both explicit and management's responsibility.
It means that if the dial is turned up to 100%, management has no excuse for asking "why is our product a pile of crap?". It also means that if the dial sat at 60% for a year and a half, the developers have no excuse for the product still being riddled with technical debt. Skills problems are thus distinguished from time constraints, and managerial pressure comes with a cost attached.
And yes of course, as we all know scrum is by definition successful, and all the teams that fail are not following the true faith.
The other interpretation might be a neat solution: allow developers to indicate publicly what proportion of the time their last tasks actually took they would have taken had there been no technical debt. That would translate developer concerns into business-speak pretty effectively.
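As a toy sketch of that reporting (the function and the numbers are made up for illustration):

```python
def debt_overhead(actual_hours, ideal_hours):
    """Per-sprint summary: how much longer tasks took than developers
    say they would have taken with no technical debt in the way,
    as a percentage."""
    return (sum(actual_hours) / sum(ideal_hours) - 1) * 100

# Developers self-report both numbers for each finished task:
overhead = debt_overhead([10, 6, 8], [5, 4, 7])
# -> 50.0, i.e. "technical debt cost us 50% extra time this sprint."
```

A single number like that is crude, but it's the kind of crude a business can act on.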
Management decisions are usually CYA based, and leaving the dial set at 0 both exposes them and gives developers a get out of jail free card.
i.e. leaving it at zero is suicide. That's the whole point of making it a trackable dial.
Probably what would happen in most cases is it would fluctuate between 30% during average times and 0% during crunch times.
I'd expect any manager worth their salt to put that at 10% or 20% (basically anywhere NOT 0) and use that number as a reminder to devs that part of the job description includes MAINTAINING the system in proper condition, not just piling new stuff on top of the existing random stuff.
Same as a car needs regular maintenance, a dev project needs regular maintenance.
* Tight coupling (this would include global variables, among many other things)
* Lack of code cohesion
* Code duplication
* Code that doesn't fail fast (e.g. weak typing).
* Variable/class/method naming that is either not sufficiently disambiguated or is wrong.
* Lack of tools to run and debug code
* Lack of test coverage
Whenever I go looking for code to clean up this is what I keep an eye out for.
I'm pretty sure that each of these could be measured empirically somehow (I've read papers to that effect for a few), but we're not quite there yet in terms of tooling, or even in agreement over what technical debt actually is. Give it 5-10 years.
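As a taste of what such tooling might look like, here's a deliberately crude sketch for one item on the list above, code duplication, by hashing windows of normalized lines. A real tool would compare tokens or ASTs rather than raw text:

```python
import hashlib
from collections import defaultdict

def duplicated_blocks(source, window=4):
    """Flag repeated runs of `window` normalized lines: a crude proxy
    for the code-duplication smell."""
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    seen = defaultdict(list)
    for i in range(len(lines) - window + 1):
        block = "\n".join(lines[i:i + window]).encode()
        seen[hashlib.sha1(block).hexdigest()].append(i)
    # Any digest appearing at more than one position is a duplicate block.
    return [locs for locs in seen.values() if len(locs) > 1]
```

Even something this naive catches copy-pasted blocks; the hard part, as noted, is agreeing on what to measure, not measuring it.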
But at the same time there is definitely some underlying real quantity. Different developers may not agree on every single aspect, but they'll agree about the difference between a good codebase and a bad one, and it really does take longer to make changes to the bad ones.
So what can you do?
The time it takes to change code that is already written.
For instance, if you want feature N+1, but to do feature N+1 you need to change feature N, then the amount of time you spend changing feature N is the technical debt.
So when you are estimating, you could say: We need to refrob the whozzit to make it compatible with foo 2.0, then that work could be captured as technical debt.
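A minimal sketch of capturing that, assuming a hypothetical `WorkEntry` record that developers fill in per task:

```python
from dataclasses import dataclass

@dataclass
class WorkEntry:
    story: str
    hours: float
    rework: bool  # time spent changing existing code to enable this story

def debt_ratio(entries):
    """Fraction of total effort spent reshaping already-written code
    rather than building the new feature itself."""
    total = sum(e.hours for e in entries)
    return sum(e.hours for e in entries if e.rework) / total if total else 0.0

log = [
    WorkEntry("build feature N+1", 6.0, rework=False),
    WorkEntry("refrob the whozzit for foo 2.0", 2.0, rework=True),
]
# debt_ratio(log) -> 0.25: a quarter of the effort was debt service.
```

The tagging is subjective, but tracked over time the ratio makes the debt visible in planning.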
One issue though: if the requirements have changed, that is not a good metric.
i.e. we made that theme blue for iPhone; now people want it green AND on Windows. That's not necessarily technical debt, it's cancelling everything we've done to make something else entirely (even though it may seem superficially similar).
If the requirements and/or features for N+1 ignore or counter the requirements/features from N, it's not necessarily technical debt; it might be that you've just got no idea what you're coding and you're running in circles.
I think the point to take away is that software is just a tool, an artifact, and at the end of the day it just matters that it does what it is "supposed to do". However, civilization is dynamic, so we are constantly seeking to optimize an ever-changing potential function. So perhaps it is not the programmer that is running in circles, perhaps it is the market. So for the manager accounting for the technical debt, they must accept the change as a cost of doing business.
You could think of technical debt as the energy lost due to friction: the energy lost moving from one point to another in your domain space. Perhaps "technical friction" would be a better concept for software. However, that is even more hand-wavy and harder to measure ;-)
"YAGNI" is... in general... not that useful when used by people who've never worked on a particular type of project, because, almost by definition, they don't what what they will and won't need.
And... defining "needs" is its own set of headache. Needed by who? I'll tell you what, we need to ensure we have a logging system in place that can alert folks, and we'll need the ability to view logs in production, and share access to those. Years ago I got "YAGNI" back on that, and months later... weird bugs that no one could reproduce, and the minimal logging in place was only accessible by one guy who was out of town for a week. But hey... we got those "rounded corners" to work in IE5 and IE6 with only an extra week of work - yay...
YAGNI is a good principle, but as with most ideas, you get problems when you introduce various types of people into the mix. Someone who's never done a project of type X should not be the one making YAGNI decisions when other people on the team have done multiple type-X projects before and are trying to introduce basic requirements.
Reminder: the "User" doesn't have to be an external person to the team. It can be internal for tools, or a tech debt task for internal issues like the ones you mention.
Thinking Scrum is just about end/paying User stories is mistake #2 I see when people do Scrum (#1 is the classic "We're going to not actually do Scrum but call it Scrum" that leads to a lot of "Scrum doesn't work" articles itself).
This is one of our biggest challenges. Much of the work we're doing isn't confined just to a user feature, so square-peg-round-hole syndrome affects us a lot.
> What about contributing to open-source software? Reading the code of an important external dependency, such as the web framework your team uses, and working on bugs or feature requests to get a better understanding was not part of any Scrum backlog I've ever seen.
Working at various startups, I have developed a methodology of contributing back to open-source projects we use without accounting for it in sprints or the ticketing system. It involves getting to the office an hour early, over-estimating on my other tasks so I have time for something extra, and a sprinkle of office politics.
And yet, it is some of the most valuable work I have done. Not for me, but for the companies I've worked for.
To get an in-depth understanding of that one Django or Express.js feature you use, or even better, to find and fix a bug that affected the business or may have affected it in the future, just gives you that much more of an edge over your competitors. Say goodbye to that nasty workaround you had to use to get around the bug--now it Just Works exactly how you need it to!
What's more, it's attractive to engineering candidates when you get to tell them the story of when you fixed a big bug in Socket.io.
The best engineering managers I have had have been receptive to the idea that this type of research and/or contribution to other projects should be considered "work" that provides value to the business.
We also had the general expectation that one story should generally take no more than about two weeks. Before starting a story, if you think going in that it will be too big, then you try to limit the scope or defer parts of it into new stories until you're confident it can be done in two weeks.
Once a week we tracked how long stories were in the "In progress" column, and once it'd been up there for three or four weeks, people started asking how they can help wrap it up. I think the longest I remember a story being in progress was about 6-7 weeks, and that was real uncomfortable for us. Typical times were 1-3 weeks.
So we had a weekly cadence for demos, and product owners liked that they would see steady, regular progress rather than being inundated with sudden large dumps at sprint boundaries.
One explanation for the longer times is that we had a process where, for each user story, we'd write a short, informal design document and send it to the team and stakeholders for review. For non-trivial stories we'd then meet to discuss them and come to a consensus on how to implement them. This probably added a day or two to a story's duration, but it meant that we had a solid system at all times. It also often served a similar purpose to a retrospective, or produced topics for one, because these meetings would surface technical debt and other impediments to progress. It also served as knowledge transfer and helped the team converge on design principles and expectations.
So these meetings had a cost, but we felt it was essential for practicing agile design. We didn't find this made us slow to react. On the contrary, because this kept our technical debt low, and our software well designed and well understood by the whole team, it meant we could pivot on a dime.
I think there's a real risk in going too fast. Maximum speed should not be the goal of a software development process, and I don't think any business really wants that. The two primary goals should be: 1) to make predictable, steady progress over long periods of time, and 2) the ability to change priorities as quickly as possible as new information emerges. If you have to sacrifice some speed to get there, it may be worthwhile.
Most Kanban boards allow you to mark items if they go past a certain time.
Manage by exception. If most cases just take a few days, set up some kind of system to mark the outlying cases. Then you can investigate and address those exceptional cases if required.
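A sketch of that exception-marking, with made-up card names and an arbitrary two-week limit:

```python
from datetime import date

def flag_stale(cards, today, limit_days=14):
    """Manage by exception: return cards that have sat in progress
    longer than the limit."""
    return [name for name, started in cards
            if (today - started).days > limit_days]

board = [
    ("Fix login bug", date(2024, 3, 1)),
    ("Rewrite billing", date(2024, 2, 1)),
]
# flag_stale(board, today=date(2024, 3, 10)) -> ["Rewrite billing"]
```

Most board tools do this natively; the point is just that the exceptions, not the averages, are what get investigated.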
Do you have some extra process that makes this happen? I don't remember anything in the version of kanban I saw that implied the lead developer should be doing this.
> Manage by exception. If most cases just take a few days, set up some kind of system to mark the outlying cases. Then you can investigate and address these exceptional cases if required.
In the project I'm thinking of it wasn't a single exception; rather the length of a typical task gradually crept up (from below 2 weeks at the point when we switched from scrum) until it was normal to have tasks lasting multiple months.
The whole point of Kanban is to have everything visible and known. If you're not looking at the Kanban board and taking action based on things that look wrong, what's the point?
It's easy - and useless - to make this about people. The point is, this is a problem that we didn't have under Scrum, with the same people.
> The whole point of Kanban is to have everything visible and known. If you're not looking at the Kanban board and taking action based on things that look wrong, what's the point?
At what point does it look wrong? If the idea is to have everything visible why does Kanban generally include fewer stats than other processes? I was always told the "whole point" was limiting work in progress.
Take points, for example: the argument is based on the premise that teams are obsessed with points. What if teams use points as a framework to discuss complexity? I have worked in scrum groups where, if something was given a large point value, it would be questioned and "split" to break the job into component tasks that could be done by separate people simultaneously (or some now, some later).
Long meeting times with the wrong people in the meeting? That's a managerial issue and has nothing to do with scrum.
Writing stories is the main art of scrum, and without good stories it is pointless. A story can be written to encourage developers to improve the quality of a codebase, share knowledge with another team, or take time to learn themselves. A really good story would encourage these things and also deliver customer value.
Any system for managing software development hinges on good communication. SCRUM's main advantage to me is that it provides continuous opportunities for face to face communication. If you fail to take advantage or engage with those opportunities then it won't work, but I bet another "framework" wouldn't either.
Show that to people unfamiliar with the Scrum church and they will say that sentence is meaningless. And well, it is, in an absolute way. A story is just that: a story. Maybe it can help put your son to sleep, but I doubt you need to invent a brand new vocabulary to discuss software architecture and technical debt, and I doubt you can do anything brilliant by only using a few examples in a field that is even more driven by pure logic than maths.
We are grown-ups. I don't want to work anywhere where I'm told stories and must get some points done by the end of the week. And this is not anecdotal: Scrum is a derivative of a manufacturing framework. I'm doing engineering.
Maybe waterfall with unusually good specifications can use cog-like devs, but the few times I've seen that done it didn't turn out very well.
Planning/grooming we do in one hour. You need good story writers, is all. I once had a planning session last 12 hours because the product management was so awful. If your team sucks, no process will save you.
Again, it all comes down to communication. If nobody wants to talk to each other, SCRUM is not going to help.
As with all of these posts, they're extremely lacking in alternatives. Of course SCRUM won't work for every team, or even every project for the same team. But it provides a way for the team to evaluate and adjust how to best accomplish their goals in a straightforward and (if done right) low friction way.
I think it's taken for granted how much effort is alleviated by adopting SCRUM. Just think how difficult it would be to develop an alternative process for every team, every project, and every new person joining one of those teams or projects. Everyone has their own way of doing things; standardizing on a few universal goals ain't a bad thing.
If scrum tends to lead to long meetings with the wrong people in them, then scrum certainly should include safeguards against that. Otherwise we need an "x+scrum" framework which includes those safeguards. I don't think scrum should fix every problem "in management", but "meeting management" should be a core scrum issue to deal with.
Even with two-week sprints the majority of your last day is entirely meetings between the review and retrospective. In my experience one-month sprints are more common, especially in large enterprises and government, and you will literally have 7 hours of meetings on that final day if you follow the Scrum handbook.
Long meetings are built into Scrum.
I've never been on a team that does pure Scrum - even ones which intended to do so always ended up with what we termed "Scrum-ish": taking the ideas from the base methodology, but adding a whole level of 'house rules' aiming to patch up obvious gaps. For instance, always making time for one refactoring or technical debt task per sprint, tracking accuracy of estimated time versus actual time elapsed (deeply uncomfortable but very useful!), the hundreds of different rules around trying to make standups shorter and more useful.
I am a big fan of well-run retrospectives, though: they can be a really nice way to feel empowered as a developer, especially when you have one retrospective identifying that Thing A keeps causing everyone pain, and the next retrospective having everyone say 'Hey, Thing A is so much better now!' Never realized they weren't 'meant' to be about technical matters, though: in our Scrum-ish teams, they were always open for all topics, and I think that's a very good idea.
Of course, the fun thing about Scrum-ish teams is now you have a whole new level of debate that can happen: "We're failing because we're not doing Scrum rigorously enough!" vs "We're failing because we're doing Scrum too rigorously, and what we need is more house rules!" ;)
Of course, feedback is solicited, but it is an unspoken rule that criticism of project management is verboten. However, criticism of self and others on the dev team is absolutely allowed and encouraged, and so the brown-nosers amongst the team use the opportunity to make themselves known.
I would even argue that, in the same way the "brown nosers" try to make some positive impact for themselves, you can do better, as risky as it is, by fighting bad product management. If you feel the pain, your team does too, and likely your manager as well (if that's a separate entity from the PMs). The loyalty and trust you can build by defending your devs and being a force for good can be absolutely invaluable as your career goes on.
By the by, although I acknowledge the risk, if you shape your rebuttals well you can find yourself bringing PMs to your side. (A recent "spirited" discussion in which a PM was refusing to institute KPIs to track their features ended with their PM peers questioning the resistance and backing the eng push for better telemetry, because "how can we justify how well we're serving clients if we _don't know_?" speaks even across lines.)
We've certainly done them, identified problem points and then solved them (and it does feel good to do that..) but it doesn't actually seem to make things better.
Lets compare it to say, personal estimations:
When you estimate, execute and reflect, you can tangibly improve your estimation process.
You can quantitatively observe an improvement in estimations on tasks when people go through this process.
Previously: estimated 20 hours for a task. Took 10 hours. Repeat... soon, your estimates are for 10 hours, and you're quantitatively, objectively able to make consistently better estimates.
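That feedback loop is mechanical enough to automate. Here's a toy calibration sketch; the 0.3 smoothing weight is an arbitrary choice:

```python
def calibrate(estimates, actuals):
    """Running correction factor for gut estimates: multiply your next
    raw estimate by this to get a calibrated one."""
    factor = 1.0
    for est, act in zip(estimates, actuals):
        # Blend in each new data point (simple exponential smoothing).
        factor = 0.7 * factor + 0.3 * (act / est)
    return factor

# Repeatedly estimating 20 hours for tasks that take 10 drags the
# factor from 1.0 down toward 0.5, i.e. "halve your gut feeling."
factor = calibrate([20] * 8, [10] * 8)
```

Nothing equivalent exists for retrospectives: there's no single number to smooth toward.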
Retrospectives in my experience don't do that.
You can sit through 50 retrospectives, in each one identify a problem area and then fix it, and yes, that does feel good. But objectively, when I reflect on the defect rate produced by the process, I feel like retrospectives make zero impact on the rate at which technical debt accumulates.
There's something missing in the way they work; all you (well, all we, I suppose, this being my personal experience...) ever do is find things that are wrong and fix them. Objectively when you look at it, there's no closing of the loop where the defect rate drops.
There's no process improvement that generates fewer problems in the future... all it ever is is band-aiding to prevent technical debt from spiraling totally out of control and devastating the project.
There must be a better way, where you somehow measure how technical debt was created and work to incrementally prevent it happening... but I've never seen that actually happen in practice.
It should only be uncomfortable if the team means 'commitments' when they say 'estimates'.
This is part of the reason stories are generally pointed with (more or less) triangular numbers: so that teams stop fretting about estimates that miss by less than an order of magnitude.
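A tiny illustration of why coarse buckets absorb small misses (the triangular scale itself is standard; the `bucket` helper is made up):

```python
def point_scale(n=8):
    """Triangular numbers 1, 3, 6, 10, 15, ...: buckets coarse enough
    that small estimation misses disappear into the rounding."""
    return [k * (k + 1) // 2 for k in range(1, n + 1)]

def bucket(raw_size, scale):
    """Round a raw size guess up to the nearest point bucket."""
    return next(p for p in scale if p >= raw_size)

# point_scale() -> [1, 3, 6, 10, 15, 21, 28, 36]
# bucket(7, ...) and bucket(9, ...) both land on 10, so a small miss
# changes nothing; only large misses move the estimate.
```

Fibonacci-style scales in planning-poker decks serve the same purpose: the gaps grow with the size.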
Does this even mean anything? How would it work in practice?
I wrote about this here: "The Agile process of software development is often perverted by sick politics"
That's a really good point. I'd be excited to see what a team could do if each developer wrote a one-page memo about what he did and what he was going to do once an iteration, and a few sentences for each day. Throw 'em in a log, and they might even aid the retrospective.
Maybe we could just go back to .plan files, like Carmack.
Yes, the issue with standups/scrum ceremonies is when people, for whatever reason, choose to ignore all the 'other' info they are receiving.
The real key for Scrum for me has to be not only teams that want to write good code, but teams that WANT to get better at working together. That takes the whole team. If the team aren't bought in to this, then I'm not sure which project management tool will work for them, but it sure isn't scrum!
If you're working on a project where it's important to have an as-accurate-as-is-realistic idea of the size of the project, or more specifically your progress through it, then I can't see how a methodology could be any simpler.
If having a good idea of the size of your project over time and your progress through that project are not very important from a management perspective, the Scrum artefacts will seem like, and will probably in fact be, needless overhead.
Scrum is not opinionated about the actual development methodology so claims about how it affects the code that is written are themselves a bad smell IMO.
Scrum is actually part of the problem, IMO. I've seen many teams turn scrum into a hammer and treat all future problems as nails.
Example problem: the foobar story has failed for the third sprint in a row.
Likely discussed in retrospective (plausibly good ideas, mind you):
- We need to break down stories more before we estimate them.
- Or we need to stop underestimating foobar stories.
- Or we need to focus on unblocking subtasks related to foobar stories.
- The foobar code is a mess and needs to be refactored.
- Or the foobar subsystem is too coupled to the Fizzbuzz subsystem.
- Or the need for some developer tools to increase productivity in the foobar ecosystem.
Since scrum is methodology oriented, methodology is the first tool teams reach for when a problem is encountered. And I see this after team leads make it explicitly OK to discuss technical subjects in retrospectives.
I'm not a psychologist, so I can't describe why this phenomenon happens, but I see it regularly.
Pretty much every kind of deadline driven development ramps up technical debt. Scrum certainly isn't the worst in this respect (developers make their own deadlines, and conscientious ones will build the time in), but the emphasis on commitment and the pressure to deliver at the end of the sprint puts pressure on developers to cut corners.
The worst part though, is that the product owner is usually non-technical and will deprioritize stories to clean up technical debt as a result.
IMO for any kind of development methodology to work it must have an opinion on technical debt. Scrum doesn't.
One of the few defining characteristics of scrum is that the developers define how much they can achieve, and this estimation is improved over time. If this is not happening there is something else wrong with the culture and Scrum is being used as a scapegoat.
* The prediction is made in a meeting while your head is "out of the code".
* The prediction is made in a group setting, rendering the decisions more easily subject to peer pressure and groupthink.
* The prediction is made up to 2/4 weeks in advance of actually doing the work.
* The prediction is made without any risk of overshoot attached. Risk is a critical metric which scrum conceals.
And the main defining characteristic of scrum that leads to pressure, after all of that unwarranted optimism:
* The prediction is designated as a commitment.
* while he is actually writing the code (so not up front)
* not in a group setting but as an individual, so either one person estimating the whole thing or each person giving different estimates
* (third point same as first, dont want to estimate up front)
* must incorporate what is often called 'contingency' (which is actually what the whole point of measuring velocity is for!)
* and the final point - he doesn't want to have to commit to it
how can you _not_ read this into it?
How is that the same as not being "required to give any estimate at all"?
> he doesn't want to have to commit to it
why not? an estimate is an estimate, not a commitment. Committing to an estimate makes it a commitment, not an estimate.
I might expect a die roll to average 3.5; I'm not committing to the next roll being 3.5 - analysis should inform policy, in this case expectations informing stated commitments, but the two are not the same.
Furthermore, this bullet point actually takes the quote out of context: he specifically doesn't want to commit to the estimate produced under the previous conditions, not to any estimate at all. The difference is choosing to commit to an estimate you have high confidence in, versus any estimate given automatically becoming a commitment (where estimates may be required on demand).
Scrum people believe that scrum is the simplest way of measuring that. But at some stage you have to estimate the constituent parts of the project in order to get an idea of its size, and for those estimates to be useful in tracking your progress you have to do it in advance.
I repeat however: if you don't need to do this then that's fantastic! Many of us do, however, and some of us choose to use scrum for it, and some of us have had a great deal of success with that.
(edit: I worry that this sounds condescending. I am just trying to keep the tone friendly)
In advance of what? The only constraint on a useful estimate is that it comes before the task is finished - it needn't be made at the earliest possible moment to be credible.
Also, your response doesn't really address my post.
I am clearly not expressing myself well. I am talking about a situation where some stakeholders are expecting a complete picture of roughly how large the project is and would like to be able to track how far your team is through this project on a regular basis.
I am putting scrum forward as a methodology for, in as short a time as possible, measuring the size of that project in a meaningful way by merely breaking it up into as small pieces as possible and attaching numbers to those pieces, intended to measure the size of each piece relative to the other pieces, and then over time discovering how long it takes to complete a piece of a given size.
> Assume each person giving different estimates for their own work, but not up front - ongoing as code is written.
The situation I outlined above (the time when scrum helps out) requires you have a stab at estimating all the constituent parts of the project at the beginning of the project.
> an estimate is an estimate, not a commitment. Committing to an estimate makes it a commitment, not an estimate.
True, but the point of estimating in scrum is to assign relative sizes to the pieces of work, not a number of hours, so this isn't a commitment to finish at a specific time but just to say 'I think this is one of the larger pieces of work in this project.' The person I was replying to sounds like they are on a bad team/project where people use their estimates to blame/finger-point, and they are ascribing this to scrum as if the team wouldn't be doing this otherwise.
And in case you suggest that estimating without ascribing a time value is not meaningful, it is used to track how far you are through the project, and over time you refine what the finishing date will be given the emerging velocity.
> I might expect a dice roll to be 3.5, I'm not committing to the next roll being 3.5 - analysis should inform policy, in this case expectations informing stated commitments, but the two are not the same.
The analysis comes in discovering the velocity. The expectations evolve over time. But knowing your velocity is of limited use if you don't have an estimate of the overall size of the project.
> The difference is choosing to commit to an estimate you have high confidence in
This is the method for getting confidence in your estimate. You have an overall number of 'points' in the project and you learn how many points you can tackle on average every X weeks.
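The mechanism being described can be sketched in a few lines (all numbers here are hypothetical; this is a minimal illustration of point-based tracking, not anything prescribed by the Scrum Guide):

```python
# Hypothetical backlog: relative sizes in points, not hours.
backlog_points = [3, 5, 2, 8, 3, 5, 1, 13]
total_points = sum(backlog_points)            # 40 points overall

# Points actually completed in each finished sprint so far.
completed_per_sprint = [7, 9, 8]

# Velocity emerges from observation, not from the estimates themselves.
velocity = sum(completed_per_sprint) / len(completed_per_sprint)  # 8.0

remaining = total_points - sum(completed_per_sprint)  # 16 points left
sprints_left = remaining / velocity                   # 2.0 sprints at current pace
```

The claim in the thread is that the time-per-point conversion self-corrects as `completed_per_sprint` grows, so the points never need to be hour-accurate, only consistently relative.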
Every time you try and infer what I'm "really" saying or what "really" happened to me you get it completely wrong. Next time you do that just assume that you're wrong, it'll save us both time.
The blame/finger pointing on my projects wasn't really external (although in a different environment it certainly could have been). Developers themselves felt bad about missing their 'commitments'. The pressure/blame was largely self-inflicted.
Despite feeling bad, the predictions were still consistently optimistic and still consistently wrong due to the environment the predictions were made in. It was a bug in the scrum process that led to this, but the team and management (and you, apparently) would rather assign blame to anything else other than a bug in their methodology.
>The analysis comes in discovering the velocity.
Velocity isn't a useful metric.
>This is the method for getting confidence in your estimate.
Except it doesn't work. It didn't work for us and it probably doesn't work for anybody else.
Confidence in estimates means treating risk and uncertainty as if it is real rather than sweeping it under the carpet, like it is in scrum.
Confidence means a prediction process that doesn't make developers feel guilty about being wrong, like it does with scrum 'commitments'.
Confidence means a prediction process that doesn't intentionally subject developers to groupthink and peer pressure by immediately putting them on the spot, like scrum planning part 2 does.
Confidence means that your estimation process itself should be mutable. Under scrum it is fixed and not subject to review (if you change it you're doing "Scrum-but" and that's a sin, according to scrum trainers).
Most of all, confidence means that you should be able to inject technical debt cleanup stories into the sprint that derisk future changes. Scrum says that's only allowed if the PO says it's allowed. The PO is not responsible for missed commitments though, so it's not their problem.
Yes. I can take time out to answer email. I can take time out to make estimates as soon as I get an estimate request. Doesn't have to be done in a meeting.
>(so not up front)
What the fuck is the point of an estimate that's not made in advance???
>not in a group setting but as an individual, so either one person estimating the whole thing or each person giving different estimates
The latter. Is that a problem?
>(third point same as first, dont want to estimate up front)
"Not up front" is not the same thing as "not 4 weeks in advance". I'd do it as soon as the PM needed it to do prioritization.
>must incorporate what is often called 'contingency'
If you think risk and contingency are the same thing you're an idiot. Risk is that story A (e.g. upgrading dependencies) might take 0 hours or might take 4 weeks, while story B (updating translations) is going to take 1.5 hours and it's really only going to take 1.5 hours.
Contingency is (for example) "let's make sure we have 4 weeks spare before doing story A".
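The distinction can be sketched with a toy simulation (hypothetical numbers: story A uniformly anywhere from 0 to 160 working hours, i.e. up to about 4 weeks; story B a fixed 1.5 hours):

```python
import random

random.seed(0)

def story_a() -> float:
    # High risk: could be trivial, could eat the whole month.
    return random.uniform(0, 160)

def story_b() -> float:
    # Essentially no risk at all.
    return 1.5

totals = [story_a() + story_b() for _ in range(10_000)]
mean = sum(totals) / len(totals)    # roughly 81.5 hours

# Planning to the mean means roughly half of all outcomes overrun it;
# contingency is the extra buffer ("4 weeks spare") held against that risk.
overrun_rate = sum(t > mean for t in totals) / len(totals)  # roughly 0.5
```

An average velocity can absorb `mean`, but it cannot distinguish a story with a tight distribution from one with a huge spread, which is exactly the risk the parent comment says scrum conceals.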
>(which is actually what the whole point of measuring velocity is for!)
No, velocity is about measuring how fast you're doing stories.
>and the final point - he doesn't want to have to commit to it
Yeah, because as soon as you start assigning blame for missing feature deadlines the technical debt dial gets ramped up to 11 and predictions become an exercise not in being accurate but in CYA.
An estimate of how long something is going to take can be wrong for many reasons that aren't the developer's fault - bugs in libraries, technical debt in dependencies, technical debt they weren't aware of and didn't create, team members disappearing, etc.
If you want developers to commit to things make sure it's things that they have full control over.
I am here assuming that you want to be able to try to measure your progress through the project (as I mentioned, this is the only thing scrum does for you). Both of you seem to be suggesting (don't insult me if I'm wrong) that this isn't the highest priority.
And no, velocity is what makes the whole system self-adjusting. If I put 3 points against a story, we use velocity to discover over time how long those 3 points take. This self-adjusts to incorporate contingency.
If you disagree with this then we simply disagree on what velocity is about. It doesn't make us enemies; we don't need to get super pissed off at each other.
Scrum is complex and not always possible to follow exactly, so this is to be expected, but it makes me wonder: how many successful projects are out there that are following the true Scrum methodology?
My guess is that it's a few more than the classic waterfall but I still seem to see far more failure than success stories.
Regarding success stories, it might be that process doesn't play such a critical role as long as solid engineering techniques are used and the team is competent.
All those methodologies exist for the less stellar programming teams, to get consistent results from them (and also, to a lesser degree, to make good and bad programmers work well alongside each other). Because you can't always get the best programmers.
If Scrum only worked well with good programmers, it would be next to useless.
It remains to be seen whether big Scrum engineering projects where Scrum is actually applied even exist. I can't think of one off the top of my head. I'm not even sure Scrum is well enough defined for us to judge whether it is correctly applied or not. And it's yet another story to judge whether they are successful or not.
In the end it does not matter much. The theoretical vision that nobody ever uses has almost no interest if you are concerned with real world efficiencies.
You are engaging in equivocation.
> Want to construct a bridge or a rocket, design a microprocessor? You are not going to do that with "stories".
Nor are you going to use the software development methodology described as the waterfall method (you may use a physical engineering methodology that was among the inspirations for that software development methodology, but those are distinctly different things, with different specific practices, and different domains.)
> I'm not even sure Scrum is that well defined for us to be able to judge if is correctly applied or not.
Scrum is exquisitely well-defined, as to what it involves, what it specifically excludes, and what it is neutral to, in the Scrum Guide. (There's lots of confusion between Agile, a broad approach which is not a specific methodology, and Scrum, a very-specifically-defined -- though by itself fairly incomplete, in that any implementation of Scrum needs lots of decisions on the things to which Scrum is neutral -- methodology.)
Scrum's origins are partly in manufacturing. Now there are some common points between some aspects of software dev and manufacturing, especially when the software being developed can be iterated very quickly (but very few when it can't). But at least in the real world (and maybe even in the theory), Scrum is also what is actually mainly used to interact with other stakeholders. And given how that communication is performed, and its content, that might be better than the complete chaos where nobody is actually able to do the work they are supposed to do (the PM limited to having vague ideas, lack of a truly competent tech lead doing actual tech-lead work, lack of vision from management, and so on) and only very vague general ideas of what the software -- or more generally the whole product -- should do are ever emitted.
As soon as "serious" stuff starts to be involved, you need real boring engineering, with functional analysis, requirements engineering, modeling, systematic testing or even partial proofs, etc. And you need it to structure communication between teams, and day-to-day work. And then, I don't expect Scrum or anything Agile to add any kind of value in such a context.
Now the theory of Agile and Scrum has evolved, in response to criticism, to the point where we are told that it actually does not cover the things that matter. That is bullshit retro-justification, now that the world is fucked up trying to make sense of how to use it. Here is the Agile manifesto:
> We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:
> Individuals and interactions over processes and tools
> Working software over comprehensive documentation
> Customer collaboration over contract negotiation
> Responding to change over following a plan
> That is, while there is value in the items on the right, we value the items on the left more.
Engineering is mainly about "processes and tools"; of course "individuals and interactions" are also needed, but there is no need to oppose them (although I am not sure what the point of "individuals" is here; the authors might as well have said "oh, and by the way, be nice").
"Comprehensive documentation" is critical in all kinds of domains, and now that software is everywhere it just makes no sense to declare a "preference" for "working software" over "comprehensive documentation". It is, again, even dangerous to oppose them.
Customer collaboration over contract negotiation: again, it is highly dependent on the field and the specific project whether it even makes sense to have a "preference" here.
"Following a plan" is what you do about how you organize your work when you use Scrum. There is no problem in studying the impact of a change any time if proper engineering practices are used. Obviously, the cost can vary depending on various factors.
My conclusion about Agile and Scrum is that if you prefer all of that (the 4 Agile preferences, and the Scrum theater), you should seek projects that are suitable for the Agile preferences, and so poorly defined that Scrum is a plus. On my side, I'm just not seeking to work on chaotic projects -- on the contrary, I try to bring logical and more systematic practice where I feel that chaos reigns -- and I'm neutral about the Agile preferences; I prefer to choose projects on other criteria (mostly intrinsic interest).
And Agile does not avoid processes and tools; it recognizes that processes and tools must be specifically fit to the particular team and context of work (Scrum, particularly, is a baseline set of processes and tools designed to serve as a framework for common contexts of software work -- it's intentionally incomplete, to avoid specifying too much and narrowing its scope of applicability.)
> "individuals and interactions" are also needed, but there is no need to oppose them
The need to oppose them comes from the authors' concrete experiences in the software world before writing the manifesto, where very frequently canned (often consultant-pushed) processes and tools were being adopted by management in shops without considering the dynamics of the existing team and the particular work being done. (One of the sad ironies of the Agile movement is that the "Agile" banner itself has become a tool for the same kind of thing.)
> "comprehensive documentation" is critical in all kind of domains
Yes, it is; the preference stated in the manifesto is, again, the result of concrete experience where projects were quite often focused on producing mandated documentary artifacts because there was a checklist and that was how "control" was exercised, but the documents required and delivered were often irrelevant to (and not consumed by, or updated to reflect changes resulting from, the process of) delivering working software.
> Customer collaboration over contract negotiation; again, highly dependent on the field and specific project if this is something where it makes sense to even have a "preference" or not.
This is intended specifically in the context of developing specific software requirements (and, really, it's more about the dev team pushing the customer to engage rather than provide hands-off requirements.)
The Agile Manifesto really deals with concrete problems encountered particularly in enterprise software contracting (though bad practices from the enterprise world were, at the time, being exported to the rest of software development, so it is not limited to the enterprise world.)
> "Following a plan" is what you do about how you organize your work when you use Scrum.
Scrum, like most methodologies that attempt to implement agile values, focuses quite a lot on managing potential rapid change within the plan.
Scrum is what you do when you try to do software engineering without actually doing software engineering. It's insanely meta, and as explained in other comments, the improvements you get from its loop are too often themselves meta ("we should evaluate more accurately"). I prefer to stick to the real thing and to core engineering practices. Scrum attempts to fix the situation where core engineering practices are misunderstood and used as constraints instead of as something essential to the development of a good product; but it is vain to try to fix such a situation by engaging key people even less in core engineering practices, and more in mundane discussions where the real problems are never addressed.
Oh, yeah, that's definitely a problem. I don't think the Agile Manifesto is bad at all, but I think that, ironically, in application it suffers from the same problem it sought to address -- people are looking for simple answers that can be applied without deep knowledge of context. The Agile Manifesto and Agile software movement was itself a strong reaction against that, but unfortunately it (and tools from within that movement, like Scrum) get applied by exactly the same process that the Manifesto was a reaction against (focusing on particular ways it had manifested, prior to the Manifesto, in software development.)
> Honestly if some management is stupid enough to force badly suited processes and tools instead of letting (competent) teams choose better ones, I doubt they will suddenly see the light by reading the Agile manifesto.
Absolutely; the real audience of the Agile Manifesto is software development practitioners who have influence with management, and it's not really "new knowledge" so much as a concrete distillation of experience. The fundamental problem, I think, with Agile isn't that its ideas are bad; it's that the real problem it deals with isn't a problem of processes/tools, or even the meta-level approach to processes and tools, but a problem with the institutional organization and leadership of large entities that happen to be doing software projects, and how that manifests in software projects.
The agile movement has produced some new tools that can be applied effectively in, largely, the areas that didn't really have the worst cases of the problems that motivated the movement -- because it's helped motivate and inspire a lot of efforts by people with decent engineering backgrounds at finding new ways of working.
But the kinds of organizations that were worst afflicted by the problems that the Manifesto set out to address are still the most afflicted by those problems, and what they've gotten out of it is a lot of new processes and tools that consultants will sell them, their management will blindly adopt without understanding the conditions which makes them useful, and thus they find all kinds of new ways to fail.
> Scrum is what you do when you try to do software engineering without actually doing software engineering.
Scrum is largely orthogonal to software engineering (presumably, people using scrum in a software project will be doing software engineering within Scrum, but Scrum is not about software engineering.)
> It insanely meta, and like explained in other comments, the improvements you get from its loop are too often meta (we should evaluate more accurately).
Scrum is designed to be very meta, true. And, yes, if you mistake Scrum for a complete process rather than a process framework, you aren't going to get much out of it beyond omphaloskepsis. (I'm actually not convinced that Scrum is particularly valuable, even as a framework, as anything more than a well-known starting point to develop an appropriate, context-specific work model.)
Regarding development: My main point is that Scrum leans towards agile methods such as XP (testing, CI etc), but it also sucks the time necessary to do those things well. The time Scrum takes off of the devs' working hours could much better be spent on those.
There's slightly more to it than that: it also encodes an assumption that you're working with a single fairly-tightly-integrated group (with synchronisation points at least daily). It's possible that this helps with estimation and scheduling -- it's a lot less clear that it helps get the best outcome in other respects.
There's also something of an uncertainty principle here -- more precision of estimation is possible, at the expense of increased expected timescales (partly due to padding, partly due to picking lower-risk approaches).
I personally think estimating projects is one of the most difficult things about this industry - especially if we're talking about delivering many calendar-months' worth of effort from a team, unless it's just a variant on some other project(s) the team is well experienced with.
More like wandering in the desert, hoping you find the promised land.
Been thru scrum master training 3 times, been on many "agile" teams. I've never heard this rationalization. Rather, a common justification for "agile" was you always have a working product. Which might be nice if things worked out that way.
Also, PMI style critical path worked just fine for figuring out that "straight line".
Scrum and "agile" democratized project management, empowering every poseur to claim expertise and ability. Whereas PMI required real effort to learn and master, Scrum flavored "self help" books can be flipped thru before you finish your coffee and then safely stored in plain sight on a book shelf, never to be touched again, allowing said poseur to claim the daily mutant chaotic dysfunctional mismanagement that they've always done is now "agile".
If you are objecting to defining the scope as small tasks and measuring your progress through that over time, then continually re-evaluating this scope as requirements change, then I think you are not working in an environment that would benefit from this kind of tool.
Its just a pragmatic set of guidelines, and objecting to it with such ridiculous vitriol makes you sound as foolish as the people I think you're objecting to.
"with such ridiculous vitriol"
Emperor, little boy, no clothes. It's thankless work.
In opposition, defenders of Scrum et al use the No True Scotsman's fallacy. Because those of us who have tried and failed are just morons.
Project management is risk mitigation. In my experience, most "agile" projects have been risk amplifiers. Ironic.
In such an environment, developers prefer to drastically over test their code, and to undertake work in manageable sprints that let management claim success and understanding even when neither exist.
If you already have a very strong product market fit, and you need to hire developers whose judgement you don't trust, or if there are extrinsic sources of timeline pressure (like investors or non-technical management who think developers are lazy... essentially anyone other than users or customers), then Scrum is perfect for your organization.
The other constituency that seems to love Scrum is product managers who either have no vision for the product or no control of the vision, and are essentially being asked to be cat herders and manage engineers without earning their respect or having any authority over them.
Of course, if you already have meeting overload, standup is going to feel like the worst.
The daily standup, IMO, should be only to remove impediments, and if you have none, then a sentence or two will suffice. I see the DS as the most useful meeting, as you are aware of what your workmates are doing.
And if Scrum is still a pain in the back, then you have Kanban, which is sort of Scrum without the straitjackets.
Yes you can do it (and probably should when you understand the purpose of each Scrummy practice), but don't expect to be praised by Scrummers.
I'm extremely happy to discuss the reasons I want to change something, but all too often the only counter-argument is "that isn't Scrum". I'm 100% fine with that, but I'm not going to do something if the only reason for it is religion.
If communication is a problem, having a daily standup just pretends the problem doesn't exist, rather than solve it.
100% this. The only thing a synchronous team-wide meeting is useful for is revealing a significant issue and getting a prompt and definite acknowledgement from the team. And then, if it's a priority, some help with the issue. But given the proper tool, even that feature can be made asynchronous.
It gets worse if the team is actually 2-4 different teams with not much overlap (because companies tend to adopt these agile methodologies without much thought, and it just keeps growing and including more people because... it's nice, right?). Then you're ignoring (or not having a clue about) 90% of the meeting, and it's _daily_.