Hacker News
Why I'm not a big fan of Scrum (okigiveup.net)
483 points by keso on Aug 8, 2016 | 380 comments



I've been doing Scrum-driven development for a few years now after working much more independently for most of my career. I understand why managers like it because it at least lends some structure and predictability to what is an inherently unpredictable enterprise.

But the author's criticisms of the incentives of Scrum are on point, I think. Because the stories are always articulated in terms of user-facing features, they encourage developers to hack things together in the most expedient way possible and completely fail to capture the need to address cross-cutting concerns, serious consideration of architecture, and refactoring.

This is how you can get two years into a project and have managers and clients that think that things are going well when the actual code is an increasingly unmaintainable rats' nest. Good devs confronted with this kind of mess will eventually burn out on sticking their necks out defending necessary but opaque refactoring tasks and move on to greener pastures.


I am Product in a large corporation, and it's certainly a fine balance. In a previous life I was a developer, and I dealt with similar issues. I have a lot of respect and empathy for the Tech team. Therefore, I asked my Tech team to inform me when things are getting out of hand -- that's the responsible Product thing to do. I instituted that technical debt is part of our KPIs. If it's not captured, it's not actionable. A few examples:

- Documentation: required when a method's cyclomatic complexity is high or the method is just plain long (we defined what we consider "long"). Every customer-facing method has a working usage example (%), etc.

- Test coverage: % unit test coverage (defined by module) -- higher is not necessarily better, but it gives us a sense of where we stand. % of test automation: manual vs. automated.

- Refactoring delays: # of TODO comments, # of compiler and static code analysis warnings (e.g., "this method is deprecated") --> a sure sign that things will break in the future and an investment is needed.

There's some development work to be able to capture these things, but it gives visibility and power to Tech to inject some maintenance work into normal sprint development. I'd love to hear what other people are doing.
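To make "capture these things" concrete, here is a minimal sketch of the kind of script that could collect two of the debt signals mentioned above (TODO comments and "long" methods). The threshold and file layout are invented for illustration; real teams would lean on their linters and coverage tools instead:

```python
import re
from pathlib import Path

TODO_RE = re.compile(r"#\s*TODO", re.IGNORECASE)
LONG_FUNCTION_LINES = 40  # an arbitrary definition of "long"


def debt_metrics(root):
    """Count TODO comments and 'long' functions across a Python source tree."""
    todos = 0
    long_functions = 0
    for path in Path(root).rglob("*.py"):
        lines = path.read_text().splitlines()
        todos += sum(1 for line in lines if TODO_RE.search(line))
        # Crude length check: distance between consecutive 'def' lines.
        def_lines = [i for i, line in enumerate(lines)
                     if line.lstrip().startswith("def ")]
        for start, end in zip(def_lines, def_lines[1:] + [len(lines)]):
            if end - start > LONG_FUNCTION_LINES:
                long_functions += 1
    return {"todo_comments": todos, "long_functions": long_functions}
```

Run on a schedule, numbers like these give the trend line the comment is after: it matters less whether 40 lines is the right cutoff than whether the counts are rising sprint over sprint.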


> technical debt is part of our KPIs. If it's not captured, it's not actionable

This is where I suspect manager-types and developers have a vigorous divergence in values.

Professionals routinely encounter situations where something is wrong and needs to be "actioned" but its wrongness isn't effectively measured by any metric (other than the opinions of the experienced people looking at it).

There is a certain species of manager who has taken Taylorism a bit too far and says that anything not reflected in the KPIs is not real and not getting acted upon. I hope this isn't what you're expressing here, but oh man is that mindset frustrating.


On the other hand, at the end of the road - or the rewrite - something should have been gained in terms of money, risk or time.

Being shitty is not reason enough, but it's almost always possible to reason and quantify and weigh the cost versus the benefits.


Senior staff in pretty much all organisations are given some latitude to perform tasks that they believe will be beneficial without needing to produce a specific cost-benefit analysis.

We have one-on-one catch-ups with staff, we have all-hands meetings, we visit clients, read books and articles, attend industry functions, issue press releases, meet with potential investors, etc.

There's value in all those things, but very rarely do we need to account for the fact that we spend time on them.

Senior engineering staff need that same freedom - it should be totally appropriate for them to make that call that "I'm accountable for the technical quality of this code, and I've decided that this needs to be done".

That requires some sense of business acumen - there needs to be an appropriate balance of technical investment to business investment - but that's part of the role of being a senior dev / tech lead / architect / whatever-your-organisation-calls-them.


I agree.

I am not saying that you always need a detailed balance sheet, just that it's good to reason about the benefits.

It's especially hard to quantify risks - that's where you really need the expertise.

What I wanted to point out is that there seem to be quite a few developers who actually do not have a sense of business acumen, and are willing to spend a lot of money on low-priority tasks, just for their own personal pleasure.


Of course something should be gained, but that doesn't mean you can always tell how much would be or was.

There is no mature "actuarial science" of software development. The market can provide concrete pricing for new feature development; engineers can't tell you how many dollars of tech debt you're in or give three significant digits on the probability of a major outage tomorrow.

That doesn't make money/risk/time costs which are difficult to measure any less real and it doesn't make them smaller than the ones which are easy KPIs.

Nor can you necessarily look at the movement of measurable KPIs and say "man, that rewrite was a waste of money." Who's to say things wouldn't have been worse without it?


> it's almost always possible to reason and quantify and weigh the cost versus the benefits

I don't think so. As a pretty good analogy, can you quantify the benefit of replacing knob-and-tube wiring in an old house? Now consider that the house across the street keeps its old wiring for the next twenty years without anything bad happening.


The problem with raising issues with a framework like Scrum is that despite many shared references, most people are in environments which apply Scrum in a slightly different way.

We talk about organisations applying Scrum, but what we're really referring to are groups of people, and depending on those people, their mindset, their history and their current role requirements, they will use the Scrum model in their own idiosyncratic way.

Take the point about technical debt and running so fast with an Agile development workflow that you never get to refactor code or properly document etc.

Even if you just took that in reference to one specific company, the value of, cost of, and importance of these things could be different at different times.

When a startup is running to get MVP out to get customer feedback, it's usually much more efficient to set the basic expectation that the first version of your product will get thrown away once you've learned all the important aspects of what you need to deliver. With that in mind - startup development is a completely different beast than when your product is established and you have paying customers with expectations of up-to-date documentation etc.

Some managers do not have good people skills and manage by just holding developers to account based on the expectations set in their Scrum planning day. Discussions about whether refactoring should be done yet or not may not even be something they want to know/talk about, and they rely on engineers to factor that into their sprint estimates.

Depending on whether the company is being driven by sales/demo opportunities, or feature roll-out timescales etc. then the call about whether to do something 'quick' or 'right' may drop either way.

For developers it's important to understand the dynamics that are driving the business and what their role is (sometimes you have to 'JFDI'). If you have a good engineering team with a strong leader then these things shouldn't be that visible to anyone else. It's also important to be able to meet the expectations you set. If you keep delivering late then you'll also struggle to get people to let you do more than the minimum required at the time.

For the business as a whole it's all about the big picture and understanding the decisions you make and what their impact (short and medium term) is.

If you want quick releases and don't have the resources for that to allow for good documentation and/or scalable code, then you need to understand that technical debt is building up and at some point it will need to be paid.


> despite many shared references, most people are in environments which apply Scrum in a slightly different way

Exactly. In fact, the linked text is a prime example: it professes to address the "standard Scrum as described in the official guide", and then goes on to condemn the notion of story points. Now, I happen to agree with those story-points criticisms, but guess what: that official guide says nothing about story points at all! :o


In my view the only way to handle this is to make constant refactoring part of the work without telling management. If you ask for permission for refactoring you will almost always get a "No".


This is how we ended up with the "Shadow Sprint" at a previous workplace. A whole bunch of utterly essential engineering work was left out of the sprint process, so many of us would just go ahead and do what needed to be done anyway while working on whatever tickets we'd picked up from the "real" sprint.

Utterly dysfunctional of course, but if it's do-or-die, I'd prefer to 'do'.


How did you handle regression testing of the refactored functionality? That's our biggest hurdle. QC is swamped with "regular" sprint work, so there's no way they manage any additional load during the sprint.


We had pretty good (but not amazing) unit and integration tests in many of those systems, and a willingness to manually test the systems where we didn't have tests.

If we had to lean on a QA department to get this stuff done, I'm not convinced we would have been able to actually deliver that project.


Only refactor things in sections of code that you're changing anyway. That way QA already needs to test those sections of code, and gets no extra work.


We tend to bundle that work up and include it in tickets with the full acknowledgement of the rest of the team.

"Oh, since we're adding new email features, we'll need to clean up some of the old email code. That will add additional complexity to this task, so we will estimate it higher."

It works, allows us to still track the impact on velocity, keeps everyone informed, and makes technical debt clear and trackable.


I believe this is the correct approach. You're not hiding the complexity of the work, and you're also not compromising on quality. This leaves only the task of saying "no" when asked if you, "for now, could just..."


This is how it should work in my view. You just need managers that are disciplined and don't tell you to not clean up old code because of deadline pressure.


This is exactly what is supposed to happen in a Scrum system. If someone is not doing this, they're not doing Scrum - just something they called Scrum to pacify people.


This is mentioned in the article as possible only for small changes. When the refactoring involves more systemic problems, it becomes impossible to make the change in the course of making other client-facing changes.

I'll add that some subsystems are a mess but stable. It's difficult to make refactorings with other changes when there aren't other changes to make. I see this in particular with old but stable subsystems that will need to be ported to new environments someday, but it's never a good day to lay the groundwork for that inevitable change.


> I see this in particular with old but stable subsystems that will need to be ported to new environments someday, but it's never a good day to lay the groundwork for that inevitable change.

I think it's correct to defer that work until you need it. You may end up never porting that subsystem, in which case refactoring the currently-stable implementation is wasted effort. As and when porting becomes an actual business requirement it can be prioritized appropriately.


> You may end up never porting that subsystem, in which case refactoring the currently-stable implementation is wasted effort.

I understand this approach, and it works many times. But when the porting is delayed until the last possible minute, it's more likely that hacks are put in because the requirement turned into a hard deadline. Instead of defining a sensible OS abstraction layer, the developers might find and replace "Windows XP" with "Windows 7".


> I understand this approach, and it works many times. But when the porting is delayed until the last possible minute, it's more likely that hacks are put in because the requirement turned into a hard deadline. Instead of defining a sensible OS abstraction layer, the developers might find and replace "Windows XP" with "Windows 7".

Which may well be the right choice for the business at that point.

More generally, it's not like doing it now makes it faster than doing it later: you have to put the same amount of total work in either way. In fact you have to put more work in if you do it now, because you will have to ensure that any subsequent changes don't break the porting. If you have other tasks that are a higher priority than porting, you should do them first, almost by definition. Of course if the port is your highest priority then that is what you should be working on (again almost by definition). Porting should be left as late as possible, but no later.


> Which may well be the right choice for the business at that point.

I think it may be important to raise a few points for consideration here:

- what may be a good choice for the business short-term can also be a very bad choice for the business long-term

- many - personally, I believe most - businesses don't care about products they make or services they give, they care about the money they can make via those products/services; ergo, the quality doesn't matter beyond the point the customer already paid what they were expected to pay

I don't bring it up as criticism, but only to point out that there are two completely different worldviews competing here. One, shared by many developers, is that the product is what matters. The other, shared by the "business types", is that the profit generated by the product matters.

I also feel that part of one becoming a professional developer is a shift from thinking about quality of work to thinking about its money-making potential. Which I personally consider a poison to the mind, and it makes me hate working in companies. But those are just my personal feelings.


You misunderstand. If more than a small minority of your code cares what OS it's running on (beyond the obvious exceptions), you have a large amount of technical debt. Cleaning that up, either by refactoring or replacing the offending code, is in the business's best interest because it keeps costs down and keeps more options available to the business.

The 'business people' are not qualified by themselves to say whether the cost-benefit analysis of the refactoring is worth the effort. Because while they may have a good idea of the customer benefits, they generally have little insight into what the future costs of technical debt are. You see this obviously demonstrated by 'business people' who are shocked that there is work to do on five-year-old code that has been working fine for years. Any reasonably experienced developer, and many junior developers besides, can tell you that code rots if it's not maintained properly.


> Cleaning that up, either by refactoring or replacing the offending code, is in the business's best interest because it keeps costs down and keeps more options available to the business.

IME that's very rarely the highest-value thing you can be doing for the business. It's not worth paying for flexibility that you're never going to use, and it's not like it's going to take longer to refactor later than it would to do it now.

> The 'business people' are not qualified by themselves to say that the cost-benefit analysis of the refactoring is worth the effort.

Yes they are. Cost-benefit analysis is their job. It's your job to give them an accurate picture of the costs and the benefits.

> Because while they may have a good idea of the customer benefits, they generally have little insight into what the future costs of technical debt are. You see this obviously demonstrated by 'business people' who are shocked that there is work to do on five-year-old code that has been working fine for years. Any reasonably experienced developer, any many junior developers besides, can tell you that code rots if it's not maintained properly.

Yes and no. If you don't need to make changes to a given system then it's fine for it to "rot". Presumably there is a business reason they want to make changes, in which case bringing the code up to a point where you can make those changes is part of the cost of that business-level deliverable.


"Yes and no. If you don't need to make changes to a given system then it's fine for it to "rot". Presumably there is a business reason they want to make changes, in which case bringing the code up to a point where you can make those changes is part of the cost of that business-level deliverable."

Except at that point, there's likely a hard (usually arbitrary) deadline, meaning you don't have the time. So you end up hacking stuff up again, and the tech debt doesn't get addressed. Your best people end up getting frustrated that they're not being listened to, and that they constantly have to explain this stuff to the business people, and eventually they leave.


> it's not like it's going to take longer to refactor later than it would to do it now.

It is so when you have to build on top of something you will have to refactor later. I think the miscommunication here is happening because the person you are replying to is assuming that case, and you are assuming the case where the code to be refactored is isolated from the rest of the system.


"More generally, it's not like doing it now makes it faster than doing it later"

No, but "faster" isn't necessarily the point. Doing it correctly leads to it being "faster" because you don't have to constantly go back and fix bugs because you used ugly hacks to get stuff done in the short timeframe you had because the company decided to wait until the last minute.


In a non-broken organization, technical leads (or equivalent) have the same level of authority to guide development as feature-driven personnel, and the necessity of features vs. quality can be triaged in bright daylight.

If the organization is not sane, that's another thing of course (and often the case, sadly).


True, but don't forget that non-broken organizations are very rare :-)


Do you know of one? I've been looking. For 25 years.


All organizations are broken, but some are more broken than others.


There are several in the Helsinki area (that I'm aware of) that have been, at some point at least, sane in this regard. Many broken ones as well, of course.


This is a typical way that agile and related approaches become a barrier between implementers and management.

The fault is almost all on the management side because, ipso facto, they're the ones making the decisions that drive this outcome.

In the real world, there are outcomes that are much worse than this. But this does have the effect of neutralizing the value of scrum: Management gets what implementers decide they get, and scrum is just the way middle management is told the story that they pass up the chain.


TBH this is what I've always assumed it should be from day one with Scrum, after training with Ken Schwaber etc. The backlog is not a Todo list.

It's the only way that seemed sane without twisting user stories into weird refactoring or architecture stories, or keeping the team writing garbage code while waiting for these "tech stories" to clean it up.


Why is management getting involved?

The PO should try to balance this with feature implementation, and the team + SM should make it clear when it's needed. This seems to work fine in an environment based on collaboration between the PO, SM and dev team, but will probably fail in an adversarial one.


I have rarely seen a PO as the single contact to the "business". Most of the time there are multiple project managers, product managers and line managers who have a say and put pressure on the dev team.


Because managers have to report to other managers and they don't want to report that the devs are working on something that already has been implemented and "works".


Indeed. That's sometimes the case with other stuff like playability too. I've added simple animations which delighted users from time to time.

If I had asked beforehand to invest time for this, the response would be "let's postpone this (forever)".


Can't do that for deep refactoring. You'll need QA involved and the boss will never sign off on that because, well, he's a boss not a proper manager.


I don't like scrum myself, but let me try to defend it because I think the problem of most of the people who don't have a good experience with scrum is a misunderstanding of the underlying principles behind it.

Let's just remember that Scrum is just a tool trying to replicate some practices of the Toyota Production System (TPS). An important principle of the TPS is continuous improvement, which Scrum has through retrospectives. Another principle is quality built in.

Now, how to avoid having a rat's nest after 2 years?

First of all, you have to remove velocity as a goal for your team. It's easy to game, and it de-incentivizes finding a _predictable_ velocity, which is the goal of that tool.

The goal of velocity is to know the "cost" of building a healthy system.

i.e.: Your team ships an arbitrary 10 points per sprint by doing what they think is right. The same team could ship an arbitrary 20 points per sprint. 10 is the _true_ cost of building a healthy system. There is not a lot more to do for a good team, honestly, at that point. Management can't say anything, because you bring them predictability, and that's mainly what they want. They might think you're slow, but hey, all of this is relative.

Suddenly, what does that mean when your team only ships 5 arbitrary points? That they had to push extra. That there was an unexpected problem. That things were on fire and you had to stop working on features entirely.

Basically, that there is something to retro on, find the root cause and anticipate for next sprints, and use to go back to the healthy level of points.
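That reading of velocity - a deviation is a retro signal, not a verdict - can be sketched mechanically. The window size and tolerance below are invented for illustration, not part of any Scrum definition:

```python
def sprints_to_review(velocities, window=6, tolerance=0.3):
    """Flag sprints whose velocity deviates from the trailing average
    by more than `tolerance` (30% here, an arbitrary choice).

    A flagged sprint is a prompt to find the root cause in the
    retrospective (firefighting, unplanned work, ...), not a failure.
    """
    flagged = []
    for i in range(window, len(velocities)):
        baseline = sum(velocities[i - window:i]) / window
        if abs(velocities[i] - baseline) > tolerance * baseline:
            flagged.append(i)
    return flagged
```

The point of the tool is only to surface the question "what happened in sprint N?" - the answer still has to come from the team.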

I used this at several places. It demands trust, but it always pays off.

With some teams I worked with, we set up a technical prioritization planning meeting every week, where we basically decided what was going to be prioritized for that week. Each team was picking two or three things to do on top of the features.

You don't tell anyone. You do it.


> I think the problem of most of the people who don't have a good experience with scrum is a misunderstanding of the underlying principles behind it.

Unfortunately this has become a cliché whenever someone criticizes scrum, Agile in general, etc.: it's never the fault of the idea/process, it's the fault of the manager/team for not understanding it properly or not doing it right.

Even if these philosophies and processes are so amazingly great when properly understood and implemented, the fact that hardly anyone seems to be able to properly understand and implement them would be a fatal flaw.


Agreed, that's why I specified I don't really like Scrum. Mainly because it's hard to actually implement without all the flaws if you don't try to understand the underlying principles. Unfortunately, Scrum books and coaches rarely go through these.


> Mainly because it's hard to actually implement without all the flaws if you don't try to understand the underlying principles. Unfortunately, Scrum books and coaches rarely go through these.

If you are implementing Scrum, I don't see how you do it without the Scrum Guide (which defines Scrum), which is both quite short and does, in fact, go through the underlying principles.


> it's never the fault of the idea/process, it's the fault of the manager/team for not understanding it properly or not doing it right.

Well, almost every time I see Scrum criticized, the described process is not Scrum and merely cribs ideas from it without understanding the purpose or how things inter-relate.

So it's a fair criticism.


> Suddenly, what does that mean when your team only ships 5 arbitrary points?

In my experience, it means some developers failed to game the system that particular time.

But you can be sure there were many other problematic moments in previous sprints, where the developer could hack together an ugly mess of code to avoid the shame of failing in front of everyone in a meeting, and dragging the team's points down.

Because the unwritten thing about points is that they are used to shame people. In public. Sometimes not explicitly, but the feeling is there. It's always there.


Yes.

That's why, if you have to work with and/or manage a scrum team, making velocity not the focus of the sprint is step 1 for a sane process. We count points, but it's not a contract. We try to reach the points, but it's not a contract.

In my parent comment:

_First of all, you have to remove velocity as a goal for your team. It's easy to game, and it de-incentivizes finding a _predictable_ velocity, which is the goal of that tool._


Scrum really is shame-driven development, in my experience.


> Your team ships an arbitrary 10 points per sprint by doing what they think is right. The same team could ship an arbitrary 20 points per sprint. 10 is the _true_ cost of building a healthy system. There is not a lot more to do for a good team, honestly, at that point. Management can't say anything

This sort of thing is only useful for managers who don't understand what is going on. Since they are unable to look at code and understand it, 'points' give them a semi-opaque substitute.

If management trusts the team, it is not necessary.


I guess if you understand what is going on as a manager, then you might prefer the 10 points team?

I know I do.


If you understand what is going on, you will get rid of points altogether, and look at the git log from time to time.

If you are a leader, and not just a manager, you will help your programmers improve their skill, so over time you spend less and less time managing them. Your programmers will appreciate it because with greater self-management comes greater happiness and job satisfaction.


The points are (also) an estimating mechanism. They let you estimate a new task in points (effort measurement) instead of hours (time measurement), and then use history to predict a likely timeframe based on your typical rate.

The extra layer of indirection helps to account for uncertainties in the task, imprecision in the estimate, and chaos (in the scientific sense) in how long individual tasks take relative to aggregated historical metrics.
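A minimal sketch of that history-based conversion (the velocity numbers in the usage note are made up): estimate in points, then turn points into a sprint-count range using the observed rate rather than a direct guess in hours.

```python
import math


def forecast_sprints(backlog_points, history):
    """Turn a points estimate into a (best, worst) sprint-count range
    using historical velocity, instead of guessing a date directly.

    `history` is completed points per past sprint; forecasting with both
    the average and the worst observed rate yields a range, not a single
    date, which is where the indirection buys you honesty.
    """
    avg = sum(history) / len(history)
    worst = min(history)
    return (math.ceil(backlog_points / avg),
            math.ceil(backlog_points / worst))
```

For example, a 40-point backlog against a history of [10, 8, 12, 10] forecasts 4 to 5 sprints.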


That's surely worth something, but it's easier to think of estimates as a 90% outer bound: "I am 90% sure that the task will be completed by date X." Uncertainty can be incorporated, and you don't need to learn a new conversion factor from time to points.


That doesn't work when you can't predict what other tasks might interrupt you between now and date X. That's why estimating in units of effort is much more reliable than units of time.

The drawback to units of effort is that estimates eventually bubble up to non-technical people who don't discern the difference between effort and time, and for most projects there are going to be some time-based constraints on the schedule.


> That doesn't work when you can't predict what other tasks might interrupt you between now and date X

You can still take a probabilistic approach to tasks that might interrupt you.
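For instance, a small Monte Carlo sketch of that probabilistic approach; the daily interruption probability and interruption length are illustrative assumptions, not numbers from the thread:

```python
import random


def completion_days_p90(task_days, interrupt_prob, interrupt_days,
                        trials=10000, seed=42):
    """Estimate the 90% outer bound on calendar days for a task, given a
    daily probability of being pulled onto something else for a fixed
    number of days."""
    rng = random.Random(seed)
    samples = []
    for _ in range(trials):
        remaining, elapsed = task_days, 0
        while remaining > 0:
            elapsed += 1
            if rng.random() < interrupt_prob:
                elapsed += interrupt_days  # interrupted; no progress made
            else:
                remaining -= 1
        samples.append(elapsed)
    samples.sort()
    return samples[int(0.9 * trials)]
```

With zero interruption probability this collapses back to the raw effort estimate; as the probability rises, the 90% bound stretches well past it, which is exactly the gap between effort and calendar time.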


Totally. Some teams want the points though, so I let them have it. But I just set the principles I explained above so that the points have an actual meaning.


I leave my current role in just under two weeks for the very reason you describe. I'm tired of being the "difficult dev" that everyone has grown to hate. Let them launch. Let it fail. Let's see who the arsehole is then.


You, because the moment you leave you become that-guy-who-we-can-safely-blame.

Mostly joking but plenty of places do work like that.


Yes, they'll blame him. It won't make sense for the most part, but it'll still happen. Probably largely things like "if only tomelders had worked with us more instead of being 'difficult', we wouldn't be in this mess", even if the 'mess' is precisely what he was advocating people work on before he left...


I will always blame whoever most recently left for everything. The rain, the chair being broken, the mouldy cream cheese in the fridge.

No joke. It's Brad's fault.


It will indeed be his fault. He set them up for failure and left, they'll say.


Isn't there a name for this pattern?

I swear I remember reading a piece on how having a team member leave can be good for cohesion and efficiency, because everyone is free to vent about old, bad decisions without targeting anyone still at the company. I think it was part of a story about a team clearing their technical debt by convincing a reluctant manager that they needed to undo the last guy's mistakes.


"Blame Canada". PM: "We can't blame Canada for missing our targets 4 sprints in a row."


And in a way it's saner than blaming, over and over, people who still work 5m from you, even if they deserve it; and we all have stories about that kind of people.

You often need to be more subtle with them (x) than blatantly calling out their bullshit every time...

(x): I mean trying to influence them so they progress and your life gets better, not shitting on them


It's a long time since I've had to deal with "strict" Scrummers. But I do remember being utterly baffled as to the insistence on user features being ready by the end of a sprint, even for quite complex, technical components.

Why can't we make some sub-component this sprint then the UI bit the next?

I tried various ways of reframing it such that the developer of the UI be the "user" but it didn't wash.


Because that two weeks you spent writing excellent code is mostly useless unless you have a way to get feedback on it. As a customer, you've given me no value.

What if you finally hook up UI to the sub-component and the customer/stakeholder decides they don't like any of it? You could have known about it earlier.


The problem with this is that it imposes a membrane on the process that is only permeable to work that contains a shippable piece of user-facing software. It presupposes that all work can be divided into such pieces. And since that's not true, it leads to contorted stories to squeeze necessary work through that membrane.


I agree that sprint lengths can be arbitrary and counter-productive to developing good software.

I think it's important to understand that the sprint "membrane" is not designed to help developers. It's designed as a compromise between developers and their managers.

Developers want to work uninterrupted and perfect their work before releasing it, and managers want working software quickly and like to interrupt developers and change courses frequently. Sprints are an attempt to find a middle ground, but it's not always ideal.


> It presupposes that all work can be divided into such pieces.

No, it states the reasonable conclusion that all valuable features can be broken down into such pieces (because if a feature can't be, it adds no value to the product).

There are also Scrum tasks around research, tech debt, etc. that are perfectly fine to create and work on, but they have an effect on your overall time to build new features, and that's ok. It needs to be done, so it'll get prioritized accordingly.


You italicized "valuable features" and I understand why, but I think the trouble spot is actually the word "all"; your parenthetical about adding no value merely begs the question. I'd argue that, indeed, some valuable features can be organically broken down into two-week user-facing deliverables, but certainly not all and probably not even most. There's nothing magical about being able to partition a deliverable into a sprint's time frame that, by such distinction alone, makes it valuable versus valueless.

Systems like scrum try to wrangle into a manageable bolus a process that -- if we're to make good software -- necessarily includes creativity, inspiration, and the traveling of paths yet unseen. It's like writing a novel and having two-week deliverables like "complete the arc of the Alice character", versus "write approximately 100 pages". It's not valueless to write 100 pages, despite not finishing the Alice section, and perhaps specifically because we discover that Alice's emerging story turns out to intersect perfectly with what we want to do with the Bob arc later on.

So there's writing and engineering and lines of code versus plotting and architecture and inspiration. We need all of it, right? Does everything that's not a "valuable feature" have to be shunted into the cul de sac of a research spike, doomed to be frowned upon by the management for whom the system otherwise provides the sheen of predictable velocity? Does the critical work of "dreaming" of what to do at both macro and micro levels become a casualty of the banal necessity of marching equal-sized boluses through the development tract?

I'm making a florid point, but to me this feels like the essential tension.


> but certainly not *all* and probably not even *most*.

I've yet to find any that can't, or even to hear of any. Usually that's a problem with the people trying to break things down not being used to modeling things differently - not with the process. It's pretty common.

I'd love to hear some examples tbh.

> There's nothing magical about being able to partition a deliverable into a sprint's time frame that, by such distinction alone, makes it valuable versus valueless.

Of course not! The point is not that the time-boxing makes it valuable - I hope I didn't imply that - it's that all features that have value should be specific enough to be broken down. If something is too vague, it's not a valuable feature; at that point it's just pie-in-the-sky spitballing. That's what requirements and grooming are for: to identify what needs to be broken down, expanded on, or specced out better by the Product Owner.

> It's like writing a novel and having two-week deliverables like "complete the arc of the Alice character", versus "write approximately 100 pages".

That would be an awful Scrum process. Also, the Product Owner is the Sprint Team in a novel, but let's pretend they are two separate people for this. Here's how that should work in a Scrum system...

- Story: "complete the arc of the Alice character"

- Feedback: Too vague. What IS the arc? What perspective should it be from? Lots of questions, needs to be broken down.

Next, the product owner works out that we're going to hit these beats in a three-act structure.

- Story: "complete Alice's arc for Act One, with exposition introducing her and the other characters, ending on the inciting incident for the main plot"

- Feedback: Better, but this is really 3 things - the introduction of Alice, the intro of the other characters, and the inciting incident. You should split those up.

Next, the PO has split this up into those three Stories and presents the first one...

- Story: "We need to introduce Alice so the audience can start to get to know her."

- Feedback: Great, this seems low complexity and simple. Do we have requirements for her backstory? Ok, great, let's work with that. Seems like a Complexity 2 Story.

Then, when Sprint Planning...

- Scrum Master (not PO): "Ok so we were planning on getting this Alice introduction done this sprint, we still ok with that?"

- Everyone: Yup!

- Scrum Master: "Ok, let's break this down into tasks of things we want to cover then..."

And then that part of the story gets written. Obviously it's an odd metaphor, but that's kind of how many (not all, at all) professional authors break down a lot of their writing process anyhow. Some are more freeform of course, but many plan a lot too.

The point is - that just because you have a planning step doesn't remove the creative process, it helps you plan better for work. That's all.

Scrum doesn't magically change how you code or what you code, it's just a planning and change management tool that emphasizes incremental steps.


That's why I sometimes say at my workplace that we should make our projects in Flash. Faster to make an interactive UI this way and get the approval of the management/customer, no time wasted on useless things like having the program actually work in an efficient, useful and secure way.


I knew some folks in the 90's that prototyped in Shockwave. They did UI and animated use cases. I was impressed by how quickly people got their blob-type animated use cases - basically little cartoons - up and running. Seemed like way too much work, but I guess it worked for them.


I've learned the hard way just how effective such prototyping tools can be if you only care about... prototypes. Or the visual stuff. Seeing a designer whip up a running example in Construct2 in 15 minutes that was equivalent to what me and two of my friends had spent the last 8 hours coding taught me to respect those tools, at least in particular use cases.

And my point is, if we're focusing only on short-term client-recognizable value, we may as well just make shiny prototypes. Who cares about the pesky internals anyway.


To be fair to them, they were hella good C++ programmers and used Shockwave to prototype and lock down the business requirements. You can do an amazing amount of specification like that. It was basically CRC cards put to animation as people and things interacting.

They provided quite a bit of short and long term value by being better at giving clients an understanding of what they were actually getting. They cared about the internals and making sure their clients understood the logic the internals would use.

If I was skilled in some animation tool like they were I would do the same.


...Flash?


Yeah, that thingie in which you draw stuff, sprinkle them with a script or two, and with a few click you get something that can be run or embedded in a webpage. The thing HTML 5 supposedly replaced.


> As a customer, you've given me no value.

Erm. But it's you who is the customer in this scenario, not him? Or am I missing something?


He's saying that eventually when your feature surfaces as a UI the customer may not like it.

To which I have 2 responses:

1. not all features need a UI to be useful

2. this also demonstrates the infantilising nature of scrum where no developers can be trusted to think deeply, talk to stakeholders and otherwise do the right thing in a fully-rounded way but must just follow the exact instructions expressed


I mentioned UI because of the parent comment (about hooking UI up to a sub-component).

The core idea I was trying to get across is that until a feature is working and in front of a customer (or stakeholder), it's essentially in limbo because you don't know if you've built what they wanted. Maybe there was a miscommunication, maybe they find the feature confusing, maybe they've changed their mind. The goal is to get feedback as soon as reasonably possible.

eg: A stakeholder (or customer) requests feature X, and everyone agrees it's a good idea and we should work on it right away. The dev team could spend 2-4 weeks writing excellent behind-the-scenes code that's not hooked up to anything, or you could spend 2-4 weeks on holiday. Either way, you've given the stakeholder the same thing: no new feature.

If you're confident that you know exactly what it is you want to build then you don't need Agile, scrum or sprints. Scrum isn't supposed to be waterfall with arbitrary reviews every 2 weeks.


But he said:

> As a customer, you've given me no value.

Meaning:

> You, being a customer, have given me no value.

This is what I don't understand.


He said

> As a customer, you've given me no value.

Meaning:

> [From my hypothetical perspective] as a customer, you've given me no value.

Or more concisely:

> [Speaking] as a customer...


I'm not sure this is how the grammar works...


Customer is a poor term. It should be stakeholders. Sometimes the stakeholders aren't "customers" per se, they could be other parts of the organisation for example. I'd consider my CTO a stakeholder and I'm pretty sure he's interested in the value of fixes for a security audit or working database backups even though those don't necessarily have a nice demonstrable UI.


> I tried various ways of reframing it such that the developer of the UI be the "user" but it didn't wash.

Are you surprised? The developer isn't the user of the UI, product wise, so no wonder it didn't get very far.

The trick in this case is to break the story down. So the original story isn't do-able in one sprint? Ok, so what is actually the MVP of that Story? What's the first block that builds the overall feature? Take a 5 point Story and make it three 2 point Stories or something. That's what the refinement step is for in Scrum.

Unless you're doing abnormally short sprints there should be some part of that Story that can be abstracted into a smaller Story that fits into a sprint.


> Why can't we make some sub-component this sprint then the UI bit the next?

Because that's how you get bad UI. The user-facing design needs to drive the API interface, not the other way around.


I disagree. It varies from situation to situation, but I would argue that in my domain at least (healthcare), this is how you get bad data models.


"The user-facing design needs to drive the API interface, not the other way around."

I don't see how this need would nullify the ability to modularize code.


Modularize by functionality, not by layer. Start with a simple end-to-end path and grow outwards, rather than trying to go top-down or bottom-up, and don't split into distinct layers until you're actually deriving value from doing so. Writing code when you don't have the use case yet is always a bad idea.


Yep. And then at the third sprint into this path you realize that the way the backend was built during the first two sprints makes it impossible to deliver some essential or just very desirable feature that nobody took into consideration, because it was outside the scope of the first two sprints. Seen it happen multiple times.


That's the opposite of my experience. It's always the extra layer that was added for some planned feature that we never actually implemented in the end that gets in the way.


I disagree. This is how you end up with messy code that needs constant refactoring. Making time to come up with a composable design shouldn't be that difficult. It usually isn't.


All code should be constantly refactored. This way your code gets refactored as the design changes (which... it will) rather than having to shoehorn an architecture that worked great originally into a set or requirements that have moved on.


Not true. If you have the design ready before implementation starts (as you should, unless it is very simple) then you are free to implement things separately. Just have the interface well specified.


Sounds like you've never worked in a shop where they think "agile" == "no need to plan ahead"


This is sad, but my point still holds. Perhaps being honest with oneself about the real tasks that need to be done will help. Planning the architecture/feature, doing a proof of concept, etc. are all valid tasks even if they don't bring immediate business value. If you claim that you can only do business features without doing any exploratory work then you are effectively claiming to have an oracle giving you perfect solutions out of the blue. In which case you should stop whatever you are trying to do and start selling the services of your oracle.


Speaking from a UI developer perspective, this never works :) But YMMV, I suppose…


We always have a brainstorming session before implementing any serious feature/change. Once we have the initial design, business gets involved (if we can get their attention...). The result usually works without needing any drastic changes, which is still better than no planning at all. It does require involvement from a number of parties though.


Design is better when it's informed by implementation, IME. The only complete specification is working code; if you accept something lower-fidelity then it's very easy to miss ambiguities.


Even code has bugs, so it's not perfect either. Your design can incorporate the high-level algorithm to be implemented, ins/outs, or whatever other high-level constraints are most applicable to your domain.


But if you design things ahead of time, then you're not "agile"!


See my other comment regarding oracle: https://news.ycombinator.com/item?id=12249897


In your church, maybe.

In a country where logic reigns, it depends.


Imaginary countries don't count.

;)


Depends what you're building. Which leads us to the worst (meta-)aspect of Agile/Scrum these days, which is that it's the industry's current favourite hammer and so it gets used to bash every single problem. Now, hammers are quite versatile, but you have to know when to bash, when to use the claw, when to lever or just nudge things instead of swinging.

The moment you start getting dogmatic about your process is the moment your decisions start being driven by something other than your actual needs at hand. And that's the moment when you start to produce a bad product.


Author of the post here, thanks for your comment. As you said, the fact that you have to stick your neck out and fight for code quality is very annoying, and after some time one just stops doing it and goes with the flow: "add one more line of crap to get things working and collect the points".


"Lean" is supposed to address quality and habitability more. 2 of the 7 tenets of lean focus on it:

Build Quality In

Optimize the Whole

I've never practiced it and my only exposure has been reading some of the book "Lean Architecture". But it certainly seems like an improvement over Scrum, which seems blind to some of the most critical things in developing complex, innovative systems. In fact, I'm convinced that Scrum was designed to work with basic information systems, where a feature consists essentially of adding a new data-entry form or report to a database (e.g. a lot of web stuff).


Remember that "Lean" (Scrum is really a subset of those practices) is at its core a manufacturing process. Manufacturing isn't engineering. The author of the original post captured this very well:

> ...one can claim that this is not the job of Scrum, which is a software management methodology, and not a software engineering methodology, that it's only concerned with organizing the teams' time and workload

A manufacturing process can only drive quality if both the primitives and end system have gone through some sort of engineering process. Web developers love this stuff because they have components, frameworks and clients where the engineering details are taken care of in MANY scenarios.

Using a contrived scenario involving LEGO -- you cannot expect kids and parents to design new LEGO parts. The components are designed and engineered to work together in a completely different context. Carrying this further, if you want kids to assemble a consistent, specific artifact (Say a Star Wars X-Wing fighter), someone at LEGO needs to design that and produce documentation. Designing on the fly with sprints ("ok, kids, today figure out how to make a wing") isn't going to be a productive exercise.

Most SCRUM projects that I have personally seen fail are projects where "agile" is a codeword for "we didn't think this through, so we'll figure it out as we go". Then the "sprints" become a real joke as the team repeatedly runs into a wall.


That book appears to be very interesting, I'll definitely have a look. Thanks for the tip.


I feel much the same way as much of what you've said in this post. I'm curious what you think of things like the #NoEstimates camp that removes the ambiguity and "gut check" nature of things like points or even time estimates?


I read the first chapter of the no estimates book, and have to admit that I found it weak on arguments and poorly written. I think there is real value in doing software estimations if you take them seriously, even using story points, but there must be better ways of using those estimates. One really interesting way of doing this is the Monte Carlo estimate method I linked to in the blog post. Would love to try that sometime, and see how it works out.
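For the curious, the Monte Carlo idea can be sketched in a few lines: resample historical per-story cycle times to forecast a backlog as a distribution rather than a single number. The data here is made up purely for illustration.

```python
import random

# Made-up historical data: calendar days each completed story took.
historical_cycle_times = [1, 2, 2, 3, 3, 3, 5, 8, 4, 2, 6, 3]


def forecast(num_stories, trials=10_000):
    """Resample history to forecast total days for a backlog.

    Returns 50th/85th/95th-percentile outcomes instead of a
    single-point estimate.
    """
    totals = sorted(
        sum(random.choice(historical_cycle_times) for _ in range(num_stories))
        for _ in range(trials)
    )
    return {p: totals[int(trials * p / 100)] for p in (50, 85, 95)}


# e.g. "in 85% of simulated runs, 20 stories finish within N days"
print(forecast(20))
```

The appeal over a gut-check point estimate is that the output is honest about spread: you commit to a confidence level, not a date.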


The notion that the business can make decisions in the presence of uncertainty is the basis of No Estimates. As a participant in another accelerator, our funding sources would find that laughable at best and toss us out of the cohort at worst.


This sounds like more of an issue with technical debt management. I'm much more of a proponent of Kanban, whenever possible, so in this case if a user story requires a refactor or architectural change then that's just fine. Also regular code reviews to eliminate hacking and sanity checking automated tests shouldn't be made optional, they should be actively encouraged by the team leads and dev manager.


    >  if a user story requires a refactor or architectural change then that's just fine
Those clear cases are not the problem, since they are... clear. There is a lot of need for refactoring that arises only slowly, and not through any one feature. It's more like the boiling frog thing (which, by the way, is a bogus story, but I still use it because everybody understands the point). So the problem is that you can never really justify the refactoring effort using one specific new feature.

It's like partial vs. full cost accounting: sometimes you just have things you can't break down, but that still need to be done (paid). That makes this a real problem for organizations that don't have the accounting set up for it. In my experience, large firms especially may have teams that get their budget "per feature", so even the best manager can't solve the problem - and the next (management) layer above doesn't care, because your little software team is too small to get them to laboriously change the accounting process and the SAP system.


Kanban is the software development equivalent of the Concentration (Memory Match) card game.

https://en.wikipedia.org/wiki/Concentration_(game)

As though a useful, usable product design (architecture) could be divined thru piecemeal revelation.


One amendment I always wanted to make was to have a "technical debt dial". You could stick it on the wall next to the scrum board.

The debt dial should be from 0% (spend all the time refactoring) to 100% (spend zero time refactoring, get it out at all costs).

Management should have complete control over the dial. Developers should have control over what kind of refactoring they do (ideally the retro should have a question "what debt/tooling issues caused you the most pain this sprint? and a parallel track for debt/tooling stories").
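As a back-of-the-envelope illustration (the function and the percentage scheme are mine, not any standard Scrum artifact), the dial is just a capacity split over the sprint's point budget:

```python
def split_capacity(total_points, dial_pct):
    """Split a sprint's point budget between features and refactoring.

    dial_pct follows the dial described above: 0 means spend all the
    time refactoring, 100 means all features and zero refactoring.
    """
    if not 0 <= dial_pct <= 100:
        raise ValueError("dial must be between 0 and 100")
    feature_points = round(total_points * dial_pct / 100)
    return feature_points, total_points - feature_points


# A 30-point sprint with the dial at 80:
features, refactoring = split_capacity(30, 80)  # 24 feature points, 6 debt points
```

The value of making it this explicit is mostly political: management owns one number, and the debt work stops being invisible.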


I often have the impression that the term "technical debt" is just a euphemism to avoid admitting that someone on the team has produced poorly thought-out and poorly written code.

I'm not sure I ever found myself in the position of writing bad code just for the sake of speed - I surely wrote tons of bad code because I didn't know how to do it better, or because I didn't have the requirements clear from the start, or because of bad design and planning. But to get a feature out quicker, no. If anything, it seems to me that writing bad code requires more time than writing clean and elegant code.

In the end, "technical debt" becomes a way to shift the blame from your own (the team's) inadequacy at planning, designing and developing, towards supposed time constraints that always lie outside of the team's responsibility.


I have to disagree with this comment. Technical debt is sometimes the result of just one lone cowboy coder, but even then there's some responsibility across the team because that means his or her code passed all reviews, i.e., nobody took ownership for the overall team's code quality and vetoed the bad code.

Time constraints can certainly be relevant too. It's not always an artificial shift of blame. For instance, time constraints could be why the code passed review to begin with.

And some programmers certainly do write worse code when they have to do it quickly. Typically, the code itself doesn't look all that bad in a vacuum, but it presents problems months down the line when a new feature needs to be added or an existing one changed in a non-trivial way. There was little forethought in its design.

Or, let's just look at the ways you said you might write bad code:

> I surely wrote tons of bad code because I didn't know how to do it better, or because I didn't have the requirements clear from the start, or because of bad design and planning

In other words, these things can cause you to write bad code:

1. You just didn't know better.

2. Unclear requirements at the start.

3. Bad design and planning.

In (1), you might realize after writing some code that you didn't quite know what you were doing and you should refactor it, but you're now under pressure from management to just get it out. Oops, no time to refactor. Now code that you know is bad is going into production, and it'll bite you in six months.

For (2), the reason the requirements weren't clear is because not enough time was spent by management/product owners/designers on clarifying said requirements.

For (3), it's the same thing -- bad planning is often the result of time constraints (notably, time constraints which may not be visible to you as a rank and file programmer).


> but it presents problems months down the line when a new feature needs to be added or an existing one changed in a non-trivial way.

I think I have yet to see any software to which this doesn't apply sooner or later. While the whole point of "technical debt" is that it is something you're supposed to knowingly acquire because of time constraints. You're basically saying "we knew it was wrong, but they forced us to do it that way". While to me, most of the times, the truth is that you really didn't know. Yes, you coded in a hurry, but there is always a time constraint of some kind so that's no excuse. Somebody else in the same time would have done a better job.

As for my points: if I realize I didn't know what I was doing, I always refactor the code. Committing code that you know to be conceptually wrong is just sloppy. And if I am under pressure from management, it's because I spent time developing without understanding what I was doing; a better developer would have gotten it right on the first try, and there would be no technical debt.

If the managers/product owners didn't produce clear requirements, it's not technical debt, it's a sloppy job on their part.

If the planning was wrong, that's a sloppy job on the part of who had to do it. Responsibilities should be found and action should be taken. Saying "ah yes you know, we were under pressure so we (kind of naturally) accumulated this technical debt" is just a way to save everybody's face.


> And if I am under pressure from management, it's because I spent time developing without understanding what I was doing; a better developer would have gotten it right on the first try, and there would be no technical debt.

While this reasoning is probably not technically wrong, I'm not sure if it's relevant to the real world.

You can always make an argument of the form "there exists a developer who could have gotten this feature right on the first try, with very little time spent." This simply does not matter when you do not happen to have that developer on your team right at the moment. The nature of the work is that you will work on a variety of things, and you won't necessarily be the best in the world at every individual thing. You're inevitably going to encounter work that's challenging enough that you don't get it perfectly right on your first try.

> If the managers/ product owners didn't produce clear requirements, it's not a technical debt, it's a sloppy job on their part.

It's a sloppy job on their part, induced by time pressure, which produces technical debt. I feel like you're just playing with definitions here to avoid admitting that technical debt can come from poorly managed time pressure.

> if I realize I didn't know what I was doing, I always refactor the code.

All that tells me is that you've never been under a lot of time pressure. That's not a bad thing. It most likely means the management at your companies have been competent. But it doesn't mean that technical debt does not exist or cannot be induced from time pressure in other companies.


I guess what I'm trying to say - and it got clearer to me while replying to other comments in this thread - is that we're using the term "technical debt" as a way to avoid talking about personal skills, or the lack thereof, and to avoid admitting our own or other people's faults. Saying that we need one more week of work on something because of technical debt due to exceptional circumstances is one thing; saying that it's because one of my colleagues didn't do his or her job properly is quite different. Remember that the business might have no clue how long or difficult some tasks are, and judge them only by the amount of work required. So a series of bad design decisions and the consequent technical debt can give the impression of a very hard task on which everybody is working skillfully, while in fact it's an easy task with somebody who doesn't know how to do his job.

As for your other points: maybe one of my team members produces consistently more technical debt than the others. Is it still technical debt? Managers or designers can be the source of time pressure for those down the chain, because bad planning or decisions, made in the absence of time pressure, can force others to work under pressure. Again, "technical debt" masks the real problem. I might not have worked in extremely high-pressure environments - but I've surely worked in teams where we were leaving the office at ten or eleven pm every night for months on end just because demented design decisions had been made by the much-respected solution architect.


>As for your other points: maybe one of my team members produces consistently more technical debt than the others. Is it still technical debt?

???

Why wouldn't it be?

Creating lots of extra technical debt, in fact, is a defining feature of poorer developers.


>>I often have the impression that the term "technical debt" is just an euphemism to avoid admitting that someone in the team has produced poorly thought and poorly written code

>Creating lots of extra technical debt, in fact, is a defining feature of poorer developers.

I think we agree then. It's just that "technical debt" makes it sound- to me at least- inevitable and impersonal, while it is possible (at least for substantial amounts of it) to ascribe it to specific people and to avoid it by hiring better people.


>And if I am under pressure from management, it's because I spent time developing without understanding what I was doing; a better developer would have gotten it right on the first try, and there would be no technical debt.

This is possibly the most wrongheaded comment you've made.

1) There is no such thing as "no technical debt". It asymptotically trends to zero but never, ever gets there. If you think that you or anybody else is the kind of developer who magically creates debt-free code all the time, then you're deluded.

2) The "right first try" argument is wrong. You shouldn't even try to get it right first try - that's the whole point of red/green/refactor. You're supposed to get it working and then clean it up because prematurely 'cleaning up code' is an inefficient way to work.

It's not called red/green/refactor-if-you're-too-shit-to-get-it-right-first-try.


Ok for your first point - although I'd argue that technical debt is something that usually asks to be repaid within months. Two-year-old technical debt is just improvable software - that is, it hasn't yet shown problems serious enough to call for a refactor at unchanged requirements.

As for your second point, what should I do? Try to get it wrong? Get it working how? We're not talking about premature optimization here; we're talking about understanding the requirements, understanding the tools, understanding the big picture, understanding your time constraints, and pulling off the best job you can.


Simplest kind of technical debt:

1. I use a particular technique/abstraction/whatever to solve the problem, it solves the problem well.

2. Over time we solve other problems elsewhere. As we go, the bigger picture becomes clearer and we pick more suitable techniques/abstractions/whatevers.

3. Eventually we have to solve a problem that interacts with the original one, and the newer techniques/abstractions/whatevers don't work cleanly with the old code. So we have to make a choice:

a) Hack something together that solves the current problem without us having to touch much of the older code.

b) Rewrite the old code to match the newer technique/abstractions/whatevers.

c) Put down tools and thoroughly evaluate whether there's an even better technique/abstraction/whatever that solves the old problems and the new ones.

We all know that (c) would give us the best code, but it's also likely to mean we never get anything done, because every new problem means reevaluating everything. (b) happens more often, but in reality we usually end up doing (a) due to various pressures.

At no point has bad code been written, but there's technical debt nonetheless.

Over time we become better at adopting patterns and architectures that allow for clearly defined boundaries and a reduced cost of making mistakes. You still get technical debt (because just about anything you want to change can be considered technical debt), but it doesn't tend to cripple your ability to get things done.


>Ok for your first point - although I'd argue that technical debt is something that usually asks to be repayed within months. A two year old technical debt is just an improvable software - that is, it didn't yet show problems serious enough as to call for a refactor at unchanged requirements.

I've worked on five year old technical debt. It meant that bugs were far more common and fixes/new features took 10-15x as much effort as they would have otherwise.

It wasn't that it didn't 'call' for a refactor - it's that the team didn't respond to the problems by refactoring. They tried the following instead: heavy manual regression testing before release (once in two years), waterfalling, longer and longer feature/code freezes, keeping multiple branches around for different customers.

Managerial response was to hire additional mediocre developers, making the problem worse, but it wasn't like hiring better developers made development immediately quicker and less risky. Paying that debt down to a reasonable level was impossible with mediocre developers and would have taken ~36 months with good developers (also working on bugs/features).

>As for your second point, what should I do? Try to get it wrong? To get it working how?

You should do red->green->refactor.

After writing a failing test, your only priority should be to make the test pass. Not elegant. Just passing. Once it's passing, then make it elegant.

The reasons for this are twofold:

1) You're solving fewer problems at the same time. Something you want to avoid as much as possible as a developer is to have to juggle 40 different competing problems at the same time.

2) Refactoring-driven architectural decisions are ~95% of the time better decisions than those made during up-front design.
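The loop is easier to see in miniature. A sketch in Python (the function and test are invented for illustration):

```python
# red: write a failing test first
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"

# green: the quickest thing that makes the test pass -- not elegant, just passing
def slugify(title):
    out = ""
    for ch in title.lower():
        if ch.isalnum():
            out += ch
        elif not out.endswith("-"):
            out += "-"
    return out.strip("-")

# refactor: with the test still green, clean up without changing behaviour
import re

def slugify(title):
    # collapse every run of non-alphanumerics into a single hyphen
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
```

The point of the two versions is that the cleanup happens after the behaviour is locked in by the test, not before.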

>We're not talking of premature optimization here

It's a closely related problem but it's not identical.


What value comes of characterising it as "inadequacy"? I mean you may enjoy the self-flagellation but does it actually help plan or work effectively?

To my mind the "debt" metaphor captures important intuitions: it accumulates interest, slowly at first but rapidly if you have too much of it; it can look like it isn't a problem until it is; and taking on more is often an easy way out of your current situation in the short term.


>I often have the impression that the term "technical debt" is just a euphemism to avoid admitting that someone in the team has produced poorly thought-out and poorly written code.

It's absolutely not that. Technical debt is a natural by-product of working even with the best coders. There's nobody out there who doesn't create it.

Better coders just produce it more slowly and clean it up more often.

>I'm not sure I ever found myself in the position of writing bad code just for the sake of speed

I could literally spend all of my time making code nicer and none at all developing features/fixing bugs. It's always a trade off between speed and quality.


Of course. Some developers (or product owners, architects, managers) generate technical debt more slowly, others generate it faster. Some generate a substantial amount of it on a task where others would generate very little, in the same time frame. So why don't we call it a lack of skills? Sounds the same to me.


Because the same developer with the same skills can often ramp up technical debt to get a feature out in half an hour instead of a day, and there aren't any developers who haven't felt the pressure to do exactly that.

Ramping up technical debt isn't always about speed, either. It's sometimes about risk - it's often less risky in the short term to copy and paste a block of code than it is to change a block of code and risk breaking something else.


True. The amount of technical debt produced is a function of the time constraints and the person's skills. In turn, the time constraints can depend on the skills of other people in the organization at planning, designing, figuring out requirements, managing the team and the process, etc. Saying "ah sorry, we'll have to work one more week/month on this because, you know, technical debt" is sweeping all these possible issues under one big carpet.


I don't see why. Technical debt is just a measure of how much crap there is in the code. It doesn't preclude having a discussion about how much that is to do with skills and how much that is to do with pressure/time constraints/existing technical debt.

The point of the dial is just to make the trade off between quality and speed that individual developers are making every day both explicit and management's responsibility.

It means if the dial is turned up to 100%, management have no excuse for asking the question "why is our product a pile of crap?". It also means that if the dial was at 60% for a year and a half, the developers have no excuse for why the product is still riddled with technical debt. Skills problems get distinguished from time constraints, and managerial pressure comes with a cost attached.


I disagree. Even among those who routinely write good code (all of us in our own minds), the schedule tends to pressure you to solve the problem in front of you, the one in the story, without checking whether anyone else has solved a similar problem already. What tends to happen is you write your own solution instead of finding the similar solution and generalizing it. Then, after a year, you have a dozen related tasks performed in slightly different ways. If the underlying data structure (for example) changes, you now need to alter each of the dozen different implementations.


This sounds like a bad organization of the team, bad process or bad design. Clearly team members don't communicate enough, or groups of similar features have not been foreseen during the design phase, and the same code has been rewritten again and again by different people. This is not contingent "technical debt", this is a serious problem that needs to be addressed with changes in the process.


Organization of the team and process? You mean the scrum process? Indeed, my point. Of course one can argue that no true scrum process would have such problems...


Yes, totally agree with you. But then, why "technical debt"? No, there is something wrong here and this process needs to be changed. Somebody made the wrong decisions, at some level, and there is a very specific issue to be found, analyzed and solved.

And yes of course, as we all know scrum is by definition successful, and all the teams that fail are not following the true faith.


I see technical debt as being the code you haven't written yet. You just ignore the errors from misaligned data instead of writing the data sanitizer. You monitor and hand-reboot processes that leak instead of finding the leak or automating the reboot. And so on.


I think the "speed" part is that once you realize it is bad code because you didn't know how to do it better, you don't have the time to scrap it and start over.


When I read this I assumed you meant a dial that represents the current level of technical debt, rather than a dial that represents how much technical debt you're allowed to accrue.

The other interpretation might be a neat solution. Allow developers to indicate publicly what proportion of the time their last tasks actually took they would have needed had there been no technical debt. It would handle the translation from developer-speak into business-speak pretty effectively.
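As a crude sketch of that reporting (the data and the "no-debt" hours are invented; the latter would be each developer's own judgement call):

```python
# each entry: (hours the task actually took, hours it would have taken with no technical debt)
finished_tasks = [
    (10, 4),
    (3, 3),
    (16, 5),
]

def debt_tax(tasks):
    """Fraction of total effort eaten by working around technical debt."""
    actual = sum(a for a, _ in tasks)
    clean = sum(c for _, c in tasks)
    return (actual - clean) / actual

print(f"{debt_tax(finished_tasks):.0%} of recent effort was lost to technical debt")
```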


At the very least you need both dials! If you have only the one that management sets, saying how much time to spend refactoring, it will always be at 0. (Okay, they don't literally have to be dials, but there has to be some communication of the kind you describe, so management has some idea how much time is being lost.)


>At the very least you need both dials! If you have only the one that management sets, saying how much time to spend refactoring, it will always be at 0.

Management decisions are usually CYA based, and leaving the dial set at 0 both exposes them and gives developers a get out of jail free card.

i.e. leaving it at zero is suicide. That's the whole point of making it a trackable dial.

Probably what would happen in most cases is it would fluctuate between 30% during average times and 0% during crunch times.


That seems utterly wrong to me.

I'd expect any manager worth his salt to put that at 10% or 20% (basically anywhere NOT 0) and use that number as a reminder to devs that part of the job description includes MAINTAINING the system in proper condition, not just piling up new stuff on top of the existing random stuff.

Same as a car needs regular maintenance, a dev project needs regular maintenance.


One question I have is how does one actually objectively measure technical debt? I mean by anything more than a guess or intuition. Does every singleton/global variable count? Does every comment with a HACK count? What other factors contribute to technical debt, and by how much?


I've pondered this problem for a while and I think I narrowed it down to the following:

* Tight coupling (this would include global variables, among many other things)

* Lack of code cohesion

* Code duplication

* Code that doesn't fail fast (e.g. weak typing).

* Variable/class/method naming that is either not sufficiently disambiguated or is wrong.

* Lack of tools to run and debug code

* Lack of test coverage

Whenever I go looking for code to clean up this is what I keep an eye out for.

I'm pretty sure that each of these could be measured empirically somehow (I've read papers to that effect for a few), but we're not quite there yet in terms of tooling, or even in agreement over what technical debt actually is. Give it 5-10 years.
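In the meantime, some of the cheaper proxies in that list (debt-marker comments, oversized functions) are easy to count mechanically. A rough sketch for Python source, with the length threshold being an arbitrary choice:

```python
import re

DEBT_MARKERS = re.compile(r"\b(TODO|FIXME|HACK|XXX)\b")
MAX_FUNCTION_LINES = 50  # arbitrary threshold for "too long"

def scan_source(text):
    """Crude debt proxies for one Python source file: marker comments and long functions."""
    markers = len(DEBT_MARKERS.findall(text))
    long_functions = 0
    in_function, current_len = False, 0
    for line in text.splitlines():
        if line.lstrip().startswith("def "):
            # close out the previous function before starting a new one
            if in_function and current_len > MAX_FUNCTION_LINES:
                long_functions += 1
            in_function, current_len = True, 0
        elif in_function:
            current_len += 1
    if in_function and current_len > MAX_FUNCTION_LINES:
        long_functions += 1
    return {"markers": markers, "long_functions": long_functions}
```

This obviously misses coupling, cohesion and naming problems entirely; those are the ones still waiting on better tooling.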


There is no objective measure. Yesterday's good code is today's technical debt.

But at the same time there is definitely some underlying real quantity. Different developers may not agree on every single aspect, but they'll agree about the difference between a good codebase and a bad one, and it really does take longer to make changes to the bad ones.

So what can you do?


I believe the technical debt could be measured as:

The time it takes to change code that is already written.

For instance, if you want feature N+1, but to do feature N+1 you need to change feature N, then the amount of time you spend changing feature N is the technical debt.

So when you are estimating, you could say "we need to refrob the whozzit to make it compatible with foo 2.0", and that work could be captured as technical debt.
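As a toy version of that bookkeeping (the task names and hours are invented): split the estimate for feature N+1 into "changing feature N" and "genuinely new work"; the first bucket is the debt payment.

```python
# estimate for feature N+1, split into rework on feature N vs. genuinely new work
estimate = {
    "refrob the whozzit for foo 2.0 compatibility": 6,   # changing feature N: debt
    "build the new foo 2.0 screen": 10,                  # new work
}

debt_hours = estimate["refrob the whozzit for foo 2.0 compatibility"]
total_hours = sum(estimate.values())
print(f"{debt_hours}h of {total_hours}h ({debt_hours / total_hours:.0%}) is debt repayment")
```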


I do believe that this N+1 - N captures the idea well.

One issue though: if the requirements have changed, it is not a good metric.

i.e., we did that theme in blue for iPhone; now people want it green AND on Windows. It's not necessarily technical debt; it's cancelling everything we've done to make something else entirely (even though it may seem superficially similar).

If the requirements and/or features for N+1 ignore or counter the requirements/features from N, it's not necessarily technical debt; it might be that you've just got no idea what you're coding and you're running in circles.


Agreed. One could argue that at time t+1, the product that meets the market need is the N+1 version. Therefore, even if feature N developed at time t was perfect for that time, it is still technical debt at time t+1. I could think of a lot of technical innovations that at their time were the perfect fit for the market, but at a future time become technical debt.

I think the point to take away is that software is just a tool, an artifact, and at the end of the day it just matters that it does what it is "supposed to do". However, civilization is dynamic, so we are constantly seeking to optimize an ever-changing potential function. So perhaps it is not the programmer that is running in circles, perhaps it is the market. So for the manager accounting for the technical debt, they must accept the change as a cost of doing business.


That would mean technical debt depends on which features you want to implement, i.e. on how much code has to change to support them. Still rather subjective and hand-wavy.


If the technical debt is in areas that you will very rarely go back into, then it could perhaps be considered low-interest, compared to technical debt in high-churn code paths.


Right, so the crux of the issue is that to understand how much technical debt you have, you must understand what the market wants and what features you need to meet the demand. So I think it is extremely subjective. You could say "our product is perfect, it should never change". In doing so you have accepted that all processes and information that create the product are also perfect, and therefore your technical debt is zero. You must pay nothing to achieve exactly where you want to be.

You could think of technical debt as the energy lost due to friction: the energy lost moving from one point to another in your domain space. Perhaps "technical friction" would be a better concept for software. However, that is even more hand-wavy and harder to measure ;-)


In my experience, you struggle to move that dial from 100%.


In my experience, once it becomes a tracked metric that can be used to destroy a manager's career you can bet the dial will start to move.


Management is just going to leave it at 0% the whole time. Once in a great while, if you're lucky, you might get 5-10%


For most companies a static image of 100% would do for this dial.


Only if it's untracked. If it's tracked and the responsibility of management it'll go down.


I've been in situations like this, and when I've brought up techdebt issues, I get "YAGNI". I've tried to frame technical debt issues in terms of how it will impact future feature requests, performance, etc, and YAGNI is what normally comes back. Until... they actually NI. Then it's a hair on fire crisis. Fortunately have not been in that type of situation for years, but I know they still happen.


For those that didn't know - "You aren't gonna need it" (acronym: YAGNI) is a principle of extreme programming (XP) that states a programmer should not add functionality until deemed necessary.


Sorry - thanks - was in a bit of a rush and forgot to add that.

"YAGNI" is... in general... not that useful when used by people who've never worked on a particular type of project, because, almost by definition, they don't what what they will and won't need.

And... defining "needs" is its own headache. Needed by whom? I'll tell you what: we need to ensure we have a logging system in place that can alert folks, and we'll need the ability to view logs in production, and share access to those. Years ago I got "YAGNI" back on that, and months later... weird bugs that no one could reproduce, and the minimal logging in place was only accessible by one guy who was out of town for a week. But hey... we got those "rounded corners" to work in IE5 and IE6 with only an extra week of work - yay...

YAGNI is a good principle, but as with most ideas, you get problems when you introduce various types of people in to the mix. Someone who's never done a project type X should not be the one making YAGNI decisions when other people on the team have done multiple "project type X" before, and are trying to introduce basic requirements.


> the stories are always articulated in terms of user facing features

Reminder: the "User" doesn't have to be an external person to the team. It can be internal for tools, or a tech debt task for internal issues like the ones you mention.

Thinking Scrum is just about end/paying User stories is mistake #2 I see when people do Scrum (#1 is the classic "We're going to not actually do Scrum but call it Scrum" that leads to a lot of "Scrum doesn't work" articles itself).


> Because the stories are always articulated in terms of user facing features they encourage developers to hack things together in the most expedient way possible and completely fail to capture the need to address cross cutting concerns, serious consideration of architecture, and refactoring.

This is one of our biggest challenges. Much of the work we're doing isn't confined just to a user feature, so square-peg-round-hole syndrome affects us a lot.


This was my favorite point out of them all:

> What about contributing to open-source software? Reading the code of an important external dependency, such as the web framework your team uses, and working on bugs or feature requests to get a better understanding was not part of any Scrum backlog I've ever seen.

Working at various startups, I have developed a methodology of contributing back to open-source projects we use without accounting for it in sprints or the ticketing system. It involves getting to the office an hour early, over-estimating on my other tasks so I have time for something extra, and a sprinkle of office politics.

And yet, it is some of the most valuable work I have done. Not for me, but for the companies I've worked for.

To get an in-depth understanding of that one Django or Express.js feature you use, or even better, to find and fix a bug that affected the business or may have affected it in the future, just gives you that much more of an edge over your competitors. Say goodbye to that nasty workaround you had to use to get around the bug--now it Just Works exactly how you need it to!

What's more, it's attractive to engineering candidates when you get to tell them the story of when you fixed a big bug in Socket.io.

The best engineering managers I have had have been receptive to the idea that this type of research and/or contribution to other projects should be considered "work" that provides value to the business.


Fixing a bug in open-source software and consequently avoiding a workaround should count as double points in Scrum.


Biggest mistake in scrum is to give points for bug fixes, even if they're not yours.


We now use Kanban on our team, after using Scrum for a couple of years. Pointing is a process to discover unexpected hurdles, uncover hidden knowledge from coworkers, and serve as a guideline. By not having sprints, you just focus on your current ticket, not on artificial deadlines or points. Tickets are done when they are done, and managers don't have expectations of completeness that are disconnected from reality. We've added priority swim lanes to our process. We have a prioritization meeting with PMs every week to assign priority to tickets, order items in the high and medium priority lanes, and review blocked items to see if anything can be nudged back to a working column. The process is clear, the expectations are fluid, and the work gets done in the time it needs to get done.


There's certainly a lot to like about that approach. But my experience is that it makes it very easy to end up with tickets that end up taking months (because there's no longer a natural point at which to stop and take stock), and/or accepting tickets that don't really have clearly defined acceptance criteria, which risks working on something that won't actually turn out to be useful. I don't like the artificial 2-week cadence of Scrum but I think I might still prefer it to not having one at all.


When I worked on a team like this, the rule was that when you came free and were looking for work, before starting any new stories you looked at the work in progress to see if you could help expedite any of it. The idea is that it's everyone's responsibility to try to minimize the work in progress (Lean).

We also had the general expectation that one story should generally take no more than about two weeks. Before starting a story, if you think going in that it will be too big, then you try to limit the scope or defer parts of it into new stories until you're confident it can be done in two weeks.

Once a week we tracked how long stories were in the "In progress" column, and once it'd been up there for three or four weeks, people started asking how they can help wrap it up. I think the longest I remember a story being in progress was about 6-7 weeks, and that was real uncomfortable for us. Typical times were 1-3 weeks.

So we had a weekly cadence for demos, and product owners liked that they would see steady, regular progress rather than being inundated with sudden large dumps at sprint boundaries.


Two weeks! Any story that takes more than 2 days is generally broken down into smaller stories. A two-week project is more like a small epic...


Lots of teams have different expectations about how granular a user story should be, and some of this has to do with the project. How fast can you design, develop, test and deploy a meaningful amount of new content?

One explanation for the longer time is that we had a process where for each user story we'd write a short, informal design document and send it to the team and stakeholders for review. For non-trivial stories we'd then meet to discuss them and come to a consensus about how to implement it. This probably added a day or two to a story's duration, but it meant that we had a solid system at all times. It also often served a similar purpose to a retrospective, or produced topics for a retrospective, because these meetings would surface any technical debt and other impediments to progress. It also served as knowledge transfer and helped the team converge on design principles and expectations.

So these meetings had a cost, but we felt it was essential for practicing agile design. We didn't find this made us slow to react. On the contrary, because this kept our technical debt low, and our software well designed and well understood by the whole team, it meant we could pivot on a dime.

I think there's a real risk in going too fast. Maximum speed should not be the goal of a software development process, and I don't think any business really wants that. The two primary goals should be: 1) to make predictable, steady progress over long periods of time, and 2) the ability to change priorities as quickly as possible as new information emerges. If you have to sacrifice some speed to get there, it may be worthwhile.


It's up to ticket creators to pipe up if their request hasn't been worked on forever. But we recently reviewed the whole board with the PMs and that was pretty good for getting some things out of limbo, and closing others that never got that important.


Add WIP limits to your columns. That means no cases can be added without other ones being completed first.


That doesn't help. If you have n people you need to allow n tasks in progress. The problem is one person being on the same task for 6 months.


Honestly, I don't think that's an issue. If your lead developer is keeping on top of the kanban board, he will notice it and address it.

Most Kanban boards allow you set to mark items if they go past a certain time.

Manage by exception. If most cases take just a few days, set up some kind of system to mark out the outlying cases. Then you can investigate and address these exceptional cases if required.


> if your lead developer is keeping on top of the kanban board, he will notice it and address it.

Do you have some extra process that makes this happen? I don't remember anything in the version of kanban I saw that implied the lead developer should be doing this.

> Manage by exception. If most cases take just a few days, set up some kind of system to mark out the outlying cases. Then you can investigate and address these exceptional cases if required.

In the project I'm thinking of it wasn't a single exception; rather the length of a typical task gradually crept up (from below 2 weeks at the point when we switched from scrum) until it was normal to have tasks lasting multiple months.


Then your leadership is not doing it right. Cases should take days at most; anything longer should be broken up.

The whole point of Kanban is to have everything visible and known. If you're not looking at the Kanban board and taking action on things that look wrong, what's the point?


> Then your leadership is not doing it right. Cases should take days at most; anything longer should be broken up.

It's easy - and useless - to make this about people. The point is, this is a problem that we didn't have under Scrum, with the same people.

> The whole point of Kanban is to have everything visible and known. If you're not looking at the Kanban board and taking action on things that look wrong, what's the point?

At what point does it look wrong? If the idea is to have everything visible why does Kanban generally include fewer stats than other processes? I was always told the "whole point" was limiting work in progress.


It's a problem under scrum as well. Just because the sprint is ending doesn't mean the case will be finished. It just gets dragged into the next sprint.


That's not supposed to happen, and the process says that in that case you have to at the very least re-estimate. If a ticket is really taking forever, at some point the estimate for the time to finish it will be more than a sprint, at which point you're obliged to split it up.


I'm considering a similar approach with my team. Any specific resources or books or tips you found especially helpful?


We use a similar approach in my company and the key here is to borrow the parts that work for your organization from Scrum and blend them with a much simpler Kanban approach. Don't follow a methodology blindly just because someone wrote a book about it. Keep evaluating & tweaking the process until you think it's at a point where it's working well for your organization.


I disagree with most of the major criticisms here. I think it is a valid description of an experience using scrum half-heartedly, but not an argument against its purpose or value.

Take points, for example: the argument is based on the premise that teams are obsessed with points. What if the teams use points as a framework to discuss complexity? I have worked in scrum groups where, if something was given a large point value, it would be questioned and "split" to break the job into component tasks that could be done by separate people simultaneously (or some now, some later).

Long meeting times with the wrong people in the meeting? That's a managerial issue and has nothing to do with scrum.

Writing stories is the main art of scrum, and without good stories it is pointless. A story can be written to encourage a developer to improve the quality of a codebase, share knowledge with another team, or take time to learn. A really good story would encourage these things and also deliver customer value.

Any system for managing software development hinges on good communication. SCRUM's main advantage to me is that it provides continuous opportunities for face to face communication. If you fail to take advantage or engage with those opportunities then it won't work, but I bet another "framework" wouldn't either.


> A story can be written to encourage a developer to improve the quality of a codebase, share knowledge with another team, or take time to learn. A really good story would encourage these things and also deliver customer value.

Show that to people unfamiliar with the Scrum church and they will say that sentence is meaningless. And well, it is, in an absolute way. A story is just that: a story. Maybe it can help put your son to sleep, but I doubt you need to invent a brand new vocabulary to discuss software architecture and technical debt, and I doubt you can do anything brilliant using only a few examples in a field that is even more driven by pure logic than maths.

We are grown-ups. I don't want to work anywhere where I'm told stories and must get some points done by the end of the week. And this is not anecdotal: Scrum is a derivative of a manufacturing framework. I'm doing engineering.


Perhaps but this could be viewed as the "no true scotsman" thing. My main objections to Scrum are a) you need to shoehorn things into user stories that are not naturally expressed that way (as a user I would like all relevant data in the database associated with standard ontology?) b) In a special domain (healthcare) it requires good developers with some level of domain expertise and I find this rare c) (and this is a management issue) people think you can remove important aspects of the methodology (e.g. colocation of resources) and have it still work


I feel like specialized domains in general require devs with domain expertise.

Maybe waterfall with unusually good specifications can use cog-like devs, but the few times I've seen that done it didn't turn out very well.


Agreed


This is my experience also. Points are complexity and high complexity tasks are most definitely to be addressed by breaking down stories. Dependencies are also easily addressed by setting a Definition of Ready. Don't accept a story if the dependencies aren't met (be that APIs, environments, designs, whatever). Simple.

Planning/grooming we do in one hour. You need good story writers, is all. I once had a planning session last 12 hours because the product management was so awful. If your team sucks, no process will save you.


Exactly. A good agile, and by extension SCRUM, process allows the developers to influence how things get done. If the way stories are written are leading the team to write unmaintainable code, change the way the stories are written. Or talk about why the problem exists in the first place and take action to fix it.

Again, it all comes down to communication. If nobody wants to talk to each other, SCRUM is not going to help.

As with all of these posts, they're extremely lacking in alternatives. Of course SCRUM won't work for every team, or even every project for the same team. But it provides a way for the team to evaluate and adjust how to best accomplish their goals in a straightforward and (if done right) low friction way.

I think it's taken for granted how much effort is alleviated by adopting SCRUM. Just think how difficult it would be to develop an alternative process for every team, every project, and every new person joining one of those teams or projects. Everyone has their own way of doing things; standardizing on a few universal goals ain't a bad thing.


> Long meeting times with the wrong people in the meeting? That's a managerial issue and has nothing to do with scrum.

If scrum tends to lead to long meeting times with the wrong people, then scrum certainly should include safeguards against that. Otherwise we need an "x+scrum" framework which includes those safeguards. I don't think scrum should fix every problem "in management", but "meeting management" should be a core scrum concern.


> Long meeting times with the wrong people in the meeting? That's a managerial issue and has nothing to do with scrum.

Even with two-week sprints, the majority of your last day is taken up by meetings, between the review and the retrospective. In my experience one-month sprints are more common, especially in large enterprises and government, and you will literally have 7 hours of meetings on that final day if you follow the Scrum handbook.

Long meetings are built into Scrum.


Quoting a nice idea from the article: "One way to achieve this might be putting work items through what I would call an algebra of complexity, i.e. an analysis of the sources of complexity in a work item and how they combine to create delays. The team could then study the backlog to locate the compositions that cause the most work and stress, and solve these knots to improve the codebase. The backlog would then resemble a network of equations, instead of a list of items, where solving one equation would simplify the others by replacing unknowns with more precise values."

I've never been on a team that does pure Scrum - even ones which intended to do so always ended up with what we termed "Scrum-ish": taking the ideas from the base methodology, but adding a whole level of 'house rules' aiming to patch up obvious gaps. For instance, always making time for one refactoring or technical debt task per sprint, tracking accuracy of estimated time versus actual time elapsed (deeply uncomfortable but very useful!), the hundreds of different rules around trying to make standups shorter and more useful.

I am a big fan of well-run retrospectives, though: they can be a really nice way to feel empowered as a developer, especially when you have one retrospective identifying that Thing A keeps causing everyone pain, and the next retrospective having everyone say 'Hey, Thing A is so much better now!' Never realized they weren't 'meant' to be about technical matters, though: in our Scrum-ish teams, they were always open for all topics, and I think that's a very good idea.

Of course, the fun thing about Scrum-ish teams is now you have a whole new level of debate that can happen: "We're failing because we're not doing Scrum rigorously enough!" vs "We're failing because we're doing Scrum too rigorously, and what we need is more house rules!" ;)


Retrospectives for us consist of the dev team sitting in a room getting lectured by the PM on why we consistently fail to close all of our stories, and asking for our opinion on what new processes can be introduced to fix it.

Of course, feedback is solicited, but it is an unspoken rule that criticism of project management is verboten. However, criticism of self and others on the dev team is absolutely allowed and encouraged, and so the brown-nosers amongst the team use the opportunity to make themselves known.


This is familiar to me. However, I'm not sure you'll like my "fix". The PM should never be outside the discussion. The moment they become a separate entity, an unaccountable entity, dev can become more and more pathological. It is for us to stand up to that, communicate the difficulties in task estimation, keep a strict paper trail on when specs get shifted/scope creeps so that we have a well backed response when this discussion happens.

I would even argue that in the same way the "brown nosers" are trying to make some positive impact for themselves, as risky as it is, you can do better by fighting bad product management. If you feel the pain, your team does, and likely, your manager might as well. (if it's also a separate entity from the PMs.) The loyalty and trust you can build by defending your devs and being a force for good can be absolutely invaluable as your career goes on.

By and by, although I acknowledge the risk, if you shape your rebuttals well you can find yourself bringing PMs to your side. (A recent "spirited" discussion in which a PM was refusing to institute KPIs to track their features ended with their PM peers questioning that resistance and backing the eng push for better telemetry -- the question "how can we justify how well we're serving clients if we _don't know_?" speaks even across lines.)


I'm skeptical about retrospectives.

We've certainly done them, identified problem points and then solved them (and it does feel good to do that..) but it doesn't actually seem to make things better.

Let's compare it to, say, personal estimations:

When you estimate, execute and reflect, you can tangibly improve your estimation process.

You can quantitatively observe an improvement in estimations on tasks when people go through this process.

Previously: estimated 20 hours for (task). Took 10 hours. Repeat... soon, your estimates are for 10 hours, and you're quantitatively, objectively able to make consistently better estimates.
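The feedback loop described above can be sketched as a tiny calibration routine. This is an illustrative sketch, not any standard tool; the function names and history numbers are invented:

```python
# Hypothetical sketch: recalibrating personal estimates from history.
# Each history entry is (estimated_hours, actual_hours).

def calibration_factor(history):
    """Ratio of total actual time to total estimated time."""
    est = sum(e for e, _ in history)
    act = sum(a for _, a in history)
    return act / est

def calibrated_estimate(raw_estimate, history):
    """Scale a new raw estimate by the observed bias."""
    return raw_estimate * calibration_factor(history)

history = [(20, 10), (16, 9), (12, 7)]  # consistently ~2x pessimistic
print(round(calibrated_estimate(20, history), 1))  # → 10.8
```

The point is simply that the estimate/execute/reflect cycle produces a measurable correction, which is what the commenter finds missing from retrospectives.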

Retrospectives in my experience don't do that.

You can sit through 50 retrospectives, each one identifying a problem area and then fixing it, and yes that does feel good, but objectively, when I reflect on the defect rate as a result of the process, I feel like retrospectives make zero impact on the rate at which technical debt accumulates.

There's something missing in the way they work; all you (well, all we, I suppose, this being my personal experience...) ever do is find things that are wrong and fix them. Objectively when you look at it, there's no closing of the loop where the defect rate drops.

There's no process improvement that generates fewer problems in the future... all it ever does is band-aid to prevent technical debt spiraling totally out of control and devastating the project.

There must be a better way, where you somehow measure how technical debt was created and work to incrementally prevent it happening... but I've never seen that actually happen in practice.


> tracking accuracy of estimated time versus actual time elapsed (deeply uncomfortable but very useful!)

It should only be uncomfortable if the team means 'commitments' when they say 'estimates'.

This is part of the reason stories are generally pointed with (more or less) triangular numbers. So that teams stop fretting about missing estimates by an order of magnitude or less.
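For illustration, here is a minimal sketch of snapping raw size guesses onto a triangular-number scale (1, 3, 6, 10, ...), where the widening gaps are what stops teams fretting over small misses. The function names are mine, not part of any Scrum tooling:

```python
# Triangular numbers: k*(k+1)/2 for k = 1, 2, 3, ...
def triangular_scale(n):
    return [k * (k + 1) // 2 for k in range(1, n + 1)]

def to_points(raw):
    """Snap a raw size guess to the nearest value on the scale."""
    return min(triangular_scale(8), key=lambda t: abs(t - raw))

print(triangular_scale(6))  # → [1, 3, 6, 10, 15, 21]
print(to_points(9))         # → 10
```

Because adjacent values on the scale are far apart, a task that slips from "about 9" to "about 11" still lands on the same point value.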


The retrospective is also my favorite thing about Scrum (or my favorite Scrum ritual, let's say). As you said, when it works, the team feels the rush of having solved a significant problem together. This is the reason I think it should be done more often, and more thoroughly. (I'm the author of the post, btw)


> Quoting a nice idea from the article: "One way to achieve this might be putting work items through what I would call an algebra of complexity, i.e. an analysis of the sources of complexity in a work item and how they combine to create delays. The team could then study the backlog to locate the compositions that cause the most work and stress, and solve these knots to improve the codebase. The backlog would then resemble a network of equations, instead of a list of items, where solving one equation would simplify the others by replacing unknowns with more precise values."

Does this even mean anything? How would it work in practice?


I was going to post the same thing. I found it interesting that the parent comment thought that quote was one of the highlights of the article. To me the quote is far too vague to be converted into anything executable.

My impression is that he means something like identifying a part of the codebase that is painful to work in anytime a story requires touching it. I remember we had some old UI object written in some arcane javascript, and everyone felt like dying when they had to work in it. But in my experience, the pointing process naturally included that as a consideration. I believe my brain already processes that algebra of complexity in its native operations when I estimate a point value. I'm not sure that attempting to materialize this "algebra" into some system of equations would be very helpful.


I think the argument for scrum is something like "Our big organization has complicated politics and scrum is one way of dealing with those politics." However, I often wish the management would directly address the internal politics that undermine productivity. I realize that it is awkward and uncomfortable for managers to talk honestly about the differences they have with other managers and other teams, but that is also their profession. That is, working through any relationship that undermines productivity is the job of a manager.

I wrote about this here: "The Agile process of software development is often perverted by sick politics"

http://www.smashcompany.com/business/the-agile-process-of-so...


> What I have a hard time understanding is why the ancient, simple communication form of text is given second seat. The truth of the matter is that, especially under the constraint of distributed teams, it's difficult to beat text.

That's a really good point. I'd be excited to see what a team could do if each developer wrote a one-page memo about what he did and what he was going to do once per iteration, and a few sentences for each day. Throw 'em in a log, and they might even aid the retrospective.


What a fantastic history of a project that would be...

Maybe we could just go back to .plan files, like Carmack.


Not to take away from either point, and I largely agree, but written word is our 'mode 2' - a lot can be missed without talking face to face.

Yes, the issue with standups/scrum ceremonies is when people, for whatever reason, choose to ignore all the 'other' info they are receiving.

The real key for Scrum for me has to be not only teams that want to write good code, but teams that WANT to get better at working together. That takes the whole team. If the team aren't bought in to this, then I'm not sure which project management tool will work for them, but it sure isn't scrum!


Scrum proponents (a label I would tentatively apply to myself) would tell you that 'you're doing it wrong', but unfortunately a point-by-point reply to this article would detract from the general problem here: Scrum is intended to be the straightest line towards measuring your real progress on a project, and not much else.

If you're working on a project where it is important that you have an as-accurate-as-is-realistic idea of the size of the project, or more specifically your progress through that project, then I can't see how a methodology could be any simpler.

If having a good idea of the size of your project over time and your progress through that project are not very important from a management perspective, the Scrum artefacts will seem like, and will probably in fact be, needless overhead.

Scrum is not opinionated about the actual development methodology so claims about how it affects the code that is written are themselves a bad smell IMO.


> Scrum is not opinionated about the actual development methodology so claims about how it affects the code that is written are themselves a bad smell IMO.

Scrum is actually part of the problem, IMO. I've seen many teams turn scrum into a hammer and treat all future problems as nails.

Example problem: The foobar story has failed for the third sprint in a row.

Likely discussed in retrospective (plausibly good ideas, mind you):

- We need to break down stories more before we estimate them.

- Or we need to stop underestimating foobar stories.

- Or we need to focus on unblocking subtasks related to foobar stories.

Probably unconsidered:

- The foobar code is a mess and needs to be refactored.

- Or the foobar subsystem is too coupled to the Fizzbuzz subsystem.

- Or the need for some developer tools to increase productivity in the foobar ecosystem.

Since scrum is methodology oriented, methodology is the first tool teams reach for when a problem is encountered. And I see this after team leads make it explicitly OK to discuss technical subjects in retrospectives.

I'm not a psychologist, so I can't describe why this phenomenon happens, but I see it regularly.


All of the items you listed under unconsidered should be brought up by the dev team. If the dev team is uncomfortable bringing them up, then that's probably a sign of friction between the dev team and management, which is really common.


I've routinely brought up all of the unconsidered comments in retrospectives. Retros are all about making sprints better, and talking about technical problems is integral to that.


>Scrum is not opinionated about the actual development methodology so claims about how it affects the code that is written are themselves a bad smell IMO.

Pretty much every kind of deadline driven development ramps up technical debt. Scrum certainly isn't the worst in this respect (developers make their own deadlines, and conscientious ones will build the time in), but the emphasis on commitment and the pressure to deliver at the end of the sprint puts pressure on developers to cut corners.

The worst part though, is that the product owner is usually non-technical and will deprioritize stories to clean up technical debt as a result.

IMO for any kind of development methodology to work it must have an opinion on technical debt. Scrum doesn't.


Sprints are meant to be based on the previous sprint's velocity, so any commitment should get smaller and smaller until you can do it without forcing it.


If pressure is ramping up and quality down, the sprints aren't serving their purpose.

One of the few defining characteristics of scrum is that the developers define how much they can achieve, and this estimation is improved over time. If this is not happening there is something else wrong with the culture and Scrum is being used as a scapegoat.


A few defining characteristics of scrum that lead to overly optimistic predictions:

* The prediction is made in a meeting while your head is "out of the code".

* The prediction is made in a group setting, rendering the decisions more easily subject to peer pressure and groupthink.

* The prediction is made up to 2/4 weeks in advance of actually doing the work.

* The prediction is made without risk of overshoot attached. Risk is a critical metric, which scrum conceals.

And the main defining characteristic of scrum that leads to pressure, after all of that unwarranted optimism:

* The prediction is designated as a commitment.


It sounds as though you're objecting to being required to give any estimate at all.


I don't know how you manage to read this. He seems to say he would like to be in a situation where he has the means to give good estimates, but scrum forbids that and forces him to give random, biased estimations.


can we infer that he would like to give his estimates:

* while he is actually writing the code (so not up front)

* not in a group setting but as an individual, so either one person estimating the whole thing or each person giving different estimates

* (third point same as first, don't want to estimate up front)

* must incorporate what is often called 'contingency' (which is actually what the whole point of measuring velocity is for!)

* and the final point - he doesn't want to have to commit to it

how can you _not_ read this into it?


Assume each person giving different estimates for their own work, but not up front - ongoing as code is written.

How is that the same as not being "required to give any estimate at all"?

> he doesn't want to have to commit to it

why not? an estimate is an estimate, not a commitment. Committing to an estimate makes it a commitment, not an estimate.

I might expect a dice roll to be 3.5, I'm not committing to the next roll being 3.5 - analysis should inform policy, in this case expectations informing stated commitments, but the two are not the same.

Furthermore, this bullet point actually takes the quote out of context - He specifically doesn't want to commit to the estimate produced under the previous conditions, not that he won't commit to any estimate. The difference is choosing to commit to an estimate you have high confidence in, versus any estimate given automatically being a commitment (where estimates may be required on demand).


It is totally reasonable for stakeholders to want to track your progress through a project. If you have a good way of doing that then great, you should use that.

Scrum people believe that scrum is the simplest way of measuring that. But at some stage you have to estimate the constituent parts of the project in order to get an idea of its size, and for those estimates to be useful in tracking your progress you have to do it in advance.

I repeat, however: if you don't need to do this then that's fantastic! Many of us do, however, and some of us choose to use scrum to do that, and some of us have had a great deal of success with it.

(edit: I worry that this sounds condescending. I am just trying to keep the tone friendly)


> for those estimates to be useful in tracking your progress you have to do it in advance

In advance of what? The only constraint on a useful estimate is that it comes before the task is finished - it needn't be considered credible at the earliest possible time.

Also, your response doesn't really address my post..


(I went to bed so didn't take long to reply before)

I am clearly not expressing myself well. I am talking about a situation where some stakeholders are expecting a complete picture of roughly how large the project is and would like to be able to track how far your team is through this project on a regular basis.

I am putting scrum forward as a methodology for, in as short a time as possible, measuring the size of that project in a meaningful way: breaking it up into pieces as small as possible, attaching numbers to those pieces intended to measure the size of each piece relative to the others, and then over time discovering how long it takes to complete a piece of a given size.

> Assume each person giving different estimates for their own work, but not up front - ongoing as code is written.

The situation I outlined above (the time when scrum helps out) requires you have a stab at estimating all the constituent parts of the project at the beginning of the project.

> an estimate is an estimate, not a commitment. Committing to an estimate makes it a commitment, not an estimate.

True, but the point of estimating in scrum is to assign relative sizes to the pieces of work, not a number of hours, so this isn't a commitment to finish at a specific time but just to say 'I think this is one of the larger pieces of work in this project.' The person I was replying to sounds like they are on a bad team/project where people use their estimates to blame/finger-point, and they are ascribing this to scrum as if the team wouldn't be doing this otherwise.

And in case you suggest that estimating without ascribing a time value is not meaningful, it is used to track how far you are through the project, and over time you refine what the finishing date will be given the emerging velocity.

> I might expect a dice roll to be 3.5, I'm not committing to the next roll being 3.5 - analysis should inform policy, in this case expectations informing stated commitments, but the two are not the same.

The analysis comes in discovering the velocity. The expectations evolve over time. But knowing your velocity is of limited use if you don't have an estimate of the overall size of the project.

> The difference is choosing to commit to an estimate you have high confidence in

This is the method for getting confidence in your estimate. You have an overall number of 'points' in the project and you learn how many points you can tackle on average every X weeks.
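The self-adjusting loop described in this comment can be sketched as follows. All numbers are invented, and `sprints_remaining` is a hypothetical helper, not a standard Scrum artifact:

```python
# Project remaining sprints from total points and observed velocity.
def sprints_remaining(total_points, done_points, velocities):
    """Use the average observed velocity to project the remaining work."""
    avg_velocity = sum(velocities) / len(velocities)
    remaining = total_points - done_points
    return remaining / avg_velocity

velocities = [12, 9, 15, 12]  # points completed in each past sprint
print(sprints_remaining(120, 48, velocities))  # → 6.0
```

Each completed sprint adds a data point, so the projection refines over time without anyone ever attaching hours to a story.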


>The person I was replying to sounds like they are on a bad team/project where people use their estimates to blame/finger point, and they are ascribing this to scrum as if the team wouldnt be doing this otherwise.

Every time you try and infer what I'm "really" saying or what "really" happened to me you get it completely wrong. Next time you do that just assume that you're wrong, it'll save us both time.

The blame/finger pointing on my projects wasn't really external (although in a different environment it certainly could have been). Developers themselves felt bad about missing their 'commitments'. The pressure/blame was largely self-inflicted.

Despite feeling bad the predictions were still consistently optimistic and still consistently wrong due to the environment the predictions were made in. It was a bug in the scrum process that led this to happen, but the team and management (and you, apparently) would rather assign blame to anything else other than a bug in their methodology.

>The analysis comes in discovering the velocity.

Velocity isn't a useful metric.

>This is the method for getting confidence in your estimate.

Except it doesn't work. It didn't work for us and it probably doesn't work for anybody else.

Confidence in estimates means treating risk and uncertainty as if it is real rather than sweeping it under the carpet, like it is in scrum.

Confidence means a prediction process that doesn't make developers feel guilty about being wrong, like it does with scrum 'commitments'.

Confidence means a prediction process that doesn't intentionally subject developers to groupthink and peer pressure by immediately putting them on the spot like scrum planning pt 2 does.

Confidence means that your estimation process itself should be mutable. Under scrum it is fixed and not subject to review (if you change it you're doing "Scrum-but" and that's a sin, according to scrum trainers).

Most of all, confidence means that you should be able to inject technical debt cleanup stories into the sprint that derisk future changes. Scrum says that's only allowed if the PO says it's allowed. The PO is not responsible for missed commitments though, so it's not their problem.


>* while he is actually writing the code

Yes. I can take time out to answer email. I can take time out to make estimates as soon as I get an estimate request. Doesn't have to be done in a meeting.

>(so not up front)

What the fuck is the point of an estimate that's not made in advance???

>not in a group setting but as an individual, so either one person estimating the whole thing or each person giving different estimates

The latter. Is that a problem?

>(third point same as first, dont want to estimate up front)

"Not up front" is not the same thing as "not 4 weeks in advance". I'd do it as soon as the PM needed it to do prioritization.

>must incorporate what is often called 'contingency'

If you think risk and contingency are the same thing you're an idiot. Risk is story A (e.g. upgrading dependencies) might take 0 hours or might take 4 weeks while story B (updating translations) is going to take 1.5 hours and it's really only going to take 1.5 hours.

Contingency is (for example) "let's make sure we have 4 weeks spare before doing story A".
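The distinction can be made concrete with a toy simulation: the risky story and the safe story might get similar single-point estimates, but their spreads are wildly different, and that spread is the risk. All distributions and numbers here are invented for illustration:

```python
# Toy Monte Carlo: one wide-uncertainty story plus one near-fixed story.
import random

random.seed(0)

def simulate_total(trials=10_000):
    totals = []
    for _ in range(trials):
        story_a = random.uniform(0, 160)  # risky: 0 hours up to ~4 weeks
        story_b = 1.5                     # safe: effectively fixed
        totals.append(story_a + story_b)
    return totals

totals = sorted(simulate_total())
p50 = totals[len(totals) // 2]            # median outcome
p90 = totals[int(len(totals) * 0.9)]      # plausible bad outcome
print(round(p50), round(p90))  # the gap between p50 and p90 is the risk
```

A single velocity number averages this gap away, which is the commenter's complaint: contingency pads the plan, while risk is the shape of the distribution itself.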

>(which is actually what the whole point of measuring velocity is for!)

No, velocity is about measuring how fast you're doing stories.

>and the final point - he doesn't want to have to commit to it

Yeah, because as soon as you start assigning blame for missing feature deadlines the technical debt dial gets ramped up to 11 and predictions become an exercise not in being accurate but in CYA.

An estimate about how long something is going to take can be wrong for many reasons that aren't the developer's fault - bugs in libraries, technical debt in dependencies, technical debt they weren't aware of and didn't create, team members disappearing, etc.

If you want developers to commit to things make sure it's things that they have full control over.


The tone of this post is uncivil, e.g. "If you think ... you're an idiot."


(replying here because I guess we've reached the maximum depth)

I am here assuming that you want to be able to try to measure your progress through the project (as I mentioned, this is the only thing scrum does for you). Both of you seem to be suggesting (don't insult me if I'm wrong) that this isn't the highest priority.

And no, velocity is there to make the whole system self-adjusting. If I put 3 points against a story, we use velocity to discover over time how long those 3 points take. This self-adjusts to incorporate contingency.

If you disagree with this then we simply disagree on what velocity is about. It doesn't make us enemies; we don't need to get super pissed off at each other.


I've seen the "you're doing it wrong" argument so many times (I applied it myself a few times).

Scrum is complex and not always possible to follow exactly, so this is to be expected but it makes me wonder, how many successful projects are out there that are following the true Scrum methodology?

My guess is that it's a few more than the classic waterfall but I still seem to see far more failure than success stories.


The very idea of a one-size-fits-all process is unrealistic IMO. Something will always be customised in practice.

Regarding success stories, it might be that process doesn't play such a critical role as long as solid engineering techniques are used and the team is competent.


If your team is competent and solid engineering techniques are being used, you already have a well working process. Forcing any methodology on this will likely result in a deterioration.

All those methodologies are for the less stellar programming teams, to get consistent results from them (and also, to a lesser degree, to make good and bad programmers work well alongside each other). Because you can't always get the best programmers.

If Scrum would only work well with good programmers, it would be next to useless.


Successful big waterfall engineering projects where waterfall is actually applied exist. Want to construct a bridge or a rocket, design a microprocessor? You are not going to do that with "stories".

It remains to be seen whether big Scrum engineering projects where Scrum is actually applied even exist. I can't think of one off the top of my head. I'm not even sure Scrum is well enough defined for us to be able to judge whether it is correctly applied or not. And it's yet another story to judge whether they are successful or not.

In the end it does not matter much. The theoretical vision that nobody ever uses has almost no interest if you are concerned with real world efficiencies.


> Successful big waterfall engineering projects where waterfall is actually applied exist.

You are engaging in equivocation.

> Want to construct a bridge or a rocket, design a microprocessor? You are not going to do that with "stories".

Nor are you going to use the software development methodology described as the waterfall method (you may use a physical engineering methodology that was among the inspirations for that software development methodology, but those are distinctly different things, with different specific practices, and different domains.)

> I'm not even sure Scrum is that well defined for us to be able to judge if is correctly applied or not.

Scrum is exquisitely well-defined, both as to what it involves, what it specifically excludes, and what it is neutral to, in the Scrum Guide. (There's lots of confusion between Agile, a broad approach which is not a specific methodology, and Scrum, a very-specifically-defined -- though by itself fairly incomplete, in that any implementation of Scrum needs lots of decisions on the things to which Scrum is neutral -- methodology.)


Ok, maybe I went a little far with the bridge, but today a microprocessor is far more similar to software than to a bridge (at least in some design phases, and now even in some maintenance phases). And a modern rocket also contains tons of software. And waterfall is similar enough to (at least non-software -- but in my view also software) engineering to even consider a direct equivalent for the bridge. Only, much as a description of a method is often not enough to see how it is properly used, the mythical "waterfall" where one phase begins strictly after the other never happens: there are all kinds of loopbacks -- even for the bridge -- and obviously if you try to remove the loopbacks things will get fucked up, but why would you try to do that? In real-world conversations, waterfall is used to designate software being developed with proper general engineering practices.

Scrum's origins are partly in manufacturing. There are some common points between certain aspects of software dev and manufacturing, especially if the software being developed can be iterated very quickly (but very few if that's not the case); yet at least in the real world (and maybe even in theory) Scrum is also mainly used to interact with other stakeholders. And given how that communication is performed, and its content, it might be better than the complete chaos where nobody is actually able to do the work they are supposed to do (a PM limited to vague ideas, no truly competent tech lead doing actual tech-lead work, lack of vision from management, and so on) and only very vague general ideas of what the software -- or more generally the whole product -- should do are ever emitted.

As soon as "serious" stuff starts to be involved, you need real, boring engineering, with functional analysis, requirements engineering, modeling, systematic testing or even partial proofs, etc. And you need it to structure communication between teams, and day-to-day work. And then, I don't expect Scrum or anything Agile to add any kind of value in such a context.

Now the theory of Agile and Scrum has evolved, because of criticisms, to a point where we are told that it actually does not cover the things that matter. That is bullshit retro-justification, now that the world is fucked up trying to make sense of how to use it. Here is the Agile manifesto:

> We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

> Individuals and interactions over processes and tools

> Working software over comprehensive documentation

> Customer collaboration over contract negotiation

> Responding to change over following a plan

> That is, while there is value in the items on the right, we value the items on the left more.

Engineering is mainly about "processes and tools"; of course "individuals and interactions" are also needed, but there is no need to oppose them (although I am not sure what the point of "individuals" is here; the authors might as well have said "oh and by the way be nice").

"comprehensive documentation" is critical in all kinds of domains, and now that software is everywhere it just makes no sense to declare your "preference" for "working software" over "comprehensive documentation". It is, again, even dangerous to oppose them.

Customer collaboration over contract negotiation: again, it is highly dependent on the field and the specific project whether it even makes sense to have a "preference" here or not.

"Following a plan" is what you do about how you organize your work when you use Scrum. There is no problem in studying the impact of a change at any time if proper engineering practices are used. Obviously, the cost can vary depending on various factors.

My conclusion about Agile and Scrum is that if you prefer all of that (the 4 Agile preferences, and the Scrum theater), you should seek projects that are suitable for the Agile preferences, and so poorly defined that Scrum is a plus. On my side, I'm just not seeking to work on chaotic projects -- on the contrary, I try to bring logical and more systematic practice where I feel that chaos reigns -- and I'm neutral about the Agile preferences; I prefer to choose projects on other criteria (mostly intrinsic interest).


> Engineering is mainly about "processes and tools"

And Agile does not avoid processes and tools; it recognizes that processes and tools must be specifically fitted to the particular team and context of work (Scrum, particularly, is a baseline set of processes and tools designed to serve as a framework for common contexts of software work -- it's intentionally incomplete, to avoid specifying too much and narrowing its scope of applicability.)

> "individuals and interactions" are also needed, but there is no need to oppose them

The need to oppose them comes from the authors' concrete experiences in the software world before writing the manifesto, where very frequently canned (often consultant-pushed) processes and tools were being adopted by management in shops without considering the dynamics of the existing team and the particular work being done. (One of the sad ironies of the Agile movement is that the "Agile" banner itself has become a tool for the same kind of thing.)

> "comprehensive documentation" is critical in all kind of domains

Yes, it is; the preference stated in the manifesto is, again, the result of concrete experience where projects were quite often focused on producing mandated documentary artifacts because there was a checklist and that was how "control" was exercised, but the documents required and delivered were often irrelevant to (and not consumed by, or updated to reflect changes resulting from, the process of) delivering working software.

> Customer collaboration over contract negotiation; again, highly dependent on the field and specific project if this is something where it makes sense to even have a "preference" or not.

This is intended specifically in the context of developing specific software requirements (and, really, it's more about the dev team pushing the customer to engage rather than provide hands-off requirements.)

The Agile Manifesto really deals with concrete problems encountered in particularly enterprise software contracting (but bad practices from the enterprise world were, at the time, getting exported to the rest of software development, so not limited to the enterprise world.)

> "Following a plan" is what you do about how you organize your work when you use Scrum.

Scrum, like most methodologies that attempt to implement agile values, focuses quite a lot on managing potential rapid change within the plan.


Well, I've got "concrete experiences" in the software world after the manifesto, where this has been interpreted as "fuck processes and fuck tools" (except those of Scrum, regardless of their applicability -- which does not cover the majority of projects, far from it), letting idiotic work continue to be done, now that we have a name for it. This is not better than the previous situation. Honestly, if some management is stupid enough to force badly suited processes and tools instead of letting (competent) teams choose better ones, I doubt they will suddenly see the light by reading the Agile manifesto. And again, in too many actual implementations, working software is not really an output of Agile processes... except now you don't even have a doc anymore. Actually, to get non-trivial "working software", good documentation is essential. You don't solve anything by declaring that you prefer "working software", especially when you are trying to fix a situation where the documentation is mandatory but poor. And guess what, the "client" also wants "working software"...

Scrum is what you do when you try to do software engineering without actually doing software engineering. It's insanely meta, and as explained in other comments, the improvements you get from its loop are too often meta ("we should evaluate more accurately"). I prefer to stick to the real thing, core engineering practices. Scrum attempts to fix situations where core engineering practices are misunderstood and used as constraints instead of as something essential to the development of a good product; but it is vain to try to fix such a situation by engaging key people even less in core engineering practices, and more in mundane discussions where the real problems are never addressed.


> Well, I've got "concrete experiences" in the software world after the manifesto, where this has been interpreted as fuck processes and fuck tools

Oh, yeah, that's definitely a problem. I don't think the Agile Manifesto is bad at all, but I think that, ironically, in application it suffers from the same problem it sought to address -- people looking for simple answers that can be applied without deep knowledge of context. The Agile Manifesto and the Agile software movement were themselves a strong reaction against that, but unfortunately the Manifesto (and tools from within that movement, like Scrum) gets applied by exactly the same process it was a reaction against (focusing on particular ways that process had manifested, prior to the Manifesto, in software development.)

> Honestly if some management is stupid enough to force badly suited processes and tools instead of letting (competent) teams choose better ones, I doubt they will suddenly see the light by reading the Agile manifesto.

Absolutely; the real audience of the Agile Manifesto is software development practitioners who have influence with management, and it's not really "new knowledge" so much as a concrete distillation of experience. The fundamental problem, I think, with Agile isn't that its ideas are bad; it's that the real problem it deals with isn't a problem of process/tools, or even of the meta-level approach to processes and tools, but a problem with the institutional organization and leadership of large entities that happen to be doing software projects, and how that manifests in those projects.

The agile movement has produced some new tools that can be applied effectively, largely in the areas that didn't really have the worst cases of the problems that motivated the movement -- because it's helped motivate and inspire a lot of efforts by people with decent engineering backgrounds at finding new ways of working.

But the kinds of organizations that were worst afflicted by the problems that the Manifesto set out to address are still the most afflicted by those problems, and what they've gotten out of it is a lot of new processes and tools that consultants will sell them, their management will blindly adopt without understanding the conditions which makes them useful, and thus they find all kinds of new ways to fail.

> Scrum is what you do when you try to do software engineering without actually doing software engineering.

Scrum is largely orthogonal to software engineering (presumably, people using Scrum on a software project will be doing software engineering within Scrum, but Scrum is not about software engineering.)

> It's insanely meta, and as explained in other comments, the improvements you get from its loop are too often meta (we should evaluate more accurately).

Scrum is designed to be very meta, true. And, yes, if you mistake Scrum for a complete process rather than a process framework, you aren't going to get much out of it beyond omphaloskepsis. (I'm actually not convinced that Scrum is particularly valuable, even as a framework, as anything more than a well-known starting point to develop an appropriate, context-specific work model.)


I agree that Scrum is dead simple, but that doesn't mean it delivers sensible estimates, or allows you to get somewhere with less effort than some other methodology. You might end up doing more (and worse) work because Scrum is trying to be too simple and linear, which I argue is the case in the post. But it's simple, I definitely agree.

Regarding development: my main point is that Scrum leans on agile methods such as XP (testing, CI, etc.), but it also sucks up the time necessary to do those things well. The time Scrum takes out of the devs' working hours would be much better spent on those.


>>> Scrum is intended to be the straightest line towards measuring your real progress on a project, and not much else

There's slightly more to it than that: it also encodes an assumption that you're working with a single fairly-tightly-integrated group (with synchronisation points at least daily). It's possible that this helps with estimation and scheduling -- it's a lot less clear that it helps get the best outcome in other respects.


I agree, it is often not the best approach. But many situations demand a well-defined approach to estimation, and although the OP tried to preempt this, he didn't provide an alternative.


I reckon most experienced coders can cope with estimation when it's justified (i.e. "can we realistically get this done before <specific, real and externally-imposed, deadline>? And if not, is there a useful subset we can manage?"). The bigger problems come when estimation isn't about keeping promises, but rather a part of some form of scientific management aimed at "getting velocity up".

There's also something of an uncertainty principle here -- more precision of estimation is possible, at the expense of increased expected timescales (partly due to padding, partly due to picking lower-risk approaches).


If it's being used to "get velocity up" instead of measuring velocity, then it's not being done right.

I personally think estimating projects is one of the most difficult things about this industry, especially if we're talking about delivering many calendar-months' worth of effort for a team, unless it's just a variant on some other project(s) the team is well experienced at.
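For what it's worth, the "measuring velocity" view from the parent comment can be sketched in a few lines: velocity is a descriptive average of points completed per sprint, used only to forecast the remaining backlog, never a target to push up. All numbers below are made up for illustration.

```python
# Velocity as a measurement, not a target (illustrative numbers only).

completed_points = [21, 18, 25, 19, 22]  # story points finished in the last five sprints

# Velocity: average points actually completed per sprint.
velocity = sum(completed_points) / len(completed_points)  # 21.0

# Forecast: how many sprints the remaining backlog will likely take.
backlog_points = 105
sprints_remaining = backlog_points / velocity  # 5.0

print(f"velocity={velocity}, sprints_remaining={sprints_remaining}")
```

The point of the sketch is that the number only means something if it's left alone as an observation; the moment it becomes a target, teams inflate estimates and the forecast degrades.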


Oh, and I have attended some of those expensive one-week Scrum courses and saw the darker side of that community -- it definitely has a cult following that gives it a bad name. But I've been to similar conventions around design patterns, object-oriented and (to a lesser degree) functional programming, so I think the community problem is not particular to Scrum.


Those with the loudest voices simply have the loudest voices, be they right or wrong.


The problem is communities.


interesting. Do you have an alternative in mind?


"Scrum is intended to be the straightest line towards measuring your real progress on a project, and not much else."

More like wandering in the desert, hoping you find the promised land.

Been thru scrum master training 3 times, been on many "agile" teams. I've never heard this rationalization. Rather, a common justification for "agile" was you always have a working product. Which might be nice if things worked out that way.

Also, PMI style critical path worked just fine for figuring out that "straight line".
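For anyone unfamiliar with the PMI-style critical path mentioned here: it is just the longest path through the task-dependency DAG, which gives the minimum schedule length. A minimal sketch (task names and durations are invented for illustration):

```python
# Critical-path method (CPM) sketch: the longest dependency chain
# determines the minimum project duration. Tasks/durations are made up.

def critical_path(tasks, deps):
    """tasks: {name: duration}; deps: {name: [prerequisite names]}."""
    earliest = {}  # earliest finish time per task

    def finish(t):
        if t not in earliest:
            start = max((finish(d) for d in deps.get(t, [])), default=0)
            earliest[t] = start + tasks[t]
        return earliest[t]

    total = max(finish(t) for t in tasks)

    # Walk back from the last-finishing task along predecessors that
    # lie on the longest path.
    path = []
    current = max(tasks, key=lambda t: earliest[t])
    while current is not None:
        path.append(current)
        preds = [d for d in deps.get(current, [])
                 if earliest[d] == earliest[current] - tasks[current]]
        current = preds[0] if preds else None
    return total, list(reversed(path))

tasks = {"design": 3, "backend": 5, "frontend": 4, "integrate": 2}
deps = {"backend": ["design"], "frontend": ["design"],
        "integrate": ["backend", "frontend"]}
total, path = critical_path(tasks, deps)
print(total, path)  # 10 ['design', 'backend', 'integrate']
```

Here "frontend" has slack (it could slip a day without delaying anything), which is exactly the kind of information the commenter says sprint-based tracking fails to surface.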

Scrum and "agile" democratized project management, empowering every poseur to claim expertise and ability. Whereas PMI required real effort to learn and master, Scrum flavored "self help" books can be flipped thru before you finish your coffee and then safely stored in plain sight on a book shelf, never to be touched again, allowing said poseur to claim the daily mutant chaotic dysfunctional mismanagement that they've always done is now "agile".


If you're objecting to people who treat scrum (or any project management tool) as a one-stop-shop that will cure all ills I agree with you, but nobody here is saying that.

If you are objecting to defining the scope as small tasks and measuring your progress through that over time, then continually re-evaluating this scope as requirements change, then I think you are not working in an environment that would benefit from this kind of tool.

It's just a pragmatic set of guidelines, and objecting to it with such ridiculous vitriol makes you sound as foolish as the people I think you're objecting to.


My goal is to ship products that people will buy and use. Scrum and "agile" have only been an impediment.

"with such ridiculous vitriol"

Emperor, little boy, no clothes. It's thankless work.

In opposition, defenders of Scrum et al. use the No True Scotsman fallacy. Because those of us who have tried and failed are just morons.


Considering that the failure rate of PMI-led software projects is even higher than that of agile projects, I really wouldn't hold it up as the way to go.


PMI (critical path) != waterfall. But then that's also said of "agile", which too often devolves to waterfall.

Project management is risk mitigation. In my experience, most "agile" projects have been risk amplifiers. Ironic.


I think Scrum is successful in organizations where there is a lot of finger-pointing and cynicism, and the engineering team is happy to measure progress in "sprints" in order to deal with upstream requests that they fundamentally don't respect, and possibly a product vision vacuum in which decisions are routinely made and reversed, making careful and judicious architecture impossible.

In such an environment, developers prefer to drastically over test their code, and to undertake work in manageable sprints that let management claim success and understanding even when neither exist.

If you already have a very strong product market fit, and you need to hire developers whose judgement you don't trust, or if there are extrinsic sources of timeline pressure (like investors or non-technical management who think developers are lazy... essentially anyone other than users or customers), then Scrum is perfect for your organization.

The other constituency that seems to love Scrum is product managers who either have no vision for the product or no control of the vision, and are essentially being asked to be cat herders and manage engineers without earning their respect or having any authority over them.


Re: daily standups. A lot of what the OP criticizes standups for is what I like about them. The content is only half of it. In my experience, it can be very easy for a team to cease feeling like one and instead become a loosely-coupled collection of independent contractors. 15-20 minutes a day of forced synchronous communication goes a long way to making you feel like you're actually on a team and therefore act like it, and is well worth the minimal time. This is especially true if the team isn't all in the same physical space. It can be disruptive, so scheduling it at the right time is important.

Of course, if you already have meeting overload, standup is going to feel like the worst.


I also like it when after the standup updates, most of us stick around and shoot the shit for 10 minutes. Good way to build camaraderie.


In my opinion, the good thing about Scrum is that you can tweak the rules to fit your needs, aka, "Scrum in name only".

The daily standup, IMO, should be only to remove impediments, and if you have none, then a sentence or two will suffice. I see the DS as the most useful meeting, as you are aware of what your workmates are doing.

And if Scrum is still a pain in the back, then you have Kanban, which is sort of Scrum without the straitjackets.


This kind of tweaking is frowned upon and you will be chastised for doing it.

Yes you can do it (and probably should when you understand the purpose of each Scrummy practice), but don't expect to be praised by Scrummers.


I've had people on my team frown at my tweaking.

I'm extremely happy to discuss the reasons I want to change something, but all too often the only counter-argument is "that isn't Scrum". I'm 100% fine with changing things, but I'm not going to do something if the only reason for it is religion.


That's nonsense, in my experience. Most people doing Scrum-like development are much more interested in a development process that works well and produces good results than they are in over-attachment to process.


This is also what I see the best-led companies do, not caring about what is Scrum and what it is not, and taking the pragmatist path. One thing I really regret at my current job is missing the Kanban workshop. It definitely appears to be much nicer.


If developers talk within their team, they should constantly be aware of what their workmates are doing.

If communication is a problem, having a daily standup just pretends the problem doesn't exist, rather than solve it.


> The daily standup, IMO, should be only to remove impediments, and if you have none, then a sentence or two will suffice.

100% this. The only thing a synchronous team-wide meeting is useful for is revealing a significant issue and getting a prompt and definite acknowledgement from the team. And then, if it's a priority, some help with the issue. But given the proper tool, even that feature can be made asynchronous.


IMHO, unfortunately standups tend to become status report meetings just for the sake of it (the article talks about control, and I think it nails it). I rarely see anyone bring a blocker. It's just "yesterday I did this, today I'll do this. Next!"

It gets worse if the team is actually 2-4 different teams with not much overlap (because companies tend to adopt these agile methodologies without much thought and it just keeps growing and including more people because.. it's nice, right?). Then you're ignoring (or not having a clue) about 90% of the meeting, and it's _daily_.
