
Ask HN: What's your most useful Kanban system for small software teams? - EduardMe
As a small software team we started using Kanban with Trello a while ago, using a relatively simple structure. It seemed simple enough in the beginning, but we quickly ended up abandoning a chaotic board and falling back to even more chaotic Slack chats across multiple projects.

But I like the idea of Kanban in general, so I want to find out what kind of systems and workflows work best for you.

A few pain points:
- Where to collect feature ideas and bug reports, without making too much noise?
- Where to put stuff that is not immediately required?
- Where to put the burning stuff that needs attention, without rating everything as "important"?
- What to do with those cards that never seem to get done?
- How to write a "good" card that is easy to scan?

One simple example we are using:

Backlog -> Collection of features, may or may not do
Buffer -> Planned features, want to do
Doing -&gt; Currently implementing features
Test & Review -> Implemented and needs testing
Done -&gt; Implemented and tested
Trash -> Not relevant anymore

Then once a month or so we archive whatever is left in Done and Trash.

Another template which worked for us:

Later -> Planned for some time later
Soon -&gt; Planned soon, when current tasks are finished
Doing Now -&gt; Currently implementing
Done -&gt; Implemented and tested
Inbox (for Review) -&gt; Feature collection
Trash -> Not relevant anymore

"Inbox" gets every feature suggestion, internally and externally via feedback from users. We then review it once a week and move it to Later if we decide to do it, but only if there is not already too much in Later. Then cards gradually move from Later -> Soon -> Doing Now. And again, whatever is in Done and Trash is archived once a week or month.

What systems and workflows work for you?
======
diegojromero
Besides what others have commented, I'd like to point out some more metrics that
help you model your board:

\- Task classes.

\- Lead time.

\- Cycle time.

\- Number of backward movements for any list.

\- Cumulative card evolution.
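
To make the lead-time and cycle-time metrics above concrete, here is a minimal Python sketch. The card history is hypothetical (list/timestamp pairs of the kind you could reconstruct from Trello card actions); the list names mirror the OP's board:

```python
from datetime import datetime

# Hypothetical card history: (list_name, entered_at), oldest first.
card_history = [
    ("Backlog", datetime(2017, 3, 1)),
    ("Doing",   datetime(2017, 3, 6)),
    ("Test",    datetime(2017, 3, 9)),
    ("Done",    datetime(2017, 3, 10)),
]

def lead_time(history):
    """Lead time: card creation (first list) until it reaches Done."""
    created = history[0][1]
    done = next(t for name, t in history if name == "Done")
    return (done - created).days

def cycle_time(history, start_list="Doing"):
    """Cycle time: work actually starting until the card reaches Done."""
    started = next(t for name, t in history if name == start_list)
    done = next(t for name, t in history if name == "Done")
    return (done - started).days

print(lead_time(card_history))   # 9
print(cycle_time(card_history))  # 4
```

The distinction matters: a long lead time with a short cycle time means cards sit in the backlog, not that the team is slow.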

There are some good books about this subject:

\- David J. Anderson's Kanban ([https://www.amazon.com/Kanban-Successful-Evolutionary-Techno...](https://www.amazon.com/Kanban-Successful-Evolutionary-Technology-Business/dp/0984521402/ref=sr_1_1?s=books&ie=UTF8&qid=1490549174&sr=1-1&keywords=kanban+david+anderson))

\- Actionable Agile Metrics for Predictability: An Introduction ([https://www.amazon.com/Actionable-Agile-Metrics-Predictabili...](https://www.amazon.com/Actionable-Agile-Metrics-Predictability-Introduction/dp/098643633X))

Shameless plug: check out my personal project DjangoTrelloStats, which measures
these metrics for you: [https://github.com/diegojromerolopez/django-trello-stats](https://github.com/diegojromerolopez/django-trello-stats) (it also includes a Trello synchronization tool).

~~~
EduardMe
Wow, I never thought of metrics/stats tracking in Trello! I just had a quick look
at the GitHub repo; does it also work under macOS? I saw Linux `sudo apt-get
install` setup commands.

------
olegious
A few things here:

1\. In Kanban it is very important to have limits for the number of cards you
have in every column. I find that a column becomes useless if there are more
than 10-12 cards.

2\. Given #1, forget about your "Backlog" column; it shouldn't be in the same
daily view that covers the validated features you know you will build or the
things you're working on now. You may even want to take the radical step of
not keeping feature requests at all: you'll find that the important ones come
up again and again, and if they don't, they're not important.

3\. I would suggest that your "Buffer" column contain only a prioritized set
of features that you know you'll actually build, because you've already
validated them or they are lightweight "this is what we're building to validate
hypothesis/idea X" tasks. Once again, don't have too many cards here; a long
list is difficult to prioritize.
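
The WIP limit in point 1 is simple enough to check mechanically. A minimal sketch, with a made-up board state and limits (the 10-12 figure above, with "Doing" kept tighter for illustration):

```python
# Hypothetical board state: column name -> list of card titles.
board = {
    "Buffer": ["feature A", "feature B"],
    "Doing": ["feature C", "feature D", "feature E"],
    "Test & Review": ["feature F"],
}

# Per-column work-in-progress limits.
wip_limits = {"Buffer": 10, "Doing": 2, "Test & Review": 5}

def over_limit(board, limits):
    """Return the columns that currently violate their WIP limit."""
    return [col for col, cards in board.items()
            if col in limits and len(cards) > limits[col]]

print(over_limit(board, wip_limits))  # ['Doing']
```

A violating column is a signal to pull nothing new in and instead help finish what's already there.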

Let me know if you want any more help, I love working on process problems.

~~~
EduardMe
Thanks for the tips! A card limit is something I only discovered recently, and
it's a great suggestion.

I also like the idea of having the feature requests in a different view or
even a different system. I used to collect all features and bugs inside a
note-taking app on my Mac.

This worked well until I hit 100 or so items. It's true that the important
things come up again and again, but sometimes it's a small group of very noisy
users who skew the true numbers. Or it comes from people using your trial who
are not your ideal customers. So I would like to track how many users, and
which ones, are requesting something, just to avoid biases and loud users with
their edge cases. What's your experience with this?

Regarding the Kanban columns: do you know other column systems and workflows
that might be more suitable?

Another point we missed earlier is reviewing the cards and throwing away
stale ones.

~~~
olegious
I think you always have to focus on whatever top-level goal/metric/OKR
you're currently pursuing, and then define and test hypotheses that you think
will get you there in the fastest way possible. Think of it this way: if
your current goal is to improve metric X by Y%, then you should only focus
on ideas, hypotheses and feedback that can get you there, and ignore
everything else.

In terms of columns, I like simplicity: Backlog -> Development -> Test -> Done

Depending on your workflows, between "Test" and "Done" you can have a "To Be
Deployed" column.

This assumes that all features in the Backlog have been validated. My process
there is:

OKR (metric) -> Hypothesis (how will I meet this OKR?) -> Assumptions (what
risky assumptions am I making in the hypothesis?) -> Experiment design (how
can I prove/disprove the assumption?) -> Experiment -> Review the outcome and
either discard the hypothesis or build it out to production level. In this
workflow, items make it into the Backlog at the Experiment and "build out"
stages. So you come up with an experiment that may require engineering effort:
that's a card in your backlog. Your experiment validates your hypothesis and
you want to put more resources into it, which creates more cards to fully
build out the feature.

edit: I always liked this post by UserVoice because it shows you how far you
can take these things. I wouldn't necessarily use their workflow, but it is
interesting: [https://community.uservoice.com/blog/trello-google-docs-prod...](https://community.uservoice.com/blog/trello-google-docs-product-management/)

~~~
EduardMe
Thanks for the link! Loved the point about capturing user stories:

> Good specs tell the (customer) and business story rather than act as an
> implementation recipe

Reading about the customer's pain in their own words instead of working from
some feature description is so much better. It gives the whole thing a purpose
and doesn't feel like an arbitrary, unvalidated idea.

About the metrics: sorry, I think I used the wrong word here. I meant some
prioritization scoring, such as the impact and effort of a feature or bug, to
make cards more comparable. I tried that once, but found that most of the time
you already know which items are high impact and low effort. I'm not sure if
one should score everything and then strictly work down the sorted list.

Edit: Or you start working on something that is supposed to be high effort,
but after re-reading the customer stories you find a far more efficient and
easier way to implement the requested functionality. Then the priority was
incorrect the whole time.
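
The kind of scoring I mean, as a tiny sketch with entirely made-up cards and 1-5 impact/effort scores:

```python
# Hypothetical cards with impact and effort scores (1 = low, 5 = high).
cards = [
    {"title": "dark mode",     "impact": 2, "effort": 4},
    {"title": "fix login bug", "impact": 5, "effort": 1},
    {"title": "export to CSV", "impact": 4, "effort": 2},
]

# Rank by impact-to-effort ratio, highest first; ties broken by raw impact.
ranked = sorted(cards,
                key=lambda c: (c["impact"] / c["effort"], c["impact"]),
                reverse=True)

print([c["title"] for c in ranked])
# ['fix login bug', 'export to CSV', 'dark mode']
```

Which is exactly the worry: the ranking is only as good as the scores, and the scores shift as you learn more (as in the edit above).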

~~~
olegious
Regarding scoring: in the ideal scenario you're always working on the cards
that are most aligned with your top-line goals, so you always want to
prioritize based on your goals.

For level-of-effort scoring, we just use a 1, 2, 3, 5, 8 system, where 8 is
too big to estimate and 1 is an hour or so. What happens if you misestimate?
Unless something goes from 5 to 8, it really doesn't matter, because you
should always be working on what's most important. If something goes from a 1
to a 5 and some commitment can't be met, then you must reduce the scope of the
card or change the commitment. Don't waste too much time on the scoring; it
isn't an exact science. Spend more time ensuring you're working on the right
things.

~~~
EduardMe
Ok, understood. We mostly don't use any estimates, neither effort nor time.

I have changed our boards following the main recommendations so far, and now
I've hit a wall. Our QC is testing a new internal release and reporting back
the issues. But they create a single card in the "Test" list containing a
checklist with all the potential bugs they found, around 10-12 items.

Should they instead create individual cards for each issue and push them into
the inbox? I feel it somehow goes against the system to summarize a dozen
points in a kind of "Test Report Card".

~~~
olegious
Typically bugs should go on individual cards. Yes, that will create many
cards, but if they're small, they'll be fixed quickly.

This is more of an art than a science, experiment with different approaches,
read about how others are doing things and modify to match your own process.

~~~
EduardMe
Ok, this makes sense!

------
loumf
We do pretty much the same on our team with only items for the current and
next release on the board. All other items that might be done are on a backlog
board with lists for bugs and features by priority. We build a new release
from that board.

So, left to right on the main board: Inbox, v.nextNext, v.next, In process,
Review Requested, In Testing, Done.next, Done.nextNext

Done lists get moved to a "Releases" board when the version is released.

Inbox is triaged once per week to somewhere on the Backlog board, or to a
current release only if it's urgent or a regression introduced by those
releases (e.g. the bug was found in a beta).

The key to using Kanban, IMO, is to keep the number of cards pretty low. Use
pull to bring cards in when lists get empty -- do not just keep adding.

~~~
EduardMe
Thanks!

I can see a somewhat different approach in your workflow, which I like. In our
lists we don't track specific releases; the cards just continuously move up
the chain. In your approach you track releases separately from the backlog.

I think the last part you mention, keeping a card limit, is what broke our
system from time to time. And reviewing is probably very important, so stale
cards are not kept around for no reason.

Is there a special way you organize your backlog board, such as sorting it by
themes of your application, urgency, etc.? And do you keep track of whether
something was an external user request (and from how many users) vs. an
internal request (your own ideas), or other metrics? Do you find attaching
estimates helps, such as effort + impact?

~~~
loumf
> Is there a special way you organize your backlog board?

One list for each priority/case-type pair, so High-Bug, Medium-Bug, ...
High-Feature, Medium-Feature, etc.

> ... external user request

If support made the card, they attach the case. No reporting on this though,
it's mostly for followup -- when the card is moved to a Done, we can let the
customer know what release it will be in.

> estimates

We do a very high-level estimate bucket. Mostly to size the release.

~~~
EduardMe
I like the idea of attaching the case, not only for following up, but also
because it documents the whole reason why something should be implemented and
shows the developers that there is a real person with real pain behind it.
Furthermore, reading the original case sometimes helps in finding more
efficient solutions.

> high-level estimate bucket

What does this bucket look like? Something like `within 6 months`, `3 months`,
`this month`, or more effort-oriented like `high effort` (takes long),
`medium effort`, `low effort` (quickly implemented)?

~~~
loumf
Effort, in days. Cards estimated at more than 10 days get broken down.

------
elsasze
This is how we manage workflow in our 5-person team:

// Slack // general discussions, brainstorm, clarifying

// Agora (agora.co) // capture ideas on slack using a slash command

// Gitlab // we export ideas from the Agora web interface, using the Board
view (similar to Trello/kanban)

We organize our board on Gitlab based on these columns: Backlog > Sprint (for
the week) > Development (in progress) > Pull Request > Merged > Test > Prod >
Done

Hope that's helpful!

------
amk_
These are the concepts I associate with Kanban:

\- _Sprints/Cycles_:

A period of time, usually 2 weeks, where the scope of work is fixed.

\- _In Progress List_

A column of things actively being worked on.

\- _Backlog w/Work-in-Progress Limit_:

The backlog is a prioritized list of everything you are going to tackle _in
the current cycle_.

Some people would argue that if you don't have a work-in-progress limit, you
aren't really doing Kanban (maybe you'd call it a Scrum board or something
else instead). Estimate-aware tools like Pivotal Tracker and JIRA enforce this
by having you set a "point velocity" for your team and requiring story point
estimates for every feature. The cycle backlog automatically overflows into
the next cycle when you exceed the velocity.

\- _Icebox_:

An _unprioritized_ list of features from which you draw to create the backlog
at the start of a cycle. Pivotal calls it the Icebox; in GitHub/GitLab it's
the main issues list; in JIRA it's wherever you set up your board to "pull"
issues. In less-structured tools like Trello you have to decide where to keep
this list; you might make a separate board to keep track of things needing
validation, needing designs, etc. and pull them into the backlog periodically.

The general idea of Agile is to keep this list short.

\- _Done/Ready_

A place to keep delivered tasks.
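
The velocity-based overflow described under the work-in-progress limit can be sketched roughly as follows. The backlog, story points, and velocity here are all hypothetical; the point is just the mechanic of filling a cycle in priority order and letting the rest roll over:

```python
# Hypothetical cycle backlog: (title, story_points), already prioritized.
backlog = [
    ("user auth",   5),
    ("search page", 3),
    ("CSV export",  2),
    ("dark mode",   3),
]

VELOCITY = 8  # points the team historically completes per cycle

def split_by_velocity(backlog, velocity):
    """Fill the current cycle in priority order; overflow rolls to the next."""
    current, overflow, used = [], [], 0
    for title, points in backlog:
        if used + points <= velocity:
            current.append(title)
            used += points
        else:
            overflow.append(title)
    return current, overflow

current, overflow = split_by_velocity(backlog, VELOCITY)
print(current)   # ['user auth', 'search page']
print(overflow)  # ['CSV export', 'dark mode']
```

Tools like Pivotal Tracker do essentially this automatically once you feed them estimates and a measured velocity.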

    
    
--------
From personal experience, there are a few things that help keep a Kanban
system under control:

\- Specific, deliverable tasks. Tasks should be deliverable as a chunk. Group
subtasks into things that are delivered together (reviewed, tested, deployed).
A card should never be resurrected from the Done pile once it's there. Use
epics, tags, or labels to keep track of related cards if needed.

\- Tasks should have clear acceptance criteria so they can be reviewed swiftly
and developers can move onto the next item on the backlog. Fuzzy criteria lead
to back-and-forth and context-switching which hurt productivity.

\-- Automated testing, code coverage checkers, and linting speed up review.

\-- Feature flags make it easy to break up large projects into independently-
mergeable chunks, which improves task transparency.

\- Track bugs in a separate project/column from your Icebox, but make sure
they are added to the in-progress column when addressed. Make it easy for
non-devs at your company to add bugs to the list, and aggressively dedupe.

\- Have some way of indicating task state beyond using columns for everything.
Every tool provides tags; some have options more tailored to dev teams. Here
are some things that you might want to track:

    
    
      -- Review wanted
      -- Accepted/Rejected
      -- Blocked (ideally, with a link to the blocker)
      -- Needs design
      -- Bug vs Feature
      -- Story points/weight
      -- Priority (for bugs)
    

\- Prefer tools that let you easily assign tasks right from the board view,
especially to _yourself_. GitHub-based tools are really bad at this for some
reason and require lots of clicks to assign people. Asana and Pivotal are good
at this.
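
As a rough illustration, state tags like the ones listed above stay useful when you can filter on them mechanically. A sketch with made-up cards and Trello-style free-form labels:

```python
# Hypothetical cards with free-form state labels, as in Trello.
cards = [
    {"title": "payment flow", "labels": ["Review wanted"]},
    {"title": "search index", "labels": ["Blocked", "Bug"]},
    {"title": "onboarding",   "labels": ["Needs design"]},
]

def with_label(cards, label):
    """Return the titles of cards carrying a given state label."""
    return [c["title"] for c in cards if label in c["labels"]]

print(with_label(cards, "Blocked"))        # ['search index']
print(with_label(cards, "Review wanted"))  # ['payment flow']
```

This is why actionable labels ("Blocked", "Review wanted") beat categorization labels: the filtered list tells someone what to do next.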

I suggest at least playing around with Pivotal Tracker to get a sense of what
a very opinionated Agile/Kanban process looks like. You could keep using it or
take some of the lessons back to Trello, Asana, JIRA, etc.

~~~
EduardMe
Thanks!

> Backlog w/Work-in-Progress Limit:

I think limiting the items inside lists and having cycles with limits is a
good point. Often I feel overwhelmed by the number of ideas we have. I feel
most things will never be implemented and just add to the noise. It surely
"looks" good to have many cards, presented with many different colors and
images, but it really doesn't help to sort things out and do the most
impactful work first.

> Icebox

I like the idea of the Icebox. I guess that's similar to the "inbox" concept,
where you throw in unvalidated requests, ideas, reports, etc.

> Tagging

Internally I once suggested we should tag by impact and effort score (reddish
tones for impact and bluish for effort), but this quickly escalated into a
rainbow of cards, just adding to the clutter. And it feels like our product
manager decides entirely from his gut feeling. Maybe tagging is just noisy in
Trello.

What's your experience, can you "over-tag" cards? Does it need a limit? Or
does it work better in say Pivotal Tracker?

~~~
amk_
The tags you are describing are useful for planning but they become noise for
developers in their day-to-day interactions with the board. Maybe you could
use a custom field on your cards or a bracket [3] notation for effort? You
could replace the impact tag by stack-ranking your cards in your Inbox/Icebox
project.

In general, I prefer to use tags to highlight actionable things like "this is
a high priority bug" or "this is ready to deploy" or "this is blocked" more
than categorization, which can be better accomplished with separate boards. If
your system has the concept of "epics" to group issues, that's a plus as well.

PS: you might want to take a look at this article if you're having difficulty
prioritizing things: [https://medium.com/startup-grind/ruthless-prioritization-e42...](https://medium.com/startup-grind/ruthless-prioritization-e4256e3520a9#.tummgadw8)

~~~
EduardMe
Yep, that's much better and thanks for the article!

