

Tech Support - Onfire vs On Fire - teyc
http://rachelbythebay.com/w/2011/11/16/onfire/

======
daemin
That's a pretty neat visualisation of data, but of course it doesn't
represent the whole story. I could imagine that some people might've been on a
particularly long support call or something. But if all you're employing the
people for is to resolve tickets, then it's great.

Needless to say if you tried to apply this to programmers, by integrating it
into the bug tracking and source control systems, it would be gamed pretty
damn quickly.

~~~
rachelbythebay
Thanks! You raise a good point in that it doesn't show everything people were
doing. I should do a follow-up post to elaborate on this, but since you
brought it up, I will answer it right here right now.

The majority of techs were supposed to sit there, take calls, and work
tickets. When they were actually at work and not on a break, in the bathroom,
or on lunch, they should have been "Auto-In" on the phones while working on
tickets. If they finished a ticket and a call wasn't presented to them, they
should have immediately gone to the "unassigned" view to grab something new.

You're right that it could represent a long phone call. That kind of stuff
happens. I used to run these tools on myself and see a dip in my data. I'd
worry about what happened and then realize, "oh yeah, that's when some
Enterprise customer called up and needed hand-holding through fixing Tomcat".

The difference with a long phone call is that their idle time would then be
followed by a ticket being created or updated to document what happened in the
call. I had another view of the data which would let you spot this situation
easily. It's not evident on the graphical view shown in this post.

Also, long phone calls were relatively infrequent. Most issues could be
resolved in a couple of minutes. "Please unlock my control panel" is short.
"Your hardware just took us down again and we are hopping mad" is long. Most
of the calls resembled the former and not the latter, fortunately.

It's when you see the same people, day after day, with these l-o-n-g gaps and
no ticket updates to suggest a phone call that you start wondering. After long
enough, you realize there are some who are just not pulling their weight.

~~~
teyc
Here's a product idea for you Rachel: An activity dashboard.

~~~
rachelbythebay
I'm not quite sure what you're referencing -- an activity dashboard for a
ticketing system, or something else using the same basic technique?

If you meant the ticketing system, you could see "live" status on techs in
that system by just asking to see the "active queue" page. If they had tickets
assigned to them, it would be obvious. I don't work there any more, though.

If not that, then I am not quite sure what it would be. Please advise if so
(or contact me through my site, URL in profile) - it could be interesting!

~~~
teyc
The charts on your blog are gold. There may be people who'd be prepared to
pay for that?

Core value prop:

Managers are always very curious about where they stand on performance.

Provide a benchmark service so that organisations can see how they are
performing relative to their peers. For an organisation with 4 employees
(budget $400k), $1k per year for a benchmark report would probably pass
muster. (keyword research "service desk metrics" - I think you can win this
space)

How to go about this:

Integrate with FreshDesk, ZenDesk, and other ITIL-style systems. Anonymize
the data and segment it by industry.

Alternate value prop:

Enable tighter feedback loop for line managers and provide them with
actionable ideas (intervention, escalation, rotation, dispatch, pair).

Other possibilities:

Pipe activity to a TV screen in the service desk area. There's an overlap with
existing software, so you'd do it only if you could provide more value than
other software can. For example, you can gamify it (though not my preference).

Provide useful reports for end of year employee review.

When badly done, benchmarks can create the wrong incentives. However, you
have a lot of text data that can be used to gauge the complexity of a task.
Maybe there's something in there.
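One crude way "gauging complexity from text" might look, purely as a
hypothetical sketch: weight tickets by length and by escalation-flavoured
keywords, so a benchmark counts "your hardware took us down" for more than
"please unlock my control panel". The word list and scoring are invented for
illustration.

```python
# Hypothetical "hard ticket" vocabulary - illustrative only.
HARD_WORDS = {"outage", "down", "escalate", "corrupt", "tomcat"}

def complexity_score(ticket_text):
    """Rough complexity score: longer text and more 'hard' keywords
    push the score up. Not a real metric, just a starting point."""
    words = [w.strip(".,!?").lower() for w in ticket_text.split()]
    keyword_hits = sum(1 for w in words if w in HARD_WORDS)
    return len(words) + 10 * keyword_hits
```

A benchmark could then normalise tickets-per-hour by average complexity
instead of treating all tickets as equal.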

