
Emailing SaaS companies to test support time - steve-benjamins
https://www.sitebuilderreport.com/blog/how-long-does-it-take-saas-companies-to-reply-to-support-emails
======
travelton
I've worked with two startups now, both with low support volumes that grew
many times over after acquisition. In both cases, the acquisition brought on
an influx of new customers. It's simply bad business to scale your support
staff based on MRR in the long term.

When the tipping point between growth and headcount arrives, a dev is usually
assigned to figure out how to slow the flow of incoming requests. There are
many ways to do this, and some of the best companies have mastered it at
scale (see Google, Facebook, etc.).

Some ideas:

1\. Offer self-help/self-resolution documentation or tools (Gmail's self help
is quite good!).

2\. Categorize support requests, capture low-hanging fruit, deploy fixes
(sign-up deficiencies, UI bugs, etc); a rough sketch follows this list.

3\. Set up a status page to proactively notify customers of
outages/maintenance.

4\. Increase customer communication when large functional or UI changes
disrupt users.

5\. Automation/AI/Bots (Not easy, and hardly ever done right)

6\. Offer chat to increase interactions served by a single CS agent.
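
The rough sketch promised in item 2: keyword-based triage of incoming
tickets. The categories, keywords, and sample tickets are invented for
illustration; real buckets would come from your own ticket history.

    from collections import Counter

    # Hypothetical keyword buckets for bucketing incoming support email.
    CATEGORIES = {
        "signup": ["sign up", "register", "verification email"],
        "ui_bug": ["button", "broken", "error message"],
        "billing": ["invoice", "charge", "refund"],
    }

    def categorize(text: str) -> str:
        text = text.lower()
        for category, keywords in CATEGORIES.items():
            if any(k in text for k in keywords):
                return category
        return "other"

    tickets = [
        "I never received my verification email",
        "The save button is broken on mobile",
        "Please refund my last charge",
    ]
    print(Counter(categorize(t) for t in tickets))
    # Counter({'signup': 1, 'ui_bug': 1, 'billing': 1})
    # The biggest buckets point at the low-hanging fruit worth fixing.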

An interesting method that is often heavily debated... Only offer support to
paying customers. I think businesses can only get away with this if they
offer plenty of self-help documentation/tools and their core product is
relatively rock solid. Operating on a freemium model always brings
interesting challenges.

Edit: Formatting.

~~~
bartkappenburg
We implemented a 'smart' contact form. It was basically a decision tree based
on our FAQ (updating our FAQ also updated our contact form). People had to
choose the subject and answer some follow-up questions with radio buttons. At
the end (just 2 or 3 clicks) we showed them the answer from the FAQ. If that
didn't answer their question, they could type in their own question.

5^3 (5 choices per step, at most 3 clicks) makes 125 possible paths and will
cover most questions (if not all).

It made support questions drop by 90% with the same customer satisfaction, if
not higher.

TL;DR: People don't read FAQs. Make them pick the correct subject for
contacting you while showing answers to common questions. It will decrease
support questions by 90% (n=1 though).
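
A minimal sketch of how such a decision tree could be modeled. The structure
and answers are assumed for illustration, since the actual form isn't linked;
a real form would render each level as radio buttons rather than prompts.

    # Nested dicts form the tree: each click narrows one level, and
    # leaves hold FAQ answers. 5 options per level and 3 levels would
    # give the 125 paths mentioned above; this toy tree is smaller.
    FAQ_TREE = {
        "Billing": {
            "I was double charged": "Duplicate charges are refunded within 3 days.",
            "How do I get an invoice?": "Invoices are under Account > Billing.",
        },
        "Login": {
            "I forgot my password": "Use the reset link on the login page.",
        },
    }

    def walk(tree):
        while isinstance(tree, dict):
            options = list(tree)
            for i, option in enumerate(options, 1):
                print(f"{i}. {option}")
            tree = tree[options[int(input("Pick a number: ")) - 1]]
        print(tree)  # the FAQ answer; if it didn't help, show a free-text field

    walk(FAQ_TREE)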

~~~
davidandgoliath
Link would be handy! :) Promise I won't accidentally send any messages.

The few times I've seen this done were pitiful; Amazon's is easily the worst
of the bunch. But I'd love to see a good example, as it's something I'm
tempted to integrate.

------
gk1
As Nassim N. Taleb said, you wouldn't attempt to walk across a river that's
_on average_ only four feet deep.

If you look at the raw data (kindly provided at the bottom of the post), there
are tremendous variations for individual companies between two email attempts.

For example:

Formdesk: 987, 6

Get Response: 4, 1445

Appointy: 12, 1209

Two attempts just aren't enough to reach conclusions.
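
A quick illustration using those three pairs and Python's statistics module
(same units as the post's raw data):

    import statistics

    # The post's raw data: reply times for the two attempts.
    samples = {
        "Formdesk": [987, 6],
        "Get Response": [4, 1445],
        "Appointy": [12, 1209],
    }
    for company, times in samples.items():
        print(f"{company}: mean={statistics.mean(times):.0f}, "
              f"stdev={statistics.stdev(times):.0f}")
    # In every case the standard deviation exceeds the mean, so the
    # 'average' says almost nothing about a typical reply time.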

~~~
steve-benjamins
Author here: totally agree. Just added a note about this in the conclusion,
as I think it's worth pointing out.

There's definitely a limit to what kind of conclusions you can draw from my
data. Mostly I just thought it was interesting and worth sharing.

~~~
bravura
Sorry, this is just a pet peeve of mine:

But why post inconclusive data, draw potentially misleading conclusions, and
then hedge with "I thought it was interesting and worth sharing"?

The GP's argument is that people are liable to draw the wrong conclusions. So
perhaps the fact that the data are "interesting" is actually motivation _not_
to post them.

~~~
socalnate1
It's categorically silly to withhold inconclusive data just because you don't
have the time or ability to draw final conclusions from it. This is a major
issue with academic research today.

Transparency > statistical purity

~~~
tyre
This isn't a question of transparency with regards to the data, but publicly
drawing conclusions from statistically insignificant data.

Given the major issue of reproducibility in academic research, I'm not sure
"withholding inconclusive data" is a major issue with academic research today.

------
trjordan
The larger companies were probably slower because they have started
prioritizing their support tickets by how much you pay.

If you signed up for a $5,000 / month plan with all of them, then emailed
support, you'd have a different experience.

~~~
cedsav
As one of the better performing companies in this study, I can attest that
preserving the quality of our support as customer count grows is really hard.

But response time is just one, quite imperfect, metric. I wouldn't draw
conclusions from it alone. We don't prioritize support based on customer $
value, and in my experience, the support we get from our own vendors never
seems to be correlated with how much we spend!

------
pcora
While I find it interesting, I would be more interested in seeing these
companies' times when you are a paying customer. I expect faster responses,
but I wouldn't be surprised if some took ages...

~~~
NegativeLatency
Also, the range of times at which support was requested, while limited, is
still a broad window. For small or even some medium-sized companies there
could be huge variability in response time across it.

------
codezero
If you're interested in a similar post, StatusPage emailed 100 companies a few
months ago and wrote it up here: [http://blog.statuspage.io/customer-service-email](http://blog.statuspage.io/customer-service-email)

I'd really prefer to see the questions and the depth of the replies. Time to
respond can easily be lower-bounded by giving a shitty answer fast.

In my experience, customers are much happier with a response that takes
longer, if they: 1) know it will come and 2) know it will be thorough.

~~~
CaptSpify
AKA: auto-responding humans.

IMO, it gives you an insight into the support culture of the company. If they
treat support as "close as many tickets as fast as possible", then I'm gonna
have a bad time.

If they treat support as "Make sure the issue is solved, and the customer only
has to email once or twice", then I'm interested in doing business with them.

~~~
codezero
Yep, totally agree. The StatusPage post does come to the conclusion that
conditional auto-responders are very helpful.

That is, if you know you won't be able to reply quickly (weekend, late night,
company party, etc...), let the customer know so you can set expectations.
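
A minimal sketch of what such a conditional auto-responder could look like;
the schedule and wording here are made up for illustration:

    from datetime import datetime
    from typing import Optional

    # Only auto-reply when a human genuinely won't answer soon, so the
    # auto-reply stays meaningful instead of being noise.
    def away_message(now: datetime) -> Optional[str]:
        if now.weekday() >= 5:  # Saturday or Sunday
            return "Thanks! We're off for the weekend and will reply Monday morning."
        if not 9 <= now.hour < 18:  # outside business hours
            return "Thanks! We're offline until 9am and will reply first thing."
        return None  # within business hours: no auto-reply, just answer fast

    msg = away_message(datetime.now())
    if msg:
        print(msg)  # a real system would send this through the help desk's API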

------
zimbatm
At what time did these emails get sent out?

I wouldn't be surprised if overseas companies, for example, took longer to
respond simply because you reached them outside their office hours.

~~~
vamin
"I only emailed companies on a Monday or Tuesday morning during normal working
hours (I don’t think it’s fair to expect customer support at midnight)."

It's not 100% clear whether the author means his local business hours or the
company's.

------
bks
They should all be using
[http://www.leadactivate.com/](http://www.leadactivate.com/) \- it takes an
inbound email and places a call to one or more phone numbers, then uses
text-to-speech to brief the agent on the message.

Typically used for sales, but it reduces response times to about 20 seconds
for inbound emails and leads.
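
LeadActivate's internals aren't described beyond that, but the same
email-to-call idea could be sketched with a generic telephony API such as
Twilio. All credentials and phone numbers below are placeholders:

    from twilio.rest import Client

    client = Client("ACCOUNT_SID", "AUTH_TOKEN")

    def alert_agent(sender: str, subject: str) -> None:
        # Call the agent and read the email details aloud via
        # text-to-speech, so a human can respond within seconds.
        client.calls.create(
            to="+15550100",     # agent's phone (placeholder)
            from_="+15550199",  # your Twilio number (placeholder)
            twiml=f"<Response><Say>New email from {sender}: {subject}</Say></Response>",
        )

    alert_agent("jane@example.com", "Pricing question")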

------
FatAmerican
I would also expect the times to be shorter for paying customers who contact
them through established channels.

------
tonyedgecombe
I'm quite surprised how quickly people responded; I thought it would take
longer.

In my business I commit to one working day. Although issues don't usually
wait that long, I don't know how I could shorten it without having someone
dedicated to support.

------
troydavis
For any given SaaS company, the fastest response is probably not the optimal
result. At least 3 other variables are equally important:

1\. Quality. How thorough is the response? As the blog post says, "Plus email
reply time says nothing about the quality of support."

2\. Cost. Are customers willing to pay more for speed and/or quality (either
in dollars or because the company spends less elsewhere)? Up to what point?

Those 2 are the obvious trade-offs
([https://en.wikipedia.org/wiki/Project_management_triangle#.22Pick_any_two.22](https://en.wikipedia.org/wiki/Project_management_triangle#.22Pick_any_two.22)).
There's at least 1 more:

3\. Point of diminishing returns. Assuming a customer is willing to pay more,
what's the real impact of a longer delay or lower quality? How much speed or
quality do they actually benefit from?

Faster is only better when the test is only measuring speed. If you're reading
this post and thinking "Man, our team should respond faster!" \-- maybe they
should, but maybe those other factors mean they shouldn't.

To give three examples I've seen in support teams:

\- Company A has a big team of inexperienced staff sending fast responses.
They succeed on speed (and would show up at the top of this list), but
probably fail on quality. As a user, this is form letter hell.

\- Company B has a big team of very skilled staff sending fast responses, or
has developers or the whole engineering team task-switch when a question comes in.
They succeed on speed and quality but, depending on the product, probably fail
on cost. May also have lower quality, since customer support/service is a
skill that not everyone does well. As a user, this feels great. Sometimes the
person who coded the feature I'm asking about is replying to me. OTOH, if I
don't actually need knowledge that's unique to them, their time may be more
valuable working on that feature :-)

\- Company C succeeds on all of these, with a right-sized group providing
high-quality answers quickly (let's say 2 hours, not instantly). Alas,
customers aren't willing to pay for that. Their customers perceive diminishing
returns from answers in less than 24 hours, and/or the price premium they want
to pay for a faster (or maybe more thorough, or more clearly written..) answer
is very low.

Sounds obvious, but based on how often companies make one of these mistakes,
it's not. When one aspect is relatively easy to measure and the others are
hard or impossible to measure, the hard-to-measure ones tend to lose.

------
timlod
I think talking of an average when the sample size is 2 is misleading. I find
such a test (and its possible conclusions) interesting; I just wish it had
been carried out in a more rigorous way...

------
exolymph
It's interesting that big companies did worse. I guess because they tend to
have longer support queues?

~~~
onion2k
Or they have the technology to automatically prioritise support requests based
on the content of the ticket and customer value (eg whether you're already a
customer or not). A support request from a non-paying customer asking a
technical rather than a sales question might not be something they consider
important, especially if their metrics show that those users rarely convert.
A toy version of such a scoring rule is sketched below.
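
For illustration only; the weights and signals here are invented:

    # Toy priority score: weigh tickets by customer value and content.
    def priority(monthly_spend: float, is_customer: bool, text: str) -> float:
        score = monthly_spend / 100.0
        if is_customer:
            score += 5
        if "upgrade" in text.lower() or "pricing" in text.lower():
            score += 3  # likely-to-convert sales questions get a boost
        if not is_customer and text.lower().startswith("how do i"):
            score -= 2  # technical question from a non-customer
        return score

    # A non-paying user's technical question lands at the bottom of the queue:
    print(priority(0, False, "How do I export my data?"))   # -2.0
    print(priority(500, True, "Can we upgrade our plan?"))  # 13.0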

------
bluejekyll
Out of curiosity, why wasn't Salesforce part of this study?

~~~
steve-benjamins
No reason in particular. Just chose 52 companies relatively randomly (just
wanted them to be from the 5 industries I listed).

