
Don't build growth teams - jasim
https://conversionxl.com/blog/dont-build-growth-teams/
======
encoderer
He doesn't directly address that relentless A/B testing can be a pathology
that removes the soul and flexibility from your product and can wall in future
growth. This is because you have to optimize on one (or a select few) KPIs,
but the true health of your business is more nuanced than that. Large
analytics operations try to solve this by creating ever more complex KPI
models, but that is not attainable for most.

As a crude example, if you have a community site like Reddit, you could harm
the community by over-optimizing your conversion funnel for Reddit Gold while
still substantially growing the Reddit Gold business. For a time.

He touches on this by admitting that the super-optimized KISSMetrics homepage
was not the right strategy, but I would have loved to read more.

~~~
everythingswan
I believe there will be plenty more stories like this coming out and landing
well. Some of the testing I've managed hasn't helped end-to-end conversion
rate _at all_ and did nothing for short-term results, even when it looked
great at first glance. I feel pretty jaded about "performance marketing" and
"growth marketing" at this point. There are legitimate growth marketers out
there, but it's become a cop-out for The Hard Things in marketing.

Essentially, it's become really easy to get leads for cheap and incredibly
expensive to get them to convert. The question becomes, which master do you
serve?

I think marketing leaders need to delicately balance the KPIs with the vision
of the company or product. In High Output Management, Andy Grove talked about
negative indicators to pair with KPIs as well (forgive me, I don't have the
book within reach). I don't think any time gets spent on that in my network.
Granted, it is hard to do with small teams, and you might say it's the wrong
thing to focus on for small teams. Eventually, you reach the point where the
numbers look too good to be true, and they usually are--meaning you won't
convert these people because they're not ready to buy what you're selling.
They're just ready to get the free info you're offering them in two clicks of
their time.

I appreciate the candidness about failure even if it wasn't detailed. Can't
wait for more similar stories.

------
tacone
The issue is that hyper-optimization is very easy to break and very hard to
maintain. In a way it's an anti-pattern you should resort to only if there's
no other way to grow.

That is: if you want a car to go faster, you may improve the engine, or focus
on making the car lighter. If you hyper-optimize the engine, the smallest
variation in the fuel will affect performance. If you hyper-optimize the
weight, you'd better watch your own diet :) or the car won't be as fast as
intended.

All in all, it often goes back to following the Pareto principle. If you need
to hyper-optimize your landing pages, you probably need to focus elsewhere.
You need good landing pages, not perfect ones.

~~~
aaronblohowiak
You are approaching a discussion of one of my favorite topics, "adaptation vs
adaptability" [1]. Basically, there is a spectrum between perfect efficiency
and being completely flexible. We see this over and over again, from software
(where flexibility might show up as generality or abstraction) to
organisms/ecosystems to statistical methods.

[1] [https://people.clas.ufl.edu/ulan/files/Conrad.pdf](https://people.clas.ufl.edu/ulan/files/Conrad.pdf)

~~~
celticmusic
It's the basis of evolution in general. An organism can be perfectly optimized
for an environment, but when that environment changes it may go extinct.

So instead we don't necessarily want perfectly optimized; we want good enough,
because inside that "good enough" is where adaptability to change comes from.

~~~
aaronblohowiak
Yes, and the article I linked to explores this in terms of ecosystems and the
flows of material and energy between different organisms within the ecosystem.

------
vincent-toups
"Take 95% certainty compared to 99%. Because 95 is pretty close to 99, it
feels like the difference should be minimal. In reality, there’s a gulf
between those two benchmarks:

    
    
        At 95% certainty, you have 19 people saying “yes” and 1 person saying “no.”
        At 99% certainty, you have 99 people saying “yes” and 1 person saying “no.”
    

It feels like a difference of four people when, in reality, it’s a difference
of 80. That’s a much bigger difference than we expect."

I'm a data scientist, and I feel like this is a wildly confusing way to put
this.

~~~
CoffeePython
Yeah, I had to read this over 3 or 4 times before I understood exactly what
the author was saying. A very unintuitive way of putting it.

~~~
oogali
Can either of you kindly help us understand in layman’s terms? (Or point to an
informative URL resource?)

~~~
jbattle
The way it makes sense to me is to invert the numbers you are looking at.

95 & 99 are very 'close together'.

But if you look at 5 & 1, the first number is 5 times the second. Huge
relative difference.
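
A minimal sketch of that inversion in code (mine, not the article's; the
yes_per_no helper is just for illustration, and it takes "certainty" at face
value as the share of yes answers):

    # Convert a certainty level into "yes answers per single no answer".
    def yes_per_no(certainty: float) -> float:
        return certainty / (1 - certainty)

    for c in (0.95, 0.99):
        print(f"{c:.0%} certainty -> {yes_per_no(c):.0f} yes for every 1 no")
    # 95% certainty -> 19 yes for every 1 no
    # 99% certainty -> 99 yes for every 1 no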

~~~
around_here
Sure, but the deal is that at very small numbers, a massive change in %
difference doesn't translate into a massive absolute difference. It's
_entirely_ dependent on context. The raw stats are meaningless.

------
Ididntdothis
“9. Growth teams have limited revenue potential.”

This one is important for almost all optimization efforts. Once the
low-hanging fruit is gone, further efforts often don't produce much and can
cause harm instead. Stack ranking is a good example. It may make sense to
identify and lay off the bottom 10% for one or two years. Then you should
stop. But if you keep going, you get all kinds of weird dynamics that harm
morale instead of improving performance.

------
KaoruAoiShiho
Can someone shed some light on the homepage that the executives hated, which
was discussed? Since KISSMetrics is an analytics company, doing this kind of
thing is their core competency. How could executives immediately replace it
with a loser? There must be more to the story, or some other side to it.

Shooting from the hip here, possible problems with the "overly optimized"
homepage:

1. Long-term brand damage? If people go to the site and see something not as
"polished" or "professional" looking, the bounces won't leave with just a
neutral impression but a straight-up negative impression. Sometimes, even
without a conversion, a landing page can sell a visitor on a company and lead
to recommendations or return visits down the line.

2. Lower-value conversions? Is it possible that people who would immediately
sign up without a long sales page end up being less valuable? This is more
easily tracked, so I'm sure OP would've accounted for it, but it's still hard
to tell without a ton of data.

~~~
mlyle
This is really important, I think, and overlooked.

A/B testing micro-optimizes an outcome you're searching for. It doesn't tell
you the long-term trajectory of customers, because that takes way too long to
get feedback on. An example: "dark patterns". Facebook is deeply damaged, in
part by making choices from well-run A/B trials.

Worse, A/B testing searches out local maxima. Maybe you get locked into an
approach that precludes you from making the changes that would ultimately
really drive the visit.
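
A toy illustration of that trap (mine, not from the article): greedy,
one-small-change-at-a-time optimization on a made-up conversion landscape
settles on the nearest peak and never finds the better one.

    # Hypothetical landscape: a small peak near x=2, a bigger one near x=8.
    def conversion(x: float) -> float:
        return 0.10 * max(0.0, 1 - abs(x - 2)) + 0.25 * max(0.0, 1 - abs(x - 8))

    x, step = 1.5, 0.1  # current design, size of each tested tweak
    while True:
        up, down = conversion(x + step), conversion(x - step)
        if max(up, down) <= conversion(x):
            break  # every nearby variant now loses its A/B test
        x = x + step if up > down else x - step

    print(round(x, 1), conversion(x))  # stuck near x=2, never reaches x=8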

Performance indicators and properly using quantitative information are really
important-- and many executives aren't versed in this. But, conversely, you
can't use A/B testing to decide who you are as a company and define your
relationship with the customer.

~~~
draw_down
I dunno, I remember going to a talk at SXSW in 2011 about how A/B testing
helps you reach local maxima but you need more to reach the next “peak”. I
believe the story about Google testing 40 shades of blue was also in common
currency back then.

I don’t think any of this stuff is new or undiscovered, really.

~~~
mlyle
Well, I don't think anyone anticipated the "dark patterns" stuff at the time--
that we could be training technology to be kinda evil in a way that would have
massive consequences-- national politics, reputation, etc.

------
cobbzilla
I liked this part:

“I have a rule-of-thumb for picking A/B test winners: Whichever version
doesn’t make any sense or seems like it would never work, bet on that. More
often than I like to admit, the dumber or weirder version wins.

This is actually how I tell if a company is truly optimizing their funnel.
From a branding or UX perspective, it should feel a little “off.” If it’s
polished and everything makes sense, they haven’t pushed that hard on
optimization.”

...but if you take this too far you end up with something like Amazon.com,
where everything is “a little off”?

~~~
UserIsUnused
And it works. Amazon is the leader. Are you sure it succeeded despite its UX,
or did the UX contribute?

~~~
s3r3nity
This.

I always use Amazon and eBay as go-to examples of optimization that works, as
a counter-force to those who want massive redesigns every 6 months because
some other new app or competitor has a sexy, slick UI.

------
seddona
Very useful content and perspective. I was certain it was all building to the
conclusion that you should hire the author's growth agency to get that 2x
conversion gain and then leave the funnel alone, but the ask never came!

~~~
reggieband
I was expecting that too, but it could also be a savvy attempt at gaining
credibility.

What I was really thinking is that productizing the activities of a growth
team would be ideal. If you could build a software product that efficiently
managed the process of the growth team, you could make a good value
proposition. E.g. one person ($150k/year) using one tool ($100k/year) is
significantly cheaper than the $650k/year he quotes.

------
harryf
> I have yet to come across a designer, engineer, or marketer that intuitively
> understood probability on day one.

Sad but true. And it's not even about understanding the more in-depth math.
It's about having an intuition for the 80/20 rule or Venn diagrams. “If I
work on this bug I can make 80% of our users happier, but if I work on this
other bug, which affects no one but our CMO, who always complains loudly...”
is a type of discussion I've found happens too rarely.

------
cousin_it
Good article; the author is clearly well informed. In the past I've made the
mistake of staying on a growth team for too long, well after the gains had
mostly run out. Toward the end, I was doing the best work I could, but it
didn't have much impact. Now I'm on another growth team that's still in the
honeymoon phase, grabbing 10% here and 20% there, and thinking about what to
do next when the gains run out.

------
aaronblohowiak
I highly recommend considering [https://medium.com/convoy-tech/the-power-of-bayesian-a-b-testing-f859d2219d5](https://medium.com/convoy-tech/the-power-of-bayesian-a-b-testing-f859d2219d5) as an alternative to p-value hacking / early stopping.
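
For a flavor of the approach (my own minimal sketch, not the post's code; the
conversion counts are made up), compare Beta posteriors over each variant's
conversion rate:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical data: conversions out of visitors for each variant.
    a_conv, a_n = 120, 1000
    b_conv, b_n = 138, 1000

    # Beta(1, 1) prior + binomial likelihood -> Beta posterior per variant.
    post_a = rng.beta(1 + a_conv, 1 + a_n - a_conv, 100_000)
    post_b = rng.beta(1 + b_conv, 1 + b_n - b_conv, 100_000)

    print("P(B beats A):", (post_b > post_a).mean())
    # Expected loss in conversion rate if we ship B and it's actually worse:
    print("Expected loss:", np.maximum(post_a - post_b, 0).mean())

The draw, as I understand the post, is that you reason about the probability
one variant beats the other and the expected loss of shipping it, rather than
a fixed-horizon p-value.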

------
airnomad
There's always something to work on, so the problem is sometimes that growth
teams are often too narrowly focused. You're done with the landing page, then
you move on to AdWords. You're done with AdWords, you move on to improving the
sales team. Making forms work better is not everything.

------
C1sc0cat
I think the big takeaway is that, still today, the decision makers don't
really understand digital and are still working in the same way that Mad Men
did in the '60s.

Won't mention the brand, but in 2019 I have had (in the context of a new
website) comments of "don't want too much text, we will let images lead" -
which is just not going to work.

This is supposedly the digital-native generation, who are 20 years younger
than me.

_edited to correct naïve to native_

~~~
chrisweekly
naïve (ignorant) -> native (fluent / expert)

Not nitpicking; those words have opposite meanings here.

~~~
C1sc0cat
Oops, a Freudian slip there (or an autocorrect gone wrong). You are of course
quite correct.

------
d--b
Wow, amazing article. Note for those who scan through: recommendations about
what to do instead are at the bottom.

------
cyborgx7
>At 95% certainty, you have 19 people saying “yes” and 1 person saying “no.”

>At 99% certainty, you have 99 people saying “yes” and 1 person saying “no.”

The article was worth the read to me just for this part. I thought I had a
pretty decent intuitive understanding of probabilities, but this put things
into a new perspective for me.

The rest of the article is marketing gobbledygook to me, though.

~~~
computerex
I didn't understand this part. Why is the author changing scales? Shouldn't it
be that at 95% certainty, you have 95 people saying "yes" and 5 people saying
"no"?

That way it's easier to make an apples-to-apples comparison. What point is the
author trying to make by changing the scale?

~~~
meigwilym
Especially as he goes on to state:

> It feels like a difference of four people when, in reality, it’s a
> difference of 80. That’s a much bigger difference than we expect.

I had to stop reading here.

~~~
UserIsUnused
Depends on how you frame it. There is a difference of 80 when you think about
"how many users say yes for each no."

So, what do you want to know: "yes for every 100" or "yes for every no"? It
matters.

