A/B/n testing is certainly not dead. More likely, Optimizely failed to see enough movement from their free plan into their paid tiers and, in an effort to streamline/focus/cost-cut, they're dropping the free plan.
This is interesting because Optimizely was in part responsible for commoditizing A/B testing in 2010/11. Previously, it was a thousands-of-dollars-per-month product.
I think the reason Optimizely decided to ditch the free plan, self serve plans, and the lower-level enterprise plans is that churn is high in those segments.
Of course, the move upmarket may or may not improve the mechanics of churn in the short term, but incumbents get displaced from the bottom, not the top. Optimizely may close bigger deals now, but those deals will become fewer and farther between until the spigot shuts off. Because of the retreat from the low end of the market, a generation of new marketers will never use Optimizely.

My expectation: ARR growth at Optimizely will continue to slow, churn will continue to be an issue, and execs will soon depart. I wonder if the board will push for a sale.

It's a shame, because speaking as a churned customer, the reason we churned is that their packaging is crappy, their sales process is terrible, and their marketing-speak has become unbearable. We didn't want to churn; there was just no good plan for us.
When I started Optimizely with my co-founder, Pete Koomen, our goal was to enable anyone in the world to do website A/B testing. I believed then, as I do now, that we are democratizing the scientific method. That said, when a company doesn’t consider experimentation a business critical process, they are unlikely to dedicate the internal resources necessary to make it successful. We experimented (ha!) with a free plan to see if it would be a good model for getting customers to dip their toe into A/B testing. Turns out, the data showed it wasn’t as effective as a robust free trial, which we continue to offer for all products.
Also, A/B testing is far from dead. Companies that have a culture of experimentation have grown their market cap by 756% in the last five years. See our Opticon keynote for more context: https://www.optimizely.com/opticon/
We were using Optimizely until we didn't qualify for the lowest tier paid plan. Not enough impressions?
We knew that at our traffic tier, Optimizely's experiment analysis was not going to be statistically robust. We were going to have our data person do the analysis with different techniques to deal with the smaller sample size. What we really wanted was the interface for a non-technical person to create the variations, with Optimizely managing which variation each visitor saw.
Instead of letting us continue, our account was canceled. In the coming months, when we do have the traffic, we are not going to consider them. Not impressed.
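For what it's worth, here is a sketch of the sort of small-sample analysis described above (the counts are invented, and this assumes scipy is available): an exact test avoids the large-sample approximations that get shaky at low traffic.

```python
# Fisher's exact test on a 2x2 table of [conversions, non-conversions],
# which avoids the normal approximation that breaks down at low traffic.
from scipy.stats import fisher_exact

control = [18, 482]  # 18 of 500 visitors converted
variant = [29, 471]  # 29 of 500 visitors converted

odds_ratio, p_value = fisher_exact([control, variant])
print(f"odds ratio: {odds_ratio:.2f}, p-value: {p_value:.3f}")
```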
We were in a similar spot. We were on their (then) silver plan, but by the time we were ready to upgrade to gold, there was no gold. And no silver lining (pun intended). Only Enterprise. (see [0] for more details)
That was the reason I ended up creating Alephbet[1], an open-source JS A/B testing framework you can use with lots of different backends, and later Gimel[2], a backend that runs on AWS Lambda and Redis.
For developers, I think Alephbet and Gimel are a great free and open-source alternative to commercial tools. We've been running them in production for a couple of years now, and our monthly AWS bills are $peanuts. They're NOT for marketers or non-technical users, though.
You are welcome to use VWO. I can help you set up a trial account with us, and I usually hang around on HN to answer any of your questions. We have a comparison page here: https://vwo.com/comparison/optimizely/
I was disappointed that YCombinator funded Optimizely, given that they were expressly copying the idea of a small, single-founder company whose founder was sharing his business and experience on these boards. As an observer, admittedly without knowledge of who made that funding decision and why, it still seems the most unethical thing YC has ever done.
I get it, the economics of the web force articles to have silly clickbait titles like this one, but it's still annoying.
I thought the article actually did a good job of exposing the problems of "naive" A/B testing, but the conclusions of the article do not say, at all, that "A/B web testing is dead".
Google can afford to offer lots of stuff for free - their principal revenue remains Ads, and until that changes, they'll keep offering a lot for free.
As I've been saying for several years, most website A/B testing in the wild is statistically illiterate, expensive nonsense. Optimizely is right to cull their MVT product.
I'm curious whether this decision will have negative downstream effects. Optimizely doesn't need to be free, but an affordable, mainstream option seems a smart way to get early-stage companies in the door. A wise competitor would take advantage of this new gap in the market.
Early-stage companies do not need A/B testing. They need to set up their funnel and growth channels properly. Only when companies have reasonably stable metrics (CAC per channel, LTV, CTR) should they consider optimizing them.
Moreover, without proper staffing (a dedicated resource for A/B testing), the effort will simply fizzle out. To get benefits out of A/B testing, a company needs to invest in resources and process; without that investment, A/B testing is hit-and-miss.
You're totally right, early stage companies do not need A/B testing and don't have the resources to get it right. But that doesn't mean that many uninformed entrepreneurs won't want to try it.
If the startup in question does get to the stage where real A/B testing is necessary, the enterprise solution the founders are already familiar with will probably have an edge in the selection process.
I'm glad to hear it sparked your interest (btw, I didn't pick the headline).
I agree with what has been said about A/B testing so far - it is a great tool for more mature companies - but I hope I can also make a case for personalization.
After all, the "let's test that" mentality is not always applicable. Startups and SMEs should not underestimate how good their gut feeling is when it comes to optimizing their website for different audiences. And yes, splitting your visitors into different segments might seem counterintuitive when talking about testing, but I strongly believe it's the way to go for companies that haven't achieved their desired results with A/B testing.
Disclaimer: I work for Unless, a personalization service.
Are there any open source alternatives to Optimizely? If not, would an experienced SWE comment on an estimated difficulty level to get an MVP together?
With a ton of assumptions, mainly that you have an existing data or analytics pipeline, a simple in-house experimentation framework could take on the order of weeks rather than months to build.
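As a minimal sketch of the core piece (all names here are hypothetical), deterministic hash-based bucketing gives every user a stable variant with no assignment storage; logging exposures and joining them to conversions is the part that plugs into your existing pipeline:

```python
# Deterministic hash-based bucketing: the same (experiment, user) pair
# always maps to the same variant, so no assignment storage is needed.
import hashlib

def assign_variant(experiment: str, user_id: str,
                   variants=("control", "treatment")) -> str:
    key = f"{experiment}:{user_id}".encode()
    bucket = int(hashlib.sha256(key).hexdigest(), 16) % len(variants)
    return variants[bucket]

# Log an exposure event to your analytics pipeline at assignment time,
# then join exposures to conversions when analyzing results.
print(assign_variant("new-checkout-flow", "user-42"))
```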
There are no open-source alternatives for the WYSIWYG editor part of Optimizely. But before you roll your own backend implementation, have a look at https://facebook.github.io/planout/ .
(We had success combining PlanOut with a CMS for the visual part. Marketing could do simple A/B tests; more complicated tests were done in the backend together with a developer.)
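For a flavor of PlanOut, variant assignment looks roughly like this (adapted from its documentation; the experiment and parameter names are invented):

```python
# A basic PlanOut experiment: assignment is a deterministic function of
# the unit (here, userid), so each user sees a stable variant.
from planout.experiment import SimpleExperiment
from planout.ops.random import UniformChoice

class ButtonColorExperiment(SimpleExperiment):
    def assign(self, params, userid):
        params.button_color = UniformChoice(
            choices=["#2b8a3e", "#c92a2a"], unit=userid)

exp = ButtonColorExperiment(userid=42)
print(exp.get("button_color"))  # stable for a given userid
```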
You need to be VERY careful using Google Analytics as a backend. After a certain threshold, they start sampling your data, and results will therefore become highly inaccurate.
Disclaimer: I also created an open-source A/B testing framework with Google Analytics as (one potential) backend, but then realized it's sub-optimal. I wrote about it at [0].
The article's title, "A/B testing is dead", is very misleading. A/B testing is not a tool; it's a mental model for seeking learnings and improvements. Anytime you introduce a change to your business, you're implicitly or explicitly trying to understand its impact. Most of the time, setting up an environment to tease out the real effects of a change is hard (experiments in strategy are usually very difficult), so you're left guessing.
However, for mediums that allow easy tests (like websites and landing pages, where you can set up scientific experiments with ease and have enough traffic to get results in a short time period), why wouldn't you? The way I see it, rejection of A/B testing is a rejection of the scientific method of conjecture and improvement. In a highly contextual business environment with tens or hundreds of conditional variables, best practices and "gut feeling" can only go so far.
A/B testing for very small businesses (those who get attracted to free plans) doesn't make a lot of sense because they usually don't have enough traffic to get statistically significant results. Plus, optimization via A/B testing comes only after you've gotten your basics right and it's now time to scale. This is why at VWO we never had a free plan.
So Optimizely's decision to ditch the free plan makes sense.
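To put rough numbers on "enough traffic", here is a sketch using the standard two-proportion sample-size formula (the baseline rate and lift are illustrative):

```python
# Per-variant sample size for a two-sided, two-proportion z-test
# (normal approximation), at significance alpha and the given power.
from scipy.stats import norm

def sample_size_per_variant(p1, p2, alpha=0.05, power=0.80):
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p1 - p2) ** 2

# Detecting a 3.0% -> 3.6% conversion lift (20% relative improvement):
print(round(sample_size_per_variant(0.03, 0.036)))  # ~13,900 per variant
```

At that baseline, a single test needs roughly 28,000 visitors across the two variants to reliably detect even a 20% relative lift - more traffic than many free-tier sites see in months.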
> A/B testing is not a tool; it's a mental model for seeking learnings and improvements
No, it's a validation phase most businesses use to sign off product decisions that are already made. They invest thousands in a new feature, run an A/B test, and then wait until they see an ambiguous 'win' in favour of the new feature. This is somewhat akin to flipping a coin until you get three heads in a row, and declaring yourself winner of the game.
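The coin-flip analogy is easy to make concrete. Here is a quick simulation sketch (parameters are hypothetical) of an A/A test where both variants are identical, but the experimenter checks significance after every batch and stops at the first "win":

```python
# An A/A test: both variants are identical, so any declared 'winner' is
# a false positive. Peeking after every batch and stopping at the first
# p < 0.05 inflates the error rate far beyond the nominal 5%.
import random
from statistics import NormalDist

def peeked_false_positive_rate(n_sims=1000, batches=20, batch_size=200,
                               base_rate=0.05, alpha=0.05):
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    wins = 0
    for _ in range(n_sims):
        conv_a = conv_b = n_a = n_b = 0
        for _ in range(batches):
            conv_a += sum(random.random() < base_rate for _ in range(batch_size))
            conv_b += sum(random.random() < base_rate for _ in range(batch_size))
            n_a += batch_size
            n_b += batch_size
            pooled = (conv_a + conv_b) / (n_a + n_b)
            se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
            if se > 0 and abs(conv_a / n_a - conv_b / n_b) / se > z_crit:
                wins += 1  # a 'significant winner' that cannot be real
                break
    return wins / n_sims

print(peeked_false_positive_rate())  # typically ~0.15-0.25, not 0.05
```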
You can denounce this as A/B testing gone wrong, but it's a reality of the web industry that most testing is badly motivated and statistically illiterate. The mentality I describe isn't limited to SMEs and tinpot organisations, either. I've seen it espoused by highly respected product managers at prestigious organisations.
Where does this epidemic of pseudoempiricism come from? Well, if I had to guess, I'd point to the marketing of companies like your own.
Take a look at https://vwo.com/ab-testing/ - "AB testing - The Complete Guide". There's not one mention of statistical significance, in word or spirit. There's no talk of setting a timeline in advance of the test. There are no hard numbers to set a context for what's an acceptable sample size - in fact, the guide is specifically aimed at small business owners.
So if anyone is undermining trust in the "scientific process", it's not Optimizely, but in a way - you.
>There's no talk of setting a timeline in advance of the test. There are no hard numbers to set a context for what's an acceptable sample size - in fact, the guide is specifically aimed at small business owners.
You should take a trial of our product. Before setting up a test in VWO, we ensure you understand that the test will have to run until we have statistically significant results. We've moved away from frequentist methodologies (where peeking at results was an issue) to Bayesian ones. Check out what we do here: https://vwo.com/blog/smartstats-testing-for-truth/
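For context, the basic Bayesian approach - a generic sketch, not VWO's actual SmartStats implementation - puts a Beta posterior on each variant's conversion rate and reports the probability that one beats the other:

```python
# Beta-Binomial posteriors for each variant's conversion rate, with the
# probability that B beats A estimated by Monte Carlo sampling.
import numpy as np

rng = np.random.default_rng(0)

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, samples=100_000):
    # Uniform Beta(1, 1) prior, updated with observed conversions.
    post_a = rng.beta(1 + conv_a, 1 + n_a - conv_a, samples)
    post_b = rng.beta(1 + conv_b, 1 + n_b - conv_b, samples)
    return (post_b > post_a).mean()

# Hypothetical counts: 120/2400 conversions for A, 145/2400 for B.
print(prob_b_beats_a(120, 2400, 145, 2400))  # ~0.94
```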
The guide is aimed at people who are just starting out, but I agree with you, we should have pointed them to the importance of getting statistically correct results.
> No, it's a validation phase most businesses use to sign off product decisions that are already made.
Interesting - could you elaborate further on this? I've only really worked with technically unsophisticated companies (i.e. non-tech companies), and they've always been willing to do it properly, factor in the results, and not set up leading experiments.
Is it a symptom of technical companies that staff are more bought-in to their solutions?
When this was discussed 2 years ago, I teased out[0] (with a dose of cynical attitude) some key points about structural problems with A/B testing tools - problems that steer people away from scientific methods and into the realm of easily exploitable bullshit artistry.
I never had an opportunity to use or check out VWO, so I'd like to ask you: does your tool structurally help users create statistically sound experiments? Can you share something about how you do it?