Why A/B Testing Is Good for Your Company Culture
Six years ago, my previous company had a problem. We had a habit of building large, time-intensive projects based on perceived market trends or tenuous research, and far too often, these projects failed to grow our business. We also had far too many meetings debating small details like the placement of buttons or which colors to use, and some of them even got contentious. Agile delivery counteracts a lot of these problems, but without the right systems and tools in place, we still ran into issues that slowed our progress toward our goals. The solution we came up with evolved slowly, but looking back, a few simple changes to our process and some new tools had a remarkable impact on accelerating our growth. As a bonus, we noticed positive effects on our company culture as well.

The first step we took was to get our data in order. We evaluated many BI (Business Intelligence) and analytics tools and ultimately decided that none of them gave us a deep enough understanding of our users. We opted instead to create a data warehouse where we could run our own queries and analytics, and then created reports using data visualization tools (which, by the way, are orders of magnitude cheaper than the BI tools). This had the added bonus of getting our data into roughly the right shape as we grew our data team.

The second step was to add an AB testing platform. At the time, there weren't any AB testing platforms on the market that we were happy with, so we built our own. Building it in-house meant that AB testing could be easily and deeply integrated into the code, a huge advantage.
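The core of variant assignment in a platform like this can be quite small. As a minimal sketch (the function and experiment names here are hypothetical illustrations, not our actual implementation), deterministic hashing is a common way to bucket users into variants without storing per-user state:

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user into a variant.

    Hashing user_id together with the experiment name keeps
    assignments stable across sessions and independent across
    experiments, with no assignment table to maintain.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same user always lands in the same bucket for a given experiment.
print(assign_variant("user-42", "signup-age-gate"))
```

Because assignment is a pure function of the inputs, any service that shares the hashing scheme can compute a user's variant locally, which is part of what makes deep code integration easy.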

Finally, we changed our product process to incorporate a lot of testing. For each new project or iteration, teams were required to create a valid hypothesis about why it would be successful and determine specific success metrics (for example, increasing retention or engagement rates). Then their hypothesis was tested against the control with our AB testing framework to see which variant won. After a few cycles of this, we were able to drastically increase the cadence of our testing.
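Deciding which variant "won" comes down to a statistical comparison of the success metric between groups. As an illustrative sketch (this is a standard technique, not our actual framework, and the numbers below are made up), a two-proportion z-test is one common way to compare conversion rates between control and treatment:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Compare conversion counts of control (a) vs treatment (b).

    Returns the z statistic and two-sided p-value of a pooled
    two-proportion z-test.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: 4.8% vs 5.6% conversion over 10k users each.
z, p = two_proportion_z_test(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below a pre-chosen threshold (commonly 0.05) suggests the difference is unlikely to be chance; in practice you would also fix the sample size in advance rather than peeking at results mid-test.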

Lessons learned

Becoming data-driven had remarkable and wide-ranging effects on our product process and company culture. Let’s go over the top four effects.

Alignment

Adopting the steps above allowed us, as a company, to focus on the metrics that would drive our business success. Clearly defined goals removed a lot of ambiguity, because every project had explicit success metrics. By defining those metrics at the start of the planning cycle, we could drive alignment around the goals. This helped keep the inevitable scope creep and pet features from inserting themselves, or at least gave us the framework to say "yes, but not now." Knowing what success meant also let our developers integrate the necessary tracking from the beginning, instead of as an afterthought, which is what too often happened before. We also got in the habit of building the reports and dashboards in parallel to capture current behavior (if any), so they would be ready for tracking the projects' effects and sharing results with the company.

Speed

The meetings where we reviewed designs or implementations became much faster, because we could often just say, "Let's test it." Previously, people let their opinions and biases drive decisions. The biggest problem was that most of the time, we were not our users; we were essentially guessing what our users would prefer, and a guess is just an opinion. When two opinions clashed, it was very easy to take it personally or defer to the HiPPO (the Highest Paid Person's Opinion). By focusing on which metrics defined success, we could remove ego from the decision process. Whenever we found ourselves with an intractable difference of opinions, the answer became "let's AB test it." This allowed us to move on, increased the speed of development, and shortened the time between iterations. When properly implemented, AB testing should be straightforward and add minimal project overhead.

Intellectual humility

If there is one thing I’ve learned from our AB testing, it’s that most people are really bad at predicting user behaviors — myself included. For example, as an education company, we had to be sure that our members were over 13 to legally register. We had quite a number of ideas on how to do this, from small checkboxes on the user form to a full-screen statement, “You must be over 13 to continue.” When it came to AB testing these ideas, I felt certain that the minimal treatment would win. After all, the accepted wisdom is that one should reduce friction on the user forms by simplifying them as much as possible. However, the winner was the full-screen version, and by a lot! AB testing is nothing if not humbling. This was great news for our company as this attitude was incorporated into our culture — good ideas can come from anywhere, and no one person in our company is the arbiter of what is “right” and “wrong.” The HiPPO’s ideas are just as likely to be right or wrong as anyone else’s ideas.

Team collaboration

Initially, we had a growth team coming up with all the test ideas. However, since the odds of successful AB tests were (and are) low, having just one team come up with all the test ideas fundamentally limited our chances of big wins. We were aiming to run as many potentially high-impact tests as we could. We found that inviting ideas from the entire company opened up ideas we hadn’t considered, and in turn, led to some really good results. As a bonus, we also improved the culture of inclusion by inviting everyone to participate in the process.

In closing, moving to a data-driven process centered on hypothesis-based AB testing had a remarkable impact on our growth and culture. We could choose the metrics we wanted to move and focus the teams on moving them, and the results of these projects were directly measurable. Some single tests increased our revenue by ~20%, and the excitement of those wins was palpable. The teams started executing faster and had greater alignment and collaboration, which was good for the company and great for our users.
