
Simple guide to running your first in-product A/B test

Updated: Feb 14


A while back, I had to run A/B tests to improve the activation rate.


If you've never run in-product A/B tests before, it can be a little overwhelming, so here's a simple guide on how to get started.



 



The first test


The first test we conducted showed a significant lift, increasing our core action from 14% to 21%.


It was a tiny change, but it made a huge impact and got us buy-in to run more experiments.



The second test


After working on activation, we went a bit deeper, making sure each part of the funnel was tested. Along the way, we found a user insight that we wanted to actively solve for.


The result? The percentage of people who successfully selected a plan increased by 28%.



All the other health metrics improved as well. The test concluded that, if site traffic remained constant, we had improved trial starts by 6%.




I also want to acknowledge that it was a relatively big change (it involved positioning and messaging) and required a week of design and dev support. When we first launched the test variant, the test failed. So we watched some Hotjar recordings and discovered a few bugs and some design flaws.




Everyone jumped in to help, and we quickly fixed them. Had we lost faith in the test early on, we wouldn't have done anything to improve it.



 


You can get started in one of these three ways. I've oversimplified them so you know where to look and what to look for.



Option 1


Step 1) Use a feature-flagging tool like Flagr to conduct A/B tests. Developers will create two variants (control and test).


Step 2) Then send that data to Mixpanel, linked to users' device IDs (see the sketch after step 3).


Step 3) Next, analyze the data in Mixpanel.
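To make Option 1 concrete, here's a minimal sketch of what the developer side could look like, assuming Flagr's REST evaluation endpoint and the mixpanel-browser SDK. The flag key, URL, token, and event names are placeholders, not from my actual setup:

```typescript
import mixpanel from "mixpanel-browser";

// Placeholder names -- swap in your own flag key, token, and endpoint.
const FLAG_KEY = "activation_onboarding_test";
mixpanel.init("YOUR_MIXPANEL_TOKEN");

// Ask Flagr which variant this device falls into.
async function getVariant(deviceId: string): Promise<string> {
  const res = await fetch("https://flagr.example.internal/api/v1/evaluation", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ flagKey: FLAG_KEY, entityID: deviceId }),
  });
  const evaluation = await res.json();
  return evaluation.variantKey ?? "control"; // fall back to control if the flag is off
}

export async function runExperiment(deviceId: string) {
  const variant = await getVariant(deviceId);

  // Link everything to the device ID so assignment and behaviour join up in Mixpanel.
  mixpanel.identify(deviceId);
  mixpanel.register({ experiment: FLAG_KEY, variant }); // attached to every later event

  mixpanel.track("Experiment Viewed", { experiment: FLAG_KEY, variant });

  if (variant === "test") {
    // render the new experience
  } else {
    // render the existing experience
  }
}
```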


Mixpanel will directly calculate statistical significance for you, which is helpful.
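If you're curious what "significance" actually means here, for a conversion-rate test it usually comes down to a two-proportion z-test under the hood. Here's a rough sketch using the 14% vs 21% rates from our first test, with made-up sample sizes of 1,000 users per group purely for illustration:

```typescript
// Two-proportion z-test: is the lift from the control rate to the test rate likely real?
function twoProportionZ(convControl: number, nControl: number, convTest: number, nTest: number): number {
  const pControl = convControl / nControl;
  const pTest = convTest / nTest;
  const pPooled = (convControl + convTest) / (nControl + nTest);
  const se = Math.sqrt(pPooled * (1 - pPooled) * (1 / nControl + 1 / nTest));
  return (pTest - pControl) / se; // |z| > 1.96 is roughly significant at the 95% level
}

// Assumed sample sizes: 1,000 users per group, converting at 14% and 21%.
const z = twoProportionZ(140, 1000, 210, 1000);
console.log(z.toFixed(2)); // ≈ 4.1, well above 1.96
```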




Option 2


Step 1) Send data to a tool like Statsig, either directly from Segment or via code.
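The "via code" part just means instrumenting the events your experiment cares about with Segment's SDK so they can be forwarded to Statsig as a destination. A tiny sketch using @segment/analytics-next, with placeholder IDs and event names:

```typescript
import { AnalyticsBrowser } from "@segment/analytics-next";

const analytics = AnalyticsBrowser.load({ writeKey: "YOUR_SEGMENT_WRITE_KEY" });

// Identify the user once, then track the events your experiment will measure.
analytics.identify("user_123", { plan: "trial" });
analytics.track("Plan Selected", { plan: "pro", source: "pricing_page" });
```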


I chose Statsig because it's used by companies like Notion, Microsoft, and Flipkart.


It's self-serve (no need to speak to a sales rep to get started), and they also offer a nice startup program if you need it.



Statsig really speeds up the process of running experiments. You get to see all your metrics easily and can filter users by whatever user profile data works best for you. I haven't completely explored the product myself, so it's a good idea to do some research on your own.


In my own journey, I evaluated Eppo, Optimizely, Taplytics, Split.io, and LaunchDarkly, and finally chose Statsig.


Step 2) Conduct experiments in Statsig.


The process is similar: developers set up an experience, and you run a test, sending data to Statsig.


Step 3) Then make a decision based on the results.
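Here's roughly what steps 2 and 3 look like in code, based on how I remember the statsig-js client SDK working; the experiment name, parameter, and event are placeholders, so double-check against the current Statsig docs:

```typescript
import statsig from "statsig-js";

async function showPricingPage(userId: string) {
  await statsig.initialize("client-YOUR_STATSIG_KEY", { userID: userId });

  // Placeholder experiment and parameter names.
  const experiment = statsig.getExperiment("pricing_page_test");
  const showNewLayout = experiment.get("show_new_layout", false);

  if (showNewLayout) {
    // render the test variant
  } else {
    // render the control
  }

  // Log the metric you'll decide on; Statsig joins it to the assignment for you.
  statsig.logEvent("plan_selected", "pro");
}
```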



Option 3


Step 1) Use the experimentation tool in your analytics platform, like Mixpanel Experiments.


Step 2) Ask developers to code the experience and run the test.
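If you go this route, the assignment logic typically lives in your own code: developers bucket each user deterministically and fire an exposure event with the experiment and variant as properties, which the analytics platform's experiment report can then read. A generic sketch (not tied to any particular vendor's API; the hashing approach and names are just one reasonable way to do it):

```typescript
import { createHash } from "crypto";

// Deterministically bucket a user so they always see the same variant.
function assignVariant(userId: string, experiment: string): "control" | "test" {
  const hash = createHash("sha256").update(`${experiment}:${userId}`).digest();
  const bucket = hash.readUInt32BE(0) / 0xffffffff; // map the first 4 bytes to [0, 1]
  return bucket < 0.5 ? "control" : "test";
}

const variant = assignVariant("user_123", "onboarding_checklist_test");

// Fire an exposure event with experiment + variant as properties,
// using whatever tracking call your analytics platform expects.
console.log({ event: "Experiment Started", experiment: "onboarding_checklist_test", variant });
```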




What to test?


To collect qualitative data for this test, I sent an email to users who signed up but did not activate in the product, asking for clear reasons why they didn't.


It worked well, but only after a bit of tweaking (adjusting copy, timing, etc.).


Here's the final template that worked well (h/t to Hillary for sharing):


Subject: Brutal feedback for {company name}

Body:
Hi {first name},

I noticed that you signed up for {company name} but never did {core action}. Any chance you'd share why? Even a single sentence would go a long way in helping us improve the platform.

Thanks!
{founder's name}
Founder at {company name}


Start with the user research if you want, then validate with data.


And finally, move on to building your test using the three options I shared above!


I'm trying to better understand who my audience is. If you've got ten minutes, I would love to do a user research call: toption.org/10-minute


Thanks for reading!


Best,

Khushi
