A/B Testing in ASO: An iterative approach to growth
Nowadays, A/B testing has become a common activity in performance marketing. It’s hard to find a marketer who doesn’t mention or use A/B testing in her day-to-day activities. Whether we are talking about paid campaigns, creative assets, or App Store Optimization, A/B testing is at the core of any company’s growth activities.
Growth, especially in a performance context, is the result of experimentation and innovation. But how can we achieve either without testing new ideas? Testing new ideas without tracking their results and their effect on the bottom line, however, is not viable in a data-driven world, and that’s where A/B testing comes in: it gives us a methodical, data-driven process for making decisions.
What is A/B Testing?
A/B testing, also known as split testing, is a randomized experimentation process in which two versions of a variable are compared against each other. It is a useful approach that allows marketers to gather the right data when developing, launching, or growing a mobile app.
In this article, we want to take a look at how our team of experts at REPLUG has applied this process to App Store Optimization activities, and how our partners have benefited from this approach.
In ASO, more specifically, A/B testing refers to the process of testing different versions of one element (textual or visual) of the Store Listing to see which one performs best. Performance, in this case, is defined as the change (positive or negative) in the conversion rate of our Store Page.
Before jumping into more details though, it is important to understand why A/B testing is important in a data-driven context.
Define, Set Up, Test, Learn & Repeat: Why bother with A/B testing?
Before dipping our toes into the deep waters of A/B testing, let’s talk about what this process really means and how it can help us improve the long-term growth strategy of our mobile app. As we said, testing and comparing changes to the user experience before actually implementing them will help us improve the performance of our mobile apps and user acquisition efforts.
The A/B testing process is rather straightforward. We take a group of users and divide them into two groups (called “variants”), each of which is shown a different experience.
The two experiences are defined as follows:
- Group A: sees the standard experience, which is why it is called the “control group”
- Group B: sees the alternative experience, which is why it is called the “variant group”
The duration of the test is either decided beforehand, or the test runs until we have gathered enough statistically significant data to compare results and establish a winner.
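To make the split concrete, here is a minimal Python sketch, purely illustrative, of how users could be assigned deterministically to a control or a variant group. The experiment name and user IDs are made-up placeholders, not part of any specific tool.

```python
import hashlib

def assign_group(user_id: str, experiment: str = "store_listing_test") -> str:
    """Deterministically assign a user to group A (control) or B (variant).

    Hashing the user ID together with the experiment name keeps the
    assignment stable across sessions and splits traffic roughly 50/50.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Example with hypothetical user IDs
for user in [f"user_{i}" for i in range(6)]:
    print(user, "->", assign_group(user))
```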
The change we hypothesize and test through an A/B test might have a positive effect, a negative effect, or no effect at all, but that shouldn’t stop us from testing. Any learning we gather through A/B testing is good learning that helps us move a step forward.
In performance marketing, we can test pretty much anything we want. In our experience, a positive change may be triggered by a slightly different user experience, such as the call-to-action on an ad, a design detail like the color of the checkout button, or even the registration flow.
It is fundamental to keep in mind that:
- Our goals and hypotheses should be established before running the test, so that we have a clear view of what we want to achieve.
- The sample sizes of the two groups should be equal (see the sizing sketch below).
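As a rough, back-of-the-envelope illustration of what “enough users per group” means, the sketch below estimates the sample size needed per variant to detect a given lift in conversion rate with a standard two-proportion test. The baseline and target rates are assumed values, not data from a real experiment.

```python
import math

def sample_size_per_group(p1: float, p2: float,
                          z_alpha: float = 1.96,   # two-sided significance level of 5%
                          z_beta: float = 0.84) -> int:  # statistical power of 80%
    """Approximate number of users needed in EACH group to detect
    a change in conversion rate from p1 to p2."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)

# Assumed example: baseline conversion of 25%, hoping to detect a lift to 28%
print(sample_size_per_group(0.25, 0.28))  # roughly 3,390 users per variant
```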
To sum it up, here is the A/B testing visualization framework we implement at REPLUG:
When thinking about implementing this process in our strategy, there are a few more things to keep in mind:
- A/B testing can be used to test a change to a single variable or to one of many.
- It is recommended to test one element at a time, because if we test multiple variables simultaneously, identifying which one drove the change becomes too complicated.
- A/B testing is also valuable because different audiences may behave differently, giving us insights into specific subsets of our audience.
- The assumptions behind the testing environment are critical to the success of the process.
What about A/B testing in ASO?
After understanding how the A/B testing process works and the critical role it plays in the performance marketing area, it’s time to focus on its role in the App Store Optimization process.
A/B testing in ASO can help us improve two main factors:
- Conversion rate
- Discoverability
More specifically, by applying this iterative process to our ASO strategy, we can identify improvements on specific elements of our Store Listings, both textual and visual, for example:
- Icon
- Screenshots
- Video Preview
- Title
- Subtitle or Short Description
A/B testing in ASO: 4 ideas for implementation
Depending on the testing environment, there are several ways to perform A/B testing in App Store Optimization. Below, we take a look at the four most common approaches we have experimented with alongside our partners:
1. Google Play Experiment
This is the easiest, and perhaps the fastest and cheapest, way for us to perform A/B testing on our Store Listing. It is implemented directly in the Google Play Console, and it’s relatively simple to use. Despite the obvious advantages, we need to consider the following drawbacks:
- There’s no customization of the elements to be tested
- Android users don’t necessarily behave in the same way (or respond to the same stimuli) as iOS users
2. Third Party Platforms
If we also want to experiment for our iOS users, we might need to rely on 3rd-party tools, such as Store Maven and SplitMetrics. These tools essentially mimic the Store Listing in a web page to study how users interact with a variant of the original. Although they offer greater flexibility, they come at a cost that does not always justify the benefits.
3. Apple Search Ads’ creatives test
Although this is a “hack”, it is still a valuable way to implement A/B testing in the iOS environment. It allows us to test our Store Listing’s creative sets on App Store search traffic. The main drawback of this method is that there’s little flexibility.
4. Country Split
This is a simple and free-to-implement A/B testing idea. If we are active in more than one country, we can test different variants in different countries. Although it lets us operate directly in the store environment, there are some big drawbacks: the audience in a specific country may react quite differently for cultural reasons rather than genuine preference, so the learnings cannot always be applied universally.
Understanding and tracking the results to implement the right learnings
Generally speaking, in an A/B testing environment, if one variant performs statistically better than the other, you have a winner and should replace the losing variant. If neither is statistically better, then the element you tested may not be as important as you think, or the change may simply not produce a statistically significant improvement.
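To make “statistically better” concrete, here is a minimal sketch of a two-proportion z-test comparing the conversion rates of two variants. The conversion counts are made-up numbers used only to illustrate the calculation.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return the z statistic and two-sided p-value for the difference
    between the conversion rates of variant A and variant B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Assumed example: 250/1000 conversions for A vs. 300/1000 for B
z, p = two_proportion_z_test(250, 1000, 300, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05 would suggest B's lift is significant
```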
Although understanding when one variant is better than another is not necessarily complicated, we need to track the results of the different tests to ensure that the learnings we implement hold up over time. As a matter of fact, a single hypothesis might be linked to several tests, which need to be tracked, understood, and eventually implemented.
A/B testing in ASO can significantly improve the conversion rates of our Store Listings. It is an iterative approach to growth that deserves a leading place in our strategy. At REPLUG, we implement this process methodically for our partners to challenge the status quo and create a growth mindset.