Building for Better Revenue

Monetization tips and strategies for growing your app

How A/B Testing Can Help Dramatically Increase Your Revenue

The format, placement, and frequency of the ads you choose to implement in your app can have a big impact on your ad performance. For example, a native ad with a larger call-to-action button might do better than one with a smaller call-to-action button. Or, with an interstitial ad, you can test two different placements within a game to see which one drives better revenue and a better experience. Testing ad frequency is another way to optimize the ad experience: show ads too rarely and you won’t get enough clicks; show them too often and users get frustrated.

To figure out which ad formats work best, consider running A/B tests to compare performance of different design choices for your placements.



What is A/B Testing?

A/B testing allows you to test two different components, otherwise known as variables, of an ad to see which one performs better when it is shown to users. By analyzing these variables, you learn how to optimize your revenue. You can test variables including ad receptiveness, which means when people are most receptive to seeing an ad; creative components such as colors or button placement; and frequency, or how often the ad is shown.



Determine Which Variables You Want to Test

Here are the three main variables you can A/B test in your in-app ads to optimize your revenue: receptiveness, creative, and frequency.

Before getting started, best practices are:

  • Keep all aspects of your ad constant except for the variable you are testing, so you know which one performs best
  • Run each test for at least 14 days to ensure you gather sufficient data for meaningful results
  • Limit the number of tests you conduct at any given time to ensure sufficient data and accurate results


Receptiveness




Receptiveness relates to how or when a user might be more willing to see or interact with an ad. User experience research has found that there are often natural breaks within apps where ads tend to perform better. For example, people are more likely to click an ad after they’ve completed a major task, such as cleaning their phone memory or reaching the next level of a game.1

Given its importance, receptiveness should be the first thing you consider when placing ads. Even if you have a great placement design, performance could be weak if the ad isn’t shown in an optimal place within your app’s flow.


How to Test for Receptiveness


  1. Select at least two Ad Spaces within your app for an ad placement to be shown. This is the variable that we will test.
  2. Create unique Placement IDs for each Ad Space. This will allow each Placement ID to generate unique performance data.
  3. Choose the same design for both of your placements so that you can be sure you are testing receptiveness only.
  4. If your ad server allows, allocate half of your inventory to one Placement ID and half to the other. If your ad server uses a waterfall to call one ad network after another, ensure that both Placement IDs are treated equally (in the same position in the waterfall), and that the audiences seeing each placement are equal and randomized. (A minimal code sketch of this split follows the list below.)
  5. If the above method in step 4 is not possible with your existing ad server, or if you’re not currently leveraging an ad server, you can run one custom unit first and then change out your Placement ID with the other unit after at least 14 days.
  6. Testing too many placements at once can compromise the user experience and hurt performance. To avoid this, you can either segment your audience within your ad server, splitting it in half and showing one ad unit to each half, or turn on each ad unit for 14 days and compare average performance from the final days of each test.
  7. Ensure that you have your specific placement’s status set to enabled. After testing is complete, determine which placement performed better. Be sure to deactivate unused Placement IDs, leaving only the one associated with your optimal placement.
  8. Measure the results of your test (more information below).
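If you manage the split in your own app code rather than through an ad server (step 4), one simple approach is a stable, randomized 50/50 assignment of users to the two Placement IDs. The sketch below is a minimal illustration of that idea; the placement ID strings and the loadAdForPlacement helper are hypothetical stand-ins, not part of any specific SDK.

```kotlin
import kotlin.math.abs

// Hypothetical Placement IDs created in step 2; replace with your own.
const val PLACEMENT_A = "PLACEMENT_ID_AFTER_LEVEL_COMPLETE"
const val PLACEMENT_B = "PLACEMENT_ID_HOME_SCREEN"

// Assign each user to one test arm based on a stable user identifier,
// so the same user always sees the same placement and the split stays ~50/50.
fun placementForUser(userId: String): String =
    if (abs(userId.hashCode()) % 2 == 0) PLACEMENT_A else PLACEMENT_B

// Hypothetical helper standing in for however your app requests an ad
// for a given Placement ID (ad server call, SDK wrapper, etc.).
fun loadAdForPlacement(placementId: String) {
    println("Requesting ad for placement: $placementId")
}

fun main() {
    // At a natural break in the app flow, request the ad for this user's arm.
    loadAdForPlacement(placementForUser("user-42"))
}
```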


Creative






Once you’ve made the final decision about where to show an ad, you’re ready for creative testing. The purpose of this testing is to determine which ad components, such as call-to-action (CTA) button color, lead to better performance within your specific app experience and ad placement. While some apps see large performance differences when creative is customized, others see only minor differences. Until you test, you won’t know which outcome holds true for your app.


Example: Testing native ad placements

Custom buttons: If your app uses custom buttons, consider testing their inclusion in your ads too.

Button size: We’ve found that making the CTA more prominent in the design can lead to more engagement. Be sure never to hard-code the CTA.

No vs. Close: Try replacing the Close option in the corner with a No button alongside the CTA. Because a No button is less common, this may slow your visitor’s reaction time and increase the likelihood they’ll notice the ad. Also, be sure that this option is clear to users so they know they have a choice to close the ad.

Different colors: Try testing different colors within the CTA or ad text to figure out what aligns best to your design.

Cover image: Although cover images are optional, we’ve seen engagement grow when they’re a prominent part of the ad.

Bigger screen elements: People are usually drawn to larger elements on a mobile screen. What elements of your design could be enlarged for testing?

Horizontal scroll: Once you’ve established a native standalone placement you’re comfortable with, consider testing it against a horizontal scroll. Since scrolling ads incorporate additional messages and CTAs, this option has the potential to unlock valuable new inventory.

How to Test Creative


  1. Select one placement within your app where you would like to improve performance, for example one that isn’t getting high CPMs.
  2. Identify the creative ad component you want to test, for example button color (see illustration).
  3. Create two different versions of the ad component you want to test, either by starting from an existing ad design or by creating two new designs. For example, a blue button will be variable A and a grey button will be variable B.
  4. Next, integrate each design using a unique Placement ID.
  5. If your ad server allows, allocate half of your inventory to one Placement ID and half to the other. If your ad server uses a waterfall to call one ad network after another, ensure that both Placement IDs are treated equally (in the same position in the waterfall). Either way, ensure that the audiences seeing each placement are equal and randomized. (A code sketch of this variant assignment follows the list below.)
  6. If the above method is not possible with your existing ad server, or if you’re not currently leveraging an ad server, you can run one creative version (variable A) first and then change out your Placement ID with the other creative version (variable B) after at least 14 days.
  7. Keep testing different ad components until you have reached your optimal results. Remember to deactivate all Placement IDs aside from the one associated with your winning creative version.
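As a rough sketch of steps 3 through 5, the variant assignment below keeps everything constant except the CTA button color and routes each variant through its own Placement ID. The enum values, placement ID strings, and renderNativeAd helper are hypothetical illustrations, not a specific ad SDK’s API.

```kotlin
import kotlin.math.abs

// Hypothetical creative variants from step 3: variable A uses a blue CTA button,
// variable B a grey one. Each variant has its own Placement ID so the network
// reports performance for each separately.
enum class CreativeVariant(val placementId: String, val ctaColorHex: String) {
    A("PLACEMENT_ID_BLUE_CTA", "#1877F2"),
    B("PLACEMENT_ID_GREY_CTA", "#8A8D91")
}

// Stable, randomized 50/50 assignment so each user consistently sees one variant.
fun variantForUser(userId: String): CreativeVariant =
    if (abs(userId.hashCode()) % 2 == 0) CreativeVariant.A else CreativeVariant.B

// Hypothetical rendering hook: only the CTA color differs between variants;
// layout, copy, and position stay constant so the test isolates one component.
fun renderNativeAd(userId: String) {
    val variant = variantForUser(userId)
    println("Load placement ${variant.placementId} with CTA color ${variant.ctaColorHex}")
}

fun main() {
    renderNativeAd("user-42")
}
```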


Frequency






After you’ve selected your placement and creative, you’re ready to test ad frequency. This helps you determine how often you should show ads to your users so you can keep users engaged and interested in your app. There’s an inverse correlation between the number of ads shown to users and the CPM of each ad, as showing ads too frequently tends to decrease each ad’s performance.



How to Test Ad Frequency


  1. When testing, we suggest using Audience Network as the only ad source running for the placement you’re testing. This helps you control for other factors such as demand type, creative execution and differences in quality.
  2. Identify the different frequency levels for a particular placement that you want to test.
  3. Create two unique Placement IDs in addition to an existing one. Implement all three Placement IDs into the same Ad Space. It’s important to use the same creative design for each Placement ID so you can accurately measure the impact of the frequency level.



  4. To conduct the test, you can only activate one Placement ID at a time. We suggest you run each test for at least 14 days to get significant results. In your Audience Network dashboard, enable one of your new unique Placement IDs, then deactivate the other two.

    Within your ad server (or directly in your source code), plug in the newly active Placement ID and lower the ad call frequency to 50% of your existing implementation.





  5. Repeat Step 4 with the third, unused Placement ID, and set the frequency of ad calls to 50% greater than the original. (A sketch of all three frequency arms follows this list.)
  6. After you’ve collected results on each Placement ID, you can determine the optimal frequency. Once you’ve done this, keep the placement with the winning frequency and deactivate the other Placement IDs.
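If ad call pacing lives in your app code, the sketch below illustrates the three frequency arms from steps 4 and 5: the original cadence, one at half that frequency, and one roughly 50% more frequent. The placement IDs, the baseline interval of one ad per four natural breaks, and the AdPacer class are all hypothetical choices made for illustration.

```kotlin
// Hypothetical frequency arms. Only one Placement ID is active at a time
// during the test; the intervals encode the 50% lower and ~50% higher cadences.
enum class FrequencyArm(val placementId: String, val breaksBetweenAds: Int) {
    ORIGINAL("PLACEMENT_ID_BASELINE", 4),    // existing cadence: an ad every 4 natural breaks
    HALF_RATE("PLACEMENT_ID_LOW_FREQ", 8),   // 50% of the original frequency
    HIGHER_RATE("PLACEMENT_ID_HIGH_FREQ", 3) // ~50% more frequent (4 / 1.5, rounded)
}

class AdPacer(private val arm: FrequencyArm) {
    private var breaksSinceLastAd = 0

    // Call at each natural break (level complete, task finished, etc.).
    // Returns the Placement ID to request when it is time to show an ad, else null.
    fun onNaturalBreak(): String? {
        breaksSinceLastAd++
        if (breaksSinceLastAd < arm.breaksBetweenAds) return null
        breaksSinceLastAd = 0
        return arm.placementId
    }
}

fun main() {
    val pacer = AdPacer(FrequencyArm.HALF_RATE)
    repeat(16) { i ->
        pacer.onNaturalBreak()?.let { println("Break ${i + 1}: request ad for $it") }
    }
}
```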

TIP

Different ad networks may have different ideal frequencies, depending on the campaign types they offer. Therefore, it is most relevant to test your frequency through Audience Network (or one network) only if you’re using it as your primary demand source.




Measure Your Success

Most publishers conduct A/B tests to find out how to maximize revenue. For placement, creative, and frequency tests, your key performance indicators (KPIs) should be shifts in CPM, along with retention and engagement metrics.
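As a simple illustration of the CPM comparison, the sketch below computes effective CPM (revenue per 1,000 impressions) for each test arm; the placement names and 14-day totals are hypothetical sample figures, not real results.

```kotlin
// Effective CPM = revenue / impressions * 1,000.
data class ArmResult(val placementId: String, val revenueUsd: Double, val impressions: Long)

fun ecpm(result: ArmResult): Double =
    if (result.impressions == 0L) 0.0 else result.revenueUsd / result.impressions * 1_000

fun main() {
    // Hypothetical 14-day totals for the two placements under test.
    val results = listOf(
        ArmResult("PLACEMENT_A", revenueUsd = 5_400.0, impressions = 3_000_000),
        ArmResult("PLACEMENT_B", revenueUsd = 6_150.0, impressions = 3_000_000)
    )
    for (r in results) {
        println("${r.placementId}: eCPM = ${"%.2f".format(ecpm(r))} USD")
    }
}
```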

When evaluating your data, there are several important factors to keep in mind:


Volume

Do you have enough volume to generate accurate data? For example, with fewer than 200,000 impressions per placement, per day, it’s difficult to generate a consistent CPM since small changes in CTR or post-click conversions may have big effects. Audience Network uses a predictive model to stabilize performance, and it’s difficult to generate this model without a large enough sample.

Consistency

Has the volume of your inventory grown or contracted significantly during your test period?

User Mix

Have you recently completed a marketing push for new users, or has your user mix stayed consistent during the testing period?

Regional Mix

Have your user demographics, specifically geographic location, differed significantly in the time since you started the test?

Monthly Demand

Many campaigns compete for limited inventory at the end of the month, which can have the effect of driving CPMs higher. This happens most often on the final day of the month.

Seasonality

CPMs tend to be highest in Q4 and lowest in Q1. If your tests span several months, you may notice a gradual increase over time, so we recommend you conclude all tests within a short window of 1-2 months.

Actions You Should Take

Start testing one or two variables with your ads to see if you can improve performance.
Download the companion white paper to this guide: Building for Better Revenue.

Give Us Feedback

Help us improve the content in this series by filling out our brief survey.
