Using Predictive Analytics in Mobile Gaming

AppQuantum's senior data analyst Fedor Loktionov explains what predictive analytics is, how to calculate how much money your users are going to bring into the app, and how to figure out the reasonable amount of money you can spend on ads.

What is Predictive Analytics?

Predictive analytics plays a significant role for every company working with mobile apps. Whether you are a mobile marketer, a publisher, or a mobile game developer, you are driven by the same goal: to earn money from the project without wasting budget along the way.

Based on the app publishing experience AppQuantum has gained together with the performance marketing agency AdQuantum, this article explains what predictive analytics is and how it can be applied in game development.

Predictive analytics is in demand in almost any area: from financial planning and creating bots for trading on an exchange to forecasting app revenue. In this article, we will focus on the latter.

When are predictions needed? Let's say you have created a mobile app that is monetized by paid subscriptions. Each new user gets access to a 7-day trial version.

We launch an ad campaign on Facebook: traffic flows, money is spent. But real purchase data will only start coming in after 7 days, once the trial period ends for the first users. In the meantime, we will have spent a whole week running Facebook ads that may very well never pay off, and the budget will have been spent inefficiently.

This is where predictive analytics comes in. Starting from the second day, it allows us to understand how effective the UA campaign is and what we should do next. Crucial decisions such as increasing ad budget or changing ad campaign strategy become much easier to make.

What Exactly is a Prediction?

In an ideal world, any mobile marketer, developer, or publisher would like to know how each user would behave in their app. This would help them shape their marketing strategies and set marketing budgets to bring the most relevant paying users to the app.

However, it is almost impossible to accurately predict user behavior. Therefore, we predict not actions, but the total revenue from a cohort of users based on their first days and weeks in the app. The total revenue is what we get from the cohort over a period ranging from 2 months to 3 years; the exact period depends on the app itself.

The most interesting and difficult part is the calculation of the prediction itself. Forecasts can be made for any app or game, but in terms of calculation methods, mid-core and hardcore mobile games are, with rare exceptions, the hardest to predict. The lower the share of ad monetization in the app and the longer the player's lifetime, the lower the prediction accuracy will be, especially if the app is "young".

The idea of creating a fairly abstract predictive model first came to us after the release of an article describing methods for approximating Retention and LTV. Initially, we tested only one hypothesis: can the function given in the article approximate the average LTV of the project well enough? For the main game we were working on back then, the answer was: "Yes, it can."

This is the function we work with: an approximation of ARPDAU as a function of lifetime (LT).

Charts displaying the ratio of the LTV accumulated by day N to the LTV of a certain fixed day (in this case, the 90th) are what we call trends. The values corresponding to them are usually called the Payment Profile.
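
To make this concrete, here is a minimal Python sketch, with made-up numbers and hypothetical column names, of how a payment profile can be derived by normalizing a cohort's cumulative LTV to its day-90 value:

```python
import pandas as pd

# Hypothetical input: cumulative revenue per install accumulated up to day N
# for one cohort. All numbers are illustrative.
cohort = pd.DataFrame({
    "day": [1, 3, 7, 14, 30, 60, 90],
    "cum_ltv": [0.10, 0.22, 0.40, 0.65, 1.05, 1.60, 2.10],
})

ANCHOR_DAY = 90  # the fixed day the profile is normalized to

anchor_ltv = cohort.loc[cohort["day"] == ANCHOR_DAY, "cum_ltv"].iloc[0]

# Payment profile: share of day-90 LTV accumulated by day N
cohort["payment_profile"] = cohort["cum_ltv"] / anchor_ltv

print(cohort[["day", "payment_profile"]])
```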
 
Having tested our hypothesis, we realized we could put together a test version of the prediction. All it took was the following:

1. Get enough traffic on the project.
To build an initial payment profile, we need at least 1,000 installs, 75 paying users, and a cohort at least 7 days old. In reality, the first profiles were built and tested on tens of thousands of users with significantly longer lifetimes in the app.

2. Assess the significance of the sample.
There are no surprises in this step: the more data we have, the more accurate the results are and the more opportunities we have to verify them.

3. Build payment profiles based on the most recent data.
For our first prediction, we built profiles based on 1 month of data.

4. Extrapolate the payment profile to the hypothetical lifetime of the user.
For this, we use the function from the same article (a rough curve-fitting sketch appears just after this list).

5. Check the correctness of the model.

6. Recalculate the prediction on a daily basis and display the results in the dashboard.

This is what the general cycle of building a prediction looks like. Let’s move on to the details.
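
Before the details, here is a rough illustration of step 4, the extrapolation. The exact function from the referenced article is not reproduced here; the power-law form below is only an assumed stand-in, and all numbers are made up:

```python
import numpy as np
from scipy.optimize import curve_fit

# Observed cumulative LTV per install for the first days of a cohort (illustrative)
days = np.array([1, 2, 3, 5, 7, 10, 14, 21, 30], dtype=float)
cum_ltv = np.array([0.10, 0.15, 0.19, 0.26, 0.32, 0.40, 0.49, 0.62, 0.76])

# Assumed functional form (power law); the article's original function may differ.
def ltv_curve(t, a, b):
    return a * np.power(t, b)

params, _ = curve_fit(ltv_curve, days, cum_ltv, p0=(0.1, 0.5))

# Extrapolate to a hypothetical lifetime, e.g. day 180
predicted_ltv_180 = ltv_curve(180.0, *params)
print(f"Fitted params a={params[0]:.3f}, b={params[1]:.3f}")
print(f"Extrapolated LTV by day 180: ${predicted_ltv_180:.2f}")
```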

The real work with predictions begins when we prepare data at the user level. We aggregate each user's data over 24-hour intervals of their lifetime within the app.

For this we (a rough pandas sketch of these steps follows after the list):

  1. Remove unvalidated payments and determine the user's total revenue from in-game purchases.
  2. Set aside users whose payments exceed a certain threshold, since data on this group can spoil the quality of the prediction.
  3. Determine the total revenue from ad views for each day of the user's lifetime within the app, using a dedicated model.
  4. Select the cohorts on which we will build a payment profile over an interval of 15 days or more, and then keep extending it until it is complete.
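
The sketch below assumes a hypothetical user_events table with user_id, day, type, revenue, and validated columns; the whale threshold is also illustrative:

```python
import pandas as pd

# Hypothetical user-level event data: one row per payment or ad impression
events = pd.read_csv("user_events.csv")  # assumed columns: user_id, day, type, revenue, validated

# 1. Keep only validated in-app purchases and sum revenue per user per lifetime day
iap = events[(events["type"] == "iap") & (events["validated"])]
iap_daily = iap.groupby(["user_id", "day"])["revenue"].sum().rename("iap_revenue")

# 2. Set aside extreme spenders that would distort the profile (threshold is illustrative)
total_per_user = iap.groupby("user_id")["revenue"].sum()
whales = total_per_user[total_per_user > 500].index
iap_daily = iap_daily[~iap_daily.index.get_level_values("user_id").isin(whales)]

# 3. Add ad revenue per user per lifetime day
ads = events[events["type"] == "ad"]
ad_daily = ads.groupby(["user_id", "day"])["revenue"].sum().rename("ad_revenue")

# Combine both revenue streams into one daily table
daily = pd.concat([iap_daily, ad_daily], axis=1).fillna(0.0)
daily["total_revenue"] = daily["iap_revenue"] + daily["ad_revenue"]
```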

Profiles used for predicting are divided by platform, country, traffic source, and type of optimization.

 
Let's take a profile by country, for example, Great Britain. Suppose we know that the cohort earned $1,500 in its first month. According to the payment profile for UK traffic, by day 30 users have generated 30% of their lifetime monetization. Therefore, the prediction for this cohort will be $1,500 * 100% / 30% = $5,000.

Thus, to calculate the prediction, we multiply the revenue by the coefficient corresponding to the platform, country, traffic source, traffic optimization, and the cohort's age in days.
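
A tiny illustration of this calculation, using the UK example above and a made-up profile table; dividing by the 30% share is the same as multiplying by the corresponding coefficient:

```python
# payment_profile maps (platform, country, source, optimization, day) -> share of
# lifetime revenue accumulated by that day; values here are made up.
payment_profile = {
    ("ios", "GB", "facebook", "purchase", 30): 0.30,
}

def predict_cohort_ltv(revenue_to_date: float, key: tuple) -> float:
    """Divide revenue earned so far by the share of lifetime revenue expected by now."""
    share = payment_profile[key]
    return revenue_to_date / share

# The UK example from the text: $1,500 earned by day 30, 30% of lifetime monetization
print(predict_cohort_ltv(1500.0, ("ios", "GB", "facebook", "purchase", 30)))  # -> 5000.0
```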

The predictive model was built in Google Sheets on top of the Adjust API. Adjust is a mobile attribution platform that collects attribution, cost, and some behavioral metrics.

It looked like this:

And this is a screenshot of the output. 

It was a terrible system, requiring more than one man-hour per day to function properly.

It was soon replaced by another system: instead of pulling data through the API, we started receiving raw data exports directly from Adjust, storing them in ClickHouse, and processing them in Python.
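
A simplified sketch of what such a pipeline might look like; the table and column names are hypothetical, not the actual Adjust raw-data schema:

```python
from clickhouse_driver import Client  # assumes the clickhouse-driver package
import pandas as pd

client = Client(host="clickhouse.internal")  # hypothetical host

# Aggregate revenue per user per lifetime day from a hypothetical raw-events table
query = """
    SELECT user_id,
           toDate(event_time) - toDate(install_time) AS lifetime_day,
           sum(revenue_usd) AS revenue
    FROM adjust_raw_events
    WHERE event_type IN ('purchase', 'ad_revenue')
    GROUP BY user_id, lifetime_day
"""
rows = client.execute(query)
daily_revenue = pd.DataFrame(rows, columns=["user_id", "lifetime_day", "revenue"])
```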

That covers the basics of using predictions when working with mobile apps. Now let's dig further into the details.

Simplifying and Automating Predictions

Schematically, this is what the prediction-building platform looks like:

And this is what the internal dashboards look like. This is where all the work with predictions is in full swing.

LTV is one of the most important metrics in mobile marketing. It shows how much money a user will bring to the app over their entire lifetime there. LTV helps us gauge how interested a customer is in our product, and it affects income in direct proportion: the more money one user brings to the project, the higher the total income. This is why it is the metric predictions are most actively used for.

In general, predictive analytics involves lengthy analysis. The larger this analysis — both in terms of the time period and the amount of data itself — the more accurate the prediction is.

This is why an accurate LTV prediction cannot be built on the metrics of the first few days: during these days a lot of low-engagement users come into the game, those who open the app once and delete it forever. So it takes time to calculate even an approximate LTV. The same applies to Retention Rate; the two metrics are highly interrelated.

How exactly are all these calculations useful for UA managers? Let’s figure this out.
 
Working with predictions can be divided into two large blocks:
 
1. Assessment of the current traffic; 
2. Preparation for further purchase of traffic.
 
The use cases for predictive analytics will depend on the objectives of the marketing department.
 
Option number 1. We want to find out what kind of users we have acquired. Let's say 100 installs are from the USA. We need to understand what kind of users we attracted to the app and how they behave. And most importantly, what their predicted LTV is, i.e. how much money they will potentially bring to the app.
 
In this case, we indicate the optimization that we set up when buying traffic, enter our data into the prediction system, and see which campaigns brought us revenue and which ones lost money.
 
How it looks in real life: we take user-level data for any Facebook ad campaign, grouped by platform/traffic source/campaign/country, and load it into our BI (Business Intelligence) tool; we use Tableau.
 
Now we can see all metrics for the campaign. The most important metric here is ROI. It makes it clear what profit we get from attracted users and whether they pay off at all. That is, their LTV should be higher than Cost per Acquisition (CPA, not to be confused with Cost per Action).
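
For illustration, a minimal ROI check along these lines; the exact formula the team uses is not spelled out in the article, so this sketch assumes the common "return over spend" definition:

```python
def campaign_roi(predicted_ltv: float, cpa: float) -> float:
    """ROI as return over spend; positive means the cohort is expected to pay off."""
    return (predicted_ltv - cpa) / cpa

# Illustrative numbers: $6.20 predicted LTV against a $4.00 cost per acquisition
print(f"{campaign_roi(6.20, 4.00):.0%}")  # -> 55%
```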
 
Suppose our campaign is performing remarkably well; now it is important to analyze why, so we can use this information in the future. We go to our predictive analytics platform and see that this or that metric is good and the Retention Rate is high.
 
If the campaign is not going so smoothly, we can also figure out why. Maybe we chose the wrong GEO or campaign optimization. You can also check if all creatives are performing well, as it often happens that a creative attracts an irrelevant or overly expensive audience.
 
All of this information, which is extremely important for a mobile marketer, can be obtained simply by analyzing traffic through our prediction.
 

Option number 2. We want to get ready for further ad campaigns. The algorithm is much more complicated here.
 
During our advertising campaign, we collect a huge amount of data. We can channel it in the right direction and use it for the second important predictive function: preparation for further purchases. Here we use the prediction more for its intended purpose, forecasting metrics. The global task is simple: to buy traffic with an install price lower than LTV.
 
Using the predictions, we find out how much money the user will bring to the app, what kind of ads they will watch, and how many purchases are expected to be made. Based on this data, we will be able to properly optimize purchasing at the source level.
 
Using the filter settings in the prediction system, we select the type of campaign optimization and GEO. Let's say we are going to launch an ad campaign optimized for purchases in the USA.
 
By calculating the prediction, we see that a campaign with this optimization and in this particular country will give us a user LTV of $10. We use this information to set ourselves the appropriate KPI. That is, we have to buy traffic at $10 per user or lower so that there are no losses. At the same time, we understand that users bring us $10 only if they make a certain number of purchases, so we include this metric in our KPIs as well.
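
A tiny sketch of turning a predicted LTV into a bid ceiling; the optional margin parameter is an illustrative addition, not part of the KPI described above:

```python
def max_bid(predicted_ltv: float, target_margin: float = 0.0) -> float:
    """Highest acceptable cost per install given the predicted LTV and a desired margin."""
    return predicted_ltv * (1.0 - target_margin)

print(max_bid(10.0))        # break-even: pay at most $10 per user
print(max_bid(10.0, 0.30))  # keep a 30% margin: pay at most $7 per user
```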
 
The prediction works in two directions: it allows you both to thoroughly evaluate traffic purchases for each campaign separately and to get an aggregate macro picture. When we buy traffic for $700K, it makes no sense to analyze every campaign individually; the amount is too large to micromanage. We turn to predictions and, voilà, in a couple of clicks we are already macro-managing our purchase.
 
Predictions not only help us estimate how much money a user will bring to the app. We can also understand how much they have already brought given their current metrics and how much we will earn per user over a specific period, for example, in 1 day. This allows us to work with the purchase accurately and analyze the campaign down to the smallest detail. For example, we can identify ads that attract a certain type of person and figure out whether these people are a) the ones we need and b) the ones who pay off.

Conclusion

It can be difficult to choose the correct set of factors that affect the prediction accuracy. But that’s just for starters. Perhaps the biggest pain is data quality and sample size. It can be extremely difficult to collect high-quality data and isolate the most relevant data to build a prediction. But this problem is solvable. For this we use:

  • our payment validation mechanism;
  • antifraud from AppsFlyer;
  • models for calculating data that Facebook does not give at the user level.

Predictive analytics makes traffic scaling easy, with a full understanding of how efficient our ad spend is at this very second, here and now.

Simply put, we save money, time, and nerves, and we reduce the chances of error. That is why many companies are now willing to invest in predictive analytics. If you do not yet use predictions when working with a mobile app or game, we recommend considering implementing them in your business.