How to Recognise a Hit: AppQuantum’s Insight

We frequently share new case studies about our successful game launches, where we walk through the timeline of events and show how AppQuantum gets such great results. However, many things have remained off-screen: our approach, the work of each specific team, and the decision-making processes.

In this article, we will tell you how we approach choosing partners and how this process evolves over time: with changes in the industry, with structural shifts in our company, and with the growing experience of our team.

This text should be especially helpful for developers who want to evaluate their games using our approach and learn more about how publishers make decisions in general.

Introduction

From the outside, choosing a project for publishing may look like a conveyor belt: you take a game, run it through a test and look at the KPIs (if they are lovely, you take it; if they are bad, you leave it be). In reality, there are tons of different nuances under the hood.

Since the start of our publishing business, we have tried different approaches: we looked at KPIs and compared them with benchmarks, decided to look only at ROAS, went back to benchmarks again, and so on.

As a result, we’ve come to a conclusion that… we have no unified standard for tests. It would be more accurate to say that we have a common pattern for all games: integrating specific SDKs and testing in pre-selected GEOs. But the analysis of the results always differs and depends on the project. We consider the stage of development, the relevance for the market, and the developer’s team itself (whether they are ready to make the game better, and whether it is convenient and pleasant to communicate and work with them).

Sometimes two similar projects with identical metrics have very different destinies. One we will take for fine-tuning because the development team is very enthusiastic about their game, delivers everything on time and communicates well. The other we will have to decline because working with its team would be overly cumbersome.

That’s why we can’t tell you the one true and beautiful way of testing, no matter how hard we try. Everything is decided on an individual basis. But we can explain the decision-making process at AppQuantum and show you the path projects take to get published. These insights alone are helpful and informative. Let’s begin!

Initial consideration

New projects that can be considered for publishing reach AppQuantum in two primary ways: developers contact us directly, or our BizDevs find them in app stores, specialised analytics services and selection lists (those games are considered “cold”).

A BizDev, or Business Development Manager, is the specialist whose duties are to look for new projects, select them and communicate with the developers. The primary objective of these employees is to establish contact with the development team and explain in detail our advantages as a publisher, how the processes in our company work and what services we provide. After that, BizDev managers help the team through all the stages of project selection and testing, answering any questions from the developers and providing the project and the team with whatever assistance they need. After successful testing, BizDev managers don’t stop working: establishing a legal relationship between the publisher and the developers still lies ahead :) Answering any questions regarding the contract, discussing the terms of the agreement, all this stuff. In other words, helping by all means.

As soon as the publishing relationship is legally established, we smoothly transfer communication to the producer working on the project. At this stage, we create operational chats and plan weekly meetings. After the producer finally takes over all communication, the BizDev manager stops almost all work on the game.

In many companies, that’s where the role of the BizDev manager ends. Still, even at AppQuantum’s current stage of development, we understand that for the most efficient work, our managers need to dig deeper into the product than is usually seen on the market.

Thanks to our well-tuned process of searching for projects and our success cases, we receive a lot of leads with different levels of quality, development progress and readiness for launch and testing. Mechanically testing every project would cost us tons of resources and would also be harmful to the developers: if their game is not even ready for testing, they would just waste their time.

That’s why our BizDevs evaluate the product and the team against several criteria, even at the very early stages of development:

- Quality of the product — comparing it with competitors and, if necessary, analysing the whole chosen niche;
- Level of completeness — does the game have a tutorial, English localisation (the first tests run in English-speaking countries) and enough content, and is the gameplay free of apparent mistakes and shortcomings;
- Monetisation — errors in showing ads, inadequate offer prices, etc.;
- Product relevance — whether the game mechanics are on trend and whether the FTUE (first-time user experience) meets modern standards;
- The team and its experience — whether there are enough people on the team to tune the project and support it afterwards, and whether they have experience in the chosen genre;
- Developer’s commitment — whether the team really wants to work on this game, believes in it and has ideas for the project’s growth.

Generally, an analysis like this takes hours because of its complexity. You have to study the niche itself and its trends, select the most relevant and successful projects and compare them with the game being considered for publishing. Collecting the information and analysing the gameplay can take a lot of time. But this work is essential for evaluating how promising the project is, and it can save the resources that would otherwise be spent testing dead-end products.

If such an analysis shows that the project is completely unviable on the market, tests usually won’t happen at all. In that case, the BizDev manager tells the developers what is wrong with the game, how they can make it better and whether there is any point in trying to improve it. Sometimes, the resources you would spend on refining equal those of making a new product from scratch with the experience from previous mistakes. And you should be emotionally ready for this from the beginning.

In such cases, our team can offer to co-develop the game, matching the mechanics and setting to those with high chances of success.

It is essential to evaluate and consult the development team. Do they really understand the scale of the required refinements? Can they manage them with their current workforce? If not, what is the minimum they can do?

Collecting information about the team is one of the most important stages of preparing for the test. It often happens that precisely the team’s enthusiasm, its willingness to work on and improve the product, and its readiness to implement the publisher’s advice become the determining factors in deciding the project’s future.

If needed, BizDevs consult with producers. In most cases, producers come into play after the game’s first test, when we have metrics and can give feedback based on them. But for complex projects, new niches or highly promising games, we get producers involved before the test results come in.

Thus, even before the test, our employees are already deeply involved in the work with the developers, their project and their niche. Years of our team’s experience have given us a knowledge base for lots of niches and genres: the results of previous tests, analyses of different niches over time, competitor deconstructions.

And if we are facing something new to us, the process may take a little longer: we may need several days to study the product and the market and to carry out the analysis. But as we said earlier, this approach is still more efficient in terms of both the publisher’s and the developer’s resources.

Getting into the product

If the BizDevs aren’t sure they can evaluate the project correctly by themselves, we bring the producers’ and game designers’ teams into the process.

BizDevs hand all the collected information about the game and the team to their colleagues, who study the product in detail. After that, the decision is in the producers’ hands. Even in cases of uncertainty, we can give the “green light” if a person from our team believes in its success and is ready to invest their energy and time in the project.

Sometimes the opposite situation happens. The game is good, the team is excellent, and even the metrics are solid, but if no one at our publishing house is ready to take the lead, we may decline the project. Experimenting at the developer’s cost and expense is a lousy basis for building a working relationship.

Usually, after such a two-sided evaluation, we see some prospects go directly to the test. But sometimes, the analysis leads our team to suggestions for making the game better; in most cases, they concern the tutorial, UI/UX and bugs. Then we send the developer a list of improvements. Sometimes such suggestions are only recommendations: you can make something better, but you can also launch the game in its current state. But in the case of severe problems that hurt the metrics, we insist on fixes before the test.

To some, this may sound like an overly long process and an unnecessary expense of energy on a project before getting even the first metrics. But in our opinion, it is more important to help the development team tune the project and point out existing mistakes they can get rid of than to just send the game to the test, get misleading metrics and miss a potential hit.

An excellent example from our experience is the game Idle Light City. The developers came to us with metrics from another publisher, and they were unsatisfying: the CPI was too high, and users were not really watching the ads. The project wasn’t paying off.

Before starting the test, we suggested that the developers optimise their current ad placements and add several new ones. The test showed excellent results. Yes, the CPI was the same, but the ad metrics improved, making the project profitable. The subsequent work on ad optimisation, placements, in-app monetisation, competent traffic buying and the search for creatives gained us 8,000,000 installs in ten months, and we worked with this game for a long time and very efficiently. You can find more details on our work with Idle Light City in the success case on our website.
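For readers who want the arithmetic behind such a turnaround, here is a minimal sketch of the unit economics involved. The numbers are purely hypothetical, not Idle Light City’s actual figures: with CPI unchanged, a project flips to profitable once the revenue an average install generates over its lifetime (LTV) rises above that CPI.

```python
# Minimal sketch of the unit economics; all numbers are illustrative
# assumptions, not Idle Light City's actual figures.

def ltv(arpdau: float, retention: list[float]) -> float:
    """Rough lifetime value per install: ARPDAU times the expected
    number of active days (day 0 plus the sum of daily retention)."""
    expected_active_days = 1 + sum(retention)
    return arpdau * expected_active_days

cpi = 0.50                                  # cost per install, unchanged by the ad rework
curve = [0.40, 0.25, 0.15, 0.08]            # hypothetical truncated retention curve

before = ltv(arpdau=0.09, retention=curve)  # weak ad placements
after = ltv(arpdau=0.32, retention=curve)   # optimised and newly added placements

print(f"LTV before: ${before:.2f} vs CPI ${cpi:.2f} -> losing money")
print(f"LTV after:  ${after:.2f} vs CPI ${cpi:.2f} -> profitable")
```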

Testing

Our tests can be roughly divided into three categories:

- Retention test or screening test;
- Monetisation test;
- ROAS test (or an extended test, as we call it).

Ideally, the test's structure should look like this: 

First, we conduct the retention test. We ask the developers to implement the Facebook and AppsFlyer SDKs with the minimum required events. While the developer integrates all the necessary SDKs, we begin preparing the gameplay creatives. Once everything is ready, we start the test. Here we look at the product and marketing metrics of the game. The most interesting for us are day 1, day 3 and day 7 retention, average playtime, the tutorial completion funnel, player progress and CPI. After the test, we download and analyse the data.
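To make this concrete, here is a minimal sketch of how such exported data can be analysed. The data layout is an assumption for illustration, not our actual pipeline: we treat the export as install dates plus the days on which each user opened the game.

```python
# A hedged sketch: compute day-N retention from exported test data.
# The data layout below is illustrative, not an actual SDK export format.
from datetime import date

def retention(installs: dict[str, date], opens: set[tuple[str, int]], day: int) -> float:
    """Share of installed users who opened the game exactly `day` days after install."""
    retained = sum(
        1 for user, installed in installs.items()
        if (user, installed.toordinal() + day) in opens
    )
    return retained / len(installs)

# installs: user_id -> install date; opens: (user_id, ordinal day) pairs
installs = {"u1": date(2022, 3, 1), "u2": date(2022, 3, 1)}
opens = {("u1", date(2022, 3, 2).toordinal()),
         ("u1", date(2022, 3, 8).toordinal())}

for d in (1, 3, 7):
    print(f"D{d} retention: {retention(installs, opens, d):.0%}")
```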

At this stage, it is essential to understand whether the game is interesting to players. For this purpose, you need at least one of two parameters: retention or the average playtime of a session. If these figures are relatively high, we can say that there is interest. If we are sure we can help the developer make the product better, we can offer some teams cooperative tuning of the game, and publishing, already at this stage.

But if we have any doubts, or the metrics don’t meet the benchmarks, we can offer some recommendations and either repeat the retention test or move on to the monetisation test right away.

However, there is always the “producer’s factor”. We upload all test results to a shared channel in the corporate messenger, and producers look through them. We have had several situations where projects with relatively low metrics were taken into the refining stage because our producers liked them greatly and saw potential there.

Besides that, there is also the “cutie factor”. If the developer’s team is eager to work, ready to invest their energy and time in the game, and manages to win the BizDev’s heart, it sometimes happens that the BizDev manager pushes the producers a little to give their recommendations and helps the devs get through the selection process.

The main question most developers have is: “What KPIs do you look at during the tests?” If you have read the article this far, you already know that KPIs are not strict guidelines for us. But it would also be wrong to say that we don’t look at the numbers at all. We have these benchmarks for casual projects:

- For prototypes: >30% day 1 retention;
- For MVP and higher: >40% day 1 retention, >15% day 7 retention, >5% day 14 retention.

Nevertheless, even games that fall below these benchmarks can go further if we see potential in them.
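If you want to check your own game against these thresholds, a first-pass filter could be encoded like the sketch below. The stage labels are illustrative, and as described above, the real decision is never this mechanical.

```python
# A hedged sketch of the benchmarks quoted above as a first-pass filter.
BENCHMARKS = {
    "prototype": {1: 0.30},
    "mvp_and_higher": {1: 0.40, 7: 0.15, 14: 0.05},
}

def meets_benchmarks(stage: str, retention: dict[int, float]) -> bool:
    """True if every benchmarked day clears its threshold."""
    return all(retention.get(day, 0.0) > limit
               for day, limit in BENCHMARKS[stage].items())

print(meets_benchmarks("mvp_and_higher", {1: 0.43, 7: 0.18, 14: 0.06}))  # True
print(meets_benchmarks("prototype", {1: 0.27}))                          # False
```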

The monetisation test starts when the project has honed core gameplay, enough content, a fleshed-out meta (additional mechanics that make the main gameplay more complex and exciting) and monetisation. Here we track more events related to watching ads and in-app purchases. By the way, for these calculations, we use our own approximation model, built on the example of our user acquisition prediction model.
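Our model itself is proprietary, but to give a rough idea of what such an approximation can look like, here is a simplified, textbook-style sketch: fit a power-law curve to the measured retention points, then estimate LTV as ARPDAU (average revenue per daily active user) times the projected number of active days. All numbers are hypothetical.

```python
# A simplified, generic payback approximation; not AppQuantum's actual model.
import math

def fit_power_law(points: dict[int, float]) -> tuple[float, float]:
    """Least-squares fit of r(d) = a * d**(-b) in log-log space."""
    xs = [math.log(d) for d in points]
    ys = [math.log(r) for r in points.values()]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    return math.exp(mean_y - slope * mean_x), -slope  # (a, b)

measured = {1: 0.40, 3: 0.25, 7: 0.15}       # retention measured in the test
a, b = fit_power_law(measured)

horizon = 180                                # projection horizon in days
active_days = 1 + sum(a * d ** (-b) for d in range(1, horizon))
ltv = 0.20 * active_days                     # assumed ARPDAU of $0.20
print(f"r(d) ~ {a:.2f} * d^-{b:.2f}; projected LTV: ${ltv:.2f}")
```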

If AppQuantum gets a project with a high level of completion, we start straight with the monetisation test. For games signed at the previous stage, this test is held as a standard procedure on the way to a soft launch and then a worldwide launch.

The ROAS test (or extended test) is used in several cases:

- Traffic for the game has been bought for quite a long period of time;
- The project has a large player base;
- There are still some doubts about the game’s payback.

In all these cases, we analyse whether there are any points of growth and whether we can help tune the game competently so that traffic buying attracts users effectively.
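For reference, the core metric of such a test follows the standard definition: ROAS on day d is the cumulative revenue a user acquisition cohort has generated by day d, divided by the spend on acquiring it. A minimal sketch with made-up numbers:

```python
# Day-indexed ROAS curve for one acquisition cohort; numbers are made up.

def roas(spend: float, cumulative_revenue: dict[int, float]) -> dict[int, float]:
    """ROAS by day: cohort revenue to date divided by acquisition spend."""
    return {day: revenue / spend for day, revenue in cumulative_revenue.items()}

cohort_spend = 1_000.0                       # UA spend on the cohort, USD
revenue_by_day = {3: 180.0, 7: 420.0, 30: 1_150.0}

for day, value in roas(cohort_spend, revenue_by_day).items():
    print(f"D{day} ROAS: {value:.0%}")       # above 100% means the cohort has paid back
```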

Sometimes, producers also analyse the project before this test and write their recommendations. In some cases, the game is also moved to the publisher’s ad mediation to get the most trustworthy evaluation of the results.

Such tests mainly differ from the others in scale. We launch more varied ad campaigns, test network connections and optimisation, and cycle through various approaches in creatives. The whole process may take more than a month. However, the result is very accurate: the project is evaluated “in action”, connected to our internal analytics and assessed the same way as the projects in our full operation. That’s why the results of such tests are highly trustworthy.

For example, this is how we tested the project Idle Space Farmer, which was actively buying traffic and already had several hundred thousand users when it came to AppQuantum. Nevertheless, the developers had hit a wall and could scale no further. Our tests showed that the game’s potential was higher with the right strategy and a wide variety of creatives.

The opposite happens as well. With one of the projects, the test showed that our results were pretty similar to what the developers had achieved themselves, and the work on the product brought only a slight improvement. In such a situation, it is more profitable for the developers to self-publish their game. The publisher’s goal is to achieve results where, even under a profit-share model, the developers earn more than they would without our participation.

For most projects, the testing stage is mandatory, but there are always exceptions. That happened, for example, with Idle Evil Clicker. When we first met Red Machine, their project was already profitable, but we saw great potential for marketing scaling.

We skipped the default testing stage and moved straight to signing because there were no questions about the project’s payback.

In practice, games rarely pass through all three stages of testing. Depending on the genre, the niche and the project’s level of completion, we can skip some steps or, on the contrary, add new ones.

But sometimes, even with such a long selection process, setbacks happen. We had a case where all the specialists involved evaluated the project as promising, and it successfully passed all the tests. Still, at the scaling stage, we saw that the tests’ successful results had been sporadic. Unfortunately, there was no possibility of scaling, and we parted ways with the developers.

Scaling

That case of misjudging a game showed us that we need to test scaling possibilities even when there has been no active traffic buying for the project.

The first game on which we tested such an approach was Gold and Goblins. We spotted G&G at the early stages of development, and it was rather hard to evaluate its payback prospects because many vital mechanics had not been implemented yet.

However, the developers were sure they knew how to scale their game, so we agreed on phased testing alongside tuning up the product. We share all the details in our Gold and Goblins case study.

But not everything comes down to numbers. At the stage of any of these tests, we always involve the marketing team and the producer assigned to the specific project. Three sides always take part in the final decision on the project’s future: the BizDev, the producers and the marketing team. Everyone makes their own contribution to the discussion. BizDevs give details about the team, their involvement in the project, future prospects and plans for scaling their company and product. The User Acquisition team evaluates the marketing prospects, and the producer shares their vision of the project and gives the final verdict (whether the producer is ready to work on this game or not).

Conclusion

Selecting a potential hit is a very long and complicated process. Many employees from different departments are involved in this work: BizDevs, producers, marketers, the creative team, QA and release managers.

Each game goes through a multi-stage selection process: we evaluate its relevance for the market and its niche, and its correspondence with current trends. The product is deconstructed to its foundation and deeply analysed: whether the gameplay is exciting and understandable, whether the mechanics are nicely done, how the monetisation is set up, how many bugs the project has and how high its development quality is.

Before testing a game, AppQuantum often offers recommendations for improvements to avoid a situation where the game shows poor metrics because of apparent mistakes.

Tests are conducted in several stages. They are commonly changed or skipped depending on the project, its genre, its level of completeness, and the presence and results of previous traffic buying.

Once the results look good, the producer (who will be responsible for scaling and developing the game further) still has to approve the project.

Do you believe you have a potential hit? Then show it to us!