Assess prediction

A comparison of facts and forecasts helps you assess predictions. You can build a report in the MyTracker interface and calculate the weighted average error as described below.

We recommend assessing the dimensions you usually work with. This way you get an accurate picture of the decisions you have made, find dimensions with serious fluctuations, and learn the weak spots of the predictive models.

Comparison report

Build a comparison report in MyTracker:

  1. In the Builder, select a report period for which you have factual data.
  2. Add predictive and factual metrics, for example, LTV Prediction 1m and LTV 1m, and click Calculate to build the report.
  3. Open the chart using the Show chart button above the report. Select a metric for comparison by clicking it in the report table header: LTV Prediction 1m to compare against LTV 1m, and vice versa.

Add the Revenue type dimension or filter to assess the LTV forecast separately for purchases, subscriptions, in-app ads, and custom revenue.
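
If you prefer to check the comparison offline, the following is a minimal pandas sketch, assuming you have exported the report to CSV. The file name and the column names revenue_type, ltv_prediction_1m, and ltv_1m are hypothetical placeholders for the headers in your own export.

```python
# A minimal sketch of an offline facts-vs-forecast comparison.
# Assumptions: the report is exported to CSV; the file name and the
# column names below are placeholders for the headers in your export.
import pandas as pd

report = pd.read_csv("mytracker_report.csv")  # hypothetical export file

# Sum the predicted and factual 1-month LTV per revenue type,
# then compute the relative deviation of the forecast from the fact.
by_type = report.groupby("revenue_type")[["ltv_prediction_1m", "ltv_1m"]].sum()
by_type["deviation"] = by_type["ltv_prediction_1m"] / by_type["ltv_1m"] - 1

print(by_type)
```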

Calculate the weighted average error for LTV prediction

LTV prediction models can operate on unreliable data: game and app mechanics change often, which influences user behavior and leads to prediction errors. But knowing the historical error, you can be more confident in the prediction and choose the forecast horizon that fits your current strategy.

Assess LTV Prediction in the Revenue type dimension to calculate the weighted average error for IAP LTV (revenue from purchases), Subscription LTV, and Ads LTV individually.

Method

  1. Select a time frame for calculating the error. It should be a period for which you have both facts and predictions.
  2. Divide all installs for the selected period into cohorts.
  3. For IAP LTV (revenue from purchases):
    • Project+Date+Partner
    • Project+Date+Country+Partner
    • Project+Date
    • Project+Month
    • Project+Date+Campaign (additionally, group small campaigns into cohorts: build the payment allocation for day 8 over the selected period and form three groups: <50%, 50-75%, and 75-100%)
    For Ads LTV (revenue from in-app impressions):
    • Project+Date
    • Project+Month
    • Project+Date+Ad network (VK, Yandex Direct, etc.)
    • Project+Date+Country
    • Project+Date+Campaign
    For Subscription LTV:
    • Project+Date
    • Project+Month
    • Project+Date+Country
    • Project+Month+Country
    • Project+Month+Traffic type

  4. Calculate the sum of predictions for 1, 2, 3, and 6 months, and for 1 and 2 years, individually for the selected cohorts.
  5. Example for the Project+Date cohorts:
    • LTV Prediction 1m: Project1Date1 + ... + Project1DateN
    • LTV Prediction 2m: Project1Date1 + ... + Project1DateN
    • LTV Prediction 3m: Project1Date1 + ... + Project1DateN
    • LTV Prediction 6m: Project1Date1 + ... + Project1DateN
    • LTV Prediction 1y: Project1Date1 + ... + Project1DateN
    • LTV Prediction 2y: Project1Date1 + ... + Project1DateN
    • LTV Prediction 1m: Project2Date1 + ... + Project2DateN
    • ...

  6. Calculate the sum of factual data for 1, 2, 3, and 6 months, and for 1 and 2 years, individually for the selected cohorts.
  7. Example for the Project+Date cohorts:
    • LTV 1m: Project1Date1 + ... + Project1DateN
    • LTV 2m: Project1Date1 + ... + Project1DateN
    • LTV 3m: Project1Date1 + ... + Project1DateN
    • LTV 6m: Project1Date1 + ... + Project1DateN
    • LTV 1y: Project1Date1 + ... + Project1DateN
    • LTV 2y: Project1Date1 + ... + Project1DateN
    • LTV 1m: Project2Date1 + ... + Project2DateN
    • ...

  8. Calculate the error for each cohort for 1, 2, 3, and 6 months, and for 1 and 2 years, individually:
    Cohort error = |(sum_of_prediction + $1) / (sum_of_factual_data + $1) - 1|,
    where $1 is added to avoid division by zero.
  9. Example for the Project+Date cohorts:
    • Error for Project1 = |(LTV Prediction 1m + $1) / (LTV 1m + $1) - 1|
    • Error for Project2 = |(LTV Prediction 1m + $1) / (LTV 1m + $1) - 1|
    • ...
    • Error for Project1 = |(LTV Prediction 2m + $1) / (LTV 2m + $1) - 1|
    • Error for Project2 = |(LTV Prediction 2m + $1) / (LTV 2m + $1) - 1|
    • ...

  10. Calculate the weighted average error for each cohort split for 1, 2, 3, and 6 months, and for 1 and 2 years, individually (see the sketch after this list):
    Weighted average error = (Error_for_cohort_1 * Sum_of_factual_data_for_cohort_1 + ... + Error_for_cohort_N * Sum_of_factual_data_for_cohort_N) / Sum_of_factual_data_for_all_cohorts
  11. Example for the Project+Date cohorts:
    • Weighted average error = (Error_for_Project1 * LTV_1m_for_Project1 + ... + Error_for_ProjectN * LTV_1m_for_ProjectN) / (LTV_1m_for_Project1 + ... + LTV_1m_for_ProjectN)
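
To make steps 8-10 concrete, here is a sketch in Python with pandas. It assumes you already have one row per Project+Date cohort holding the sums from steps 4 and 6; the cohort names, column names, and figures are made up for illustration.

```python
# A sketch of steps 8-10 for one horizon (1 month). One row per cohort;
# ltv_prediction_1m and ltv_1m hold the sums from steps 4 and 6 (in $).
# Cohort names and figures are made up for illustration.
import pandas as pd

cohorts = pd.DataFrame({
    "cohort":            ["Project1", "Project2", "Project3"],
    "ltv_prediction_1m": [1200.0, 540.0, 0.0],
    "ltv_1m":            [1000.0, 600.0, 0.0],
})

# Step 8: cohort error = |(prediction + $1) / (fact + $1) - 1|.
# The extra $1 keeps cohorts with zero revenue from dividing by zero.
cohorts["error"] = ((cohorts["ltv_prediction_1m"] + 1)
                    / (cohorts["ltv_1m"] + 1) - 1).abs()

# Step 10: weight each cohort's error by its factual revenue.
weighted_avg_error = ((cohorts["error"] * cohorts["ltv_1m"]).sum()
                      / cohorts["ltv_1m"].sum())

print(f"Weighted average error: {weighted_avg_error:.1%}")  # ≈ 16.2%
```

Repeat the same calculation for each horizon (2m, 3m, 6m, 1y, 2y) and for each cohort split to obtain tables like the examples below.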

Result

With the weighted average error, you can assess the prediction quality and decide whether to use it, keeping the model restrictions in mind. If the errors are large, send the results, your sales data, and information about updates to our support team for model adjustment.

Example of the weighted average error for IAP LTV Prediction

| Cohort | IAP LTV 1m | IAP LTV 2m | IAP LTV 3m | IAP LTV 6m | IAP LTV 1y | IAP LTV 2y |
|---|---|---|---|---|---|---|
| Project + Date + Partner | 11.5% | 14.1% | 15.8% | 20.6% | 29.7% | 40.3% |
| Project + Date + Campaign | 16.3% | 20.1% | 22.7% | 28.6% | 35.3% | 42.2% |
| Project + Date + Country + Partner | 19.9% | 24.2% | 28.9% | 34.2% | 36.2% | 48.4% |
| Project + Date | 10.4% | 12.6% | 14.2% | 18.7% | 24.8% | 32.1% |
| Project + Month | 5.7% | 8.4% | 10.2% | 14.3% | 22.5% | 22.9% |

Example of the weighted average error for Ads LTV Prediction

| Cohort | Ads LTV 1m | Ads LTV 2m | Ads LTV 3m | Ads LTV 6m | Ads LTV 1y | Ads LTV 2y |
|---|---|---|---|---|---|---|
| Project + Date | 11.5% | 17.1% | 18.8% | 22.2% | 30.3% | 37.4% |
| Project + Month | 8.8% | 13.2% | 17.3% | 18.8% | 26.2% | 38.1% |
| Project + Date + Ad network | 13.5% | 20.5% | 20.7% | 26.0% | 36.8% | 45.2% |
| Project + Date + Country | 13.5% | 18.7% | 19.3% | 24.9% | 37.6% | 44.9% |
| Project + Date + Campaign | 12.4% | 19.2% | 20.1% | 23.9% | 32.9% | 40.0% |

Example of the weighted average error for Subscription LTV Prediction

| Cohort | Subscription LTV 1m | Subscription LTV 2m | Subscription LTV 3m | Subscription LTV 6m | Subscription LTV 1y | Subscription LTV 2y |
|---|---|---|---|---|---|---|
| Project + Date | 13.1% | 18.4% | 20.2% | 21.9% | 31.8% | 38.5% |
| Project + Month | 6.2% | 9.7% | 11.8% | 16.5% | 20.1% | 25.2% |
| Project + Date + Country | 16.4% | 20.2% | 22.7% | 30.4% | 35.0% | 41.3% |
| Project + Month + Country | 11.8% | 19.0% | 21.6% | 24.8% | 28.5% | 29.2% |
| Project + Month + Traffic type | 8.3% | 13.6% | 13.8% | 17.4% | 21.2% | 27.1% |
