PlanIQ uses a set of advanced metrics to assess the quality of your forecast model.

Read on to find out how each metric works and what criteria decide whether the quality of a forecast model is high, medium, or low. The better the quality, the more accurate the predictions.

The mean absolute percentage error (MAPE) is the average of the individual absolute forecast errors, each divided by the actual value for its period. It's an accuracy measure based on the relative percentage of errors. The closer the MAPE value is to zero, the better the predictions.

If the error is greater than the actual value, then the percentage of error can be more than 100%.
For example, if the actual value is 1 and the forecast is 3, the forecast error is 2. The error is greater than the actual value, so the MAPE result is greater than 100%.

Because MAPE expresses the size of the error as a percentage rather than in actual units, you can use it to compare forecast errors across datasets of different sizes and different time scales.
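
Here's a minimal sketch of the calculation in Python (the mape function and sample values are illustrative, not PlanIQ's internal implementation):

    import numpy as np

    def mape(actual, forecast):
        """Mean absolute percentage error, returned as a percentage.

        Assumes no actual value is zero, since that would mean dividing by zero.
        """
        actual = np.asarray(actual, dtype=float)
        forecast = np.asarray(forecast, dtype=float)
        return np.mean(np.abs((actual - forecast) / actual)) * 100

    # The example above: actual value 1, forecast 3, error 2 -> MAPE of 200%.
    print(mape([1], [3]))  # 200.0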

The root-mean-square error (RMSE) is often used to measure the differences between the values predicted by the forecast model and the actual values.

RMSE uses the squared values of the forecast errors, so large errors are penalized more heavily. This helps you identify the impact of outliers. RMSE is a good metric for use cases where acting on a few badly incorrect predictions can be very costly.

A lower RMSE value indicates more accurate predictions.
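
To illustrate how squaring magnifies large errors, here's a short Python sketch (the rmse function and numbers are illustrative):

    import numpy as np

    def rmse(actual, forecast):
        """Root-mean-square error: square each error, average, then take the root."""
        actual = np.asarray(actual, dtype=float)
        forecast = np.asarray(forecast, dtype=float)
        return np.sqrt(np.mean((actual - forecast) ** 2))

    # One large error dominates the result because errors are squared.
    print(rmse([10, 10, 10], [11, 9, 20]))  # ~5.83
    print(rmse([10, 10, 10], [14, 6, 14]))  # 4.0

Both forecasts have the same mean absolute error of 4, but the single large error in the first one raises its RMSE.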

The mean absolute error (MAE) measures the average magnitude of the forecast errors in a set of predictions, without considering their direction.

Direction refers to whether the forecast overpredicts or underpredicts.

MAE is the average absolute difference between the forecast values and the actual values. Mean absolute error is a scale-dependent metric.

MAE cannot be used to make comparisons between series that involve different units. 

For example, MAE cannot compare a SKU* that sold 100 units per month with another SKU that sold 1000 units per month.

You can use MAE as a relative measure to compare forecasts for the same series across different models. A lower MAE value represents a more accurate model. 

*SKU is a stock keeping unit. 
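
A minimal Python sketch of the calculation (the mae function and values are illustrative):

    import numpy as np

    def mae(actual, forecast):
        """Mean absolute error: the average absolute difference, ignoring direction."""
        actual = np.asarray(actual, dtype=float)
        forecast = np.asarray(forecast, dtype=float)
        return np.mean(np.abs(actual - forecast))

    # Over- and under-forecasts of the same size contribute equally.
    print(mae([100, 100], [110, 90]))  # 10.0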

The mean arctangent absolute percentage error (MAAPE) is a measure of forecast accuracy that stays meaningful when actual values are zero or close to zero. You can use MAAPE to compare forecast performance between different data series.

Use MAAPE to evaluate intermittent demand forecasts. That is, forecasts for irregular levels of demand. MAAPE can be particularly useful when extremely large errors can occur because of mistaken or incorrect observations. A lower MAAPE value indicates a more accurate model. 
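
A minimal Python sketch (the maape function is illustrative; it assumes the forecast is nonzero wherever the actual value is zero):

    import numpy as np

    def maape(actual, forecast):
        """Mean arctangent absolute percentage error, in radians (0 to pi/2).

        arctan maps each percentage error into a bounded range, so a zero
        actual value yields pi/2 instead of an infinite error.
        """
        actual = np.asarray(actual, dtype=float)
        forecast = np.asarray(forecast, dtype=float)
        with np.errstate(divide="ignore"):
            ape = np.abs((actual - forecast) / actual)  # inf where actual == 0
        return np.mean(np.arctan(ape))

    # A period with a zero actual value stays well defined, unlike MAPE.
    print(maape([0, 10], [3, 12]))  # ~0.88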

The mean absolute scaled error (MASE) is a measure of the accuracy of your forecasts. It's the mean absolute error of the forecast values, divided by the mean absolute error of the naive forecast, where the naive forecast uses a previous value or an average of several previous points.

The mean absolute scaled error is easy to interpret. Values greater than one indicate that in-sample one-step forecasts from the naive method perform better than the forecast values of the model you're evaluating.

MASE is a scale-free error metric. You can use it to compare forecast methods on a single series and also to compare forecast accuracy between series. This metric is well suited to intermittent-demand series. 

MASE is a very good metric to use unless all of the historical observations are equal. If they're equal, the observations form a flat line, the naive forecast error is zero, and the MASE denominator is zero, so the result is infinite or undefined.
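
A minimal Python sketch (the mase function is illustrative; it scales by a one-step naive forecast over the history, one common formulation):

    import numpy as np

    def mase(actual, forecast, history):
        """Mean absolute scaled error.

        Divides the forecast MAE by the in-sample MAE of a one-step naive
        forecast (each value predicted by the previous one). If all
        historical observations are equal, the denominator is zero and
        the result is undefined.
        """
        actual = np.asarray(actual, dtype=float)
        forecast = np.asarray(forecast, dtype=float)
        history = np.asarray(history, dtype=float)
        naive_mae = np.mean(np.abs(np.diff(history)))
        return np.mean(np.abs(actual - forecast)) / naive_mae

    # A value below 1 means the model beats the naive forecast.
    print(mase([12, 14], [11, 15], history=[8, 10, 9, 12]))  # 0.5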

The symmetric mean absolute percentage error (SMAPE) is an accuracy measure based on percentage (or relative) errors. 

Relative error is the absolute error divided by the magnitude of the exact value. 

In contrast to the mean absolute percentage error, SMAPE has both a lower bound and an upper bound. Since it's percentage-based, it's scale-independent, which means you can use it to compare forecast performance between datasets.

A limitation of SMAPE is that if either the actual value or the forecast value is 0, the error for that period approaches 100%. The lower the SMAPE value of a forecast, the higher its accuracy.
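
A minimal Python sketch using the variant described above, where the per-period error is bounded at 100% (some definitions divide by half the denominator instead, which raises the upper bound to 200%); the smape function and values are illustrative:

    import numpy as np

    def smape(actual, forecast):
        """Symmetric mean absolute percentage error, 0% to 100% in this variant.

        Assumes the actual and forecast values aren't both zero in the
        same period, which would mean dividing zero by zero.
        """
        actual = np.asarray(actual, dtype=float)
        forecast = np.asarray(forecast, dtype=float)
        return np.mean(
            np.abs(forecast - actual) / (np.abs(actual) + np.abs(forecast))
        ) * 100

    # 100% for the zero actual, 0% for the exact hit, averaging to 50%.
    print(smape([0, 10], [5, 10]))  # 50.0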

The overall model quality metric depends on whether the forecast consists of a single item or multiple items. MASE calculates the model quality for an item across the entire historical dataset and recent data, which makes it well suited as the overall forecast quality metric.

MASE has:

  • Symmetry, where it penalizes positive and negative forecast errors equally.
  • Scale invariance.
  • The ability to handle intermittent data.

A forecast with multiple items is classified as a high-quality model if at least 60% of the items receive a high-quality MASE result.

If more than 50% of the items receive a low-quality MASE result, the overall model is classified as low quality.

Any other result gives a medium model quality.
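
As a sketch of how these thresholds combine, here's a short Python function (the overall_quality function and the 'high'/'medium'/'low' labels are illustrative, not PlanIQ's API):

    def overall_quality(item_qualities):
        """Classify overall model quality from per-item MASE quality labels."""
        n = len(item_qualities)
        high_share = sum(q == "high" for q in item_qualities) / n
        low_share = sum(q == "low" for q in item_qualities) / n
        if high_share >= 0.6:
            return "high"
        if low_share > 0.5:
            return "low"
        return "medium"

    # 3 of 5 items (60%) are high quality, so the model is high quality.
    print(overall_quality(["high", "high", "medium", "high", "low"]))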