Forecast runs capture forecasts generated with different algorithms and configurations. All forecasts are saved in a single forecast results module so you can compare the results.
Forecast runs enable you to compare different forecasts. For example, you can weigh multiple algorithms against each other, or compare forecast models built over different data collections.
The forecast run takes its name from the forecast action.
To compare different forecasts, follow these steps:
- Create a new list and name it Forecast_Runs.
- Enter the names of the forecast actions as list items.
The results of the forecast actions are captured in the forecast results module.
- Create or modify a forecast results module and add the Forecast_Runs list to Pages (known as context selectors in the UX). The list becomes a dimension of the module.
- Configure an import action for the forecast results module.
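Conceptually, this setup keys each set of forecast results by its run name so the runs can be compared side by side. The sketch below illustrates that idea in plain Python; it is not Anaplan code, and the algorithm names, functions, and data are illustrative assumptions only.

```python
# Illustrative sketch (NOT Anaplan code): results keyed by run name,
# mirroring a results module dimensioned by a Forecast_Runs list.

def naive_forecast(history, horizon):
    """Repeat the last observed value."""
    return [history[-1]] * horizon

def moving_average_forecast(history, horizon, window=3):
    """Forecast the mean of the last `window` observations."""
    avg = sum(history[-window:]) / window
    return [avg] * horizon

# The "Forecast_Runs" list: one entry per forecast action (names are made up).
forecast_runs = {
    "Naive_Run": naive_forecast,
    "MovingAvg_Run": moving_average_forecast,
}

history = [100, 110, 105, 120, 115]

# The "results module": one forecast series per run.
results = {run: fn(history, horizon=2) for run, fn in forecast_runs.items()}

# Compare runs against actuals using mean absolute error.
def mean_abs_error(forecast, actuals):
    return sum(abs(f - a) for f, a in zip(forecast, actuals)) / len(actuals)

actuals = [118, 122]
errors = {run: mean_abs_error(fcst, actuals) for run, fcst in results.items()}
best_run = min(errors, key=errors.get)
print(best_run, errors)
```

Because every run's output lands in the same structure, dimensioned by run name, adding another algorithm is just another entry in `forecast_runs`, which is the same effect adding a list item to Forecast_Runs has in the model.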