Predictive Insights models show you how the training algorithm matched your list of accounts across a range of company attributes, how accounts are ranked for prioritization, and how well the model performs.

Select any model in Predictive Insights to display the model's information panel. This panel gives you access to additional configuration options and insights into the trained model.

In the Overview tab you can view details about your model, such as the creation date and the dataset used, and access a page of data insights and an activity log.

In the Account prioritization tab you can inspect the scores and rankings that the Predictive Insights model generates when it is trained. A trained model contains scored accounts, including both existing customers and prospective customers. The ranking details how those accounts scored and groups them into five ranks, from the highest-scoring (A) to the lowest-scoring (E). The highest-ranking prospect accounts are prime candidates for your sales team because they closely match your existing customer base, or have attributes that your company focuses on.

Note: An E rank means the account has no match in Predictive Insights. This can result from a misspelled name, a duplicate entry, or another error that prevents the system from matching the account.

You can adjust the ranking by setting different score boundaries for each rank, on a scale of 1 to 100.
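
To make the rank boundaries concrete, here is a minimal sketch of mapping a 1-100 account score to the A-E ranks. The threshold values, function name, and unmatched-account handling are hypothetical examples, not the actual Predictive Insights implementation.

```python
# Hypothetical illustration of mapping account scores (1-100) to ranks A-E.
# The threshold values below are examples only; in Predictive Insights you
# can adjust the boundaries between ranks to suit your own data.

RANK_THRESHOLDS = [
    ("A", 80),  # scores 80-100 -> rank A (closest match to existing customers)
    ("B", 60),  # scores 60-79  -> rank B
    ("C", 40),  # scores 40-59  -> rank C
    ("D", 20),  # scores 20-39  -> rank D
    ("E", 0),   # scores below 20, or unmatched accounts -> rank E
]

def rank_account(score: int) -> str:
    """Return the rank letter for a score on the 1-100 scale."""
    for rank, lower_bound in RANK_THRESHOLDS:
        if score >= lower_bound:
            return rank
    return "E"

print(rank_account(92))  # A
print(rank_account(45))  # C
```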

In the Model quality tab you can inspect how well the model generalizes. Generalization is the ability of a model to classify or forecast new data, namely, its ability to accurately predict the conversion outcome of new (unseen) accounts.

The model quality chart measures the model's accuracy, which represents how effective the model is at predicting an outcome, and consists of:

  • An X-axis. This shows the coverage of all accounts in the dataset, both customers and prospects, sorted by score in descending order from left to right.
  • A Y-axis. This shows the recall: the percentage of customers in the dataset captured at each value of coverage on the X-axis.

The chart plots a train data line and a test data line. The model's accuracy is represented in the chart by the area under the curve (AUC), with a score that ranges from 0 to 1.
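
For illustration, the sketch below shows how a recall-versus-coverage curve of this kind can be computed from scored, labeled accounts, with the AUC estimated by the trapezoidal rule. The data and computation here are generic assumptions for demonstration, not the exact method Predictive Insights uses.

```python
import numpy as np

# Hypothetical example data: model scores and true labels
# (1 = existing customer, 0 = prospect) for a set of accounts.
scores = np.array([0.95, 0.90, 0.80, 0.70, 0.60, 0.40, 0.30, 0.20])
labels = np.array([1,    1,    0,    1,    0,    0,    1,    0])

# Sort accounts by score in descending order (the X-axis of the chart).
order = np.argsort(-scores)
sorted_labels = labels[order]

# Coverage: fraction of all accounts considered so far (X-axis).
coverage = np.arange(1, len(labels) + 1) / len(labels)

# Recall: fraction of all customers captured at each coverage level (Y-axis).
recall = np.cumsum(sorted_labels) / labels.sum()

# Area under the curve, from 0 to 1; higher means a better ranking.
auc = np.trapz(recall, coverage)
print(f"AUC: {auc:.2f}")
```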

The train data line, shown in orange, represents the model's recall (true positive rate) at every percentage of score coverage. Recall values range from 0 to 1.

The test data line, shown in red, represents the recall of the test set. The test set is a random sample of data that Predictive Insights automatically withholds from the original data to estimate the quality of the model after training.
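
The holdout itself is a standard technique. Below is a minimal sketch of withholding a random test sample before training, using scikit-learn's train_test_split; the 20% split size, the synthetic data, and the choice of scikit-learn are assumptions for illustration, not details of how Predictive Insights works internally.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix and conversion labels for 10 accounts.
accounts = np.random.rand(10, 4)          # 4 attributes per account
converted = np.random.randint(0, 2, 10)   # 1 = became a customer

# Withhold a random 20% of the data as a test set before training.
# The model never sees these accounts, so recall measured on them
# estimates how well the model generalizes to new accounts.
X_train, X_test, y_train, y_test = train_test_split(
    accounts, converted, test_size=0.2, random_state=42
)
print(len(X_train), "training accounts,", len(X_test), "test accounts")
```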