Create a connection in Anaplan Data Orchestrator to import data from Azure Blob Storage. Then use the connection to extract data and create a source dataset.

You need your Azure Blob Storage credentials to connect Azure Blob Storage to Data Orchestrator. See the Azure Blob Storage documentation for more information about your credentials.

To create a connection:

  1. Select Data Orchestrator from the top-left navigation menu.
  2. Select Connections from the left-side panel.
  3. Select Create connection.
  4. Select Azure Blob Storage and then select Next.
    If you can't find the connector, enter a search term in the Find... field.
  5. On the Connection details page, enter these details and select Next:
    • Name: Create a name for your connection. The name can contain alphanumeric characters and underscores.
    • Description: Enter a description of your connection.
  6. On the Connection credentials page, enter your Azure Blob credentials and select Next:
    • Azure Secret Key
      (You can’t use a shared access signature (SAS) token as the secret key.)
    • Azure Blob Storage account name
    • Container name
  7. After the connection test is complete, select Done.
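The connection details above can be sanity-checked before you enter them. This is a minimal Python sketch, not part of Data Orchestrator: the URL format is the standard Azure Blob Storage endpoint derived from the account name, and the name check mirrors the naming rule from the Connection details page (alphanumeric characters and underscores).

```python
import re

def blob_account_url(account_name: str) -> str:
    """Standard endpoint URL Azure derives from a storage account name."""
    return f"https://{account_name}.blob.core.windows.net"

def is_valid_connection_name(name: str) -> bool:
    """Connection names may contain only alphanumeric characters and underscores."""
    return bool(re.fullmatch(r"[A-Za-z0-9_]+", name))

# Note: the secret key must be an account access key;
# a shared access signature (SAS) token is not accepted.
```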

You can extract data from the Azure Blob connection to add source data to Data Orchestrator. The data extract creates a source dataset.

To extract data:

  1. Select Data Orchestrator from the top-left navigation menu.
  2. Select Source data from the left-side panel.
  3. Select Add data > From connection.
  4. On the Dataset details page, enter these details and select Next:
    • Connection
    • Dataset name
    • Description
    • Path name (see additional information below)
    • Column separator
    • Text Delimiter
    • Header Row
    • First Data Row
  5. On the Choose an upload type page, enter these details and select Next:
    1. Select the Load type:
      • Full replace: Completely replaces the current loaded data with the new data.
      • Append: Adds the new data to the end of the current table.
      • Incremental: Updates the previously loaded data with only the new or changed rows.
    2. Select the columns to import.

Notes:

  • The _ab fields are added by Data Orchestrator and aren't user data.
  • If you selected Incremental (partial replace) as the load type:
    • Select a Primary key checkbox. You can select more than one checkbox.
    • Data Orchestrator uses the _ab fields as cursor keys to identify what's changed. The values aren't used for a Full replace.
    • The Cursor Field is preselected.
  • If you selected Append as the load type:
    • Data Orchestrator uses the _ab fields as cursor keys to identify what's changed. The values aren't used for a Full replace.
    • The Cursor Field is preselected, and is based on the last update date of the file.

  6. Select Create in the confirmation dialog.
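The three load types above can be sketched in Python to show how they treat existing data. This is an illustration of the semantics, not Data Orchestrator's actual implementation; rows are modeled as dicts and the primary key as a dict field.

```python
def full_replace(current, incoming):
    """Full replace: the new data completely replaces what was loaded."""
    return list(incoming)

def append(current, incoming):
    """Append: new rows are added after the existing rows."""
    return list(current) + list(incoming)

def incremental(current, incoming, primary_key):
    """Incremental: rows whose primary key already exists are updated;
    rows with new keys are added (an upsert)."""
    merged = {row[primary_key]: row for row in current}
    for row in incoming:
        merged[row[primary_key]] = row
    return list(merged.values())
```

For example, with `current = [{"id": 1, "qty": 10}, {"id": 2, "qty": 5}]` and `incoming = [{"id": 2, "qty": 7}, {"id": 3, "qty": 4}]`, an incremental load on `"id"` updates row 2 in place and adds row 3, while an append would keep both versions of row 2.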

When you extract data from an Azure Blob connection, you're asked to enter the Path name. If the container includes files that share a file name pattern, you can enter a wildcard such as *.CSV to upload all the files that match the pattern.

For example, your container is called SALES_DATA, and it contains files called SALES_wk01.CSV, SALES_wk02.CSV, and SALES_wk03.CSV. If you enter SALES_wk*.CSV for the Path name, all three files are uploaded to Data Orchestrator.
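The wildcard matching behaves like standard glob-style file name matching. This is an illustrative Python sketch using `fnmatch` with hypothetical blob names, not the matcher Data Orchestrator itself uses.

```python
from fnmatch import fnmatch

# Hypothetical blob names in a container
blobs = [
    "SALES_wk01.CSV",
    "SALES_wk02.CSV",
    "SALES_wk03.CSV",
    "FORECAST_wk01.CSV",
]

# A Path name with a * wildcard selects every matching file
path_name = "SALES_wk*.CSV"
matched = [name for name in blobs if fnmatch(name, path_name)]
```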

If you later add more files with the same file name pattern to your container, you can sync the data to upload the new files.
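Returning to the Dataset details step above, the parsing options (Column separator, Text Delimiter, Header Row, First Data Row) can be pictured with Python's `csv` module. This is a conceptual sketch with made-up sample data, not Data Orchestrator's parser: a header on row 2 and data starting on row 3 means the first line of the file is skipped.

```python
import csv
import io

# Hypothetical file: a title line, then a header, then data
raw = (
    "Report: weekly sales\n"
    'week;region;"units sold"\n'
    "wk01;EMEA;120\n"
    "wk02;APAC;95\n"
)

header_row = 2      # 1-based line holding the column names (Header Row)
first_data_row = 3  # 1-based line where data begins (First Data Row)

lines = raw.splitlines()
reader = csv.reader(
    io.StringIO("\n".join(lines[header_row - 1:])),
    delimiter=";",   # Column separator
    quotechar='"',   # Text Delimiter
)
rows = list(reader)
columns = rows[0]
data = rows[first_data_row - header_row:]
```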