Streaming Data Using Customer Data Hub Data Warehouse Integrations



Use Data Warehouse integrations to configure integrations faster and stream information directly to Totango DNX-CX, so you can start analyzing and acting on your customer data.

Data Warehouse integration saves time, reduces Totango admins' dependency on Ops and data teams, and simplifies integration management through reusable data connectors, editable and duplicable integrations, automatic field mapping, and real-time preview and validation.

Data Warehouse integrations offer these main capabilities:

  • Reuse data source connectors
  • Sync data from several instances of the same data source
  • Filter incoming information
  • Run incremental and snapshot mode syncs
  • Configure integration schedules
  • Monitor all information in a unified view

Configuring Integration

Configuring integrations is simple and includes these steps:

  1. Configure your connection
  2. Select the integration type and data source (object, fields, and filter)
  3. Preview the data
  4. Map and validate the data
  5. Configure settings and integration schedule

Follow the steps below to configure your integration:

  1. Configure your connection
    • In your Totango instance, go to Global Settings > Data Management > Customer Data Hub, then find the Data Warehouse source you wish to upload data from by searching or filtering connectors
      (note: you will need Totango admin privileges).

    • Follow the instructions in this article to configure the integration connections.

  2. After creating the connection, click on "View Integrations" to enter the connection page.
  3. Click on the green plus button and select "New Integration".

    These are the supported integration types:

    After selecting what you want to upload, enter the query (you can also paste it).

    Within the query, you can use all of your data warehouse's query capabilities, including GROUP BY aggregations, complex joins, and more.

    Note: to ensure only relevant data is synced without querying the entire data set, use an incremental query to fetch data modified since the last successful sync to Totango.
    To define an incremental query, add the Totango incremental DateTime parameter to your query by typing
    '{TOTANGO_LAST_SYNC_TIME}' in the query (for example, WHERE last_modified_time > '{TOTANGO_LAST_SYNC_TIME}').
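
    As an illustration, a query combining a join and aggregation with the incremental parameter might look like the sketch below. The table and column names (accounts, usage_events, etc.) are hypothetical; only the '{TOTANGO_LAST_SYNC_TIME}' parameter comes from this article.

    ```sql
    -- Hypothetical example: aggregate usage rows per account,
    -- fetching only rows modified since the last successful sync.
    SELECT
        a.account_id,
        a.account_name,
        COUNT(u.event_id)    AS event_count,
        MAX(u.last_modified) AS last_activity
    FROM accounts a
    JOIN usage_events u
        ON u.account_id = a.account_id
    WHERE u.last_modified > '{TOTANGO_LAST_SYNC_TIME}'
    GROUP BY a.account_id, a.account_name;
    ```

    On each scheduled run, Totango substitutes the timestamp of the last successful sync for the parameter, so only changed rows are fetched.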

  4. Preview the data
    Click the "Load Preview" button to review 10 objects retrieved directly from your Data Warehouse.
    The preview is mandatory because it tests the connector and data source configuration; any error messages are presented in the preview section.

  5. Map and validate the data
    After the preview and automatic mapping are done, review the keys and attribute mapping, and make the necessary changes.
    You can tell Totango the formatting of your source fields; read more about it here.

    You MUST validate the data before saving the integration.

    Note: new attributes are created from the mapping section. Make sure to configure the type and dimension of each new attribute.

  6. You can add a note to the mapping for later discussion, or to log the logic behind it. Just click the "add note" icon.

  7. Configure settings and integration schedule
    Initially, the integration name is automatically assigned based on the source connector and object; you can change it to a more meaningful name.
    It is also advisable to add a description that reflects the purpose of the integration.

    In case your integration contains new objects (like accounts or users) that you do NOT wish to create in Totango, turn OFF the “Create new objects” option.

    When the integration job is for objects under the account level (users, tasks, touchpoints, collections), you can separately control whether new entities (users, tasks, touchpoints, collections) are created and whether new accounts are created:

    Set up the integration schedule:
    • You can sync the data now, regardless of the defined schedule, by checking the relevant checkbox.

    • You can enable or disable the schedule for this integration by checking the relevant checkbox.

  8. Save the integration.
  9. It is advisable to review the integration notification settings to make sure the right people are informed about the progress and outcome. Read more.


Maintaining Your Integrations

  • Click on the integration to view its details

  • Maintain your integration by clicking the integration menu

  • Edit integration configuration & Duplicate: You can edit and duplicate a recurring integration. One-time integrations can be duplicated but not edited.
  • Run Now: You can run an incremental sync for an already configured integration from the UI by clicking the "Run Now" option in the menu; this does not impact the existing schedule (if recurring).
    The incremental sync updates all information changed since the previous successful sync.
  • Run Full Sync Now: You can run a full sync for an already configured integration from the UI by clicking the "Run Full Sync Now" option in the menu; this does not impact the existing schedule (if recurring). The full sync updates all current object information (for example, syncing all up-to-date account information).
  • Download: To analyze errors, you can download a file containing all the synced information. You can download the file from any previous upload in the "History" modal.
  • Disable schedule: In case of maintenance work or an error, you can disable the integration schedule.
  • History: You can analyze the integration history in the "History" modal. You can filter the history, view the details of every run, and download the file containing the synced data.

  • Trigger API Endpoint: If you want to trigger the integration right when your company's data process ends, use the trigger API option in the menu to find the API call structure and details. It works exactly like triggering a file integration; you can read more about it here.
  • Delete: If the integration is no longer needed, you can delete it. Note: this action is irreversible.

Customer Data Hub Whitelisting

If your network is behind a firewall, you will need to whitelist our servers so that we can retrieve information from your data warehouse.

Please follow the instructions in this Customer Data Hub IP Whitelisting article to configure your IP whitelist.


SSL Encryption

All data warehouse connectors support SSL encryption out of the box.
Totango connectors use an SSL-encrypted connection when the server supports it and a non-encrypted connection otherwise.

The implementation logic is as follows:
by default, SSL encryption is used when connecting; if the data warehouse server does not support SSL encryption, the connection falls back to an unencrypted connection to stream data from the data warehouse.



FAQ

Q: What happens if an attribute used in an integration is deleted from the data model?
A: You cannot delete an attribute that is mapped in a recurring job. You can delete an attribute that was used in a one-time upload.

Q: Can I sync data that includes non-existing accounts in Totango?
A: Yes, these accounts can be created automatically as part of the sync, depending on the account creation settings. You should allow the integration to create new objects in the integration settings section.

Q: Can I use the same object field twice?
A: Yes. Reusing the same object field more than once in an integration is possible; just pick the same object field from the dropdown.

Q: What is the best practice for syncing an Account Assignment attribute?
A: Account Assignment information usually contains the Account Assignment email and the Account Assignment name.
If the source system contains only the Account Assignment email, or both the email and name, we recommend using only the email and mapping it to the Account Assignment (tid) field.
If the source system contains only the Account Assignment name, use it and map it to the Account Assignment field.
Keep in mind that syncing data based on name only can become an issue if several people share the same full name.


Q: Can I use WITH clause to filter the data?
A: Yes. Using a WITH clause is supported for the Redshift and MS SQL Server connectors.
A WITH clause is an optional clause that always precedes the SELECT clause of a query statement.
The WITH clause defines a subquery as a temporary table, similar to a view definition.
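
For example, a WITH clause can pre-filter the data before the main SELECT runs. The sketch below is illustrative only; the table and column names (accounts, status, plan_tier) are hypothetical.

```sql
-- Hypothetical example: define a temporary, pre-filtered table
-- of active accounts, then select only certain plan tiers from it.
WITH active_accounts AS (
    SELECT account_id, account_name, plan_tier
    FROM accounts
    WHERE status = 'active'
)
SELECT account_id, account_name, plan_tier
FROM active_accounts
WHERE plan_tier IN ('premium', 'enterprise');
```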
