Integrate data with your data warehouse or lakehouse management system

Integration jobs require a connection to your data warehouse or lakehouse management system. After you establish a connection, you can create a one-time or scheduled integration job. Jobs can run at the following cadences:

  • One time
  • By minutes
  • Hourly
  • Daily
  • Workdays (M - F)
  • Weekly
  • Monthly

Create an inbound integration job

The Totango objects available for inbound integration vary depending on the connector type. 

Totango allows you to import data for the following Totango objects:

  1. Accounts
  2. Users
  3. Usage
  4. Collections
  5. Usage aggregations
  6. Totango users
  7. Historical Activity Stream
  8. Account assignment
  9. Tasks (inbound)
  10. Touchpoints (inbound)
  11. Accounts foreign keys

After you select the object, you can map the source fields to Totango attributes.

  1. From within Settings, expand Data Management > Customer Data Hub.
  2. From the list of available connectors, hover over the connection you want to use, and click View Integrations.
  3. Click +Create Integration.
  4. Choose from the available objects to import.
    A new integration page appears, based on the selected object (e.g., New Accounts Integration).
  5. In the Select Data Source area, use the Query area to limit the data set for any of the selected object fields (e.g., SELECT id, name FROM sometable WHERE id > 100 AND name IS NOT NULL).

    Try using AI to create query filters from natural language.

  6. Click Load Preview. The Preview Data area shows the first 10 rows that match. The preview action is mandatory because it tests the connector and data source configuration. 
  7. In the Map and Format Data Fields area, Totango attempts to map the columns in the data source (data warehouse) with attributes in Totango. Required attributes (keys) are mapped at the top of the list.
  8. Ensure that each source field (left) maps to the desired Totango attribute (right).
    • For each source field, you can edit the source format. This option is often used for dates, where Totango will transform the raw data from your file into the formatting you specify.
    • For each Totango attribute, you can choose a different attribute to map to if the auto-mapping wasn't correct.
    • Totango may also denote attributes to add as new if it didn't find an existing attribute. (Supported data types for adding new from the integration job include Text, Number, Currency, Date, Foreign Key.) 
    • Hover next to a mapping to remove it from the integration job. This option is useful if you don't want to modify the source file but want to skip a column.
    • Hover next to a mapping to optionally add a note to reference later or document the logic behind this mapping.
    • You can add new mapping to re-use a column from the data file or open the Function Editor and add a constant to all rows or some other logic.
  9. Click Validate Mapping. The validation must be successful in order to proceed.
  10. Preview the data in the file to ensure it looks correct.
  11. From the Settings area, set the following:
    • Name: Because you can later upload data for the same object, a name and description help you identify the job later.
    • Description: Add a description for the job.
    • Allow new accounts/objects creation: Allow new records to be created during the integration. If disabled, only records that already exist in Totango will be affected by the job (e.g., attribute values may be set but new records will be ignored).
    • Automatically resolve matching conflicts: If a conflict exists, choose the more recent value over the existing one.
  12. From the Schedule area, set the following:
    • Run: Choose the recurring schedule you want this job to run.
    • Sync immediately after saving: Run the job now and then run according to the schedule thereafter, or run it according to the next schedule only.
    • Enable integration scheduling: Allow the scheduling job. If unchecked, the job will be saved in a disabled state.
  13. Click Save.
  14. The job is saved, and you can monitor upload progress in real time. The sync status changes automatically as the job completes.
  15. Click inside a job to view more details.
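The date reformatting mentioned in step 8 can be sketched in Python. This is a generic illustration of a source-format transformation, not Totango's internal logic; the format strings and the raw value are assumptions for the example.

```python
from datetime import datetime

# Hypothetical raw value from the warehouse, in a source format of MM/DD/YYYY.
raw = "03/15/2024"

# Parse it with the source format, then emit it in the target format you specify.
parsed = datetime.strptime(raw, "%m/%d/%Y")
iso = parsed.strftime("%Y-%m-%d")  # target format: YYYY-MM-DD
print(iso)  # 2024-03-15
```

The same parse-then-reformat pattern applies to any pair of source and target date formats.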

Data will be immediately reflected in the UI once the processing is done for the selected job.
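The query filter from step 5 can be tried out locally before wiring it into the job. The sketch below uses Python's built-in sqlite3 with an in-memory stand-in for the warehouse; the table, columns, and rows are hypothetical, and your actual warehouse dialect may differ.

```python
import sqlite3

# In-memory database standing in for the warehouse; the data is made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sometable (id INTEGER, name TEXT)")
conn.executemany(
    "INSERT INTO sometable VALUES (?, ?)",
    [(50, "acme"), (101, "globex"), (150, None), (200, "initech")],
)

# Same filter shape as the article's example: limit the integration's data set.
rows = conn.execute(
    "SELECT id, name FROM sometable WHERE id > 100 AND name IS NOT NULL"
).fetchall()
print(rows)  # only rows with id above 100 and a non-NULL name
```

Only the rows that satisfy the WHERE clause would be pulled into the integration job.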

Manage integration options for scheduled jobs

We advise you to enable notifications for data warehouse integrations. You can also hover over an integration job and click the ellipsis (...) to view options:

  • Edit: Change job mappings, settings, or schedule.
  • Run now: Run the job immediately. This does not affect the existing recurring schedule.
  • Run full sync now: Tell Totango to retrieve all the data from the source system, regardless of whether it has changed.
  • Download: View the file that was uploaded and analyze the data outside of Totango. Download any file from a previous job from the Job History window.
  • Duplicate: Create a copy of the job.
  • Disable scheduling: Disable the integration job. You can re-enable a schedule anytime.
  • History: View the history of the job. Filter by integration history or details. Download the source file for any job.
  • Trigger API: Find the API call structure details for triggering the integration right when your company data process ends. Read more.
  • Rebuild mapping: This action removes all the matching information and starts a full sync of this integration to rebuild the matching.
  • Delete: If you no longer need the integration job, you can delete it. This is a permanent action.



Question: Can I use the same object field twice?

Answer: Yes. You can re-use the same object field more than once in an integration job. Just pick the same object field from the dropdown.

Question: What is the best practice for syncing an Account Assignment attribute?

Answer: We recommend using the Account Assignment email and mapping it to the Account Assignment (tid) field. See this article for more details.

If the source system contains only the Account Assignment name, use it and map it to the Account Assignment field. Keep in mind that syncing data by name alone can become an issue if several people share the same full name.

Question: Can I use a WITH clause to filter the data?

Answer: Yes. A WITH clause is supported for Redshift and MS SQL Server connectors. A WITH clause is an optional clause that precedes the SELECT statement in a query. It defines a subquery as a temporary table, similar to a view definition.
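As a sketch of the WITH-clause shape, the example below runs a common table expression against an in-memory SQLite database purely for illustration; the table, columns, and data are hypothetical, and your actual query would target Redshift or MS SQL Server.

```python
import sqlite3

# In-memory database standing in for the warehouse; the data is made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER, name TEXT, region TEXT)")
conn.executemany(
    "INSERT INTO accounts VALUES (?, ?, ?)",
    [(1, "acme", "emea"), (2, "globex", "amer"), (3, "initech", "emea")],
)

# The WITH clause defines 'emea_accounts' as a temporary, view-like table
# that the SELECT below can query as if it were a real table.
rows = conn.execute(
    """
    WITH emea_accounts AS (
        SELECT id, name FROM accounts WHERE region = 'emea'
    )
    SELECT id, name FROM emea_accounts
    """
).fetchall()
print(rows)  # only the accounts matched by the temporary table
```

The temporary table exists only for the duration of the query, which keeps complex filters readable without creating a real view in the warehouse.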
