Who can use this feature?
- Global admins or users with advanced permissions
- Available on all plans
Integration jobs require a connection to your data warehouse or lakehouse management system. After you establish a connection, you can create a one-time or scheduled integration job with any of the following cadences:
- One time
- By minutes
- Hourly
- Daily
- Workdays (M - F)
- Weekly
- Monthly
Create an inbound integration job
The Totango objects available for inbound integration vary depending on the connector type. You can import data for the following objects:
- Accounts
- Users
- Usage
- Collections
- Usage aggregations
- Totango users
- Historical Activity Stream
- Account assignment
- Tasks (inbound)
- Touchpoints (inbound)
- Accounts foreign keys
After you select the object, you can map the source fields to Totango attributes.
- From within Settings, expand Data Management > Customer Data Hub.
- From the list of available connectors, hover over the connection you want to use, and click View Integrations.
- Click +Create Integration.
- Choose from the available objects to import.
A new integration page appears, based on the selected object (e.g., New Accounts Integration).
- In the Select Data Source area, use the Query area to limit the data set for any of the selected object fields (e.g., SELECT id, name FROM sometable WHERE id > 100 AND name IS NOT NULL). An example source query appears after these steps.
Try using AI to create query filters using natural language.
- Click Load Preview. The Preview Data area shows the first 10 rows that match. The preview action is mandatory because it tests the connector and data source configuration.
- In the Map and Format Data Fields area, Totango attempts to map the columns in the data source (data warehouse) with attributes in Totango. Required attributes (keys) are mapped at the top of the list.
- Ensure that each source field (left) maps to the desired Totango attribute (right).
- For each source field, you can edit the source format. This option is often used for dates, where Totango will transform the raw data from your file into the formatting you specify.
- For each Totango attribute, you can choose a different attribute to map to if the auto-mapping wasn't correct.
- Totango may also flag attributes to add as new if it can't find a matching existing attribute. (Supported data types for new attributes created from an integration job include Text, Number, Currency, Date, and Foreign Key.)
- Hover next to a mapping to remove it from the integration job. This option is useful if you don't want to modify the source file but want to skip a column.
- Hover next to a mapping to optionally add a note to reference later or document the logic behind this mapping.
- You can add a new mapping to re-use a column from the data source, or open the Function Editor to add a constant to all rows or apply other logic.
- Click Validate Mapping. The validation must be successful in order to proceed.
- Preview the data in the file to ensure it looks correct.
- From the Settings area, set the following:
- Name: Because you can later create other jobs for the same object, a name and description help you identify the job later.
- Description: Add a description for the job.
- Allow new accounts/objects creation: Allow new records to be created during the integration. If disabled, only records that already exist in Totango will be affected by the job (e.g., attribute values may be set but new records will be ignored).
- Automatically resolve matching conflicts: If a conflict exists, the more recent value is used over the existing one.
- From the Schedule area, set the following:
- Run: Choose the recurring schedule on which you want this job to run.
- Sync immediately after saving: Run the job as soon as you save it and then according to the schedule thereafter; if unchecked, the job runs only at its next scheduled time.
- Enable integration scheduling: Allow the job to run on its schedule. If unchecked, the job is saved in a disabled state.
- Click Save.
- The job is saved, and you can monitor upload progress in real time. The sync status changes automatically as the job completes.
- Click inside a job to view more details.
Data will be immediately reflected in the UI once the processing is done for the selected job.
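For reference, the following is a minimal sketch of a source query for an Accounts integration. The schema, table, and column names (analytics.accounts, account_id, account_name, contract_end_date, plan_tier) are hypothetical placeholders; adjust them to match your own warehouse, and note that date functions vary by SQL dialect.

    -- Hypothetical source query for an Accounts integration job.
    -- Replace the table and column names with your own schema.
    SELECT
        account_id,                                            -- maps to the Totango account key
        account_name,                                          -- maps to the account name attribute
        CAST(contract_end_date AS DATE) AS contract_end_date,  -- normalize the date before mapping
        plan_tier                                              -- example custom attribute
    FROM analytics.accounts
    WHERE account_id IS NOT NULL                               -- exclude rows that would fail key mapping
      AND account_name IS NOT NULL;

After you load the preview, each of these columns appears in the Map and Format Data Fields area, where you confirm or adjust the Totango attribute it maps to.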
Manage integration options for scheduled jobs
We advise you to enable notifications for data warehouse integrations. You can also hover over an integration job and click the ellipsis (...) to view options:
- Edit: Change job mappings, settings, or schedule.
- Run now: Run the job immediately. This does not impact the existing recurring schedule.
- Run full sync now: Tell Totango to retrieve all the data from the data source, regardless of whether it has changed.
- Download: View the file that was uploaded and analyze the data outside of Totango. Download any file from a previous job from the Job History window.
- Duplicate: Create a copy of the job.
- Disable scheduling: Disable the integration job. You can re-enable a schedule anytime.
- History: View the history of the job. Filter by integration history or details. Download the source file for any job.
- Trigger API: Find the API call structure for triggering the integration as soon as your company's data process ends. Read more.
- Rebuild mapping: This action removes all existing matching information and starts a full sync of this integration to rebuild the matching.
- Delete: If you no longer need the integration job, you can delete it. This action is permanent.
FAQs
Question: Can I use the same object field twice?
Answer: Yes. You can re-use the same object field more than once in an integration. Just pick the same object field from the dropdown.
Question: What is the best practice for syncing an Account Assignment attribute?
Answer: We recommend using only the Account Assignment email and mapping it to the Account Assignment (tid) field. See this article for more details.
If the source system contains only the Account Assignment name, map it to the Account Assignment field instead. Keep in mind that syncing based on name alone can cause issues if several people share the same full name. A sketch of a source query for this setup follows this answer.
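For illustration, a source query for such a job might look like the sketch below; the table and column names (crm.account_owners, account_id, csm_email) are hypothetical and stand in for your own schema.

    -- Hypothetical source query for syncing account assignments by email.
    -- csm_email is the column you would map to the Account Assignment (tid) field.
    SELECT
        account_id,              -- maps to the Totango account key
        csm_email                -- maps to the Account Assignment (tid) field
    FROM crm.account_owners
    WHERE csm_email IS NOT NULL; -- skip accounts that have no assigned owner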
Question: Can I use a WITH clause to filter the data?
Answer: Yes. The WITH clause is supported for the Redshift and MS SQL Server connectors. A WITH clause is an optional clause that precedes the SELECT statement in a query; it defines a subquery as a temporary, named result set, similar to a view definition.
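As a minimal sketch (the table and column names usage_events, account_id, module_name, and event_date are hypothetical), a WITH clause defines a temporary result set that the main SELECT then filters:

    -- The WITH clause defines recent_usage as a temporary, named result set.
    WITH recent_usage AS (
        SELECT account_id, module_name, COUNT(*) AS event_count
        FROM usage_events
        WHERE event_date > '2024-01-01'
        GROUP BY account_id, module_name
    )
    -- The main SELECT filters the temporary result set, keeping the final query short.
    SELECT account_id, module_name, event_count
    FROM recent_usage
    WHERE event_count > 10;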