To help avoid performance bottlenecks, follow these best practices for running integrations and workflows.
Workflows
Review Begin block actions
For workflows that involve processing large volumes of records, minimize the number of times the workflow runs each day. Reduce the processing workload by being more selective when you choose actions in the Begin block. For example, you can set the trigger to target only new or edited records.
If scheduling a workflow is unavoidable, stagger the workflow runs and schedule them for off-peak periods. This approach helps balance the system load and ensures smoother overall operation.
Tip
Use the timeline legend on the integrations page to review the workflow frequency.
Consider creating a workflow widget on the Dashboard page to gain insights into the status and frequency of workflow runs.
Understanding integration status values
Tutorial: Creating dashboards for your workflows
Review Begin block rule criteria
Review the rules added to the Begin block of workflows to ensure that workflows run only when necessary. You can use saved searches and rules to minimize the processing workload.
Tip
Always use the SHOW IN LIST VIEW option in the Begin block to verify that the workflow targets the correct records and to assess the performance impact of the workload.
Workflow prioritization
To optimize performance, workflows are prioritized based on the type of trigger that starts them. Manual user triggers, data entry, and API calls take precedence over schedule-triggered workflows. Even so, workflows that process large volumes of records can cause performance bottlenecks that delay critical tasks. Understanding how workflow prioritization works helps you avoid these bottlenecks.
To understand how to optimize workflow efficiency and to gain a deeper insight into our prioritization mechanism, refer to our guide on workflow prioritization.
Switch to nested workflows
Instead of embedding sequences of actions, such as error handling, in multiple workflows, create nested workflows to complete reusable actions. Using nested workflows reduces the complexity of workflows, lowers the processing workload, and makes workflows easier to understand and troubleshoot.
By consolidating common operations into nested workflows, you can make updates or modifications in a single workflow, which makes maintaining and updating workflows simpler and more efficient.
Getting started with nested workflows
Field updates and history management
To reduce the processing workload, minimize the recording of field changes on the History tab. Review the field properties in the Data model and clear the Add Updates to History checkbox if you do not need to log changes to a field. Tracking changes is primarily used for auditing purposes and for triggering workflows. If neither applies, clear the checkbox to reduce the processing workload and improve performance.
Oomnitza API block
Use the Oomnitza API block instead of the standard API block for querying Oomnitza data. When you use the Oomnitza API block, you can limit the API response by querying a specific set of fields or a subset of records. By reducing the volume of data transferred to the workflow service, you reduce the data processing workload and improve workflow performance. Consider upgrading to the Oomnitza API block and setting response limits for scheduled workflows that run frequently and process large volumes of data.
In addition to improved performance, the Oomnitza API block provides the following benefits to reduce the risk of errors generated when workflows are run:
- Unlike the standard API block, the Oomnitza block eliminates the need to enter credentials.
- It centralizes all Oomnitza APIs in one accessible location, eliminating the need to repeatedly consult Swagger for API details.
- It uses templates and provides instructions on how to fill out the request, body, and response sections to make it easier to configure the Oomnitza API block correctly.
- It has an AI-assisted, user-friendly code editor that helps ensure that the code submitted to the API is accurate and well formed, reducing the risk of errors.
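To illustrate the kind of response limiting described above, here is a minimal sketch of building a field-restricted, filtered query. The parameter names (`fields`, `filter`, `limit`) and field names are illustrative assumptions for this sketch, not the documented Oomnitza API contract; check the in-product templates or Swagger for the actual request format.

```python
# Sketch: restrict an API response to specific fields and a subset of records.
# Parameter and field names below are illustrative assumptions.

def build_asset_query(fields, filter_expr=None, limit=100):
    """Build query parameters that shrink the response payload.

    fields      -- return only these fields, reducing data volume
    filter_expr -- optional server-side filter to return a subset of records
    limit       -- page-size cap to bound each response
    """
    params = {"fields": ",".join(fields), "limit": limit}
    if filter_expr:
        params["filter"] = filter_expr
    return params

params = build_asset_query(
    ["serial_number", "assigned_to"],
    filter_expr="status eq 'Active'",
    limit=50,
)
print(params)
```

The point of the sketch is the shape of the request: asking for two fields and fifty active records transfers far less data to the workflow service than fetching every field of every record.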
Integrations
Scheduling
When running multiple integrations, avoid concurrent scheduling. Stagger integrations so that they run sequentially and reduce the frequency. For most integrations, a maximum frequency of every 2 hours is sufficient.
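The staggering idea can be sketched as a simple offset calculation: spread the integrations evenly across one run window so no two start at the same time. The integration names below are illustrative.

```python
# Sketch: spread N integrations evenly across a shared run window so that
# their start times never coincide. Integration names are illustrative.

def staggered_offsets(integrations, window_minutes=120):
    """Return a start offset (in minutes) for each integration, spacing
    them evenly across one run window (default: a 2-hour frequency)."""
    step = window_minutes // len(integrations)
    return {name: i * step for i, name in enumerate(integrations)}

schedule = staggered_offsets(["okta", "jamf", "intune"], window_minutes=120)
print(schedule)  # {'okta': 0, 'jamf': 40, 'intune': 80}
```

With a 2-hour frequency and three integrations, each starts 40 minutes after the previous one, so they run sequentially rather than competing for resources.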
Tip
Open the timeline view on the integrations page to gauge the frequency of integrations.
Review mapped fields
Use the mapping view page for integrations to review the mapped fields for all custom and extended integrations. Unmap fields that don’t provide value to reduce data bloat and the processing workload.
Reviewing existing sync keys not only enhances system performance by eliminating unnecessary processing, but also prevents data redundancy and errors. Empty sync keys cause missing-value errors when the integration runs in Oomnitza. Non-unique sync keys can cause records to be skipped or overwritten, depending on whether they were ingested in the same batch. And while using multiple sync keys can improve data accuracy, it also increases processing demands.
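The two sync-key problems above, empty keys and non-unique keys, can be checked before a batch is ingested. This is a standalone sketch of such an audit; the record and field names are illustrative, not tied to any specific Oomnitza integration.

```python
# Sketch: pre-flight audit of a record batch, flagging the two sync-key
# problems described above: empty keys (missing-value errors) and duplicate
# keys (records skipped or overwritten). Field names are illustrative.

def audit_sync_keys(records, sync_key):
    """Return (records with an empty sync key, records duplicating one)."""
    seen, empty, duplicates = set(), [], []
    for rec in records:
        value = rec.get(sync_key)
        if not value:
            empty.append(rec)
        elif value in seen:
            duplicates.append(rec)
        else:
            seen.add(value)
    return empty, duplicates

records = [
    {"serial_number": "C02XL", "model": "MacBook Pro"},
    {"serial_number": "", "model": "ThinkPad T14"},      # empty sync key
    {"serial_number": "C02XL", "model": "MacBook Air"},  # duplicate sync key
]
empty, dupes = audit_sync_keys(records, "serial_number")
print(len(empty), len(dupes))  # 1 1
```

Running a check like this before ingestion surfaces the records that would otherwise trigger missing-value errors or be silently skipped or overwritten.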
Filter integrations
On the mapping view page for integrations, you can create filters to refine the volume of data that you retrieve for your integrations. For example, you can add a filter to include only users with a specific email address or only users who are active. You can also use filters to minimize errors by excluding records where the primary key field is not provided, ensuring that the integration retrieves only records that include a primary key value. These measures improve the accuracy of the data ingested by Oomnitza, reduce data bloat, and improve the processing performance of your integrations.
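The primary-key filter described above amounts to dropping any record whose key field is missing or empty. Here is a minimal sketch of that rule; the `email` field and sample records are illustrative assumptions.

```python
# Sketch: keep only records that provide a primary-key value, mirroring
# the integration filter described above. Field names are illustrative.

def with_primary_key(records, key="email"):
    """Keep only records whose primary key field is present and non-empty."""
    return [rec for rec in records if rec.get(key)]

users = [
    {"email": "ada@example.com", "status": "active"},
    {"email": "", "status": "active"},   # excluded: empty primary key
    {"status": "inactive"},              # excluded: missing primary key
]
print(with_primary_key(users))
```

Only the first record survives the filter, so the integration never wastes processing on records it cannot match or create.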