Author:
Fluent Commerce
Changed on:
9 Feb 2025
Batch enables bulk uploading of updates to the Fluent Commerce platform, handling high data volumes ranging from thousands to millions of updates per operation. Leveraging Fluent's Batch API and Inventory Batch Pre-Processing, it ensures faster processing and accurate availability in the Fluent Management System.
This scalable solution keeps data synchronized across channels, empowering businesses to manage stock efficiently and meet customer expectations with accurate, up-to-date stock information.
| Capability | Description |
| --- | --- |
| Batch API | APIs to create & manage batch loading of information. |
| Inventory Batch Pre-Processing | Optimized processing by de-duplicating redundant updates and filtering out unchanged records. |
To ensure successful batch processing, consider the following key factors:
Jobs are used to group Batches, providing a structured way to track each distinct set of Batches sent together. This grouping enables better visibility into the success or failure of related updates, ensuring that processing is efficient and transparent.
Additionally, Jobs determine whether Batch Pre-Processing is applied to the associated Batches. This ensures redundant updates are removed and unchanged records are filtered out before processing, optimizing data handling. For more information, refer to the details on Inventory Batch Pre-Processing.
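To make the Job-then-Batch structure concrete, here is a minimal sketch in Python. The endpoint paths (`/api/v4.1/job`, `/api/v4.1/job/{jobId}/batch`), payload fields, and response shapes shown are assumptions based on typical Batch API usage, not a definitive reference; confirm them against the Batch API documentation for your account.

```python
import requests

# Assumed values: host, token, and endpoint paths are placeholders;
# verify the exact paths and payloads in the Batch API reference.
API_HOST = "https://ACCOUNT.api.fluentretail.com"
HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

def create_job(retailer_id: str, name: str) -> str:
    """Create a Job to group the Batches belonging to one upload run."""
    resp = requests.post(
        f"{API_HOST}/api/v4.1/job",  # assumed endpoint
        json={"retailerId": retailer_id, "name": name},
        headers=HEADERS,
    )
    resp.raise_for_status()
    return resp.json()["id"]  # assumed response shape

def submit_batch(job_id: str, records: list[dict]) -> str:
    """Attach one Batch (max 5,000 records) to the Job."""
    resp = requests.post(
        f"{API_HOST}/api/v4.1/job/{job_id}/batch",  # assumed endpoint
        json={"entityType": "INVENTORY_QUANTITY",   # assumed field values
              "entities": records},
        headers=HEADERS,
    )
    resp.raise_for_status()
    return resp.json()["id"]

# Usage sketch: one Job groups all Batches for this upload run.
# job_id = create_job("1", "inventory-upload-2025-02-09")
# for records in batches:
#     submit_batch(job_id, records)
```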
Batches are restricted to a maximum of 5,000 records per batch. For larger datasets, the updates must be divided into multiple batches, with each batch adhering to this limit.
Example:
If you need to upload 1,000,000 updates, split them into 200 batches of 5,000 records each to ensure compliance and efficient processing.
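The split itself is a simple fixed-size chunking step. The sketch below (function names are illustrative, not part of the Fluent API) slices a list of update records into batches of at most 5,000:

```python
from typing import Iterator

MAX_BATCH_SIZE = 5_000  # platform limit per Batch

def chunk(updates: list[dict], size: int = MAX_BATCH_SIZE) -> Iterator[list[dict]]:
    """Yield consecutive slices of at most `size` records."""
    for start in range(0, len(updates), size):
        yield updates[start:start + size]

# 1,000,000 updates -> 200 batches of 5,000 records each
# batches = list(chunk(updates))
```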
While each Batch can contain up to 5,000 records, sending only a small number of batches may reduce processing efficiency. The Batch API is designed to handle multiple datasets concurrently, distributing the workload across the platform for asynchronous processing. To maximize performance, it is recommended to spread a full dataset across at least 10 individual Batches.
Example:
For 15,000 updates, instead of splitting them into 3 batches of 5,000 records each, divide them into 10 datasets of 1,500 records. This ensures optimal load distribution. As update volumes grow, the integration should scale accordingly, maintaining a minimum of 10 datasets until individual batches reach the 5,000-record limit, after which additional batches can be added.
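This minimum-of-10 rule reduces batch sizing to a simple count calculation. A small helper (illustrative, not a Fluent API call) might look like:

```python
import math

MIN_BATCHES = 10
MAX_BATCH_SIZE = 5_000

def batch_count(total_updates: int) -> int:
    """At least 10 batches, and enough to keep each within 5,000 records."""
    return max(MIN_BATCHES, math.ceil(total_updates / MAX_BATCH_SIZE))

# batch_count(15_000) -> 10  (batches of 1,500, not 3 batches of 5,000)
# batch_count(60_000) -> 12  (the 5,000-record cap now drives the count)
```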
When running a Job with more than 10 Batches, it’s common to have several batches containing a maximum of 5,000 records and one smaller batch with fewer records. To ensure more consistent processing, it’s recommended that the total updates be distributed evenly across all Batch datasets.
Example:
If you have 53,900 items to update, distribute them into 11 batches of 4,900 records each rather than 10 batches of 5,000 records and one batch of 3,900 records.
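Folding the count rule and the even split into one helper yields batch sizes that differ by at most one record. Again, this is a purely illustrative sketch:

```python
import math

MIN_BATCHES = 10
MAX_BATCH_SIZE = 5_000

def even_batches(updates: list[dict]) -> list[list[dict]]:
    """Split updates into evenly sized batches (sizes differ by at most one)."""
    count = max(MIN_BATCHES, math.ceil(len(updates) / MAX_BATCH_SIZE))
    base, remainder = divmod(len(updates), count)
    batches, start = [], 0
    for i in range(count):
        size = base + (1 if i < remainder else 0)  # spread the remainder evenly
        batches.append(updates[start:start + size])
        start += size
    return batches

# 53,900 items -> ceil(53,900 / 5,000) = 11 batches of 4,900 records each,
# rather than 10 full batches of 5,000 plus one short batch of 3,900.
```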
For real-time or near-real-time updates, we recommend using Delta Inventory updates, which are ideal for handling incremental changes to inventory levels instead of full data updates.
Currently, the best tool for visibility is Inventory Domain Events, which report the success or failure of updates sent via the Batch API. This provides visibility into the individual items included in the Batches.
Note: Right now, only Failure & Exception Audit Events are visible for processing.
The Fluent Inventory Module provides a foundational solution for integrating Batch updates into your inventory management workflows. It is designed to calculate an accurate Stock on Hand value using a set of predefined Quantity Types. These types represent various inventory states and activities, ensuring precise inventory management.
Key Inventory Quantity Types:
These quantity types work together to calculate the Stock on Hand, providing an up-to-date value that reflects the quantity currently available for each Inventory Position. This ensures accurate visibility and supports effective decision-making in stock management.
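As a rough illustration of how quantity types net into a Stock on Hand figure, the sketch below uses hypothetical type names and signs; the actual quantity types and their contributions are defined by the Fluent Inventory Module configuration, not by this example.

```python
# Illustrative only: the type names and their +/- contributions below are
# assumptions, not the Fluent Inventory Module's actual configuration.
QUANTITY_SIGNS = {
    "ON_HAND": +1,      # counted stock at the location
    "CORRECTION": +1,   # manual adjustments (values may be negative)
    "RESERVED": -1,     # stock held for orders, not available
    "DAMAGED": -1,      # stock that cannot be sold
}

def stock_on_hand(quantities: dict[str, int]) -> int:
    """Net the quantity types for one Inventory Position into Stock on Hand."""
    return sum(QUANTITY_SIGNS.get(qtype, 0) * qty
               for qtype, qty in quantities.items())

# stock_on_hand({"ON_HAND": 120, "RESERVED": 15, "DAMAGED": 5}) -> 100
```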