Parallel Engine
Smart Parallel Processing is a Golang concurrency engine built directly into the core of nLink. It lets you process massive arrays of data (10,000+ items) concurrently through a fixed-size worker pool, dramatically reducing execution time while keeping memory use bounded to protect your system from Out-Of-Memory (OOM) crashes.
What can you do with Parallel Engine?
True Worker Pool Architecture
Instead of spawning one goroutine per item and exhausting your server's memory, the Engine distributes the input array across a fixed-size pool of goroutine workers.
Lock-Free Array Preservation
Utilizes pre-allocated slices to eliminate thread-locking bottlenecks: each worker writes only to its own slot, so no mutex is needed and the output array order is guaranteed to match the input array order.
Fail-Fast Fault Tolerance
Intelligent failure detection. If any worker encounters a fatal API error (such as a 400 Bad Request), the Engine instantly drains the job channel and aborts pending jobs to free system resources.
Detailed Usage & Configuration
Smart Parallel Processing is not a standalone node you drag onto the canvas. Instead, it is a Core Capability that can be activated on almost any execution node (like HTTP Requests, Google Sheets, or Database queries) to accelerate bulk processing.
1. How to Enable
Click on any standard action node (e.g., HTTP Request). Look under the execution parameters and toggle Execute Per Item (Batch) to On. A new setting for Smart Parallel Processing will appear.
- Execute Per Item: Splits incoming array data (e.g., 50 user records) and feeds the items into the node sequentially, one at a time.
- Smart Parallel Processing: Instructs the Engine to process those 50 records concurrently using multiple CPU threads.
- Concurrency Limit: Define exactly how many concurrent workers to spawn (default 10, max 1000).
2. Multi-Port Aggregation
Unlike basic automation platforms, nLink's Parallel Engine supports advanced routing. If you run a Switch or If/Else node in parallel, the Engine tracks the multi-port outputs across all concurrent workers and stitches the branches back together without losing the routing logic.
