
AI Supervisor

The AI Supervisor node transforms your workflow into an intelligent Multi-Agent Orchestration engine. Powered by top-tier models from OpenAI, Anthropic, or Google, it maintains conversation memory, debates with other AI agents, and autonomously executes sub-workflows as tools to solve complex problems.


What can you do with AI Supervisor?

Multi-Agent Orchestration

Delegate complex tasks to specialized Sub-Agents. The Supervisor coordinates multiple AI sub-workflows, debates with them, and synthesizes the final output autonomously.

Debate Log Traceability

Full transparency into the AI's thought process. Visually track the internal debate, tool calling parameters, and agent communications directly in the execution logs.

Unified LLM Intelligence

Seamlessly switch between OpenAI GPT-4o, Anthropic Claude 3.5, and Google Gemini models with a single click, all unified under one structured interface.

Persistent Session Memory

Automatically retains conversational context across multiple separate executions by storing short-term history via a flexible Session ID integration.

Structured JSON Output

Enforce strict programmatic JSON payloads natively across all LLM providers, eliminating conversational wrapper text around the payload.

Detailed Usage & Configuration

The AI Supervisor node fundamentally changes how nLink automation operates. It transitions your workflow from a rigid sequence of steps into a dynamic, autonomous Multi-Agent engine powered by state-of-the-art Large Language Models (LLMs).

1. Multi-Agent Orchestration (WaaT)

Unlike standard nodes, the AI Supervisor has the autonomy to orchestrate other agents. It utilizes the Workflow-as-a-Tool (WaaT) architecture.

What are Tools? In nLink, a "Tool" for the Supervisor is simply another existing Sub-Workflow (e.g., an AI Writer, a Database Searcher, or a Web Scraper Agent).
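To make the idea concrete, here is a sketch of how a Sub-Workflow might be described to the Supervisor as a callable tool, using an OpenAI-style function schema. The names (`web_scraper_agent`, the parameter fields) are illustrative assumptions, not nLink's actual internal format.

```python
# Hypothetical tool descriptor for a "Web Scraper Agent" Sub-Workflow.
# The Supervisor reads the name/description/parameters to decide when
# and how to call it; the schema shape follows the common
# OpenAI-style function-calling convention.
web_scraper_tool = {
    "type": "function",
    "function": {
        "name": "web_scraper_agent",
        "description": "Sub-Workflow that fetches a URL and returns a short summary.",
        "parameters": {
            "type": "object",
            "properties": {
                "url": {"type": "string", "description": "Page to scrape"},
                "max_words": {"type": "integer", "description": "Summary length cap"},
            },
            "required": ["url"],
        },
    },
}
```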

The Debate Flow:

  1. The Supervisor analyzes the user's complex prompt and recognizes it requires specialized skills.
  2. It autonomously pauses its execution and calls a Sub-Agent Tool, passing precisely formatted JSON arguments.
  3. The Sub-Agent executes its own workflow (e.g., searching the web and summarizing) and returns the data.
  4. The Supervisor reads the returned data. If unsatisfied, it can call the Tool again or call a different Tool (e.g., a Proofreader Agent).
  5. Once satisfied, it synthesizes a final natural-language response back to the user.
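The debate flow above can be sketched as a simple loop: the Supervisor keeps consulting the LLM, executing any tool it requests and feeding the result back, until the LLM produces a final answer. This is a minimal illustration with a stub LLM; `run_supervisor` and its message shapes are assumptions for the sketch, not nLink's internal API.

```python
# Minimal sketch of the Supervisor's debate loop. The LLM callable
# returns either a tool-call request or a final answer.
def run_supervisor(prompt, tools, llm, max_rounds=5):
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_rounds):
        reply = llm(messages)
        if reply.get("tool"):  # Supervisor decided to delegate to a Sub-Agent
            result = tools[reply["tool"]](**reply["args"])
            messages.append({"role": "tool", "name": reply["tool"], "content": result})
        else:  # Supervisor is satisfied and answers the user
            return reply["content"]
    raise RuntimeError("debate did not converge within max_rounds")

# Stub LLM: delegate once to a searcher agent, then synthesize.
def stub_llm(messages):
    if messages[-1]["role"] == "user":
        return {"tool": "searcher", "args": {"query": "nLink docs"}}
    return {"content": "Summary: " + messages[-1]["content"]}

tools = {"searcher": lambda query: f"3 results for '{query}'"}
```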

2. Debate Log Traceability

Because Multi-Agent systems can act like a "black box," nLink provides a built-in Debate Log. When viewing the node's execution output, you can see the exact internal dialogue: every tool called, the exact JSON arguments passed, the responses from the Sub-Agents, and the Supervisor's final reasoning. This gives you full observability into the run.
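For illustration, a Debate Log can be thought of as an ordered list of entries, one per Supervisor action. The exact field names below are assumptions for this sketch; the real log format in nLink's execution view may differ.

```python
# Illustrative Debate Log: one entry per action, in order.
debate_log = [
    {"step": 1, "actor": "supervisor", "event": "tool_call",
     "tool": "web_scraper_agent", "arguments": {"url": "https://example.com"}},
    {"step": 2, "actor": "web_scraper_agent", "event": "tool_result",
     "content": "Example Domain: reserved for documentation."},
    {"step": 3, "actor": "supervisor", "event": "final_answer",
     "content": "The page is a placeholder reserved for documentation."},
]

def render_debate_log(log):
    """Flatten entries into the kind of readable trace shown in execution logs."""
    lines = []
    for entry in log:
        detail = entry.get("arguments") or entry.get("content")
        lines.append(f"[{entry['step']}] {entry['actor']} -> {entry['event']}: {detail}")
    return "\n".join(lines)
```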

3. Persistent Session Memory

To provide a human-like assistance experience for your end users, the AI Supervisor node uses a Redis-backed high-speed Memory system.

By mapping a unique Session ID field directly in the Node parameters (such as a customer's Zalo User ID, or an internal UUID), the node securely isolates and stores each session's conversation history. The node includes a Max Memory Limit control that automatically truncates the oldest messages on a First-In-First-Out (FIFO) basis to keep requests within API limits.
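The session-memory behavior can be modeled in a few lines. This sketch uses an in-process `deque` per session to stand in for the Redis store; the class name and methods are illustrative, not nLink's API, but the FIFO truncation mirrors the Max Memory Limit described above.

```python
from collections import defaultdict, deque

class SessionMemory:
    """FIFO session memory sketch. The real node persists history in Redis;
    here a per-session deque with maxlen models the Max Memory Limit."""

    def __init__(self, max_messages=20):
        # Each session gets its own bounded history; deque(maxlen=...)
        # silently drops the oldest entry when full (FIFO truncation).
        self._store = defaultdict(lambda: deque(maxlen=max_messages))

    def append(self, session_id, role, content):
        self._store[session_id].append({"role": role, "content": content})

    def history(self, session_id):
        # Return a plain list so callers can't mutate the internal deque.
        return list(self._store[session_id])
```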

4. Strict Structured Output (JSON Mode)

When the JSON Output Only setting is enabled, nLink enforces strict JSON-only responses using each provider's native configuration across OpenAI, Gemini, and Anthropic. The agent returns a raw, well-structured JSON object ready for immediate downstream processing.
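Each provider requests strict JSON differently; the sketch below shows the kind of per-provider mapping involved. The OpenAI and Gemini parameter names reflect their public APIs at the time of writing; the Anthropic prefill trick is a common community technique, since Anthropic exposes no dedicated JSON-mode flag. How nLink applies these internally is an assumption here.

```python
def json_mode_config(provider):
    """Sketch of per-provider settings that request strict JSON output."""
    if provider == "openai":
        # Chat Completions API: response_format requests a JSON object.
        return {"response_format": {"type": "json_object"}}
    if provider == "gemini":
        # Gemini generation config: force an application/json response.
        return {"generation_config": {"response_mime_type": "application/json"}}
    if provider == "anthropic":
        # No native JSON flag; prefilling the assistant turn with "{"
        # is a common technique to coerce raw JSON output.
        return {"prefill": {"role": "assistant", "content": "{"}}
    raise ValueError(f"unknown provider: {provider}")
```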