Sequential Agents
Learn the Fundamentals of Sequential Agents in Flowise, written by @toi500
This guide offers a complete overview of the Sequential Agent AI system architecture within Flowise, exploring its core components and workflow design principles.
Disclaimer: This documentation is intended to help Flowise users understand and build conversational workflows using the Sequential Agent system architecture. It is not intended to be a comprehensive technical reference for the LangGraph framework and should not be interpreted as defining industry standards or core LangGraph concepts.
Built on top of LangGraph, Flowise's Sequential Agents architecture facilitates the development of conversational agentic systems by structuring the workflow as a directed cyclic graph (DCG), allowing controlled loops and iterative processes.
This graph, composed of interconnected nodes, defines the sequential flow of information and actions, enabling the agents to process inputs, execute tasks, and generate responses in a structured manner.
This architecture simplifies the management of complex conversational workflows by defining a clear and understandable sequence of operations through its DCG structure.
Let's explore some key elements of this approach:
Node-based processing: Each node in the graph represents a discrete processing unit, encapsulating its own functionality like language processing, tool execution, or conditional logic.
Data flow as connections: Edges in the graph represent the flow of data between nodes, where the output of one node becomes the input for the subsequent node, enabling a chain of processing steps.
State management: State is managed as a shared object, persisting throughout the conversation. This allows nodes to access relevant information as the workflow progresses.
While both Multi-Agent and Sequential Agent systems in Flowise are built upon the LangGraph framework and share the same fundamental principles, the Sequential Agent architecture provides a lower level of abstraction, offering more granular control over every step of the workflow.
Multi-Agent systems, which are characterized by a hierarchical structure with a central supervisor agent delegating tasks to specialized worker agents, excel at handling complex workflows by breaking them down into manageable sub-tasks. This decomposition into sub-tasks is made possible by pre-configuring core system elements under the hood, such as condition nodes, which would require manual setup in a Sequential Agent system. As a result, users can more easily build and manage teams of agents.
In contrast, Sequential Agent systems operate like a streamlined assembly line, where data flows sequentially through a chain of nodes, making them ideal for tasks demanding a precise order of operations and incremental data refinement. Compared to the Multi-Agent system, their lower-level access to the underlying workflow structure makes them fundamentally more flexible and customizable: they offer parallel node execution and full control over the system logic, incorporating Condition, State, and Loop Nodes into the workflow to enable new dynamic branching capabilities.
Flowise's Sequential Agents offer new capabilities for creating conversational systems that can adapt to user input, make decisions based on context, and perform iterative tasks.
These capabilities are made possible by the introduction of four new core nodes: the State Node, the Loop Node, and two Condition Nodes.
State Node: We define State as a shared data structure that represents the current snapshot of our application or workflow. The State Node allows us to add a custom State to our workflow from the start of the conversation. This custom State is accessible and modifiable by other nodes in the workflow, enabling dynamic behavior and data sharing.
Loop Node: This node introduces controlled cycles within the Sequential Agent workflow, enabling iterative processes where a sequence of nodes can be repeated based on specific conditions. This allows agents to refine outputs, gather additional information from the user, or perform tasks multiple times.
Condition Nodes: The Condition and Condition Agent Node provide the necessary control to create complex conversational flows with branching paths. The Condition Node evaluates conditions directly, while the Condition Agent Node uses an agent's reasoning to determine the branching logic. This allows us to dynamically guide the flow's behavior based on user input, the custom State, or results of actions taken by other nodes.
Selecting the ideal system for your application depends on understanding your specific workflow needs. Factors like task complexity, the need for parallel processing, and your desired level of control over data flow are all key considerations.
For simplicity: If your workflow is relatively straightforward, with tasks completed one after the other, and therefore does not require parallel node execution or Human-in-the-Loop (HITL), the Multi-Agent approach offers ease of use and quick setup.
For flexibility: If your workflow needs parallel execution, dynamic conversations, custom State management, and the ability to incorporate HITL, the Sequential Agent approach provides the necessary flexibility and control.
Here's a table comparing Multi-Agent and Sequential Agent implementations in Flowise, highlighting key differences and design considerations:
| | Multi-Agent | Sequential Agent |
|---|---|---|
| Structure | Hierarchical; Supervisor delegates to specialized Workers. | Linear, cyclic, and/or branching; nodes connect in a sequence, with conditional logic for branching. |
| Workflow | Flexible; designed for breaking down a complex task into a sequence of sub-tasks, completed one after another. | Highly flexible; supports parallel node execution, complex dialogue flows, branching logic, and loops within a single conversation turn. |
| Parallel Node Execution | No; Supervisor handles one task at a time. | Yes; can trigger multiple actions in parallel within a single run. |
| State Management | Implicit; State is in place, but is not explicitly managed by the developer. | Explicit; State is in place, and developers can define and manage an initial or custom State using the State Node and the "Update State" field in various nodes. |
| Tool Usage | Workers can access and use tools as needed. | Tools are accessed and executed through Agent Nodes and Tool Nodes. |
| Human-in-the-Loop (HITL) | HITL is not supported. | Supported through the Agent Node and Tool Node's "Require Approval" feature, allowing human review and approval or rejection of tool execution. |
| Complexity | Higher level of abstraction; simplifies workflow design. | Lower level of abstraction; more complex workflow design, requiring careful planning of node interactions, custom State management, and conditional logic. |
| Ideal Use Cases | | |
Note: Even though Multi-Agent systems are technically a higher-level layer built upon the Sequential Agent architecture, they offer a distinct user experience and approach to workflow design. The comparison above treats them as separate systems to help you select the best option for your specific needs.
Sequential Agents bring a whole new dimension to Flowise, introducing 10 specialized nodes, each serving a specific purpose, offering more control over how our conversational agents interact with users, process information, make decisions, and execute actions.
The following sections aim to provide a comprehensive understanding of each node's functionality, inputs, outputs, and best practices, ultimately enabling you to craft sophisticated conversational workflows for a variety of applications.
As its name implies, the Start Node is the entry point for all workflows in the Sequential Agent architecture. It receives the initial user query, initializes the conversation State, and sets the flow in motion.
The Start Node ensures that our conversational workflows have the necessary setup and context to function correctly. It's responsible for setting up key functionalities that will be used throughout the rest of the workflow:
Defining the default LLM: The Start Node requires us to specify a Chat Model (LLM) compatible with function calling, enabling agents in the workflow to interact with tools and external systems. It will be the default LLM used under the hood in the workflow.
Initializing Memory: We can optionally connect an Agent Memory Node to store and retrieve conversation history, enabling more context-aware responses.
Setting a custom State: By default, the State contains an immutable `state.messages` array, which acts as the transcript or history of the conversation between the user and the agents. The Start Node allows you to connect a custom State to the workflow by adding a State Node, enabling the storage of additional information relevant to your workflow.
Enabling moderation: Optionally, we can connect Input Moderation to analyze the user's input and prevent potentially harmful content from being sent to the LLM.
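For orientation, here is a rough sketch of what the shared State might conceptually look like once the Start Node has initialized it. The exact internal representation is managed by Flowise/LangGraph; message objects are simplified and `userName` is a hypothetical custom key added via a State Node:

```javascript
// Conceptual sketch only: the shared State right after the Start Node runs.
const state = {
  // Default, append-only transcript of the conversation.
  messages: [
    { role: "user", content: "Hi, I need help booking a flight." }
  ],
  // Hypothetical custom key added via a State Node (covered later in this guide).
  userName: null
};
```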
| Input | Required | Description |
|---|---|---|
| Chat Model | Yes | The default LLM that will power the conversation. Only compatible with models that are capable of function calling. |
| Agent Memory Node | No | Connect an Agent Memory Node to enable persistence and context preservation. |
| State Node | No | Connect a State Node to set a custom State, a shared context that can be accessed and modified by other nodes in the workflow. |
| Input Moderation | No | Connect a Moderation Node to filter content by detecting text that could generate harmful output, preventing it from being sent to the LLM. |
The Start Node can connect to the following nodes as outputs:
Agent Node: Routes the conversation flow to an Agent Node, which can then execute actions or access tools based on the conversation's context.
LLM Node: Routes the conversation flow to an LLM Node for processing and response generation.
Condition Agent Node: Connects to a Condition Agent Node to implement branching logic based on the agent's evaluation of the conversation.
Condition Node: Connects to a Condition Node to implement branching logic based on predefined conditions.
Choose the right Chat Model
Ensure your selected LLM supports function calling, a key feature for enabling agent-tool interactions. Additionally, choose an LLM that aligns with the complexity and requirements of your application. You can override the default LLM by setting it at the Agent/LLM/Condition Agent node level when necessary.
Consider context and persistence
If your use case demands it, utilize Agent Memory Node to maintain context and personalize interactions.
The Agent Memory Node provides a mechanism for persistent memory storage, allowing the Sequential Agent workflow to retain the conversation history (`state.messages`) and any custom State previously defined across multiple interactions.
This long-term memory is essential for agents to learn from previous interactions, maintain context over extended conversations, and provide more relevant responses.
By default, Flowise utilizes its built-in SQLite database to store conversation history and custom state data, creating a "checkpoints" table to manage this persistent information.
This table stores snapshots of the system's State at various points during a conversation, enabling the persistence and retrieval of conversation history. Each row represents a specific point or "checkpoint" in the workflow's execution.
thread_id: A unique identifier representing a specific conversation session, our session ID. It groups together all checkpoints related to a single workflow execution.
checkpoint_id: A unique identifier for each execution step (node execution) within the workflow. It helps track the order of operations and identify the State at each step.
parent_id: Indicates the checkpoint_id of the preceding execution step that led to the current checkpoint. This establishes a hierarchical relationship between checkpoints, allowing for the reconstruction of the workflow's execution flow.
checkpoint: Contains a JSON string representing the current State of the workflow at that specific checkpoint. This includes the values of variables, the messages exchanged, and any other relevant data captured at that point in the execution.
metadata: Provides additional context about the checkpoint, specifically related to node operations.
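As an illustration only (the actual schema and serialized format are handled internally by Flowise), a single row in the checkpoints table might conceptually look like this; all values below are hypothetical:

```javascript
// Hypothetical, simplified view of one row in the "checkpoints" table.
const checkpointRow = {
  thread_id: "session-1234",        // groups all checkpoints of one conversation session
  checkpoint_id: "step-0003",       // unique ID for this execution step (node execution)
  parent_id: "step-0002",           // the execution step that led to this checkpoint
  checkpoint: JSON.stringify({      // snapshot of the State at this point
    messages: ["...conversation so far..."],
    userName: "Alice"
  }),
  metadata: JSON.stringify({ node: "AgentNode" })  // context about the node operation
};
```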
As a Sequential Agent workflow executes, the system records a checkpoint in this table for each significant step. This mechanism provides several benefits:
Execution tracking: Checkpoints enable the system to understand the execution path and the order of operations within the workflow.
State management: Checkpoints store the State of the workflow at each step, including variable values, conversation history, and any other relevant data. This allows the system to maintain contextual awareness and make informed decisions based on the current State.
Workflow resumption: If the workflow is paused or interrupted (e.g., due to a system error or user request), the system can use the stored checkpoints to resume execution from the last recorded State. This ensures that the conversation or task continues from where it left off, preserving the user's progress and preventing data loss.
The Agent Memory Node has no specific input connections.
| Parameter | Required | Description |
|---|---|---|
| Database | Yes | The type of database used for storing conversation history. Currently, only SQLite is supported. |

| Parameter | Required | Description |
|---|---|---|
| Database File Path | No | The file path to the SQLite database file. If not provided, the system will use a default location. |
The Agent Memory Node interacts solely with the Start Node, making the conversation history available from the very beginning of the workflow.
Strategic use
Employ Agent Memory only when necessary. For simple, stateless interactions, it might be overkill. Reserve it for scenarios where retaining information across turns or sessions is essential.
The State Node, which can only be connected to the Start Node, provides a mechanism to set a user-defined or custom State into our workflow from the start of the conversation. This custom State is a JSON object that is shared and can be updated by nodes in the graph, passing from one node to another as the flow progresses.
By default, the State includes a `state.messages` array, which acts as our conversation history. This array stores all messages exchanged between the user and the agents, or any other actors in the workflow, and preserves them throughout the workflow execution.

Since this `state.messages` array is, by definition, immutable and cannot be modified, the purpose of the State Node is to let us define custom key-value pairs, expanding the State object to hold any additional information relevant to our workflow.
When no Agent Memory Node is used, the State operates in-memory and is not persisted for future use.
The State Node has no specific input connections.
The State Node can only connect to the Start Node, allowing the setup of a custom State from the beginning of the workflow and allowing other nodes to access and potentially modify this shared custom State.
| Parameter | Required | Description |
|---|---|---|
| Custom State | Yes | A JSON object representing the initial custom State of the workflow. This object can contain any key-value pairs relevant to the application. |
Specify the key, operation type, and default value for the state object. The operation type can be either "Replace" or "Append".
| Operation | Description |
|---|---|
| Replace | Replaces the existing value with the new value. If the new value is null, the existing value is retained. |
| Append | Appends the new value to the existing value. Default values can be empty or an array, e.g. ["a", "b"]. The final value is always an array. |
To define a custom State using the table interface in the State Node, follow these steps:
Add item: Click the "+ Add Item" button to add rows to the table. Each row represents a key-value pair in your custom State.
Specify keys: In the "Key" column, enter the name of each key you want to define in your state object. For example, you might have keys like "userName", "userLocation", etc.
Choose operations: In the "Operation" column, select the desired operation for each key. You have two options:
Replace: This will replace the existing value of the key with the new value provided by a node. If the new value is null, the existing value will be retained.
Append: This will append the new value to the existing value of the key. The final value will be an array.
Set default values: In the "Default Value" column, enter the initial value for each key. This value will be used if no other node provides a value for the key. The default value can be empty or an array.
| Key | Operation | Default Value |
|---|---|---|
| userName | Replace | null |

This table defines one key in the custom State: `userName`.

The `userName` key uses the "Replace" operation, meaning its value will be updated whenever a node provides a new value.

The `userName` key has a default value of null, indicating that it has no initial value.
Remember that this table-based approach is an alternative to defining the custom State using JavaScript. Both methods achieve the same result.
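As a rough illustration of that equivalence, and assuming the JavaScript form follows LangGraph-style reducers (each key pairs a combine function with a default), the `userName` example above might be written along these lines. Treat the exact shape as an assumption and check the in-app code editor's placeholder for the authoritative format:

```javascript
// Hypothetical JS equivalent of the table above: one key, "Replace" semantics, default null.
const customState = {
  userName: {
    // "Replace": take the new value, but keep the existing one if the new value is null.
    value: (oldValue, newValue) => newValue ?? oldValue,
    default: () => null
  }
  // An "Append" key would instead combine values into an array, e.g.
  // value: (oldValue, newValue) => oldValue.concat(newValue), with default: () => []
};
```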
Plan your custom State structure
Before building your workflow, design the structure of your custom State. A well-organized custom State will make your workflow easier to understand, manage, and debug.
Use meaningful key names
Choose descriptive and consistent key names that clearly indicate the purpose of the data they hold. This will improve the readability of your code and make it easier for others (or you in the future) to understand how the custom State is being used.
Keep custom State minimal
Only store information in the custom State that is essential for the workflow's logic and decision-making.
Consider State persistence
If you need to preserve State across multiple conversation sessions (e.g., for user preferences, order history, etc.), use the Agent Memory Node to store the State in a persistent database.
The Agent Node is a core component of the Sequential Agent architecture. It acts as a decision-maker and orchestrator within our workflow.
Upon receiving input from preceding nodes, which always includes the full conversation history (`state.messages`) and any custom State at that point in the execution, the Agent Node uses its defined "persona", established by the System Prompt, to determine if external tools are necessary to fulfill the user's request.
If tools are required, the Agent Node autonomously selects and executes the appropriate tool. This execution can be automatic or, for sensitive tasks, require human approval (HITL) before proceeding. Once the tool completes its operation, the Agent Node receives the results, processes them using the designated Chat Model (LLM), and generates a comprehensive response.
In cases where no tools are needed, the Agent Node directly leverages the Chat Model (LLM) to formulate a response based on the current conversation context.
| Input | Required | Description |
|---|---|---|
| External Tools | No | Provides the Agent Node with access to a suite of external tools, enabling it to perform actions and retrieve information. |
| Chat Model | No | Add a new Chat Model to override the default Chat Model (LLM) of the workflow. Only compatible with models that are capable of function calling. |
| Start Node | Yes | Receives the initial user input, along with the custom State (if set up) and the rest of the default State, including the `state.messages` conversation history. |
| Condition Node | Yes | Receives input from a preceding Condition Node, enabling the Agent Node to take actions or guide the conversation based on the outcome of the Condition Node's evaluation. |
| Condition Agent Node | Yes | Receives input from a preceding Condition Agent Node, enabling the Agent Node to take actions or guide the conversation based on the outcome of the Condition Agent Node's evaluation. |
| Agent Node | Yes | Receives input from a preceding Agent Node, enabling chained agent actions and maintaining conversational context. |
| LLM Node | Yes | Receives the output from an LLM Node, enabling the Agent Node to process the LLM's response. |
| Tool Node | Yes | Receives the output from a Tool Node, enabling the Agent Node to process and integrate the tool's outputs into its response. |
The Agent Node requires at least one connection from the following nodes: Start Node, Agent Node, Condition Node, Condition Agent Node, LLM Node, or Tool Node.
The Agent Node can connect to the following nodes as outputs:
Agent Node: Passes control to a subsequent Agent Node, enabling the chaining of multiple agent actions within a workflow. This allows for more complex conversational flows and task orchestration.
LLM Node: Passes the agent's output to an LLM Node, enabling further language processing, response generation, or decision-making based on the agent's actions and insights.
Condition Agent Node: Directs the flow to a Condition Agent Node. This node evaluates the Agent Node's output and its predefined conditions to determine the appropriate next step in the workflow.
Condition Node: Similar to the Condition Agent Node, the Condition Node uses predefined conditions to assess the Agent Node's output, directing the flow along different branches based on the outcome.
End Node: Concludes the conversation flow.
Loop Node: Redirects the flow back to a previous node, enabling iterative or cyclical processes within the workflow. This is useful for tasks that require multiple steps or involve refining results based on previous interactions. For example, you might loop back to an earlier Agent Node or LLM Node to gather additional information or refine the conversation flow based on the current Agent Node's output.
| Parameter | Required | Description |
|---|---|---|
| Agent Name | Yes | Add a descriptive name to the Agent Node to enhance workflow readability and make it easy to target when using loops within the workflow. |
| System Prompt | No | Defines the agent's 'persona' and guides its behavior. For example, "You are a customer service agent specializing in technical support [...]." |
| Require Approval | No | Activates the Human-in-the-Loop (HITL) feature. If set to 'True,' the Agent Node will request human approval before executing any tool. This is particularly valuable for sensitive operations or when human oversight is desired. Defaults to 'False,' allowing the Agent Node to execute tools autonomously. |
| Parameter | Required | Description |
|---|---|---|
| Human Prompt | No | This prompt is appended to the conversation history (`state.messages`) as a human message. |
| Approval Prompt | No | A customizable prompt presented to the human reviewer when the HITL feature is active. This prompt provides context about the tool execution, including the tool's name and purpose. |
| Approve Button Text | No | Customizes the text displayed on the button for approving tool execution in the HITL interface. This allows for tailoring the language to the specific context and ensuring clarity for the human reviewer. |
| Reject Button Text | No | Customizes the text displayed on the button for rejecting tool execution in the HITL interface. Like the Approve Button Text, this customization enhances clarity and provides a clear action for the human reviewer to take if they deem the tool execution unnecessary or potentially harmful. |
| Update State | No | Provides a mechanism to modify the shared custom State object within the workflow. This is useful for storing information gathered by the agent or influencing the behavior of subsequent nodes. |
| Max Iteration | No | Limits the number of iterations an Agent Node can make within a single workflow execution. |
Clear system prompt
Craft a concise and unambiguous System Prompt that accurately reflects the agent's role and capabilities. This guides the agent's decision-making and ensures it acts within its defined scope.
Strategic tool selection
Choose and configure the tools available to the Agent Node, ensuring they align with the agent's purpose and the overall goals of the workflow.
HITL for sensitive tasks
Utilize the 'Require Approval' option for tasks involving sensitive data, requiring human judgment, or carrying a risk of unintended consequences.
Leverage custom State updates
Update the custom State object strategically to store gathered information or influence the behavior of downstream nodes.
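As a hedged sketch of what such an update might look like when the "Update State" field is used in code form, assuming the node's code editor exposes the current flow context through a `$flow` helper (both the variable name and the State key below are assumptions, not a documented contract):

```javascript
// Hypothetical Update State snippet, run inside the node's code field:
// store the node's latest answer under a custom State key.
return {
  lastAgentAnswer: $flow.output.content   // assumes $flow.output exposes the node's output
};
```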
Like the Agent Node, the LLM Node is a core component of the Sequential Agent architecture. Both nodes utilize the same Chat Models (LLMs) by default, providing the same basic language processing capabilities, but the LLM Node distinguishes itself in these key areas.
While a detailed comparison between the LLM Node and the Agent Node is available in this section, here's a brief overview of the LLM Node's key advantages:
Structured data: The LLM Node provides a dedicated feature to define a JSON schema for its output. This makes it exceptionally easy to extract structured information from the LLM's responses and pass that data to subsequent nodes in the workflow (see the sketch after this list). The Agent Node does not have this built-in JSON schema feature.
HITL: While both nodes support HITL for tool execution, the LLM Node defers this control to the Tool Node itself, providing more flexibility in workflow design.
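As an illustration, suppose the LLM Node's JSON Structured Output schema is defined with a single, hypothetical key:

| Key | Type | Enum Values | Description |
|---|---|---|---|
| intent | Enum | refund, technical_support, other | The category of the user's request. |

Downstream nodes would then receive the LLM's answer as structured data roughly of this shape (a sketch, not the exact wire format):

```javascript
// Hypothetical structured output produced under the schema above.
const llmStructuredOutput = {
  intent: "technical_support"   // expected to be one of the enum values
};
```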
| Input | Required | Description |
|---|---|---|
| Chat Model | No | Add a new Chat Model to override the default Chat Model (LLM) of the workflow. Only compatible with models that are capable of function calling. |
| Start Node | Yes | Receives the initial user input, along with the custom State (if set up) and the rest of the default State, including the `state.messages` conversation history. |
| Agent Node | Yes | Receives output from an Agent Node, which may include tool execution results or agent-generated responses. |
| Condition Node | Yes | Receives input from a preceding Condition Node, enabling the LLM Node to take actions or guide the conversation based on the outcome of the Condition Node's evaluation. |
| Condition Agent Node | Yes | Receives input from a preceding Condition Agent Node, enabling the LLM Node to take actions or guide the conversation based on the outcome of the Condition Agent Node's evaluation. |
| LLM Node | Yes | Receives output from another LLM Node, enabling chained reasoning or information processing across multiple LLM Nodes. |
| Tool Node | Yes | Receives output from a Tool Node, providing the results of tool execution for further processing or response generation. |
The LLM Node requires at least one connection from the following nodes: Start Node, Agent Node, Condition Node, Condition Agent Node, LLM Node, or Tool Node.
| Parameter | Required | Description |
|---|---|---|
| LLM Node Name | Yes | Add a descriptive name to the LLM Node to enhance workflow readability and make it easy to target when using loops within the workflow. |
The LLM Node can connect to the following nodes as outputs:
Agent Node: Passes the LLM's output to an Agent Node, which can then use the information to decide on actions, execute tools, or guide the conversation flow.
LLM Node: Passes the output to a subsequent LLM Node, enabling chaining of multiple LLM operations. This is useful for tasks like refining text generation, performing multiple analyses, or breaking down complex language processing into stages.
Tool Node: Passes the output to a Tool Node, enabling the execution of a specific tool based on the LLM Node's instructions.
Condition Agent Node: Directs the flow to a Condition Agent Node. This node evaluates the LLM Node's output and its predefined conditions to determine the appropriate next step in the workflow.
Condition Node: Similar to the Condition Agent Node, the Condition Node uses predefined conditions to assess the LLM Node's output, directing the flow along different branches based on the outcome.
End Node: Concludes the conversation flow.
Loop Node: Redirects the flow back to a previous node, enabling iterative or cyclical processes within the workflow. This could be used to refine the LLM's output over multiple iterations.
| Parameter | Required | Description |
|---|---|---|
| System Prompt | No | Defines the LLM Node's 'persona' and guides its behavior. For example, "You are a customer service agent specializing in technical support [...]." |
| Human Prompt | No | This prompt is appended to the conversation history (`state.messages`) as a human message. |
| JSON Structured Output | No | Instructs the LLM (Chat Model) to return its output in a defined JSON schema (Key, Type, Enum Values, Description). |
| Update State | No | Provides a mechanism to modify the shared custom State object within the workflow. This is useful for storing information gathered by the LLM Node or influencing the behavior of subsequent nodes. |
Clear system prompt
Craft a concise and unambiguous System Prompt that accurately reflects the LLM Node's role and capabilities. This guides the LLM Node's decision-making and ensures it acts within its defined scope.
Optimize for structured output
Keep your JSON schemas as straightforward as possible, focusing on the essential data elements. Only enable JSON Structured Output when you need to extract specific data points from the LLM's response or when downstream nodes require JSON input.
Strategic tool selection
Choose and configure the tools available to the LLM Node (via the Tool Node), ensuring they align with the application purpose and the overall goals of the workflow.
HITL for sensitive tasks
Utilize the Tool Node's 'Require Approval' option for tasks involving sensitive data, requiring human judgment, or carrying a risk of unintended consequences.
Leverage State updates
Update the custom State object strategically to store gathered information or influence the behavior of downstream nodes.
The Tool Node is a valuable component of Flowise's Sequential Agent system, enabling the integration and execution of external tools within conversational workflows. It acts as a bridge between the language-based processing of LLM Nodes and the specialized functionalities of external tools, APIs, or services.
The Tool Node's primary function is to execute external tools based on instructions received from an LLM Node and to provide flexibility for Human-in-the-Loop (HITL) intervention in the tool execution process.
Tool Call Reception: The Tool Node receives input from an LLM Node. If the LLM's output contains the `tool_calls` property, the Tool Node proceeds with tool execution; otherwise, it does not execute any tools in that particular workflow run.

Execution: The Tool Node passes the LLM's `tool_calls` (which include the tool name and any required parameters; a rough sketch of such a payload follows this list) directly to the specified external tool. It does not process or interpret the LLM's output in any way.
Human-in-the-Loop (HITL): The Tool Node allows for optional HITL, enabling human review and approval or rejection of tool execution before it occurs.
Output passing: After the tool execution (either automatic or after HITL approval), the Tool Node receives the tool's output and passes it to the next node in the workflow. If the Tool Node's output is not connected to a subsequent node, the tool's output is returned to the original LLM Node for further processing.
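To make the hand-off concrete, here is a hypothetical sketch of the kind of `tool_calls` payload an LLM Node might attach to its output when it decides a tool is needed. The exact structure is determined by the underlying Chat Model and LangChain/LangGraph, so treat the field and tool names as illustrative assumptions:

```javascript
// Illustrative only: an LLM Node output whose tool_calls property triggers the Tool Node.
const llmOutput = {
  content: "",                      // no direct answer; the model wants to call a tool
  tool_calls: [
    {
      name: "flight_search",        // hypothetical tool name
      args: { from: "MAD", to: "NYC", month: "July" }   // parameters for that tool
    }
  ]
};
```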
| Input | Required | Description |
|---|---|---|
| LLM Node | Yes | Receives the output from an LLM Node, which may or may not contain `tool_calls`. |
| External Tools | No | Provides the Tool Node with access to a suite of external tools, enabling it to perform actions and retrieve information. |
| Parameter | Required | Description |
|---|---|---|
| Tool Node Name | Yes | Add a descriptive name to the Tool Node to enhance workflow readability. |
| Require Approval (HITL) | No | Activates the Human-in-the-Loop (HITL) feature. If set to 'True,' the Tool Node will request human approval before executing any tool. This is particularly valuable for sensitive operations or when human oversight is desired. Defaults to 'False,' allowing the Tool Node to execute tools autonomously. |
The Tool Node can connect to the following nodes as outputs:
Agent Node: Passes the Tool Node's output (the result of the executed tool) to an Agent Node. The Agent Node can then use this information to decide on actions, execute further tools, or guide the conversation flow.
LLM Node: Passes the output to a subsequent LLM Node. This enables the integration of tool results into the LLM's processing, allowing for further analysis or refinement of the conversation flow based on the tool's output.
Condition Agent Node: Directs the flow to a Condition Agent Node. This node evaluates the Tool Node's output and its predefined conditions to determine the appropriate next step in the workflow.
Condition Node: Similar to the Condition Agent Node, the Condition Node uses predefined conditions to assess the Tool Node's output, directing the flow along different branches based on the outcome.
End Node: Concludes the conversation flow.
Loop Node: Redirects the flow back to a previous node, enabling iterative or cyclical processes within the workflow. This could be used for tasks that require multiple tool executions or involve refining the conversation based on tool results.
| Parameter | Required | Description |
|---|---|---|
| Approval Prompt | No | A customizable prompt presented to the human reviewer when the HITL feature is active. This prompt provides context about the tool execution, including the tool's name and purpose. |
| Approve Button Text | No | Customizes the text displayed on the button for approving tool execution in the HITL interface. This allows for tailoring the language to the specific context and ensuring clarity for the human reviewer. |
| Reject Button Text | No | Customizes the text displayed on the button for rejecting tool execution in the HITL interface. Like the Approve Button Text, this customization enhances clarity and provides a clear action for the human reviewer to take if they deem the tool execution unnecessary or potentially harmful. |
| Update State | No | Provides a mechanism to modify the custom State object within the workflow. This is useful for storing information gathered by the Tool Node (after the tool execution) or influencing the behavior of subsequent nodes. |
Strategic HITL placement
Consider which tools require human oversight (HITL) and enable the "Require Approval" option accordingly.
Informative Approval Prompts
When using HITL, design clear and informative prompts for human reviewers. Provide sufficient context from the conversation and summarize the tool's intended action.
The Condition Node acts as a decision-making point in Sequential Agent workflows, evaluating a set of predefined conditions to determine the flow's next path.
The Condition Node is essential for building workflows that adapt to different situations and user inputs. It examines the current State of the conversation, which includes all messages exchanged and any custom State variables previously defined. Then, based on the evaluation of the conditions specified in the node setup, the Condition Node directs the flow to one of its outputs.
For instance, after an Agent or LLM Node provides a response, a Condition Node could check if the response contains a specific keyword or if a certain condition is met in the custom State. If it does, the flow might be directed to an Agent Node for further action. If not, it could lead to a different path, perhaps ending the conversation or prompting the user with additional questions.
This enables us to create branches in our workflow, where the path taken depends on the data flowing through the system.
The Condition Node receives input from any preceding node: Start Node, Agent Node, LLM Node, or Tool Node.
It has access to the full conversation history and the custom State (if any), giving it plenty of context to work with.
We define a condition that the node will evaluate. This could be checking for keywords, comparing values in the state, or any other logic we could implement via JavaScript.
Based on whether the condition evaluates to true or false, the Condition Node sends the flow down one of its predefined output paths. This creates a "fork in the road" or branch for our workflow.
The Condition Node allows us to define dynamic branching logic in our workflow by choosing either a table-based interface or a JavaScript code editor to define the conditions that will control the conversation flow.
This visual approach allows you to easily set up rules that determine the path of your conversational flow, based on factors like user input, the current state of the conversation, or the results of actions taken by other nodes.
| Input | Required | Description |
|---|---|---|
| Start Node | Yes | Receives the State from the Start Node. This allows the Condition Node to evaluate conditions based on the initial context of the conversation, including any custom State. |
| Agent Node | Yes | Receives the Agent Node's output. This enables the Condition Node to make decisions based on the agent's actions and the conversation history, including any custom State. |
| LLM Node | Yes | Receives the LLM Node's output. This allows the Condition Node to evaluate conditions based on the LLM's response and the conversation history, including any custom State. |
| Tool Node | Yes | Receives the Tool Node's output. This enables the Condition Node to make decisions based on the results of tool execution and the conversation history, including any custom State. |
The Condition Node requires at least one connection from the following nodes: Start Node, Agent Node, LLM Node, or Tool Node.
The Condition Node dynamically determines its output path based on the predefined conditions, using either the table-based interface or JavaScript. This provides flexibility in directing the workflow based on condition evaluations.
Table-Based conditions: The conditions in the table are evaluated sequentially, from top to bottom. The first condition that evaluates to true triggers its corresponding output. If none of the predefined conditions are met, the workflow is directed to the default "End" output.
Code-Based conditions: When using JavaScript, we must explicitly return the name of the desired output path, including a name for the default "End" output.
Single output path: Only one output path is activated at a time. Even if multiple conditions could be true, only the first matching condition determines the flow.
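Tying these points together, a minimal code-based condition might look like the sketch below. It assumes the code editor exposes the shared State as `$flow.state` (check the in-app editor for the exact variables available) and that the workflow defines output paths named "Refine Search" and "End"; both the key and path names are hypothetical:

```javascript
// Hypothetical code-based condition: route based on a custom State key.
const wantsRefinement = $flow.state.refineSearch === "yes";

if (wantsRefinement) {
    return "Refine Search";   // name of the output path to follow
}
return "End";                 // default output path
```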
Each predefined output, including the default "End" output, can be connected to any of the following nodes:
Agent Node: To continue the conversation with an agent, potentially taking actions based on the condition's outcome.
LLM Node: To process the current State and conversation history with an LLM, generating responses or making further decisions.
End Node: To terminate the conversation flow. If any output, including the default "End" output, is connected to an End Node, the Condition Node will output the last response from the preceding node and end the workflow.
Loop Node: To redirect the flow back to a previous sequential node, enabling iterative processes based on the condition's outcome.
| Parameter | Required | Description |
|---|---|---|
| Condition Node Name | No | An optional, human-readable name for the condition being evaluated. This is helpful for understanding the workflow at a glance. |
| Condition | Yes | This is where we define the logic that will be evaluated to determine the output paths. |
Clear condition naming
Use descriptive names for your conditions (e.g., "If user is under 18, then Policy Advisor Agent", "If order is confirmed, then End Node") to make your workflow easier to understand and debug.
Prioritize simple conditions
Start with simple conditions and gradually add complexity as needed. This makes your workflow more manageable and reduces the risk of errors.
The Condition Agent Node provides dynamic and intelligent routing within Sequential Agent flows. It combines the capabilities of the LLM Node (LLM and JSON Structured Output) and the Condition Node (user-defined conditions), allowing us to leverage agent-based reasoning and conditional logic within a single node.
Unified agent-based routing: Combines agent reasoning, structured output, and conditional logic in a single node, simplifying workflow design.
Contextual awareness: The agent considers the entire conversation history and any custom State when evaluating conditions.
Flexibility: Provides both table-based and code-based options for defining conditions, catering to different user preferences and skill levels.
The Condition Agent Node acts as a specialized agent that can both process information and make routing decisions. Here's how to configure it:
Define the agent's persona
In the "System Prompt" field, provide a clear and concise description of the agent's role and the task it needs to perform for conditional routing. This prompt will guide the agent's understanding of the conversation and its decision-making process.
Structure the Agent's Output (Optional)
If you want the agent to produce structured output, use the "JSON Structured Output" feature. Define the desired schema for the output, specifying the keys, data types, and any enum values. This structured output will be used by the agent when evaluating conditions.
Define conditions
Choose either the table-based interface or the JavaScript code editor to define the conditions that will determine the routing behavior.
Table-Based interface: Add rows to the table, specifying the variable to check, the comparison operation, the value to compare against, and the output name to follow if the condition is met.
JavaScript code: Write custom JavaScript snippets to evaluate conditions. Use the return
statement to specify the name of the output path to follow based on the condition's result.
Connect outputs
Connect each predefined output, including the default "End" output, to the appropriate subsequent node in the workflow. This could be an Agent Node, LLM Node, Loop Node, or an End Node.
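Putting those steps together, here is a hedged sketch: suppose the JSON Structured Output schema defines an `issue_type` key (enum: `connectivity`, `software`, `hardware`), and a code-based condition routes on the agent's structured output. The snippet assumes that output is exposed to the editor as `$flow.output`; the variable, key, and output-path names are all assumptions for illustration:

```javascript
// Hypothetical condition over the Condition Agent's structured output.
const issue = $flow.output.issue_type;   // value produced under the JSON schema above

if (issue === "connectivity") return "Network Support";
if (issue === "hardware") return "Hardware Support";
return "End";   // default output path for anything else
```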
The Condition Agent Node allows us to define dynamic branching logic in our workflow by choosing either a table-based interface or a JavaScript code editor to define the conditions that will control the conversation flow.
This visual approach allows you to easily set up rules that determine the path of your conversational flow, based on factors like user input, the current state of the conversation, or the results of actions taken by other nodes.
| Input | Required | Description |
|---|---|---|
| Start Node | Yes | Receives the State from the Start Node. This allows the Condition Agent Node to evaluate conditions based on the initial context of the conversation, including any custom State. |
| Agent Node | Yes | Receives the Agent Node's output. This enables the Condition Agent Node to make decisions based on the agent's actions and the conversation history, including any custom State. |
| LLM Node | Yes | Receives the LLM Node's output. This allows the Condition Agent Node to evaluate conditions based on the LLM's response and the conversation history, including any custom State. |
| Tool Node | Yes | Receives the Tool Node's output. This enables the Condition Agent Node to make decisions based on the results of tool execution and the conversation history, including any custom State. |
The Condition Agent Node requires at least one connection from the following nodes: Start Node, Agent Node, LLM Node, or Tool Node.
| Parameter | Required | Description |
|---|---|---|
| Name | No | Add a descriptive name to the Condition Agent Node to enhance workflow readability and make it easy to target when using loops within the workflow. |
| Condition | Yes | This is where we define the logic that will be evaluated to determine the output paths. |
The Condition Agent Node, like the Condition Node, dynamically determines its output path based on the conditions defined, using either the table-based interface or JavaScript. This provides flexibility in directing the workflow based on condition evaluations.
Table-Based conditions: The conditions in the table are evaluated sequentially, from top to bottom. The first condition that evaluates to true triggers its corresponding output. If none of the predefined conditions are met, the workflow is directed to the default "End" output.
Code-Based conditions: When using JavaScript, we must explicitly return the name of the desired output path, including a name for the default "End" output.
Single output path: Only one output path is activated at a time. Even if multiple conditions could be true, only the first matching condition determines the flow.
Each predefined output, including the default "End" output, can be connected to any of the following nodes:
Agent Node: To continue the conversation with an agent, potentially taking actions based on the condition's outcome.
LLM Node: To process the current State and conversation history with an LLM, generating responses or making further decisions.
End Node: To terminate the conversation flow. If the default "End" output is connected to an End Node, the Condition Agent Node will output the last response from the preceding node and end the conversation.
Loop Node: To redirect the flow back to a previous sequential node, enabling iterative processes based on the condition's outcome.
The Condition Agent Node incorporates an agent's reasoning and structured output into the condition evaluation process.
It provides a more integrated approach to agent-based condition routing.
| Parameter | Required | Description |
|---|---|---|
| System Prompt | No | Defines the Condition Agent's 'persona' and guides its behavior for making routing decisions. For example: "You are a customer service agent specializing in technical support. Your goal is to help customers with technical issues related to our product. Based on the user's query, identify the specific technical issue (e.g., connectivity problems, software bugs, hardware malfunctions)." |
| Human Prompt | No | This prompt is appended to the conversation history (`state.messages`) as a human message. |
| JSON Structured Output | No | Instructs the Condition Agent Node to provide its output in a defined JSON schema (Key, Type, Enum Values, Description). |
Craft a clear and focused system prompt
Provide a well-defined persona and clear instructions to the agent in the System Prompt. This will guide its reasoning and help it generate relevant output for the conditional logic.
Structure output for reliable conditions
Use the JSON Structured Output feature to define a schema for the Condition Agent's output. This will ensure that the output is consistent and easily parsable, making it more reliable for use in conditional evaluations.
The Loop Node allows us to create loops within our conversational flow, redirecting the conversation back to a specific point. This is useful for scenarios where we need to repeat a certain sequence of actions or questions based on user input or specific conditions.
The Loop Node acts as a connector, redirecting the flow back to a specific point in the graph and allowing us to create loops within our conversational flow. It passes the current State, which includes the output of the node preceding the Loop Node, to our target node. This data transfer allows our target node to process information from the previous iteration of the loop and adjust its behavior accordingly.
For instance, let's say we're building a chatbot that helps users book flights. We might use a loop to iteratively refine the search criteria based on user feedback.
LLM Node (Initial Search): The LLM Node receives the user's initial flight request (e.g., "Find flights from Madrid to New York in July"). It queries a flight search API and returns a list of possible flights.
Agent Node (Present Options): The Agent Node presents the flight options to the user and asks if they would like to refine their search (e.g., "Would you like to filter by price, airline, or departure time?").
Condition Agent Node: The Condition Agent Node checks the user's response and has two outputs:
If the user wants to refine: The flow goes to the "Refine Search" LLM Node.
If the user is happy with the results: The flow proceeds to the booking process.
LLM Node (Refine Search): This LLM Node gathers the user's refinement criteria (e.g., "Show me only flights under $500") and updates the State with the new search parameters.
Loop Node: The Loop Node redirects the flow back to the initial LLM Node ("Initial Search"). It passes the updated State, which now includes the refined search criteria.
Iteration: The initial LLM Node performs a new search using the refined criteria, and the process repeats from step 2.
In this example, the Loop Node enables an iterative search refinement process. The system can continue to loop back and refine the search results until the user is satisfied with the options presented.
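In terms of the custom State, one pass through this loop might evolve roughly as follows; the key names are hypothetical and shown only to illustrate how the Loop Node carries refined data back to the target node:

```javascript
// Hypothetical custom State before and after one pass through the loop.
const stateFirstPass = {
  searchCriteria: { from: "Madrid", to: "New York", month: "July" }
};

const stateAfterRefinement = {
  searchCriteria: { from: "Madrid", to: "New York", month: "July", maxPrice: 500 }
};
// The Loop Node carries stateAfterRefinement back to the "Initial Search" LLM Node,
// which re-runs the search with the refined criteria.
```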
| Input | Required | Description |
|---|---|---|
| Agent Node | Yes | Receives the output of a preceding Agent Node. This data is then sent back to the target node specified in the "Loop To" parameter. |
| LLM Node | Yes | Receives the output of a preceding LLM Node. This data is then sent back to the target node specified in the "Loop To" parameter. |
| Tool Node | Yes | Receives the output of a preceding Tool Node. This data is then sent back to the target node specified in the "Loop To" parameter. |
| Condition Node | Yes | Receives the output of a preceding Condition Node. This data is then sent back to the target node specified in the "Loop To" parameter. |
| Condition Agent Node | Yes | Receives the output of a preceding Condition Agent Node. This data is then sent back to the target node specified in the "Loop To" parameter. |
The Loop Node requires at least one connection from the following nodes: Agent Node, LLM Node, Tool Node, Condition Node, or Condition Agent Node.
| Parameter | Required | Description |
|---|---|---|
| Loop To | Yes | The Loop Node requires us to specify the target node ("Loop To") where the conversational flow should be redirected. This target node must be an Agent Node or LLM Node. |
The Loop Node does not have any direct output connections. It redirects the flow back to the target node specified in the "Loop To" parameter.
Clear loop purpose
Define a clear purpose for each loop in your workflow. If possible, document with a sticky note what you're trying to achieve with the loop.
The End Node marks the definitive termination point of the conversation in a Sequential Agent workflow. It signifies that no further processing, actions, or interactions are required.
The End Node serves as a signal within Flowise's Sequential Agent architecture, indicating that the conversation has reached its intended conclusion. Upon reaching the End Node, the system "understands" that the conversational objective has been met, and no further actions or interactions are required within the flow.
| Input | Required | Description |
|---|---|---|
| Agent Node | Yes | Receives the final output from a preceding Agent Node, indicating the end of the agent's processing. |
| LLM Node | Yes | Receives the final output from a preceding LLM Node, indicating the end of the LLM Node's processing. |
| Tool Node | Yes | Receives the final output from a preceding Tool Node, indicating the completion of the Tool Node's execution. |
| Condition Node | Yes | Receives the final output from a preceding Condition Node, indicating the end of the Condition Node's execution. |
| Condition Agent Node | Yes | Receives the final output from a preceding Condition Agent Node, indicating the completion of the Condition Agent Node's processing. |
The End Node requires at least one connection from the following nodes: Agent Node, LLM Node, Tool Node, Condition Node, or Condition Agent Node.
The End Node does not have any output connections as it signifies the termination of the information flow.
Provide a final response
If appropriate, connect a dedicated LLM or Agent Node to the End Node to generate a final message or summary for the user, providing closure to the conversation.
The Condition and Condition Agent Nodes are essential in Flowise's Sequential Agent architecture for creating dynamic conversational experiences.
These nodes enable adaptive workflows, responding to user input, context, and complex decisions, but differ in their approach to condition evaluation and sophistication.
| | Condition Node | Condition Agent Node |
|---|---|---|
| Decision Logic | Based on predefined logical conditions. | Based on the agent's reasoning and structured output. |
| Agent Involvement | No agent involved in condition evaluation. | Uses an agent to process context and generate output for conditions. |
| Structured Output | Not possible. | Possible and encouraged for reliable condition evaluation. |
| Condition Evaluation | Can only define conditions that are checked against the full conversation history. | Can define conditions that are checked against the agent's own output, structured or not. |
| Complexity | Suitable for simple branching logic. | Handles more nuanced and context-aware routing. |
| Ideal Use Cases | | |
Condition Node: Use the Condition Node when your routing logic involves straightforward decisions based on easily definable conditions. For instance, it's perfect for checking for specific keywords, comparing values in the State, or evaluating other simple logical expressions.
Condition Agent Node: However, when your routing demands a deeper understanding of the conversation's nuances, the Condition Agent Node is the better choice. This node acts as your intelligent routing assistant, leveraging an LLM to analyze the conversation, make judgments based on context, and provide structured output that drives more sophisticated and dynamic routing.
It's important to understand that both the LLM Node and the Agent Node can be considered agentic entities within our system, as they both leverage the capabilities of a large language model (LLM) or Chat Model.
However, while both nodes can process language and interact with tools, they are designed for different purposes within a workflow.
| | Agent Node | LLM Node |
|---|---|---|
| Tool Interaction | Directly calls and manages multiple tools, with built-in HITL. | Triggers tools via the Tool Node, with granular HITL control at the tool level. |
| Human-in-the-Loop (HITL) | HITL controlled at the Agent Node level (all connected tools affected). | HITL managed at the individual Tool Node level (more flexibility). |
| Structured Output | Relies on the LLM's natural output format. | Relies on the LLM's natural output format but, if needed, provides JSON schema definition to structure the LLM output. |
| Ideal Use Cases | | |
Choose the Agent Node: Use the Agent Node when you need to create a conversational system that can manage the execution of multiple tools, all of which share the same HITL setting (enabled or disabled for the entire Agent Node). The Agent Node is also well-suited for handling complex multi-step conversations where consistent agent-like behavior is desired.
Choose the LLM Node: On the other hand, use the LLM Node when you need to extract structured data from the LLM's output using the JSON schema feature, a capability not available in the Agent Node. The LLM Node also excels at orchestrating tool execution with fine-grained control over HITL at the individual tool level, allowing you to mix automated and human-reviewed tool executions by using multiple Tool Nodes connected to the LLM Node.