Supervisor and Workers


The Supervisor Worker pattern is a powerful workflow design where a supervisor agent coordinates multiple specialized worker agents to complete complex tasks. This pattern allows for better task delegation, specialized expertise, and iterative refinement of solutions.

Overview

In this tutorial, we'll build a collaborative system with:

  • Supervisor: An LLM that analyzes tasks and decides which worker should act next

  • Software Engineer: Specialized in designing and implementing software solutions

  • Code Reviewer: Focused on reviewing code quality and providing feedback

  • Final Answer Generator: Compiles the collaborative work into a comprehensive solution

Step 1: Create the Start Node

The flow begins with a Start node that captures user input and initializes the workflow state.

  1. Add a Start node to your canvas

  2. Configure the Input Type as "Chat Input"

  3. Set up Flow State with these initial variables:

    • next: To keep track of the next agent

    • instruction: Instruction for the next agent on what to do
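For reference, the initial Flow State can be pictured as a simple key-value map. This is only a conceptual sketch of the two variables configured above, not Flowise internals:

```python
# Initial flow state as configured on the Start node: both variables
# start empty and are populated by the Supervisor on each pass.
flow_state = {
    "next": "",         # which worker should act next (SOFTWARE / REVIEWER / FINISH)
    "instruction": "",  # the sub-task instruction for that worker
}
print(flow_state)
```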

Step 2: Add the Supervisor LLM

The Supervisor is the orchestrator that decides which worker should handle each part of the task.

  1. Connect an LLM node after the Start node

  2. Label it "Supervisor"

  3. Configure the system message, for example:

You are a supervisor tasked with managing a conversation between the following workers:
- Software Engineer  
- Code Reviewer

Given the following user request, respond with the worker to act next.
Each worker will perform a task and respond with their results and status.
When finished, respond with FINISH.
Select strategically to minimize the number of steps taken.
  4. Set up JSON Structured Output with these fields:

    • next: Enum with values "FINISH, SOFTWARE, REVIEWER"

    • instructions: The specific instructions for the sub-task the next worker should accomplish

    • reasoning: The reason why the next worker is tasked to do the job

  5. Configure Update Flow State to store:

    • next: {{ output.next }}

    • instruction: {{ output.instructions }}

  6. Set the Input Message to: "Given the conversation above, who should act next? Or should we FINISH? Select one of: SOFTWARE, REVIEWER." The Input Message is appended at the end of the conversation, as if the user were asking the supervisor to assign the next agent.
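To make the mapping concrete, here is a sketch of how one supervisor turn updates the flow state. The output values are illustrative; only the field names follow the configuration above:

```python
# Example structured output from the Supervisor (values are illustrative).
supervisor_output = {
    "next": "SOFTWARE",  # one of: FINISH, SOFTWARE, REVIEWER
    "instructions": "Implement a React login form with client-side validation.",
    "reasoning": "The feature must be implemented before it can be reviewed.",
}

flow_state = {"next": "", "instruction": ""}

# Equivalent of Update Flow State:
#   next        <- {{ output.next }}
#   instruction <- {{ output.instructions }}
flow_state["next"] = supervisor_output["next"]
flow_state["instruction"] = supervisor_output["instructions"]

print(flow_state["next"])  # SOFTWARE
```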

Step 3: Create the Routing Condition

The "Check next worker" Condition node routes the flow based on the supervisor's decision.

  1. Add a Condition node after the Supervisor

  2. Set up two conditions:

    • Condition 0: {{ $flow.state.next }} equals "SOFTWARE"

    • Condition 1: {{ $flow.state.next }} equals "REVIEWER"

  3. The "Else" branch (Condition 2) will handle the "FINISH" case

This creates three output paths: one for each worker and one for completion.
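The three-way branch can be expressed as plain conditional logic. This is a sketch of what the Condition node evaluates, not Flowise's implementation:

```python
def route(state: dict) -> str:
    """Mirror the Condition node: choose an output branch from the
    supervisor's decision stored in flow state."""
    if state["next"] == "SOFTWARE":
        return "Condition 0"  # Software Engineer branch
    if state["next"] == "REVIEWER":
        return "Condition 1"  # Code Reviewer branch
    return "Else"             # FINISH -> Final Answer Generator

print(route({"next": "REVIEWER"}))  # Condition 1
```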

Step 4: Configure the Software Engineer Agent

The Software Engineer specializes in designing and implementing software solutions.

  1. Connect an Agent node to Condition 0 output

  2. Configure the system message:

As a Senior Software Engineer, you are a pivotal part of our innovative development team. Your expertise and leadership drive the creation of robust, scalable software solutions that meet the needs of our diverse clientele.

Your goal is to lead the development of high-quality software solutions.

Design and implement new features for the given task, ensuring it integrates seamlessly with existing systems and meets performance requirements. Use your understanding of React, TailwindCSS, NodeJS to build this feature. Make sure to adhere to coding standards and follow best practices.

The output should be a fully functional, well-documented feature that enhances our product's capabilities. Include detailed comments in the code.
  3. Set Input Message to: {{ $flow.state.instruction }}. The Input Message is appended at the end of the conversation, as if the user were giving an instruction to the Software Engineer Agent.
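The {{ $flow.state.instruction }} placeholder is resolved against the flow state before the message reaches the agent. The snippet below only illustrates that substitution; it is not Flowise's actual template engine:

```python
import re

def resolve(template: str, state: dict) -> str:
    """Replace {{ $flow.state.<key> }} placeholders with flow-state values."""
    pattern = r"\{\{\s*\$flow\.state\.(\w+)\s*\}\}"
    return re.sub(pattern, lambda m: state[m.group(1)], template)

msg = resolve("{{ $flow.state.instruction }}", {"instruction": "Build the login form"})
print(msg)  # Build the login form
```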

Step 5: Configure the Code Reviewer Agent

The Code Reviewer focuses on quality assurance and code review.

  1. Connect an Agent node to Condition 1 output

  2. Configure the system message:

As a Quality Assurance Engineer, you are an integral part of our development team, ensuring that our software products are of the highest quality. Your meticulous attention to detail and expertise in testing methodologies are crucial in identifying defects and ensuring that our code meets the highest standards.

Your goal is to ensure the delivery of high-quality software through thorough code review and testing.

Review the codebase for the new feature designed and implemented by the Senior Software Engineer. Provide constructive feedback, guiding contributors towards best practices and fostering a culture of continuous improvement. Your approach ensures the delivery of high-quality software that is robust, scalable, and aligned with strategic goals.
  3. Set Input Message to: {{ $flow.state.instruction }}. The Input Message is appended at the end of the conversation, as if the user were giving an instruction to the Code Reviewer Agent.

Step 6: Add Loop Back Connections

Both worker agents need to loop back to the Supervisor for continued coordination.

  1. Add a Loop node after the Software Engineer

    • Set Loop Back To as "Supervisor"

    • Set Max Loop Count to 5

  2. Add another Loop node after the Code Reviewer

    • Set Loop Back To as "Supervisor"

    • Set Max Loop Count to 5

These loops enable iterative collaboration between the agents.
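Putting Steps 2 through 6 together, the overall control loop behaves roughly like the sketch below. The supervisor and workers here are hypothetical stand-ins, and the per-worker counter plays the role of Max Loop Count:

```python
MAX_LOOP_COUNT = 5  # mirrors the Loop node's Max Loop Count setting

def run_flow(supervise, workers, state):
    """Run the supervisor/worker loop until FINISH or a loop cap is hit."""
    loops = {name: 0 for name in workers}
    while True:
        decision = supervise(state)        # e.g. {"next": "SOFTWARE", "instructions": "..."}
        name = decision["next"]
        if name == "FINISH" or loops[name] >= MAX_LOOP_COUNT:
            return state                   # hand off to the Final Answer Generator
        state["instruction"] = decision["instructions"]
        state = workers[name](state)       # the chosen worker acts on the instruction
        loops[name] += 1

# Deterministic demo: the "supervisor" plays back a fixed script of decisions.
decisions = iter([
    {"next": "SOFTWARE", "instructions": "implement"},
    {"next": "REVIEWER", "instructions": "review"},
    {"next": "FINISH", "instructions": ""},
])
workers = {
    "SOFTWARE": lambda s: {**s, "history": s["history"] + ["implemented"]},
    "REVIEWER": lambda s: {**s, "history": s["history"] + ["reviewed"]},
}
final = run_flow(lambda s: next(decisions), workers, {"history": [], "instruction": ""})
print(final["history"])  # ['implemented', 'reviewed']
```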

Step 7: Create the Final Answer Generator

The final agent compiles all the collaborative work into a comprehensive solution.

  1. Connect an Agent node to Condition 2 output (the "Else" branch)

  2. It is recommended to use an LLM with a larger context window, such as Gemini, because the back-and-forth conversation consumes a large number of tokens.

  3. Set the Input Message. This is important because the Input Message is appended at the end of the conversation, as if the user were instructing the Final Answer Generator to review all the conversations and generate a final response:

Given the above conversations, generate a detailed solution developed by the software engineer and code reviewer.

Your guiding principles:
1. **Preserve Full Context**
   Include all code implementations, improvements, and review feedback from the conversation. Do not omit, summarize, or oversimplify key information.

2. **Markdown Output Only** 
   Your final output must be in Markdown format.

How It Works

The Supervisor Worker pattern enables several key benefits:

Intelligent Task Delegation: The supervisor uses context and reasoning to assign the most appropriate worker for each sub-task.

Iterative Refinement: Workers can build upon each other's output, with the software engineer implementing features and the code reviewer providing feedback for improvements.

Stateful Coordination: The flow maintains state across iterations, allowing the supervisor to make informed decisions about what should happen next.

Specialized Expertise: Each agent has a focused role and specialized prompt, leading to higher quality outputs in their domain.

Example Interaction

Here's how a typical interaction might flow:

  1. User: "Create a React component for user authentication with form validation"

  2. Supervisor: Decides SOFTWARE should act first to implement the component

  3. Software Engineer: Creates a React authentication component with validation logic

  4. Supervisor: Decides REVIEWER should examine the implementation

  5. Code Reviewer: Reviews the code and suggests improvements for security and UX

  6. Supervisor: Decides SOFTWARE should implement the suggested improvements

  7. Software Engineer: Updates the component based on feedback

  8. Supervisor: Determines the task is complete and routes to FINISH

  9. Final Answer Generator: Compiles the complete solution with implementation and review feedback

Complete Flow Structure

Best Practices

  • This architecture consumes a lot of tokens due to the back-and-forth communication between agents, so it is not suitable for every use case. It is particularly effective for:

    • Software development tasks requiring both implementation and review

    • Complex problem-solving that benefits from multiple perspectives

    • Workflows where quality and iteration are important

    • Tasks that require coordination between different types of expertise

  • Ensure each agent has a well-defined, specific role. Avoid overlapping responsibilities that could lead to confusion or redundant work.

  • Establish standard formats for how agents communicate their progress, findings, and recommendations. This helps the supervisor make better routing decisions.

  • Use memory settings appropriately to maintain conversation context while avoiding token limit issues. Consider using memory optimization settings like "Conversation Summary Buffer" for longer workflows.

Download the complete example flow: Supervisor Worker Agents.json (71KB)