
Deep Research


Deep Research Agent is a sophisticated multi-agent system that can conduct comprehensive research on any topic by breaking down complex queries into manageable tasks, deploying specialized research agents, and synthesizing findings into detailed reports.

This approach is inspired by Anthropic's blog post, How we built our multi-agent research system.

Overview

The Deep Research Agent workflow consists of several key components working together:

  1. Planner Agent: Analyzes the research query and generates a list of specialized research tasks

  2. Iteration: Creates multiple research agents to work on different aspects of the query

  3. Research SubAgents: Individual agents that conduct focused research using web search and other tools

  4. Writer Agent: Synthesizes all findings into a coherent, comprehensive report

  5. Condition Agent: Determines if additional research is needed or if the findings are sufficient

  6. Loop: Loops back to the Planner Agent to improve research quality

Step 1: Create the Start Node

  1. Begin by adding a Start node to your canvas

  2. Configure the Start node with Form Input to collect the research query from users

  3. Set up the form with the following configuration:

    • Form Title: "Research"

    • Form Description: "A research agent that takes in a query and returns a detailed report"

    • Form Input Types: Add a string input with label "Query" and variable name "query"

  4. Initialize the Flow State with two key variables:

    • subagents: To store the list of research tasks to be carried out by subagents

    • findings: To accumulate research results
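
As a reference, the Start node configuration from this step can be summarized roughly as follows. This is a sketch only: the field names mirror the Start node UI, and the empty initial state values are assumptions.

{
  "formTitle": "Research",
  "formDescription": "A research agent that takes in a query and returns a detailed report",
  "formInputTypes": [
    { "type": "string", "label": "Query", "variableName": "query" }
  ],
  "flowState": [
    { "key": "subagents", "value": "" },
    { "key": "findings", "value": "" }
  ]
}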

Step 2: Add the Planner Agent

  1. Connect an LLM node to the Start node.

  2. Set up the system prompt to act as an expert research lead with the following key responsibilities:

    • Analyze and break down user queries

    • Create detailed research plans

    • Generate specific tasks for subagents

  3. Configure JSON Structured Output to return a list of subagent tasks:

{
  "task": {
    "type": "string", 
    "description": "The research task for subagent"
  }
}
  4. Update the flow state by storing the generated subagents list.
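
For example, assuming the structured output key is named subagents and that {{ output }} resolves to the current node's output in your Flowise version, the Update Flow State entry might look roughly like this (a sketch, not the exact UI format):

{
  "key": "subagents",
  "value": "{{ output.subagents }}"
}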

Step 3: Create the SubAgent Iteration Block

  1. Add an Iteration node.

  2. Connect it to the Planner output

  3. Configure the iteration input to the flow state: {{ $flow.state.subagents }}. For each item in the array, a subagent will be spawned to carry out the research task. Example:

{
  "subagents": [
    {
      "task": "Research the current state and recent developments in autonomous multi-agent systems technology. Focus on defining what autonomous multi-agent systems are, key technical components (coordination algorithms, communication protocols, decision-making frameworks), major technological advances in the last 2-3 years, and leading research institutions/companies working in this space. Use web search to find recent academic papers, industry reports, and technical documentation. Prioritize sources from IEEE, ACM, Nature, Science, and major tech companies' research divisions. Compile findings into a comprehensive technical overview covering definitions, core technologies, recent breakthroughs, and key players in the field."
    },
    {
      "task": "Investigate real-world applications and deployments of autonomous multi-agent systems across different industries. Research specific use cases in robotics (swarm robotics, warehouse automation), transportation (autonomous vehicle fleets, traffic management), manufacturing (coordinated production systems), defense/military applications, smart cities, and any other domains where these systems are actively deployed. For each application area, identify specific companies, products, success stories, and quantitative results where available. Focus on practical implementations rather than theoretical research. Use web search to find case studies, company announcements, industry reports, and news articles about actual deployments."
    }
  ]
}  

Step 4: Build the Research SubAgent

  1. Inside the iteration block, add an Agent node.

  2. Configure the system prompt to act as a focused research subagent with:

    • Clear task understanding capabilities

    • Efficient research planning (2-5 tool calls per task)

    • Source quality evaluation

    • Parallel tool usage for efficiency

  3. Add the following research tools (you can use your own preferred tools):

    • Google Search: For web search links

    • Web Scraper: For web content extraction. This will scrape the content of the links from Google Search.

    • ArXiv Search: For searching and loading content of academic papers

  4. Set the user message to pass the current iteration task: {{ $iteration.task }}
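
To illustrate, with the subagents array from Step 3, the first spawned subagent receives the first item of that array, so {{ $iteration.task }} resolves to that item's task string (truncated here):

{
  "task": "Research the current state and recent developments in autonomous multi-agent systems technology. ..."
}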

Step 5: Add the Writer Agent

  1. Connect an LLM node after the iteration completes.

  2. Use a large-context LLM such as Gemini, with a 1-2 million token context window, to synthesize all findings and generate the report.

  3. Set up the system prompt to act as an expert research writer that:

    • Preserves full context from research findings

    • Maintains citation integrity

    • Adds structure and clarity

    • Outputs professional Markdown reports

  4. Configure the user message to include:

    • Research topic: {{ $form.query }}

    • Existing findings: {{ $flow.state.findings }}

    • New findings: {{ iterationAgentflow_0 }}

  5. Update {{ $flow.state.findings }} with the output of the Writer Agent.
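
Putting the above together, the Writer Agent's user message and state update could look roughly like this. The message wording is illustrative, and using {{ output }} to reference the current node's output is an assumption:

{
  "userMessage": "Research topic: {{ $form.query }}\n\nExisting findings: {{ $flow.state.findings }}\n\nNew findings: {{ iterationAgentflow_0 }}",
  "updateFlowState": [
    { "key": "findings", "value": "{{ output }}" }
  ]
}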

Step 6: Implement the Condition Check

  1. Add a Condition Agent.

  2. Set up the condition logic to determine if additional research is needed

  3. Configure two scenarios:

    • "More subagents are needed"

    • "Findings are sufficient"

  4. Provide input context including:

    • Research topic

    • Current subagents list

    • Accumulated findings
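
A sketch of the Condition Agent configuration based on the points above (the exact input wording is illustrative):

{
  "scenarios": [
    "More subagents are needed",
    "Findings are sufficient"
  ],
  "input": "Research topic: {{ $form.query }}\nSubagents: {{ $flow.state.subagents }}\nFindings: {{ $flow.state.findings }}"
}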

Step 7: Create the Loop Mechanism

  1. For the "More subagents needed" path, add a Loop node

  2. Configure it to loop back to the Planner node

  3. Set a maximum loop count of 5 to prevent infinite loops

  4. The Planner Agent will review the current report and generate additional research tasks.
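
In summary, the Loop node only needs two settings; plannerAgent here is a placeholder for whatever your Planner node is named:

{
  "loopBackTo": "plannerAgent",
  "maxLoopCount": 5
}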

Step 8: Add the Final Output

  1. For the "Findings are sufficient" path, add a Direct Reply

  2. Configure it to output the final report: {{ $flow.state.findings }}
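
The Direct Reply configuration is just the message field (a sketch):

{
  "message": "{{ $flow.state.findings }}"
}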

Testing the Flow

  1. Start with a simple topic like "Autonomous Multi-Agent Systems in Real-World Environments"

  2. Observe how the Planner breaks down the research into focused tasks

  3. Monitor the SubAgents as they conduct parallel research

  4. Review the Writer Agent's synthesis of findings

  5. Note whether the Condition Agent requests additional research

Report Generated: see the attached Deep Research Report.pdf for an example.

Complete Flow Structure

Walkthrough

  1. 🧠 Planner Agent - analyzes the research query and generates a list of specialized research tasks

  2. 🖧 Subagents - creates multiple research subagents that conduct focused research using web search, web scraping, and arXiv tools

  3. ✍️ Writer Agent - synthesizes all findings into a coherent, comprehensive report with citations

  4. ⇄ Condition Agent - determines if additional research is needed or if the findings are sufficient

  5. 🔄 Loop back to Planner Agent to generate more subagents

🧠 Planner Agent

Act as an expert research lead to:

  • Analyze and break down user queries

  • Create detailed research plans

  • Generate specific tasks for subagents

Output an array of research tasks.

🖧 Subagents

For each task in the tasklist, a new subagent will be spawned to conduct focused research.

Each subagent has:

  • Clear task understanding capabilities

  • Efficient research planning (2-5 tool calls per task)

  • Source quality evaluation

  • Parallel tool usage for efficiency

Each subagent has access to web search, web scraping, and arXiv tools.

  • 🌐 Google Search - for web search links

  • 🗂️ Web Scraper - for web content extraction. This will scrape the content of the links from Google Search.

  • 📑 ArXiv - search, download, and read the content of arXiv papers

✍️ Writer Agent

Act as a research writer that turns raw findings into a clear, structured Markdown report. Preserve all context and citations.

We find Gemini to be the best for this, thanks to its large context window that allows it to synthesize all the findings effectively.

⇄ Condition Agent

With the generated report, we let the LLM determine whether additional research is needed or if the findings are sufficient.

If more is needed, the Planner Agent reviews all messages, identifies areas for improvement, generates follow-up research tasks, and the loop continues.

If the findings are sufficient, we simply return the final report from the Writer Agent as the output.

Advanced Configuration

Customizing Research Depth

You can adjust the research depth by modifying the Planner's system prompt to:

  • Increase the number of SubAgents for complex topics (up to 10-20)

  • Adjust the tool call budget per SubAgent

  • Modify the loop count for more iterative research

But this also comes at extra cost due to higher token consumption.

Adding Specialized Tools

Enhance research capabilities by adding domain-specific tools:

  • Personal tools like Gmail, Slack, Google Calendar, Teams, etc.

  • Other web scraping and web search tools like Firecrawl, Exa, Apify, etc.

Adding RAG Context

You can add more context to the LLM with RAG. This allows the LLM to pull information from relevant existing knowledge sources when needed.

Best Practices

  • Model selection and fallback options are crucial because the large volume of findings can cause token overflow.

  • Tools need to be carefully crafted: decide when each should be used and how to limit the length of results returned from tool executions.

  • This is very similar to the trade-off triangle, where optimizing for two of the three often negatively impacts the third: in this case, Speed, Quality, and Cost.

Prompting is key. Anthropic open-sourced their entire prompt structure, covering task delegation, parallel tool usage, and thought processes: https://github.com/anthropics/anthropic-cookbook/blob/main/patterns/agents/prompts. See research_lead_agent.md for an example lead (Planner) prompt and research_subagent.md for an example SubAgent prompt.

Attachments:

  • Deep Research Report.pdf

  • Deep Research Dynamic SubAgents.json