Deep Research
The Deep Research Agent is a sophisticated multi-agent system that conducts comprehensive research on any topic by breaking complex queries into manageable tasks, deploying specialized research agents, and synthesizing the findings into detailed reports.
This approach is inspired by Anthropic's blog post on how they built their multi-agent research system -
The Deep Research Agent workflow consists of several key components working together:
Planner Agent: Analyzes the research query and generates a list of specialized research tasks
Iteration: Creates multiple research agents to work on different aspects of the query
Research SubAgents: Individual agents that conduct focused research using web search and other tools
Writer Agent: Synthesizes all findings into a coherent, comprehensive report
Condition Agent: Determines if additional research is needed or if the findings are sufficient
Loop: Loops back to the Planner Agent to improve research quality
Begin by adding a Start node to your canvas
Configure the Start node with Form Input to collect the research query from users
Set up the form with the following configuration:
Form Title: "Research"
Form Description: "A research agent that takes in a query and returns a detailed report"
Form Input Types: Add a string input with label "Query" and variable name "query"
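As a rough sketch, the values above map to a configuration like this (field names are illustrative; the actual setup is done through the Start node's form fields):

```json
{
  "formTitle": "Research",
  "formDescription": "A research agent that takes in a query and returns a detailed report",
  "formInputs": [
    { "type": "string", "label": "Query", "variableName": "query" }
  ]
}
```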
Initialize the Flow State with two key variables:
subagents: stores the list of research tasks to be carried out by subagents
findings: accumulates research results
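For example, the Flow State could be seeded with an empty task list and empty findings (illustrative starting values):

```json
{
  "subagents": [],
  "findings": ""
}
```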
Connect an LLM node to the Start node.
Set up the system prompt to act as an expert research lead with the following key responsibilities:
Analyze and break down user queries
Create detailed research plans
Generate specific tasks for subagents
Configure JSON Structured Output to return a list of subagent tasks:
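A minimal sketch of such a schema, assuming each subagent task is an object with a single task field (adjust field names and descriptions to your needs):

```json
{
  "type": "array",
  "items": {
    "type": "object",
    "properties": {
      "task": {
        "type": "string",
        "description": "A focused research task for one subagent"
      }
    },
    "required": ["task"]
  }
}
```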
Update the flow state by storing the generated subagents list
Add an Iteration node.
Connect it to the Planner output
Configure the iteration input with the flow state array: {{ $flow.state.subagents }}. For each item in the array, a subagent will be spawned to carry out the research task. Example:
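The tasks below are illustrative; in practice the Planner Agent generates them from the user's query:

```json
[
  { "task": "Survey the current state of autonomous multi-agent frameworks and architectures" },
  { "task": "Collect real-world case studies of multi-agent systems deployed in production" },
  { "task": "Identify open challenges in coordination, safety, and evaluation of agent systems" }
]
```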
Inside the iteration block, add an Agent node.
Configure the system prompt to act as a focused research subagent with:
Clear task understanding capabilities
Efficient research planning (2-5 tool calls per task)
Source quality evaluation
Parallel tool usage for efficiency
Add the following research tools (you can substitute your own preferred tools):
Google Search: For web search links
Web Scraper: For web content extraction. This will scrape the content of the links from Google Search.
ArXiv Search: For searching and loading content of academic papers
Set the user message to pass the current iteration task: {{ $iteration.task }}
Connect an LLM node after the iteration completes.
A large-context LLM such as Gemini, with a 1-2 million token context window, is needed to synthesize all the findings and generate the report.
Set up the system prompt to act as an expert research writer that:
Preserves full context from research findings
Maintains citation integrity
Adds structure and clarity
Outputs professional Markdown reports
Configure the user message to include:
Research topic: {{ $form.query }}
Existing findings: {{ $flow.state.findings }}
New findings: {{ iterationAgentflow_0 }}
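Assembled into a single template, the user message might look like this (the node reference iterationAgentflow_0 should match the ID of your Iteration node):

```
Research topic: {{ $form.query }}

Existing findings:
{{ $flow.state.findings }}

New findings:
{{ iterationAgentflow_0 }}
```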
Update {{ $flow.state.findings }} with the output of the Writer Agent.
Add a Condition Agent.
Set up the condition logic to determine if additional research is needed
Configure two scenarios:
"More subagents are needed"
"Findings are sufficient"
Provide input context including:
Research topic
Current subagents list
Accumulated findings
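One way to provide that context is a simple input template built from the same flow state variables (a sketch; adjust to taste):

```
Research topic: {{ $form.query }}
Planned subagent tasks: {{ $flow.state.subagents }}
Accumulated findings: {{ $flow.state.findings }}
```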
For the "More subagents needed" path, add a Loop node
Configure it to loop back to the Planner node
Set a maximum loop count of 5 to prevent infinite loops
The Planner Agent will review the current report and generate additional research tasks.
For the "Findings are sufficient" path, add a Direct Reply
Configure it to output the final report: {{ $flow.state.findings }}
Start with a simple topic like "Autonomous Multi-Agent Systems in Real-World Environments"
Observe how the Planner breaks down the research into focused tasks
Monitor the SubAgents as they conduct parallel research
Review the Writer Agent's synthesis of findings
Note whether the Condition Agent requests additional research
Report Generated:
🧠 Planner Agent - analyzes the research query and generates a list of specialized research tasks
🖧 Subagents - spawns multiple research subagents that conduct focused research using web search, web scraping, and ArXiv tools
✍️ Writer Agent - synthesizes all findings into a coherent, comprehensive report with citations
⇄ Condition Agent - determines if additional research is needed or if the findings are sufficient
🔄 Loop back to Planner Agent to generate more subagents
Act as an expert research lead to:
Analyze and break down user queries
Create detailed research plans
Generate specific tasks for subagents
Output an array of research tasks.
For each task in the tasklist, a new subagent will be spawned to conduct focused research.
Each subagent has:
Clear task understanding capabilities
Efficient research planning (2-5 tool calls per task)
Source quality evaluation
Parallel tool usage for efficiency
Each subagent has access to web search, web scraping, and ArXiv tools.
🌐 Google Search - for web search links
🗂️ Web Scraper - for web content extraction. This will scrape the content of the links from Google Search.
📑 ArXiv - search, download and read content of arxiv papers
Act as a research writer that turns raw findings into a clear, structured Markdown report. Preserve all context and citations.
We find Gemini to be the best for this, thanks to its large context window that allows it to synthesize all the findings effectively.
With the generated report, we let the LLM determine whether additional research is needed or if the findings are sufficient.
If more is needed, the Planner Agent reviews all messages, identifies areas for improvement, generates follow-up research tasks, and the loop continues.
If the findings are sufficient, we simply return the final report from the Writer Agent as the output.
You can adjust the research depth by modifying the Planner's system prompt to:
Increase the number of SubAgents for complex topics (up to 10-20)
Adjust the tool call budget per SubAgent
Modify the loop count for more iterative research
However, this also comes at extra cost due to higher token consumption.
Enhance research capabilities by adding domain-specific tools:
Personal tools like Gmail, Slack, Google Calendar, Teams etc
Other web scraping and web search tools such as Firecrawl, Exa, Apify, etc.
Model selection and fallback options are crucial because the large volume of accumulated findings can overflow the context window.
Tools need to be carefully crafted: when they should be used, and how to limit the length of the results returned from tool executions.
This is very similar to the trade-off triangle, where optimizing two of the three often negatively impacts the other; in this case: speed, quality, and cost.
Example prompt -
Example prompt -
You can add more context to the LLM by attaching knowledge sources. This allows the LLM to pull information from relevant existing knowledge when needed.
Prompting is key. Anthropic open-sourced their entire prompt structure, covering task delegation, parallel tool usage, and thought processes -