Tools & MCP
In the previous tutorial, we explored how to enable LLMs to call external APIs. To enhance the user experience, Flowise provides a list of prebuilt tools. Refer to the section for the full list of available integrations.
In cases where the tool you need is not yet available, you can create a Custom Tool to suit your requirements.
We are going to use the same API from the previous tutorial, and create a custom tool that can call the HTTP POST request for /events.
Tool Name: create_event
Tool Description: Use this when you want to create a new event.
Input Schema: A JSON schema of the API request body which allows LLM to know how to automatically generate the correct JSON body. For instance:
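As a rough sketch, a schema for create_event might look like the following. Only the name property is referenced later in this tutorial; the date and location fields are hypothetical and should be replaced with whatever your API's request body actually expects.

```json
{
  "type": "object",
  "properties": {
    "name": {
      "type": "string",
      "description": "Name of the event"
    },
    "date": {
      "type": "string",
      "description": "Date of the event, e.g. 2024-01-01 (hypothetical field)"
    },
    "location": {
      "type": "string",
      "description": "Location of the event (hypothetical field)"
    }
  },
  "required": ["name", "date"]
}
```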
Javascript Function: The actual function to execute once this tool is called
You can use any libraries imported in Flowise.
You can use properties specified in the Input Schema as variables, prefixed with $. For example:
Property from Input Schema = name
Variable to be used in Function = $name
You can get default flow config:
$flow.sessionId
$flow.chatId
$flow.chatflowId
$flow.input
$flow.state
You can get custom variables: $vars.<variable-name>
Must return a string value at the end of the function
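Putting the above together, here is a minimal sketch of what the Javascript Function could look like. The base URL http://localhost:5566 is a placeholder for the events API from the previous tutorial, and $vars.apiKey is a hypothetical custom variable; adjust both to match your setup.

```javascript
// Placeholder base URL for the events API from the previous tutorial
const url = 'http://localhost:5566/events';

// Build the request body from the Input Schema properties ($-prefixed variables)
const body = {
    name: $name,
    date: $date,         // hypothetical field from the example schema
    location: $location  // hypothetical field from the example schema
};

const options = {
    method: 'POST',
    headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${$vars.apiKey}` // hypothetical custom variable
    },
    body: JSON.stringify(body)
};

try {
    const response = await fetch(url, options);
    const text = await response.text();
    // The tool must return a string value
    return text;
} catch (error) {
    console.error(error);
    return 'Error creating event: ' + error;
}
```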
After the custom tool has been created, you can use it in the Agent node.
From the Tool dropdown, select the custom tool. You can also turn on Return Direct if you want to directly return the output from the custom tool.
It can also be used as a Tool Node in a deterministic workflow scenario. In this case, the Tool Input Arguments must be explicitly defined and filled with values, because there is no LLM to determine the values automatically.
Apart from the prebuilt MCP tools, the most powerful feature is Custom MCP, which allows users to connect to any MCP server of their choice.
MCP follows a client-server architecture where:
Hosts are LLM applications (like Flowise) that initiate connections
Clients maintain 1:1 connections with servers, inside the host application (like Custom MCP)
To handle the actual communication between clients and servers, MCP supports multiple transport mechanisms:
Stdio transport
Uses standard input/output for communication
Ideal for local processes
Streamable HTTP transport
Uses HTTP with optional Server-Sent Events for streaming
HTTP POST for client-to-server messages
Stdio transport enables communication through standard input and output streams. This is particularly useful for local integrations and command-line tools.
Only use this when running Flowise locally, not when deployed to cloud services. This is because running a command like npx will install the MCP server package (e.g. @modelcontextprotocol/server-sequential-thinking) locally, and that often takes a long time.
It is better suited for desktop applications like Claude Desktop, VS Code, etc.
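For reference, a stdio-based MCP Server Config typically looks like the snippet below, using the sequential-thinking package mentioned above. Depending on your Flowise version, you may only need the inner object with command and args.

```json
{
  "mcpServers": {
    "sequential-thinking": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-sequential-thinking"]
    }
  }
}
```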
The Docker command is suitable when the machine running Flowise also has access to Docker. However, it is not suitable for deployments on cloud services where Docker access is restricted or unavailable.
Make sure Docker is running.
Refresh the Available Actions. If the image is not found locally, Docker will automatically pull the latest image. Once the image is pulled, you will see the list of available actions.
Use Stdio transport when:
Building command-line tools
Implementing local integrations
Needing simple process communication
Working with shell scripts
If the MCP server configuration is working correctly, you can refresh the Available Actions, and Flowise will automatically pull in all available actions from the MCP server.
For example, try asking: "Give me the most recent issue"
The agent is able to identify the appropriate actions from MCP and use them to answer the user's query.
Use Streamable HTTP when:
Building web-based integrations
Needing client-server communication over HTTP
Requiring stateful sessions
Supporting multiple concurrent clients
Implementing resumable connections
MCP (Model Context Protocol) provides a standardized way to connect AI models to different data sources and tools. In other words, instead of relying on Flowise's built-in tools or creating custom tools, you can use MCP servers that have been created by others. MCP is widely considered an industry standard and is typically supported and maintained by the official providers. For example, the GitHub MCP is developed and maintained by the GitHub team, with similar support provided for Atlassian Jira, Brave Search, and others. You can find the list of supported servers .
Servers provide context, tools, and prompts to clients (example )
For Windows, refer to this .
Docker provides a list of MCP servers, which can be found . Here's how it works:
Locate the MCP server configuration and add it to Custom MCP. For example:
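As a sketch, a Docker-based configuration usually runs the server image in interactive mode so it can communicate over stdio. The image name mcp/sequentialthinking is illustrative; substitute the image listed for the server you want to use.

```json
{
  "command": "docker",
  "args": ["run", "--rm", "-i", "mcp/sequentialthinking"]
}
```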
We will use GitHub Remote MCP as an example. The beautiful part of a remote MCP server is that you don't need to install or run it locally, and new updates are applied automatically.
In order to access the MCP server, we need to create a Personal Access Token from GitHub. Refer to GitHub's documentation on creating a personal access token. Once the PAT has been created, create a variable to store the token. This variable will be used in Custom MCP.
Create an Agent node, and add a new Custom MCP tool. For Streamable HTTP, we just need to put in the URL and other necessary headers. You can use variables in the MCP Server Config with double curly braces {{ }} and the prefix $vars.<variableName>.
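For instance, assuming a variable named GITHUB_PAT holds the token and that the GitHub Remote MCP server is exposed at https://api.githubcopilot.com/mcp/ (check GitHub's documentation for the current endpoint), the MCP Server Config could look like:

```json
{
  "url": "https://api.githubcopilot.com/mcp/",
  "headers": {
    "Authorization": "Bearer {{$vars.GITHUB_PAT}}"
  }
}
```

Refresh the Available Actions to verify that Flowise can reach the server and list its tools.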